
Stanford Alpaca: the "Son of ChatGPT" and the next step for AI research.


Alpaca's very low cost is a problem for companies like OpenAI. Who pays thousands of dollars for an AI tool with usage limits when they can use that same AI to train a copy of itself on their own servers? Stanford Alpaca cost less than 600 dollars to build, while OpenAI and Microsoft have spent millions on ChatGPT.

Now it has happened: an artificial intelligence helped to create a new artificial intelligence. The "son of ChatGPT", called Stanford Alpaca, has caused discussion in the media. Stanford Alpaca is a language model that was built with the help of ChatGPT's underlying GPT model. The whole project cost less than 600 dollars, and most of that money went to API calls and rented cloud computing. The Stanford University researchers did not simply type a command like "make a copy of yourself"; instead, they used OpenAI's API to expand 175 human-written instruction/output examples into about 52,000 training samples, and then fine-tuned Meta's open-source LLaMA 7B model on that data.
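
For readers who want a more concrete picture of that data-generation step, here is a minimal, hypothetical sketch. It is not the Stanford team's actual code (their seed tasks and scripts are in their published repository); it assumes the legacy OpenAI Python client (the one with openai.Completion.create), text-davinci-003 as the teacher model, and a local JSON file with the 175 seed pairs, and it leaves out the parsing and de-duplication a real pipeline needs.

# Rough, hypothetical sketch of generating Alpaca-style training data.
# Assumptions (not from the article): legacy OpenAI Python client (pre-1.0),
# text-davinci-003 as the teacher model, seed pairs in a local JSON file.
import json
import random

import openai

openai.api_key = "YOUR_API_KEY"  # placeholder

# 175 human-written instruction/output pairs (hypothetical file name).
seed_tasks = json.load(open("seed_tasks.json"))


def build_prompt(examples, n_new=20):
    # Show a few seed pairs and ask the model for n_new more in the same format.
    header = f"Come up with {n_new} new instruction/output pairs in the same style:\n"
    body = "\n".join(
        f"{i}. Instruction: {ex['instruction']}\n   Output: {ex['output']}"
        for i, ex in enumerate(examples, start=1)
    )
    return header + body


raw_generations = []
while len(raw_generations) * 20 < 52_000:
    prompt = build_prompt(random.sample(seed_tasks, 3))
    response = openai.Completion.create(
        model="text-davinci-003",
        prompt=prompt,
        max_tokens=2048,
        temperature=0.7,
    )
    # Each response should hold roughly 20 new pairs; a real pipeline would
    # parse the numbered list back into structured pairs and drop duplicates.
    raw_generations.append(response["choices"][0]["text"])

with open("generated_instructions.json", "w") as f:
    json.dump(raw_generations, f)

The quoted article below puts the cost of this data-generation step at about 500 dollars.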

The problem with the new artificial intelligence, Alpaca, is that it does not have the guardrails that are built into ChatGPT. That means Alpaca could be used, for example, as a malware generator, even though that kind of use is blocked in ChatGPT. And if an AI can generate a descendant of itself in this way, programmers can search the code for the parts that impose those limits on what the software is allowed to do.



Then they only need to remove those lines, and that gives hackers a new and powerful tool for producing malicious code very fast. AI is a tool that is almost impossible to control. The world is full of programmers who want to make history, and AI can produce almost any kind of software faster and with fewer mistakes than any human. The possibility of using an AI to make a copy of itself is interesting, and also a challenging situation.

Anybody could order an AI to make a program similar to itself. A user of such a system could simply tell the AI to build a similar programming tool, or a descendant of itself. Or the user could describe the benefits and abilities that the AI has and ask for a system that matches the description. AI is a powerful tool.

It can be used to make perfectly customized software. The AI can also operate as a game generator, because it can write program code very fast. If somebody wants to create a copy of a hit game, all the operator needs is a description of the game and its user interface.

Then the AI creates a similar game for the gamer. These kinds of applications have many more uses than just games: the AI can make a custom application for any purpose. Maybe the next-generation software business is an AI that generates custom software by following customers' orders. And in the wrong hands, this kind of technology is very dangerous.
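
As an illustration of that "describe it and get software" idea, here is a tiny, hypothetical sketch that asks an instruction-tuned model for a program based on a plain-language description. It assumes the Hugging Face transformers text-generation pipeline and some locally available instruction-tuned checkpoint; the model path and the game description are placeholders, not recommendations.

# Hypothetical sketch: asking an instruction-tuned model to write a small game.
from transformers import pipeline

# Any locally available instruction-tuned checkpoint could stand in here.
generator = pipeline("text-generation", model="path/to/instruction-tuned-model")

description = (
    "Instruction: Write a small Python program for a number-guessing game. "
    "The program picks a random number between 1 and 100 and tells the player "
    "whether each guess is too high or too low.\n"
    "Response:"
)

result = generator(description, max_new_tokens=300, do_sample=True, temperature=0.7)
print(result[0]["generated_text"])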


https://the-decoder.com/stanfords-alpaca-shows-that-openai-may-have-a-problem/

https://interestingengineering.com/innovation/stanford-researchers-clone-chatgpt-ai




******************************************************************


The next part is copy-pasted from the article "Interesting Engineering / Alpaca AI: Stanford researchers clone ChatGPT AI for just $600"; a rough code sketch of the fine-tuning step it describes follows after the quotes.

"How Stanford trained AI for minimal costs"

"A critical component of this achievement was LLaMA 7B, an open-source language model, which the researchers got access to. Interestingly, this model comes from Meta, Mark Zuckerberg's company, and is one of the smallest and most low-cost language models available today". ("Interesting engineering/ Alpaca AI: Stanford researchers clone ChatGPT AI for just $600")

"Trained on trillion tokens, the language model has some capabilities that are equipped with but nowhere close to the levels that we have seen with ChatGPT. The researchers then turned to GPT, the AI behind the chatbot, and used an Application Programming Interface (API) to use 175 human-written instruction/output pairs to generate more in the same style and format".("Interesting engineering/ Alpaca AI: Stanford researchers clone ChatGPT AI for just $600")

"Generating 20 such statements at a time, the researchers amassed 52,000 sample conversations in very little time, which cost them $500. This dataset was then used to post-train the LLaMa model. Turning to eight 80-GB A100 cloud processing computers, the researchers completed this task in just three hours having spent less than $100".("Interesting engineering/ Alpaca AI: Stanford researchers clone ChatGPT AI for just $600")

"The trained model, dubbed, Alpaca was then tested against ChatGPT itself in various domains and beat GPT in its own game. The researchers go on to state that their process wasn't really optimized and they could have gotten better results, had they used GPT-4, the latest version of the AI".("Interesting engineering/ Alpaca AI: Stanford researchers clone ChatGPT AI for just $600")

"The researchers have now released the 52,000 questions that were used in the research alongside the code that was used to generate them, allowing many others to repeat the process and replicate the results. The AI and its responses are not subject to any guardrails that OpenAI has ensured in its chatbot, so one can expect some really nasty replies".("Interesting engineering/ Alpaca AI: Stanford researchers clone ChatGPT AI for just $600")

"But what if someone does not really care what the chatbot says and about whom and wants it to work without filters? There are Open AI's user terms that prevent users from building competing AI and LLaMA access available only for researchers. But beyond that, there is hardly anything that could prevent one from developing their pet AI".("Interesting engineering/ Alpaca AI: Stanford researchers clone ChatGPT AI for just $600")

"Guess, this is where regulation comes in. AI is racing very fast and lawmakers are really to catch up soon, else AI will write it itself".("Interesting engineering/ Alpaca AI: Stanford researchers clone ChatGPT AI for just $600")


https://interestingengineering.com/innovation/stanford-researchers-clone-chatgpt-ai
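
To make the quoted post-training step more concrete, here is a rough sketch of what the fine-tuning stage could look like. It is not the Stanford team's actual training code (that is published in their repository); it assumes the Hugging Face transformers and datasets libraries, a locally available copy of the LLaMA 7B weights obtained through Meta's research access, and the released file of 52,000 instruction/output pairs.

# Rough, hypothetical sketch of fine-tuning LLaMA 7B on the 52,000 examples.
# Assumptions: Hugging Face transformers/datasets, local LLaMA 7B weights,
# and the released instruction data in alpaca_data.json.
from datasets import load_dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

model_path = "path/to/llama-7b"   # placeholder path to the LLaMA 7B weights
data_path = "alpaca_data.json"    # the released 52k instruction/output pairs

tokenizer = AutoTokenizer.from_pretrained(model_path)
tokenizer.pad_token = tokenizer.eos_token  # LLaMA's tokenizer has no pad token by default
model = AutoModelForCausalLM.from_pretrained(model_path)


def to_text(example):
    # Join each instruction and its output into a single training string.
    return {"text": f"Instruction: {example['instruction']}\nResponse: {example['output']}"}


def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=512)


dataset = load_dataset("json", data_files=data_path)["train"].map(to_text)
tokenized = dataset.map(tokenize, batched=True, remove_columns=dataset.column_names)

trainer = Trainer(
    model=model,
    args=TrainingArguments(
        output_dir="alpaca-7b-sketch",
        per_device_train_batch_size=4,
        num_train_epochs=3,
        bf16=True,                # the quoted run used 80-GB A100 GPUs
        logging_steps=50,
    ),
    train_dataset=tokenized,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()

On a single consumer GPU this would be far slower than the three hours quoted above; the low price came from renting eight A100 cards for a short time.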

