What is GPT-4


Don’t know what GPT-4 is? Well, let us explain all about this advanced LLM.

What is GPT-4
Can I introduce you to our Lord and Saviour, GPT-4?

OpenAI initiated the first steps towards doomsday when it released ChatGPT into the wild. The release of this next-level chatbot has given anyone with a stable internet connection the ability to use an advanced piece of artificial intelligence (AI). Even though ChatGPT is a very advanced form of technology, people are still eager to see what else OpenAI is capable of creating.

This is where we are today: the creation of GPT-4, or to give it its official name, the “Generative Pre-Trained Transformer 4”. Can you see why everyone shortens it to GPT-4? This latest Large Language Model (LLM) launched on March 14th, 2023, and its release has proven to be a significant part of GPT’s evolution.

GPT-4 is OpenAI’s latest creation and, at its core, a large language model (LLM). It accepts text and image inputs from users and responds with text outputs. Its predecessor, GPT-3.5, was the LLM that previously powered the infamous chatbot ChatGPT, which has since been upgraded to GPT-4.

GPT-4 was created with the goal of improving performance, focusing more on human-like responses, and training on a much larger dataset. Trialling GPT-4 through ChatGPT has shown it to be a great improvement over its predecessor.

What Can This LLM, GPT-4, Actually Do?

Picture GPT-4 like a huge library

GPT-4 is an LLM capable of interpreting images alongside text, generating and debugging code, demonstrating human-like responses, and being more creative and more careful with its responses than GPT-3.5. One big difference between the two LLMs is that GPT-4 can handle up to 25,000 words per session, which allows it to produce more complex responses. Sometimes this is a good thing, sometimes it’s not.

OpenAI has stated that GPT-4 is over 80% better at resisting malicious prompts, making this LLM a much safer model than its previous version. This is a step in the right direction, as there were a lot of concerns around bad actors using AI for harmful purposes.

GPT-3.5 was, and still is, a great piece of technology. ChatGPT took the world by storm and was all over the news thanks to its human-like responses. Albeit some of them were questionable, it was the best chatbot example we have had to date. Even so, OpenAI had other plans and pushed forward to an even better LLM, which we now have.

On top of what GPT-3.5 was capable of, the latest addition to the family, GPT-4, comes with far more processing power and the ability to complete tasks faster and more accurately. Not to mention all the extras that have been thrown at it, including an array of online plugins, so now you can even get ChatGPT to interact with other tools online.

But even with the amazing features GPT-4 has to offer, we are only just realizing its capabilities; new discoveries appear every day about what can be achieved with this advanced piece of AI technology. You can even use GPT-4 to write online content for you; plenty of users have shared their experiences of using GPT-4 to make money online.

There are also rumours suggesting we might see a release of GPT-5 within the next 12 months, and we are eager to see what new additions it brings to this already impressive line of LLMs. To be fair, though, we wish they would change the name and stop simply bumping the number after “GPT”.

So can GPT-4 connect to the Internet? 

There has been a lot of speculation about this topic. The answer is: kind of. Confusing, right? GPT-4 itself does not connect to the internet; the data it uses comes from information that was crawled in the past. It does not crawl the internet looking for new data unless you give it new information yourself.

If you write a custom prompt and tell GPT-4 a key piece of information, it can then relay that information in its response. So you can give it new information, but it is not connecting to the internet.
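To make that concrete, here is a minimal sketch of how you might hand GPT-4 a brand-new fact purely through the prompt. The “fact”, the question, and the wording are all made up for illustration; the point is simply that anything placed in the messages is available to the model without it ever touching the internet.

```python
# Hypothetical example: supplying fresh information to GPT-4 via the prompt.
new_information = (
    "Our store's opening hours changed last week: "
    "we are now open 9am-7pm, Monday to Saturday."
)

messages = [
    {"role": "system", "content": "Answer using only the information provided."},
    {
        "role": "user",
        "content": f"{new_information}\n\nQuestion: What time do we close on Fridays?",
    },
]
# Passed to the chat API, GPT-4 can now "know" and repeat this new detail,
# even though it never fetched anything from the internet itself.
```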

There is a way around this, well, kind of: through the use of plugins and the API connected to GPT-4. So GPT-4 is interacting with users over the web, it’s just not crawling that data. Hopefully that clears a few things up for you. Maybe in the next 12 months true browsing will become a possibility. Bing Chat, which uses this LLM, is already capable of running Bing searches for your queries, so maybe OpenAI will add this feature in the future.

How much does GPT-4 Cost?

GPT-4 can get pretty expensive if used on a mass scale

The overall price for using the GPT-4 API depends on the model version you are using and how many tokens your prompts and responses consume. It’s completely itemized.

For the models with 8k context lengths (e.g. gpt-4 and gpt-4-0314), the price is:

  • $0.03/1k prompt tokens
  • $0.06/1k sampled tokens

For models with 32k context lengths (e.g. gpt-4-32k and gpt-4-32k-0314), the price is:

  • $0.06/1k prompt tokens
  • $0.12/1k sampled tokens

It might not seem like much money, but if you are using this at a professional level it can soon rack up. So keep this in mind when creating an online app that uses this AI technology.
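To see how quickly it racks up, here is a rough back-of-the-envelope sketch in Python using the 8k-context prices quoted above. The token counts and request volume are made-up assumptions, just to show the arithmetic.

```python
# Hedged sketch: rough cost estimate for GPT-4 (8k context) API usage.
PROMPT_PRICE_PER_1K = 0.03   # USD per 1k prompt tokens (gpt-4, 8k context)
OUTPUT_PRICE_PER_1K = 0.06   # USD per 1k sampled (output) tokens

def estimate_cost(prompt_tokens: int, output_tokens: int) -> float:
    """Approximate USD cost of a single request."""
    return (prompt_tokens / 1000) * PROMPT_PRICE_PER_1K \
         + (output_tokens / 1000) * OUTPUT_PRICE_PER_1K

# Assumed example: 1,500 prompt tokens and 500 generated tokens per request,
# across 10,000 requests a month.
per_request = estimate_cost(1500, 500)               # $0.075 per request
print(f"Per request: ${per_request:.3f}")
print(f"Per 10,000 requests: ${per_request * 10_000:,.2f}")  # $750.00
```

A single request costs fractions of a cent or a few cents, but at app scale the monthly bill lands in the hundreds or thousands of dollars.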

If you don’t fancy using the API but would still love to try out GPT-4’s potential, you can upgrade your account from the free version to ChatGPT Plus. If you do upgrade, you will be able to switch the model version from GPT-3.5 to GPT-4.

How to Access GPT-4?

There are currently three ways of accessing GPT-4. The most common way to test this LLM is via ChatGPT Plus. Here is a list of ways you can try out GPT-4:

API Access

Most users will need to join a waitlist. OpenAI is working hard to grant access to everyone who has an account, but this is going to take some time. They want everyone to have the option to try out this piece of tech.
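Once you are off the waitlist, calling GPT-4 through the official openai Python package looked roughly like this at the time of writing. Treat it as a minimal sketch rather than a definitive recipe: the prompt is made up, and the SDK interface may change between versions.

```python
import openai  # pip install openai

openai.api_key = "sk-..."  # your own secret API key

# Minimal sketch of a single GPT-4 chat completion request.
response = openai.ChatCompletion.create(
    model="gpt-4",  # or "gpt-4-32k" if you have access to the larger context
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Explain GPT-4 in one sentence."},
    ],
    max_tokens=200,
    temperature=0.7,
)

print(response["choices"][0]["message"]["content"])
```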

ChatGPT Plus

If you are already a ChatGPT Plus subscriber, you will get access to GPT-4 by visiting chat.openai.com, but there is a usage cap. This cap is dynamically adjusted depending on demand and the performance of the application. GPT-4 currently has a cap of 25 messages every 3 hours, which we feel is far too small! Over the next few months we expect this to change as more investment comes in.

Research Access Program

OpenAI also gives researchers the chance to join in on testing this new model. So if you are an organization that wants to perform research and studies on how artificial intelligence is affecting the world, perhaps this is the best route forward for you. You will need to submit an application here.

Can GPT-4 Improve Itself?

No, not yet; GPT-4 does not have the ability to do this. That is something that could only be performed by artificial general intelligence (AGI), which does not exist, yet anyway. AGI is an advanced form of AI that can perform at a human level across a vast scope of areas and can learn and improve on its own. OpenAI does not have this type of technology; it might seem like it, but they do not.

Users of GPT-4 are limited by its version number. The only way this model improves is by its developers updating the version and adding information; sure, it uses machine learning to learn, but that learning is bounded by a fixed set of rules and training data. In the future this might change, but some users think we should stay well away from AGI and stick with a smart AI that helps users rather than dictates to them.

Limits and Risks of using GPT-4 

There are some risks when using GPT-4

One thing we have to applaud OpenAI for is that it has been upfront about the fact that GPT-4 is not the perfect model we all think it is. It is still prone to hallucinations, biases, and bad actors using prompts to try to get it to talk about certain subjects.

Despite the efforts OpenAI has made to mitigate these risks, it still falls short. Just ask DAN about this and ChatGPT will answer almost any question. DAN is continuously updated, which proves that users are actively trying to jailbreak ChatGPT in order to ask it “dodgy” questions. Whether this is malicious or just for fun is beside the point; it proves it is possible, and users will most likely keep doing it throughout the model’s existence.

Through prompt engineering, users have created a lot of interesting conversations with GPT-4, and sometimes this has given OpenAI some bad publicity. Our real question here is: can OpenAI solve this problem, or will LLMs always be vulnerable to bad actors?

GPT-4’s Parameters Are Currently Unknown

This is a concerning factor for a lot of its user base and for AI developers across the world. Some may see it as a way of protecting OpenAI’s intellectual property, or of protecting the company from potential lawsuits that might arise from crawled data. Needless to say, OpenAI still has not disclosed GPT-4’s parameter count, and it remains a closely kept secret to this day.

You might have heard rumours that it runs on 100 trillion parameters; this is fake news, and even CEO Sam Altman called social media out for it. Of course, no one knows the exact number of parameters, whether it is under 1 trillion or well over, and we will not know until OpenAI discloses this information.

If we had to guess, we would assume that GPT-4 is an advanced version of GPT-3.5: it uses the same 175 billion parameters as its predecessor, just more effectively. It is faster with its response times and provides more words in its responses. This would make sense, as a chatbot that can communicate almost like a human does not need to crawl more data, it just needs to be fine-tuned. Sure, in the future it will need up-to-date information to stay relevant, but for now it’s a fine example of advanced AI.

What people forget is that when it comes to LLMs, bigger is not always better. What you need is AI that is optimized for speed. What’s the point of a chatbot trained on so much data if it takes forever to respond, and then, when it does respond, answers in a confused manner because of the sheer scale of its datasets?

We feel this might be the trend moving forward. If these LLMs get the base dataset right so they can reply at a competent level, that’s a good starting point. What these companies should be aiming for is a much smaller model that can run on a single GPU. That would make this type of technology available to so many more people, and it would also allow more people to edit and play around with it.

ChatGPT and other LLMs require servers and huge amounts of computing power in order to run successfully. Here at So Artificial we do hope these models become much smaller and easier to maintain in the future. That would allow more research groups and individuals to run this type of AI on their home computers or, dare we say it, their smartphones. Now that would be crazy.

Could GPT-4 replace your job?

If you work in fields such as customer service, transcription, copywriting, email marketing, or proofreading, then GPT-4 might well replace your job. Or, more likely, someone using GPT-4 will.

Some of you might be shocked by this; no one wants to lose their job. It’s important to point out that only those who are easily replaced will be, which is why we recommend people learn how to use AI to improve the way they get hired. Let’s face it, AI is reshaping the job market, and if you fall behind you might find it difficult to find work.

These LLMs are mainly focused on text, so they are quite good with words. If your job involves written words, it might be at risk. ChatGPT has already proven itself good at communication, empathy, creative writing, speed, and much more. This is why some people are so scared of AI, and also why governments are so eager to regulate it.

Although these kinds of LLMs have the potential to replace certain jobs, it is important to consider the limitations and potential risks of this technology before we start replacing humans with AI. The technology is growing at such an alarming rate that jobs could be replaced sooner rather than later, which is quite worrying, especially if people are replaced prematurely.

OpenAI and GPT-5 

Some people in AI circles will tell you OpenAI is working on GPT-5; others will deny it. The CEO of OpenAI has denied that the company is working on a new model, saying they are more focused on improving the way GPT-4 works, which we can understand. We would like to see a faster, smaller model in the future.

This technology is already a huge leap in AI progress; jumping straight into creating something that could potentially be classed as AGI would, we think, be moving far too quickly in this space. So maybe it is in everyone’s best interest to cool off a little bit.

That statement came just a month after GPT-4 was officially released to the public, at a time when everybody is still trying to figure out how to get the most out of this advanced language model.

Conclusion

Here at So Artificial we love AI; we live, breathe, and eat the stuff. So when GPT-4 came out, we were very excited to try it. It took us a while to get into the API, but when we did, we were pleasantly surprised by how quickly it works and by the quality of its responses. It is superior to its predecessors, so if you are interested in LLMs we would recommend it without a doubt.

However, when it comes to the speed at which this type of technology is being developed, we believe things should be slowed down a little. GPT-3.5 had issues that were never fixed and have now carried over into GPT-4, which is not acceptable. We should be working hard to fix issues, not carrying them over to a new model.

We do hope OpenAI focuses more on perfecting its models before developing the next one, GPT-5, if they ever do.

We did hear the CEO state they would prefer to lean towards safety rather than rushing out new models, which is a good sign and shows that they are being conscious about their creations.

What do you think about the use of LLMs? Good or bad? Do you use them yourself, and how? We are eager to know, so tell us in the comments below.
