OpenAI Gives Paid Version ChatGPT Users Access to GPT-4
This article is outdated – as of now in 2023, GPT-4 is available to all users, even free ones. GPT-4 is the default model used with ChatGPT, though you must have an account to use ChatGPT.
You heard it right: GPT-4 is now available to premium users of the ChatGPT platform. GPT-4 is the latest artificial intelligence model released by OpenAI; its predecessor is GPT-3.5. OpenAI claims this new version offers "human-level performance" on several academic and professional exams. We were only just suggesting GPT-4 was around the corner, and already it is here!
Currently you can only access the new model via the paid version of ChatGPT (ChatGPT Plus, $20/month). The huge news around this version is that it's multimodal, meaning it can accept input in both image and text form. Oh yes, you heard that right! With this data it will then respond to these queries using a text output.
OpenAI Has Embedded Its Software in Other Apps
It might come as a surprise to some, but OpenAI has embedded its software in multiple apps, including Duolingo, where it is being used to build language bots, and Morgan Stanley Wealth Management, where a chatbot is being tested. Let's face it, this can easily be done with OpenAI's handy API system.
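Integrations like these typically go through OpenAI's REST API. As a rough sketch (the endpoint and payload shape follow OpenAI's chat completions API; the system prompt and tutoring example are our own illustrative assumptions), an app might build a request like this:

```python
import json

# OpenAI's chat completions endpoint
API_URL = "https://api.openai.com/v1/chat/completions"

def build_chat_request(user_message, model="gpt-3.5-turbo"):
    """Build the JSON payload for a single-turn chat completion request."""
    return {
        "model": model,
        "messages": [
            # A system message steers the model's behaviour for the whole chat
            {"role": "system", "content": "You are a helpful language tutor."},
            {"role": "user", "content": user_message},
        ],
    }

payload = build_chat_request("How do I say 'good morning' in Spanish?")
print(json.dumps(payload, indent=2))
# An app would POST this body to API_URL with an
# "Authorization: Bearer <api-key>" header and read the reply
# from choices[0].message.content in the JSON response.
```

In practice an app like Duolingo's would keep appending prior turns to the `messages` list, since the API itself is stateless.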
The Next Big Step for OpenAI’s GPT System
The ability of this version of GPT to accept images as well as text as input is a huge advance for this piece of AI. GPT-4 can reply to users based on the content of an image, making it a big plus for people with vision impairments. Picture a virtual volunteer service that can help blind or partially sighted people navigate an unknown place or read labels for them.
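At the time of writing, OpenAI had not yet opened image input to API users, so the exact request format was unconfirmed. Purely as an illustration, the sketch below uses the mixed text-and-image message shape OpenAI later documented for GPT-4 with vision; the helper function, question, and image URL are all hypothetical:

```python
def build_vision_message(question, image_url):
    """Build a single user message combining text and an image reference,
    in the mixed-content list format used by multimodal chat models."""
    return {
        "role": "user",
        "content": [
            {"type": "text", "text": question},
            {"type": "image_url", "image_url": {"url": image_url}},
        ],
    }

# e.g. a virtual-volunteer app asking the model to read a label aloud
msg = build_vision_message(
    "What does the label on this bottle say?",
    "https://example.com/label.jpg",  # placeholder image URL
)
```

The message would then go into the same `messages` list as any text-only turn, with the model answering in text about what it sees.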
ChatGPT has captured the imagination of millions of users since late last year (2022). Since then its popularity has continued to grow at an alarming rate. It was only a month or so ago that they rolled out GPT-3.5 Turbo, which reduced response times when using ChatGPT.
GPT-4 is the Most Advanced System Yet
OpenAI has stated that GPT-4 is its "most advanced system yet". It makes bold claims that the model is more reliable and handles complicated queries far better than its previous versions. It has already beaten previous records on exam scores and can provide more text in its replies.
But it's not without its flaws. OpenAI noted some problems: "Despite its capabilities, GPT-4 has similar limitations to earlier GPT models: it is not fully reliable (eg can suffer from ‘hallucinations’), has a limited context window, and does not learn from experience." They also stated that "Care should be taken when using the outputs of GPT-4, particularly in contexts where reliability is important". So no surprise there then.
So What about Google?
Some people may have forgotten that other big tech companies have joined the race. We know that Microsoft is heavily invested in OpenAI and has included the GPT model in its Bing Chat service. To use Bing Chat, make sure you have signed up and are using Microsoft Edge.
As for other tech giants, Google is still working on Bard and has just opened a waitlist for testers. So a limited number of people will have cloud access to its large language model PaLM for the first time. We do hope these testers create some really innovative apps.
Meta, meanwhile, had its LLaMA model leak, and people are already trying to run it locally!
OpenAI Still Not "Open"
Look, we all love OpenAI for providing such a great piece of AI kit that everyone has been implementing into their own software via its API. But one of our main concerns is that they are not very "open".
With regards to GPT-4, they will not be revealing any details about the technical side of this bit of kit, which is quite disappointing. So we won't know the architecture of the model, what data it was trained on, or even the hardware and computing network they are using. They claim it's down to the competitive nature of the industry and safety concerns.
Protecting The Future of AI – Preventing Bad Actors
It's a well-known fact that some people, no matter what, will try to cause harm to others. So GPT-4 has been put through stress tests to try to reduce bias, disinformation, privacy risks, and cybersecurity issues.
But GPT-4 can still generate potentially harmful content and hate speech if pushed, and this is something OpenAI is working to stop. Some people see this as censorship or being "woke"; others see it as protecting people. We believe in free speech here at So Artificial, but I think we can all agree that some information is best left censored.
As of writing this article, GPT-4 is not capable of carrying out autonomous actions without human input. So for the moment we are safe from Skynet…