ChatGPT-4 wows with new tricks

It can solve problems with greater accuracy, but despite OpenAI’s efforts to make the model resistant to abuse, it can be prompted into misbehaving

From helping the visually impaired understand what’s written on the screen or inside the fridge with its image-to-text object recognition tool, to helping investment banking firm Morgan Stanley to “unlock the cumulative knowledge of Morgan Stanley Wealth Management” or the Icelandic government to preserve its language, ChatGPT-4, the newest kid on the Generative AI (artificial intelligence) line of products from OpenAI, has taken the world by storm. Its precursor, ChatGPT-3, had already gone viral in a matter of weeks.

ChatGPT Plus is a subscription service from OpenAI that lets users chat more effectively with its chatbot, ChatGPT. It costs 20 USD a month and, among other benefits, gives subscribers access to the latest model, GPT-4. GPT-4 is more creative and collaborative than ever before. It can generate, edit, and iterate with users on creative and technical writing tasks, such as composing songs, writing screenplays, or learning a user’s writing style.
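
For developers, the same generate-edit-iterate loop is available programmatically. Below is a minimal sketch of that workflow in Python, assuming the official openai library (v1+), an OPENAI_API_KEY environment variable and the “gpt-4” model name; the songwriting prompts are purely illustrative and not drawn from OpenAI’s documentation.

```python
from openai import OpenAI

# Minimal sketch of a generate-edit-iterate loop with GPT-4.
# Model name, prompts and environment variable are illustrative assumptions.
client = OpenAI()  # reads the OPENAI_API_KEY environment variable

messages = [
    {"role": "system", "content": "You are a collaborative songwriting partner."},
    {"role": "user", "content": "Write a four-line verse about a rainy Monday, in a wistful folk style."},
]

# First pass: generate a draft.
draft = client.chat.completions.create(model="gpt-4", messages=messages)
verse = draft.choices[0].message.content
print(verse)

# Second pass: feed the draft back and ask for a revision,
# the same way a ChatGPT Plus user would iterate in the chat window.
messages.append({"role": "assistant", "content": verse})
messages.append({"role": "user", "content": "Keep the imagery, but make the last line more hopeful."})
revision = client.chat.completions.create(model="gpt-4", messages=messages)
print(revision.choices[0].message.content)
```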

40% more factual

OpenAI claims that “GPT-4 can solve difficult problems with greater accuracy, thanks to its broader general knowledge and problem-solving abilities.” The company also tried to allay fears about safety, claiming that GPT-4 is “82% less likely to respond to requests for disallowed content and 40% more likely to produce factual responses than GPT-3.5 on our internal evaluations.”

Not everyone is convinced, though. According to technology magazine Wired: “However, GPT-4 suffers from the same problems that have bedeviled ChatGPT and cause some AI experts to be skeptical of its usefulness—including tendencies to ‘hallucinate’ incorrect information, exhibit problematic social biases, and misbehave or assume disturbing personas when given an ‘adversarial’ prompt.”

Morgan Stanley trains AI financial advisors

Nevertheless, this hasn’t stopped Morgan Stanley Wealth Management from using the new GPT-4 technology to help its advisors as part of the launch of a strategic initiative with the artificial intelligence research company. As part of the initiative, MSWM is creating its own unique solutions with OpenAI. It is already developing an “internal-facing service that leverages” OpenAI technology and Morgan Stanley’s intellectual capital to “deliver relevant content and insights into the hands of Financial Advisors in seconds, helping drive efficiency and scale.”

Behind the initiative is Morgan Stanley’s huge team of financial advisors and their expertise in serving clients. Today, more than 200 employees are querying the system on a daily basis and providing feedback. The focus will always be on getting advisors the insight they need, in the format they need, instantly. The firm feels the effort will also further enrich the relationship between Morgan Stanley advisors and their clients by enabling them to assist more people more quickly. The wealth management firm is also evaluating additional OpenAI technology with the potential to enhance the insights from advisor notes and streamline follow-up client communications.

Iceland using ChatGPT-4 to save its language

Meanwhile in Iceland, an island nation in the middle of the North Atlantic with a vibrant technology industry and booming tourism, the government is partnering with OpenAI to use GPT-4 in the effort to preserve the Icelandic language. While most of its 370,000 citizens speak English or another second language, the country’s integration with the United States and Europe has put its native tongue, Icelandic, at risk. Today there is increasing worry that, if Icelandic cannot remain the country’s default language in the face of rapid digitalisation, the language might face de facto extinction within a few generations.

The initiative, championed by the country’s President, HE Guðni Th. Jóhannesson, and supported by private industry, aims to turn a defensive position into an opportunity to innovate. The partnership was envisioned not only as a way to boost GPT-4’s ability to serve a new corner of the world, but also as a step towards creating resources that could help preserve other low-resource languages.

A virtual tutor

Khan Academy, the online education non-profit organisation, has launched Khanmigo, its GPT-4 powered AI learning tool that functions as both a virtual tutor for students and a classroom assistant for teachers. Khanmigo is set to be revolutionary in the self-learning space because it can mimic a writing coach, giving prompts and suggestions to move students forward as they write, debate, and collaborate in exciting new ways. In addition, Khanmigo’s interactive experiences and real-time feedback will help learners hone their computer science skills.

Helping the visually impaired

Since 2012, Be My Eyes has been creating technology for the community of over 250 million people who are blind or have low vision. The Danish start-up connects people who are blind or have low vision with volunteers for help with hundreds of daily life tasks like identifying a product or navigating an airport. With the new visual input capability of GPT-4, Be My Eyes began developing a GPT-4 powered Virtual Volunteer within the Be My Eyes app that can generate the same level of context and understanding as a human volunteer.

The implications for global accessibility are profound. In the not-so-distant future, the blind and low-vision community will use these tools not only for a host of visual interpretation needs, but also to gain a greater degree of independence in their lives. Send an image of, say, the contents of your fridge, and GPT-4 not only recognises and names what’s in there, but extrapolates and analyses what you can make with those ingredients. You could then ask it for a good recipe.
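
To give a flavour of how such an image query might look in code, here is a minimal sketch of sending a photo to a GPT-4 class model through OpenAI’s API. It assumes the official openai Python library (v1+), a vision-capable model name and a hypothetical image URL; it is not Be My Eyes’ actual implementation, which has not been published.

```python
from openai import OpenAI

# Minimal sketch of GPT-4's image-input capability via OpenAI's chat API.
# Model name, image URL and prompt are illustrative assumptions.
client = OpenAI()  # reads the OPENAI_API_KEY environment variable

response = client.chat.completions.create(
    model="gpt-4o",  # assumption: any GPT-4 class model with vision support
    messages=[
        {
            "role": "user",
            "content": [
                {"type": "text",
                 "text": "What ingredients can you see, and what could I cook with them?"},
                {"type": "image_url",
                 "image_url": {"url": "https://example.com/photos/my-fridge.jpg"}},
            ],
        }
    ],
)

# The reply describes what the model sees and suggests possible dishes.
print(response.choices[0].message.content)
```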

It’s still a bot

But sceptics are not entirely convinced by ChatGPT-4’s new bag of tricks. Despite OpenAI’s efforts to make the model resistant to abuse, it can be prompted into misbehaving, for example by suggesting it role-play doing something it refuses to do when asked directly. OpenAI says GPT-4 is 40 percent more likely to provide “factual responses” and 82 percent less likely to respond to requests that should be disallowed, but the company did not say how often the previous version, GPT-3.5, provides factually incorrect responses or responds to requests it should reject. We have to remember that, however eloquent ChatGPT is, it’s still just a chatbot.
