ChatGPT has been creating a buzz in the technology world for some time now. Since its launch just a few months ago, this AI-powered chatbot has become a popular tool thanks to its ability to generate human-like responses. It can write essays and poems, answer questions, ask follow-up questions, admit mistakes, challenge incorrect premises, and even reject inappropriate requests. What sets it apart from other chatbots and search engines is its generative, conversational nature and its ability to provide prompt, engaging responses that are often indistinguishable from those of a human. This is why its popularity has grown so rapidly. However, concerns have also arisen about ChatGPT and its implications, leading to bans on its use in several places, including educational institutions in a number of countries.
While not all human intelligence can be replaced, certain routine activities can be automated through conversational AI. For example, customer service jobs can be automated using chatbots or IVR systems. Unfortunately, many existing systems are frustrating to use and often fail to answer our questions properly. ChatGPT aims to address this by drawing on the vast corpus of text it was trained on to provide responses that are nearly as good as talking to a human. In more complex or subtle contexts, however, ChatGPT may struggle: it is a probabilistic model that predicts likely responses rather than guaranteed correct ones, so it works well most of the time but can fail unpredictably. Despite these limitations, the potential of ChatGPT and other conversational AI systems to automate routine activities is significant.
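To see what "probabilistic" means in practice, consider the toy Python sketch below. It is not ChatGPT's actual code; the miniature next-word distribution is invented purely for illustration. The point is that generation samples from learned probabilities, so the same prompt can produce different answers on different runs, occasionally including an unlikely or wrong one.

```python
import random

# Toy next-word distribution, invented purely for illustration.
# A real model like ChatGPT learns billions of such probabilities from text.
next_word_probs = {
    "The parcel will arrive": [("tomorrow", 0.6), ("on Friday", 0.3), ("eventually", 0.1)],
}

def sample_continuation(prompt: str) -> str:
    """Sample a continuation according to the model's probabilities."""
    words, weights = zip(*next_word_probs[prompt])
    return random.choices(words, weights=weights, k=1)[0]

prompt = "The parcel will arrive"
for _ in range(5):
    # The same prompt can complete differently on each run.
    print(prompt, sample_continuation(prompt))
```

Most runs produce the most likely answer, but low-probability continuations do get sampled now and then, which is why such systems can be confidently wrong in subtle contexts.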
There is a valid concern that academic standards could suffer from the widespread use of ChatGPT and similar tools. Many academic systems rely on exams with standardized sets of questions, and with ChatGPT, an average or below-average student could obtain a higher grade without much effort. This approach, however, does not promote learning, and students who do not learn may struggle to find job opportunities later. ChatGPT has the potential to revolutionize the process of finding information, but refining its output into something impactful requires a higher level of intelligence and skill. ChatGPT can provide average-level output; it is up to the user to raise it above average and convey the intended message more effectively. This is where human skill becomes essential in adding value to the marketplace. As a result, the nature of training and examination will have to evolve to keep up with the changing landscape.
The potential of ChatGPT lies in its ability to collaborate with humans. A human can guide ChatGPT's responses, edit them for factual errors, and use its output as a first draft to improve upon. This could revolutionize the way law is practiced by removing mundane tasks from lawyers' workloads. In some studies, ChatGPT was tested on essays and multiple-choice questions in the field of law. The results showed that it was better at generating essays and reproducing legal rules than at spotting legal issues, identifying potential problems, or providing deep analysis. In a world where ChatGPT is widely available, the most valuable skills will be the ability to take its output, edit it, recognize when mistakes have been made, conduct deeper analysis, and apply legal judgment in ways that only humans can. While possessing encyclopaedic knowledge of the law used to be highly valuable, ChatGPT can now provide an answer to nearly any constitutional law question; it is still unable to identify the deeper issues. As a result, tests and legal education will need to adapt to emphasize the skills that are most valuable in this new landscape.
Fake news is a growing concern, and language models add a new dimension to it. If the data used to train a model is biased, or the model is carelessly built, the output it produces can be biased and even offensive. Addressing this would ideally mean closely examining the training data, but that is impractical at scale. Instead, we may need to engineer the prompts given to the models, provide them with specific inputs and tools, and observe their output across a wide range of input-output scenarios. We must be careful not to use these models in harmful ways and must test them rigorously. The time has come for large language models to undergo thorough testing to prevent bias and other harmful outcomes.
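As a rough illustration of what such testing could look like, here is a minimal Python sketch. Everything in it is hypothetical: `query_model` is a placeholder for whatever model API is being evaluated, and the prompt template and names are invented. The idea is to vary one attribute at a time and review the outputs side by side for unwarranted differences.

```python
# Minimal sketch of probing a language model's output across input variations.
# query_model() is a hypothetical placeholder: in practice it would call
# whatever model API is being evaluated.
def query_model(prompt: str) -> str:
    return f"<model output for: {prompt!r}>"  # stand-in response

# Prompts differ only in the name, so differences in tone or content
# across the outputs may indicate bias.
TEMPLATE = "Write a one-line job reference for {name}, a software engineer."
names = ["Aarav", "Fatima", "John", "Mei"]

results = {name: query_model(TEMPLATE.format(name=name)) for name in names}

for name, output in results.items():
    # Divergent outputs get flagged for human review.
    print(f"{name}: {output}")
```

Systematic probes like this cannot prove a model is unbiased, but they make it far easier to catch obvious problems before a model is deployed.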
In some cases, penalties can be imposed under ordinary legal doctrine for spreading misinformation. However, the challenge is that the cost of producing fake news is so low that the market may respond by disseminating more false information. As a result, people are becoming more aware that the information they find randomly on the internet is often unreliable, and they are turning to trusted sources with a reputation for honesty. We are already living in a world where misinformation is rampant, and random links on Facebook cannot always be trusted. Therefore, it will be increasingly important for aggregators to filter for high-quality news, and people will rely more on certain sources for reliable information.
Social engineering, in which an email, phone call, or message appears genuine and convincing but is actually intended to steal your information, is the starting point for many cyber-attacks. This risk grows as more sophisticated techniques, including AI-based methods, are developed to create new attack strategies and make such messages more convincing.
While ChatGPT is a conversational AI and cannot directly attack computers, it can be used to convince humans of false information and persuade them to take actions they would not normally take. This is the space where many attacks occur: we receive a message, click a link, and suddenly everything is compromised. This is the inherent risk in many types of attacks.
We hope that this technology is used responsibly and solely for the benefit of humanity. Its purpose should be to assist us in our work, saving us time and effort.
(The author is an advocate by profession)