Friend or Foe: Can Computer Coders Trust ChatGPT?
ChatGPT is an artificial intelligence (AI) tool that lets users type in a question and receive a generated response. The responses are human-like, friendly, and intelligent, thanks to the underlying AI model, GPT-4. Developers can use ChatGPT to produce computer programs quickly and easily. But can they trust ChatGPT to produce efficient, secure, and accurate code?
The Benefits of ChatGPT
ChatGPT offers many benefits to developers. For example, creating games like Pong, Breakout, and Asteroids would usually take an experienced developer at least 30 minutes; ChatGPT can generate the code in about 40 seconds. ChatGPT can also give instructions tailored to the developer’s needs, making it easier for inexperienced developers to create a simple web app.
Moreover, ChatGPT uses predictive algorithms to generate text, so users do not have to search the internet for code examples. It offers a level of contextual understanding that someone who has never written code could not get from a Google search. ChatGPT can even recreate older PC games such as SkyRoads, for which there are no examples online.
The Risks of ChatGPT
Despite the productivity improvements that ChatGPT offers, some developers hesitate to adopt AI for coding. One major issue is the risk of security flaws or poorly written code. Some developers report that ChatGPT can generate flawed or inefficient code. For example, Tony Smith, the Chief Technology Officer of Rightly, asked ChatGPT to write code to work out the number of days in a given month. The generated code contained a subtle bug: it treated March as having only 30 days, because the shift to British Summer Time removed an hour from its date arithmetic.
Moreover, ChatGPT has limited knowledge of events after 2021, raising concerns that it may suggest outdated techniques or code with known security vulnerabilities. Developers also worry about the temptation to use code they do not understand, which increases business risk.
The Importance of Developer Accountability
While AI can generate code, it remains the developer’s responsibility to ensure that the code is efficient, accurate, and secure. Developers must review ChatGPT-generated code carefully to check for flaws or vulnerabilities. At Venafi, where the VP of Security Strategy and Threat Intelligence used ChatGPT to make Excel macros, the code is reviewed multiple times by humans. Ultimately, professional developers are responsible and accountable for the code they ship, regardless of its source.
Key Takeaways
- ChatGPT is made using vast amounts of web content but generates text specifically for each user, predicting the right answer to their question.
- ChatGPT can help developers create parts of major games, but it is unlikely to be used to make a complete modern console game.
- ChatGPT recreates games like Pong, Breakout, and Asteroids faster than experienced developers could.
- Developers who trust ChatGPT-generated code might face higher business risk from security flaws and outdated techniques.
- ChatGPT is an AI tool that allows developers to type in a question and receive a generated response.
- ChatGPT generates human-like, friendly, and intelligent responses by using predictive algorithms.
- Developers can use ChatGPT to create parts of a game or software, but professional developers remain responsible and accountable for its accuracy, efficiency, and security.
- The risks of using ChatGPT include security flaws, poorly written code, and the temptation to use code developers do not understand, all of which increase business risk.
Overall, ChatGPT is a promising tool for developers looking to create web apps or games quickly and easily. However, ChatGPT-generated code must be reviewed by humans to ensure its accuracy, efficiency, and security, and developers remain accountable for the code they write, regardless of its source. So, while ChatGPT can be a friend to developers, it is not a full-fledged replacement for human expertise.