Coding Responsibly with Generative AI
Researchers are increasingly using generative artificial intelligence (GAI) tools to assist with coding. These tools use large language models (LLMs) to generate code, but they can also incorporate additional technologies such as version control, web searches, and code execution to help you with programming tasks. This tutorial shares best practices for using these tools safely, effectively, and in a way that supports the development of your coding skills.
Decide what type of GAI tool to use.
Different types of GAI tools have different strengths. Chatbots (e.g., ChatGPT, Microsoft Copilot, Gemini, Claude) are helpful for brainstorming ideas, explaining code and errors, and generating standalone scripts or short code segments. Tools integrated into development environments (IDEs; e.g., GitHub Copilot, Cursor) are useful for writing code through iterative, natural-language interaction, whether in a chat interface or through comments in the code. Coding agents (e.g., Claude Code, OpenAI Codex) can execute complex tasks independently, modifying existing code or generating new code when given clear specifications.
Be aware of the data you are sharing.
GAI tools must access and process your data to work. Unless you are using a tool that runs entirely on your own computer, this means you are sharing your information with the GAI tool providers, who can archive your data or use it for their own purposes. Terms of use for such services can change frequently. If you are working with sensitive data, consult Northwestern’s Guidance on the Use of Generative AI to identify tools that are backed by University contracts and data protections. When using GAI tools integrated into IDEs or coding agents, give them limited, selective access to your files, and keep in mind that you may be giving them access to other files beyond your scripts.
Be responsible for your code.
It can be tempting to delegate your work to GAI tools. However, you are responsible for your research outputs, including code. As one position statement notes, GAI “is a tool, not a substitute for human judgment.” Review, understand, and test the code generated by GAI tools. When testing, ask yourself: how would I know if this code produced the wrong result? Where possible, validate outputs against cases where you already know the answer. To support reproducibility, ensure you keep a copy of all code you run as part of your project, and document what was generated by GAI tools, including the LLM name and version.
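One lightweight way to apply the advice above is to check generated code against inputs where you already know the answer before trusting it on real data. The sketch below uses a hypothetical GAI-generated helper (`mean_score` is an illustrative name, not from any real tool):

```python
# Suppose a GAI tool generated this helper for you.
def mean_score(scores):
    """Return the arithmetic mean of a list of scores."""
    return sum(scores) / len(scores)

# Validate against cases where the correct answer is known,
# including a trivial edge case, before running it on real data.
assert mean_score([2, 4, 6]) == 4.0
assert mean_score([5]) == 5.0
assert abs(mean_score([0.1, 0.2, 0.3]) - 0.2) < 1e-9
print("all checks passed")
```

Even a few such checks, kept alongside your project code, double as documentation of what the generated function is supposed to do.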
Disclose your use of GAI tools.
Many academic publishers now require authors to disclose when generative AI was used in preparing a manuscript, including for generating or assisting with code. Generally, using AI for basic spelling, grammar, or style corrections does not require disclosure. However, using AI to generate code, draft text, assist with analysis, or compile literature does. Major publishers each have their own specific policies: see guidelines from Elsevier, Wiley, Springer Nature, and SAGE. Check the AI policy for your target journal before submitting.
Use GAI tools to strengthen, not replace, your skill development.
GAI tools can help accelerate your learning and the development of your coding skills, but only with your active engagement and intentional use. Ask the tools to explain any code they generate. Ask about alternative approaches, and ask the tools to justify their choices. Ask for feedback on your code. Using GAI tools effectively for anything beyond simple tasks depends on skills you build through practice: writing clear descriptions of what you want your code to do, evaluating alternative approaches, and troubleshooting issues. Don’t limit your future development and effectiveness by engaging with these tools passively.
Give context to GAI tools.
The quality of the information GAI tools generate varies widely, as does the quality of the information they were trained on. If you have documentation, code, or other relevant information, provide it to the GAI tool. Keep in mind that some chatbots have web-browsing capabilities, so you can paste a URL and ask the tool to retrieve the information; others require you to upload material directly. Be specific: share the exact error message you are getting, describe the structure of your data, specify the programming language and key package versions, and explain what you have already tried. Context can also include descriptions of your expected outputs and preferences for coding style, specific packages, or frameworks.
Engage critically with the output.
GAI tools can produce poor-quality outputs and false information. Hallucinations — cases where the tool generates plausible-looking but incorrect code — are particularly common when working with specialized research packages for which the LLMs have little context. Even when code runs without errors, it can contain subtle bugs such as off-by-one errors, incorrect formulas, or silent data type conversions that produce wrong results — and these are harder to catch than code that fails outright. Verify the information these tools provide, especially when you are working with an uncommon or recently updated package. You may need to check official documentation yourself to ensure that functions really exist and that they do what a GAI tool says they do. More generally, ask GAI tools to critique their own outputs: to suggest better approaches or to point out bugs in their code. Keep in mind, however, that tools can miss their own errors or falsely confirm that their code is correct — external verification through official documentation, running the code, or consulting a colleague is more reliable than asking a tool to check itself. If you spot a mistake, tell the tool to fix it (and potentially how to do it).
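To make the risk concrete, here is a hypothetical off-by-one bug of the kind GAI tools sometimes produce: the code runs cleanly and looks plausible, yet returns the wrong answer. The function names are made up for this sketch.

```python
# A plausible-looking generated function meant to return the
# last n items of a list — but the slice is off by one.
def last_n_buggy(items, n):
    return items[-(n - 1):]   # bug: drops one item

# A corrected version, with the n == 0 edge case handled
# (items[-0:] would otherwise return the whole list).
def last_n_fixed(items, n):
    return items[-n:] if n > 0 else []

data = [1, 2, 3, 4, 5]

# The buggy version raises no error, which is exactly why
# checking against known answers matters.
assert last_n_buggy(data, 3) == [4, 5]       # wrong result, no exception
assert last_n_fixed(data, 3) == [3, 4, 5]    # expected answer
```

Note that nothing about the buggy version fails outright: only a test with a known expected output reveals the problem.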
Try again, with the same tool and with a different one.
GAI tools vary widely, and even the same tool can produce different responses to the same prompt. It can be useful to pose the same question to one chatbot across independent conversations and to try several different chatbots. For complex tasks, consider using more capable models, which may require a paid subscription.
Keep experimenting: the landscape is constantly changing.
GAI tools, and our understanding of how to use them effectively, are changing rapidly. New tools emerge frequently, and existing tools are updated with new capabilities. Current GAI tools can successfully do things they couldn’t just a few months ago. Talk with colleagues about what they are doing — and share what works for you.
Disclosure: Claude Opus 4.6 was used in reviewing, editing, and improving this document.