Research with Generative AI
Resources for scholars and researchers
Generative AI (GenAI) technologies offer new opportunities to advance research and scholarship. This resource page aims to provide Harvard researchers and scholars with basic guidance, information on available resources, and contacts. The content will be regularly updated as these technologies continue to evolve. Your feedback is welcome.
Leading the way
Harvard’s researchers are making strides not only in generative AI but also in the larger world of artificial intelligence and its applications. Learn more about key efforts.
The Kempner Institute
The Kempner Institute is dedicated to revealing the foundations of intelligence in both natural and artificial contexts, and to leveraging these findings to develop groundbreaking technologies.
Harvard Data Science Initiative
The Harvard Data Science Initiative is dedicated to understanding the many dimensions of data science and propelling it forward.
More AI @ Harvard
Generative AI is only part of the fascinating world of artificial intelligence. Explore Harvard’s groundbreaking and cross-disciplinary academic work in AI.
Frequently asked questions
Do academic publishers allow the use of generative AI in research papers?
Academic publishers have a range of policies on the use of AI in research papers. In some cases, publishers may prohibit the use of AI for certain aspects of paper development. You should review the specific policies of the target publisher to determine what is permitted.
How should I cite the use of generative AI in my work?
Guidance will likely develop as AI systems evolve, but some leading style guides have already offered recommendations for citing generative AI.
Do I need to disclose the use of generative AI in my research paper?
Yes. Most academic publishers require researchers using AI tools to document this use in the methods or acknowledements sections of their papers. You should review the specific guidelines of the target publisher to determine what is required.
May I use generative AI to help prepare a grant application?
You should review the specific policies of potential funders to determine whether the use of AI is permitted. For its part, the National Institutes of Health (NIH) advises caution: “If you use an AI tool to help write your application, you also do so at your own risk,” as these tools may inadvertently introduce issues associated with research misconduct, such as plagiarism or fabrication.
May I use generative AI in the peer review process?
Many funders have not yet published policies on the use of AI in the peer review process. However, the National Institutes of Health (NIH) has prohibited such use “for analyzing and formulating peer review critiques for grant applications and R&D contract proposals.” You should carefully review the specific policies of funders to determine their stance on the use of AI.
Are there safety risks associated with using generative AI tools in research?
Yes. Some of the primary safety issues and risks include the following:
- Bias and discrimination: The potential for AI systems to exhibit unfair or discriminatory behavior.
- Misinformation, impersonation, and manipulation: The risk of AI systems disseminating false or misleading information, or being used to deceive or manipulate individuals.
- Research and IP compliance: The necessity for AI systems to adhere to legal and ethical guidelines when utilizing proprietary information or conducting research.
- Security vulnerabilities: The susceptibility of AI systems to hacking or unauthorized access.
- Unpredictability: The difficulty in predicting the behavior or outcomes of AI systems.
- Overreliance: The risk of relying excessively on AI systems without considering their limitations or potential errors.
See Initial guidelines for the use of Generative AI tools at Harvard for more information.