
NYU Office of the Provost Generative AI Workshop: (Re)Designing Assignments and (Re)thinking Assessment for Learning in the Age of GenAI


Recently, the NYU Office of the Provost offered a morning of Generative AI Workshops.

I attended “(Re)Designing Assignments and (Re)thinking Assessment for Learning in the Age of GenAI.”


Approaches

Of the many topics covered during the workshop, I especially appreciated how Associate Vice Provost DeAngela Duff framed two approaches for professors to consider when it comes to how students complete assignments and assessments: Human-First vs. AI-First.


  • Human-First AI Use: In this approach, students first independently brainstorm, research, and draft. Then, they ask AI to help refine, iterate, and even innovate. By the end, students ensure they have vetted all outputs for accuracy and quality. Associate Vice Provost Duff shared that as a professor, she favors this approach, calling it the “human sandwich”: Human generation - AI editing - Human finalizing.


    To me, this approach empowers and preserves human creativity and critical thinking skills—even confidence—through productive struggle (Lee et al., 2025). AI is employed as an assistant or editor for humans to ultimately vet and finalize outputs, rather than the other way around.


  • AI-First AI Use: This approach acknowledges that many students are already using generative AI to create first drafts of written assignments, presentations, and more. After this step, human intervention is paramount to evaluate accuracy, prompt and edit, and revise before finalizing: AI generation - Human editing and iterating with AI - Human finalizing.


    I appreciate that the latter approach helps redirect a habit many professors worry about today: instead of using AI simply to complete assignments for them, students must evaluate, edit, and revise what the AI produces.


Each approach offers positives and challenges, including ethical questions, for professors to explore. There is no right answer; it depends on the professor and content area. 


AI Policies & Student Choice

NYU provides guidelines and considerations for generative AI, but Associate Vice Provost Duff emphasized that it is up to individual professors to decide how students should use AI in their assignments and assessments. Professors might write a blanket syllabus statement and/or create guidelines for acceptable AI use in different assignments.


The guidelines also note that instructors are fully responsible for any use of AI in their courses, including the design of assessments, the grading of student work, the writing of feedback, and the protection of student privacy and information. Faculty use of GenAI to develop course materials and/or student feedback should always be disclosed to students. 


Duff also cautioned professors not to assume that all students use, or even want to use, AI in their assignments. For example, Dartmouth professor Scott Anthony was surprised by his Gen Z students’ fear and anxiety about having their humanity and critical thinking skills replaced by artificial intelligence (Lichtenberg, 2025). For this reason, professors might create classroom AI policies collaboratively (NYU login required) with students to include their views on how they can/will use AI in their coursework.


GenAI Learning Strategies

Rather than only sharing restrictions on AI use, this workshop offered positively framed strategies for completing assignments with AI effectively and responsibly. For example:


  • Meta-reflections: Have students continuously reflect and capture notes on the process of completing an assignment, not just the output.


  • Brainstorm with AI: Have students prompt AI for options rather than one answer.


  • Compare and Contrast: Have students compare and pull from several prompts or several AI tools before choosing. 


  • Confirm Ownership: Emphasize that students are responsible for everything in an assignment, including accuracy.


Assessment Strategies

To ensure student accountability for learning, Associate Vice Provost Duff shared the following direct assessment strategies:


  • Live Demonstrations 

  • Oral Defenses

  • Post-Presentation Q+A

  • Peer Review

  • Traditional Written Exams

  • Process Portfolios

  • Google Doc Written Assignments (Version History)

  • Meta-Reflections (e.g., Reflective Journaling on AI Use)

  • and more!


My Meta-Reflection: Process for Writing this Blog

Although I strongly prefer a “Human-First/Human-Mostly” approach to GenAI, I decided to test an AI-First approach to write this blog post. This section models a process reflection that professors might consider integrating into student assignment submissions.


First, I uploaded my general notes, resources, and ideas from the NYU GenAI workshop into Google Gemini, and prompted it to come up with a first draft.*


Though it captured some of my ideas, the quality of the writing felt generic and uninspiring to me. What I found after a couple of iterative prompts to shape the draft is that I deeply enjoy the craft of writing—ever since childhood—as a way to integrate my learning from my mind and my notes into my own voice and finally into words. The process is important to me, and it makes me feel inspired and happy.


By experimenting with an AI-First approach, I also reconfirmed for myself that although I’ve learned to be a careful editor from 20+ years of experience, I do not love editing. It’s tedious.


Therefore, given the poor quality of the initial drafts from generative AI, I concluded that I still value a Human-First—well, Human-Only or Human-Mostly—approach when it comes to writing and creative work.


How I did use generative AI as my assistant in my Human-Mostly approach:


  • Formatting APA references, which I vetted and finalized in this publication.

  • A quick check of grammar, especially when a colleague isn’t available for a peer-proofread. (I still prefer the latter. After all, it is human beings - you! - who are my audience, so having a second pair of human eyes helps.) :)


For more information and resources, check out the (Re)designing Assignments and (Re)thinking Assessments workshop slide deck (NYU login required).



References


*Google. (2026). Gemini [Large language model]. https://gemini.google.com


Lee, H.-P. (Hank), Sarkar, A., Tankelevitch, L., Drosos, I., Rintel, S., Banks, R., & Wilson, N. (2025). The impact of generative AI on critical thinking: Self-reported reductions in cognitive effort and confidence effects from a survey of knowledge workers. In Proceedings of the 2025 CHI Conference on Human Factors in Computing Systems (CHI ’25). Association for Computing Machinery. https://doi.org/10.1145/3706598.3713778


Lichtenberg, N. (2025, December 20). ‘They’ll lose their humanity’: Dartmouth professor says he’s surprised just how scared his Gen Z students are of AI. Fortune. https://fortune.com/2025/12/20/does-ai-make-you-dumb-dartmouth-professor-says-gen-z-scared



*Appendix: Gemini-Generated Blog



Header image credit: NYU Office of the Provost GenAI Workshop, 2026

