Blog 4: How GenAI Works
This case study details the history and inner workings of generative AI, pointing out its capabilities and faults. Furthermore, it discusses ethical concerns with the training and use of such technology, especially as it becomes available to the public.
Case Study:
How Generative AI Works and How It Fails
Attempting to Use AI as a Learning Tool
For this activity, I didn’t want to try learning a completely foreign subject with a chatbot, since I wanted to be able to point out its inaccuracies with some confidence. At the same time, I didn’t want to pick something I already knew well, which gave me an idea. Because the article touched on how chatbots are inherently good at translation, I decided to use this as an opportunity to study Japanese, since I had something to gain on both fronts (grammar practice and a look at chatbot functionality).

So, I opened up ChatGPT and asked it to create an outline for studying Lesson 13 of Genki specifically, telling it to exclude vocabulary and grammar from later lessons. Immediately, it created what I can only imagine was a made-up lesson combining old grammar points with ones I knew hadn’t been taught yet (likely because three editions of the book exist), so I had to fine-tune my prompt to make sure it knew exactly what I wanted to study. It then gave me example sentences in English and asked me to translate them. Once I was finished, it graded my answers on whether or not they were correct, but there were times it contradicted itself in its responses. I was able to correct it on those occasions, but I imagine it would have agreed with any correction I offered, whether I was right or not.

For the most part, it seemed like I was getting something out of the practice material it gave me, but I don’t think ChatGPT was accurate enough to be a tool for learning from my mistakes so much as a fancy practice-quiz generator. I don’t think I’d use it again for studying Japanese, and I especially wouldn’t consider using it to learn a subject I’m not at all familiar with, since I wouldn’t have the luxury of being able to fact-check it. It might still be useful for topics the user already knows, at least to get them thinking about the material, but chatbots in their current state are still too inaccurate to be viable as independent learning tools.
My Own Discussion Question
After using it yourself, do you believe AI in its current state is a viable tool for learning? If not, do you think that, given how these models are currently being developed and the information they’re being trained on, they’ll ever be accurate enough to serve as a faultless tool for learning new information?
This question is intended as a follow-up to the ‘Learning’ portion of the discussion, since I felt that part could have been expanded beyond anecdotes. I think it’s a good follow-up because it opens the AI discussion up to being more than just personal beliefs: it would require the people answering to back their view with legitimate justifications drawn from that personal experience.
Final Reflection
I really liked this assignment since it not only gave me the opportunity to actually learn a little about how generative AI works (since I haven’t taken any relevant courses at the time of writing this), but also essentially let me create my own evidence for the discussion. In terms of using AI as a learning tool, I can now say that I think it could be okay for generating practice materials but definitely cannot replace a tutor. I also liked that the specific discussion topic I chose was very open-ended and didn’t just ask a complicated “yes or no” question (which I realize I kind of did myself, but what I said in that reflection still applies). This definitely felt more fitting as a blog post than the previous ones, which is great.
