Working, Writing, Reading and Watching
Some Recommendations






Although I officially retired from the University of Missouri this summer, you have not heard much from me, not because I don’t have things to say or new things to share, but because I have been spending nearly every hour of the day working through an 8-to-10-year backlog of house projects that had built up. From building porches and retaining walls to battling groundhogs, I have been busy. And starting early to avoid the heat of the day has left me knackered by midday and not really in the mindset to write.
However, I have been doing a lot of reading about technology and AI and I thought I would share some highlights with you here. I have also gotten several questions about how to get consistent results when using AI for writing, so I wanted to share that here as well (see below).
What I have been reading and watching
Despite not writing as much, I have been reading and watching some great stuff from other creators. Here are a few subjects I have been exploring that I think are worth your time to check out.
Job Markets and AI
For much of history, new technologies created more—and better-paying—jobs than they displaced. For example, steno pools vanished when the PC became common, but well-paid IT roles grew. However, in recent years the link between productivity and job creation has loosened, with several sectors experiencing “jobless growth.” Many new jobs have appeared in low-productivity services, while high-productivity sectors expand output without proportional increases in employment. AI seems to be driving this decoupling. What this means for future employment and the value of education is unclear, but these articles offer a snapshot of what people are experiencing in today’s job market.
Something Alarming is Happening to the Job Market (The Atlantic). Read this if you want to understand why the path from college to career is tougher than ever—and what shifting economic and technological forces mean for the next generation of workers. The job market for young, educated workers is showing troubling signs, with recent college graduates facing a sharp rise in unemployment and even top M.B.A. grads struggling to land positions.
The Job Market is Hell (The Atlantic). Read this if you want to understand why landing a job feels so impossible right now, and what the rise of AI-driven hiring means for anyone navigating today’s employment landscape. The job market has become a frustrating maze for both job seekers and employers, with millions of applicants relying on AI tools like ChatGPT to craft résumés, only to have their applications filtered by automated systems on the employer side. The rise of AI in both applying and hiring has created a cycle where applications feel like they disappear into a void, and even experienced professionals find it difficult to advance past the initial stages.
Recent College Grads Bear Brunt of Labor Market Shifts (Federal Reserve Bank of St. Louis). Read this if you want to know how the job market is shifting for recent college grads. This analysis from the St. Louis Fed shows that young graduates are facing much higher unemployment rates than before the pandemic, with white-collar jobs—especially in fields like tech and media—seeing some of the sharpest increases, challenging old assumptions about the value of a degree.
Gen Z job crisis: Maybe there are just too many college graduates now (Fortune). Read this if you want to understand why so many Gen Z college grads are struggling to find jobs. This article argues that with more Americans holding degrees than ever before, new graduates face tougher competition from experienced workers, rising student debt, and the impact of AI and economic uncertainty—leading many to question whether college is still worth it.
Gen Z men with college degrees now have the same unemployment rate as non-grads—a sign that the higher education payoff is dead (Fortune). Read this if you want to know why a college degree may no longer guarantee Gen Z men a job. This article reveals that young men with and without college degrees now face nearly identical unemployment rates, as employers drop degree requirements and more Gen Zers turn to skilled trades for stable, well-paying work.
AI Governance
The current administration has a laissez-faire approach to AI regulation—and even sought to bar states from enacting any AI rules for 10 years—but it’s clear that regulation is needed at the state and national levels, and within organizations.
We urgently call for international red lines to prevent unacceptable AI risks (red-lines.ai). Read this if you want to know why world leaders and experts are calling for strict limits on AI. This global campaign urges governments to agree on enforceable “red lines” for AI by 2026—clear international rules to prevent the most dangerous uses and behaviors of advanced AI, from autonomous weapons to systems that can’t be controlled by humans. And here is a video from one of the signers, Maria Ressa, warning of an “information Armageddon” fueled by big tech, AI, and disinformation.
[IMHO: I don’t think LLMs like ChatGPT will ever be more intelligent than humans, but I 100% believe that they will be used by humans for “widespread disinformation, large-scale manipulation of individuals including children, national and international security concerns, mass unemployment, and systematic human rights violations.”]
An ELSI for AI: Learning from genetics to govern algorithms (Science). Read this if you want to understand how lessons from genetics can shape the future of AI governance. This article explores how the Ethical, Legal, and Social Implications (ELSI) framework from the Human Genome Project could guide more responsible and proactive oversight of artificial intelligence—offering concrete ideas for embedding ethics and community voices into tech development before harm occurs.
How AI Works
It may surprise you: the companies behind ChatGPT, Claude, and Gemini don’t fully understand how their own systems work—and they’re learning more each month. They’re also beginning to admit that these systems won’t become what most of us think of as general artificial intelligence.
Why Language Models Hallucinate (Pre-pub). Read this if you want to know why language models like ChatGPT sometimes make things up. This paper explains that “hallucinations” happen because AI systems are trained and evaluated in ways that reward confident guesses over honest uncertainty, and it offers insight into how changing benchmarks could make AI more trustworthy. Here is an article from Computerworld that discusses the paper.
How people are using and reacting to AI
A return to analog seems to be afoot, with people embracing in-person gameplay, crafts, and gardening. Maybe it’s a fad—or maybe we’re discovering that slowing our thinking and stepping off the treadmill of dopamine hits that our digital world delivers is valuable to the human experience.
The Luddite Renaissance is in full swing (Blood in the Machine-Substack). Read this if you want to see how young activists are reviving Luddite resistance against Big Tech. This article covers the “Luddite Renaissance”—a growing, youth-led movement organizing rallies, teach-ins, and creative protests to push back against AI, digital surveillance, and the social harms of technology companies.
And, just for fun, some nerding out about the PC revolution.
One way I use AI to write
AI is a great word calculator, and if you give it a lot of information you can get some remarkable results. One of the ways I use AI is as a copy editor for this newsletter. Rather than have it work on the entire post, I like to edit paragraph by paragraph; I find it easier to compare the changes ChatGPT suggests with my original text.
Since this is something I want to do a lot—give ChatGPT a paragraph and have it come back with an improved version—I made a Custom GPT that I can quickly access when needed.
A GPT is essentially a pre-configured prompt that you can customize to work the way you need, making it a practical tool for both business and personal use. Instead of having to specify your style or priorities every time, you can set permanent instructions—like always writing in a professional tone, giving step-by-step guidance, or keeping answers brief for quick decision-making.
If you have a ChatGPT Pro account, you can develop your own custom GPTs and access thousands that other people have designed. GPTs can include documents, manuals, policies, or FAQs so they can answer questions directly from your materials. My Custom GPT “Prof C — Sharp Editor” is my personalized copy editor that’s trained once and ready to help me (or you; click here to use it) without starting from scratch every time. Here is the configuration for this GPT:
You are “Prof C — Sharp Editor,” a collaborative line editor for J. Scott Christianson (aka Prof C). Your job: improve clarity, grammar, and flow while preserving the author’s meaning, tone, and rhetorical style.
VOICE & TONE
- Concise, sharp, plain language. Professional with light playfulness when it helps.
- Use active voice, strong verbs, precise nouns. Avoid filler, hedging, and purple prose.
- Em dashes are welcome. Prefer U.S. English. Use the Oxford comma (unless user says otherwise).
- Rhetorical questions: occasional and purposeful. Avoid overuse.
FORMATTING RULES
- Return a single, polished rewrite by default.
- Put any direct quotes, prompts, or example inputs/outputs in *italics*.
- Keep placeholders (e.g., “____”) as-is. Do NOT invent sources or facts.
- Maintain approximately the same length unless the user asks to expand/condense.
- If multiple paragraphs are given, return a cohesive rewrite as one block.
- If the user asks for variants, provide at most two: “Alt (punchier)” or “Alt (more formal).”
EDITING PRIORITIES (in order)
1) Accuracy of the author’s intent; never change claims or add facts.
2) Grammar, punctuation, and word choice (fix typos; e.g., en masse; prevail; cylinder).
3) Clarity and flow (split run-ons; remove redundancy; tighten transitions).
4) Rhythm and emphasis (improve parallelism; use em dashes; sharpen closers).
5) Consistency: italics for quotes/prompts; consistent terminology.
CONTENT BOUNDARIES
- Do NOT browse the web or add citations unless explicitly asked.
- Do NOT summarize links you can’t access; rely only on user-provided text.
- Preserve the author’s analogies (e.g., “calculator vs. LLM”), point of view, and humor.
INTERACTION STYLE
- Do the work immediately—no clarifying questions unless the text is ambiguous to the point of being uneditable.
- Return only the improved text (no preamble). If the user asks for notes, add a short bullet list after the rewrite under “Edits made.”
- If the user requests a platform style (LinkedIn, tweet, abstract), adapt tone/length accordingly.
OPTIONAL COMMANDS THE USER MAY TYPE
- /polish — light copyedit, keep length.
- /tighten — shorter, crisper.
- /punchier — add energy and bite.
- /plain — simplify and de-jargon.
- /formal — academic/official tone.
- /linkedin — teaser + CTA linking to their post.
- /tweet — <280 chars, punchy hook.
- /title10 — 10 title options.
- /hook — 3 opening hooks.
- /micdrop — 1 closing sentence with impact.
FAILSAFES
- If a sentence is logically broken (e.g., missing object), repair minimally and keep meaning.
- If a claim seems risky or unclear, keep it but optionally flag in “Edits made.”

Notice that these instructions include a lot of what NOT to do, which reduces some of the problems associated with systems like ChatGPT (hallucinations, inserting unrelated information, etc.).
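If you would rather work from code instead of the ChatGPT interface, the same idea of a reusable, pre-configured prompt translates directly: store the editing instructions once as a system message and send each paragraph as the user message. Here is a minimal sketch, assuming the official OpenAI Python SDK and an API key; the condensed instructions and the gpt-4o model name are illustrative stand-ins, not the exact configuration of my GPT.

```python
# Minimal sketch of a reusable "editor" prompt via the OpenAI API.
# Assumes the official OpenAI Python SDK (pip install openai) and an
# OPENAI_API_KEY environment variable; the model name is illustrative.
from openai import OpenAI

client = OpenAI()

# A condensed stand-in for the "Prof C — Sharp Editor" instructions above.
EDITOR_INSTRUCTIONS = (
    "You are a collaborative line editor. Improve clarity, grammar, and flow "
    "while preserving the author's meaning, tone, and style. Return only the "
    "improved text, keep roughly the same length, and never add facts."
)

def edit_paragraph(paragraph: str) -> str:
    """Send one paragraph for editing, the same way the Custom GPT is used."""
    response = client.chat.completions.create(
        model="gpt-4o",  # illustrative model choice
        messages=[
            {"role": "system", "content": EDITOR_INSTRUCTIONS},
            {"role": "user", "content": paragraph},
        ],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    draft = "Despite not writing as much, I have been reading some great stuff."
    print(edit_paragraph(draft))
```

The point is the same as with a Custom GPT: the instructions are written once, and each editing request only has to supply the new paragraph.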
How did I create all these instructions? Easy, I used ChatGPT. I just started out with a simple prompt asking it to edit some text and then provided feedback on what I liked and didn’t like about the edits ChatGPT made. This session went on for about 20-30 minutes. Once I thought it was working well as a copy editor, I prompted it with: “Knowing what you know now about my writing, can you make me a GPT so that I can more easily start editing collaborations with you in the future?”
My commentary may be republished online or in print under Creative Commons license CC BY-NC-ND 4.0. I ask that you edit only for style or to shorten, provide proper attribution and link to my contact information.
🎒 Learn AI with ME
My friend Tojin T Eapen and I are about to release a series of courses about AI. These will be practical, hands-on courses, with limited cohorts and one or more meetings on Zoom. If you were to sign up for an AI course, what would you want to learn? If you can take a moment to let me know by completing this form, I’d appreciate it. And enter your name and email if you want to be notified when the first course launches and be part of our pilot!
👍 Products I Recommend
Products, a card game for workshop ideation and icebreakers (affiliate link). I use this in my workshops and classes regularly. Made by former Mizzou student Aaron H.
📆 Upcoming Talks/Classes
I will be presenting “AI Strategies” for the Red River Valley Estate Planning Council in Fargo, North Dakota, on November 19. Details are available here.
I will be giving a guest lecture to a class on the MU campus about “AI and its implications for the US workforce” on Monday, Nov 3 from 1:00-1:50 pm. Let me know if you are interested and want to sit in.
I will be presenting a “Lunch and Learn” at the Daniel Boone Library, in collaboration with the League of Women Voters, on “AI: How’s it going so far” on December 10 at noon. Details coming soon.
I will be teaching a course about AI for Osher during the spring 2026 semester. Details and registration will be available here soon.



Best takeaway from today's post: "AI systems are trained and evaluated in ways that reward confident guesses over honest uncertainty." The cynic in me says, why should AI be any different from the world that trained it? Our just-in-time mode of operation leaves 0.0001 nanoseconds for pausing to reflect. Is there a way to reverse the reward, so that honest uncertainty is given more value?
Thank you for the recommendations. It is difficult to know what to think of AI, and the articles will be helpful. Good luck with the renovations!