AI-Assisted R Development: Speaking at the Inaugural R+AI Conference 2025
I'm speaking at the R+AI Conference on November 12th at 10:45 AM Eastern about something that fundamentally changed how I approach R & Shiny development - and it started with a healthy dose of panic.
The Existential Dread Phase
My AI journey began like many others': using ChatGPT for the first time to write better emails, blog posts, and vacation plans. Then Claude (and Claude Code, in early spring of this year) started outputting decent R code, offering solid suggestions on bug fixes, and creating Shiny apps. My honest first take: "Am I about to be obsolete?"
That fear quickly morphed into a lot of negative self-talk: "Am I becoming dependent on Claude Code? Am I losing my hard-earned R expertise by chatting with an AI?"
As it turns out, I was asking the wrong questions entirely and approaching this new tool with the wrong mindset.
The Unexpected Freedom
Running a mostly solo analytics consultancy, I have clear constraints on my time: I can't scale hours beyond what is humanly possible, and frankly, my world outside of work (aka my family 💞) would be terribly sad if I worked 60+ hour weeks. AI coding assistance has let me take on more sophisticated projects that I would previously have declined to avoid burnout.
But here's what surprised me most: collaborating with Claude on portions of the development work has freed me up mentally for the R & Shiny work that brings me the most professional joy - community contributions & conversations, quirky custom package development, and exploring new (to me) techniques in data engineering & DevOps to deliver more comprehensive solutions for my clients. The mental bandwidth that used to go toward writing boilerplate code, chasing down bugs, and wrestling with syntax? Now it goes toward giving back to the R ecosystem that's given me so much.
What I Learned About Learning
The talk title is inspired by the movie Me, Myself & Irene (2000).
Here's the surprising part: spotting when AI makes shit up (aka erroneous output) and debugging what it generates have taught me more about R patterns than years of scouring the treasure trove of questions and answers on Stack Overflow (or #rstats on Twitter, RIP). By constantly evaluating AI suggestions, I've developed better instincts for code "smells", anti-patterns, and architectural decisions.
The flip side? AI loves over-abstraction. It'll cheerfully create unnecessary complexity or violate R idioms I actually care about. I've found that when Claude ventures to write R code, the models love to be overly clever - which is sometimes genuinely useful to see, and other times there was clearly a simpler way to do that thing. This is where CLAUDE.md and {mcptools} help drastically. Learning to recognize these moments - and confidently push back - has strengthened my judgment significantly.
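To make the over-abstraction point concrete, here's a small made-up sketch (not from a client project) contrasting the kind of clever-but-heavy helper an assistant might propose with the plainer idiom I'd rather maintain. The summarise_by() helper is invented for illustration, and the example assumes the {palmerpenguins} and {dplyr} packages are installed.

```r
# The "clever" version an assistant might suggest: a function factory plus
# split()/do.call() gymnastics to compute grouped means. Flexible, but overkill.
summarise_by <- function(group_col, value_col) {
  function(df) {
    groups <- split(df[[value_col]], df[[group_col]])
    do.call(rbind, lapply(names(groups), function(g) {
      data.frame(group = g, mean_value = mean(groups[[g]], na.rm = TRUE))
    }))
  }
}

mean_bill_by_species <- summarise_by("species", "bill_length_mm")
mean_bill_by_species(palmerpenguins::penguins)

# The simpler idiom I'd actually ship -- and the kind of preference a short
# CLAUDE.md note ("prefer plain dplyr pipelines over bespoke helpers") nudges
# Claude toward:
library(dplyr)

palmerpenguins::penguins |>
  group_by(species) |>
  summarise(mean_bill = mean(bill_length_mm, na.rm = TRUE))
```

Neither version is wrong, but only one of them is something I want to be debugging at 11 PM.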
What I'll Cover in the Talk
Through real examples from my work (nonprofit dashboards for survey analysis, conservation analytics platforms, and academic internal decision support tools), I'll walk through:
Force Multiplier Effects: Taking on bigger projects while having bandwidth for community contributions
Building Better Intuition: Using rapid AI-assisted experimentation to understand R patterns more deeply
Avoiding the Traps: Recognizing when AI creates problems (over-abstraction, code that works but shouldn't ship)
Staying Sharp: Maintaining genuine R expertise while leveraging AI strategically
Already experimenting with AI in your R workflow? I'd love to hear what you've discovered - the breakthroughs, the straight-up mistakes, and what it's freed you up to explore. See you at the conference! 💭
