<?xml version="1.0" encoding="utf-8" standalone="yes"?><rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Technical Strategy on Figuring it Out</title><link>https://ashleyedds.dev/tags/technical-strategy/</link><description>Recent content in Technical Strategy on Figuring it Out</description><generator>Hugo -- gohugo.io</generator><language>en-us</language><lastBuildDate>Thu, 09 Apr 2026 00:00:00 +0000</lastBuildDate><atom:link href="https://ashleyedds.dev/tags/technical-strategy/index.xml" rel="self" type="application/rss+xml"/><item><title>Three Things Leaders Owe Their Dev Teams Right Now</title><link>https://ashleyedds.dev/p/three-things-leaders-owe-dev-teams-ai/</link><pubDate>Thu, 09 Apr 2026 00:00:00 +0000</pubDate><guid>https://ashleyedds.dev/p/three-things-leaders-owe-dev-teams-ai/</guid><description>&lt;img src="https://ashleyedds.dev/p/three-things-leaders-owe-dev-teams-ai/cover.jpg" alt="Featured image of post Three Things Leaders Owe Their Dev Teams Right Now" /&gt;&lt;p&gt;This week, I sat in a leadership conversation about AI. How are we supporting our teams to use it? How are we pushing the envelope? What does success actually look like?&lt;/p&gt;
&lt;p&gt;I walked away with three things rattling around in my brain that I haven&amp;rsquo;t been able to shake. Not three things I figured out, to be clear — three things I realized we &lt;em&gt;owe&lt;/em&gt; our teams, and maybe haven&amp;rsquo;t said clearly enough yet.&lt;/p&gt;
&lt;h2 id="1-failure-has-to-be-safe-and-you-have-to-say-so-out-loud"&gt;1. Failure Has to Be Safe (And You Have to Say So Out Loud)
&lt;/h2&gt;&lt;p&gt;Here&amp;rsquo;s the tension I&amp;rsquo;ve been sitting with: we&amp;rsquo;re asking our teams to experiment boldly with AI tooling, push the boundaries of what&amp;rsquo;s possible, and embrace a technology that is genuinely moving faster than any of us can fully keep up with. At the same time, those same humans have quarterly commitments, bonus goals, sprint reviews, and stakeholders watching their delivery metrics.&lt;/p&gt;
&lt;p&gt;That&amp;rsquo;s a recipe for paralysis, not innovation.&lt;/p&gt;
&lt;p&gt;The thing about calculated risk is that it only works if the person taking it actually believes they&amp;rsquo;ll be supported when it doesn&amp;rsquo;t pan out. And &amp;ldquo;we support fast failure&amp;rdquo; isn&amp;rsquo;t the kind of thing you can put in a slide deck once and consider done. It has to be lived. It has to show up when someone raises their hand and says &lt;em&gt;we tried the agentic workflow, it added two weeks to the project, and here&amp;rsquo;s what we learned.&lt;/em&gt; That moment — how leadership responds to that moment — is the actual policy.&lt;/p&gt;
&lt;p&gt;So if you haven&amp;rsquo;t said it explicitly, and recently, and in a way your team actually believes: say it now. The psychological safety to experiment isn&amp;rsquo;t a nice-to-have in a fast-moving AI environment. It&amp;rsquo;s load-bearing.&lt;/p&gt;
&lt;h2 id="2-their-dev-skills-still-matter--protect-them"&gt;2. Their Dev Skills Still Matter — Protect Them
&lt;/h2&gt;&lt;p&gt;There&amp;rsquo;s a real and documented concern in the developer community right now that is, frankly, a little uncomfortable to name when you&amp;rsquo;re simultaneously championing AI adoption: &lt;em&gt;using AI too much may be quietly eroding the skills that make developers good at their jobs.&lt;/em&gt;&lt;/p&gt;
&lt;p&gt;The research backs this up. Studies on AI coding tools show that junior developers who start with AI sometimes never fully develop fundamental skills — and there&amp;rsquo;s growing concern that in a few years, we may have &amp;ldquo;developers&amp;rdquo; who can&amp;rsquo;t actually develop without the tool. Microsoft&amp;rsquo;s own guidance on Copilot acknowledges this, noting there&amp;rsquo;s growing evidence that over-reliance on AI leads to &amp;ldquo;skill atrophy&amp;rdquo; — developers forgetting how to solve problems independently. Some teams have even started reserving time for deliberate no-AI practice to keep fundamentals sharp.&lt;/p&gt;
&lt;p&gt;I&amp;rsquo;ve seen some of my own team members hesitate to lean further into AI tools, and I don&amp;rsquo;t think it&amp;rsquo;s resistance to change. I think it&amp;rsquo;s self-awareness — a gut sense that something important might be at stake. That instinct deserves respect, not dismissal.&lt;/p&gt;
&lt;p&gt;We can hold both things at once: &lt;em&gt;yes, adopt these tools, push hard, learn everything you can&lt;/em&gt; — &lt;strong&gt;and&lt;/strong&gt; &lt;em&gt;we recognize that your craft matters and we support maintaining it.&lt;/em&gt; The balance isn&amp;rsquo;t complicated to articulate, but it requires leaders to actually say it, mean it, and build in the space for it. What does a &amp;ldquo;keep your skills sharp&amp;rdquo; practice look like on your team? That&amp;rsquo;s a conversation worth having.&lt;/p&gt;
&lt;h2 id="3-the-cost-bubble-is-real-and-we-should-be-thinking-about-it"&gt;3. The Cost Bubble Is Real and We Should Be Thinking About It
&lt;/h2&gt;&lt;p&gt;This one requires a brief detour through the early days of on-demand apps, but I promise it&amp;rsquo;s worth it.&lt;/p&gt;
&lt;p&gt;Do you remember the early days of Uber and DoorDash? Ride prices so low they felt illegal. Free delivery. Subsidized everything. I remember getting meals delivered for less than it cost me to drive to pick them up. It was &lt;em&gt;great.&lt;/em&gt; It was also not real. Because what paid for all of it was venture capital, burning cash to buy market share, with the eventual plan of raising prices once we were all dependent.&lt;/p&gt;
&lt;p&gt;&lt;em&gt;The sub from Jersey Mike&amp;rsquo;s I ordered delivered to my office last month cost me $13 in fees. It was two miles away.&lt;/em&gt;&lt;/p&gt;
&lt;p&gt;I&amp;rsquo;m not saying AI is a scam. I&amp;rsquo;m saying the economics of the current moment are not what they appear to be, and that has real implications for how we build.&lt;/p&gt;
&lt;p&gt;The financial picture is startling: major AI providers are losing money at significant scale, with OpenAI reportedly spending $2.25 for every dollar it earns. Internal projections suggest OpenAI won&amp;rsquo;t turn a profit until 2030, while Anthropic expects losses to continue through much of the same period — with spending on AI training alone projected to surpass $30 billion by 2029. A recent analysis from Man Group put it starkly: the financial architecture supporting AI has been built for a demand curve that may not arrive, and whose economics may deteriorate if it does.&lt;/p&gt;
&lt;p&gt;None of this means the technology isn&amp;rsquo;t real or valuable. It absolutely is. But it does mean that the pricing we&amp;rsquo;re operating under today — for inference, for API calls, for the agent orchestration workflows we&amp;rsquo;re starting to build dependencies on — is almost certainly not the price we&amp;rsquo;ll be paying down the road.&lt;/p&gt;
&lt;p&gt;So here&amp;rsquo;s what I think we owe our teams: clear-eyed strategy. Not reckless dependency-building, but &lt;em&gt;thoughtful&lt;/em&gt; adoption. What workflows are we comfortable building deep dependencies into? Where do we want to preserve optionality? What happens to our velocity if a major model provider reprices or pivots?&lt;/p&gt;
&lt;p&gt;It&amp;rsquo;s our job as leaders to be looking around that corner, even when riding the wave feels really good right now.&lt;/p&gt;
&lt;hr&gt;
&lt;p&gt;I don&amp;rsquo;t have perfect answers to any of these. What I do have is a team I care about, a leadership conversation that stuck with me, and a strong suspicion that the most important things leaders need to say right now are the ones that feel slightly risky to say.&lt;/p&gt;
&lt;p&gt;Which, when you think about it, is a pretty good argument for saying them anyway.&lt;/p&gt;
&lt;hr&gt;
&lt;p&gt;&lt;em&gt;What&amp;rsquo;s your team navigating as you push into deeper AI adoption? I&amp;rsquo;d love to hear what&amp;rsquo;s working — and what&amp;rsquo;s keeping you up at night. Drop a comment below.&lt;/em&gt;&lt;/p&gt;</description></item></channel></rss>