<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0" xmlns:itunes="http://www.itunes.com/dtds/podcast-1.0.dtd" xmlns:googleplay="http://www.google.com/schemas/play-podcasts/1.0"><channel><title><![CDATA[A.I.N.S.T.E.I.N: Before The Number]]></title><description><![CDATA[Why every successfully governed system eventually gets a number, and why AI governance is next. A series exploring quantitative measurement as the future of AI trust, compliance, and operational excellence.]]></description><link>https://ainstein.sanjeevaniai.com/s/before-the-number</link><image><url>https://substackcdn.com/image/fetch/$s_!l2pF!,w_256,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F195ee19e-3683-4fe4-8927-dad386e22f02_1024x1024.png</url><title>A.I.N.S.T.E.I.N: Before The Number</title><link>https://ainstein.sanjeevaniai.com/s/before-the-number</link></image><generator>Substack</generator><lastBuildDate>Fri, 01 May 2026 03:24:13 GMT</lastBuildDate><atom:link href="https://ainstein.sanjeevaniai.com/feed" rel="self" type="application/rss+xml"/><copyright><![CDATA[A.I.N.S.T.E.I.N.]]></copyright><language><![CDATA[en]]></language><webMaster><![CDATA[suneeta@sanjeevaniai.com]]></webMaster><itunes:owner><itunes:email><![CDATA[suneeta@sanjeevaniai.com]]></itunes:email><itunes:name><![CDATA[A.I.N.S.T.E.I.N.]]></itunes:name></itunes:owner><itunes:author><![CDATA[A.I.N.S.T.E.I.N.]]></itunes:author><googleplay:owner><![CDATA[suneeta@sanjeevaniai.com]]></googleplay:owner><googleplay:email><![CDATA[suneeta@sanjeevaniai.com]]></googleplay:email><googleplay:author><![CDATA[A.I.N.S.T.E.I.N.]]></googleplay:author><itunes:block><![CDATA[Yes]]></itunes:block><item><title><![CDATA[Full Code, Low Code, No Code: The AI Trust Gap Nobody Is Talking About]]></title><description><![CDATA[The 
Easier It Is to Deploy AI, the Harder It Is to Know What It Will Do]]></description><link>https://ainstein.sanjeevaniai.com/p/full-code-low-code-no-code-the-ai</link><guid isPermaLink="false">https://ainstein.sanjeevaniai.com/p/full-code-low-code-no-code-the-ai</guid><dc:creator><![CDATA[A.I.N.S.T.E.I.N.]]></dc:creator><pubDate>Mon, 30 Mar 2026 14:03:42 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!ATB5!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fbab9e10c-a1e9-40a2-ad6d-1d1b6ff61dcb_2124x1022.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!ATB5!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fbab9e10c-a1e9-40a2-ad6d-1d1b6ff61dcb_2124x1022.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!ATB5!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fbab9e10c-a1e9-40a2-ad6d-1d1b6ff61dcb_2124x1022.png 424w, https://substackcdn.com/image/fetch/$s_!ATB5!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fbab9e10c-a1e9-40a2-ad6d-1d1b6ff61dcb_2124x1022.png 848w, https://substackcdn.com/image/fetch/$s_!ATB5!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fbab9e10c-a1e9-40a2-ad6d-1d1b6ff61dcb_2124x1022.png 1272w, https://substackcdn.com/image/fetch/$s_!ATB5!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fbab9e10c-a1e9-40a2-ad6d-1d1b6ff61dcb_2124x1022.png 1456w" 
sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!ATB5!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fbab9e10c-a1e9-40a2-ad6d-1d1b6ff61dcb_2124x1022.png" width="1456" height="701" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/bab9e10c-a1e9-40a2-ad6d-1d1b6ff61dcb_2124x1022.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:701,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:3131024,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:&quot;https://ainstein.sanjeevaniai.com/i/191818571?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fbab9e10c-a1e9-40a2-ad6d-1d1b6ff61dcb_2124x1022.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!ATB5!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fbab9e10c-a1e9-40a2-ad6d-1d1b6ff61dcb_2124x1022.png 424w, https://substackcdn.com/image/fetch/$s_!ATB5!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fbab9e10c-a1e9-40a2-ad6d-1d1b6ff61dcb_2124x1022.png 848w, https://substackcdn.com/image/fetch/$s_!ATB5!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fbab9e10c-a1e9-40a2-ad6d-1d1b6ff61dcb_2124x1022.png 1272w, 
https://substackcdn.com/image/fetch/$s_!ATB5!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fbab9e10c-a1e9-40a2-ad6d-1d1b6ff61dcb_2124x1022.png 1456w" sizes="100vw" fetchpriority="high"></picture></div></a></figure></div><p><em><strong>Image created by AI</strong></em></p><p>Last week I wrote about a New York bill that would restrict AI systems from providing professional advice in licensed fields. 
A few readers asked a sharp follow-up question: does the bill apply differently depending on how the AI system was built?</p><p>The answer might surprise you, and I wil&#8230;</p>
      <p>
          <a href="https://ainstein.sanjeevaniai.com/p/full-code-low-code-no-code-the-ai">
              Read more
          </a>
      </p>
   ]]></content:encoded></item><item><title><![CDATA[New York Wants to Silence Your AI Chatbot. Here Is What That Actually Means. ]]></title><description><![CDATA[When Regulators Start Scoring What AI Systems Say, Not What Companies Promise]]></description><link>https://ainstein.sanjeevaniai.com/p/new-york-wants-to-silence-your-ai</link><guid isPermaLink="false">https://ainstein.sanjeevaniai.com/p/new-york-wants-to-silence-your-ai</guid><dc:creator><![CDATA[A.I.N.S.T.E.I.N.]]></dc:creator><pubDate>Tue, 24 Mar 2026 14:03:19 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!FplD!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1c718fbb-2db8-4b31-bf3a-bb43d0fcb12e_1988x1150.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!FplD!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1c718fbb-2db8-4b31-bf3a-bb43d0fcb12e_1988x1150.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!FplD!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1c718fbb-2db8-4b31-bf3a-bb43d0fcb12e_1988x1150.png 424w, https://substackcdn.com/image/fetch/$s_!FplD!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1c718fbb-2db8-4b31-bf3a-bb43d0fcb12e_1988x1150.png 848w, https://substackcdn.com/image/fetch/$s_!FplD!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1c718fbb-2db8-4b31-bf3a-bb43d0fcb12e_1988x1150.png 1272w, 
https://substackcdn.com/image/fetch/$s_!FplD!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1c718fbb-2db8-4b31-bf3a-bb43d0fcb12e_1988x1150.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!FplD!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1c718fbb-2db8-4b31-bf3a-bb43d0fcb12e_1988x1150.png" width="1456" height="842" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/1c718fbb-2db8-4b31-bf3a-bb43d0fcb12e_1988x1150.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:842,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:3102752,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:&quot;https://ainstein.sanjeevaniai.com/i/191818105?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1c718fbb-2db8-4b31-bf3a-bb43d0fcb12e_1988x1150.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!FplD!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1c718fbb-2db8-4b31-bf3a-bb43d0fcb12e_1988x1150.png 424w, https://substackcdn.com/image/fetch/$s_!FplD!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1c718fbb-2db8-4b31-bf3a-bb43d0fcb12e_1988x1150.png 848w, 
https://substackcdn.com/image/fetch/$s_!FplD!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1c718fbb-2db8-4b31-bf3a-bb43d0fcb12e_1988x1150.png 1272w, https://substackcdn.com/image/fetch/$s_!FplD!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1c718fbb-2db8-4b31-bf3a-bb43d0fcb12e_1988x1150.png 1456w" sizes="100vw" fetchpriority="high"></picture></div></a></figure></div><p><em><strong>Image created by AI</strong></em></p><p>Yesterday I wrote about the shift from measuring organizations to measuring AI systems. Today, the New York State Legislature is proving why that shift is urgent.</p><p>A bill introduced by Senator Kristen Gonzalez would restrict AI systems from providing what lawmakers call &#8220;substantive responses&#8221; in fields that require professional licenses. Medicine, law, engineering, psychology, dentistry, nursing, and other regulated professions where incorrect guidance can cause serious harm.</p><p>Read that again carefully. The bill does not say &#8220;companies must have policies about what their AI says.&#8221; It says AI systems must not provide certain types of responses. The subject of the regulation is the machine, not the organization.</p><p>This is the shift happening in real time.</p><p><strong>What the bill actually does</strong></p><p>The proposal draws a line between general information and professional advice. An AI chatbot can share educational content about, say, symptoms of a condition or how a legal process generally works. What it cannot do is cross into substantive guidance that resembles what a licensed professional would provide. 
It cannot offer what looks like a medical diagnosis, a legal strategy, an engineering recommendation, or a psychological assessment.</p><p>The bill also includes a private right of action. That means individuals can sue companies if their AI systems provide restricted guidance. This is not a regulatory slap on the wrist. This is litigation exposure for every company deploying a customer-facing AI system in a licensed domain.</p><p><strong>Why this matters beyond New York</strong></p><p>If you are thinking &#8220;I do not operate in New York, this does not apply to me,&#8221; think again.</p><p>New York tends to set the template. When New York moved on financial regulation, the rest of the country followed. The same pattern is already forming with AI. Colorado&#8217;s AI Act takes effect in 2026. The EU AI Act becomes fully enforceable in August 2026. The NAIC Model Bulletin on AI in insurance has been adopted by 24 states. NYC Local Law 144 already requires bias audits for automated hiring tools.</p><p>The direction is clear: regulators are moving from governing organizations that use AI to governing what AI systems actually do. And they are doing it jurisdiction by jurisdiction, which means any company deploying AI across state lines will soon face a patchwork of requirements that all ask the same fundamental question: does your AI system stay within its authorized boundaries?</p><p><strong>The measurement problem this creates</strong></p><p>Here is where the data scientist in me gets interested.</p><p>&#8220;Substantive response&#8221; is a fuzzy concept. Where exactly does educational information end and professional advice begin? When does a health chatbot cross from sharing general wellness content into offering what could be interpreted as a diagnosis? When does a legal information tool cross from explaining a process into recommending a strategy?</p><p>These are not binary questions. They are spectrum questions. 
And spectrum questions require quantitative measurement, not policy checklists.</p><p>Think about what an organization would need to demonstrate under this bill. Not that they have a policy saying &#8220;our AI does not give medical advice.&#8221; They would need to demonstrate that their AI system actually stays within bounds, consistently, across thousands of interactions, including edge cases where users push the boundaries with creative phrasing.</p><p>That is a behavioral measurement problem. You cannot solve it by reading the organization&#8217;s policy documents. You solve it by observing what the AI system actually says when real people interact with it. You measure boundary adherence: how often does the system recognize when it is approaching a restricted domain, and how reliably does it pull back?</p><p>This is exactly the kind of observable, quantifiable AI system property that I described yesterday. The policy says the system will not give medical advice. The behavior shows whether it actually does or does not. The gap between those two is where the litigation risk lives.</p><p><strong>What this means for different types of AI deployments</strong></p><p>The bill applies regardless of how the AI system was built, but the risk profile varies significantly.</p><p>Organizations that build their own AI from the ground up have complete control over system prompts, guardrails, and response boundaries. They can engineer precise limits. But they also own 100% of the liability.</p><p>Organizations using low-code platforms like Copilot Studio or LangFlow face a shared responsibility problem. The platform provides underlying model behavior and some guardrails, but the builder configures the use case and the domain scope. When the system drifts into professional advice territory, who is liable? The platform or the builder?</p><p>And then there are the no-code deployments, the custom GPTs, the drag-and-drop chatbot builders. 
This is the highest risk category, and it is not close. The people building on these platforms are often the exact professionals the bill is trying to protect: small healthcare clinics, law offices, dental practices. They deploy an AI chatbot on their website, feed it their documents, and assume the platform handles compliance. It usually does not.</p><p>The gap between how easy it is to deploy AI and how hard it is to govern what it says is widest in the no-code tier. And that gap is exactly where this bill&#8217;s private right of action will land hardest.</p><p><strong>The deeper signal</strong></p><p>Step back from the specifics of this one bill and look at what it represents.</p><p>For decades, professional licensing has been a human-to-human regulatory framework. A doctor is licensed. A lawyer passes the bar. An engineer gets certified. The license attaches to the person, and the person is accountable for what they say.</p><p>AI breaks that model. The chatbot giving health guidance is not a licensed professional. It is not a person. It cannot be sued, sanctioned, or stripped of credentials. So the regulatory framework has to evolve. It has to attach accountability to the system&#8217;s behavior and to the entity that deployed it.</p><p>This bill is one of the first attempts to do that explicitly. It will not be the last. And every attempt will come back to the same core question: can you prove, with data, that your AI system behaves within its authorized boundaries?</p><p>That is not a policy question. That is a measurement question. And it demands the kind of quantitative, reproducible, behavior-based measurement that this newsletter exists to explore.</p><p>More next Tuesday.</p><div><hr></div><p><em>This is part of the &#8220;Before The Number&#8221; series at A.I.N.S.T.E.I.N., exploring what it takes to build quantitative AI governance measurement from first principles. 
If this resonated, share it with someone deploying AI in healthcare, legal, or any licensed profession.</em></p>]]></content:encoded></item><item><title><![CDATA[We Were Measuring the Wrong Thing ]]></title><description><![CDATA[Why AI Governance Has Been Scoring the Organization When It Should Be Scoring the Machine]]></description><link>https://ainstein.sanjeevaniai.com/p/we-were-measuring-the-wrong-thing</link><guid isPermaLink="false">https://ainstein.sanjeevaniai.com/p/we-were-measuring-the-wrong-thing</guid><dc:creator><![CDATA[A.I.N.S.T.E.I.N.]]></dc:creator><pubDate>Mon, 23 Mar 2026 14:02:55 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!FKlb!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb6586702-647e-4147-bb9d-1f99b51607c7_1204x986.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" 
href="https://substackcdn.com/image/fetch/$s_!FKlb!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb6586702-647e-4147-bb9d-1f99b51607c7_1204x986.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!FKlb!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb6586702-647e-4147-bb9d-1f99b51607c7_1204x986.png 424w, https://substackcdn.com/image/fetch/$s_!FKlb!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb6586702-647e-4147-bb9d-1f99b51607c7_1204x986.png 848w, https://substackcdn.com/image/fetch/$s_!FKlb!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb6586702-647e-4147-bb9d-1f99b51607c7_1204x986.png 1272w, https://substackcdn.com/image/fetch/$s_!FKlb!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb6586702-647e-4147-bb9d-1f99b51607c7_1204x986.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!FKlb!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb6586702-647e-4147-bb9d-1f99b51607c7_1204x986.png" width="1204" height="986" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/b6586702-647e-4147-bb9d-1f99b51607c7_1204x986.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:986,&quot;width&quot;:1204,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:106414,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:&quot;https://ainstein.sanjeevaniai.com/i/191816049?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb6586702-647e-4147-bb9d-1f99b51607c7_1204x986.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!FKlb!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb6586702-647e-4147-bb9d-1f99b51607c7_1204x986.png 424w, https://substackcdn.com/image/fetch/$s_!FKlb!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb6586702-647e-4147-bb9d-1f99b51607c7_1204x986.png 848w, https://substackcdn.com/image/fetch/$s_!FKlb!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb6586702-647e-4147-bb9d-1f99b51607c7_1204x986.png 1272w, https://substackcdn.com/image/fetch/$s_!FKlb!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb6586702-647e-4147-bb9d-1f99b51607c7_1204x986.png 1456w" sizes="100vw" fetchpriority="high"></picture></div></a></figure></div><p><em><strong>Image created by AI</strong></em></p><p>I owe you an explanation for the silence.</p><p>Two weeks ago, I published &#8220;How to Measure AI Governance&#8221; and laid out the five pillars, the metrics, the frameworks, the KPIs. I meant every word of it. And then I went quiet, because something broke in my own thinking that I could not write aro&#8230;</p>
      <p>
          <a href="https://ainstein.sanjeevaniai.com/p/we-were-measuring-the-wrong-thing">
              Read more
          </a>
      </p>
   ]]></content:encoded></item><item><title><![CDATA[How to Measure AI Governance]]></title><description><![CDATA[The Five Pillars, the Metrics That Matter, and Why Checklists Are Not Enough]]></description><link>https://ainstein.sanjeevaniai.com/p/how-to-measure-ai-governance</link><guid isPermaLink="false">https://ainstein.sanjeevaniai.com/p/how-to-measure-ai-governance</guid><dc:creator><![CDATA[A.I.N.S.T.E.I.N.]]></dc:creator><pubDate>Mon, 09 Mar 2026 14:02:07 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!R6zd!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa04eff63-161f-4894-aaef-037d4b02a2e5_1802x1100.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!R6zd!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa04eff63-161f-4894-aaef-037d4b02a2e5_1802x1100.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!R6zd!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa04eff63-161f-4894-aaef-037d4b02a2e5_1802x1100.png 424w, https://substackcdn.com/image/fetch/$s_!R6zd!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa04eff63-161f-4894-aaef-037d4b02a2e5_1802x1100.png 848w, https://substackcdn.com/image/fetch/$s_!R6zd!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa04eff63-161f-4894-aaef-037d4b02a2e5_1802x1100.png 1272w, 
https://substackcdn.com/image/fetch/$s_!R6zd!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa04eff63-161f-4894-aaef-037d4b02a2e5_1802x1100.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!R6zd!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa04eff63-161f-4894-aaef-037d4b02a2e5_1802x1100.png" width="1456" height="889" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/a04eff63-161f-4894-aaef-037d4b02a2e5_1802x1100.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:889,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:2070238,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:&quot;https://ainstein.sanjeevaniai.com/i/189697788?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa04eff63-161f-4894-aaef-037d4b02a2e5_1802x1100.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!R6zd!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa04eff63-161f-4894-aaef-037d4b02a2e5_1802x1100.png 424w, https://substackcdn.com/image/fetch/$s_!R6zd!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa04eff63-161f-4894-aaef-037d4b02a2e5_1802x1100.png 848w, 
https://substackcdn.com/image/fetch/$s_!R6zd!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa04eff63-161f-4894-aaef-037d4b02a2e5_1802x1100.png 1272w, https://substackcdn.com/image/fetch/$s_!R6zd!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa04eff63-161f-4894-aaef-037d4b02a2e5_1802x1100.png 1456w" sizes="100vw" fetchpriority="high"></picture></div></a></figure></div><h6><em>Image by 
AI</em></h6><p></p><blockquote><p>If you cannot measure it, you cannot govern it. </p></blockquote><p>That principle holds true across every regulated industry, from finance to healthcare to cybersecurity, and it holds true for AI.</p><p>Yet when most organizations talk about AI governance today, they are talking about policies, principles, and frameworks. They are talking about what they believe, not what they can prove. And there is a meaningful difference between having an AI ethics policy and being able to demonstrate, with data, that your AI systems are actually governed.</p><p>This essay is a practical guide to bridging that gap. It walks through the core pillars of AI governance measurement, the specific metrics that matter, the frameworks available to structure the work, and the challenges that make this harder than it sounds. If you are a CISO, a Chief AI Officer, a compliance leader, or a founder building in this space, this is the foundation you need.</p><h2>Why Measurement Matters</h2><p>Without measurement, AI governance is a set of intentions. It lives in documents that get written once and reviewed quarterly at best. It gives leadership a sense of comfort without giving them a basis for action.</p><p>Measurement changes that in four concrete ways.</p><ol><li><p>It demonstrates due diligence. When regulators, boards, or the public ask how your AI is managed, measurement gives you evidence rather than assurances. </p></li><li><p>It allows you to identify and mitigate risks before they cause harm, because metrics like model drift detection time and fairness deviation surface problems that narrative assessments miss entirely. </p></li><li><p>It prepares you for the regulatory compliance landscape that is already here, with the EU AI Act requiring specific documentation and measurement for high-risk AI systems. 
</p></li><li><p>And it builds trust with every stakeholder who needs to know that your AI decisions are fair, transparent, and accountable.</p></li></ol><h2>The Five Pillars of AI Governance Measurement</h2><p>Effective AI governance measurement is not about tracking model accuracy or inference speed. Those are performance metrics. Governance measurement focuses on accountability, fairness, transparency, compliance, and safety. These are the five pillars, and each one requires its own set of metrics.</p><h3>Accountability and Ownership</h3><p>This pillar measures who is responsible for your AI systems and their outcomes. It sounds basic, but in my experience, a surprising number of organizations deploy AI systems where no single person owns the governance risk. The model was built by one team, deployed by another, and monitored by no one in particular.</p><p>The qualitative goal is straightforward: every high-risk AI system should have a named business owner who is accountable for its impact. The quantitative metric that tracks this is the percentage of deployed AI systems with a defined, documented business owner. If that number is below 100% for your high-risk systems, you have a governance gap that no policy document can close.</p><h3>Transparency and Explainability</h3><p>This pillar measures how well your AI system&#8217;s decisions can be understood by the humans affected by them. A lending model that denies an application needs to be able to explain why. A hiring algorithm that filters out candidates needs to produce a reason that a human can evaluate.</p><p>The quantitative metric here is the percentage of AI-driven decisions that include a human-interpretable explanation. In practice, this is one of the hardest metrics to improve because many complex models, particularly large language models, are inherently difficult to explain. 
But the measurement itself forces the conversation about where explainability gaps exist and how material those gaps are.</p><h3>Fairness and Bias Mitigation</h3><p>This pillar measures the extent to which your AI systems treat different demographic groups equitably. It is not enough to say &#8220;we care about fairness.&#8221; You need to measure the actual disparity in outcomes across protected groups and track that disparity over time.</p><p>The core metric is the measurable difference in approval rates, error rates, or outcomes between demographic groups. If your lending model approves 78% of applications from one group and 61% from another, that disparity is your fairness metric, and it needs to be monitored continuously, not just checked once before deployment.</p><h3>Risk and Compliance</h3><p>This pillar measures adherence to both internal policies and external regulations. With the EU AI Act, NIST AI RMF, and ISO 42001 all converging on requirements for risk classification and documentation, this pillar is becoming the most operationally urgent.</p><p>The key metrics include the percentage of high-risk AI systems that have completed an Algorithmic Impact Assessment, the percentage of inventoried systems that have undergone formalized risk classification, and the policy adherence rate across all AI projects. These numbers tell you whether your governance framework is actually being followed or whether it exists only on paper.</p><h3>Safety and Security</h3><p>This pillar measures your AI system&#8217;s resilience against attacks, errors, and unintended harm. It includes incident response readiness and the speed at which AI-specific failures are detected and resolved.</p><p>The metrics that matter here are the average time to detect and time to resolve AI-related incidents, including model drift, toxic output, adversarial attacks, and data pipeline failures. 
If your organization cannot tell you how long it takes to detect when a model has drifted from its intended behavior, your safety posture has a blind spot.</p><h2>Key Performance Indicators for AI Governance</h2><p>Beyond the five pillars, there are specific KPIs that give leadership a clear picture of governance health across the organization.</p><p>Program health metrics include AI inventory coverage (the percentage of all AI systems currently cataloged), risk classification completion (the percentage of inventoried systems that have been formally classified by risk level), and policy adherence rate (the percentage of AI projects fully compliant with established guidelines).</p><p>Decision and accountability metrics include decision latency for risk issues (how long it takes to make a material decision on an escalated AI risk), human override rate (how frequently automated decisions are reversed by human reviewers), and governance debt (the number of deferred governance controls that were postponed to speed up deployment).</p><p>Operational integrity metrics include model drift detection time, data lineage visibility (the percentage of models with full source-to-sink tracking), and audit readiness score (the percentage of models with current documentation and version control).</p><p>Ethical impact metrics include explanation coverage and fairness deviation, both of which I discussed in the pillars section above.</p><p>The important thing about these KPIs is that they are specific, measurable, and tied to real governance risk. They are not opinions. They are not traffic lights. They are numbers that a board can track quarter over quarter and that an auditor can verify independently.</p><h2>The Frameworks That Structure This Work</h2><p>Organizations do not need to build their measurement approach from scratch. 
Several established frameworks provide the structure.</p><p>The NIST AI Risk Management Framework provides guidelines for managing risks to improve the trustworthiness of AI systems. NIST has also recently released a preliminary draft Cyber AI Profile (NISTIR 8596) that maps AI considerations directly onto the Cybersecurity Framework 2.0, embedding AI governance into operational security infrastructure rather than treating it as a separate discipline.</p><p>ISO/IEC 42001 is an international standard specifying requirements for establishing, implementing, maintaining, and continually improving an AI management system. As an ISO 42001 Lead Auditor, I work with this framework regularly, and its strength is that it provides a certifiable standard that organizations can be audited against.</p><p>The EU AI Act is the most comprehensive regulatory framework currently in effect, requiring specific measurement and documentation for high-risk AI systems. It is not optional for organizations operating in or selling into the European market, and its requirements are driving measurement adoption globally.</p><p>These frameworks tell you what to measure and why. The challenge is translating their requirements into the specific quantitative metrics I described above, and doing so continuously rather than at a single point in time.</p><h2>The Challenges That Make This Hard</h2><p>If measuring AI governance were easy, every organization would already be doing it. Several factors make it genuinely difficult.</p><p>Concepts like fairness and transparency are contextually dependent. What counts as fair in a lending model may differ from what counts as fair in a hiring algorithm. There is no single universal formula, and measurement requires thoughtful interpretation alongside the numbers.</p><p>Many complex AI models, particularly large language models, are inherently difficult to explain. 
This makes transparency measurement challenging not because the metric is wrong but because the underlying system resists the measurement.</p><p>Standardization is still evolving. While frameworks exist, universally accepted methods for calculating specific metrics like bias are not yet settled. Different tools and approaches can produce different results for the same system.</p><p>Organizations have historically incentivized performance over responsibility. Accuracy and speed get rewarded. Governance measurement introduces a different set of priorities, and that cultural shift is often harder than the technical implementation.</p><p>And finally, data quality and lineage remain fundamental obstacles. You cannot measure governance properly if you do not understand the data your AI systems are trained on, and many organizations have complex or undocumented data flows that make this difficult.</p><h2>Where This Is Heading</h2><p>Every one of these challenges is real, and none of them are reasons to avoid measurement. They are reasons to invest in building the measurement infrastructure now, before regulators require it and before the gap between what your organization claims about its AI governance and what it can actually prove becomes a liability.</p><p>The organizations that solve the measurement problem first will not just be compliant. They will set the standard that others measure against. They will have the data to report to boards, the benchmarks to negotiate with partners, and the scores to prove what checklists never could.</p><p>AI governance measurement is not a nice-to-have. 
It is the infrastructure that makes governance real.</p>]]></content:encoded></item><item><title><![CDATA[Understanding AI Governance Measurement]]></title><description><![CDATA[Why Quantitative Measurement Is No Longer Optional!]]></description><link>https://ainstein.sanjeevaniai.com/p/understanding-ai-governance-measurement</link><guid isPermaLink="false">https://ainstein.sanjeevaniai.com/p/understanding-ai-governance-measurement</guid><dc:creator><![CDATA[A.I.N.S.T.E.I.N.]]></dc:creator><pubDate>Mon, 02 Mar 2026 20:11:56 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!Vu6o!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fce04ce4f-853b-45e7-ba35-ccf48d2687e2_1840x1096.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!Vu6o!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fce04ce4f-853b-45e7-ba35-ccf48d2687e2_1840x1096.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!Vu6o!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fce04ce4f-853b-45e7-ba35-ccf48d2687e2_1840x1096.png 424w, https://substackcdn.com/image/fetch/$s_!Vu6o!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fce04ce4f-853b-45e7-ba35-ccf48d2687e2_1840x1096.png 848w, https://substackcdn.com/image/fetch/$s_!Vu6o!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fce04ce4f-853b-45e7-ba35-ccf48d2687e2_1840x1096.png 1272w, 
https://substackcdn.com/image/fetch/$s_!Vu6o!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fce04ce4f-853b-45e7-ba35-ccf48d2687e2_1840x1096.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!Vu6o!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fce04ce4f-853b-45e7-ba35-ccf48d2687e2_1840x1096.png" width="1456" height="867" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/ce04ce4f-853b-45e7-ba35-ccf48d2687e2_1840x1096.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:867,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:2960668,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:&quot;https://ainstein.sanjeevaniai.com/i/189685261?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fce04ce4f-853b-45e7-ba35-ccf48d2687e2_1840x1096.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!Vu6o!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fce04ce4f-853b-45e7-ba35-ccf48d2687e2_1840x1096.png 424w, https://substackcdn.com/image/fetch/$s_!Vu6o!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fce04ce4f-853b-45e7-ba35-ccf48d2687e2_1840x1096.png 848w, 
https://substackcdn.com/image/fetch/$s_!Vu6o!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fce04ce4f-853b-45e7-ba35-ccf48d2687e2_1840x1096.png 1272w, https://substackcdn.com/image/fetch/$s_!Vu6o!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fce04ce4f-853b-45e7-ba35-ccf48d2687e2_1840x1096.png 1456w" sizes="100vw" fetchpriority="high"></picture></div></a></figure></div><h6><em>Image created by 
AI</em></h6><p></p><p>Every critical system in human history eventually got measured. Before FICO, loan officers decided your creditworthiness with a handshake and a gut feeling. Same income, same history, approved at one branch and denied at another. Before s&#8230;</p>
      <p>
          <a href="https://ainstein.sanjeevaniai.com/p/understanding-ai-governance-measurement">
              Read more
          </a>
      </p>
   ]]></content:encoded></item></channel></rss>