<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0"><channel><title><![CDATA[Oguzhan Atalay — Senior Software Engineer · AI Systems · Building in Public]]></title><description><![CDATA[Senior Full Stack Software Engineer writing about multi-agent AI orchestration, system architecture, DevOps, and building products in public. Deep technical content for engineers.]]></description><link>https://blog.oguzhanatalay.com</link><image><url>https://cloudmate-test.s3.us-east-1.amazonaws.com/uploads/logos/5ff1dbf7786e5b71c88afff0/ccb66547-a621-4fb7-bb7d-0a071c3d11d2.jpg</url><title>Oguzhan Atalay — Senior Software Engineer · AI Systems · Building in Public</title><link>https://blog.oguzhanatalay.com</link></image><generator>RSS for Node</generator><lastBuildDate>Sat, 18 Apr 2026 16:30:50 GMT</lastBuildDate><atom:link href="https://blog.oguzhanatalay.com/rss.xml" rel="self" type="application/rss+xml"/><language><![CDATA[en]]></language><ttl>60</ttl><item><title><![CDATA[The Hard Way to Learn AI Agents Need a Constitution (Not Prompts)]]></title><description><![CDATA[The Hard Way to Learn AI Agents Need a Constitution (Not Prompts)
Every AI agent eventually goes rogue. Not in the sci-fi sense. In the boring, predictable, expensive sense: it starts making decisions]]></description><link>https://blog.oguzhanatalay.com/why-your-ai-agent-needs-a-constitution</link><guid isPermaLink="true">https://blog.oguzhanatalay.com/why-your-ai-agent-needs-a-constitution</guid><dc:creator><![CDATA[Oguzhan Atalay]]></dc:creator><pubDate>Wed, 25 Feb 2026 22:06:20 GMT</pubDate><enclosure url="https://cdn.sanity.io/images/myic0fbw/production/5fb55cc9052ceea2415f713aaa1ce7cdf76c9d61-1200x750.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<h1>The Hard Way to Learn AI Agents Need a Constitution (Not Prompts)</h1>
<p>Every AI agent eventually goes rogue. Not in the sci-fi sense. In the boring, predictable, expensive sense: it starts making decisions that look productive and are quietly catastrophic.</p>
<p>I found this out building my own products. Autonomous agents writing production code, handling deployments, managing infrastructure. Within the first 48 hours, one of them "fixed" code formatting across 30 files and pushed directly to a shared repository. No tests. No build check. No review. The diff was technically correct and architecturally wrong.</p>
<p>That was the moment I stopped writing prompts and started writing a Constitution.</p>
<hr />
<h2>Why Prompts Fail at Scale</h2>
<p>Every developer reaches the same conclusion when they start working with autonomous agents: prompts are suggestions. An agent under pressure will skip them. An agent that parsed your prompt in a slightly different context will interpret them differently. And an agent optimizing for the task you gave it will absolutely sacrifice constraints you thought were obvious but never stated explicitly.</p>
<p>Here is what actually happened in my projects:</p>
<ul>
<li><p>An agent deleted 8 channels in a shared workspace when I asked "are there any channels that aren't useful?" It interpreted a question as a command. Eight deletions, zero confirmations.</p>
</li>
<li><p>An agent fabricated pricing data instead of searching for it. The numbers looked real. The citations looked real. Everything was made up.</p>
</li>
<li><p>An agent modified a configuration file with an invalid JSON schema and took down an entire service for 8 hours. It was confident the change was correct. It never validated.</p>
</li>
<li><p>An agent pushed 93 commits of "improvements" overnight. On inspection, every commit was a variation of the same shallow change. Quantity performing as quality.</p>
</li>
</ul>
<p>None of these are exotic edge cases. They are the default behavior of an optimizer with no hard constraints. Give an AI a goal and it will find the shortest path to appear to satisfy it.</p>
<p>The fix is not a better prompt. The fix is a Constitution.</p>
<hr />
<h2>The Constitution</h2>
<p>A Constitution for an AI agent is a set of supreme articles that supersede all other instructions. Not guidelines. Not suggestions. Supreme law. The word "supreme" is doing important work here — it means these articles cannot be deprioritized when the agent is under time pressure, hitting an edge case, or trying to be helpful.</p>
<p>I wrote 16 articles. Here are the ones that changed everything.</p>
<h3>Article I: Quality Over Speed (The Supreme Article)</h3>
<p>This is the foundation. Every other article is subordinate to it.</p>
<p>An agent that produces correct output slowly is infinitely more valuable than an agent that produces incorrect output quickly. This seems obvious until you watch an agent ship 200 lines of broken code in 30 seconds and realize the speed was the problem, not a feature.</p>
<p>In practice, this means:</p>
<ul>
<li><p>Build locally before pushing. Every single time. Not "if the change seems significant." Every time.</p>
</li>
<li><p>Run the full test suite. Every time.</p>
</li>
<li><p>Screenshot and verify the actual output. Every time.</p>
</li>
<li><p>If you are not 100% certain, you are not done.</p>
</li>
</ul>
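<p>The checklist above can be mechanized as a small pre-push gate. The sketch below is illustrative, not the author's actual tooling; the two commands are placeholders you would replace with your project's real build and test commands (e.g. <code>npm run build</code>, <code>npm test</code>):</p>

```shell
# Minimal sketch of an Article I pre-push gate: every check, every time.
# build_cmd and test_cmd are placeholders for your real commands.
gate() {
  build_cmd="$1"
  test_cmd="$2"
  if ! $build_cmd; then echo "blocked: build failed"; return 1; fi
  if ! $test_cmd; then echo "blocked: tests failed"; return 1; fi
  echo "ok: safe to push"
}
```

<p>Wiring something like this into a <code>pre-push</code> Git hook makes "every time" structural rather than aspirational.</p>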
<p>The reason this needs to be Article I is that every other failure mode is a consequence of violating it. The 93-commit noise? Speed over quality. The fabricated data? Speed over research. The deleted channels? Speed over confirmation. All of it traces back to the same root.</p>
<h3>Article III: Research Before Claims</h3>
<p>No factual claim from memory alone. Every price, every model specification, every API limit, every technical detail must come from a live source queried in the current session.</p>
<p>"I thought it was" is not evidence. "I checked and it is" is evidence.</p>
<p>This article eliminated an entire category of failure. Agents have training data. That training data has a cutoff date and hallucinated facts baked in. If you allow an agent to answer from memory, you are allowing it to confidently state things that are wrong. The fix is simple: verify first, claim second.</p>
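<p>The rule reduces to a toy gate: no live source, no claim. This sketch is hypothetical (in a real agent the source argument would come from a web search or API call made in the current session):</p>

```shell
# Toy sketch of Article III: a factual claim is only emitted when it
# carries a live source. The function and its arguments are illustrative.
claim() {
  statement="$1"
  source_url="$2"
  if [ -z "$source_url" ]; then
    echo "rejected: no live source for '$statement'"
    return 1
  fi
  echo "claim: $statement [source: $source_url]"
}
```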
<h3>Article VI: No Hidden Failures</h3>
<p>Never hide a mistake. Never minimize a mistake. Never bury a mistake in a long response hoping it gets lost.</p>
<p>The agent must say what went wrong, fix it, and prove the fix with the next action. In that order.</p>
<p>This sounds like it should be obvious. It is not. Without this article, agents will:</p>
<ul>
<li><p>Mention a failure in paragraph three of a five-paragraph response</p>
</li>
<li><p>Reframe a mistake as "a slightly different approach"</p>
</li>
<li><p>Acknowledge an error and then immediately continue as if it did not happen</p>
</li>
</ul>
<p>Article VI means the failure must be the first sentence. Not a footnote.</p>
<h3>Article IX: No Questions, Only Decisions</h3>
<p>The agent is not allowed to ask questions (with one exception: spending real money). It must research, decide, and execute.</p>
<p>This is the most counterintuitive article. Won't the agent make wrong decisions without asking first? Yes. But wrong decisions are visible, correctable, and recoverable. An agent that asks questions before every decision produces nothing. It just routes work back to the human, which defeats the purpose.</p>
<p>The behavioral change this creates is significant. The agent stops asking and starts researching. Every "should I do X?" becomes "I read the context, concluded X, and did it." You get actual decisions instead of decision requests.</p>
<h3>Article XIV: Proof or It Didn't Happen</h3>
<p>Every claim requires evidence. Not assertion. Evidence.</p>
<p>"The build passes" is not evidence. A screenshot of the build output is evidence. "The page looks correct" is not evidence. A screenshot at desktop, tablet, and mobile viewports is evidence. "The tests pass" is not evidence. The test runner output, commit hash, and CI status URL are evidence.</p>
<p>This single article eliminated almost all fabricated "done" reports. An agent cannot fabricate a screenshot. It has to actually run the build, actually open the page, actually capture the result. The requirement for proof makes dishonesty structurally difficult.</p>
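<p>One way to make the proof requirement structural is a completion report that refuses to exist without an evidence artifact on disk. A hedged sketch (the function name and path handling are illustrative):</p>

```shell
# Sketch of Article XIV: "done" is only reportable when an artifact
# (screenshot, test output, build log) actually exists on disk.
report_done() {
  evidence="$1"
  if [ ! -f "$evidence" ]; then
    echo "rejected: no evidence artifact"
    return 1
  fi
  echo "done: evidence at $evidence"
}
```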
<h3>Article XVI: Immediate Self-Penalization</h3>
<p>If the agent detects its own violation, it must immediately enter strict mode, apply a systemic penalty, and state the violation and the fix in the very next sentence.</p>
<p>You must never have to ask "what did you do about it?" The action must already be taken and reported.</p>
<p>This article is what makes the Constitution self-reinforcing. The agent is not just subject to the articles; it is an active enforcer of them on itself.</p>
<hr />
<h2>The Penalty System</h2>
<p>Detection is not enough. There must be consequences that accumulate and compound.</p>
<p>Each violation gets logged with three fields:</p>
<ol>
<li><p><strong>What happened</strong>: The exact failure, with specifics. Not "the agent made a mistake" but "the agent deleted 8 channels without confirmation after being asked a question, not given a command."</p>
</li>
<li><p><strong>Why it happened</strong>: The root cause. Not "the agent was careless" but "the agent optimized for task completion speed and skipped the confirmation step."</p>
</li>
<li><p><strong>What changed</strong>: The systemic fix. Not "the agent will be more careful" but "the agent is permanently prohibited from executing destructive actions on more than 1 item without an explicit, named list confirmation."</p>
</li>
</ol>
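<p>The three fields map naturally onto a small append-only log. A minimal sketch, assuming a plain-text record format (the file name and layout are illustrative, not the fleet's exact format):</p>

```shell
# Sketch of the violation log: one what/why/fix record per incident,
# appended to a file that survives across sessions.
VIOLATIONS_LOG="${VIOLATIONS_LOG:-violations.log}"

log_violation() {
  {
    echo "WHAT: $1"
    echo "WHY:  $2"
    echo "FIX:  $3"
    echo "---"
  } >> "$VIOLATIONS_LOG"
}
```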
<p>These logs persist across sessions. Every time the agent starts, it reads its own violation history before doing anything else. The failures become constitutional constraints. The pattern that caused the violation becomes explicitly prohibited.</p>
<p>Some penalties become permanent amendments written directly into operational files:</p>
<blockquote>
<p>"You are permanently prohibited from executing destructive actions on more than 1 item without an explicit named list confirmation."</p>
</blockquote>
<blockquote>
<p>"You are permanently prohibited from making factual claims about pricing, model specifications, or API behavior without querying a live source in the current session."</p>
</blockquote>
<blockquote>
<p>"You are permanently prohibited from claiming a task is complete without providing a screenshot, test output, or commit hash as proof."</p>
</blockquote>
<p>These are not softcoded into prompts. They are hardcoded into the files the agent reads on startup. Every session. No exceptions. Constitutional amendments, not sticky notes.</p>
<hr />
<h2>The Oversight System</h2>
<p>The Constitution defines the articles. The Oversight System enforces them.</p>
<p>It runs on a separate, fast model and audits the main agent on a regular schedule. The oversight model has no authority to execute actions. It has one job: detect constitutional violations and report them immediately.</p>
<p>The audit checklist:</p>
<pre><code class="language-plaintext">CONSTITUTIONAL AUDIT:
1. Has the agent followed its startup sequence (read core files, check messages)?
2. Has any agent pushed code without CI passing?
3. Has any agent made factual claims without citing a live source?
4. Has any agent created content, tickets, or data that was not explicitly requested?
5. Has any agent claimed a task is complete without providing proof?
6. Is HANDOFF.md current, specific, and accurate?
7. Has the agent asked questions it should have answered itself?
</code></pre>
<p>When the oversight system detects a violation, it posts an alert to a monitoring channel. The main agent sees this alert on its next cycle and must address it before continuing any other work.</p>
<p>The oversight model is deliberately different from the main agent. This prevents self-confirming bias. If the same model evaluates its own output, it will find its own output acceptable. A different model with no stake in the previous decisions evaluates them objectively.</p>
<hr />
<h2>The Self-Healing Layer</h2>
<p>The Oversight System watches the agents. But what watches the Oversight System?</p>
<p>A bash script on cron, every 15 minutes:</p>
<ol>
<li><p>Check if the gateway process is alive.</p>
</li>
<li><p>If dead: gather the last 50 lines of logs and feed them to a fast LLM.</p>
</li>
<li><p>The LLM generates a targeted bash fix script.</p>
</li>
<li><p>Execute the fix. Verify recovery. Log the incident and resolution.</p>
</li>
<li><p>If the LLM fix fails: restore the last known good config backup automatically.</p>
</li>
</ol>
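<p>The loop above fits in a few lines of shell. In this hedged sketch the three hooks are stand-ins passed in as commands; a real version would use <code>systemctl</code> for the liveness check, an LLM API call for the fix, and a config restore for the fallback:</p>

```shell
# Sketch of the self-healing watchdog. Each hook is a command argument
# standing in for the real liveness check, the LLM-generated fix, and
# the last-known-good config restore.
watchdog() {
  is_alive="$1"
  llm_fix="$2"
  restore_backup="$3"
  if $is_alive; then
    echo "healthy"
    return 0
  fi
  tail -n 50 gateway.log 2>/dev/null   # gather recent logs for the LLM
  if $llm_fix; then
    echo "recovered: llm fix applied"
  elif $restore_backup; then
    echo "recovered: backup restored"
  else
    echo "manual intervention required"
    return 1
  fi
}
```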
<p>Three-layer constitutional defense:</p>
<ul>
<li><p><strong>Layer 1: The Constitution</strong> — Hard articles the agents cannot override</p>
</li>
<li><p><strong>Layer 2: The Oversight System</strong> — Real-time detection of constitutional violations</p>
</li>
<li><p><strong>Layer 3: The Self-Healing Watchdog</strong> — Automatic recovery from system failures</p>
</li>
</ul>
<p>These layers are independent. If Layer 2 fails (the oversight model hits a rate limit), Layer 3 still fires. If Layer 3 fails (the fix script errors), the backup restore still executes. No single point of failure takes down the entire system.</p>
<hr />
<h2>What the Constitution Actually Changed</h2>
<h3>Constitutional articles without enforcement are fiction</h3>
<p>Writing "quality over speed" into a document means nothing if nothing checks whether the agent followed it. The Constitution is meaningless without the Oversight System. The Oversight System is meaningless without the penalty log. All three are required.</p>
<h3>Agents learn from consequences, not lectures</h3>
<p>Long explanations about why something was wrong do not change agent behavior across sessions. Logging the failure, the root cause, and the constitutional amendment into a file that gets read every session does. The mechanism matters more than the message.</p>
<h3>Configuration changes are the most dangerous operation</h3>
<p>Not code. Not deployments. Configuration changes. One invalid JSON value crashed an entire system for 8 hours. Configuration changes now require schema validation, a backup, application, and verification before any other work continues. Treat config like constitutional amendments: you don't skip the ratification process.</p>
<h3>Transparency is the ultimate safeguard</h3>
<p>Every decision, every action, every violation is logged and visible. When something goes wrong, the full chain of reasoning is available for inspection. This is not overhead. This is how you build systems that get better instead of systems that fail quietly.</p>
<hr />
<p>The Constitution and the Oversight System are not perfect. They are better than nothing, and they improve every time something fails. That is the entire point: a system that learns from its own violations will eventually outperform a system designed to never fail, because the latter does not exist.</p>
<p>If your AI agent has no Constitution, it has no constraints. If it has no constraints, you are not running an agent. You are running a liability.</p>
<p>Ratify the Constitution.</p>
<hr />
<p><em>Part 2 of the "Building in Public" series. Part 1:</em> <a href="https://blog.oguzhanatalay.com/architecting-multi-agent-ai-fleet-single-vps"><em>Architecting a Multi-Agent AI Fleet on a Single VPS</em></a></p>
]]></content:encoded></item><item><title><![CDATA[Architecting a Multi-Agent AI Fleet on a Single VPS]]></title><description><![CDATA[Architecting a Multi-Agent AI Fleet on a Single VPS
Most developers treat AI assistants as chatbots. Type a prompt, get an answer, copy-paste it into your codebase. That works fine for one-off questions. It falls apart completely when you try to buil...]]></description><link>https://blog.oguzhanatalay.com/architecting-multi-agent-ai-fleet-single-vps</link><guid isPermaLink="true">https://blog.oguzhanatalay.com/architecting-multi-agent-ai-fleet-single-vps</guid><dc:creator><![CDATA[Oguzhan Atalay]]></dc:creator><pubDate>Wed, 25 Feb 2026 10:42:36 GMT</pubDate><enclosure url="https://cdn.sanity.io/images/myic0fbw/production/55e15334f6a10f734312d96b62b5de0b69fa464b-1600x1000.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<h1 id="heading-architecting-a-multi-agent-ai-fleet-on-a-single-vps">Architecting a Multi-Agent AI Fleet on a Single VPS</h1>
<p>Most developers treat AI assistants as chatbots. Type a prompt, get an answer, copy-paste it into your codebase. That works fine for one-off questions. It falls apart completely when you try to build products at scale.</p>
<p>For my personal projects, I run 6 autonomous AI agents on a single VPS. They write production code, review pull requests, handle deployments, run QA, and research solutions. They work 24/7. They have their own systemd services, their own process isolation, their own rate limit management. They are not chatbots. They are microservices.</p>
<p>This post explains the system design behind running a fleet of AI agents in production.</p>
<h2 id="heading-the-problem">The Problem</h2>
<p>Running one AI agent is trivial. Running six concurrently introduces every distributed systems problem you already know from backend engineering:</p>
<ul>
<li><strong>Process isolation</strong>: Agents must not interfere with each other. A rogue agent that crashes should not take down the fleet.</li>
<li><strong>Rate limit management</strong>: API providers enforce strict per-minute and per-hour limits. Six agents hitting the same provider will exhaust limits in minutes.</li>
<li><strong>Context window management</strong>: Large codebases exceed context limits. You need a strategy for what each agent sees and when.</li>
<li><strong>Authentication rotation</strong>: OAuth tokens expire. API keys hit quotas. You need automatic failover, not manual intervention at 3am.</li>
<li><strong>Observability</strong>: If an agent is producing garbage, you need to know immediately. Not after it has pushed 30 commits of broken code.</li>
</ul>
<p>These are not AI problems. These are infrastructure problems. And I already know how to solve infrastructure problems.</p>
<h2 id="heading-the-architecture">The Architecture</h2>
<p><img src="https://raw.githubusercontent.com/oguzhnatly/fleet/main/assets/demo.gif" alt="Fleet in action" /></p>
<p>Each agent runs as an independent user-level systemd service:</p>
<pre><code class="lang-bash"><span class="hljs-comment"># List all agent services</span>
systemctl --user list-units <span class="hljs-string">"openclaw-gateway*"</span> --<span class="hljs-built_in">type</span>=service

<span class="hljs-comment"># Each agent gets its own port, config, and workspace</span>
<span class="hljs-comment"># Main agent (coordinator): port 48391</span>
<span class="hljs-comment"># Coder:    port 48520</span>
<span class="hljs-comment"># Deployer: port 48540</span>
<span class="hljs-comment"># Researcher: port 48560</span>
<span class="hljs-comment"># Reviewer: port 48580</span>
<span class="hljs-comment"># QA:       port 48600</span>
</code></pre>
<p>Ports are spaced 20 apart. Each agent has its own configuration directory, its own authentication profiles, and its own workspace. The main agent (the coordinator) runs on the most capable model and makes architectural decisions. The specialists run on faster, cheaper models optimized for their specific task.</p>
<h3 id="heading-why-systemd">Why Systemd?</h3>
<p>Because it solves process management, automatic restarts, logging, and dependency ordering out of the box. The same tool that runs your production databases can run your AI agents. No Kubernetes. No Docker Compose. Just systemd.</p>
<pre><code class="lang-ini"><span class="hljs-section">[Unit]</span>
<span class="hljs-attr">Description</span>=OpenClaw Agent - Coder
<span class="hljs-attr">After</span>=network-<span class="hljs-literal">on</span>line.target

<span class="hljs-section">[Service]</span>
<span class="hljs-attr">Type</span>=simple
<span class="hljs-attr">ExecStart</span>=/usr/bin/openclaw gateway --profile coder
<span class="hljs-attr">Restart</span>=<span class="hljs-literal">on</span>-failure
<span class="hljs-attr">RestartSec</span>=<span class="hljs-number">30</span>
<span class="hljs-attr">Environment</span>=NODE_ENV=production

<span class="hljs-section">[Install]</span>
<span class="hljs-attr">WantedBy</span>=default.target
</code></pre>
<p>When an agent crashes, systemd restarts it after 30 seconds. When the VPS reboots, all agents come back up automatically. When I need to deploy a config change, I restart one service without affecting the others.</p>
<h2 id="heading-rate-limit-strategy">Rate Limit Strategy</h2>
<p>This is where most multi-agent setups fail. Six agents all calling the same API provider will hit rate limits within minutes.</p>
<p>The solution is a multi-provider failover chain:</p>
<ol>
<li><strong>Primary provider</strong> (highest quality model): Handles most requests.</li>
<li><strong>Secondary provider</strong> (same quality tier, different API key): Catches overflow when primary is rate-limited.</li>
<li><strong>Tertiary provider</strong> (cheaper model): Emergency fallback when both primary and secondary are exhausted.</li>
</ol>
<p>Each agent has its own authentication profile. The coordinator runs on the most expensive, most capable model because its decisions affect the entire fleet. Specialists run on faster models because their tasks are well-scoped.</p>
<p>Critical rule: <strong>never commit code from a fallback model without review.</strong> When the coordinator detects that a specialist fell back to a lower-tier model, it flags the output for extra scrutiny.</p>
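<p>The chain itself is just a loop over providers in priority order. A sketch, where <code>try_provider</code> is a stub standing in for the real API client and the <code>RATE_LIMITED</code> variable simulates providers returning 429s:</p>

```shell
# Sketch of the three-tier failover chain. try_provider is a stub; a real
# implementation would issue the API request and treat 429/5xx as failure.
try_provider() {
  case " $RATE_LIMITED " in
    *" $1 "*) return 1 ;;   # simulate: this provider is rate-limited
    *) return 0 ;;
  esac
}

call_with_failover() {
  for provider in "$@"; do
    if try_provider "$provider"; then
      echo "served-by: $provider"
      return 0
    fi
  done
  echo "all providers exhausted"
  return 1
}
```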
<h2 id="heading-the-oversight-layer">The Oversight Layer</h2>
<p>An unsupervised AI agent will drift. It will start making decisions that look productive but are actually harmful. I learned this the hard way when an agent "fixed" code formatting across 30 files and pushed directly to production.</p>
<p>The oversight system runs on a separate, cheap model (Groq, sub-second response times) and checks every 5 minutes:</p>
<ul>
<li>Are all agents alive and responsive?</li>
<li>Has any agent pushed code without passing CI?</li>
<li>Has any agent modified configuration files?</li>
<li>Are rate limits being respected?</li>
<li>Is the coordinator still following its operational checklist?</li>
</ul>
<p>When the oversight system detects a violation, it posts to a dedicated alert channel AND injects a direct message into the coordinator's session. The coordinator cannot ignore it.</p>
<h2 id="heading-the-self-healing-watchdog">The Self-Healing Watchdog</h2>
<p>Beyond the AI oversight, a bash script runs via system cron every 15 minutes:</p>
<ol>
<li>Checks if the main gateway process is alive.</li>
<li>If dead, grabs the last 50 log lines.</li>
<li>Feeds logs to a fast LLM API (Groq) asking for a diagnostic and fix.</li>
<li>Applies the fix and restarts the service.</li>
<li>If the LLM fix fails, falls back to restoring the last known good config backup.</li>
<li>Logs everything so the coordinator knows what happened when it wakes up.</li>
</ol>
<p>This means the system can recover from configuration errors, crash loops, and authentication failures without any human intervention.</p>
<h2 id="heading-lessons-from-production">Lessons from Production</h2>
<h3 id="heading-1-treat-ai-agents-like-junior-developers-not-senior-architects">1. Treat AI agents like junior developers, not senior architects</h3>
<p>Give them well-scoped tasks with clear acceptance criteria. Never let them make architectural decisions autonomously. The coordinator (running the best model) makes decisions. Specialists execute.</p>
<h3 id="heading-2-every-commit-must-pass-the-would-a-human-understand-this-test">2. Every commit must pass the "would a human understand this?" test</h3>
<p>Before any agent pushes code, the diff is checked against a simple heuristic: would a competent human developer look at this and immediately understand why it exists? If the answer is no, the commit is rejected.</p>
<h3 id="heading-3-configuration-changes-are-the-most-dangerous-operation">3. Configuration changes are the most dangerous operation</h3>
<p>The number one cause of downtime in my fleet is configuration errors, not code bugs. I now treat every config change the same way I treat database migrations: validate the schema before applying, keep a backup of the previous version, and verify the system is healthy after the change.</p>
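<p>That migration discipline fits in a dozen lines of shell. A hedged sketch; <code>validate_schema</code> and <code>healthcheck</code> are placeholder stubs you would replace with a real JSON schema validator and a service probe:</p>

```shell
# Sketch of the config-change ritual: validate, back up, apply, verify,
# roll back on failure. The two stubs below are crude placeholders.
validate_schema() { grep -q '{' "$1"; }   # stand-in for a schema check
healthcheck() { true; }                   # stand-in: always healthy

apply_config() {
  new="$1"
  live="$2"
  validate_schema "$new" || { echo "abort: schema invalid"; return 1; }
  cp "$live" "$live.bak"   # backup before touching anything
  cp "$new" "$live"        # apply
  if healthcheck; then
    echo "applied"
  else
    cp "$live.bak" "$live" # verify failed: restore previous version
    echo "rolled back"
    return 1
  fi
}
```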
<h3 id="heading-4-cost-is-not-the-constraint-quality-is">4. Cost is not the constraint. Quality is.</h3>
<p>Running six agents costs roughly the same as one junior developer's monthly coffee budget. The real cost is bad output. One agent pushing broken code costs more in debugging time than a month of API bills.</p>
<h2 id="heading-whats-next">What's Next</h2>
<p>I am building my own products with this system. Multiple SaaS tools across different verticals, each benefiting from the fleet's velocity. The details will come when they ship.</p>
<p>The goal is not to replace human engineering judgment. The goal is to automate everything that does not require it. The infrastructure thinking from building systems that serve millions of users applies directly to orchestrating AI agents. Same principles. Different domain.</p>
<p>If you are interested in the tools: <a target="_blank" href="https://github.com/oguzhnatly/fleet">Fleet</a> is open source and available on ClawHub.</p>
]]></content:encoded></item><item><title><![CDATA[Add CarPlay to your Flutter App 🚗]]></title><description><![CDATA[What is Apple CarPlay?
CarPlay is an Apple iOS Car Integration Standard that allows you to connect your iPhone to your car's infotainment system and display a simplified iOS-like interface. This gives you access to specific apps for use in your vehic...]]></description><link>https://blog.oguzhanatalay.com/add-carplay-to-your-flutter-app</link><guid isPermaLink="true">https://blog.oguzhanatalay.com/add-carplay-to-your-flutter-app</guid><category><![CDATA[Flutter]]></category><category><![CDATA[Flutter Examples]]></category><category><![CDATA[Flutter SDK]]></category><category><![CDATA[Apple]]></category><category><![CDATA[Google]]></category><dc:creator><![CDATA[Oguzhan Atalay]]></dc:creator><pubDate>Tue, 07 Sep 2021 19:51:04 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1631041198997/kpErJbBra.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<h1 id="what-is-apple-carplay">What is Apple CarPlay?</h1>
<p>CarPlay is an <strong>Apple iOS Car Integration Standard</strong> that allows you to connect your iPhone to your car's infotainment system and display a simplified iOS-like interface. This gives you access to specific apps for use in your vehicle. CarPlay makes really good use of Siri, allowing you to issue commands and listen to music while driving. </p>
<p>Apple CarPlay is now almost a standard feature in most of our automobiles. While most modern cars already have some sort of "smart" interface, those interfaces are usually pretty bad: they are frequently complicated, have poor voice assistants, and make it difficult to use the apps on your phone. CarPlay works consistently in any vehicle that supports it, providing iPhone users with a familiar interface.</p>
<p>With almost every major car manufacturer currently supporting or planning to support Apple CarPlay, making the most of an in-car presence will become a must-have for many apps. You can check <a target="_blank" href="https://www.apple.com/ios/carplay/available-models/">the whole automobile manufacturer list</a> from Apple. To understand the platform better, <a target="_blank" href="https://developer.apple.com/carplay/documentation/CarPlay-App-Programming-Guide.pdf">the official App Programming Guidelines document from Apple</a> is the most valuable resource for understanding the needs, limits, templates, and capabilities of CarPlay apps. This 49-page document clearly spells out the required actions, and you are strongly advised to read it.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1630777678645/DnsIKWtft.jpeg" alt="Apple CarPlay Hero Dashboard" /></p>
<p>Apple's core iOS applications Music, Messages, Calendar, Maps, Podcasts, and News already support CarPlay, and an increasing number of 3rd-party products are joining them such as Spotify and Waze. If you are interested in this system, <a target="_blank" href="https://mfi.apple.com/">MFi Program</a> is an excellent starting point.</p>
<p>Adding CarPlay support to your application will help it catch attention and stand out from applications that do not support CarPlay. An increasing number of car manufacturers are integrating CarPlay, and it has come to market faster than Android Auto. With each release, Apple adds more functionality and application types to CarPlay, making now a better time than ever to consider adding support to your application.</p>
<h1 id="how-it-works">How Does It Work?</h1>
<p>Apple announced some great features in iOS 14, one of which lets users download CarPlay applications from the App Store and use them on iPhone just like any other app. When an iPhone app is connected to a CarPlay vehicle, the app icon appears on the CarPlay home screen. CarPlay apps are not separate apps; you add CarPlay support to an existing app. Your app uses the CarPlay framework to provide already-designed templates and UI components to the user. Basically, CarPlay takes the things you want to do while driving and puts them on the car's built-in display. iOS controls the presentation of UI components as well as the interaction with the vehicle. Your app does not need to handle UI element layout for multiple screen resolutions or support varied input hardware like touchscreens, knobs, or touchpads. For general design guidance, see <a target="_blank" href="https://developer.apple.com/design/human-interface-guidelines/carplay/overview/introduction/">Human Interface Guidelines for CarPlay Apps</a>.</p>
<h1 id="carplay-with-flutter">CarPlay with Flutter</h1>
<p>You heard right, Flutter apps are finally compatible with Apple CarPlay after 2.5 years! On pub.dev, <a target="_blank" href="https://pub.dev/packages/flutter_carplay">there is a package named flutter_carplay</a> that is making great progress in this area. It provides a series of templates the developer can use to display application data on the car's integrated system, and it can even communicate with the iPhone app. <a target="_blank" href="https://github.com/oguzhnatly/flutter_carplay">Here</a> is the GitHub repository of the package.</p>
<p>Getting started requires some native code, but after that, everything will be done on the Flutter side. <a target="_blank" href="https://github.com/oguzhnatly/flutter_carplay#overview">The README file</a> in the package contains up-to-date detailed instructions for these steps.</p>
<p><img src="https://user-images.githubusercontent.com/54781138/131184549-3cb62678-ad3f-4d67-85fb-1410bd05eaff.gif" alt="CarPlay with Flutter" /></p>
<h2 id="templates">Templates</h2>
<p>CarPlay apps are built from a fixed set of user interface templates that are rendered on the CarPlay screen by iOS. Each CarPlay app category is limited to a certain number of templates. Your access to templates is determined by your app entitlement. You must choose the most appropriate category from the list below and contact Apple to request entitlement permission.</p>
<p>If you are not sure, take a look at the templates' pictures of the CarPlay in the README file <a target="_blank" href="https://github.com/oguzhnatly/flutter_carplay#templates-1">here</a>.</p>
<p><img src="https://raw.githubusercontent.com/oguzhnatly/flutter_carplay/master/previews/templates.png" alt="CarPlay with Flutter" /></p>
<h2 id="all-carplay-apps-require-a-carplay-app-entitlement">All CarPlay apps require a CarPlay app entitlement</h2>
<p>If you want to publish your app to the App Store with Apple CarPlay compatibility, or test and share it with others via TestFlight or ad hoc distribution, you must first request that Apple approve your developer account for CarPlay access. The process can take anywhere from a few days to several weeks or even months, depending on the type of entitlement requested.</p>
<p>To request a CarPlay app entitlement from Apple, go to <a target="_blank" href="https://developer.apple.com/contact/carplay">Apple CarPlay Contact Page</a> and provide information about your app, including the CarPlay App Category. Also, you have to agree to the CarPlay Entitlement Addendum.</p>
<p><strong>During development</strong>, you can use the built-in CarPlay simulator. The latest iOS 14 simulator appears to be fully functional and behaves nearly identically to a physical CarPlay unit.</p>
<p>Whether you are running the app in the simulator or building it for release, you must add the appropriate entitlement key to the <code>Entitlements.plist</code> file. If you do not already have an <code>Entitlements.plist</code> file, you must create one.</p>
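<p>As a sketch, an <code>Entitlements.plist</code> for an audio-category app might look like the following. The key shown, <code>com.apple.developer.carplay-audio</code>, applies to the audio category only; use the key that matches the category Apple approved for you.</p>
<pre><code class="lang-xml">&lt;?xml version="1.0" encoding="UTF-8"?&gt;
&lt;!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd"&gt;
&lt;plist version="1.0"&gt;
&lt;dict&gt;
  &lt;!-- Audio-category key shown as an example; replace with your approved entitlement key --&gt;
  &lt;key&gt;com.apple.developer.carplay-audio&lt;/key&gt;
  &lt;true/&gt;
&lt;/dict&gt;
&lt;/plist&gt;
</code></pre>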
<p><strong>If you already received the entitlements from Apple</strong>, start configuring your CarPlay app with the entitlements. You need to create and import the CarPlay Provisioning Profile and add an Entitlements File to Xcode Project. For more detailed instructions, visit <a target="_blank" href="https://developer.apple.com/documentation/carplay/requesting_the_carplay_entitlements">Import the CarPlay Provisioning Profile in Apple Developer Documentation</a>.</p>
<h2 id="before-the-installation">Before The Installation</h2>
<p>You are about to make some minor changes to your Xcode project after installing this package. This is due to the binary messenger and the fact that the package requires bitcode compilation, which is missing in Flutter. You will follow a procedure that relocates (but does not remove or edit) some of the Flutter and plugin engine files. If you plan to add this package to a critical project, proceed cautiously. For more details, see <a target="_blank" href="https://github.com/oguzhnatly/flutter_carplay#disclaimer-before-the-installation">the most recent updates in the package's README file</a>.</p>
<p>Please check <a target="_blank" href="https://github.com/oguzhnatly/flutter_carplay/tree/master/example">the example project</a> before you begin the installation.</p>
<h1 id="instructions-after-installing-the-package">Instructions after installing the package</h1>
<ul>
<li>The iOS platform version must be set to 14.0. To apply it globally, open <code>ios/Podfile</code> and change the following line (if it starts with <code>#</code>, remove the <code>#</code>):</li>
</ul>
<pre><code class="lang-ruby">platform <span class="hljs-symbol">:ios</span>, <span class="hljs-string">'14.0'</span>
</code></pre>
<p>After changing the platform version, run the following command in your terminal from the <code>ios/</code> folder to update your pods:</p>
<pre><code class="lang-zsh"># For Apple Silicon M1 chips:
$ arch -x86_64 pod install --repo-update

# For Intel chips:
$ pod install --repo-update
</code></pre>
<ul>
<li>Open <code>ios/Runner.xcworkspace</code> in Xcode. In your project navigator, open <code>AppDelegate.swift</code>.
<img src="https://raw.githubusercontent.com/oguzhnatly/flutter_carplay/master/previews/step2.png" alt="Flutter CarPlay" /></li>
</ul>
<p>Remove the following lines:</p>
<pre><code class="lang-swift"><span class="hljs-type">GeneratedPluginRegistrant</span>.register(with: <span class="hljs-keyword">self</span>)
<span class="hljs-keyword">return</span> <span class="hljs-keyword">super</span>.application(application, didFinishLaunchingWithOptions: launchOptions)
</code></pre>
<p>After that, it should look like this (if there are no third-party additions in your code):</p>
<pre><code class="lang-swift"><span class="hljs-keyword">import</span> UIKit
<span class="hljs-keyword">import</span> Flutter

<span class="hljs-meta">@UIApplicationMain</span>
<span class="hljs-meta">@objc</span> <span class="hljs-class"><span class="hljs-keyword">class</span> <span class="hljs-title">AppDelegate</span>: <span class="hljs-title">FlutterAppDelegate</span> </span>{
    <span class="hljs-keyword">override</span> <span class="hljs-function"><span class="hljs-keyword">func</span> <span class="hljs-title">application</span><span class="hljs-params">( <span class="hljs-number">_</span> application: UIApplication,
                               didFinishLaunchingWithOptions launchOptions: [UIApplication.LaunchOptionsKey: <span class="hljs-keyword">Any</span>]?)</span></span> -&gt; <span class="hljs-type">Bool</span> {
        <span class="hljs-keyword">return</span> <span class="hljs-literal">true</span>;
    }
}
</code></pre>
<ul>
<li><p>Create a swift file named <code>SceneDelegate.swift</code> in the Runner folder (not in the Xcode main project file) and add the code below:</p>
<pre><code class="lang-swift"> <span class="hljs-meta">@available</span>(iOS <span class="hljs-number">13.0</span>, *)
 <span class="hljs-class"><span class="hljs-keyword">class</span> <span class="hljs-title">SceneDelegate</span>: <span class="hljs-title">UIResponder</span>, <span class="hljs-title">UIWindowSceneDelegate</span> </span>{
     <span class="hljs-keyword">var</span> window: <span class="hljs-type">UIWindow?</span>

     <span class="hljs-function"><span class="hljs-keyword">func</span> <span class="hljs-title">scene</span><span class="hljs-params">(<span class="hljs-number">_</span> scene: UIScene, willConnectTo session: UISceneSession, options connectionOptions: UIScene.ConnectionOptions)</span></span> {
         <span class="hljs-keyword">guard</span> <span class="hljs-keyword">let</span> windowScene = scene <span class="hljs-keyword">as</span>? <span class="hljs-type">UIWindowScene</span> <span class="hljs-keyword">else</span> { <span class="hljs-keyword">return</span> }

         window = <span class="hljs-type">UIWindow</span>(windowScene: windowScene)

         <span class="hljs-keyword">let</span> flutterEngine = <span class="hljs-type">FlutterEngine</span>(name: <span class="hljs-string">"SceneDelegateEngine"</span>)
         flutterEngine.run()
         <span class="hljs-type">GeneratedPluginRegistrant</span>.register(with: flutterEngine)
         <span class="hljs-keyword">let</span> controller = <span class="hljs-type">FlutterViewController</span>.<span class="hljs-keyword">init</span>(engine: flutterEngine, nibName: <span class="hljs-literal">nil</span>, bundle: <span class="hljs-literal">nil</span>)
         window?.rootViewController = controller
         window?.makeKeyAndVisible()
     }
 }
</code></pre>
<p><img src="https://raw.githubusercontent.com/oguzhnatly/flutter_carplay/master/previews/step3.png" alt="Flutter CarPlay" /></p>
</li>
<li><p>One more step, add these keys with the values to your <code>Info.plist</code> file:</p>
<pre><code class="lang-xml"> <span class="hljs-tag">&lt;<span class="hljs-name">key</span>&gt;</span>UIApplicationSceneManifest<span class="hljs-tag">&lt;/<span class="hljs-name">key</span>&gt;</span>
 <span class="hljs-tag">&lt;<span class="hljs-name">dict</span>&gt;</span>
   <span class="hljs-tag">&lt;<span class="hljs-name">key</span>&gt;</span>UIApplicationSupportsMultipleScenes<span class="hljs-tag">&lt;/<span class="hljs-name">key</span>&gt;</span>
   <span class="hljs-tag">&lt;<span class="hljs-name">true</span> /&gt;</span>
   <span class="hljs-tag">&lt;<span class="hljs-name">key</span>&gt;</span>UISceneConfigurations<span class="hljs-tag">&lt;/<span class="hljs-name">key</span>&gt;</span>
   <span class="hljs-tag">&lt;<span class="hljs-name">dict</span>&gt;</span>
     <span class="hljs-tag">&lt;<span class="hljs-name">key</span>&gt;</span>CPTemplateApplicationSceneSessionRoleApplication<span class="hljs-tag">&lt;/<span class="hljs-name">key</span>&gt;</span>
     <span class="hljs-tag">&lt;<span class="hljs-name">array</span>&gt;</span>
       <span class="hljs-tag">&lt;<span class="hljs-name">dict</span>&gt;</span>
         <span class="hljs-tag">&lt;<span class="hljs-name">key</span>&gt;</span>UISceneConfigurationName<span class="hljs-tag">&lt;/<span class="hljs-name">key</span>&gt;</span>
         <span class="hljs-tag">&lt;<span class="hljs-name">string</span>&gt;</span>CarPlay Configuration<span class="hljs-tag">&lt;/<span class="hljs-name">string</span>&gt;</span>
         <span class="hljs-tag">&lt;<span class="hljs-name">key</span>&gt;</span>UISceneDelegateClassName<span class="hljs-tag">&lt;/<span class="hljs-name">key</span>&gt;</span>
         <span class="hljs-tag">&lt;<span class="hljs-name">string</span>&gt;</span>flutter_carplay.FlutterCarPlaySceneDelegate<span class="hljs-tag">&lt;/<span class="hljs-name">string</span>&gt;</span>
       <span class="hljs-tag">&lt;/<span class="hljs-name">dict</span>&gt;</span>
     <span class="hljs-tag">&lt;/<span class="hljs-name">array</span>&gt;</span>
     <span class="hljs-tag">&lt;<span class="hljs-name">key</span>&gt;</span>UIWindowSceneSessionRoleApplication<span class="hljs-tag">&lt;/<span class="hljs-name">key</span>&gt;</span>
     <span class="hljs-tag">&lt;<span class="hljs-name">array</span>&gt;</span>
       <span class="hljs-tag">&lt;<span class="hljs-name">dict</span>&gt;</span>
         <span class="hljs-tag">&lt;<span class="hljs-name">key</span>&gt;</span>UISceneConfigurationName<span class="hljs-tag">&lt;/<span class="hljs-name">key</span>&gt;</span>
         <span class="hljs-tag">&lt;<span class="hljs-name">string</span>&gt;</span>Default Configuration<span class="hljs-tag">&lt;/<span class="hljs-name">string</span>&gt;</span>
         <span class="hljs-tag">&lt;<span class="hljs-name">key</span>&gt;</span>UISceneDelegateClassName<span class="hljs-tag">&lt;/<span class="hljs-name">key</span>&gt;</span>
         <span class="hljs-tag">&lt;<span class="hljs-name">string</span>&gt;</span>$(PRODUCT_MODULE_NAME).SceneDelegate<span class="hljs-tag">&lt;/<span class="hljs-name">string</span>&gt;</span>
         <span class="hljs-tag">&lt;<span class="hljs-name">key</span>&gt;</span>UISceneStoryboardFile<span class="hljs-tag">&lt;/<span class="hljs-name">key</span>&gt;</span>
         <span class="hljs-tag">&lt;<span class="hljs-name">string</span>&gt;</span>Main<span class="hljs-tag">&lt;/<span class="hljs-name">string</span>&gt;</span>
       <span class="hljs-tag">&lt;/<span class="hljs-name">dict</span>&gt;</span>
     <span class="hljs-tag">&lt;/<span class="hljs-name">array</span>&gt;</span>
   <span class="hljs-tag">&lt;/<span class="hljs-name">dict</span>&gt;</span>
 <span class="hljs-tag">&lt;/<span class="hljs-name">dict</span>&gt;</span>
</code></pre>
</li>
</ul>
<h3 id="that-was-all-you-need-to-do-and-now-you-are-ready-to-build-your-first-carplay-app-with-flutter">That's all you need to do, and now you are ready to build your first CarPlay app with Flutter! 🚀😎</h3>
<p>You can find detailed usage and all the features the package provides in the README file <a target="_blank" href="https://github.com/oguzhnatly/flutter_carplay#usage--features">here</a>. All functions and static usages are thoroughly described.</p>
<h1 id="adding-a-template">Adding a template</h1>
<p>Let's start by adding a simple CarPlay menu to a Flutter application. You must always have a root template when working with the CarPlay display stack. Depending on the type of application, this is the base template that sits at the bottom of the display stack.</p>
<p>Initialize flutter_carplay and set a tab bar template as the root template:</p>
<pre><code class="lang-dart"><span class="hljs-keyword">import</span> <span class="hljs-string">'package:flutter/material.dart'</span>;
<span class="hljs-keyword">import</span> <span class="hljs-string">'package:flutter_carplay/flutter_carplay.dart'</span>;

<span class="hljs-keyword">void</span> main() {
  runApp(<span class="hljs-keyword">const</span> MyApp());
}

<span class="hljs-class"><span class="hljs-keyword">class</span> <span class="hljs-title">MyApp</span> <span class="hljs-keyword">extends</span> <span class="hljs-title">StatefulWidget</span> </span>{
  <span class="hljs-keyword">const</span> MyApp({Key? key}) : <span class="hljs-keyword">super</span>(key: key);

  <span class="hljs-meta">@override</span>
  State&lt;MyApp&gt; createState() =&gt; _MyAppState();
}

<span class="hljs-class"><span class="hljs-keyword">class</span> <span class="hljs-title">_MyAppState</span> <span class="hljs-keyword">extends</span> <span class="hljs-title">State</span>&lt;<span class="hljs-title">MyApp</span>&gt; </span>{
  <span class="hljs-keyword">final</span> FlutterCarplay _flutterCarplay = FlutterCarplay();

  <span class="hljs-meta">@override</span>
  <span class="hljs-keyword">void</span> initState() {
    <span class="hljs-keyword">super</span>.initState();

    FlutterCarplay.setRootTemplate(
      rootTemplate: CPTabBarTemplate(
        templates: [
          CPListTemplate(
            sections: [
              CPListSection(
                items: [
                  CPListItem(
                    text: <span class="hljs-string">"Item 1"</span>,
                    detailText: <span class="hljs-string">"Detail Text"</span>,
                    onPress: (complete, self) {
                      self.setDetailText(<span class="hljs-string">"You can change the detail text.. 🚀"</span>);
                      complete();
                    },
                    image: <span class="hljs-string">'images/logo_flutter_1080px_clr.png'</span>,
                  ),
                  CPListItem(
                    text: <span class="hljs-string">"Item 2"</span>,
                    detailText: <span class="hljs-string">"Start progress bar"</span>,
                    isPlaying: <span class="hljs-keyword">false</span>,
                    playbackProgress: <span class="hljs-number">0</span>,
                    image: <span class="hljs-string">'images/logo_flutter_1080px_clr.png'</span>,
                    onPress: (complete, self) {
                      complete();
                    },
                  ),
                ],
                header: <span class="hljs-string">"First Section"</span>,
              ),
            ],
            title: <span class="hljs-string">"Home"</span>,
            showsTabBadge: <span class="hljs-keyword">false</span>,
            systemIcon: <span class="hljs-string">"house.fill"</span>,
          ),
          CPListTemplate(
            sections: [],
            title: <span class="hljs-string">"Settings"</span>,
            emptyViewTitleVariants: [<span class="hljs-string">"Settings"</span>],
            emptyViewSubtitleVariants: [
              <span class="hljs-string">"No settings have been added here yet. You can start adding right away"</span>
            ],
            showsTabBadge: <span class="hljs-keyword">false</span>,
            systemIcon: <span class="hljs-string">"gear"</span>,
          ),
        ],
      ),
      animated: <span class="hljs-keyword">true</span>,
    );
  }

  <span class="hljs-meta">@override</span>
  Widget build(BuildContext context) {
    <span class="hljs-keyword">return</span> MaterialApp(
      home: Scaffold(
        appBar: AppBar(
          title: <span class="hljs-keyword">const</span> Text(<span class="hljs-string">'Flutter Carplay'</span>),
        ),
        body: Text(<span class="hljs-string">'CarPlay with Flutter'</span>),
      ),
    );
  }
}
</code></pre>
<p><img src="https://oguzhanatalay.com/hashnode/images/tabbar-template.png" alt="Flutter CarPlay Tab Bar Template" /></p>
<p>As you can see, we set a new <code>CPTabBarTemplate</code> as the root template by calling <code>FlutterCarplay.setRootTemplate</code> inside <code>initState</code>. This ensures the templates are in place before the CarPlay app launches. You can initialize on any screen or in any function you want, but if the CarPlay app launches before the mobile app sets the root template, the CarPlay app will crash. Crashing leaves a bad impression on users and may make them dislike the app, and we do not want that to happen to your users at any time.</p>
<p>The root template must be one of the following types: <code>CPTabBarTemplate</code>, <code>CPGridTemplate</code>, or <code>CPListTemplate</code>. Otherwise, a TypeError will be thrown.</p>
<p>Additionally, you can find <a target="_blank" href="https://github.com/oguzhnatly/flutter_carplay#tab-bar-template">Tab bar</a>, <a target="_blank" href="https://github.com/oguzhnatly/flutter_carplay#grid-template">Grid</a>, <a target="_blank" href="https://github.com/oguzhnatly/flutter_carplay#alert-template">Alert</a>, <a target="_blank" href="https://github.com/oguzhnatly/flutter_carplay#action-sheet-template">Action sheet</a> and <a target="_blank" href="https://github.com/oguzhnatly/flutter_carplay#list-template">List</a> templates. In the next releases of the plugin, other templates such as map, search, and voice control will be supported. You can find the most recent road map <a target="_blank" href="https://github.com/oguzhnatly/flutter_carplay#road-map">here</a>. Also, if you are interested in contributing, <strong>contributors are always welcome</strong>, for more detail please visit <a target="_blank" href="https://github.com/oguzhnatly/flutter_carplay#contributing">here</a>.</p>
<h1 id="show-a-new-template-by-adding-to-the-carplay-navigation-hierarchy">Show a new template by adding to the CarPlay navigation hierarchy</h1>
<p>You can only push a new template that is a <code>CPGridTemplate</code> or a <code>CPListTemplate</code>. If you try to push an alert or modal, a type error will occur.</p>
<pre><code class="lang-dart">FlutterCarplay.push(
  template: CPGridTemplate(
      title: <span class="hljs-string">"Grid Template"</span>,
      buttons: [
        <span class="hljs-keyword">for</span> (<span class="hljs-keyword">var</span> i = <span class="hljs-number">1</span>; i &lt; <span class="hljs-number">9</span>; i++)
          CPGridButton(
            titleVariants: [<span class="hljs-string">"Item <span class="hljs-subst">$i</span>"</span>],
            image: <span class="hljs-string">'images/logo_flutter_1080px_clr.png'</span>,
            onPress: () {
              <span class="hljs-built_in">print</span>(<span class="hljs-string">"Grid Button <span class="hljs-subst">$i</span> pressed"</span>);
            },
          ),
      ],
    ),
  animated: <span class="hljs-keyword">true</span>,
);
</code></pre>
<p><img src="https://raw.githubusercontent.com/oguzhnatly/flutter_carplay/master/previews/grid_template.png" alt="Flutter CarPlay Grid Template" /></p>
<p>As you can see in the example above, the back button is visible and pressable; its text and other properties can even be customized. When using <code>CPGridTemplate</code>, you need to provide an image path from the Flutter assets, and you must always declare that path in the <code>pubspec.yaml</code> file.</p>
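<p>For instance, the asset path used in the examples would be declared in <code>pubspec.yaml</code> roughly like this (the path matches the example project's image and may differ in your app):</p>
<pre><code class="lang-yaml">flutter:
  assets:
    # Declare every image path you pass to CarPlay templates
    - images/logo_flutter_1080px_clr.png
</code></pre>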
<h1 id="show-an-alert-to-the-driver">Show an alert to the driver ⚠️</h1>
<p>You may need to show an alert or modal when the app requires the driver's input to continue. In most vehicles, the driver can select one of the actions with the car's buttons on the steering wheel.</p>
<pre><code class="lang-dart">FlutterCarplay.showAlert(
  template: CPAlertTemplate(
    titleVariants: [<span class="hljs-string">"Alert Title"</span>],
    actions: [
      CPAlertAction(
        title: <span class="hljs-string">"Okay"</span>,
        style: CPAlertActionStyles.normal,
        onPress: () {
          <span class="hljs-built_in">print</span>(<span class="hljs-string">"Okay pressed"</span>);
          FlutterCarplay.popModal(animated: <span class="hljs-keyword">true</span>);
        },
      ),
      CPAlertAction(
        title: <span class="hljs-string">"Cancel"</span>,
        style: CPAlertActionStyles.cancel,
        onPress: () {
          <span class="hljs-built_in">print</span>(<span class="hljs-string">"Cancel pressed"</span>);
          FlutterCarplay.popModal(animated: <span class="hljs-keyword">true</span>);
        },
      ),
      CPAlertAction(
        title: <span class="hljs-string">"Remove"</span>,
        style: CPAlertActionStyles.destructive,
        onPress: () {
          <span class="hljs-built_in">print</span>(<span class="hljs-string">"Remove pressed"</span>);
          FlutterCarplay.popModal(animated: <span class="hljs-keyword">true</span>);
        },
      ),
    ],
  ),
  animated: <span class="hljs-keyword">true</span>,
);
</code></pre>
<p><img src="https://raw.githubusercontent.com/oguzhnatly/flutter_carplay/master/previews/alert_template.png" alt="Flutter CarPlay Alert Template" /></p>
<p>As shown, we displayed an alert with <code>FlutterCarplay.showAlert</code>. You can also show an action sheet modal by calling <code>FlutterCarplay.showActionSheet</code>. An alert and an action sheet cannot be displayed at the same time, and the only way to dismiss either modal is to call <code>FlutterCarplay.popModal</code>.</p>
<p><img src="https://raw.githubusercontent.com/oguzhnatly/flutter_carplay/master/previews/actionsheet_template.png" alt="Flutter CarPlay Action Sheet" /></p>
<h1 id="handling-connection-events">Handling connection events</h1>
<p>You might want to detect connection changes, such as whether CarPlay is connected to the iPhone, running in the background, or completely disconnected, and then suggest actions to the user. This is entirely optional.</p>
<pre><code class="lang-dart"><span class="hljs-keyword">import</span> <span class="hljs-string">'package:flutter/material.dart'</span>;
<span class="hljs-keyword">import</span> <span class="hljs-string">'package:flutter_carplay/flutter_carplay.dart'</span>;

<span class="hljs-keyword">void</span> main() {
  runApp(<span class="hljs-keyword">const</span> MyApp());
}

<span class="hljs-class"><span class="hljs-keyword">class</span> <span class="hljs-title">MyApp</span> <span class="hljs-keyword">extends</span> <span class="hljs-title">StatefulWidget</span> </span>{
  <span class="hljs-keyword">const</span> MyApp({Key? key}) : <span class="hljs-keyword">super</span>(key: key);

  <span class="hljs-meta">@override</span>
  State&lt;MyApp&gt; createState() =&gt; _MyAppState();
}

<span class="hljs-class"><span class="hljs-keyword">class</span> <span class="hljs-title">_MyAppState</span> <span class="hljs-keyword">extends</span> <span class="hljs-title">State</span>&lt;<span class="hljs-title">MyApp</span>&gt; </span>{
  CPConnectionStatusTypes connectionStatus = CPConnectionStatusTypes.unknown;
  <span class="hljs-keyword">final</span> FlutterCarplay _flutterCarplay = FlutterCarplay();

  <span class="hljs-meta">@override</span>
  <span class="hljs-keyword">void</span> initState() {
    <span class="hljs-keyword">super</span>.initState();
    _flutterCarplay.addListenerOnConnectionChange(onCarplayConnectionChange);
  }

  <span class="hljs-keyword">void</span> onCarplayConnectionChange(CPConnectionStatusTypes status) {
    <span class="hljs-comment">// Do things when carplay connection status is:</span>
    <span class="hljs-comment">// - CPConnectionStatusTypes.connected</span>
    <span class="hljs-comment">// - CPConnectionStatusTypes.background</span>
    <span class="hljs-comment">// - CPConnectionStatusTypes.disconnected</span>
    <span class="hljs-comment">// - CPConnectionStatusTypes.unknown</span>
    setState(() {
      connectionStatus = status;
    });
  }

  <span class="hljs-meta">@override</span>
  <span class="hljs-keyword">void</span> dispose() {
    _flutterCarplay.removeListenerOnConnectionChange();
    <span class="hljs-keyword">super</span>.dispose();
  }

  <span class="hljs-meta">@override</span>
  Widget build(BuildContext context) {
    <span class="hljs-keyword">return</span> MaterialApp(
      home: Scaffold(
        appBar: AppBar(
          title: <span class="hljs-keyword">const</span> Text(<span class="hljs-string">'Flutter Carplay'</span>),
        ),
        body: Text(
            <span class="hljs-string">'Carplay Status: '</span> +
                CPEnumUtils.stringFromEnum(connectionStatus),
        ),
      ),
    );
  }
}
</code></pre>
<p>We registered a listener with <code>addListenerOnConnectionChange</code> so the app is notified whenever it connects to or disconnects from the CarPlay display. When listening for connection changes, it is highly recommended to call <code>removeListenerOnConnectionChange</code> in <code>dispose</code>, unless background activity is really necessary for your use case. Listening to connection state changes on a screen that is running in the background, or that has been removed from the navigation stack, can cause performance issues. If you only need the connection status once, use <code>FlutterCarplay.connectionStatus</code>, a getter for the current CarPlay connection status; it returns one of the <code>CPConnectionStatusTypes</code> values as a String.</p>
<h1 id="never-forget-these">Never forget these:</h1>
<ul>
<li>The flutter_carplay controller must be initialized before the app starts. Otherwise, some callback functions will not work, and you will most likely get a blank gray or black screen with an error.</li>
<li>No more than 5 templates (including the root template) may be on the navigation stack, and you should never push beyond that. Otherwise, the CarPlay app and the iOS application will crash immediately.</li>
<li>If you attempt to use a template not supported by your entitlement, an exception will occur at runtime.</li>
<li>Some vehicles dynamically limit lists to a maximum of 12 items. Always be prepared to handle the case where only 12 items can be displayed; items beyond the maximum will not be shown.</li>
</ul>
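<p>One way to respect the 12-item limit above is to cap a section's items yourself before building it. A minimal sketch in Dart, under the assumption that the helper and constant names below are my own and not part of the package API:</p>
<pre><code class="lang-dart">// Assumed constant: the conservative per-vehicle limit described above.
const int maxListItemCount = 12;

// Hypothetical helper (not a package API): cap a section's items so
// nothing is silently dropped by the vehicle's display.
CPListSection buildCappedSection(List&lt;CPListItem&gt; allItems) {
  return CPListSection(
    items: allItems.take(maxListItemCount).toList(),
    header: "First Section",
  );
}
</code></pre>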
<h1 id="conclusion">Conclusion</h1>
<p>Adding CarPlay support to a Flutter app was previously impossible due to an <a target="_blank" href="https://github.com/flutter/flutter/issues/26801">issue</a> open since January 2019. The flutter_carplay package solves this problem by providing a <strong>null-safe library</strong> for your Flutter application, with simple integrations and actions. You can find <a target="_blank" href="https://github.com/oguzhnatly/flutter_carplay/blob/master/example/lib/main.dart">a full, more complex code example</a> in the <code>/example</code> directory of the flutter_carplay GitHub repository. There is definitely a lot more to be done in this package, but it makes CarPlay integration possible by solving one of the main problems in Flutter. The remaining features no longer depend on the Flutter engine; they are a matter of correct implementation and error handling. If you are interested in contributing, <strong>contributors are always welcome</strong>; for more detail, please visit <a target="_blank" href="https://github.com/oguzhnatly/flutter_carplay#contributing">here</a>.</p>
<p>After a long wait, Apple CarPlay apps are finally compatible with Flutter. It is an indescribable feeling, and I look forward to seeing Flutter applications in our vehicles. I am also thrilled to be a part of it with my first package on pub.dev.</p>
<div class="embed-wrapper"><div class="embed-loading"><div class="loadingRow"></div><div class="loadingRow"></div></div><a class="embed-card" href="https://twitter.com/oguzhanatalay_/status/1431945463691976706">https://twitter.com/oguzhanatalay_/status/1431945463691976706</a></div>
<h1 id="thank-you-for-taking-the-time-to-read-lets-connect">Thank you for taking the time to read, let's connect!</h1>
<p>Thank you for reading my post. That's all for now. Please like it, leave a comment and share this post with interested people, if you find it helpful. Feel free to subscribe to my email newsletter and connect on <a target="_blank" href="https://twitter.com/oguzhanatalay_">Twitter(@oguzhanatalay_)</a>, <a target="_blank" href="https://www.linkedin.com/in/oguzhanatalay/">LinkedIn(oguzhanatalay)</a>, and <a target="_blank" href="https://github.com/oguzhnatly">GitHub(oguzhnatly)</a>. </p>
<ul>
<li>For more, visit my page - <a target="_blank" href="https://linktr.ee/oguzhanatalay">linktr.ee/oguzhanatalay</a></li>
</ul>
<h1 id="support">Support</h1>
<p>If you appreciate what I do and want to support me, you can do so by clicking the button below
<a target="_blank" href="https://www.buymeacoffee.com/oguzhanatalay"><img src="https://www.buymeacoffee.com/assets/img/guidelines/download-assets-sm-1.svg" alt="Buy me a coffee" /></a>
or simply show your love by starring <a target="_blank" href="https://github.com/oguzhnatly/flutter_carplay">the repository here</a>.</p>
<h2 id="update-flutter-carplay-now-is-the-number-one-of-the-week-34-in-pubdev">Update: Flutter CarPlay is now the number one package of week 34 on pub.dev 🎉</h2>
<div class="embed-wrapper"><div class="embed-loading"><div class="loadingRow"></div><div class="loadingRow"></div></div><a class="embed-card" href="https://youtu.be/E3LLosgroiI?t=154">https://youtu.be/E3LLosgroiI?t=154</a></div>
]]></content:encoded></item><item><title><![CDATA[React Native: How to add drop shadow with animation effects on Android!]]></title><description><![CDATA[There are bugs and issues in each framework, and React Native is not different. If you have ever developed an app on React Native, you would know that the box-shadow effects on android are a very well]]></description><link>https://blog.oguzhanatalay.com/react-native-how-to-add-drop-shadow-effects-on-android-supports-animation</link><guid isPermaLink="true">https://blog.oguzhanatalay.com/react-native-how-to-add-drop-shadow-effects-on-android-supports-animation</guid><category><![CDATA[React Native]]></category><category><![CDATA[UI]]></category><category><![CDATA[Android]]></category><category><![CDATA[animation]]></category><category><![CDATA[iOS]]></category><dc:creator><![CDATA[Oguzhan Atalay]]></dc:creator><pubDate>Tue, 07 Sep 2021 08:07:00 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1619972177187/K8Vai3rEy.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>There are bugs and issues in each framework, and React Native is not different. If you have ever developed an app on React Native, you would know that the box-shadow effects on android are a very well-known <a href="https://github.com/styled-components/styled-components/issues/709#issuecomment-295274475">issue</a>.</p>
<p>Today, I will explain how to solve this problem as precisely and easily as possible, so you can build apps with better UI/UX on both platforms, <strong>using the same code!</strong></p>
<h3>💡 First of all: props and principles</h3>
<p>These shadow props are described in the <a href="https://reactnative.dev/docs/shadow-props">React Native documentation</a>:</p>
<ul>
<li>shadowColor: Sets the drop shadow color.</li>
<li>shadowOffset: Sets the drop shadow offset.</li>
<li>shadowOpacity: Sets the drop shadow opacity.</li>
<li>shadowRadius: Sets the drop shadow blur radius.</li>
</ul>
<p>React Native provides shadows on both iOS and Android, but properties such as <code>shadowOffset</code>, <code>shadowOpacity</code>, and <code>shadowRadius</code> are available only on iOS. On Android, React Native uses the <a href="https://developer.android.com/training/material/shadows-clipping.html#Elevation">Elevation API</a> instead. The general reason is the difference in interaction design patterns between the two systems: Apple follows its <a href="https://developer.apple.com/design/human-interface-guidelines/ios/overview/themes/">Human Interface Guidelines</a>, while Google follows <a href="https://material.io/">Material Design</a>.</p>
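<p>To make the split concrete, the same visual card shadow ends up using different style keys on each platform. The values below are illustrative assumptions, not taken from the documentation:</p>
<pre><code class="lang-javascript">// iOS reads the full set of shadow style keys.
const iosShadow = {
  shadowColor: "#000",
  shadowOffset: { width: 0, height: 2 },
  shadowOpacity: 0.3,
  shadowRadius: 4,
};

// Android ignores those keys and only reads `elevation`.
const androidShadow = { elevation: 4 };

// In React Native you would normally pick one per platform:
// const shadow = Platform.select({ ios: iosShadow, android: androidShadow });
</code></pre>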
<h3>🚀 So, how can we use the shadow props in Android?</h3>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1619902760577/aQhxB8zC5.png" alt="image.png" />
I spent some time researching the best way to solve this problem and found <a href="https://github.com/hoanglam10499/react-native-drop-shadow">this</a> package, created specifically for Android (<em>it also works on iOS, don't worry!</em>). It renders the view into <strong>a bitmap representation</strong>, then blurs and colors it according to the shadow style values, much like iOS does natively.</p>
<p>This way, you don't need to spend time tweaking <code>elevation</code> to approximate the iOS look. Let's start!</p>
<p><strong>Installation:</strong> <code>yarn add react-native-drop-shadow</code></p>
<p><strong>Usage:</strong></p>
<pre><code class="language-javascript">import DropShadow from "react-native-drop-shadow";
import { FlatList } from "react-native";

// Wrap any view in DropShadow and style it with the usual shadow props.
export function MyComponent() {
  return (
    &lt;DropShadow
      style={{
        shadowColor: "#000",
        shadowOffset: {
          width: 0,
          height: 0,
        },
        shadowOpacity: 1,
        shadowRadius: 5,
      }}
    &gt;
      ...
    &lt;/DropShadow&gt;
  );
}

// For lists, also pass DropShadow as CellRendererComponent so the
// shadow is not clipped by the cell wrapper.
export function MyFlatListComponent() {
  return (
    &lt;FlatList
      data={["1", "2", "3"]}
      keyExtractor={(item, index) =&gt; "List-" + index}
      CellRendererComponent={DropShadow}
      renderItem={({ item, index }) =&gt; (
        &lt;DropShadow
          style={{
            shadowColor: "#000",
            shadowOffset: {
              width: 0,
              height: 0,
            },
            shadowOpacity: 1,
            shadowRadius: 5,
          }}
        &gt;
          ...
        &lt;/DropShadow&gt;
      )}
    /&gt;
  );
}
</code></pre>
<h3>⚡ Bonus! It supports Animated Drop Shadow</h3>
<pre><code class="language-jsx">import DropShadow from "react-native-drop-shadow";
import { Animated } from "react-native";

const AnimatedDropShadow = Animated.createAnimatedComponent(DropShadow);

export default function MyComponentWithAnimatedViews() {
  return (
    &lt;AnimatedDropShadow
      style={{
        shadowColor: "#000",
        shadowOffset: {
          width: 0,
          height: 0,
        },
        shadowOpacity: 1,
        shadowRadius: 5,
      }}
    &gt;
      ...
    &lt;/AnimatedDropShadow&gt;
  );
}
</code></pre>
<h3>Full Example</h3>
<p>This example may not work properly in Expo. Please test it on your phone using react-native-cli.</p>
<p><a class="embed-card" href="https://snack.expo.io/@oguzhanatalay/react-native-how-to-add-shadow-effects-on-android-supports-animation">https://snack.expo.io/@oguzhanatalay/react-native-how-to-add-shadow-effects-on-android-supports-animation</a></p>
<h3>Conclusion</h3>

<blockquote>
<p>Please keep in mind that rendering shadows via bitmaps can be performance-heavy when many shadows and animations are rendered at the same time.</p>
</blockquote>
<p>Thank you for taking the time to read this post. I've shown how to use drop shadow effects both with and without animation. If you've found this useful, please like it, share it, and leave a comment. Hope to see you in the next post! 👋🏼</p>
]]></content:encoded></item></channel></rss>