Cross-Platform AI Agents: Building a Shared Gemini + Apple Intelligence Assistant

Illustration of a shared AI assistant powering both Android and iOS devices, with connected user flows, synchronized prompts, and developer code samples bridging Swift and Kotlin.

Developers are now building intelligent features for both iOS and Android — often using different AI platforms: Gemini AI on Android, and Apple Intelligence on iOS. So how do you build a shared assistant experience across both ecosystems?

This post guides you through building a cross-platform AI agent that behaves consistently — even when the underlying LLM frameworks are different. We’ll show design principles, API wrappers, shared prompt memory, and session persistence patterns.

📦 Goals of a Shared Assistant

  • Consistent prompt structure and tone across platforms
  • Shared memory/session history between devices
  • Uniform fallback behavior (offline mode, cloud execution)
  • Cross-platform UI/UX parity

🧱 Architecture Overview

The base model looks like this:


              [ Shared Assistant Intent Engine ]
                   /                    \
      [ Gemini Prompt SDK ]         [ Apple Intelligence APIs ]
        (Kotlin + AICore)             (Swift + AIEditTask)
                   \                    /
           [ Shared Prompt Memory Sync ]
  

Each platform handles local execution, but prompt intent and reply structure stay consistent.

🧠 Defining Shared Prompt Intents

Create a common schema:


{
  "intent": "TRAVEL_PLANNER",
  "data": {
    "destination": "Kerala",
    "duration": "3 days",
    "budget": "INR 10,000"
  }
}
  

Each platform converts this into its native format:

Apple Swift (AIEditTask)


let prompt = """
You are a travel assistant. Suggest a 3-day trip to Kerala under ₹10,000.
"""
let result = await AppleIntelligence.perform(AIEditTask(.generate, input: prompt))
  

Android Kotlin (Gemini)


val result = session.prompt("Suggest a 3-day trip to Kerala under ₹10,000.")
  

🔄 Synchronizing Memory & State

Use Firestore, Supabase, or Realm to store:

  • Session ID
  • User preferences
  • Prompt history
  • Previous assistant decisions

Send current state to both Apple and Android views for seamless cross-device experience.
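A compact Kotlin sketch of what that synced state might look like (the `AssistantSession` type and its field names are assumptions for illustration; the backing store, whether Firestore, Supabase, or Realm, sits behind whatever save call you use):

```kotlin
// Hypothetical cross-device session state; field names are illustrative.
data class AssistantSession(
    val sessionId: String,
    val preferences: Map<String, String>,  // e.g. "tone" -> "friendly"
    val promptHistory: List<String>,       // most recent last
    val decisions: List<String>            // prior assistant choices
)

// Flatten to a map, which is the shape most document stores accept.
fun AssistantSession.toDocument(): Map<String, Any> = mapOf(
    "sessionId" to sessionId,
    "preferences" to preferences,
    "promptHistory" to promptHistory,
    "decisions" to decisions
)
```

Keeping the record this small makes it cheap to sync on every turn and easy to render on either platform.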

🧩 Kotlin Multiplatform + Swift Interop

Use Kotlin Multiplatform Mobile (KMM) to share the agent's business logic, exporting the common pieces to iOS:


// KMM prompt formatter
fun formatTravelPrompt(data: TravelRequest): String {
    return "Plan a ${data.duration} trip to ${data.destination} under ${data.budget}"
}
  

🎨 UI Parity Tips

  • Use SwiftUI’s glass-like cards and Compose’s Material3 Blur for parity
  • Stick to rounded layouts, dynamic spacing, and minimum-scale text
  • Design chat bubbles with equal line spacing and vertical rhythm

🔍 Debugging and Logs

  • Gemini: Use Gemini Debug Console and PromptSession trace
  • Apple: Xcode AI Profiler + LiveContext logs

Normalize logs across both by writing JSON wrappers and pushing to Firebase or Sentry.
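For example, a normalized entry could be built like this (the `AiLogEntry` shape is a made-up convention, not an SDK type, and a production app would use a real JSON library rather than hand-rolled strings):

```kotlin
// Hypothetical normalized log entry shared by both platforms.
data class AiLogEntry(
    val platform: String,   // "gemini" or "apple-intelligence"
    val sessionId: String,
    val prompt: String,
    val latencyMs: Long,
    val mode: String        // "on-device" or "cloud"
)

// Hand-rolled JSON keeps the sketch dependency-free; swap in
// kotlinx.serialization or Moshi for real use.
fun AiLogEntry.toJson(): String {
    fun esc(s: String) = s.replace("\\", "\\\\").replace("\"", "\\\"")
    return """{"platform":"${esc(platform)}","sessionId":"${esc(sessionId)}",""" +
        """"prompt":"${esc(prompt)}","latencyMs":$latencyMs,"mode":"${esc(mode)}"}"""
}
```

Because both platforms emit the same fields, dashboards and alerts in Firebase or Sentry can treat Gemini and Apple Intelligence traffic uniformly.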

🔐 Privacy Considerations

  • Store session data locally with user opt-in for cloud sync
  • Mark cloud-offloaded prompts (on-device → server fallback)
  • Provide export history button with logs + summaries

✅ Summary

Building shared AI experiences across platforms isn’t about using the same LLM — it’s about building consistent UX, logic, and memory across SDKs.

🔗 Further Reading

Debugging AI Workflows: Tools and Techniques for Gemini & Apple Intelligence

Illustration of developers debugging AI prompts for Gemini and Apple Intelligence, showing token stream logs, latency timelines, and live test panels in Android Studio and Xcode.

As LLMs like Google’s Gemini AI and Apple Intelligence become integrated into mainstream mobile apps, developers need more than good prompts — they need tools to debug how AI behaves in production.

This guide covers the best tools and techniques to debug, monitor, and optimize AI workflows inside Android and iOS apps. It includes how to trace prompt failures, monitor token usage, visualize memory, and use SDK-level diagnostics in Android Studio and Xcode.

📌 Why AI Debugging Is Different

  • LLM output is non-deterministic — you must debug for behavior, not just bugs
  • Latency varies with prompt size and model path (local vs cloud)
  • Prompts can fail silently unless you add structured logging

Traditional debuggers don’t cut it for AI apps. You need prompt-aware debugging tools.

🛠 Debugging Gemini AI (Android)

1. Gemini Debug Console (Android Studio Vulcan)

  • Tracks token usage for each prompt
  • Shows latency across LLM stages: input parse → generation → render
  • Logs assistant replies and scoring metadata

// Gemini Debug Log
Prompt: "Explain GraphQL to a 10-year-old"
Tokens: 47 input / 82 output
Latency: 205ms (on-device)
Session ID: 38f3-bc2a
  

2. PromptSession Logs


val session = PromptSession.create(context)
session.enableLogging(true)
  

Enables JSON export of prompts and responses for unit testing and monitoring.

3. Prompt Failure Types

  • Empty response: Token budget exceeded or vague prompt
  • Unstructured output: Format not enforced (missing JSON key)
  • Invalid fallback: Local model refused → cloud call blocked
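These failure types can be triaged mechanically before you retry. A Kotlin sketch, where the classification rules are deliberate simplifications of the three cases above:

```kotlin
enum class PromptFailure { EMPTY_RESPONSE, UNSTRUCTURED_OUTPUT, INVALID_FALLBACK, NONE }

// Illustrative triage: map a raw response to one of the failure types above.
fun classify(response: String?, requiredKey: String, cloudAllowed: Boolean): PromptFailure {
    if (response.isNullOrBlank()) {
        // No output: either the prompt failed outright, or the local model
        // refused and the cloud fallback was blocked.
        return if (cloudAllowed) PromptFailure.EMPTY_RESPONSE
               else PromptFailure.INVALID_FALLBACK
    }
    // Output exists but the enforced JSON key is missing.
    return if (!response.contains("\"$requiredKey\"")) PromptFailure.UNSTRUCTURED_OUTPUT
           else PromptFailure.NONE
}
```

Logging the classification alongside the raw response makes failure rates per type easy to chart in production.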

🧪 Testing with Gemini

  • Use Promptfoo or Langfuse to run prompt tests
  • Generate snapshots for expected output
  • Set up replays in Gemini SDK for load testing

Sample Replay in Kotlin


val testPrompt = GeminiPrompt("Suggest 3 snacks for a road trip")
val result = promptTester.run(testPrompt).assertJsonContains("snacks")
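`promptTester` and `assertJsonContains` are not shipped SDK APIs; a minimal stand-in for the assertion helper might look like this:

```kotlin
// Hypothetical snapshot-style assertion used in the replay example above:
// fail fast if the response is missing an expected JSON key.
fun String.assertJsonContains(key: String): String {
    check(this.contains("\"$key\"")) { "Response missing JSON key: $key" }
    return this // return the receiver so assertions can be chained
}
```

Returning the string lets you chain several key checks over one response in a single test.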
  

🍎 Debugging Apple Intelligence (iOS/macOS)

1. Xcode AI Debug Panel

  • See input tokenization
  • Log latency and output modifiers
  • Monitor fallback to Private Cloud Compute

2. AIEditTask Testing


let task = AIEditTask(.summarize, input: text)
task.enableDebugLog()
let result = await AppleIntelligence.perform(task)
  

Outputs include token breakdown, latency, and Apple-provided scoring of response quality.

3. LiveContext Snapshot Viewer

  • Logs app state, selected input, clipboard text
  • Shows how Apple Intelligence builds context window
  • Validates whether your app is sending relevant context

✅ Common Debug Patterns

Problem: Model Hallucination

  • Fix: Use role instructions like “respond only with facts”
  • Validate: Add sample inputs with known outputs and assert equality

Problem: Prompt Fallback Triggered

  • Fix: Reduce token count or simplify nested instructions
  • Validate: Log sessionMode (cloud vs local) and retry

Problem: UI Delay or Flicker

  • Fix: Use background thread for prompt fetch
  • Validate: Profile using Instruments or Android Traceview

🧩 Tools to Add to Your Workflow

  • Gemini Prompt Analyzer (CLI) – Token breakdown + cost estimator
  • AIProfiler (Xcode) – Swift task and latency profiler
  • Langfuse / PromptLayer – Prompt history + scoring for production AI
  • Promptfoo – CLI and CI test runner for prompt regression

🔐 Privacy, Logging & User Transparency

  • Always log AI-generated responses with audit trail
  • Indicate fallback to cloud processing visually (badge, color)
  • Offer “Why did you suggest this?” links for AI-generated suggestions

🔬 Monitoring AI in Production

  • Use Firebase or BigQuery for structured AI logs
  • Track top 20 prompts, token overage, retries
  • Log user editing of AI replies (feedback loop)
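The "top 20 prompts" report above reduces to a simple aggregation over your structured logs. A Kotlin sketch, with log entries simplified to raw prompt strings:

```kotlin
// Count prompt frequency and return the n most common, most frequent first.
fun topPrompts(logs: List<String>, n: Int): List<Pair<String, Int>> =
    logs.groupingBy { it }.eachCount()
        .entries.sortedByDescending { it.value }
        .take(n)
        .map { it.key to it.value }
```

The same grouping works for retries and token-overage events once those are fields in your log schema.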

📚 Further Reading

✅ Suggested TechsWill Posts

iOS 26 UI Patterns Developers Should Adopt from visionOS

Side-by-side comparison of iOS 26 and visionOS UI styles with SwiftUI layout code, showcasing adaptive layout, blurred cards, and spatial hierarchy in Apple’s latest design system.

Apple’s design language is evolving — and in iOS 26, the company is bridging spatial UI principles from visionOS into the iPhone. With the release of Liquid Glass and SwiftUI enhancements, developers now need to adopt composable, spatially aware, and depth-enhanced design patterns to remain native on iOS and future-ready for Apple Vision platforms.

This comprehensive post explores more than a dozen core UI concepts from visionOS and how to implement them in iOS 26. You’ll learn practical SwiftUI techniques, discover Apple’s new visual hierarchy rules, and see how these patterns apply to real-world apps.

📌 Why visionOS Matters to iOS Devs

Even if you’re not building for Vision Pro, your app’s design will increasingly reflect visionOS patterns. Apple is unifying UI guidelines so users feel visual and interaction continuity across iPhone, iPad, Mac, and Vision Pro.

Key Reasons to Adopt visionOS UI Patterns:

  • Liquid Glass design extends to iPhone and iPad
  • Spatial depth and blurs will become standard for modals, sheets, cards
  • Accessibility and gaze-ready layouts will soon be mandatory for mixed-reality support

🧊 Glass Panels and Foreground Elevation

visionOS apps organize interfaces using translucent glass layers that float above dynamic content. In iOS 26, this is possible with new Material stacks:


@State private var showNext = false

ZStack {
  Color(.systemBackground)
  RoundedRectangle(cornerRadius: 32)
    .fill(.ultraThinMaterial)
    .overlay {
      VStack {
        Text("Welcome Back!")
        Button("Continue") { showNext = true }
      }.padding()
    }
    .shadow(radius: 10)
}
  

✅ Use .ultraThinMaterial for layered background blur. Combine with shadows and ZStacks to show visual priority.

📐 Responsive UI with Container Awareness

visionOS UIs scale naturally with user distance and screen size. iOS now mirrors this with size classes and GeometryReader for adaptive views:


@Environment(\.horizontalSizeClass) var size

if size == .compact {
  CompactView()
} else {
  LazyVGrid(columns: [GridItem(.flexible()), GridItem(.flexible())]) {
    ForEach(items) { ItemCard($0) }
  }
}
  

💡 Combine with presentationDetents to scale modals to device context.

🔄 Spatial Transitions & Matched Geometry

visionOS relies heavily on animated transitions between panels and elements. These behaviors now appear on iOS with matchedGeometryEffect and .scrollTransition.


@Namespace var cardNamespace

CardView()
  .matchedGeometryEffect(id: cardID, in: cardNamespace)
  .transition(.asymmetric(insertion: .opacity, removal: .scale))
  

🎯 This improves continuity between navigation flows, especially in multi-modal apps.

🧭 Navigation Patterns: Sheets, Cards, Drawers

visionOS avoids deep nav stacks in favor of layered sheets and floating panels. iOS 26 supports:

  • .sheet with multiple detents
  • .popover for small-card interactions
  • .fullScreenCover for spatial transitions

.sheet(isPresented: $showSheet) {
  SettingsPanel()
    .presentationDetents([.fraction(0.5), .large])
}
  

These transitions match those found on Vision Pro, enabling natural movement between states.

🎨 VisionOS Visual Styles for iOS

Use This → Instead of This:

  • Material + Card Border → Flat white background
  • Shadowed button on blur → Standard button in stack
  • Scroll view fade/expand → Full-page modals
  • GeometryReader scaling → Fixed pixel height

These give your iOS app the same depth, bounce, and clarity expected in visionOS.

♿ Accessibility & Input Flexibility

  • Label all controls with accessibilityLabel()
  • Group elements with accessibilityElement(children: .combine)
  • Support voiceover via landmarks and hinting

Design assuming pointer, gaze, tap, and keyboard input types.

📚 Further Reading & Resources

✅ Suggested TechsWill Posts:

Best Prompt Engineering Techniques for Apple Intelligence and Gemini AI

Illustration showing developers testing and refining AI prompts using Gemini and Apple Intelligence, with prompt templates, syntax panels, and code examples in Swift and Kotlin.

Prompt engineering is no longer just a hacky trick — it’s an essential discipline for developers working with LLMs (Large Language Models) in production. Whether you’re building iOS apps with Apple Intelligence or Android tools with Google Gemini AI, knowing how to structure, test, and optimize prompts can make the difference between a helpful assistant and a hallucinating chatbot.

🚀 What Is Prompt Engineering?

Prompt engineering is the practice of crafting structured inputs for LLMs to control:

  • Output style (tone, length, persona)
  • Format (JSON, bullet points, HTML, markdown)
  • Content scope (topic, source context)
  • Behavior (tools to use, functions to invoke)

Both Apple and Gemini provide prompt-centric APIs: Gemini via the AICore SDK, and Apple Intelligence via LiveContext, AIEditTask, and PromptSession frameworks.

📋 Supported Prompt Modes (2025)

Platform            | Input Types                            | Multi-Turn?                 | Output Formatting
Google Gemini       | Text, Voice, Image, Structured         | Yes (prompt sessions)       | JSON, Markdown, Natural Text
Apple Intelligence  | Text, Contextual UI, Screenshot Input  | Limited (stateless tasks)   | Plain text, System intents

🧠 Prompt Syntax Fundamentals

Define Role + Task Clearly

Always define the assistant’s persona and the expected task.

// Gemini Prompt
You are a helpful travel assistant.
Suggest a 3-day itinerary to Kerala under ₹10,000.
  
// Apple Prompt with AIEditTask
let task = AIEditTask(.summarize, input: paragraph)
let result = await AppleIntelligence.perform(task)
  

Use Lists and Bullets to Constrain Output


"Explain the concept in 3 bullet points."
"Return a JSON object like this: {title, summary, url}"
  

Apply Tone and Style Modifiers

  • “Reword this email to sound more enthusiastic”
  • “Make this formal and executive-sounding”

In this in-depth guide, you’ll learn:

  • Best practices for crafting prompts that work on both Gemini and Apple platforms
  • Function-calling patterns, response formatting, and prompt chaining
  • Prompt memory design for multi-turn sessions
  • Kotlin and Swift code examples
  • Testing tools, performance tuning, and UX feedback models

🧠 Understanding the Prompt Layer

Prompt engineering sits at the interface between the user and the LLM — and your job as a developer is to make it:

  • Precise (what should the model do?)
  • Bounded (what should it not do?)
  • Efficient (how do you avoid wasting tokens?)
  • Composable (how does it plug into your app?)

Typical Prompt Types:

  • Query answering: factual replies
  • Rewriting/paraphrasing
  • Summarization
  • JSON generation
  • Assistant-style dialogs
  • Function calling / tool use

⚙️ Gemini AI Prompt Structure

🧱 Modular Prompt Layout (Kotlin)


val prompt = """
Role: You are a friendly travel assistant.
Task: Suggest 3 weekend getaway options near Bangalore with budget tips.
Format: Use bullet points.
""".trimIndent()
val response = aiSession.prompt(prompt)
  

This style — Role + Task + Format — consistently yields more accurate and structured outputs in Gemini.

🛠 Function Call Simulation


val prompt = """
Please return JSON:
{
  "destination": "",
  "estimated_cost": "",
  "weather_forecast": ""
}
""".trimIndent()
  

Gemini respects formatting when it’s preceded by “return only…” or “respond strictly as JSON.”
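Whichever phrasing you choose, it pays to validate the reply before parsing it. A small Kotlin check using the key names from the template above:

```kotlin
// Verify the model returned every key requested in the JSON template.
fun hasAllKeys(response: String, keys: List<String>): Boolean =
    keys.all { response.contains("\"$it\"") }

val requiredKeys = listOf("destination", "estimated_cost", "weather_forecast")
```

If the check fails, re-prompt with a stricter instruction ("respond strictly as JSON") rather than attempting to parse a malformed reply.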

🍎 Apple Intelligence Prompt Design

🧩 Context-Aware Prompts (Swift)


let task = AIEditTask(.summarize, input: fullEmail)
let summary = await AppleIntelligence.perform(task)
  

Apple encourages prompt abstraction into task types. You specify .rewrite, .summarize, or .toneShift, and the system handles formatting implicitly.

🗂 Using LiveContext


let suggestion = await LiveContext.replySuggestion(for: lastUserInput)
inputField.text = suggestion
  

LiveContext handles window context, message history, and active input field to deliver contextual replies.

🧠 Prompt Memory & Multi-Turn Techniques

Gemini: Multi-Turn Session Example


val session = PromptSession.create()
session.prompt("What is Flutter?")
session.prompt("Can you compare it with Jetpack Compose?")
session.prompt("Which is better for Android-only apps?")
  

Gemini sessions retain short-term memory within prompt chains.
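One way to keep such a chain inside the token budget is a rolling window over recent turns. A Kotlin sketch (the `PromptMemory` class is an illustration, not part of the Gemini SDK):

```kotlin
// Rolling window: keep only the last maxTurns exchanges so multi-turn
// context stays bounded as the conversation grows.
class PromptMemory(private val maxTurns: Int) {
    private val turns = ArrayDeque<String>()

    fun add(turn: String) {
        turns.addLast(turn)
        while (turns.size > maxTurns) turns.removeFirst()
    }

    // Concatenate retained turns into a context block for the next prompt.
    fun asContext(): String = turns.joinToString("\n")
}
```

The same structure can simulate memory on the Apple side, where requests are stateless and the app layer carries the context.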

Apple Intelligence: Stateless + Contextual Memory

Apple prefers stateless requests, but LiveContext can simulate memory via app-layer state or clipboard/session tokens.

🧪 Prompt Testing Tools

🔍 Gemini Tools

  • Gemini Debug Console in Android Studio
  • Token usage, latency logs
  • Prompt history + output diffing

🔍 Apple Intelligence Tools

  • Xcode AI Simulator
  • AIProfiler for latency tracing
  • Prompt result viewers with diff logs

🎯 Common Patterns for Gemini + Apple

✅ Use Controlled Scope Prompts


"List 3 tips for beginner React developers."
"Return output in a JSON array only."
  

✅ Prompt Rewriting Techniques

  • Rephrase user input as an AI-friendly command
  • Use examples inside the prompt (“Example: X → Y”)
  • Split logic: one prompt generates, another evaluates

📈 Performance Optimization

  • Minimize prompt size → strip whitespace
  • Use async streaming (Gemini supports it)
  • Cache repeat prompts + sanitize
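Caching repeat prompts can be as simple as memoizing on a normalized key. A Kotlin sketch; the normalization rules here are an assumption, so tune them to your prompts:

```kotlin
// Cache keyed on a normalized prompt: collapse whitespace and lowercase
// so trivially different phrasings hit the same entry.
class PromptCache {
    private val cache = mutableMapOf<String, String>()

    private fun normalize(p: String) =
        p.trim().replace(Regex("\\s+"), " ").lowercase()

    fun getOrPut(prompt: String, compute: (String) -> String): String =
        cache.getOrPut(normalize(prompt)) { compute(prompt) }
}
```

Normalizing before lookup doubles as the "sanitize" step: stripped whitespace saves tokens and raises the cache hit rate at the same time.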

👨‍💻 UI/UX for Prompt Feedback

  • Always show a spinner or token stream
  • Show “Why this answer?” buttons
  • Allow quick rephrases like “Try again”, “Make shorter”, etc.

📚 Prompt Libraries & Templates

Template: Summarization


"Summarize this text in 3 sentences:"
{{ userInput }}
  

Template: Rewriting


"Rewrite this email to be more formal:"
{{ userInput }}
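Both templates rely on a `{{ userInput }}` placeholder; a tiny renderer is enough to fill them (a sketch, not a templating library):

```kotlin
// Substitute {{ name }} placeholders with values from a map.
fun render(template: String, vars: Map<String, String>): String =
    vars.entries.fold(template) { acc, (k, v) ->
        acc.replace("{{ $k }}", v)
    }
```

Keeping templates as plain strings with named placeholders makes them trivially shareable between the Kotlin and Swift sides of the app.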
  

🔬 Prompt Quality Evaluation Metrics

  • Fluency
  • Relevance
  • Factual accuracy
  • Latency
  • Token count / cost

🔗 Further Reading

✅ Suggested Posts

WWDC 2025: Everything Apple Announced — From Liquid Glass to Apple Intelligence

Infographic showing iPhone, Mac, Apple Watch, and Apple Intelligence icon with the headline “WWDC 2025: Everything Apple Announced”.

Updated: June 2025

Apple’s WWDC 2025 keynote delivered a sweeping update across all platforms — iOS, iPadOS, macOS, watchOS, tvOS, and visionOS — all tied together by a dramatic new design language called Liquid Glass and an expanded AI system branded as Apple Intelligence.

Here’s a full breakdown of what Apple announced and how it’s shaping the future of user experience, productivity, AI integration, and hardware continuity.

🧊 Liquid Glass: A Unified Design System

The new Liquid Glass design system brings translucent UI layers, subtle depth, and motion effects inspired by visionOS to all Apple devices. This includes:

  • iOS 26: Revamped lock screen, dynamic widgets, and app icon behavior
  • macOS Tahoe: Window layering, new dock styles, and control center redesign
  • watchOS 26 & tvOS 26: Glassy overlays with adaptive lighting + haptic feedback

This marks the first platform-wide UI refresh since iOS 7 in 2013, and it’s a bold visual evolution.

📱 iOS 26: AI-Powered and Visually Smarter

iOS 26 debuts with a smarter, more connected OS framework — paired with native on-device AI support. Highlights include:

  • Dynamic Lock Screen: Background-aware visibility adjustments
  • Live Translation in Calls: Real-time subtitle overlays for FaceTime and mobile calls
  • Genmoji: Custom emoji generated via AI prompts
  • Messages 2.0: Polls, filters, and shared group memories
  • Revamped apps: Camera, Phone, and Safari redesigned with gesture-first navigation

💻 macOS 26 “Tahoe”

  • Continuity Phone App: Take and make calls natively from your Mac
  • Refined Spotlight: More accurate search results with embedded previews
  • Games App: New hub for Apple Arcade and native macOS titles
  • Metal 4: Upgraded rendering engine for smoother gameplay and 3D workflows

⌚ watchOS 26

The watchOS update turns your Apple Watch into an even smarter daily companion:

  • Workout Buddy: AI fitness assistant with adaptive coaching
  • Wrist Flick Gestures: One-handed control with customizable actions
  • Smart Stack: Enhanced widget behavior based on context

🧠 Apple Intelligence (AI Framework)

Apple Intelligence is Apple’s on-device AI suite and includes:

  • Live Translation: Real-time interpretation in multiple languages via device-only inference
  • Visual Understanding: Context-aware responses from screenshots, photos, and screens
  • Writing Tools: AI auto-editing, tone correction, and summary generation for email & messages
  • Image Playground: Text-to-image generation with personalization presets

All processing is done using the new Private Cloud Compute system or locally, ensuring data privacy.

🖥️ tvOS 26 + visionOS 26

  • Cinematic UI: Adaptive overlays with content-based color shifts
  • Camera Access in Photos App: Seamlessly import and edit live feeds from other Apple devices
  • Improved Hand Gesture Detection: For visionOS and Apple TV interactions

🛠️ Developer Tools

WWDC 2025 brings developers:

  • Xcode 17.5: Support for Liquid Glass layers, Genmoji toolkits, and AI code completions
  • SwiftUI 6: Multi-platform adaptive layout and AI-gesture bindings
  • Apple Intelligence API: Text summarization, generation, translation, and visual reasoning APIs

🔗 Further Reading

✅ Suggested Posts:

WWDC 2025: Embracing visionOS Across the Apple Ecosystem

Illustration of Apple devices unified under visionOS-inspired design — iPhone, Mac, Apple Watch, and Apple TV in spatial layout.

Updated: May 2025

Apple’s WWDC 2025 sets the stage for its most visually cohesive experience yet. With a clear focus on bringing the immersive feel of visionOS to all major platforms — including iOS 19, iPadOS, macOS, watchOS, and tvOS — Apple is executing a top-down unification of UI across devices.

This post breaks down the key updates you need to know, including spatial design principles, AI advancements, and anticipated developer tools coming with this shift.

🌌 visionOS-Inspired UI for iOS, macOS, and Beyond

Apple plans to roll out visionOS’s spatially fluid UI patterns across all screen-based platforms. Expect updates like:

  • Transparent layering & depth: Card stacks with real-time blur and depth sensing
  • Repositionable windows: Inspired by Vision Pro’s freeform multitasking
  • Refreshed icons & glassmorphism effects for universal app design

This means your iPhone, iPad, and even Apple TV will adopt design cues first seen on the Vision Pro, making transitions across devices feel seamless.

🧠 Apple Intelligence – Smarter and Context-Aware

Apple is enhancing its AI stack under the moniker Apple Intelligence. Here’s what’s coming:

  • Contextual Siri: A more responsive, memory-enabled Siri that recalls prior queries and tasks
  • System-wide summaries: Built-in document and message summarization using on-device AI
  • Generative enhancements: Image generation inside apps like Pages and Keynote

All Apple Intelligence features run on-device (or via Private Cloud Compute) to maintain Apple’s privacy-first approach.

⌚ watchOS and tvOS: Spatial Fluidity + Widget Overhaul

  • watchOS 11: Adaptive widget stacks that change based on motion and time of day
  • tvOS: Transparent UI overlays that blend with media, plus support for eye/gesture tracking in future remotes

These redesigns follow the same principles as visionOS — letting content, not chrome, take center stage.

💼 Developer Tools for Unified Design

To support these changes, Apple is releasing updated APIs and SDKs inside Xcode 17.1:

  • visionKit UI Components: Prebuilt spatial UI blocks now usable in iOS/macOS apps
  • Simulator for Mixed UI Modes: Preview how your app renders across Vision Pro, iPad, and Mac
  • Shared layout engine: Reduce duplicate code with one design spec that adapts per device
