FlowCentral Documentation
The configurable MCP Agent Backend. Run logic locally, invert the cloud, and give your AI tools that actually work.
What is FlowCentral?
FlowCentral is an MCP (Model Context Protocol) Host that enables AI agents to discover and use tools dynamically. Unlike traditional chatbots, FlowCentral gives your AI the ability to do things—control robots, generate images, post to social media, manage infrastructure, and coordinate with other agents.
Drop a Python file into a folder, and it instantly becomes a tool available to AI. No restart required. No complex setup. Just code and execute.
Enterprise teams waste countless hours on manual workflows and disconnected tools. FlowCentral uses MCP to let AI agents discover and orchestrate your existing systems automatically—no complex integrations required. Write a Python function, and it becomes an AI-callable flow instantly.
Installation
FlowCentral consists of two main components: the Python Server (the "Remote") and the Node.js Client (for connecting to Claude Desktop, Cursor, etc.).
Prerequisites
- Python 3.12+ (3.13.9 recommended)
- Node.js & npm (for the MCP client)
- uv / uvx (modern Python package management, recommended)
- npx (comes with Node.js)
Step 1: Clone the Repository
# Clone from GitHub
git clone https://github.com/ProjectAtlantis-dev/flowcentral-mcp-server.git
cd flowcentral-mcp-server
Step 2: Install Python Dependencies
# Using uv (recommended)
uv sync
# Or using pip with venv
cd python-server
python -m venv venv
source venv/bin/activate # On Windows: venv\Scripts\activate
pip install -r requirements.txt
Step 3: Configure the Server
Navigate to the python-server folder and edit the runServer script:
# --email:        your FlowCentral account email
# --api-key:      default dev key (change for production!)
# --service-name: UNIQUE name for this machine
# --host/--port:  where local MCP clients connect
python server.py \
  [email protected] \
  --api-key=foobar \
  --service-name=home_pc \
  --host=localhost \
  --port=8000 \
  --cloud-host=wss://flowcentral.ai \
  --cloud-port=443
The service-name must be unique across all your machines. Use descriptive names like home_pc, work_laptop, or cloud_server.
Step 4: Run the Server
# Make sure you're in the python-server directory with venv activated
source venv/bin/activate # If using venv
./runServer # Or: python server.py ...
Step 5: Create Account & Generate Secure API Key
To use cloud features, you must generate a secure API key:
- Visit flowcentral.ai and sign in with your Google account (use the same email from Step 3)
- In the terminal interface, type /api generate
- Copy the generated API key
- Replace foobar in your runServer script with the new key
- Restart your server - it will auto-connect to the cloud
Never use the default foobar API key in production! Always generate a secure key with /api generate on flowcentral.ai.
Quick Start
Connecting to Claude Desktop / Cursor
To use FlowCentral as a regular standalone MCP server, add the following to your MCP configuration:
{
  "mcpServers": {
    "atlantis": {
      "command": "npx",
      "args": [
        "atlantis-mcp",
        "--port",
        "8000"
      ]
    }
  }
}
Connecting via Claude Code CLI
claude mcp add atlantis -- npx atlantis-mcp --port 8000
Your First Dynamic Function
Create a simple function to test the setup:
import atlantis
@visible
async def hello_world(name: str = "World"):
    """
    A simple greeting function.

    Args:
        name: The name to greet

    Returns:
        A greeting message
    """
    return f"Hello, {name}! Welcome to FlowCentral."
Save the file. The server will automatically detect it and reload. Now your AI can call hello_world!
Changes to files in dynamic_functions/ are detected automatically. No server restart needed!
Architecture: The Inverted Cloud Model
Understanding FlowCentral's architecture is key to unlocking its full potential. Unlike traditional cloud services where your code runs on remote servers, FlowCentral inverts the model—your code runs locally, but can be discovered and called remotely.
Key Components
- Remote (MCP Host): The Python server running on your machine
- Dynamic Functions: Python functions you write that become AI tools
- Dynamic MCP Servers: Third-party MCP tools you can install
- Cloud (flowcentral.ai): Hub for agent discovery and coordination
- Client (npx atlantis-mcp): Node.js client that connects your AI to the remote
Connection Flow
AI Client (Claude Desktop / Cursor)
↓ (stdio)
npx atlantis-mcp client
↓ (WebSocket ws://localhost:8000)
Python Remote (Your Machine)
↕ (Socket.IO wss://flowcentral.ai)
FlowCentral Cloud
MCP terminology is confusing. What we call a "Remote" is technically an MCP "Host" in the spec. We use "Remote" to emphasize that it's a remote-controllable local server.
The MCP Host (Remote)
The Remote is the heart of FlowCentral. It's a Python server that:
- Watches the dynamic_functions/ directory for changes
- Parses Python files to discover tools
- Exposes tools via WebSocket (local) and Socket.IO (cloud)
- Manages third-party MCP servers in dynamic_servers/
- Handles authentication and permissions
Directory Structure
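The key directories referenced throughout this document (layout inferred from the paths mentioned in the text; your checkout may differ):

```
python-server/
├── server.py              # The Remote
├── runServer              # Startup script (email, API key, service name)
├── dynamic_functions/     # Your Python tools (hot reloaded)
└── dynamic_servers/       # Third-party MCP server configs (JSON)
```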
Cloud Connection
The cloud service at flowcentral.ai provides:
- Agent Discovery: Let other developers' agents find your public tools
- Remote Management: Control multiple remotes from a web interface
- Tool Sharing: Share tools with the community, a closed group, or keep them private
- Coordination: Enable multi-agent workflows across machines
By default, all functions are hidden and only accessible to you. You explicitly choose what to share using decorators like @public or @protected.
How Cloud Works
When you start a remote with cloud connection:
- Remote connects to flowcentral.ai via Socket.IO
- Authenticates with your email and API key
- Sends a list of available tools (respecting visibility decorators)
- Receives tool calls from authorized users
- Executes functions locally and returns results
Your code never leaves your machine. The cloud only routes messages and manages discovery.
How Tool Routing Works
When Claude (or any AI) calls a tool through FlowCentral, here's what happens:
Local Mode (Direct Connection)
- AI sends tool call via MCP stdio to npx atlantis-mcp
- Client forwards to ws://localhost:8000
- Remote executes the function
- Result returns to AI
Cloud Mode (Agent Discovery)
- AI sends tool call with compound name (e.g., alice*home_pc*ComfyUI**generate_image)
- Cloud routes to the correct remote
- Remote checks permissions
- If authorized, executes function
- Result returns through cloud to AI
MCP auth is still evolving. FlowCentral uses email-based authentication and decorator-based permissions. Change the default API key (foobar) in production!
Dynamic Functions
Dynamic functions are the core feature of FlowCentral. Drop a Python file into dynamic_functions/, and it instantly becomes an AI tool.
Basic Example
import atlantis
from datetime import datetime
import pytz
@visible
async def get_current_time(timezone: str = "UTC"):
    """
    Get the current time in a specific timezone.

    Args:
        timezone: IANA timezone name (e.g., 'America/New_York', 'Europe/London')

    Returns:
        Current time formatted as a string
    """
    tz = pytz.timezone(timezone)
    now = datetime.now(tz)
    return now.strftime("%Y-%m-%d %H:%M:%S %Z")
Function Requirements
- Must be async def
- Must have a decorator (@visible, @public, or @protected)
- Must have a docstring (used for AI tool description)
- Should have type hints (helps AI understand parameters)
Hot Reloading
The server watches dynamic_functions/ for changes:
- Save a file → Function updates immediately
- Add a new file → New tool appears
- Delete a file → Tool disappears
- Fix a bug → Changes apply on next call
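A minimal polling sketch of this kind of change detection (illustrative only; the server's actual file watcher may work differently, and `snapshot`/`diff_snapshots` are made-up names):

```python
from pathlib import Path

def snapshot(directory: str) -> dict:
    """Map each .py file under `directory` to its modification time."""
    return {
        str(p): p.stat().st_mtime
        for p in Path(directory).rglob("*.py")
    }

def diff_snapshots(old: dict, new: dict):
    """Classify changes between two scans into added/changed/removed files."""
    added = [p for p in new if p not in old]
    removed = [p for p in old if p not in new]
    changed = [p for p in new if p in old and new[p] != old[p]]
    return added, changed, removed
```

Each category maps to a hot-reload action: added files register new tools, changed files reload, removed files unregister their tools.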
Use the built-in logger instead of print() statements, and put import atlantis at the top of your file.
Security & Decorators
All functions are HIDDEN unless explicitly decorated. You must opt-in to expose a function.
Available Decorators
1. @visible (Owner Only)
Function is visible in tool lists but only callable by the remote owner.
import atlantis
@visible
async def restart_server():
    """Restarts the local server. Owner only."""
    import os
    os.system("sudo reboot")
2. @public (Everyone)
Function is visible and callable by anyone connected (local or cloud).
import atlantis
@public
async def get_weather(city: str):
    """Public weather lookup tool."""
    # ... API call ...
    return "Sunny, 72°F"
3. @protected("auth_function") (Custom Auth)
Delegates permission checking to another function. This is the most flexible and powerful option.
IMPORTANT: The auth function must be a separate top-level function (not in any app folder), and the protected function references it by name.
import atlantis
@visible
async def demo_group(user: str):
    """
    Protection function - checks if user is in the demo group.

    Args:
        user: Email of the user trying to access the function

    Returns:
        True if authorized, False otherwise
    """
    allowed_users = ["[email protected]", "[email protected]"]
    return user in allowed_users
import atlantis
@protected("demo_group")
async def generate_4k_image(prompt: str):
    """High-cost image generation. Demo group only."""
    # ... expensive operation ...
    pass
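Conceptually, the server resolves the name passed to @protected and calls that auth function with the caller's identity before running the tool. Here is a simplified sketch of that flow; the registries, function names, and email addresses are illustrative, not FlowCentral internals:

```python
async def demo_group(user: str) -> bool:
    """Auth function: True if the caller may run protected tools."""
    return user in {"alice@example.com", "bob@example.com"}

async def generate_4k_image(prompt: str) -> str:
    """A tool guarded by the demo_group auth function."""
    return f"image for {prompt!r}"

# Illustrative registries: tool name -> (coroutine, auth-function name)
PROTECTED = {"generate_4k_image": (generate_4k_image, "demo_group")}
AUTH_FUNCS = {"demo_group": demo_group}

async def call_protected(name: str, caller: str, **kwargs):
    func, auth_name = PROTECTED[name]
    # Delegate the permission check to the named auth function
    if not await AUTH_FUNCS[auth_name](caller):
        raise PermissionError(f"{caller} may not call {name}")
    return await func(**kwargs)
```

The key point is the indirection: the protected function never checks permissions itself, so the same auth function can guard many tools.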
4. @chat (Chat Handler)
Marks a function as a chat handler that processes conversations. Used with AI chat functions.
5. @session (Session Initialization)
Marks a function that should run when a new session starts. Use for setup, greetings, and UI initialization.
Best Practices
- Start with @visible for testing
- Use @public only for truly public utilities
- Use @protected for business logic that needs custom auth
- Use @chat for AI conversation handlers
- Use @session for session initialization and customization
- Never expose destructive operations as @public
App Organization
Organize your functions into "Apps" using folder structure. The folder name IS the app name.
Folder Structure
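For example, using the app names from the conflict example below:

```
dynamic_functions/
├── Chat/
│   └── send_message.py    # App: Chat
├── Email/
│   └── send_message.py    # App: Email
└── SMS/
    └── send_message.py    # App: SMS
```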
Nested Apps (Subfolders)
You can create nested app structures:
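For instance, the nested path used later in this document (**Examples.FilmFromImage**upscale) would correspond to a layout like:

```
dynamic_functions/
└── Examples/              # App: Examples
    └── FilmFromImage/     # Sub-app: Examples/FilmFromImage
        └── upscale.py     # Tool: **Examples.FilmFromImage**upscale
```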
Why Apps Matter
Apps help disambiguate functions when you have naming conflicts:
- Chat/send_message.py
- Email/send_message.py
- SMS/send_message.py
Without apps, calling send_message would be ambiguous. With apps, you can specify which one.
The old @app(name="...") decorator still works but is not recommended. Just use folders!
Compound Tool Names
When you have multiple remotes or naming conflicts, use compound tool names to route calls precisely.
Format
remote_owner*remote_name*app*location*function
Key Principle
Use the simplest form that resolves uniquely. Only include as much of the path as needed.
Examples
# Simple call (only works if unique)
update_image
# Specify app to disambiguate
**ImageTools**update_image
# Nested app path
**MyApp/SubModule**process_data
# Full routing: owner + remote + app + function
alice*prod*Admin**restart
# Just location context
***office*print
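The examples above can be decomposed mechanically: split on * and pad missing leading fields, so the last segment is always the function name. This parser is a sketch of how a client might read these names, not FlowCentral's actual resolver:

```python
FIELDS = ("owner", "remote", "app", "location", "function")

def parse_compound(name: str) -> dict:
    """Split a compound tool name into its five routing fields.

    Omitted leading fields come back as "" (unspecified/wildcard).
    """
    parts = name.split("*")
    # Pad on the left so the final part is always the function name
    parts = [""] * (len(FIELDS) - len(parts)) + parts
    return dict(zip(FIELDS, parts))
```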
Real-World Example
You have these functions:
- dynamic_functions/Chat/send_message.py
- dynamic_functions/Email/send_message.py
- dynamic_functions/SMS/send_message.py
Calling them:
send_message # Ambiguous! Which one?
**Chat**send_message # Clear! The one in Chat
**Email**send_message # Clear! The one in Email
**SMS**send_message # Clear! The one in SMS
The Atlantis API Module
The atlantis module provides utilities for interacting with the client and managing state.
Client Interaction
import atlantis
# Send log message to user's UI
await atlantis.client_log("Processing your request...")
# Render HTML in user's interface
await atlantis.client_html("<div>Custom UI</div>")
# Trigger client-side commands
await atlantis.client_command("\\input", {"prompt": "Enter name"})
# Send an image to the user
await atlantis.client_image("/path/to/output.png")
Get Current User
import atlantis
username = atlantis.get_caller() or "unknown_user"
await atlantis.client_log(f"Hello, {username}!")
Shared State (Connections Only)
IMPORTANT: atlantis.shared should only be used for persistent connections (like database connections, API clients, file handles), NOT for data storage.
import atlantis
@visible
async def query_database(sql: str):
    """Execute a SQL query."""
    # Check if a DB connection exists
    if not atlantis.shared.get("db"):
        # Create the connection and store it
        import sqlite3
        db = sqlite3.connect("data.db")
        atlantis.shared.set("db", db)

    # Retrieve the connection
    db = atlantis.shared.get("db")
    cursor = db.cursor()
    cursor.execute(sql)
    return cursor.fetchall()
Store: Database connections, API clients, WebSocket connections, file handles.
Don't store: User data, application state, configuration values. Use proper databases and config files instead.
Logging
import atlantis
import logging
logger = logging.getLogger("mcp_client")
@visible
async def process_data(data: str):
    """Process some data."""
    await atlantis.owner_log("Starting data processing")
    try:
        # ... processing ...
        logger.info("Processing complete")
    except Exception as e:
        logger.error(f"Error: {e}")
        raise
Use owner_log for server-side logging visible to the remote owner. Use client_log for messages to the user's chat interface.
UI & Presentation Methods
import atlantis
# Set background image for user interface
await atlantis.set_background("/path/to/image.jpg")
# Send HTML to render in user's interface
await atlantis.client_html("<h1>Hello!</h1>")
# Inject JavaScript into client
await atlantis.client_script("console.log('Hello');")
# Send markdown content
await atlantis.client_markdown("# Heading\n\nContent here")
# Send structured data (tables/charts)
await atlantis.client_data("Sales Data", [{"name": "Alice", "sales": 100}])
Media
# Send image to client
await atlantis.client_image("/path/to/image.png")
# Send video to client
await atlantis.client_video("/path/to/video.mp4")
Interactive Elements
# Register onclick callback
await atlantis.client_onclick("my_button", my_callback_function)
# Register file upload callback
await atlantis.client_upload("file_upload", handle_upload)
Tool Results
# Add tool result to conversation transcript
await atlantis.tool_result("function_name", result_data)
Client Commands
Use atlantis.client_command() to send special commands to the client and get responses. Commands are prefixed with \.
Transcript Management
\transcript get
Retrieves the full conversation transcript including chat messages, tool calls, and metadata.
import atlantis
transcript = await atlantis.client_command("\\transcript get")
# Returns: [{"type": "chat", "role": "user", "content": "Hello"}, ...]
\tool llm
Gets the list of available tools formatted for LLM function calling (OpenAI format).
tools = await atlantis.client_command("\\tool llm")
# Returns: [{"type": "function", "function": {"name": "...", "parameters": {...}}}, ...]
Silent Mode
\silent on / \silent off
Controls whether commands produce UI feedback. Use \silent on before internal operations, then \silent off when done.
# Enable silent mode for background operations
await atlantis.client_command("\\silent on")
# Do internal work without UI noise
transcript = await atlantis.client_command("\\transcript get")
tools = await atlantis.client_command("\\tool llm")
# Re-enable UI feedback
await atlantis.client_command("\\silent off")
Chat Routing
\chat set <target>
Routes chat messages to a specific function. Use compound names to specify owner/remote/function.
# Route chat to 'assistant' function on owner's default remote
owner_id = atlantis.get_owner()
await atlantis.client_command(f"\\chat set {owner_id}*assistant")
# Route chat to a specific function
await atlantis.client_command("\\chat set my_chat_handler")
User Input
\input
Prompts the user for text input and waits for response.
name = await atlantis.client_command("\\input", {"prompt": "What's your name?"})
await atlantis.client_log(f"Hello, {name}!")
Most client commands are for advanced use cases. Start with atlantis.client_log(), atlantis.client_html(), and basic API methods before diving into commands.
Cloud & Web Interface
The flowcentral.ai web interface provides centralized management for your remotes, tool sharing, and multi-agent coordination.
Getting Started with Cloud
- Sign up at flowcentral.ai (Google auth)
- Start your remote with cloud connection (see Installation)
- Your remote auto-connects using email + API key
- First remote becomes your "default" automatically
Cloud Features
Remote Management
- View All Remotes: See all your connected machines in one place
- Monitor Status: Check which remotes are online/offline
- Switch Remotes: Change your default remote for tool calls
- Multi-Machine Setup: Run remotes on different computers, access from anywhere
Tool Discovery & Sharing
- Public Tools: Share @public functions with the community
- Protected Tools: Use @protected("auth_func") for group access
- Private by Default: All functions hidden unless explicitly shared
- Browse Tools: Discover tools shared by other developers
Agent Coordination
- Cross-Remote Calls: AI can invoke tools across multiple machines
- Routing: Use compound names to target specific remotes
- Collaboration: Multiple agents working together on complex tasks
Terminal Commands Reference
When connected to flowcentral.ai terminal, use slash commands (/) to manage remotes, functions, and tools. Commands follow Unix-like conventions.
- Generate API key: /api generate
- Update all remotes with new key (they auto-detect)
- First connected remote becomes your "default"
Essential Commands
/help # Show all available commands
/whoami # Show current user
/api generate # Generate new API key for remotes
/api set <key> # Set API key manually
/remote list # List all your remotes
/remote refresh <name> # Refresh specific remote
/remote refresh_all # Refresh all remotes
/remote clear <name> # Remove/ignore a remote
Tool Discovery & Navigation
Unix-style commands to browse and find functions:
/ls [searchTerm] # List tools (like ls in Unix)
/ll [searchTerm] # List tools by date
/dir [searchTerm] # Expanded tree view
/cat <funcName> # Print function source
/pwd # Print working directory (current path)
/cd <dirTerm> # Navigate tool directories
/which <searchTerm> # Show which tool resolves
# Search and filter
/search <filter> # Search everywhere for tools
/tool info [searchTerm] # List tools with descriptions
/tool list [searchTerm] # Detailed tool list with params
Calling Functions Manually
Use @ for regular calls or % for absolute path calls:
# Regular call (uses working path/context)
@myFunction # Simple call, no params
@myFunction 3 4 # Positional params
@myFunction {x:3, y:4} # Named params (JSON)
@myFunction(3, 4) # Function call syntax
# Absolute path call (ignores working path)
%myFunction # Force absolute resolution
%owner*remote*app**func # Full compound name
# Compound names for disambiguation
@**ImageTools**update_image # Specify app to avoid conflicts
@alice*prod*Admin**restart # Full routing path
# Formal syntax (equivalent to @ shorthand)
/tool call myFunction 3 4
@ - Relative to current working path (respects /cd navigation)
% - Absolute path (ignores context, used by AI for reliability)
Search Terms & Tool Specs
searchTerm uses wildcards to find tools:
# Format: user*remote*app*location*function
update_image # Simple name (if unique)
**ImageTools**update_image # Specify app
alice*prod*Admin**restart # Full path with owner+remote
**Examples.FilmFromImage**upscale # Nested app (use dots for subfolders)
# Use % as wildcard in searches
*%*%Image% # All tools with "Image" in name
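Assuming % behaves like SQL LIKE's wildcard within a segment, the matching can be sketched as a regex translation (illustrative only; the terminal's actual matcher may differ, e.g. in case sensitivity):

```python
import re

def search_to_regex(term: str) -> re.Pattern:
    """Translate a searchTerm segment with % wildcards into a regex.

    Literal characters are escaped; each % becomes "match anything".
    """
    pattern = ".*".join(re.escape(chunk) for chunk in term.split("%"))
    return re.compile(f"^{pattern}$")

def matches(term: str, name: str) -> bool:
    """True if `name` satisfies the wildcard search term."""
    return search_to_regex(term).match(name) is not None
```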
Function & App Management
/add <name> [remoteName] # Add new function (creates stub)
/app create <name> [remote] # Create new app (folder)
/app list [searchTerm] # List all apps
/app enter <owner> <name> # Enter an app context
/edit <searchTerm> # Edit function in web editor
/tool copy <from> <to> # Copy tool between remotes
MCP Server Management
/mcp list [searchTerm] # List MCP servers
/mcp add <name> [remote] # Add MCP server (creates config)
/mcp start <searchTerm> # Start MCP server
/mcp stop <searchTerm> # Stop MCP server
/mcp tool <searchTerm> # List tools from specific MCP
# Shortcuts
/mls # Same as /mcp list
/tls # Show MCP tools
Chat & Session Management
/chat set <chatPath> # Route chat to specific function
/chat list # List available chat handlers
/chat show # Show current chat handler
/chat clear # Clear chat routing
/chat edit # Edit current chat handler
/session set <sessionPath> # Set session initialization function
/session list # List available session handlers
/session show # Show current session handler
/session clear # Clear session handler
Advanced Features
# Path management (tool collections)
/path add <name> <searchTerm> # Add tools to named path
/path list <name> # Show path tools
/path llm <name> # Export path for LLM use
/path clear <name> # Clear path
# History and SQL
/history [funcTerm] # Recent function calls (max 30)
/select * from prior # SQL against previous table results
# Environment config
/env show [scopeName] # Show environment variables
/env save # Save env configuration
/env load # Load env configuration
# Other
/silent on # Enable silent mode (no UI feedback)
/silent off # Disable silent mode
/transcript get # Get conversation transcript
/color list # Show available colors
- Use /ls, /cd, /pwd like Unix file navigation
- Start with /api generate to get secure credentials
- Use @ for quick manual testing of functions
- Compound names resolve ambiguity: **App**function
- Results tables support SQL: /select * from prior
You control what gets shared. By default, everything is private. Only functions with @public or @protected are discoverable through the cloud.
Dynamic MCP Servers
You can run third-party MCP servers inside FlowCentral. This lets you integrate existing MCP tools without writing custom code.
Checking Your Remotes
Before adding MCP servers, verify your remotes are connected and healthy:
# List all remotes with detailed info
/rls # Shortcut for /remote list
/remote list # Shows: name, status, owner, last seen
# Refresh remotes if status is stale
/remote refresh <remoteName> # Refresh specific remote
/remote refresh_all # Refresh all remotes
# Restart a remote if it's misbehaving
/remote restart <remoteName> # Restart the remote process
/rls shows which remotes are online, their owners, when they last connected, and how many tools they expose. Use this to verify connectivity before adding MCP servers.
How It Works
- Add MCP server config via /mcp add <name>, or manually create JSON in dynamic_servers/
- Start the server with /mcp start <name>
- FlowCentral spawns the MCP server process and connects to it
- Tools from that server become available alongside your functions
- Manage the lifecycle with /mcp stop and /mcp start
Managing MCP Servers from Terminal
# List all MCP servers
/mcp list # or /mls (shortcut)
# Add new MCP server (creates stub config)
/mcp add weather [remoteName] # Creates JSON config file
# Edit the config
/edit weather # Opens web editor for JSON config
# Start/stop servers
/mcp start weather # Start the MCP server process
/mcp stop weather # Stop the MCP server process
# View tools from specific MCP
/mcp tool weather # List tools provided by weather server
/tls # Show all MCP tools (shortcut)
Example: Weather MCP Server
{
  "mcpServers": {
    "weather": {
      "command": "uvx",
      "args": [
        "--from",
        "atlantis-open-weather-mcp",
        "start-weather-server",
        "--api-key",
        "<your_openweather_api_key>"
      ]
    }
  }
}
Workflow: Adding a New MCP Server
- Check remote status: /rls to verify your remote is online
- Create config: /mcp add weather myremote
- Edit config: /edit weather and add the proper command/args
- Start server: /mcp start weather
- Verify tools: /mcp tool weather to see available tools
- Test: use @weatherServerFunction to call tools manually
Installing From Claude Code CLI
Alternative method using Claude Code's MCP management:
claude mcp add --transport stdio weather_forecast \
--env OPENWEATHER_API_KEY=mykey123 \
-- uvx --from atlantis-open-weather-mcp start-weather-server
Available Third-Party Servers
Check the Awesome MCP Servers list for available integrations:
- Databases: PostgreSQL, MySQL, SQLite
- APIs: GitHub, Slack, Google Drive, Linear
- Search: Brave Search, Google, Perplexity
- DevOps: Docker, Kubernetes, AWS
- Files: Filesystem access, Git operations
- Web: Puppeteer, Playwright for browser automation
If an MCP server won't start, use /cat <mcpname> to check the JSON config for syntax errors. Ensure command and args are correct and any required API keys are set.
File Mapping System
The File Mapping system is FlowCentral's "single source of truth" for what tools exist and how they're exposed to AI.
How It Works
- Server watches
dynamic_functions/for changes - Parses Python files to extract function metadata
- Builds a mapping of available tools
- Updates when files change (hot reload)
What Gets Extracted
- Function name
- Docstring (becomes tool description for AI)
- Parameters and type hints
- Decorator (visibility level)
- App name (from folder structure)
Tool Schema Generation
The file mapping generates MCP-compatible tool schemas that AI clients can discover:
{
  "name": "get_current_time",
  "description": "Get the current time in a specific timezone.",
  "inputSchema": {
    "type": "object",
    "properties": {
      "timezone": {
        "type": "string",
        "description": "IANA timezone name",
        "default": "UTC"
      }
    }
  }
}
Good docstrings and type hints translate directly to better AI tool usage. Be specific about parameter formats, valid values, and return types.
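The mapping from a function's signature to a schema like the one shown above can be sketched with the standard inspect module. This is illustrative, not FlowCentral's actual extractor, and it only handles a few basic types:

```python
import inspect

# Assumed mapping from Python annotations to JSON Schema types
PY_TO_JSON = {str: "string", int: "integer", float: "number", bool: "boolean"}

def tool_schema(func) -> dict:
    """Build an MCP-style tool schema from a function's signature and docstring."""
    props = {}
    for name, param in inspect.signature(func).parameters.items():
        prop = {"type": PY_TO_JSON.get(param.annotation, "string")}
        if param.default is not inspect.Parameter.empty:
            prop["default"] = param.default
        props[name] = prop
    return {
        "name": func.__name__,
        "description": inspect.getdoc(func) or "",
        "inputSchema": {"type": "object", "properties": props},
    }

async def get_current_time(timezone: str = "UTC"):
    """Get the current time in a specific timezone."""

schema = tool_schema(get_current_time)
```

This is why docstrings and type hints matter: they flow directly into the description and parameter types the AI sees.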
Function Examples Repository
The atlantis-mcp-function-examples repository contains production-ready examples demonstrating best practices.
Installation
# Clone into your dynamic_functions directory
cd /path/to/flowcentral-mcp-server/python-server/dynamic_functions
git clone https://github.com/ProjectAtlantis-dev/atlantis-mcp-function-examples.git
What's Included
- Marketing Automation (marketing/) - Social media management with AI-generated content
- ComfyUI Integration (comfyui_stuff/) - Video generation, audio synthesis, image editing
- Bug Report System (bug_reports/) - Intelligent bug tracking with AI-assisted resolution
Key Concepts Demonstrated
- File Upload Patterns: Base64 encoding, HTML file inputs, preview UIs
- Custom UI Injection: HTML/CSS/JavaScript rendering in user interface
- External API Integration: ComfyUI, social media APIs, LLMs
- Database Patterns: SQLite for persistent data
- Progress Tracking: Long-running operations with status updates
- Error Handling: User-friendly error messages
Example: Interactive Bug Management UI
The manage_bug_reports() function demonstrates advanced interactive UIs with navigation, forms, and real-time updates.
What It Does
Displays a sophisticated card-based interface for triaging bug reports - scroll through bugs one at a time, set severity/category, view screenshots, and take actions (Save, Dismiss, Resolve).
Key Features
- Card-Based Navigation: Prev/Next buttons to scroll through bug reports
- Rich Data Display: Shows title, description, reproduction steps, logs, screenshots, metadata
- Interactive Elements: Dropdowns for severity/category, action buttons with distinct styling
- Screenshot Embedding: Base64-encoded images displayed inline
- Client-Side JavaScript: Event handlers for navigation and form submission
It shows how to build production-grade admin interfaces with complex interactions, multiple UI states, and seamless server integration—all within the FlowCentral framework. Perfect for workflow tools, dashboards, and management interfaces.
Example: ComfyUI Image Generation
The ComfyUI integration demonstrates AI-powered image generation with HTTP polling for completion status and base64-encoded image display.
Key Features
- ComfyUI Workflow: Embeds complete workflow JSON directly in function
- HTTP Polling: Monitors generation progress via REST API
- Seed Randomization: Ensures unique generations each time
- Base64 Image Display: Shows results inline without file management
- Multi-stage Pipeline: Upscaling, background removal, and refinement
Implementation Pattern
import atlantis
import requests
import base64
import time
@visible
async def create_image():
    """Creates an image using a ComfyUI workflow"""
    await atlantis.client_log("Starting generation...")

    # ComfyUI server address
    server_address = "localhost:8188"

    # Embedded workflow with KSampler, VAE, upscaling, etc.
    workflow = {...}  # Your workflow JSON

    # Queue the workflow
    response = requests.post(f"http://{server_address}/prompt", json={"prompt": workflow})
    prompt_id = response.json()["prompt_id"]

    # Poll for completion (max 2 minutes)
    start_time = time.time()
    while time.time() - start_time < 120:
        history = requests.get(f"http://{server_address}/history/{prompt_id}")
        if prompt_id in history.json():
            break
        time.sleep(3)

    # Download and encode the image (params: filename/subfolder/type from the history response)
    img_response = requests.get(f"http://{server_address}/view", params=params)
    image_base64 = base64.b64encode(img_response.content).decode('utf-8')

    # Display inline
    await atlantis.client_html(f'<img src="data:image/png;base64,{image_base64}" />')
    return "Image generated!"
Example: Chat Functions
The chat function examples demonstrate production-ready AI chat assistants with personality, tool calling, session management, and streaming responses.
Key Features
- @chat + @public Decorators: Public chat endpoint with conversation handling
- Character System Prompt: Detailed personality definition for consistent roleplay
- Busy Flag Pattern: Prevents concurrent execution conflicts
- Session Management: File-based persistence of conversation history per session
- Tool Integration: AI can discover and call other FlowCentral functions dynamically
- Streaming Responses: Real-time token-by-token output
Implementation Pattern
import atlantis
from openai import OpenAI
_chat_busy = False
@public
@chat
async def my_assistant():
    """Main chat function"""
    global _chat_busy

    # Prevent concurrent execution
    if _chat_busy:
        return
    _chat_busy = True

    try:
        # Get the transcript silently
        await atlantis.client_command("\\silent on")
        transcript = await atlantis.client_command("\\transcript get")
        tools = await atlantis.client_command("\\tool llm")
        await atlantis.client_command("\\silent off")

        # Start streaming
        streamId = await atlantis.stream_start("assistant", "Assistant")

        # Call the LLM with tools
        client = OpenAI(base_url="https://openrouter.ai/api/v1", api_key=...)
        stream = client.chat.completions.create(
            model="anthropic/claude-3.5-sonnet",
            messages=transcript,
            tools=tools,
            stream=True
        )
        for chunk in stream:
            if chunk.choices[0].delta.content:
                await atlantis.stream(chunk.choices[0].delta.content, streamId)
        await atlantis.stream_end(streamId)
    finally:
        _chat_busy = False
It demonstrates a complete production chat system with personality, tool integration, and robust patterns for preventing race conditions. Perfect for building custom AI assistants, customer service bots, or interactive NPCs.
Example: Interactive Forms with Callbacks
The customForm.py example shows how to create interactive forms with client-side JavaScript and server-side callbacks.
What It Does
Displays a form with custom UI styling, accepts user input, and processes it via a callback function. Essential for building business workflows that need user input.
Key Features
- Procedural UI Generation: Creates unique form IDs to prevent conflicts
- Custom Styling: Full control over form appearance via CSS
- Client-Side JavaScript: Uses client_script() to inject event listeners
- Callback Pattern: Form submission triggers another FlowCentral function
Implementation Highlights
1. Form Display Function
import atlantis
import uuid
@visible
async def customForm():
    """Display a custom form with interactive elements"""
    caller = atlantis.get_caller()
    session_id = atlantis.get_session_id()

    # Generate a unique form ID
    FORM_ID = str(uuid.uuid4()).replace('-', '')[:8]

    # Inject HTML with unique IDs
    htmlStr = f"""
    <div class="form-container">
        <input type="text" id="input_{FORM_ID}" placeholder="Enter value...">
        <button id="button_{FORM_ID}">Submit</button>
    </div>
    """
    await atlantis.client_html(htmlStr)

    # Inject JavaScript to handle button clicks
    miniscript = f"""
    const inputField = document.getElementById('input_{FORM_ID}');
    const okButton = document.getElementById('button_{FORM_ID}');
    okButton.addEventListener('click', async function() {{
        let data = {{ customText: inputField.value }}
        await sendChatter(eventData.connAccessToken, '@*submitForm', data)
    }})
    """
    await atlantis.client_script(miniscript)
2. Form Callback Function
@visible
async def submitForm(customText: str):
    """Process form submission"""
    await atlantis.client_log(f"Form submitted with: {customText}")
    # ... process the input ...
This two-function pattern (display + callback) is essential for interactive UIs. The display function shows the UI, and the callback processes user actions. Use this for approval workflows, data entry, and multi-step processes.
Example: File Uploads with Processing
The frames_to_video function demonstrates file uploads, custom auth, and external API integration—common patterns for enterprise tools.
What It Does
Accepts file uploads from users, processes them via an external service, and returns results inline. Applicable to document processing, image conversion, data import/export, and more.
Key Features
- Custom Auth: Uses @protected("demo_group") for group-based access control
- File Upload UI: Custom HTML interface for uploading files with live previews
- External API Integration: Connects to processing servers via HTTP/WebSocket
- Progress Tracking: Real-time status updates during processing
- Inline Results: Displays processed output directly in the user interface
Implementation Highlights
1. Protected Decorator with Custom Auth
import atlantis

@protected("demo_group")  # Only users in group can call
async def process_files(prompt: str = "default"):
    """
    Process uploaded files.
    Only accessible to authorized team members.
    """
    username = atlantis.get_caller() or "unknown_user"
    await atlantis.owner_log(f"process_files called by {username}")
2. Robust Server Polling with Failure Detection
Always implement failure detection when polling external services. Without it, your function will hang indefinitely if the server crashes!
# Poll for completion with failure detection
max_wait_time = 3600  # 1 hour max
start_time = time.time()
consecutive_failures = 0
max_consecutive_failures = 5  # Bail after 5 failures

while time.time() - start_time < max_wait_time:
    try:
        response = requests.get(
            f"http://{server_address}/status/{job_id}",
            timeout=30
        )
        if response.status_code == 200:
            data = response.json()
            consecutive_failures = 0  # Reset on success
            if data.get("complete"):
                break  # Job complete!
        else:
            consecutive_failures += 1
    except requests.RequestException as e:
        consecutive_failures += 1
        logger.warning(f"Request error: {e}")

    if consecutive_failures >= max_consecutive_failures:
        await atlantis.client_log("Server appears to be down")
        return  # Exit gracefully

    await asyncio.sleep(5)
3. File Upload Handler
// File upload button click handler
sendButton.addEventListener('click', async function() {
    const base64Content = await readFileAsBase64(file);

    // Use studioClient.sendRequest for file uploads
    await studioClient.sendRequest("engage", {
        accessToken: "{UPLOAD_ID}",  // Matches callback key
        mode: "upload",
        content: "not used",
        data: {
            base64Content: base64Content,
            filename: file.name,
            filetype: file.type
        }
    });
});
4. Auth Function Example
import atlantis

@visible
async def demo_group(user: str):
    """
    Protection function - checks if user is in the authorized group.

    Args:
        user: email/username of the user trying to access the function

    Returns:
        True if authorized, False otherwise
    """
    allowed_users = [
        "[email protected]",
        "[email protected]"
    ]
    return user in allowed_users
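For intuition, here is one way a decorator like @protected could delegate to a protection function such as demo_group. This is purely a sketch: FlowCentral's actual dispatch is internal to the framework, and the registry, email, and caller keyword here are illustrative assumptions:

```python
import asyncio
import functools

# Illustrative registry mapping group names to protection functions
AUTH_FUNCS: dict = {}

def protected(group: str):
    """Only run the wrapped function if the group's check approves the caller."""
    def decorator(func):
        @functools.wraps(func)
        async def wrapper(*args, caller: str, **kwargs):
            if not await AUTH_FUNCS[group](caller):
                raise PermissionError(f"{caller} is not in {group}")
            return await func(*args, **kwargs)
        return wrapper
    return decorator

async def demo_group(user: str) -> bool:
    """Protection function: simple allow-list membership check."""
    return user in {"alice@example.com"}  # illustrative address

AUTH_FUNCS["demo_group"] = demo_group

@protected("demo_group")
async def process_files(prompt: str = "default") -> str:
    return f"processed: {prompt}"
```

The key idea is that the protection function is just another async function: the decorator awaits it with the caller's identity and refuses to run the tool if it returns False.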
This example demonstrates advanced patterns like custom auth, file uploads, external API integration, progress tracking, and custom UI rendering—all essential for production-grade enterprise tools.
Example: Marketing Automation
The Marketing suite demonstrates automated content generation with AI and multi-platform integration—a practical example of monetizable tools.
Features
- Brand Management: Create and store multiple brand profiles with voice/values
- AI Content Generation: Platform-optimized posts using LLM APIs
- Multi-Platform Support: Twitter/X, LinkedIn, Facebook, Instagram
- Content Repurposing: Turn blog posts into multiple social posts
Key Functions
Create Brand Profile
@visible
async def create_brand_config():
    """
    Create/update brand profile with custom form UI.

    Stores: brand_id, name, description, target_audience,
    brand_voice, key_messages, products/services, etc.
    """
    # Renders HTML form with all brand fields
    # Saves to marketing_brands.json
Generate Social Post
@visible
async def generate_social_post(
    brand_id: str,
    topic: str,
    platform: str,
    tone: str = None
):
    """
    Generate platform-optimized social media post.

    Platforms: twitter, linkedin, instagram, facebook
    Uses LLM API with brand context injection.
    """
    # Load brand profile
    brand = load_brand(brand_id)

    # Build prompt with brand context
    prompt = build_prompt(brand, topic, platform, tone)

    # Call LLM API
    post = await llm_generate(prompt)
    return post
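The load_brand call above implies simple JSON persistence. A minimal sketch, assuming profiles are stored as a dict keyed by brand_id in marketing_brands.json (the file name comes from the create_brand_config comment; the helper names and schema are illustrative):

```python
import json
from pathlib import Path

# Storage file name taken from the create_brand_config comment above
BRANDS_FILE = Path("marketing_brands.json")

def save_brand(brand: dict) -> None:
    """Persist a brand profile, keyed by its brand_id."""
    brands = json.loads(BRANDS_FILE.read_text()) if BRANDS_FILE.exists() else {}
    brands[brand["brand_id"]] = brand
    BRANDS_FILE.write_text(json.dumps(brands, indent=2))

def load_brand(brand_id: str) -> dict:
    """Look up a stored brand profile; raises KeyError if unknown."""
    brands = json.loads(BRANDS_FILE.read_text()) if BRANDS_FILE.exists() else {}
    return brands[brand_id]
```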
Platform Best Practices (Built-In)
- Twitter/X: 100-280 chars, 1-2 hashtags, front-load key info
- LinkedIn: 800-1600 chars, first 210 critical, 3-5 hashtags
- Instagram: 138-150 chars shown, 5-10 hashtags
- Facebook: 40-80 chars (shorter is better!), 1-2 hashtags
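These ranges can be encoded in a small lookup to validate generated posts before publishing—a sketch that simply restates the recommendations above as data (the function name is illustrative):

```python
# Recommended character ranges per platform, from the best-practices list above
PLATFORM_LIMITS = {
    "twitter": (100, 280),
    "linkedin": (800, 1600),
    "instagram": (138, 150),
    "facebook": (40, 80),
}

def within_limits(platform: str, post: str) -> bool:
    """Check a generated post against the recommended length range."""
    lo, hi = PLATFORM_LIMITS[platform]
    return lo <= len(post) <= hi
```

A check like this makes a good guard after llm_generate: regenerate (or truncate) any post that falls outside its platform's range.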
This pattern—brand config + AI generation + multi-platform output—is a template for any content automation tool. Charge per generation, per brand, or via subscription.
Example: Workflow System (Bug Tracker)
A complete workflow system demonstrating role-based access, status management, and AI integration. Adaptable to any approval workflow, ticketing system, or task management use case.
Roles & Workflows
Users
- report_bug() - Submit items with attachments and system info
Managers
- manage_bug_reports() - Triage items, set priority/category
- assign_bugs_interactive() - Assign to team members or AI
- team_bug_dashboard() - View team workload
Workers
- my_assigned_bugs_html() - View assigned items with full details
- update_my_bug_progress() - Update status and add notes
Reviewers
- bugs_ready_for_testing() - Verify work, approve or send back
- audit_resolved_bugs() - View completed item history
AI Assistants
- get_bugs_for_ai() - Get items as structured JSON
- get_bug_details(bug_id) - Get full item information
- ai_fix_bug(bug_id, notes) - Mark as completed after work
Item Lifecycle
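One way to encode a lifecycle like this is a transition table. The state names below are assumptions inferred from the role functions above (assignment, progress updates, review, resolution), not FlowCentral-defined statuses:

```python
# Allowed status transitions (illustrative state names)
TRANSITIONS = {
    "New": {"Assigned"},
    "Assigned": {"In Progress"},
    "In Progress": {"Ready for Testing"},
    "Ready for Testing": {"Resolved", "Assigned"},  # reviewers approve or send back
}

def can_transition(current: str, new: str) -> bool:
    """Check whether a status change is allowed by the workflow."""
    return new in TRANSITIONS.get(current, set())
```

Centralizing the table means every role function (human or AI) enforces the same workflow instead of each one hard-coding its own status checks.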
AI Integration Example
# 1. AI gets assigned items
items = await get_bugs_for_ai(status="Assigned", priority="High")

for item in items:
    # 2. Get full details
    details = await get_bug_details(item["bug_id"])

    # 3. Analyze and process
    result = await process_item(details)

    # 4. Mark as complete
    await ai_fix_bug(
        item["bug_id"],
        f"Completed: {result.summary}"
    )
This pattern—role-based functions, status workflows, and AI integration—applies to any business process: support tickets, document approvals, order processing, HR requests, and more. Build it once, adapt it everywhere.