multilingualprogramming Reference¶
This document is the detailed reference for the project.
Overview¶
multilingualprogramming is a Python framework for multilingual programming. It supports writing source code with keywords, numerals, and literals from multiple human languages while mapping everything to a shared semantic model.
Python compatibility baseline:

- compatibility_matrix.md
- compatibility_roadmap.md
Version and release status:
- Package version: multilingualprogramming/version.py
- Release notes: CHANGELOG.md
Supported Languages¶
- English
- French
- Spanish
- German
- Italian
- Portuguese
- Polish
- Dutch
- Swedish
- Danish
- Finnish
- Hindi
- Arabic
- Bengali
- Tamil
- Chinese (Simplified)
- Japanese
Core Components¶
Numeral System¶
- `MPNumeral`
- `UnicodeNumeral`
- `RomanNumeral`
- `ComplexNumeral`
- `FractionNumeral`
- `NumeralConverter`
Key capabilities:
- arithmetic across numeral scripts
- conversion across scripts
- Unicode fraction handling
- scientific notation helpers
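As a rough illustration of cross-script numeral handling (plain Python, not the `MPNumeral` API): the built-in `int()` already parses Unicode decimal digits, and `str.translate()` can render a result back into another script.

```python
# Plain-Python sketch of cross-script digit handling (not the MPNumeral API):
# int() parses Unicode decimal digits, str.translate() renders them back.
DEVANAGARI_ZERO = 0x0966  # code point of Devanagari digit zero

def to_devanagari(n: int) -> str:
    """Render a non-negative integer with Devanagari digits."""
    table = {ord(str(d)): chr(DEVANAGARI_ZERO + d) for d in range(10)}
    return str(n).translate(table)

print(int("१२३"))              # 123
print(to_devanagari(123 + 7))  # १३०
```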
Keyword and Concept Model¶
- `KeywordRegistry`
- `KeywordValidator`
Key capabilities:
- concept -> keyword lookup (`COND_IF` -> `if`, `si`, etc.)
- keyword -> concept reverse lookup
- supported-language discovery
- ambiguity/completeness checks
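The concept/keyword mapping can be pictured as a two-way table. A toy dict-based sketch (illustrative only; the real `KeywordRegistry` API may differ):

```python
# Toy two-way concept/keyword table (illustrative only; the real
# KeywordRegistry/KeywordValidator live in the package and may differ).
CONCEPT_KEYWORDS = {
    "COND_IF": {"en": "if", "fr": "si"},
    "LOOP_FOR": {"en": "for", "fr": "pour"},
}

# keyword -> (concept, language) reverse lookup
REVERSE = {
    kw: (concept, lang)
    for concept, langs in CONCEPT_KEYWORDS.items()
    for lang, kw in langs.items()
}

print(CONCEPT_KEYWORDS["COND_IF"]["fr"])  # si
print(REVERSE["pour"])                    # ('LOOP_FOR', 'fr')
```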
Date and Time¶
- `MPDate`
- `MPTime`
- `MPDatetime`
Key capabilities:
- multilingual parsing and formatting
- script-aware numeric rendering
Frontend (Lexing, Parsing, Semantic Analysis)¶
- `Lexer`
- `Parser`
- AST node model in `multilingualprogramming/parser/ast_nodes.py`
- `core.semantic_analyzer.SemanticAnalyzer`
- `ASTPrinter`
Key capabilities:
- Unicode-aware tokenization
- AST generation from multilingual source
- scope and symbol checks
- multilingual semantic error messages
Runtime and Execution¶
- `PythonCodeGenerator`
- `RuntimeBuiltins`
- `ProgramExecutor`
- `REPL`
Key capabilities:
- transpile multilingual semantic IR to Python source
- execute full pipeline: source -> tokens -> optional normalization -> AST -> IR -> checks -> Python/WAT -> runtime
- inject multilingual runtime builtins
- interactive REPL with language switching and Python-preview mode
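The transpile step can be pictured as keyword substitution followed by ordinary Python execution. A deliberately naive text-level sketch (the real `PythonCodeGenerator` lowers the semantic IR, not raw text):

```python
import re

# Naive French -> Python keyword substitution. The real PythonCodeGenerator
# lowers a semantic IR rather than rewriting text, but its output is
# ordinary Python source much like the result below.
FR_TO_PY = {"afficher": "print", "pour": "for",
            "dans": "in", "intervalle": "range"}

def transpile(source: str) -> str:
    return re.sub(r"\w+", lambda m: FR_TO_PY.get(m.group(0), m.group(0)), source)

src = """somme = 0
pour i dans intervalle(4):
    somme = somme + i
afficher(somme)
"""
exec(transpile(src))  # prints 6
```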
AI Runtime¶
- `AIRuntime` — singleton registry that dispatches AI calls to the active provider
- `AIProvider` — abstract base class for LLM backends
- `AnthropicProvider` — concrete provider backed by the Anthropic Messages API

Key capabilities:

- `prompt(model, template)` — single-turn text completion
- `think(model, template)` — extended chain-of-thought reasoning (returns `Reasoning`)
- `generate(model, template, target_type)` — structured / JSON-mode generation
- `stream(model, template)` — token-by-token streaming (returns `Iterator[StreamChunk]`)
- `embed(model, text)` — text embedding (returns `EmbeddingVector`)
- `extract` / `classify` / `plan` / `transcribe` / `retrieve` — specialised AI operations
- provider registration: `AIRuntime.register(AnthropicProvider())`

Model reference literals in source (`@claude-sonnet`, `@claude-haiku`, …) resolve to full model IDs via `AnthropicProvider._MODEL_ALIASES`.
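The registry-and-dispatch shape can be sketched as follows, with a stub provider standing in for the Anthropic-backed one (the names mirror the list above, but the details are assumptions):

```python
from abc import ABC, abstractmethod

class AIProviderSketch(ABC):
    """Abstract backend, in the spirit of AIProvider."""
    @abstractmethod
    def prompt(self, model: str, template: str) -> str: ...

class EchoProvider(AIProviderSketch):
    """Stand-in for AnthropicProvider: echoes instead of calling an API."""
    def prompt(self, model, template):
        return f"[{model}] {template}"

class AIRuntimeSketch:
    """Singleton-style registry forwarding calls to the active provider."""
    _provider = None

    @classmethod
    def register(cls, provider):
        cls._provider = provider

    @classmethod
    def prompt(cls, model, template):
        return cls._provider.prompt(model, template)

AIRuntimeSketch.register(EchoProvider())
print(AIRuntimeSketch.prompt("claude-sonnet", "hello"))  # [claude-sonnet] hello
```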
Reactive / UI Runtime¶
- `ReactiveEngine` — engine managing observable `Signal` objects
- `Signal` — a value cell that notifies subscribers on change
- `CanvasNode` — a named UI canvas region
- `stream_to_view(signal, target)` — bind a signal stream to a view target
Key capabilities:
- `observe name = value` / `on name.change:` — reactive declarations
- `canvas name:` / `render target = value` — canvas and rendering
- `view target = signal` — binding
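A minimal `Signal` sketch, assuming only the notify-on-change behaviour described above (the real `Signal`/`ReactiveEngine` API may differ in detail):

```python
# Minimal Signal: a value cell that notifies subscribers when set
# (a sketch; the real Signal/ReactiveEngine may differ in detail).
class Signal:
    def __init__(self, value):
        self._value = value
        self._subs = []

    def subscribe(self, fn):
        self._subs.append(fn)

    @property
    def value(self):
        return self._value

    @value.setter
    def value(self, new):
        self._value = new
        for fn in self._subs:   # notify every subscriber of the change
            fn(new)

seen = []
s = Signal(0)
s.subscribe(seen.append)
s.value = 1
s.value = 2
print(seen)  # [1, 2]
```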
Structured Concurrency Runtime¶
- `Channel` — typed async FIFO channel backed by `asyncio.Queue`
Key capabilities:
- `channel<T>()` creates a `Channel` (unbounded or with `capacity`)
- `await ch.send(value)` / `await ch.receive()` — async message passing
- `async for item in ch:` — iteration until channel is closed
- `par [ expr1, expr2, … ]` — parallel fan-out; lowers to `asyncio.gather()`
- `spawn expr` — background task; lowers to `asyncio.create_task()`
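The lowering targets above can be demonstrated directly with `asyncio` (a sketch of the runtime shape, not the project's `Channel` class, which adds typing and close semantics):

```python
import asyncio

# Channel backed by asyncio.Queue, as described above (a sketch; the real
# Channel type adds typing and close semantics).
class Channel:
    def __init__(self, capacity=0):
        self._q = asyncio.Queue(maxsize=capacity)

    async def send(self, value):
        await self._q.put(value)

    async def receive(self):
        return await self._q.get()

async def main():
    ch = Channel()
    # spawn expr ~ asyncio.create_task(): background producer tasks
    asyncio.create_task(ch.send("a"))
    asyncio.create_task(ch.send("b"))
    # par [ ... ] ~ asyncio.gather(): both receives run concurrently
    return await asyncio.gather(ch.receive(), ch.receive())

print(sorted(asyncio.run(main())))  # ['a', 'b']
```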
Observability Runtime¶
- `ml_trace(value, label)` — record a `TraceEvent` and return value unchanged
- `ml_cost(value)` — return `(value, CostInfo)` with token and latency data
- `ml_explain(value)` — return `(value, explanation_text)` from the model
Key capabilities:
- transparent: original result always flows through unchanged
- `TraceEvent` / `CostInfo` data classes for structured inspection
- global trace log via `get_trace_log()` / `clear_trace_log()`
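The pass-through discipline can be sketched in a few lines (a toy tracer, not the project's `ml_trace`; the event fields here are assumptions):

```python
import time

TRACE_LOG = []  # stand-in for the global log behind get_trace_log()

def ml_trace_sketch(value, label):
    # record an event, then return the value unchanged (transparent)
    TRACE_LOG.append({"label": label, "at": time.time()})
    return value

result = ml_trace_sketch(21 * 2, "answer")
print(result)                 # 42, the traced value flows through unchanged
print(TRACE_LOG[0]["label"])  # answer
```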
Placement Runtime¶
- `@local` / `@edge` / `@cloud` — deployment target annotations
Key capabilities:
- decorators attach `__ml_placement__` to any function or agent
- Python backend executes locally; a distributed backend routes on the hint
- `get_placement(fn)` — inspect the placement of any callable
Agent Memory and Coordination Runtime¶
- `MemoryStore` / `ml_memory(name, scope)` — named key-value stores
- `Swarm` — pool of named sub-agents with fan-out and delegation
- `ml_delegate(swarm_or_agent, …)` — async message to an agent
- `swarm_decorator` — `@swarm(agents=[…])` decorator factory
Key capabilities:
- memory scopes: `"session"` (in-process), `"persistent"` (JSON file), `"shared"` (swarm-wide)
- `Swarm.broadcast(message)` — fan-out to all sub-agents concurrently
- `delegate(agent, message)` in source lowers to `await ml_delegate(…)`
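A toy session-scoped store illustrating the named-store idea (the `"persistent"` and `"shared"` scopes are omitted; the real `MemoryStore`/`ml_memory` API may differ):

```python
# Toy session-scoped memory: repeated calls with the same name return the
# same dict-like store (illustrative; not the real MemoryStore/ml_memory).
_SESSION_STORES = {}

def memory_sketch(name, scope="session"):
    if scope != "session":
        raise NotImplementedError("only the in-process scope is sketched here")
    return _SESSION_STORES.setdefault(name, {})

facts = memory_sketch("facts")
facts["answer"] = "Paris"
print(memory_sketch("facts")["answer"])  # Paris: same named store each time
```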
Language Features¶
AI-native constructs¶
Effects must be declared on the enclosing function or agent with uses ai:
```
fn summarise(text: str) -> str uses ai:
    return prompt @claude-sonnet: "Summarise: " + text

fn reasoning_demo() uses ai:
    let r = think @claude-sonnet:
        What are the implications of AI-native programming?
    print(r.conclusion)

fn typed_output() uses ai:
    let result: SentimentLabel = generate @claude-sonnet: "Classify: great product"
```
Available AI keywords (all 17 languages supported — see keywords.json):

| Concept | English | French | Japanese |
|---|---|---|---|
| prompt | prompt | requête / requete | プロンプト |
| think | think | réfléchir | 考える |
| generate | generate | générer | 生成する |
| stream | stream | diffuser | ストリーム |
| embed | embed | incorporer | 埋め込む |
| extract | extract | extraire | 抽出する |
| classify | classify | classifier | 分類する |
| plan | plan | planifier | 計画する |
| transcribe | transcribe | transcrire | 書き起こす |
| retrieve | retrieve | récupérer | 取得する |
Agent and tool declarations:
```
@tool(description="Search the web")
fn web_search(query: str) -> str uses net:
    pass

@agent(model=@claude-sonnet)
fn researcher(question: str) -> str uses ai, net:
    return prompt @claude-sonnet: question
```
Structured concurrency¶
```
# Parallel fan-out — all branches run concurrently, results returned as tuple
let results = parallel [
    prompt @claude-sonnet: "Answer A",
    prompt @claude-sonnet: "Answer B"
]

# Background task — returns immediately with a future
let task = spawn long_running_operation()

# Typed channel — async FIFO between tasks
let ch = channel()
spawn producer(ch)
let item = ch.receive()
```
All concurrency keywords are multilingual:
| Concept | English | French | Japanese |
|---|---|---|---|
| parallel | par / parallel | parallèle | 並列 |
| spawn | spawn / launch | lancer | 起動 |
| channel | channel | canal | チャネル |
| send | send | envoyer | 送る |
| receive | receive | recevoir | 受信 |
Observability¶
```
fn monitored() uses ai:
    # trace — log timing; value passes through unchanged
    let result = trace(prompt @claude-sonnet: "Hello", "my-label")

    # cost — returns (value, CostInfo) with token counts
    let answer, info = cost(prompt @claude-sonnet: "What is AI?")
    print(info)  # CostInfo(model='claude-sonnet-4-6', tokens=42, latency=1200ms)

    # explain — returns (value, explanation_text)
    let value, why = explain(answer)
```
Distributed placement¶
```
@local
fn preprocess(data: str) -> str:
    pass  # hint: run on local machine

@edge
fn classify_fast(img: str) -> str uses ai:
    pass  # hint: run at the network edge

@cloud
@agent(model=@claude-sonnet)
fn heavy_reasoning(prompt: str) -> str uses ai:
    pass  # hint: run in the cloud
```
Placement keywords are multilingual (local/lokal/स्थानीय/本地/ローカル …).
Agent memory and coordination¶
```
fn with_memory() uses ai:
    # Named session store (dict-like)
    let facts = memory("facts")
    facts["answer"] = "Paris"

    # Persistent across runs
    let cache = memory("cache", scope="persistent")

@swarm(agents=[researcher, writer, reviewer])
fn team_coordinator(task: str) -> str uses ai:
    # Fan-out to two sub-agents simultaneously
    let draft, review = parallel [
        delegate(writer, task),
        delegate(reviewer, task)
    ]
    return prompt @claude-sonnet: "Merge: " + draft + "\n" + review
```
Memory scopes: "session" (default, in-process), "persistent" (JSON-backed file), "shared" (swarm-wide in-process).
General language features¶
The implementation includes support for:
- booleans and None, including identity checks (is, is not)
- control flow (if/else, for, while)
- async constructs (async def, await, async for, async with)
- functions and classes
- imports (import, from ... import ..., aliases with as)
- assertions
- exception handling (try, except, else, finally)
- chained assignment
- slices (a[1:3], a[::-1])
- comprehensions (list, dict, generator), including nested for clauses
- default parameters, *args, **kwargs
- tuple unpacking
- decorators
- f-strings
- triple-quoted strings
Example (nested comprehension):
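A representative nested comprehension in Python-compatible form (an illustrative stand-in, since the snippet itself is not shown above):

```python
# Nested for clauses in a list comprehension: flatten a matrix row by row.
matrix = [[1, 2], [3, 4], [5, 6]]
flat = [x for row in matrix for x in row]
print(flat)  # [1, 2, 3, 4, 5, 6]
```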
Example (async for / async with):
```
import asyncio

class AsyncCtx:
    async def __aenter__(self):
        return 1
    async def __aexit__(self, exc_type, exc, tb):
        return False

async def main(xs):
    let total = 0
    async with AsyncCtx() as base:
        async for i in xs:
            total = total + i + base
    return total

print(asyncio.run(main([1, 2, 3])))
```
API Entry Points¶
Most commonly used imports:
```
from multilingualprogramming import (
    MPNumeral,
    KeywordRegistry,
    MPDate,
    Lexer,
    Parser,
    ASTPrinter,
    PythonCodeGenerator,
    ProgramExecutor,
    REPL,
)

from multilingualprogramming.core.semantic_analyzer import SemanticAnalyzer
```
CLI and REPL¶
Run interactive mode:
```
python -m multilingualprogramming repl
python -m multilingualprogramming repl --lang fr
python -m multilingualprogramming repl --show-python
python -m multilingualprogramming repl --show-wat
python -m multilingualprogramming repl --show-rust
```
REPL commands:
- `:help`
- `:language <code>`
- `:python` — toggle generated Python display
- `:wat` — toggle generated WAT (WebAssembly Text) display
- `:rust` — toggle generated Rust/Wasmtime bridge code display
- `:reset`
- `:kw [XX]`
- `:ops [XX]`
- `:q`
REPL Language Smoke Tests¶
Use these two snippets to quickly validate each language in REPL.
Snippet A (variables + print):
Snippet B (for loop):
Built-in aliases are also available for selected universal functions. Both the universal name and localized alias work. Example (French):
Language-specific forms:
English (en)¶
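The English snippets are not shown above; based on the keywords used elsewhere in this document (`let`, `print`, `for … in`), they would plausibly read:

```
let x = 2
let y = 3
print(x + y)

let total = 0
for i in range(4):
    total = total + i
print(total)
```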
French (fr)¶
```
soit x = 2
soit y = 3
afficher(x + y)

soit somme = 0
pour i dans intervalle(4):
    somme = somme + i
afficher(somme)
```
Spanish (es)¶
German (de)¶
```
sei x = 2
sei y = 3
ausgeben(x + y)

sei summe = 0
für i in bereich(4):
    summe = summe + i
ausgeben(summe)
```
Italian (it)¶
```
sia x = 2
sia y = 3
stampa(x + y)

sia totale = 0
per i in intervallo(4):
    totale = totale + i
stampa(totale)
```
Portuguese (pt)¶
```
seja x = 2
seja y = 3
imprimir(x + y)

seja soma = 0
para i em intervalo(4):
    soma = soma + i
imprimir(soma)
```
Hindi (hi)¶
Arabic (ar)¶
```
ليكن x = 2
ليكن y = 3
اطبع(x + y)

ليكن المجموع = 0
لكل i في مدى(4):
    المجموع = المجموع + i
اطبع(المجموع)
```
Bengali (bn)¶
Tamil (ta)¶
```
இருக்கட்டும் x = 2
இருக்கட்டும் y = 3
அச்சிடு(x + y)

இருக்கட்டும் மொத்தம் = 0
ஒவ்வொரு i இல் வரம்பு(4):
    மொத்தம் = மொத்தம் + i
அச்சிடு(மொத்தம்)
```
Chinese (zh)¶
Japanese (ja)¶
Examples¶
Runnable examples are documented in:
Complete feature coverage examples:
- `examples/complete_features_*.ml` (one file per supported language)
Run:
```
python -m multilingualprogramming run examples/complete_features_en.ml --lang en
python -m multilingualprogramming run examples/complete_features_fr.ml --lang fr
python -m multilingualprogramming run examples/complete_features_es.ml --lang es
```
Run all examples from repository root:
```
python -m examples.arithmetic
python -m examples.numeral_extended
python -m examples.keywords
python -m examples.datetime_example
python -m examples.lexer_example
python -m examples.parser_example
python -m examples.ast_example
python -m examples.multilingual_parser_example
python -m examples.codegen_example
python -m examples.multilingual_codegen_example
python -m examples.semantic_example
python -m examples.executor_example
```
Development¶
Related Docs¶
- Project quick start: README.md
- Design overview: design.md
- Core formal specification: core_spec.md
- Frontend translation contracts: frontend_contracts.md
- Related work and differentiation: related_work.md
- Controlled language scope: cnl_scope.md
- Evaluation plan: evaluation_plan.md
- Word order and syntax naturalness: word_order_and_naturalness.md
- Standard library localization strategy: stdlib_localization.md
- Translation governance: translation_guidelines.md
- Development and debugging guide: development.md
- Python compatibility matrix: compatibility_matrix.md
- Python 3.12 compatibility roadmap: compatibility_roadmap.md
- Usage snippets: USAGE.md
- Examples guide: examples/README.md
- French programming guide: fr/programmation.md
- Language onboarding: language_onboarding.md
License¶
- Code: GPLv3+
- Documentation/content: CC BY-SA 4.0