proj-oot-ootInteropNotes2


"JNA- and ctypes-like FFI is convenient, there is no doubt about it. But making it the default way to interface with native APIs is a major safety issue. C and C++ have headers for a reason, and redeclaring everything by hand is not only time-consuming but also error-prone. A little mistake in your ccall and you just happily segfaulted, and that’s an optimistic scenario. And now try to correctly wrap strerror_r or similar... So this “feature”, instead of eliminating boilerplate, eliminates type checking. In fact, there is more boilerplate per function in ccall than in, say, pybind11. This is another area where Python and even Java with their C API win. There is a way to abuse FFI there too, but at least it’s not actively encouraged." [1]
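A minimal Python/ctypes sketch of the hazard described above: a JNA/ctypes-style FFI accepts whatever signature you hand-declare, correct or not. Only strlen is real here; the closing comment notes where a plausible mis-declaration would lead.

```python
# ctypes trusts your hand-written redeclaration; nothing checks it
# against the real C prototype.
import ctypes

libc = ctypes.CDLL(None)  # on POSIX, dlopen(NULL) exposes libc symbols

# Correct redeclaration of: size_t strlen(const char *s)
libc.strlen.argtypes = [ctypes.c_char_p]
libc.strlen.restype = ctypes.c_size_t

assert libc.strlen(b"hello") == 5

# A plausible-looking but wrong redeclaration (say, argtypes =
# [ctypes.c_int]) would be accepted just as happily -- strlen would then
# dereference an arbitrary integer as a pointer: a segfault in the
# optimistic case, silent corruption in the pessimistic one.
```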

---

"In my opinion, in Clojure, the Java layer pokes the Clojure layer way more than it should. Another example is that Clojure requires you to access the Java standard library in many cases. The problem with this is that Java functions and Clojure functions are not interchangeable. For instance you can do "(apply clojureFunction [0 0])". But you can not do "(apply .javaFunction [0 0])". You need to wrap the Java function into a Clojure function explicitly, in this way: "(apply #(.javaFunction %1 %2) [0 0])". I don’t find this particularly elegant." [2]

---

valarauca1 5 days ago [-]

This is more complex than just a target. Rust has to communicate with the Go runtime environment.

Two main points:

-Go uses a very small stack for goroutines to make them dirt cheap. When you exceed this stack, Go transparently maps in more stack for you. Rust-generated ASM running on the Go stack expects to explode when it exceeds its stack, like a C program, because that should be the OS stack. This is a larger problem than you think: Rust likes to put a ton of stuff on the stack. One of the nice things about Rust is that putting _a ton_ of data on the stack is cheap, and it makes ownership simpler.

-Go's system calls and concurrency primitives are cooperative with its runtime. When they communicate, the routine can yield to the runtime. Targeted Rust code would _also_ have to make these calls, as would 3rd-party crates.

Again, none of this is impossible; linker directives and FFI magic could import these functions/symbols. But it would also require Go to have a stabilized runtime environment for other languages to link against. Currently even stating that Go has a runtime is controversial, so I expect this won't happen soon.

reply

masklinn 5 days ago [-]

> This point confuses me; if Rust expects to run on a limited stack, why would it expect to put a ton of data on the stack?

Rust runs on a C stack; while it's not infinite[0], it's a whole other ballpark than a Go stack, since the C stack is non-growable but large (Rust used to use growable stacks before 1.0): the default C stack size is in the megabyte range (8MB virtual on most unices[1]), while in Go the initial stack is 2kB (since 1.4; 8kB before).

[0] you can set the size to "unlimited", systems will vary in their behaviour there, on my OSX it sets the stack size to 64MB, Linux apparently allows actual unlimited stack sizes but I've no idea how it actually works to provide that

[1] I think libpthread defines its own stack size circa 2MB so 8MB would be the stack of your main thread and 2MB for sub-threads, but I'm not actually sure

reply

vvanders 5 days ago [-]

Things may have changed from when I last looked at it but there's another issue which is pinning.

If you ever want to do more than the most trivial FFI you'll eventually want to be able to pass types back and forth (usually opaque to either side). AFAIK Go doesn't offer any pinning of its GC'd types, so the collector can move them out from under you.

C# has this beautiful thing where you can pass a delegate as a raw C fn pointer. It makes building interop a wonderful thing but you have to make sure to pin/GCHandle it appropriately.
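The same pinning discipline exists in Python's ctypes, which may make the C# point concrete: a Python function passed as a raw C function pointer is only safe while you keep a reference to its CFUNCTYPE wrapper alive. A sketch using libc's qsort:

```python
# Passing a Python function as a raw C function pointer. Nothing "pins"
# the callback thunk: if the CFUNCTYPE object is garbage-collected while
# C still holds the pointer, the next call through it crashes. Keeping
# an explicit reference plays the role of GCHandle in C#.
import ctypes

libc = ctypes.CDLL(None)  # POSIX: resolve qsort from libc

CMPFUNC = ctypes.CFUNCTYPE(ctypes.c_int,
                           ctypes.POINTER(ctypes.c_int),
                           ctypes.POINTER(ctypes.c_int))

def py_cmp(a, b):
    return a[0] - b[0]

cmp_func = CMPFUNC(py_cmp)  # hold this reference for the call's lifetime

arr = (ctypes.c_int * 5)(5, 1, 4, 2, 3)
libc.qsort(arr, len(arr), ctypes.sizeof(ctypes.c_int), cmp_func)
assert list(arr) == [1, 2, 3, 4, 5]
```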

reply

FiloSottile 5 days ago [-]

Folks, stop, you are reinventing cgo now.

Defining a Go target for Rust actually makes sense in the context of replacing assembly (which has no runtime, GC or concurrency connotations), I was just too lazy to do it that way :)

reply

valarauca1 5 days ago [-]

> Folks, stop, you are reinventing cgo now.

This is a good thing because `cgo` is really bad. No ASLR, always forking. These are completely _insane_ defaults; it manages to be slower than the JNI [1], which is an FFI from a completely managed, stack-based VM universe! Not a _compiled_ language.

Somehow a compiled language calling a static binary manages to be slower than a dynamic language's runtime calling a static binary...

`cgo` isn't doing _anything right_. It is doing a lot of things wrong.

[1] https://lwn.net/Articles/446701/

reply

---

"

Why not cgo

Go has a Foreign Function Interface, cgo. cgo allows Go programs to call C functions in the most natural way possible—which is unfortunately not very natural at all. (I know more than I'd like to about cgo, and I can tell you it's not fun.)

By using the C ABI as lingua franca of FFIs, we can call anything from anything: Rust can compile into a library exposing the C ABI, and cgo can use that. It's awkward, but it works.

We can even use reverse-cgo to build Go into a C library and call it from random languages, like I did with Python as a stunt. (It was a stunt folks, stop taking me seriously.)

But cgo does a lot of things to enable that bit of Go naturalness it provides: it will set up a whole stack for C to live in, it makes defer calls to prepare for a panic in a Go callback... this could be a whole post of its own.

As a result, the performance cost of each cgo call is way too high for the use case we are thinking about—small hot functions. "

---

probably irrelevant but just putting here in case i need it later: https://blog.filippo.io/rustgo/

---

mbrubeck 14 hours ago [-]

Hi, I'm a Servo developer who worked on some of the Rust code that's in Firefox. Calls between C++ and Rust code in Firefox all go through "extern C" functions. Some of the code involves reference-counted smart pointers. We use RAII wrapper types in both Rust and C++ to ensure that refcounts are incremented and decremented correctly on either side of the FFI boundary.

P.S. This old blog post is not about Rust-in-Firefox, but it does cover a related topic: How the Servo browser engine (written in Rust) interacts with the Spidermonkey JavaScript engine (written in C++ and embedded in both Gecko and Servo), including garbage-collected JavaScript objects:

https://research.mozilla.org/2014/08/26/javascript-servos-on...

reply

---

kibwen 131 days ago [-]

Looking ahead to 1.19 (currently in beta, which you can try out easily via rustup), the RFC for unsafe (C-style) unions recently stabilized, which closes IMO the biggest remaining hole in Rust's FFI story. By removing the need for various manual hacks to interoperate with unions from C libraries,...

---

" int_19h 129 days ago [-]

That's exactly the problem. R has a very specific API for its extensions - it's not just vanilla C, it's stuff like SEXP.

Although now that I think more about it, it's not quite as bad as Python, because the data structures are mostly opaque pointers (at least until someone uses USE_RINTERNALS - which people do sometimes, even though they're not supposed to), so all operations have to be done via functions, where you can map them accordingly.

You'd also need to emulate R's object lifetime management scheme with Rf_protect etc.; but that shouldn't be too difficult, either.

Some more reading on all this:

https://cran.r-project.org/doc/manuals/r-release/R-exts.html...

http://adv-r.had.co.nz/C-interface.html

peatmoss 127 days ago [-]

Oh, yeah, now that you mention it I have seen the SEXP and protect / unprotect stuff before. ... "

---

random (good) Python<->Rust interop example:

[3]

---

C# 'span' for encapsulating memory:

https://github.com/dotnet/corefxlab/blob/master/docs/specs/span.md

---

Rust 'bindgen' apparently helps with interop, macros too:

" bindgen & macros are amazing

I wrote blog posts about these already but I want to talk about these again!

I used bindgen to generate Rust struct definitions for every Ruby struct I need to reference (across 35 different Ruby versions). It was kind of… magical? Like I just pointed at some internal Ruby header files (from my local clone of the Ruby source) that I wanted to extract struct definitions from, told it the 8 structs I was interested in, and it just worked.

I think the fact that bindgen lets you interoperate so well with code written in C is really incredible.

Then I used macros (see: my first rust macro) and wrote a bunch of code that referenced those 35 different struct versions and made sure that my code works properly with all of them.

And when I introduced a new Ruby version (like 2.5.0) which had internal API changes, the compiler said “hey, your old code working with the structs from Ruby 2.4 doesn’t compile now, you have to deal with that”. "

-- [4]

---

[5]

" ... you can’t just pass a string into a WebAssembly function. Instead, you have to go through a bunch of steps:

    On the JS side, encode the string into numbers (using something like the TextEncoder API)
    Put those numbers into WebAssembly’s memory, which is basically an array of numbers
    Pass the array index for the first letter of the string to the WebAssembly function
    On the WebAssembly side, use that integer as a pointer to pull out the numbers

And that’s only what’s required for strings. If you have more complex types, then you’re going to have a more convoluted process to get the data back and forth.

If you’re using a lot of WebAssembly code, you’ll probably abstract this kind of glue code out into a library. Wouldn’t it be nice if you didn’t have to write all that glue code, though? If you could just pass complex values across the language boundary and have them magically work?

That’s what wasm-bindgen does. If you add a few annotations to your Rust code, it will automatically create the code that’s needed (on both sides) to make more complex types work. ... Under the hood, wasm-bindgen is designed to be language-independent. This means that as the tool stabilizes it should be possible to expand support for constructs in other languages, like C/C++. ...

Q. How do you package it all up for npm?

A. wasm-pack

"
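The string-passing steps from the quote above can be sketched in plain Python, treating wasm linear memory as a flat byte array. The function names here are illustrative, not a real wasm API; they stand in for TextEncoder plus WebAssembly.Memory on the JS side.

```python
# Simulating the host/guest string handoff: encode the string into
# bytes, copy them into "linear memory", and pass only an offset and
# length across the boundary.
memory = bytearray(64 * 1024)  # stand-in for one wasm linear memory page

def host_write_string(s, offset):
    data = s.encode("utf-8")                   # step 1: string -> numbers
    memory[offset:offset + len(data)] = data   # step 2: numbers -> memory
    return offset, len(data)                   # step 3: pass the "pointer"

def guest_read_string(offset, length):
    # step 4: the guest uses the integer as a pointer into memory
    return bytes(memory[offset:offset + length]).decode("utf-8")

ptr, n = host_write_string("Hello", 16)
assert guest_read_string(ptr, n) == "Hello"
```

wasm-bindgen's job, as the quote says, is to generate both halves of this glue automatically.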

---

huh, even these guys couldn't keep up with their interop:

[6]

---

 ioddly 3 days ago [-]

I'd think one of the big advantages of rewriting Lua in a GC'd language would be that you could ditch the stack API, which is necessary to keep track of pointers but not so intuitive to use. Seems like they didn't take that route though.

reply

wahern 3 days ago [-]

What's the alternative to a stack API? If you look at Python and Perl they use a combination of (1) an explicit stack similar to Lua, (2) instantiation of heavy-weight list or array objects for passing arguments, (3) explicit reference counting, and (4) code generation.

Lua's stack protocol can sometimes be verbose, but I find it infinitely more simple and elegant than the existing alternatives.

I think you tend to see long, unwieldy code blocks that invoke Lua functions multiple times because you can, despite how tricky it can be to track the top of your stack. You rarely see this in Perl or Python because it becomes impossible to manage long before that point. So the solution for Lua is the same as for other interfaces--use shorter, simpler functions and compose them. This is usually easier and more performant to do in Lua, anyhow, because there's less fixed boilerplate and indirection necessary; not to mention Lua gives you many more facilities for stashing and managing your C state across calls (like closure upvalues, userdata values, continuation context cookies, etc) without resorting to global variables either inside or outside the VM.

reply

ioddly 3 days ago [-]

Yeah, implementing a precise GC is a pain, but in Go the hard work of that is already done for you. If I were to write my own Lua-in-Go, my inclination would be to try to make an API something like this

    table := lua.MakeTable()
    table.Set(lua.MakeString("five"), lua.MakeNumber(5))

Rather than manipulating values on the stack. But it's possible there are reasons for not doing that that I'm unaware of. Was just curious.

reply

wahern 3 days ago [-]

Why do you prefer that over

  lua_pushstring(L, "five");
  lua_pushnumber(L, 5);

The above is shorter and also significantly more efficient because you're not creating the intermediate table object. Even in Go, given Lua's coroutine and tailcall semantics (assuming they were implemented) the Go compiler would likely be forced to always heap allocate the argument list table. There's a reason PUC Lua is blazingly fast, and it's partly the same reason why LuaJIT is so fast--the Lua stack protocol is a thin abstraction that easily admits a simple, efficient implementation while still managing to provide an effective interface boundary that doesn't leak (beyond the fact of the protocol itself).

I don't think there's any good middle ground here. The best options are (1) Lua's explicit stack protocol or (2) some sort of type inference that permitted a direct call, like lua_tcall(L, "five", 5), into the VM. You can actually implement the latter in C somewhat easily (at least if you stick to simple data types) by using _Generic and __VA_ARGS__ macros to generate the stack pushing code.

But I rarely see this done because it's a lot of magic for little gain. And where I have seen it done it's always been annoying because it's too leaky--invariably you have to fallback to the stack pushing code because there's a limit to the types of automagic type coercions you can accomplish between two drastically different languages. So you have to learn their magical wrapper routines in addition to the regular Lua API.

Many years ago I abused __VA_ARGS__, __builtin_types_compatible_p, and libffi so I could invoke ad hoc functions as libevent callbacks without having to proxy the call through a function that unwrapped a pointer-to-void cookie. See http://25thandclement.com/~william/projects/delegate.c.html I still think it's kinda clever but I can't remember the last time I actually used it. Even in code bases already using delegate.c I ended up returning to the slightly more repetitive but transparent and idiomatic patterns for installing libevent events.

reply

omaranto 3 days ago [-]

I'm guessing the code was intended to be equivalent to:

    lua_newtable(L);
    lua_pushstring(L, "five");
    lua_pushnumber(L, 5);
    lua_settable(L, -3);

reply

wahern 3 days ago [-]

Ah, I see. So it's more a preference for a richer standard library, like,

  lua_newtable(L);
  luaL_setfieldi(L, -1, "five", 5);

rather than having to roll your own. Part of the function of the stack protocol is to minimize the number of unique functions needed to load and store values across language boundaries. So there's a single [low-level] function for communicating a number value in a type-safe manner, rather than a multitude of functions--one routine for setting a number value for a table string key, another for a number value for a table number key, another for a string value for a table number key, etc. The permutations explode, and unless the host language has some sort of generics capability then it adds significant complexity both to the interface and to implementation maintenance. (See Perl XS.) Alternatively you can reduce the number of permuted interfaces by using printf-style formatting strings and va_lists, but that's not type safe. (See Py_BuildValue.)

Notably, Lua 5.3 removed lua_pushunsigned, lua_tounsigned, luaL_checkint, luaL_checklong, and many similar routines as there was never an end to the number of precomposed type coercing, constraint checking auxiliary routines people demanded. They decided that the only way to win that game was to not play at all. But with C11 _Generic and a dash of compiler extension magic you can actually implement a richer set of type agnostic (but type safe) numeric load/store interfaces in just a handful of auxiliary routines. For better or worse, though, Lua doesn't target C11 nor wish to depend on compiler extensions. "Batteries not included" is a whole 'nother debate in the Lua universe.

reply
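A toy Python model of the stack protocol being discussed, to make the lua_settable index bookkeeping concrete. This is an illustrative sketch, not the real Lua C API: host and VM exchange values only through a shared stack, so a handful of typed push functions cover every data-passing case.

```python
# Minimal stack-protocol model: values cross the boundary only via
# push/pop operations on a shared stack, mirroring the Lua C API calls
# shown in the thread above.
class ToyVM:
    def __init__(self):
        self.stack = []

    def push_string(self, s): self.stack.append(str(s))
    def push_number(self, n): self.stack.append(float(n))
    def new_table(self): self.stack.append({})

    def set_table(self, index):
        # like lua_settable(L, index): resolve the table at `index`
        # first, then pop the value and the key and store them into it
        table = self.stack[index]
        value = self.stack.pop()
        key = self.stack.pop()
        table[key] = value

vm = ToyVM()
vm.new_table()            # lua_newtable(L)
vm.push_string("five")    # lua_pushstring(L, "five")
vm.push_number(5)         # lua_pushnumber(L, 5)
vm.set_table(-3)          # lua_settable(L, -3)
assert vm.stack == [{"five": 5.0}]
```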

as-j 2 days ago [-]

> Lua's stack protocol can sometimes be verbose, but I find it infinitely more simple and elegant than the existing alternatives.

This. TL;DR: used a C/Lua binding to explain the Lua stack to a fellow engineer. Took a few post-it notes to keep track of 4 levels of stack, 1 pcall back to Lua to sort an array, and forwarding the results on. It just worked, was elegant, and blew our minds.

Today on Friday at 5pm I was talking about a ‘problem’ we had with Lua daemons compared to their C brothers. Due to the use of tables, each time you queried them the output order was random. Computers don’t care, but a big class of users is also engineers, and finding fields wandering all over the place is a pita. So what if we ordered the keys alphabetically instead? The code is Lua -> C -> IPC -> Output. The order is set in the Lua->C binding, so what if we sorted it in the binding?

But the tables are arbitrary size and you’d have to allocate memory, put them in a structure of some kind, etc, etc. Then we realized we could just put all the keys in a Lua array, and sort them with table.sort(). We could then use the array to get key/values in order, all from within the C binding.

Being late Friday, and having talked it through let’s just do it. Plus it was a handy teaching moment to explain to a fellow engineer how Lua binding worked, including a call back into Lua. By 6pm we had the binding recompiled, tested and it’s just worked.

Lessons learned:

1. The stack can seem confusing, but once you learn how to work with it, it’s pretty amazing.

2. The Lua reference docs are amazing. They clearly list what they pop and what they push onto the stack.

3. Calling back into Lua is.

4. It looks like write-only code. Reading the code needs a post-it note to keep track of the stack, along with the Lua reference manual. -- comments...

reply

---

"Thanks to not having a runtime and being quite easy to create a shared library with a C-ABI, pretty much any software written in any language could be extended with code written in Rust. "

---

" We’ve defined a set of levels to encourage implementors to create tighter integrations with the JavaScript runtime:

    Level 1: Just string output, so it’s useful as a basic console REPL (read-eval-print-loop).
    Level 2: Converts basic data types (numbers, strings, arrays and objects) to and from JavaScript.
    Level 3: Sharing of class instances (objects with methods) between the guest language and JavaScript.  This allows for Web API access.
    Level 4: Sharing of data science related types  (n-dimensional arrays and data frames) between the guest language and JavaScript." -- [7]

---

" LSP host and Jupyter kernel. Between the two you can support every development environment and awesome tooling in O(1) effort. "

note: most of the infrastructure of Jupyter can be gained for free by using Python to call out to your language via pexpect or via Python bindings (both of these methods are suggested in the official Jupyter docs)

---

like Lua, have an interop stack

---

mhh__ 4 hours ago [-]

D's C++ interop looks better at a glance.

With a simple extern(C++) you can use templates and classes, and vtables are matched up to single inheritance. There is also some experimental work on catching C++ exceptions, but I've never tried to use it.

reply

p0nce 2 hours ago [-]

Also: COM support. This can be incredibly useful for interop.

reply

---

https://github.com/pybind/pybind11

---

https://github.com/metacall/core

METACALL is a library that allows calling functions, methods or procedures between programming languages. With METACALL you can transparently execute code from / to any programming language, for example, call Python code from JavaScript code.

sum.py

def sum(a, b): return a + b

main.js

metacall_load_from_file('py', [ 'sum.py' ]);

metacall('sum', 3, 4); // 7

    Currently supported languages and run-times:

Language     Runtime        Version               Tag
Python       Python C API   >= 3.2 <= 3.7         py
NodeJS       N API          >= 8.11.1 <= 10.15.3  node
JavaScript   V8             5.1.117               js
C#           NetCore        1.1.10                cs
Ruby         Ruby C API     >= 2.1 <= 2.3         rb
Mock         ∅              0.1.0                 mock

    Languages and run-times under construction:

Language     Runtime                      Tag
Java         JNI                          java
C/C++        Clang - LLVM - libffi        c
File         ∅                            file
Go           Go Runtime                   go
Haskell      Haskell FFI                  hs
JavaScript   SpiderMonkey                 jsm
WebAssembly  WebAssembly Virtual Machine  wasm

---

" METACALL maintains most of the types of the languages but not all are supported. If new types are added they have to be implemented in the reflect module and also in the loaders and serials to fully support it.

Type     Value
Boolean  true or false
Char     -128 to 127
Short    -32,768 to 32,767
Int      -2,147,483,648 to 2,147,483,647
Long     -9,223,372,036,854,775,808 to 9,223,372,036,854,775,807
Float    1.2E-38 to 3.4E+38
Double   2.3E-308 to 1.7E+308
String   NULL terminated list of characters
Buffer   Blob of memory representing binary data
Array    Arrangement of values of any type
Map      List of elements formed by a key (String) value (Any) pair (Array)
Pointer  Low level representation of a memory reference
Null     Representation of NULL value type "

" 5.1.2 Modules

    adt provides a base for Abstract Data Types and algorithms used in METACALL. Implementation must be done in an efficient and generic way. Some of the data structures implemented are vector, set, hash, comparable or trie.
    detour provides an interface to hook into functions. Detours are used by the fork model to intercept fork calls.
    detours implement the detour interface by using a plugin architecture. The current list of available detour plugins is the following one.
        funchook_detour implemented by means of FuncHook library.
    distributable defines the compilation of METACALL that generates a single library with all core libraries bundled into it. As the METACALL architecture is divided into modules, in order to distribute METACALL it is necessary to build all of them into a single library. This module implements this compilation by means of CMake.
    dynlink implements a cross-platform method to dynamically load libraries. It is used to dynamically load plugins into METACALL.
    environment implements a standard way to deal with environment variables. METACALL uses environment variables to define custom paths for plugins and scripts.
    examples ...
    filesystem provides an abstraction for the operating system's file system.
    format provides a standard way of printing to standard input/output for old C versions that do not support the newest constructs.
    loader ...
    loaders
    log
    memory
    metacall
    ports
    preprocessor
    reflect
    scripts
    serial
    serials
    tests
    version

"

" 5.7 Fork Model

METACALL implements a fork-safe model. This means that if METACALL is running in any program instance, the process where it is running can be forked safely at any moment of the execution. This fact has many implications at the design, implementation and use levels. But the whole METACALL architecture tries to remove all responsibility from the developer and make this transparent.

... "

" Because of these restrictions, METACALL cannot preserve the status of the run-times. In the future this model will be improved to maintain consistency and preserve the execution state of the run-times making METACALL more robust.

Although the state is not preserved, fork safety is. The mechanism METACALL uses to allow fork safety is described in the following enumeration.

    Intercept fork call done by the program where METACALL is running.
    Shutdown all run-times by means of unloading all loaders.
    Execute the real fork function.
    Restore all run-times by means of reloading all loaders.
    Execute user defined fork callback if any.

To achieve this, METACALL hooks fork primitives depending on the platform.

    fork on POSIX systems.
    RtlCloneUserProcess on Windows systems.

If you use clone instead of fork to spawn a new process in a POSIX system, METACALL won't catch it.

Whenever you call a cloning primitive, METACALL intercepts it by means of a detour. A detour is a way to intercept functions at a low level by editing the memory and introducing a jump to your own function while preserving the address of the old one. METACALL uses this method instead of POSIX pthread_atfork for three main reasons.

    The first one is that pthread_atfork is only supported by POSIX systems. So it is not a good solution, because the philosophy of METACALL is to be as cross-platform as possible.

...

the developer can register a callback by means of metacall_fork to know when a fork is executed to do the actions needed after the fork, for example, re-loading all previous code and restore the state of the run-times. This gives a partial solution to the problem of losing the state when doing a fork. "
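METACALL's shutdown/fork/restore sequence resembles what CPython exposes in-process via os.register_at_fork (POSIX-only). A sketch; the runtime_* functions are hypothetical stand-ins for "unload/reload all loaders", and, echoing the clone caveat above, this only catches forks that go through Python's own fork machinery, not raw clone(2).

```python
# Registering fork hooks: teardown before the fork, re-initialization
# in the parent and in the child afterwards.
import os

events = []

def runtime_shutdown():        # stand-in: unload all loaders
    events.append("before")

def runtime_restore_parent():  # stand-in: reload loaders in the parent
    events.append("after_in_parent")

def runtime_restore_child():   # stand-in: reload loaders in the child
    events.append("after_in_child")

os.register_at_fork(before=runtime_shutdown,
                    after_in_parent=runtime_restore_parent,
                    after_in_child=runtime_restore_child)

pid = os.fork()
if pid == 0:
    os._exit(0)        # child exits immediately in this demo
os.waitpid(pid, 0)
assert events == ["before", "after_in_parent"]
```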

rwmj 2 hours ago [-]

How does it differ from libffi or Swig?

reply

HerrMonnezza 52 minutes ago [-]

From a cursory reading of the README, I would say that Metacall allows any of the supported languages to call any other one, whereas Swig only allows calling C or C++ code. (Swig takes a `.h` file as input and then generates the bindings for the desired target language.)

reply

---

in ocaml, "However when the garbage collector looks at data stored in generic structures it needs to tell pointers from integers, so integers are tagged using a 1 bit in a place where valid aligned pointers never have a 1 bit, leaving only 31 or 63 bits of range.". how to deal with this in Boot?
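The OCaml scheme can be sketched with plain integer arithmetic: word-aligned pointers always have a 0 low bit, so an integer n is stored as (n << 1) | 1 and only 63 (or 31) bits of range remain. A sketch (non-negative values only, for simplicity):

```python
# OCaml-style value tagging: immediate integers carry a 1 in the low
# bit; pointers (always word-aligned) carry a 0, so the GC can tell
# them apart by inspecting one bit.
def tag_int(n):
    return (n << 1) | 1    # costs one bit of range: 63/31 usable bits

def untag_int(word):
    return word >> 1

def is_int(word):
    return word & 1 == 1   # low bit set => immediate integer

assert is_int(tag_int(42)) and untag_int(tag_int(42)) == 42
assert not is_int(0x7f00DEADBEE8)  # an aligned "pointer": low bit is 0
```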

---

"To understand the METACALL fork model, first of all we have to understand the implications of the forking model in operating systems and the difference between the fork-one and fork-all models. The main difference between fork-one and fork-all is that in fork-one, only the thread which called fork is preserved after the fork (i.e. gets cloned); in the fork-all model, all threads are preserved after cloning. POSIX uses the fork-one model, while Oracle Solaris uses the fork-all model. Because of the fork-one model, forking a running run-time like NodeJS (which has a thread pool) implies that in the child process the thread pool will be almost dead, except for the thread which made the fork call. So the NodeJS run-time cannot continue execution and the event loop enters a deadlock state. When a fork is done, the status of the execution is lost for the moment. METACALL is not able to preserve the state when a fork is done. Some run-times do not allow their internal state to be preserved. For example, the bad design of NodeJS does not allow the thread pool to be managed from outside, so it cannot be preserved after a fork. "

---

bb88 2 hours ago [-]

Interesting nugget about golang from their FAQ:

https://github.com/canonical/dqlite/blob/master/doc/faq.md

Why C?

The first prototype implementation of dqlite was in Go, leveraging the hashicorp/raft implementation of the Raft algorithm. The project was later rewritten entirely in C because of performance problems due to the way Go interoperates with C: Go considers a function call into C that lasts more than ~20 microseconds as a blocking system call, in that case it will put the goroutine running that C call in waiting queue and resuming it will effectively cause a context switch, degrading performance (since there were a lot of them happening). See also this issue in the Go bug tracker.

The added benefit of the rewrite in C is that it's now easy to embed dqlite into projects written in effectively any language, since all major languages have provisions for creating C bindings.

reply

---

[8]

---

https://blog.rust-lang.org/inside-rust/2020/06/08/new-inline-asm.html https://news.ycombinator.com/item?id=23466795

---

andrewmcwatters 11 hours ago [–]

If you use LuaJIT, you can generate bindings to C libraries, versus handwriting bindings for PUC Lua. The productivity difference is staggering. Additions to Lua since 5.1.5 have not helped me write more or better software, as much as I love Lua.

reply

---

"

XPCOM is a technology that lets you write code in two languages and have each call the other. The code of Firefox is full of C++ calling JavaScript and JavaScript calling C++, and a long time ago we had projects that added Python and .Net to the mix. This piece of machinery is extremely complicated because the languages do not share the same definitions (what’s a 64-bit integer in JavaScript? what’s a JavaScript exception in C++?) or the same memory model (how do you handle a JavaScript object holding a reference to a C++ object that C++ might wish to delete from memory?) or the same concurrency model (JavaScript workers share nothing while C++ threads share everything).

Gecko itself was originally designed as thousands of XPCOM components that could each be implemented in C++ or in JavaScript, tested individually, plugged, unplugged or replaced dynamically, and it worked. In addition, the XPCOM architecture made for much cleaner C++ programming than was available at the time, worked on dozens of platforms, and let us combine the convenience of writing code in JavaScript and the raw speed permitted by C++. "

---

IRIS interop:

" Rich native-language interface metadata and clean decoupling of underlying primitive functions should enable partial/full transpilation of native iris code to optimized Swift code, e.g. eliminating redundant native➞primitive➞native bridging coercions so that Swift functions can pass Swift values directly.

e.g. Consider the expression:

“HELLO, ” & uppercase my_name

Handler definitions:

to uppercase {text as string} returning string requires { }

to ‘&’ {left as string, right as string} returning string requires { can_error: true swift_function: joinValues operator: {form: #infix, precedence: 340} }

Swift functions:

func uppercase(text: String) -> String { return text.uppercased() }

func joinValues(left: String, right: String) throws -> String { return left + right }

Generated Swift code, obviating the unnecessary String➞Text➞String?