Steps:
- SDK location: ~ag/ on Linux and Mac, %USERPROFILE%/ag on Windows.
- Install build tools: sudo apt-get install build-essential (Linux), xcode-select --install (Mac), or open the "x64 Native Tools Command Prompt" (Windows).
- CD to your working directory (where all build files and the final executable will be placed, and where your program starts).
From ag/examples/work-dir, run ..\..\bin\run-release.bat ..\helloWorld.ag on Windows, or ../../bin/run-release.sh ../helloWorld.ag on Linux and Mac.

Other examples:
- fizzBuzz - demonstrates lambdas, loops, variables
- bottles - prints the text of 100 wine bottles, demonstrates string interpolation
- graph - demonstrates building a graph data structure and traversal with loop detection (this example is intended to show that Argentum can easily handle data structures with loops)
- cardDom - demonstrates operations on a rich DOM of an imaginary document editor, discussed in a series of LinkedIn publications and other resources
- threadTest - showcases multithreading and asynchronous message passing
- sqliteDemo - run it as run-release sqliteDemo from work-dir, where it queries a mydb.sqlite database and prints its content to the console.

To build with debug info, instead of run-release.sh/bat we use build-debug.sh/bat.
An Argentum test is a special global function, defined not with the keyword fn but with the keyword test.
Currently tests have no parameters and no result. In later language revisions the elements in parentheses and between the parentheses and curly braces will be used for various test attributes, but for now we define our tests as functions with no parameters and no result (see line 7 of the following example).
using json;
using tests { assert }
...
class MyClass { ... }
fn myFn(s str) int { ... }
test myFnTest (){ //<-- test
assert(myFn("Some input data") == 42);
...
}
log("application started {myFn("some real data")}");

The test body can contain whatever you want: instantiate classes, call functions, perform any actions. There is a handy function in the tests module: assert, which checks a condition and ends the application on failure.
When you build and run an application that has tests (no matter whether in release or debug mode), the application runs as if it contained no tests at all. In the above example, the application just prints "application started...".
In order to use tests, we need to add the command line parameter -T with the argument "." to the compiler invocation:
agc -src "./myAppDir" -start myModule -O3 -o "t.obj" -T .

In this case the compiler completely ignores the main application entry point and instead builds a special application that executes each test defined in this module and in all modules directly or indirectly used from it.
For each test, our compiled program first prints the test's name to the log in the form "moduleName_testName", and then executes the test body. So you (or the automation tools you use) can always tell which test is broken.
Your test can be simple single-threaded code that synchronously performs actions and quits when the function returns, or it can register a main application object and initiate execution in an asynchronous or even multi-threaded way:
test asyncTest() {
t = Object; // create a dummy object instance
sys_setMainObject(t); // register it as an app global state
t ~~ asyncAction(){ // post it an asynchronous deferred action
sys_log("async hello"); // do something inside this action
sys_setMainObject(?Object); // reset global state to null (this quits our async test)
}
}

If we compile, link, and run this test, it prints "async hello". So tests are not just plain functions; they are miniature application entry points. Such async tests execute one after another: when the previous asynchronous test sets an empty global state, instead of quitting the application, Argentum runs the next test. By the way, even though each test manages its own asynchronous and multi-threaded mode, all tests share the same set of global constants, which are initialized before the first test and destroyed after the last one. So constants are shared between tests.
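The sequencing described above can be sketched in JavaScript. All names here (setMainObject, post, runTests) are hypothetical stand-ins for Argentum's runtime machinery, not its actual implementation:

```javascript
// Minimal sketch: each test may register a "main object" and post deferred
// actions; the runner drains the action queue until the test clears the
// main object, then moves on to the next test.
const log = [];
let mainObject = null;
const actionQueue = [];

function setMainObject(obj) { mainObject = obj; }
function post(action) { actionQueue.push(action); } // rough "~~" analogue

function runTests(tests) {
  for (const [name, body] of tests) {
    log.push(name);          // the runner logs "moduleName_testName" first
    body();                  // the test registers state / posts actions
    while (mainObject !== null && actionQueue.length > 0) {
      actionQueue.shift()(); // drain deferred actions until state is cleared
    }
  }
}

runTests([
  ["myModule_asyncTest", () => {
    setMainObject({});       // register app global state
    post(() => {
      log.push("async hello");
      setMainObject(null);   // empty state ends this test; runner continues
    });
  }],
  ["myModule_syncTest", () => { log.push("plain test body"); }],
]);
```

After the run, `log` holds the test names interleaved with their output, in order.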
Sometimes you don't want to include all tests from all used modules, or you want to compile one specific test or a group of tests. This is feasible. The command line key -T has a parameter, for which we passed a dot (".") in our first example. It's actually a regular expression, and the compiler adds to the executable only those tests whose full names contain a match of this regular expression:
agc ... -T . <-- all tests
agc ... -T "array_*" <-- all tests from module array
agc ... -T "network_json*" <-- all JSON-related tests of the network module
agc ... -T myModule_asyncTest <-- one exact test

The Argentum compiler automatically excludes from the production (non-test) executable all functions that haven't been called directly or indirectly starting from the entry point. It also excludes all classes that haven't been instantiated and all methods that are not used. This means that you can freely create functions and classes needed only for testing purposes; they will be excluded from the actual application build. And the opposite is true as well: when you build in test mode, only the selected tests and their call graphs are included in the test build.
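As a rough model of the -T filtering step, the name matching can be sketched in JavaScript (an illustration of the selection rule, not the compiler's actual code):

```javascript
// Keep only tests whose full "module_test" name contains a match
// of the given regular expression, mirroring the -T parameter.
function selectTests(allTests, pattern) {
  const re = new RegExp(pattern);
  return allTests.filter(name => re.test(name));
}

const tests = ["array_push", "array_pop", "network_jsonParse", "myModule_asyncTest"];
selectTests(tests, ".");                  // all tests
selectTests(tests, "array_");             // all tests from module array
selectTests(tests, "myModule_asyncTest"); // one exact test
```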
The Argentum compiler allows defining multiple source-code directories as command-line parameters, and when it searches for a module, it checks these directories in the given order. This means that we can create a set of mocking modules for the http-client, file systems, databases etc. and put them in a special mocks directory. Now all we have to do is call the compiler with the needed directories and test names:
agc -src ./mocks -src ./myapp ... -T myTestUsingMocks

This allows us to create an executable statically linked to the mock classes and mock functions defined in the mock modules.
It is common to have an extended API in mock classes that allows tuning the behavior of the mocks. And the opposite: we usually do not implement the full API of real classes in mocks.
This is not a problem in Argentum: when it compiles in production mode, it parses all tests but doesn't check them for name resolution or type integrity; it merely requires the tests to be free of basic syntax errors. If a test calls a nonexistent function or uses nonexistent data types, this doesn't prevent the compiler from building the production code, because test code is never present in the production executable. This allows test code to use an API that exists only in mock classes.
On the contrary, when compiling in test mode, Argentum skips the main program entry point and compiles the call tree of each test, which can freely use the extended API of mocks.
Since both mock modules and test names are defined in the same command line, they can be automated using the same CI/CD scripts.
Tests are just applications. They can be compiled with or without debug information, with or without optimizations. If the test suite fails, perform the following steps:
- Build the test executable in debug mode (e.g. with bin\build-debug-ffi.bat)
- Set a breakpoint in src\runtime\ag-assert.c, in the ag_fn_tests_assert function, and debug from there.

Let's start with a JavaScript example:
// JavaScript can iterate arrays using for loops...
for (const polygon of root) {
for (const point of polygon.points) {
point.x += point.y;
point.y *= 100;
}
}
// Another way, using lambdas:
root.forEach(polygon => {
polygon.points.forEach(point => {
point.x += point.y
point.y *= 100;
});
});

Why is it sometimes desirable to use lambdas over loops?
- map, reduce, and filter are better implemented as a set of functions than included in the language itself.
- .map encourages working with immutable data and allows chaining.

On the other hand, lambdas have one big disadvantage: handling control flow.
All a lambda can do is return from itself, effectively performing a continue on the innermost loop.
// JavaScript can iterate arrays using for loops, and use break/continue/return
//
function processPoints(root) {
for (const polygon of root) {
for (const point of polygon.points) {
if (point.x == 0)
return processPoints; // lambdas cannot do that
point.x += point.y;
point.y *= 100;
}
}
}

Argentum lambdas are different:
fn processPoints(root Array(Polygon)) {
root.each `polygon {
polygon.points.each `point {
point.x == 0.0 ?
^processPoints; // << conditionally return from processPoints
point.x += point.y;
point.y *= 100.0
}
};
}

In line 4 we conditionally return from the processPoints function.
If we stop our application at this line in a debugger, we'll see multiple intermediate stack frames between the current lambda and the processPoints function (if they haven't been inlined by the compiler, of course):
- the processPoints function
- the Array.each method of the polygons array
- the root.each body
- the Array.each of one of the points arrays
- the polygon.points.each body

Despite being separated from its target by various stack frames, this return expression does its work as expected.
It all looks like stack unwinding and exception handling, but it's not quite the same. In languages like C++ and Java, code optimization is focused on the normal execution path, often at the expense of making exception handling more complex. An exception is treated as a genuine application error, so during exception handling, stack-trace information is usually gathered, numerous dynamic memory allocations are performed, and, in general, this code path is significantly less optimal. Therefore, using exception handling for the natural execution path of a program is not recommended. As a result, languages often introduce separate techniques, such as data types like Result and Error-Or, and checks of result codes. Sometimes company code styles prohibit exception handling altogether because of its added runtime cost.
Argentum uses an approach similar to return codes, but it's invisible to the software developer. If a lambda can break to outer scopes, the compiler wraps the result of this lambda in an optional&lt;T&gt;. In most cases these optional wrappers are zero-cost abstractions. We can say that behind the scenes, code with far-breaks gets converted into something like this:
class Array{
// This method takes a lambda parameter so its return value (void)
// gets wrapped with optional and becomes `bool` (aka optional<void>).
each(body(T)bool) bool {
forRange(0, size()) `i {
body(this[i]) : ^each=false // if body returns false, exit the `each` method
};
true // Normal return.
}
}
fn processPoints(root Array(Polygon)) {
root.each `polygon {
polygon.points.each `point {=innerBody
point.x == 0.0 ? ^innerBody=false;
point.x += point.y;
point.y *= 100.0;
true // normal exit returns true
}
}
}

This code, give or take, is similar to the "result codes" approach that C programmers use to implement breaks from inside functions. So Argentum has approximately the efficiency of C/C++ code that avoids exceptions. The only difference is that Argentum generates all the necessary code automatically.
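The same lowering can be written out by hand in JavaScript, with booleans playing the role of optional&lt;void&gt; (an illustration of the result-code technique, not Argentum's actual generated code):

```javascript
// each() returns false if the body requested an early exit, true otherwise.
function each(array, body) {
  for (const item of array) {
    if (!body(item)) return false; // body returned false: unwind one level
  }
  return true; // normal completion
}

function processPoints(root) {
  each(root, polygon =>
    each(polygon.points, point => {
      if (point.x === 0) return false; // "far return" propagates outward
      point.x += point.y;
      point.y *= 100;
      return true; // normal exit
    })
  );
}

const data = [{ points: [{ x: 1, y: 2 }, { x: 0, y: 5 }, { x: 3, y: 4 }] }];
processPoints(data);
// The first point is processed; the zero X stops all remaining work.
```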
Side note: In the future Argentum compiler will get devirtualization and inlining passes, and the above use cases will be heavily inlined whenever possible, which would improve multilevel returns even more.
As shown in the previous section, functions with lambda parameters can play the role of any imaginable control flow operator, because in Argentum lambdas can break, continue, and return to arbitrary levels.
The other scenario where lambdas are useful is error-handling.
Sometimes, we call functions that might encounter issues during their execution, and these issues need to be addressed differently depending on the context of the calls. In such cases, we can use a well-known design pattern called "lambda strategy." Here, we pass a lambda-function to our main function, which should be invoked if something goes wrong. Typically, such a lambda attempts to resolve the situation, provide an alternative set of data, or signal the main function about what to do next.
fn safeDivide(a int, b int, onDiv0()int){
b == 0
? onDiv0() // We call the division-by-0-handler and return its result
: a/b // Or we return the normal execution result
}
...
x = safeDivide(a, b, \0); // replace all wrong results with 0
y = safeDivide(a, b, \log("div by zero")->0); // log error and continue with 0
z = safeDivide(a, b, \terminate(-1)); // end the application

In addition to all of the above, the passed lambda can transfer control back to the block where it was defined, canceling the entire process with any desired result.
fn doSomeStuff() {
people = getUserList();
heightToOrder = {
total = people.sum{_.height};
avg = safeDivide(
total,
people.size(),
\^heightToOrder=-1); // If safeDivide encounters 0-divider, we break out of
// heightToOrder initialization block and set its
// variable to -1
// we can also put here ^doSomeStuff to return from the doSomeStuff function
calculateHeight(avg, people)
};
orderShirts(heightToOrder) // one size fits all
}

In this example we start by calculating the average height of the people, but if there are no people in the list, we can either call orderShirts with -1 or skip the whole doSomeStuff. Argentum allows this by breaking out to any outer level.
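The handler-passing part of safeDivide (without the far returns, which JavaScript lambdas cannot do) maps to plain JavaScript:

```javascript
// "Lambda strategy" sketch: the caller decides what happens on division
// by zero by passing a handler. Math.trunc mimics integer division.
function safeDivide(a, b, onDiv0) {
  return b === 0 ? onDiv0() : Math.trunc(a / b);
}

const x = safeDivide(10, 0, () => 0);   // replace wrong results with 0
const y = safeDivide(10, 0, () => {     // log the error and continue with 0
  console.log("div by zero");
  return 0;
});
const z = safeDivide(10, 2, () => 0);   // normal path
```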
In this example the safeDivide function, to which we pass an onDiv0 error handler, is very simple. In real life, though, processes that receive error handlers can be very complex and include arbitrary function calls and multiple levels of recursion. They could even include error handlers that call other error handlers, and inside those handlers we might break outside, unwinding all nested functions and lambdas and returning control to the original function frame with the needed data.
fn initializeOpenGL(onBadVersion(v int)) {
...
version != expected ? onBadVersion(version);
...
}
fn initializeGraphics(onError(str)) {
...
initGraphicsPlatform {
initializeOpenGL { onError("Bad OpenGL version {_}") }
}
...
}
class App{
init() {
...
initializeGraphics {
log("Error {_}");
^init
};
...
}
}

In this example the return in line 19 unwinds its own lambda, the lambdas from lines 8 and 9, initGraphicsPlatform, initializeOpenGL, initializeGraphics, and any invisible functions invoked inside initGraphicsPlatform.
Sometimes such far returns are useful even inside one function:
class JsonParser{
getString() ?str {
error = `message {
log("Json parser error: {message} at {currentPosition}");
^getString=?""
};
toHexDigit = `c {
inRange(c, '0', '9') ? c - '0' :
inRange(c, 'a', 'f') ? c - 'a' + 10 :
error("not a hex digit")
};
current != '"' ? error("expected opening quote");
result = StrBuilder;
...
n = toHexDigit(current);
...
result.toStr()
}
}

In this example we try to parse a string out of the JSON data format. There are multiple ways it could end up with an error, and we want to log them all before returning an empty optional string. So we introduced an error lambda that logs the message and returns, but not from itself: it returns from the outer getString function. This means this lambda never returns to its caller. Such functions have the result type no_ret. This type is compatible with any other type in the type system, so this function can be called on any branch of an if-statement without failing type checks.
In the code above we call this error lambda when needed, sometimes in the nested lambdas, sometimes in the getString function itself.
In the remaining code of our function we only keep track of the successful parsing control paths. We don't do anything to account for errors. We know that in case of an error, Argentum will unwind our lambdas' stack frames and return an empty optional string from getString, and it will do so with the efficiency of manually written C/C++ code that uses result codes.
- a special syntax (^block=result) to allow breaks from inner lambdas to any outer lexical scope.
- break, continue, and return from lambdas residing in control flow functions, effectively erasing the edge between lambdas-in-functions and built-in control statements.

A module is always a single text source file. It starts with a series of using declarations that define dependencies on other modules, with optional imports of class/function/constant names from those used modules.
Argentum SDK is a set of modules residing in a specific directory. There might exist multiple versions of SDK in the same host, so there is no global variable or standard directory name for one system-wide SDK.
Your application is also a module. You build it by calling the agc compiler with these mandatory flags:
agc -src path/to/directory/with/all/modules -start yourAppModule -o outputObjFile.obj

Here -src defines a path to a directory containing your application file and all modules, including the ones from the SDK.
Performance-wise it's OK to put all modules in one directory: the compiler will access/read/parse only the tree of used modules. It's also OK to put as many classes/functions into a single module as needed, because the compiler will include only the used classes and functions in the resulting compiled program.
Though this single-directory-for-everything approach is OK for compilation efficiency and resulting executable size, it has its disadvantages for project organization. Let's say it works only for very basic scenarios - simple test applications. So a new multi-directory structure was introduced:
Real life apps need to:
This is all feasible, using multiple -src parameters:
agc \
-src ./android/modules \
-src ./sdk/modules \
-src ${COMPANY_DIR}/lib \
-src ~/myApplication/src \
-start myApp -o outputObjFile.obj

There might be many -src parameters defining a list of directories, and the Argentum compiler searches this list for the start module and for all modules it directly or indirectly uses.
Order matters. For example, if the android/modules directory contains a module with the same name as one in the SDK, the compiler uses the one from android/modules because it's listed first.
This list of source directories covers all scenarios listed above.
Sometimes you need to virtually include a module in a specific directory:
The Argentum compiler uses the module name as a file name, adding the ".ag" extension. If the current directory does not contain such a "*.ag" file, the compiler checks for a "*.ag-ref" file with the same name. This file, if it exists, should contain a path to the real ag-file location. This allows sharing modules between libraries or creating virtual libraries containing modules from different places.
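The lookup order can be sketched as a pure function; here directory contents are modeled as plain objects, while the real compiler reads the file system:

```javascript
// Resolve a module name against an ordered list of source directories.
// Each directory maps file names to contents; an "<name>.ag-ref" file
// contains the path of the real .ag file.
function resolveModule(name, srcDirs) {
  for (const dir of srcDirs) {
    if (dir.files[name + ".ag"] !== undefined)
      return dir.path + "/" + name + ".ag";
    const ref = dir.files[name + ".ag-ref"];
    if (ref !== undefined)
      return ref.trim(); // the ref file holds the real location
  }
  return null; // module not found in any -src directory
}

const dirs = [
  { path: "./android/modules", files: { "gui.ag": "..." } },
  { path: "./sdk/modules", files: { "gui.ag": "...", "json.ag-ref": "../shared/json.ag" } },
];
resolveModule("gui", dirs);  // first directory wins, as described above
resolveModule("json", dirs); // redirected through the .ag-ref "sym-link"
```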
Both multiple -src command line params and *.ag-ref "sym-links" are supported in the Argentum build from sources.
- the output/ag directory - separates library-SDK modules from the "examples"

To do so it needs a graphical user interface library:
But this GUI library itself needs a foundation:
So meet an experimental "GuiPlatform" module:
using sys { Blob, log }
using guiPlatform { Canvas, Paint, Rect, Font, Image }
using string;
class MyApp {
+guiPlatform_App {
onStart() {
font.fromName("Arial Bold");
img.fromBlob(Blob.{ _.loadFile("sd.png") : log("Img sd.png not loaded") });
}
onPaint(c Canvas) {
phase += 1s;
c.clear(0xff_ffffffs);
p = Paint;
(x + dx) -> (_ < 0f || _ > w ? dx *= -1f : x := _);
(y + dy) -> (_ < 0f || _ > h ? dy *= -1f : y := _);
c.drawRect(Rect.setXYWH(x-50f, y-50f, 100f, 100f), p.color(0xff_ff0000s));
forRangeFStep(0f, 700f, 4f)`i {
p.color(0xff_008800s | (short(i) + phase));
c.drawLine( i, 0f, 0f, 700f - i, p);
c.drawLine(w - i, 0f, w, 700f - i, p);
c.drawLine( i, h, 0f, h - 700f + i, p);
c.drawLine(w - i, h, w, h - 700f + i, p);
};
c.drawSimpleText((w - 48f) / 2f, (h - 16f) / 2f, "Hello", font, 16f, p.color(0xff_004400s));
c.drawImage((w - 108f) / 2f, (h - 100f) / 2f - 100f, img);
}
onKey(pressed bool, key short, shifts short) {
log("key{pressed?"down":"up"}-{key}-{shifts} ");
pressed && key == 20s ? sys_setMainObject(?MyApp)
}
}
phase = 0s;
x = 100f;
y = 100f;
dx = 1f;
dy = 1f;
font = Font;
img = Image;
}
MyApp.run("Hello AG", 120) // Start GUI app with window title and given FPS

So far this example is tested on Windows using Argentum built from sources on the experimental branch gui_platform.
It's available in playground, so go and try it yourself 🙂
Argentum JSON module provides three separate ways to handle JSONs in your application:
To be specific, we need some common task done in three different ways, allowing us to compare code complexity, allocations, processing speed, and other parameters.
In this post we will parse, modify, and write back a JSON file containing an array of polygons with arrays of points. Something like this:
const xInputJson = "
[
{
"active": false,
"name": "p1",
"points": [
{"x": 11, "y": 32},
{"y": 23, "x": 12},
{"x": -1, "y": 4}
]
},
{
"points": [
{"x": 10, "y": 0},
{"x": 0, "y": 10},
{"y": 0, "x": 0}
],
"active": true,
"name": "Corner"
}
]
";

Our mission, should we accept it, is to modify the X and Y fields of the points this way:
x := x + y
y := y * 100

Our first candidate is the DOM approach, mostly because it is the main and sometimes the only approach common in other programming languages and JSON libraries.
using sys{ Array, log }
using json{ Parser, Writer, JArr, JObj, JNum, read } // <<- additional imports
using array;
// Read. The `root` variable is of type `json_JNode`.
// Please notice that we provide `read` function with a parser object
// which allows us to build DOM data structures out of parts of actual JSON
// calling it in the middle of other types of JSON Parsing
root = read(Parser.init(xInputJson));
// Write it back. Again since we provide `JNode.write` method with a Writer instance,
// we can serialize our DOM data as a part of the other serialization process.
// Also we can fine-tune Writer, producing different JSON formatting.
log(root.write(Writer.useSpaces(2)).toStr());

Reading and writing with DOM is the easiest among all approaches, but let's try to modify this DOM:
root~JArr ? _.each {
_~JObj && _["points"] && _~JArr ? _.each {
_~JObj ? `pt {
pt["x"] && _~JNum ? `x
pt["y"] && _~JNum ? `y {
x.n += y.n;
y.n *= 100.0
}
}
}
};

- Line 1 checks that root is an actual array, and if it is, iterates over it.
- Line 3 checks that each item is an object and stores it in the temporary variable pt.
- Lines 4-5 check that pt has fields "x" and "y" and that they are numeric nodes; on success we store these nodes in the temporary variables x and y.
- Lines 6-7 modify x and y.

Skip one check, and the code won't compile:
- Remove ~JArr in line 1 and you can't iterate, because JNode is not an array and has no each method.
- Remove ? in the same line and you can't call the method, because the typecast operator ~ returns an optional&lt;pointer&gt;, which you need to unwrap with ? to extract the actual pointer before calling a method.

And this is applicable to every statement: your code won't compile until you check all possibly bad corner cases.

You may ask: "Why so many checks?" In JavaScript I can just write:
root.forEach(polygon => {
polygon.points.forEach(point => {
point.x += point.y
point.y *= 100;
});
});

Yes and no. This code could crash if the input data contains unexpected node types. The safe and resilient JavaScript code looks like this:
if (Array.isArray(root)) {
root.forEach(polygon => {
if (polygon &&
typeof polygon === 'object' &&
Array.isArray(polygon.points))
{
polygon.points.forEach(point => {
if (point &&
typeof point === 'object' &&
typeof point.x === 'number' &&
typeof point.y === 'number')
{
point.x += point.y;
point.y *= 100;
}
});
}
});
}

With all these added checks for safety and resilience, the JavaScript code becomes larger and more redundant than the Argentum one (for example, it repeatedly accesses the same object fields over and over, and these field accesses are actually text-key lookups in hash maps).
Other languages for reference:
fn process_dom(root: &mut Value) {
if let Value::Array(polygons) = root {
for polygon in polygons.iter_mut() {
if let Value::Object(polygon_obj) = polygon {
if let Some(Value::Array(points)) = polygon_obj.get_mut("points") {
for point in points.iter_mut() {
if let Value::Object(point_obj) = point {
if let (Some(Value::Number(x)), Some(Value::Number(y))) =
(point_obj.get_mut("x"), point_obj.get_mut("y"))
{
if let (Some(x_val), Some(y_val)) = (x.as_f64(), y.as_f64()) {
*x = json!(x_val + y_val);
*y = json!(y_val * 100.0);
}
}
}
}
}
}
}
}
}

func processDom(_ root: inout Any) {
if var rootArray = root as? [[String: Any]] {
for i in 0..<rootArray.count {
var polygon = rootArray[i]
if var points = polygon["points"] as? [[String: Any]] {
for j in 0..<points.count {
if var point = points[j] as? [String: Any],
let x = point["x"] as? Double,
let y = point["y"] as? Double {
point["x"] = x + y
point["y"] = y * 100
points[j] = point // This COW-fighting is a Swift-specific feature
}
}
polygon["points"] = points // And here
rootArray[i] = polygon // And here
}
}
root = rootArray // And here
}
}
// The above example has exponential complexity in the number of nesting
// levels (O(N^2) in this case of 2 levels), because Swift arrays and maps
// have value semantics.

void processDom(nlohmann::json& root) {
if (root.is_array()) {
for (auto& polygon : root) {
if (polygon.is_object() &&
polygon.contains("points") &&
polygon["points"].is_array())
{
for (auto& point : polygon["points"]) {
if (point.is_object() &&
point.contains("x") &&
point.contains("y") &&
point["x"].is_number() &&
point["y"].is_number())
{
double x = point["x"];
double y = point["y"];
point["x"] = x + y;
point["y"] = y * 100;
}
}
}
}
}
}
// Please notice that in this example we search a hash map
// by the same string key four times: in lines 10, 12, 15, 17.

It's a good illustration of the distinction between the Argentum programming language and other languages. In other languages you can easily write unsafe and non-resilient code, while making it safer and more robust takes a visible amount of effort. In contrast, Argentum makes it relatively easy to create safe and resilient code, while writing unsafe code is impossible at the syntax and type-check levels.
Anyway, this DOM approach has a number of disadvantages:
That's why it is usually better to read JSON documents directly into application data structures.
This approach has already been described in the posts about the StAX parser and the Streaming Writer. In those posts we made monolithic functions to read and write these data structures. Let's write it another way here:
// First we define application data formats and method of JSON handling:
class Point{
x = 0.0;
y = 0.0;
readField(f str, json Parser) this { // This function handles a single field from JSON
f=="x" ? x := json.getNum(0.0) :
f=="y" ? y := json.getNum(0.0)
}
writeFields(j(str)Writer) { // This function writes all fields to JSON
j("x").num(x);
j("y").num(y)
}
}
class Polygon {
name = "";
points = Array(Point);
isActive = false;
readField(f str, json Parser) this {
f=="active" ? isActive := json.getBool(false) :
f=="name" ? name := json.getStr("") :
f=="points" ? json.getArr\points.append(Point)-> json.getObj`f _.readField(f, json);
}
writeFields(j(str)Writer) {
j("name").str(name);
j("active").bool(isActive);
j("points").arr\points.each`pt _.obj\pt.writeFields(_);
}
}
// Second, add handling of arrays of Polygons:
fn readPolygonsFromJson(data str) Array(Polygon) {
Array(Polygon).{
json = Parser.init(data);
json.getArr\_.append(Polygon)-> json.getObj `f _.readField(f, json);
json.success() : log("parsing error {json.getErrorMessage()}")
}
}
fn writePolygonsToJson(data Array(Polygon)) str {
Writer.useSpaces(1).arr {
data.each `poly _.obj\poly.writeFields(_)
}.toStr()
}

Having these readers and writers for our application data formats, we can make our task as simple as:
xInputJson->readPolygonsFromJson(_).{
_.each\_.points.each {
_.x += _.y;
_.y *= 100.0
}
}->writePolygonsToJson(_)->log(_)

This approach has multiple advantages:
- All input variations are handled in the read methods, and we can encapsulate all input versions and variations in these methods.
- The output format is defined in the write methods.

At the same time this approach has two disadvantages:
There is a third way. Our parser and writer are combinable, so we can create a streaming processing function:
fn process(inText str) str {
in = Parser.init(inText);
out = Writer.useSpaces(2).arr\in.getArr\_.obj\in.getObj`f (
f=="name" ? _(f).str(in.getStr("")) :
f=="active" ? _(f).bool(in.getBool(false)) :
f=="points" ? _(f).arr\in.getArr\_.obj {
x=0.0;
y=0.0;
in.getObj`f (
f=="x" ? x:=in.getNum(0.0):
f=="y" ? y:=in.getNum(0.0));
_("x").num(x + y);
_("y").num(y * 100.0)
});
out.toStr()
}
log(process(xInputJson))
//or more fancy way:
xInputJson->process(_)->log(_)

- Line 3 creates a Writer instance and starts the output array (.arr) while iterating over the input array (in.getArr).
- For each input object we start an output object (_.obj) and iterate over its fields (in.getObj).
- We copy the name and active scalar fields, forcing their data to be string and bool respectively.
- For the points field containing an array, like in line 3 we create an array and fill it with the content of the input array, but this time we don't replicate fields 1-to-1. Instead we read x and y into local variables and write out the modified values (x + y and y * 100).
This approach has a number of advantages:
Unfortunately, this method has a very narrow area of applicability.
I have no idea why, but most existing JSON libraries (in other languages) support only DOM and SAX parsing. In my humble opinion, SAX is the weirdest and most difficult style of API. But it is also supported in the Argentum JSON module, with a small addition:
interface ISaxReader{
onArrayStart();
onArrayEnd();
onObjectStart();
onObjectEnd();
onField(name str);
onNull();
onBool(v bool);
onNum(v double);
onString(v str);
}
parseWithSax(in Parser, r ISaxReader) {
in.tryNum() ? r.onNum(_) :
in.tryStr() ? r.onString(_) :
in.tryBool() ? r.onBool(_) :
in.tryNull() ? r.onNull() :
in.isArr() ? {
r.onArrayStart();
in.getArr\parseWithSax(in, r);
r.onArrayEnd()
} :
in.isObj() ? {
r.onObjectStart();
in.getObj`f {
r.onField(f);
parseWithSax(in, r)
};
r.onObjectEnd()
}
}

This function converts the input JSON into a sequence of calls to the ISaxReader interface. Use it at your discretion.
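For comparison, the same event shape can be produced in JavaScript by walking an already-parsed value (Argentum's version streams straight from the Parser instead of building a tree first):

```javascript
// Convert a JSON value into a sequence of SAX-style callbacks,
// mirroring the ISaxReader interface above.
function walkSax(value, r) {
  if (value === null) r.onNull();
  else if (typeof value === "number") r.onNum(value);
  else if (typeof value === "boolean") r.onBool(value);
  else if (typeof value === "string") r.onString(value);
  else if (Array.isArray(value)) {
    r.onArrayStart();
    for (const item of value) walkSax(item, r);
    r.onArrayEnd();
  } else {
    r.onObjectStart();
    for (const [name, field] of Object.entries(value)) {
      r.onField(name);
      walkSax(field, r);
    }
    r.onObjectEnd();
  }
}

const events = [];
walkSax([{ x: 1 }], {
  onNull: () => events.push("null"),
  onNum: v => events.push("num:" + v),
  onBool: v => events.push("bool:" + v),
  onString: v => events.push("str:" + v),
  onField: n => events.push("field:" + n),
  onArrayStart: () => events.push("["),
  onArrayEnd: () => events.push("]"),
  onObjectStart: () => events.push("{"),
  onObjectEnd: () => events.push("}"),
});
// events: ["[", "{", "field:x", "num:1", "}", "]"]
```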
Sometimes in the middle of stream processing or StAX parsing it gets needed to parse some subtree (array item or specific field) in a pass-through manner, producing a text string of this sub-JSON. This code could help:
fn scan(in Parser, out Writer) {
in.tryNum() ? out.num(_) :
in.tryStr() ? out.str(_) :
in.tryBool() ? out.bool(_) :
in.tryNull() ? out.null() :
in.isArr() ? out.arr\in.getArr\scan(in, _) :
in.isObj() ? out.obj\in.getObj`f scan(in, _(f));
}

Give this function a Parser positioned at a subtree and a Writer, and it produces text with filtered, normalized, and formatted JSON representing this subtree.
This function can also be applied to a full JSON document. It is useful to compactify/tabify/indent/unindent various JSONs:
fn compactify(inJson str) str {
Writer.{scan(Parser.init(inJson), _)}.toStr();
}
fn tabify(inJson str) str {
Writer.useTabs().{scan(json_Parser.init(inJson), _)}.toStr();
}

You can also mix the approaches: read fields into application data structures with getObj\readField, or use the scan function from the previous topic to extract a subtree as text. Combine these methods depending on your goal.
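For comparison, JavaScript does the same compactify/indent round trip through a full in-memory DOM; JSON.parse builds the whole tree, unlike the streaming scan above:

```javascript
// parse-then-stringify round trip: same data, different formatting.
function compactify(inJson) { return JSON.stringify(JSON.parse(inJson)); }
function indentify(inJson) { return JSON.stringify(JSON.parse(inJson), null, 2); }

const src = '{ "x": 1,\n  "y": 2 }';
compactify(src); // '{"x":1,"y":2}'
indentify(src);  // the same object, indented with 2 spaces
```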
The Argentum JSON module allows processing data in multiple ways: Streaming, DOM, SAX, StAX, direct copy, and all combinations of the above.
The Argentum build is integrated with vcpkg, so it automatically installs and builds all dependencies for all platforms in all configurations. Argentum dependencies include LLVM, Skia, SDL, Curl, SqLite and other libraries.
The initial build takes ≈60GB of storage, requires 16+GB RAM, and depending on the device configuration can take up to 6 hours.
All subsequent rebuilds take minutes.
sudo apt-get update
sudo apt-get upgrade
sudo apt-get install cmake build-essential ninja-build linux-libc-dev pkg-config
sudo apt-get install autoconf-archive libgl1-mesa-dev
sudo apt install autoconf automake libtool

mkdir ~/cpp && cd ~/cpp
git clone https://github.com/microsoft/vcpkg.git
cd vcpkg && ./bootstrap-vcpkg.sh

cd ~/cpp
git clone https://github.com/karol11/argentum.git
cd argentum
cmake --preset default -DCMAKE_BUILD_TYPE=Release
cmake --build build

The first of the two commands downloads, patches, and builds all dependencies. It takes hours and may fail depending on your distribution and OS version. If this happens, read the log/output and apt install the needed dependencies.
The second one actually builds Argentum; it takes a minute or two and upon completion gives you an output directory identical to the one in the Windows build.
cd output/workdir/
../bin/run-release.bash threadTest

TBD: other apps, vscode, debugging etc.
Launch VSCode, File->Open Folder, and choose the Argentum directory:
- or the output directory, if you built Argentum from sources.

Create a httpJsonDbDemo.ag file in the ag subdirectory.
using sys { log, setMainObject }
using string;
using httpClient { get, Response }
using json { Parser }
using sqliteFfi { Sqlite }

Our application performs asynchronous network requests and as such needs a global state that stays alive for the whole application lifetime. In Argentum this global state must be encapsulated in an object:
class App { // Our App class
db = Sqlite; // it will hold a DB connection,
} // we could add other global items here later
app = App; // Let's create the app instance
setMainObject(app); // and register it inside the Argentum runtime
The Sqlite class has an open method that takes a path to a DB file and a set of flags. Useful named constants will be added in the future, but for now 2 means read-write access. open returns a bool indicating whether the database opened successfully.
app.db.open("mydb.sqlite", 2) ?
log("Connected to the DB!")
Let's replace this log call with an HTTPS query, since it should be performed only if the DB opens successfully.
The httpClient module has a get function that performs GET requests. (It also exports a Request class that allows performing arbitrary requests with any headers and verbs, but all we need now is a simple get.)
This get function requires a URL and a delegate: an Argentum callable value type that combines a weak reference to a context object with a function.
Delegate functions can access their context object using the this keyword (or access its fields and methods directly).
Since delegates store weak pointers to their context, Argentum locks and checks this context pointer when they are called, and skips the call if the context object is dead. This makes delegates a perfect fit for pub/sub patterns. A delegate can also be called asynchronously across threads; this automatically runs the function on the thread of its context object. In fact, our delegate will be called from the HTTP transport thread.
For debugging/tracing purposes and future serialization, every delegate must have a name unique within its module.
In our example we create a delegate attached to our app object and name it onFetched. The HTTP client calls this delegate with a Response parameter. For now, we take the byte array from the response body, build a text string out of it, and print it to the console.
Since this HTTP call concludes our application's activity, we gracefully terminate the application by setting an empty optional value as the main runtime object. This assignment destroys the previous application state and exits the application.
For this tutorial we'll call a mock endpoint generously provided by beeceptor.com:
app.db.open("mydb.sqlite", 2) ?
get("https://fake-json-api.mock.beeceptor.com/users", app.&onFetched(resp Response){
log("Data fetched {resp.body.mkStr(0, resp.body.capacity())}");
setMainObject(?App);
});
At this point we can compile and run our application:
Run bin\run-release httpJsonDbDemo in the console.
The resulting JSON contains an array of objects with id, name, and email fields. Let's extract them.
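The response body looks roughly like this (an illustrative fragment based on the console output shown below; the live mock data may contain more fields and records):

```json
[
  { "id": 1, "name": "Kirk Bernier", "email": "Kirk@hotmail.com" },
  { "id": 2, "name": "Joshua Lynch", "email": "Joshua@gmail.com" }
]
```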
We call the getArr method on our parser and provide a lambda that will be called once per array item. This lambda should handle the array item object, extract its fields, and store them somehow. For simplicity, let's do it the other way: extract the fields into local variables and process them right in the lambda.
This eliminates the need for an intermediate object.
json = Parser.init(resp.body.mkStr(0, resp.body.capacity()));
json.getArr {
id = 0;
name = "";
email = "";
json.getObj {
_=="id" ? id := int(json.getNum(0.0)) :
_=="name" ? name := json.getStr("") :
_=="email" ? email := json.getStr("");
};
log("{id} {name} {email} ")
};
setMainObject(?App);
Launch the app again and see the output:
1 Kirk Bernier [email protected]
2 Joshua Lynch [email protected]
3 Terrill Howell [email protected]
4 Mozell Emard [email protected]
Let's replace the log call with this code:
db.query("
INSERT INTO "table"("id", "name", "avatar") values(?,?,?)
", 0).setInt(1, id)
.setString(2, name)
.setString(3, email)
.execute{_}
Here we access the db field of the app object (we are in a delegate connected to the main app object), create an SQL query with three parameters, fill them with the extracted values, and execute it (the {_} construct is an empty lambda with one parameter).
That's it. Our application fetches JSON over HTTPS, extracts the fields, and stores them in the SQLite database.
Let's make one small change: move the DB query creation out of json.getArr, because the query should be compiled once, not recompiled for every JSON array item.
The full final listing:
using sys { log, setMainObject }
using string;
using httpClient { get, Response }
using json { Parser }
using sqliteFfi { Sqlite }
class App {
db = Sqlite;
}
app = App;
setMainObject(app);
app.db.open("mydb.sqlite", 2) ?
get("https://fake-json-api.mock.beeceptor.com/users", app.&onFetched(resp Response){
query = db.query("
INSERT INTO "table"("id", "name", "avatar") values(?,?,?)
", 0);
json = Parser.init(resp.body.mkStr(0, resp.body.capacity()));
json.getArr {
id = 0;
name = "";
email = "";
json.getObj {
_=="id" ? id := int(json.getNum(0.0)) :
_=="name" ? name := json.getStr("") :
_=="email" ? email := json.getStr("");
};
id !=0 && name !="" && email != "" ? query
.setInt(1, id)
.setString(2, name)
.setString(3, email)
.execute{_}
};
setMainObject(?App)
});
Sometimes you need to handle arbitrary JSON whose structure is not known in advance. For this purpose the Argentum JSON module supports a JSON DOM:
using sys { log }
using json { Parser, Writer, read }
// This is a JSON we work with
text = "
{
"x": 1,
"z": {"a":"sss"},
"y": "asdf"
}
";
// Read it to the DOM, where `root` is a root node
root = read(Parser.init(text));
// Write it back to JSON and print it:
log(root.write(Writer).toStr());
// This prints: {"z":{"a":"sss"},"x":1,"y":"asdf"}
// Reads JSON (or the subtree at which the parser is positioned)
// into a tree of DOM elements.
// Parser interface can be used to check for completeness and errors.
fn read(input Parser) @Node;
// A common interface for all DOM nodes.
// All JSON DOM nodes can `write` to the writer.
interface Node {
write(output Writer) Writer;
}
// Classes that represent different JSON node types:
class JNull{ +Node; } // Represents null nodes
class JNum{ +Node; n = 0.0; } // Numeric node
class JStr{ +Node; s = ""; } // String node
class JBool{ +Node; b = false; } // Boolean node
class JArr{ +Node; +Array(Node); } // Array node (it's just an Argentum array of nodes)
class JObj{ +Node; +Map(String, Node); } // Object node (it's just an Argentum map String->Node)
// Functions that help build JSON nodes:
fn jnull() @JNull; // Creates a new null node.
fn jnum(n double) @JNum; // Creates a numeric node, sugar for JNum.{ _.n := n }
fn jbool(b bool) @JBool;
fn jstr(s str) @JStr;
fn jarr(itemMaker(JArr)) @JArr; // Creates an array: jarr{ _.append(jbool(false)... }
fn jobj(fieldMaker(JObj)) @JObj; // Creates an object: jobj{ _["x"]:=jnum(42) }
In this example we read a DOM and access its elements. It's worth mentioning that since we read the data expecting a particular structure in it, it really should be processed in StAX mode, not DOM mode; but since it's hard to make up a concise DOM usage scenario, let it be this way.
// Source JSON - a table represented by an array of objects
text = "
[
{"name": "Andrey", "height": 6.5},
{"name": "Katy", "height": 5.8}
]
";
// Read
root = read(Parser.init(text));
// Scan and print:
// Here we first check if the root object is an array
// and if so, iterate over it, processing only array items that are JSON objects
// For each item `i` we check if it contains two fields
// and if fields are of type string and numeric node.
// If all checks succeed, we print their values.
root~JArr ? _.each { _~JObj ? `i{
i["name"] ? _~JStr ? `name
i["height"] ? _~JNum ? `height
log("{name.s}-{height.n} ")
}};
// This prints: Andrey-6.5 Katy-5.8
DOM containers are internally just standard Argentum arrays and maps; they don't use any specialized JSON API. All leaf nodes hold mutable data fields that can be accessed and modified directly.
text = "
{
"x": 1,
"z": {"a":"sss"},
"y": "asdf"
}
";
root = read(Parser.init(text));
root~JObj ? `r // if root is an object
r["x"] ? _~JNum ? `xn // and it has a numeric field `x`
r["z"] ? _~JObj ? // and an object field `z`
_["v"] := jnum(3.14 + xn.n); // .. put in `z` a new numeric field `v`
log(root.write(Writer.useSpaces(2)).toStr());
This prints:
{
"z": {
"a": "sss",
"v": 4.14
},
"x": 1,
"y": "asdf"
}
99.9% of JSON scenarios don't require a DOM. It's more efficient, and easier, to read JSON into your application data structures using the StAX parser and the streaming Writer.
JSON DOM is needed only if you are building an application that edits/formats/processes arbitrary JSONs unrelated to your application data structures. Such applications can add the methods they need (focus/layout/render handling, etc.) directly to the Node interface and the J* classes, flattening the class hierarchy.
This JSON module with DOM, parser and Writer can be used in Argentum built from sources, and in the playground. It is not yet integrated into the binary demo.
using sys { log }
using json { Writer }
w = Writer;
w.num(3.14);
log(w.toStr());
// or just
log(Writer.num(3.14).toStr());
// Both variants print 3.14
Writer has a number of methods that write structural and primitive data nodes.
After data is written, a call to toStr() returns the whole created JSON as a string.
For convenience, all Writer methods return this, mostly to allow calling toStr() at the end.
writer.null()
writer.bool(true)
writer.num(123.4e-10)
writer.str("Random text")
Numeric nodes represent 52-bit integers as exact values and use exponent notation where possible. Note that JSON numbers are always doubles by the standard. If you need to store anything exceeding 52 bits, use strings.
Strings are represented with utf8 runes for all characters 0x20..0x1ffff and escapes for characters 0x1..0x1f.
log(Writer.str("trn\
"Hello"
\there/
").toStr())
//This code prints "\t\"Hello\"\r\n\t\\there\/\r\n"
Argentum's multi-line string with the "trn\" formatter prepends each line with a tab, ends it with CR LF, and adds the line ending after the last line, as described in: multiple string literals.
The arr method expects a lambda parameter that writes the whole array content. This lambda receives one parameter: a reference to the Writer that should be used to write the array items. In the following example we pass the lambda as a {}-block and access its parameter via the default "_" name:
log(Writer.arr{
_.null();
_.num(42);
_.num(11);
_.str("Hi");
_.bool(false)
}.toStr())
// This example prints: [null,42,11,"Hi",false]
Only the Writer passed into the arr method can write multiple JSON nodes; in contrast, the writers for the root element and for field data ignore all calls beyond the first one.
The arr method can be called from the lambda of another arr call to create arrays nested inside other arrays:
log(Writer.arr{
_.num(1.1);
_.arr{
_.bool(true);
_.bool(true);
};
_.arr{
_.bool(false);
_.bool(false);
};
_.num(1.2);
}.toStr());
// This example prints [1.1,[true,true],[false,false],1.2]
Or using this-chaining:
log(Writer.arr{_
.num(1.1)
.arr{_.bool(true).bool(true)}
.arr{_.bool(false).bool(false)}
.num(1.2)
}.toStr());
The obj method writes objects. It expects you to provide an object-writer lambda.
Your object-writer lambda receives one parameter: a field-writer lambda that can be called multiple times, each time with a field-name string.
Each call to the field-writer lambda returns a Writer that can be used to write that field's value.
Example:
log(Writer.obj {
_("year").num(1972.0);
_("name").str("Andrey");
_("details").obj { // nested object
_("awake").bool(true);
_("excels at").arr{_}; // empty array
};
_("address").null();
}.toStr());
// Prints {"year":1972,"name":"Andrey","details":{"awake":true,"excels at":[]},"address":null}
This example also demonstrates nested objects and empty arrays.
If you don't write a field value between calls to the field-writer lambda, the Writer automatically makes that field null.
By default the JSON Writer produces compact JSON, but this can be overridden: call the useTabs() or useSpaces(count) methods to make the Writer format its output with line breaks and indentation:
log(Writer.useSpaces(2).obj {
_("year").num(1972.0);
_("name").str("Andrey");
_("details").obj {
_("awake").bool(true);
_("excels at").arr{_};
};
_("address").null();
}.toStr());
// It prints:
// {
// "year": 1972,
// "name": "Andrey",
// "details": {
// "awake": true,
// "excels at": []
// },
// "address": null
// }
Let's assume that we have these classes:
class Point{
x = 0f;
y = 0f;
}
class Polygon {
name = "";
points = Array(Point);
isActive = false;
}
This function writes an array of Polygons to JSON:
fn polygonsToJson(data Array(Polygon)) str {
Writer.useTabs().arr {
data.each `poly _.obj{
_("name").str(poly.name);
_("active").bool(poly.isActive);
_("points").arr\poly.points.each `pt _.obj {
_("x").num(double(pt.x));
_("y").num(double(pt.y))
}
}
}.toStr()
}
log(polygonsToJson(myPolygonArray));
// Depending on the content of the myPolygonArray
// this example could print:
// [
// {
// "name": "Corner",
// "active": false,
// "points": [
// {
// "x": 10,
// "y": -100.01
// },
// {
// "x": 0,
// "y": 0
// },
// {
// "x": -42,
// "y": -11
// }
// ]
// },
// {
// "name": "A dummy one",
// "active": true,
// "points": []
// }
// ]
As with the JSON Parser, this Writer entirely skips creating intermediate JSON DOM structures, roughly halving memory usage and cutting CPU overhead about threefold.
This JSON module with its parser and Writer can be used in Argentum built from sources, and in the playground. It is not yet integrated into the binary demo.