J. Ryan Stinnett https://convolv.es/ Recent content on J. Ryan Stinnett Hugo -- gohugo.io en-gb Fri, 01 Nov 2024 16:10:32 +0000 Extensibility via capabilities and effects https://convolv.es/blog/2024/11/01/capabilities-effects/ Fri, 01 Nov 2024 16:10:32 +0000 https://convolv.es/blog/2024/11/01/capabilities-effects/ Much of today’s software limits user extensibility. This article explores one potential route forward using capabilities, effects, and extension-time type checking to provide a more predictable extension path. This is a submission for the fearless extensibility challenge problem organised by the Malleable Systems Collective.


Much of today’s software limits user extensibility. If you’re lucky, there may be a plugin system of some kind, but that will only support whatever actions the upstream vendor imagines and deigns to support. If it’s open source, you could fork and customise, but that’s not accessible to most. Even if you have the expertise, it entails a pile of maintenance work to stay up to date. If it’s closed source, you’re essentially out of luck.

There have been some historical extension systems that allowed a high degree of freedom (e.g. legacy Firefox extensions). As mentioned by the challenge problem, while those approaches may offer a high degree of user freedom, they also open the door to malware and create maintenance issues for the extension host.

This article explores one potential route forward using capabilities, effects, and extension-time type checking to provide a more predictable extension path for users, extension authors, and platform maintainers alike.

Disclaimers and assumptions

I’ve perhaps already spooked the dynamic language fans with words like “type checking” and “effect” above… 😅 I’m not attempting to claim that static types are the only way here. I just returned from SPLASH (an academic PL conference) where several statically typed effects systems were in the air, so my thoughts just happen to be biased in that direction at the moment. I’ve scribbled a few more thoughts about a dynamic version towards the end in the Implementation section.

This article focuses on cases where the source (or a typed IR derived from source) for both the extension and the extension host are available, as tooling would need to analyse the combination of both.

Goal

The key ability we wish to achieve is arbitrary extension / modification of the extension host while preserving safe and correct operation overall and also permitting host maintainers to refactor without fear.

As an example, let’s imagine the host program we want to extend is a graphics canvas (e.g. akin to Figma). This host program has a built-in color picking feature that displays a UI to choose a color which is then added to the recent colors palette. Our extension author would like to extend color picking so that all colors are adjusted to meet accessibility standards.

// host
def pickColor() {
    val color = colorPicker.choose()
    palette.add(color)
}

// extension
def onClickPick() {
    // We want to call `host.pickColor`,
    // but we need to adjust `color` before it goes into the palette
}

The only accessible and relevant API the host offers for extensions to use for this case is the pickColor function, but our extension wants to add behaviour in the middle of pickColor, so we can’t use it as-is.

Dynamic languages might allow host functions like pickColor to be copied by the extension and redefined, but this is too broad for the change we wish to make. The extension now needs to keep its modified pickColor up to date with changes upstream in the host copy, even though they aren’t related to the behaviour it’s adding. From the host maintainer perspective, you don’t feel that you can safely refactor your code, since every extension might contain old copies of host functions that could break after your refactoring.
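To make the fragility concrete, here is a sketch in Python (purely illustrative; `Host`, `pick_color`, and `adjust_for_accessibility` are hypothetical stand-ins, not any real API) of the whole-function-copy approach:

```python
class Host:
    def __init__(self):
        self.palette = []

    def choose(self):
        return "#a1b2c3"  # stand-in for the colour picker UI

    def pick_color(self):
        color = self.choose()
        self.palette.append(color)

host = Host()

def adjust_for_accessibility(color):
    return color.upper()  # placeholder for a real contrast adjustment

# The extension replaces pick_color with a modified *copy* of its body.
# If the host later refactors pick_color, this copy silently diverges.
def patched_pick_color():
    color = host.choose()
    color = adjust_for_accessibility(color)  # the one line we wanted to add
    host.palette.append(color)

host.pick_color = patched_pick_color
host.pick_color()
print(host.palette)  # the adjusted colour ends up in the palette
```

Note that the extension now carries the entire body of `pick_color`, even though it only cares about one line in the middle.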

We’d like to express the intent of the extension’s behaviour change in a targeted and precise manner that avoids these issues.

Effects and capabilities

Before we get there, let’s talk about effects.

Effects are a relatively new programming language concept that allows user-defined side effects (e.g. IO, memory access, exceptions) to be tracked as types and also supports effect handlers that take some action when these effects occur. They’ve been percolating in experimental languages (Koka, Effekt, Unison) for a while now, and are starting to appear in more established ones (Scala, OCaml).

If you haven’t encountered effects before, I suggest skimming the Effekt language site, as they have an approachable intro to the key concepts. I don’t think I can do it justice myself, and introducing effects is beyond the scope of this article anyway.

Effect handlers can resume the computation that was suspended when the effect occurred, and may even resume multiple times. Effects and their handlers generalise many forms of control flow, including exceptions, generators, and multithreading, allowing libraries to flexibly provide these features, rather than requiring language designers to add special functionality for each one.

Various works have made connections between effects and capabilities. A function that has e.g. a “file read” effect can be thought of as requiring a “file read” capability. Effect handlers can even be added to some existing languages if we pass the handler / capability as an extra function argument (referred to as “capability-passing style”).
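Capability-passing style can be sketched in plain Python, with all names invented for illustration: the handler for a hypothetical “file read” effect is threaded through as an ordinary argument, so the function’s signature records which capabilities it needs.

```python
def read_config(path, file_read):
    # Without the `file_read` capability, this function has no way to
    # perform the "file read" effect at all.
    return file_read(path)

# A production handler would touch the real filesystem; a test (or
# sandbox) handler can substitute canned data or deny the request.
def fake_file_read(path):
    return {"settings.ini": "theme=dark"}.get(path, "")

print(read_config("settings.ini", file_read=fake_file_read))  # theme=dark
```

Swapping handlers without touching `read_config` is exactly the flexibility that makes the capability/effect view attractive.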

Implicit effects

Effects systems today focus on what I’ll call “explicit” effects: both the code performing the effect and its corresponding handler are written explicitly in the program source. If you want to model additional user-defined effects, you add both the handlers and the effect-performing steps yourself.

We can also imagine “implicit” effects that represent existing language operations. For example, function calls could be treated as an implicit effect. If we then allow handlers to be defined for these implicit effects, we gain quite powerful control over deeply nested code.

Extensions can leverage this ability to make arbitrary changes to the extension host. While that is quite a powerful technique, the static types present in both the host and extension help to ensure reasonable behaviour is maintained by ensuring required values are still provided as expected.

This code modification ability bears some resemblance to the power of aspect-oriented programming. Effect systems (especially those with lexical effect handlers) avoid the “spooky action at a distance” issue that can make AOP approaches hard to understand. Additional tooling can help further by highlighting modified operations in the combined system (host with extensions). Some amount of surprising host control flow seems tolerable when we gain the ability to make arbitrary changes via extensions. Extension-time type checking should ameliorate some concerns by ensuring all modules fit together as expected.

🚧
This would be a good place to show a running example… I’ll try to add one in a future version.
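In the meantime, here is a rough Python simulation of the idea. Python has no effect system, so the “implicit call effect” is modelled by routing host calls through a dispatch function; all names (`perform`, `pick_color`, etc.) are hypothetical.

```python
palette = []

# Default behaviour for each host operation, keyed by a call name.
DEFAULTS = {
    "colorPicker.choose": lambda: "#a1b2c3",
    "palette.add": lambda c: palette.append(c),
}

handlers = []  # innermost handler last

def perform(op, *args):
    # Model of the implicit call effect: the innermost matching handler
    # wins; otherwise the host's default behaviour runs.
    for name, handler in reversed(handlers):
        if name == op:
            return handler(*args)
    return DEFAULTS[op](*args)

def pick_color():  # the host function, completely unchanged
    color = perform("colorPicker.choose")
    perform("palette.add", color)

# The extension: handle the `palette.add` call effect, adjust the
# colour, then resume the default behaviour.
def on_click_pick():
    def adjusted_add(color):
        DEFAULTS["palette.add"](color.upper())  # stand-in adjustment
    handlers.append(("palette.add", adjusted_add))
    try:
        pick_color()
    finally:
        handlers.pop()  # the handler is scoped to this call

on_click_pick()
pick_color()  # outside the handler's scope, behaviour is unmodified
print(palette)  # ['#A1B2C3', '#a1b2c3']
```

The extension expresses only the one-line change it cares about, and the host’s `pick_color` stays entirely under the host’s control.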

Safety via capabilities and isolation

Host maintainers often fear nefarious extensions may perform various undesirable actions. Dangerous abilities (e.g. “delete home directory”) can be avoided by restricting or not providing those capabilities to extension execution contexts.

In a system where extensions can modify deeply nested code, there will likely be a need to isolate extensions both from the host and from each other. For example, extension A handles a call effect to alter host function foo, while extension B depends on foo’s default behaviour. Fortunately, the desired isolation falls out naturally from a lexical effect handler model: extension A’s code modification is only active inside the scope of its handler. It has no effect on other code paths in extension A, and certainly not on other extensions.
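A small Python sketch of the extension A / B scenario, with invented names and handlers simulated by an explicit stack, shows how lexical scoping keeps the two extensions isolated:

```python
import contextlib

handler_stack = []

def foo():
    # Hypothetical host function: the innermost handler, if any,
    # overrides the default behaviour.
    return handler_stack[-1]() if handler_stack else "default"

@contextlib.contextmanager
def handling(fn):
    # A handler is only active within this lexical scope.
    handler_stack.append(fn)
    try:
        yield
    finally:
        handler_stack.pop()

def extension_a():
    with handling(lambda: "altered"):
        return foo()  # A sees its altered foo

def extension_b():
    return foo()      # B still sees the default foo

print(extension_a(), extension_b())  # altered default
```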

It’s likely ideal to go further and ensure execution contexts actually are isolated from each other. Existing concepts like mirrors, compartments, and realms suggest a way forward.

Extension host code changes

In today’s dynamic systems, where extending might mean wholly replacing host functions with modified copies, it can be hard to predict what madness may ensue when the host platform refactors some code. Issues usually only present themselves at run time when the modified host code is somehow invoked; alternatively, the modified code may be silently unused if the extension includes an outdated copy of a modified host function.

These concerns are easily avoided when leveraging static types and code modification via effects. Effects allow for precise code modification, so there’s no need to copy an entire host function just to modify one line. Static types give extension-time assurance that the extension and host continue to fit together in a reasonable way. If a host refactoring creates an incompatibility, it will be clearly surfaced when trying to load the extension (which is far better than waiting until the feature is used).

Open questions

We’ve examined a mechanism to modify code nested beneath some function an extension calls, but what if you need to modify some behaviour that cannot be reached by any function exposed to extensions? At first glance, it would seem like some form of reflection is needed to gain access to these internals. Perhaps a controlled form of reflection using mirrors as capabilities…? I am confident there’s an elegant approach to be found that integrates well with the thoughts on effects and capabilities above, while avoiding the messy approaches of AOP.

Implementation

This approach would seem to require an execution environment that allows dynamically hooking / modifying code in response to changes made by extensions. While various dynamic programming systems like those associated with JavaScript, Smalltalk, Lisp, etc. may have some support for this, it’s less likely to be found in the statically typed languages, as those often assume type checking should be paired with ahead-of-time compilation.

I imagine metaobject protocols (e.g. from Common Lisp, Smalltalk, etc.) could accomplish similar modifications at run time. The approach described here also makes use of capabilities, so Newspeak might be the closest match among dynamic languages.

It’s less clear to me how dynamically typed languages might provide extension-time sanity checks, so that’s a big part of why I focused on a statically typed approach. If you see a way to do something similar in a more dynamic environment, please do let me know!

A few stacks that could allow for type-checked extension-time code modification include:

  • Wasm with GC and stack switching extensions (plus additional metadata)
  • Scala 3 (which preserves a typed AST for metaprogramming)

Are there other technologies that might be well suited to such an approach? Please do let me know, as it may save me quite a lot of time and energy as I explore this idea further!

Summary

In this article, we’ve taken a look at (an in-progress sketch of) one potential extensibility approach. Is it complicated? Yes. Is it over-engineered…? Perhaps. I’m okay with jumping through a few hoops if it will restore deep extensibility while also balancing safety and maintenance concerns.

I’d love to hear feedback on this! Assuming I manage to stay focused on this topic, I’d like to implement an extension system using some of the ideas here. There are clearly some rough edges and likely better alternate approaches, so do let me know what comes to mind.

As already mentioned, there are various languages that support some form of effects, capabilities, or both, so do take a look at those.

There are many resources on effects, capabilities, metaprogramming, etc. that could be mentioned… The list below mentions those that connect more directly to the ideas in this article. This is certainly not a comprehensive list…!

]]>
Optimisation-dependent IR decisions in Clang https://convolv.es/blog/2024/08/17/clang-ir-opt-level/ Sat, 17 Aug 2024 22:27:06 +0100 https://convolv.es/blog/2024/08/17/clang-ir-opt-level/ I used to naively assume that Clang always handed off the same IR to the LLVM optimisation pipeline regardless of optimisation level. In an attempt to gain a bit more understanding into exactly what kinds of decisions depend on optimisation level in Clang, I surveyed the IR emission code paths. I used to naively assume that Clang always handed off “basically” the same IR to the LLVM optimisation pipeline regardless of optimisation level. I was at least aware of the optnone attribute set on functions when compiling at -O0, but I’ve slowly started to notice there are more divergences than just that.

Survey

In an attempt to gain a bit more understanding into exactly what kinds of decisions depend on optimisation level in Clang, I surveyed the IR emission code paths. I examined Clang source at commit 7c4c72b52038810a8997938a2b3485363cd6be3a (2024-08).

I ignored decisions related to specialised language specifics (Objective-C, ARC, HLSL, OpenMP) and ABI details.

Example

If you’d like to explore the differences yourself, take a look at this Compiler Explorer example. The input source is not too interesting (I’ve grabbed a random slice of Git source files that I happened to have on hand). The left IR view shows -O0 and the right IR view shows -O1 with LLVM passes disabled. We can ask Clang to produce LLVM IR without sending it through the LLVM optimisation pipeline by adding -Xclang -disable-llvm-passes (a useful tip for LLVM archaeology).

Compiler Explorer playground comparing O0 and O1 LLVM IR

After diffing the two outputs, two features that are only activated when optimisation is enabled appear to be responsible for most of the differences in this example:

  • Lifetime markers
  • Type-based alias analysis (TBAA) metadata

Lifetime markers are especially interesting in this example, as Clang actually reshapes control flow (adding several additional cleanup blocks) so that it can insert these markers (which are calls to LLVM intrinsic functions llvm.lifetime.start/end).

]]>
Accurate coverage metrics for compiler-generated debugging information https://convolv.es/talks/debug-info-metrics/ Thu, 11 Apr 2024 00:00:00 +0000 https://convolv.es/talks/debug-info-metrics/ In this talk, we propose some new metrics, computable by our tools, which could serve as motivation for language implementations to improve debugging quality. Conference talk (video, slides) presented at EuroLLVM 2024

This talk covers research I worked on together with Stephen Kell. See also our CC 2024 paper on this topic.

Abstract

Many debugging tools rely on compiler-produced metadata to present a source-language view of program states, such as variable values and source line numbers. While this tends to work for unoptimised programs, current compilers often generate only partial debugging information in optimised programs. Current approaches for measuring the extent of coverage of local variables are based on crude assumptions (for example, assuming variables could cover their whole parent scope) and are not comparable from one compilation to another. In this talk, we propose some new metrics, computable by our tools, which could serve as motivation for language implementations to improve debugging quality.

]]>
Link-time optimisation (LTO) https://convolv.es/guides/lto/ Wed, 08 Nov 2023 15:49:51 +0000 https://convolv.es/guides/lto/ I recently started exploring link-time optimisation (LTO), which I used to think was just a single boolean choice in the compilation and linking workflow, and perhaps it was like that a while ago… I’ve learned that these days, there are many different dimensions of LTO across compilers and linkers today and more variations are being proposed all the time. In this “living guide”, I aim to cover the LTO-related features I have encountered thus far. I recently started exploring link-time optimisation (LTO), which I used to think was just a single boolean choice in the compilation and linking workflow, and perhaps it was like that a while ago… I’ve learned that these days, there are many different dimensions of LTO across compilers and linkers today and more variations are being proposed all the time.

In this “living guide”, I aim to cover the LTO-related features I have encountered thus far. I intend to keep updating this going forward as I learn about new details in this space. I am sure there are even more corners to cover, so if you see something that should be added, corrected, etc. please contact me.

I am not aiming to provide specific recommendations here, as there are many tradeoffs to consider and different applications of LTO will weigh each of those differently. Instead, I hope this is a broadly useful portrayal of the facts.

This guide focuses on common toolchains for languages like C, Rust, etc. which typically use ahead-of-time (AOT) compilation and linking. Alternative toolchains for these languages and common toolchains for other languages may use other strategies like interpreting, JIT compilation, etc. Those other language implementation strategies do not offer LTO-like features (that I know of), so I have ignored them here.

I hope this guide is useful to experienced compiler users and compiler hackers who may not have heard about the latest work yet. I also hope it’s broadly accessible to those who may be unfamiliar with LTO.


Basics

Normal (non-LTO) compilation compiles and optimises one file at a time. LTO can optimise across all files at once. The overall aim of LTO is better runtime performance through whole-program analysis and cross-module optimisation.

Let’s take a high-level look at the workflow of most AOT compile and link toolchains. We’ll start off with the “default” workflow without LTO.

Default compile and link workflow with several source files optimised
separately into object files containing machine code, then linked to create the
final output

In the default workflow without LTO, each unit of source code is compiled and optimised separately to produce an object file with machine code for the target architecture. Optimisations can only consider a single source unit at a time, so all externally accessible symbols (functions and variables) must be preserved, even if they will end up being unused in the final linked output. The linker then combines these object files to produce the final output (executable or library).

Now let’s look at a workflow with LTO.

In the LTO setup, we still initially compile each source unit separately and perform some amount of optimisation, but the output is different: instead of machine code, the output of LTO compilation is an object file containing the compiler’s specific internal representation (IR) for that source unit.

The linking stage now performs a much more complex dance than it did before. The IR produced from compiling each source unit is read and the compiler’s optimisation pipeline is invoked to analyse and transform the whole program (the precise details of this vary with different LTO features, as we’ll see later in this guide). This whole program stage unlocks further optimisation opportunities, as we can remove symbols that are unused outside the whole program, inline functions from other source units, etc.

With those fundamentals out of the way, let’s look at various features and variants that toolchains offer to further tweak and customise this process.

Features and variants

⚠️
Some of the features found in the LTO space have “marketing” names which do not communicate what they actually do on a technical level. For some descriptions below, I have used my own terminology to avoid these issues. Each section also lists the other names each feature is known by, in case you want to search for more information.

Basic LTO

This is the simplest of the LTO modes and matches the workflow described above in Basics. Each compilation task produces object files containing the compiler’s internal IR format. The linking stage combines all compilation units into a single, large module. Interprocedural analysis and optimisation is performed on a single thread. With large code bases, this process is likely to consume lots of memory and take a considerable amount of time.

In terms of compile-time performance, the LLVM project has shown that compilation and linking of the Clang 3.9 codebase with basic LTO is ~5x the non-LTO time. This extra work achieves an average run-time performance improvement of 2.86%.

This mode is also referred to as “full LTO”.

Toolchain First available Option
Clang 2.6 (2009) -flto or -flto=full
GCC 4.5 (2010) -flto -flto-partition=one
Rust 1.0 (2015) -C lto

Parallel LTO

Toolchains have come up with a variety of related techniques to speed up the link-time work of LTO while preserving (most of) the run-time performance gains. Instead of creating a single, massive module at link time, a much smaller global index of functions likely to be inlined is computed. With that in hand, each compilation unit can be processed in parallel at link time while still benefiting from most of the same whole-program optimisations as basic LTO.

Continuing with the same example data based on building Clang 3.9, LLVM’s implementation of parallel LTO achieves nearly all of the run-time performance improvement as seen with basic LTO: basic LTO reached a 2.86% improvement over non-LTO, while parallel LTO achieved a 2.63% improvement over non-LTO.

The compilation time improves dramatically: instead of 5x non-LTO, it’s now only 1.2x non-LTO, which is quite impressive.

The technical details of how each toolchain implements parallel LTO vary somewhat. LLVM-based toolchains (which include Clang and Rust in our discussion here) optimise different compilation units in parallel and inline across module boundaries, but most other cross-module optimisations are skipped. GCC, on the other hand, partitions (nearly) the same optimisation work it would have done with one thread into a batch per thread.

This suggests that GCC’s parallel LTO should be able to get closer than LLVM in achieving the same run-time performance as with basic LTO (our recurring dataset shows a 0.23% run-time delta between LLVM’s basic and parallel modes). At the same time, LLVM’s approach may be better able to handle incremental changes in a single module of a large program. If you would like to see data comparing the two modes in GCC, please let me know.

This mode is also referred to as “thin LTO”, particularly in the LLVM ecosystem. The “thin” concept on the LLVM side refers to the fact that no IR is involved in the whole program analysis step.

Toolchain First available Option
Clang 3.9 (2016) -flto=thin
GCC 4.6 (2011) -flto=auto or -flto=<threads>
Rust 1.25 (2018) -C lto=thin

In some applications, it may be useful to support both the basic and parallel LTO modes. In this arrangement, the compiler IR attached to each object file stores all the info it needs to run either LTO mode at link time.

When it’s time to link, you can then choose either of the basic or parallel LTO modes via link-time compiler options (for the toolchains mentioned here, the program commonly referred to as just the “compiler” is really the “compiler driver” which in turn calls the other programs in the workflow, such as the linker).

This variant is also referred to as “unified LTO”.

Toolchain First available Option
Clang 17 (2023) -funified-lto
GCC 4.5 (2010) supported by default
Rust

It may also be useful to push the choice of whether to use LTO at all down to the linking step of the workflow. To support this use case, the compiler combines both machine code and internal IR in the object files produced by each compilation unit.

This variant is also referred to as “fat LTO objects”.

Toolchain First available Option
Clang 17 (2023) -ffat-lto-objects
GCC 4.9 (2014) -ffat-lto-objects
Rust

Other details

There are a few other more advanced corners of LTO, including:

  • Distributed build support
  • Symbol visibility
  • Linker caching

If you’re curious about any of these or other aspects, please let me know! I plan to extend this guide to document additional bits of LTO that others are interested in.


Acknowledgements

Thanks to Teresa Johnson, Jan Hubička, Iti Shree, and Laurence Tratt for feedback on earlier drafts.

]]>
Testing debug info of optimised programs https://convolv.es/talks/testing-debug-info/ Thu, 15 Sep 2022 00:00:00 +0000 https://convolv.es/talks/testing-debug-info/ In this preliminary work, we symbolically execute unoptimised and optimised versions of a program which are then checked for debug info consistency. We expect this to allow testing correctness of debug info generation across a much larger portion of the compiler. Workshop talk (video, slides) presented at KLEE 2022

Abstract

Debuggers rely on compiler-produced metadata to present correct variable values and line numbers in the source language. While this tends to work for unoptimised programs, current compilers either throw away or corrupt debugging info in optimised programs. Current approaches for testing debug info rely on manual test cases or only reach a small portion of the compiler. In this preliminary work, we symbolically execute unoptimised and optimised versions of a program which are then checked for debug info consistency. We expect this to allow testing correctness of debug info generation across a much larger portion of the compiler.

]]>
Room to grow: Building collaborative, open software https://convolv.es/talks/room-to-grow/ Fri, 29 Apr 2022 00:00:00 +0000 https://convolv.es/talks/room-to-grow/ We examine one approach to collaborative, open software by building on Matrix, a secure, decentralised, real-time communication protocol with generic database capabilities hiding beneath its current focus on chat. Conference talk (video, slides) presented at HYTRADBOI 2022

Abstract

Local-first software for creative work resets the balance of ownership away from the cloud, allowing creators to retain control of their work while still collaborating in real time with others. This sounds like a magical ideal, but it also raises many new questions, such as:

  • How should data synchronisation work?
  • What does user identity look like when collaborating?
  • How is document access defined among several people?
  • Is some kind of complex custom backend needed to handle all this?

In this talk, we examine one approach to collaborative, open software by building on Matrix, a secure, decentralised, real-time communication protocol with generic database capabilities hiding beneath its current focus on chat. We show through several examples and demos that Matrix can assist with all of the questions above, allowing creative software to focus on delivering the best experience while still meeting local-first ideals.

The software itself can also be collaborative and malleable, allowing for user customisation without depending on upstream vendors to add new options to achieve your goals.

Notes

The following notes describe the ideas from the talk in more detail:

]]>
Building Firefox for Linux 32-bit https://convolv.es/blog/2017/08/25/building-firefox-for-linux-32-bit/ Fri, 25 Aug 2017 19:33:00 -0500 https://convolv.es/blog/2017/08/25/building-firefox-for-linux-32-bit/ Background As part of my work on the Stylo / Quantum CSS team at Mozilla, I needed to be able to test changes to Firefox that only affect Linux 32-bit builds. These days, I believe you essentially have to use a 64-bit host to build Firefox to avoid OOM issues during linking and potentially other steps, so this means some form of cross-compiling from a Linux 64-bit host to a Linux 32-bit target. Background

As part of my work on the Stylo / Quantum CSS team at Mozilla, I needed to be able to test changes to Firefox that only affect Linux 32-bit builds. These days, I believe you essentially have to use a 64-bit host to build Firefox to avoid OOM issues during linking and potentially other steps, so this means some form of cross-compiling from a Linux 64-bit host to a Linux 32-bit target.

I already had a Linux 64-bit machine running Ubuntu 16.04 LTS, so I set about attempting to make it build Firefox targeting Linux 32-bit.

I should note that I only use Linux occasionally at the moment, so there could certainly be a better solution than the one I describe. Also, I recreated these steps after the fact, so I might have missed something. Please let me know in the comments.

This article assumes you are already set up to build Firefox when targeting 64-bit.

Multiarch Packages (Or: How It’s Supposed to Work)

Recent versions of Debian and Ubuntu support the concept of “multiarch packages” which are intended to allow installing multiple architectures together to support use cases including… cross-compiling! Great, sounds like just the thing we need.

We should be able to install the core Gecko development dependencies with an extra :i386 suffix to get the 32-bit version on our 64-bit host:

(host) $ sudo apt install libasound2-dev:i386 libcurl4-openssl-dev:i386 libdbus-1-dev:i386 libdbus-glib-1-dev:i386 libgconf2-dev:i386 libgtk-3-dev:i386 libgtk2.0-dev:i386 libiw-dev:i386 libnotify-dev:i386 libpulse-dev:i386 libx11-xcb-dev:i386 libxt-dev:i386 mesa-common-dev:i386
Reading package lists... Done
Building dependency tree
Reading state information... Done
Some packages could not be installed. This may mean that you have
requested an impossible situation or if you are using the unstable
distribution that some required packages have not yet been created
or been moved out of Incoming.
The following information may help to resolve the situation:

The following packages have unmet dependencies:
 libgtk-3-dev:i386 : Depends: gir1.2-gtk-3.0:i386 (= 3.18.9-1ubuntu3.3) but it is not going to be installed
                     Depends: libatk1.0-dev:i386 (>= 2.15.1) but it is not going to be installed
                     Depends: libatk-bridge2.0-dev:i386 but it is not going to be installed
                     Depends: libegl1-mesa-dev:i386 but it is not going to be installed
                     Depends: libxkbcommon-dev:i386 but it is not going to be installed
                     Depends: libmirclient-dev:i386 (>= 0.13.3) but it is not going to be installed
 libgtk2.0-dev:i386 : Depends: gir1.2-gtk-2.0:i386 (= 2.24.30-1ubuntu1.16.04.2) but it is not going to be installed
                      Depends: libatk1.0-dev:i386 (>= 1.29.2) but it is not going to be installed
                      Recommends: python:i386 (>= 2.4) but it is not going to be installed
 libnotify-dev:i386 : Depends: gir1.2-notify-0.7:i386 (= 0.7.6-2svn1) but it is not going to be installed
E: Unable to correct problems, you have held broken packages.

Well, that doesn’t look good. It appears some of the Gecko libraries we need aren’t happy about being installed for multiple architectures.

Switch Approaches to chroot

Since multiarch packages don’t appear to be working here, I looked around for other approaches. Ideally, I would have something fairly self-contained so that it would be easy to remove when I no longer need 32-bit support.

One approach to multiple architectures that has been around for a while is to create a chroot environment: effectively, a separate installation of Linux for a different architecture. A utility like schroot can then be used to issue the chroot(2) system call which makes the current session believe this sub-installation is the root filesystem.

Let’s grab schroot so we’ll be able to enter the chroot once it’s set up:

(host) $ sudo apt install schroot

There are several different types of chroots you can use with schroot. We’ll use the directory type, as it’s the simplest to understand (just another directory on the existing filesystem), and it will make it easier to expose a few things to the host later on.

You can place the directory wherever, but some existing filesystems are mapped into the chroot for convenience, so avoiding /home is probably a good idea. I went with /var/chroot/linux32:

(host) $ sudo mkdir -p /var/chroot/linux32

We need to update schroot.conf to configure the new chroot:

(host) $ cat << EOF | sudo tee -a /etc/schroot/schroot.conf
[linux32]
description=Linux32 build environment
aliases=default
type=directory
directory=/var/chroot/linux32
personality=linux32
profile=desktop
users=jryans
root-users=jryans
EOF

In particular, personality is important to set for this multi-arch use case. (Make sure to replace the user names with your own!)

Firefox will want access to shared memory as well, so we’ll need to add that to the set of mapped filesystems in the chroot:

(host) $ cat << EOF | sudo tee -a /etc/schroot/desktop/fstab
/dev/shm       /dev/shm        none    rw,bind         0       0
EOF

Now we need to install the 32-bit system inside the chroot. We can do that with a utility called debootstrap:

(host) $ sudo apt install debootstrap
(host) $ sudo debootstrap --variant=buildd --arch=i386 --foreign xenial /var/chroot/linux32 http://archive.ubuntu.com/ubuntu

This will fetch all the packages for a 32-bit installation and place them in the chroot. For a cross-arch bootstrap, we need to add --foreign to skip the unpacking step, which we will do momentarily from inside the chroot. --variant=buildd will help us out a bit by including common build tools.

To finish installation, we have to enter the chroot. You can enter the chroot with schroot and it remains active until you exit. Any snippets that say (chroot) instead of (host) are meant to be run inside the chroot.

So, inside the chroot, run the second stage of debootstrap to actually unpack everything:

(chroot) $ /debootstrap/debootstrap --second-stage

Let’s double-check that things are working like we expect:

(chroot) $ arch
i686

Great, we’re getting closer!

Install packages

Now that we have a basic 32-bit installation, let’s install the packages we need for development. The apt source list inside the chroot is pretty bare bones, so we’ll want to expand it a bit to reach everything we need:

(chroot) $ cat << EOF | sudo tee /etc/apt/sources.list
deb http://archive.ubuntu.com/ubuntu xenial main universe
deb http://archive.ubuntu.com/ubuntu xenial-updates main universe
EOF
(chroot) $ sudo apt update

Let’s grab the same packages from before (without :i386 since that’s the default inside the chroot):

(chroot) $ sudo apt install libasound2-dev libcurl4-openssl-dev libdbus-1-dev libdbus-glib-1-dev libgconf2-dev libgtk-3-dev libgtk2.0-dev libiw-dev libnotify-dev libpulse-dev libx11-xcb-dev libxt-dev mesa-common-dev python-dbus xvfb yasm

You may need to install the 32-bit version of your graphics card’s GL library to get reasonable graphics output when running in the 32-bit environment. For example, with an NVIDIA card:

(chroot) $ sudo apt install nvidia-384

We’ll also want to have access to the X display inside the chroot. The simple way to achieve this is to disable X security in the host and expose the same display in the chroot:

(host) $ xhost +
(chroot) $ export DISPLAY=:0

We can verify that we have accelerated graphics:

(chroot) $ sudo apt install mesa-utils
(chroot) $ glxinfo | grep renderer
OpenGL renderer string: GeForce GTX 1080/PCIe/SSE2

Building Firefox

In order for the host to build Firefox for the 32-bit target, it needs to access various 32-bit libraries and include files. We already have these installed in the chroot, so let’s cheat and expose them to the host via symlinks into the chroot’s file structure:

(host) $ sudo ln -s /var/chroot/linux32/lib/i386-linux-gnu /lib/
(host) $ sudo ln -s /var/chroot/linux32/usr/lib/i386-linux-gnu /usr/lib/
(host) $ sudo ln -s /var/chroot/linux32/usr/include/i386-linux-gnu /usr/include/
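To see why this works, here is a small self-contained sketch (temp directories stand in for the real paths, which are my own stand-ins for illustration) showing a host-side path resolving into the chroot’s file tree:

```shell
# Illustrative only: temp dirs play the roles of /var/chroot/linux32 and /usr/lib.
chroot_dir=$(mktemp -d)    # stands in for /var/chroot/linux32
host_usr_lib=$(mktemp -d)  # stands in for /usr/lib on the host
mkdir -p "$chroot_dir/usr/lib/i386-linux-gnu"
touch "$chroot_dir/usr/lib/i386-linux-gnu/libexample.so"

# The same trick as above: symlink the chroot's library dir into the "host".
ln -s "$chroot_dir/usr/lib/i386-linux-gnu" "$host_usr_lib/i386-linux-gnu"

# Host-side paths now resolve to files that live inside the chroot.
ls "$host_usr_lib/i386-linux-gnu/"
```

The host’s compiler and linker follow such links transparently, which is what lets the 64-bit toolchain find the 32-bit libraries.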

We also need Rust to be able to target 32-bit from the host, so let’s install support for that:

(host) $ rustup target add i686-unknown-linux-gnu

We’ll need a specialized .mozconfig for Firefox to target 32-bit. Something like the following:

(host) $ cat << 'EOF' > ~/projects/gecko/.mozconfig
export PKG_CONFIG_PATH="/var/chroot/linux32/usr/lib/i386-linux-gnu/pkgconfig:/var/chroot/linux32/usr/share/pkgconfig"
export MOZ_LINUX_32_SSE2_STARTUP_ERROR=1
CFLAGS="$CFLAGS -msse -msse2 -mfpmath=sse"
CXXFLAGS="$CXXFLAGS -msse -msse2 -mfpmath=sse"
if test `uname -m` = "x86_64"; then
  CFLAGS="$CFLAGS -m32 -march=pentium-m"
  CXXFLAGS="$CXXFLAGS -m32 -march=pentium-m"
  ac_add_options --target=i686-pc-linux
  ac_add_options --host=i686-pc-linux
  ac_add_options --x-libraries=/usr/lib
fi
EOF

This was adapted from the mozconfig.linux32 used for official 32-bit builds. I modified the PKG_CONFIG_PATH to point at more 32-bit files installed inside the chroot, similar to the library and include changes above.
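The arch guard in that mozconfig can be illustrated in isolation. This is just a sketch (the flags_for_host function is my own invention, not part of any real mozconfig), showing which compiler flags end up applied on a 64-bit versus a 32-bit build machine:

```shell
# Hypothetical helper, for illustration only: compute the extra C/C++ flags
# the mozconfig above would add, given the build machine's architecture.
flags_for_host() {
  flags="-msse -msse2 -mfpmath=sse"  # SSE2 is required unconditionally
  if [ "$1" = "x86_64" ]; then
    # On a 64-bit host, additionally force 32-bit code generation.
    flags="$flags -m32 -march=pentium-m"
  fi
  echo "$flags"
}

flags_for_host "$(uname -m)"
```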

Now, we should be able to build successfully:

(host) $ ./mach build

Then, from the chroot, you can run Firefox and other tests:

(chroot) $ ./mach run

Firefox running on Linux 32-bit

Footnotes

1. It’s commonly suggested that people should use ./mach bootstrap to install the Firefox build dependencies, so feel free to try that if you wish. I dislike scripts that install system packages, so I’ve done it manually here. The bootstrap script would likely need various adjustments to support this use case.

WiFi Debugging for Firefox for Android
https://convolv.es/blog/2015/08/05/wifi-debug-fennec/
Wed, 05 Aug 2015 15:33:00 -0500

I am excited to announce that we’re now shipping WiFi debugging for Firefox for Android! It’s available in Firefox for Android 42 with Firefox Nightly on desktop.

The rest of this post will sound quite similar to the previous announcement for Firefox OS support.

WiFi debugging allows WebIDE to connect to Firefox for Android via your local WiFi network instead of a USB cable.

The connection experience is generally more straightforward (especially after connecting to a device the first time) than with USB and also more convenient to use since you’re no longer tied down by a cable.

Security

A large portion of this project has gone towards making the debugging connection secure, so that you can use it safely on a shared network, such as at an office or coffee shop.

We use TLS for encryption and authentication. The computer and device both create self-signed certificates. When you connect, a QR code is scanned to verify that the certificates can be trusted. During the connection process, you can choose to remember this information and connect immediately in the future if desired.
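As a rough sketch of the underlying idea (this is not the actual WebIDE implementation, just an openssl illustration), each side’s self-signed certificate has a fingerprint that an out-of-band channel like a QR code can carry for the other side to verify:

```shell
# Sketch only: create a throwaway self-signed certificate and print the
# SHA-256 fingerprint that an out-of-band channel (e.g. a QR code) could carry.
dir=$(mktemp -d)
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
  -subj "/CN=devtools-demo" \
  -keyout "$dir/key.pem" -out "$dir/cert.pem" 2>/dev/null
openssl x509 -in "$dir/cert.pem" -noout -fingerprint -sha256
```

If the fingerprint received over the trusted channel matches the one computed from the certificate presented during the TLS handshake, the connection can be trusted even though the certificate is self-signed.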

How to Use

You’ll need to assemble the following bits and bobs:

On your Android device:

  1. Install the Barcode Scanner Android app by ZXing Team
  2. Open Firefox for Android
  3. Go to Developer Tools Settings on device (Settings -> Developer Tools)
  4. Enable Remote Debugging via Wi-Fi

Firefox for Android WiFi Debugging Options

To connect from Firefox Desktop:

  1. Open WebIDE in Firefox Nightly (Tools -> Web Developer -> WebIDE)
  2. Click “Select Runtime” to open the runtimes panel
  3. Your Firefox for Android device should show up in the “WiFi Devices” section
  4. A connection prompt will appear on device, choose “Scan” or “Scan and Remember”
  5. Scan the QR code displayed in WebIDE

WebIDE WiFi Runtimes
WebIDE Displays the QR Code

After scanning the QR code, the QR display should disappear and the “device” icon in WebIDE will turn blue for “connected”.

You can then access all of your remote browser tabs just as you can today over USB.

Technical Aside

This process does not use ADB at all on the device, so if you find ADB inconvenient while debugging or would rather not install ADB at all, then WiFi debugging is the way to go.

By skipping ADB, we don’t have to worry about driver confusion, especially on Windows and Linux.

Supported Devices

This feature should be supported on any Firefox for Android device. So far, I’ve tested it on the LG G2.

Acknowledgments

Thanks to all who helped via advice and reviews while working on Android support, including (in semi-random order):

  • Margaret Leibovic
  • Karim Benhmida

And from the larger WiFi debugging effort:

  • Brian Warner
  • Trevor Perrin
  • David Keeler
  • Honza Bambas
  • Patrick McManus
  • Jason Duell
  • Panos Astithas
  • Jan Keromnes
  • Alexandre Poirot
  • Paul Rouget
  • Paul Theriault

I am probably forgetting others as well, so I apologize if you were omitted.

What’s Next

If there are features you’d like to see added, file bugs or contact the team via various channels.

WiFi Debugging for Firefox OS
https://convolv.es/blog/2015/03/25/wifi-debug-fxos/
Wed, 25 Mar 2015 08:51:00 -0500

I am excited to announce that we’re now shipping WiFi debugging for Firefox OS! It’s available in Firefox OS 3.0 / master with Firefox Nightly on desktop.

WiFi debugging allows WebIDE to connect to your Firefox OS device via your local WiFi network instead of a USB cable.

The connection experience is generally more straightforward (especially after connecting to a device the first time) than with USB and also more convenient to use since you’re no longer tied down by a cable.

Security

A large portion of this project has gone towards making the debugging connection secure, so that you can use it safely on a shared network, such as at an office or coffee shop.

We use TLS for encryption and authentication. The computer and device both create self-signed certificates. When you connect, a QR code is scanned to verify that the certificates can be trusted. During the connection process, you can choose to remember this information and connect immediately in the future if desired.

How to Use

You’ll need to assemble the following bits and bobs:

On Firefox OS, enable WiFi debugging:

  1. Go to Developer Settings on device (Settings -> Developer)
  2. Enable DevTools via Wi-Fi
  3. Edit the device name if desired

Firefox OS WiFi Debugging Options

To connect from Firefox Desktop:

  1. Open WebIDE in Firefox Nightly (Tools -> Web Developer -> WebIDE)
  2. Click “Select Runtime” to open the runtimes panel
  3. Your Firefox OS device should show up in the “WiFi Devices” section
  4. A connection prompt will appear on device, choose “Scan” or “Scan and Remember”
  5. Scan the QR code displayed in WebIDE

WebIDE WiFi Runtimes
WebIDE Displays the QR Code

After scanning the QR code, the QR display should disappear and the “device” icon in WebIDE will turn blue for “connected”.

You can then access all of your remote apps and browser tabs just as you can today over USB.

Technical Aside

This process does not use ADB at all on the device, so if you find ADB inconvenient while debugging or would rather not install ADB at all, then WiFi debugging is the way to go.

By skipping ADB, we don’t have to worry about driver confusion, especially on Windows and Linux.

Supported Devices

This feature should be supported on any Firefox OS device. So far, I’ve tested it on the Flame and Nexus 4.

Known Issues

The QR code scanner can be a bit frustrating at the moment, as real devices appear to capture a very low resolution picture. Bug 1145772 aims to improve this soon. You should be able to scan with the Flame by trying a few different orientations. I would suggest using “Scan and Remember”, so that scanning is only needed for the first connection.

If you find other issues while testing, please file bugs or contact me on IRC.

Acknowledgments

This was quite a complex project, and many people provided advice and reviews while working on this feature, including (in semi-random order):

  • Brian Warner
  • Trevor Perrin
  • David Keeler
  • Honza Bambas
  • Patrick McManus
  • Jason Duell
  • Panos Astithas
  • Jan Keromnes
  • Alexandre Poirot
  • Paul Rouget
  • Paul Theriault

I am probably forgetting others as well, so I apologize if you were omitted.

What’s Next

I’d like to add this ability for Firefox for Android next. Thankfully, most of the work done here can be reused there.

If there are features you’d like to see added, file bugs or contact the team via various channels.

Debugging Tabs with Firefox for Android
https://convolv.es/blog/2014/10/28/debug-fennec-tabs/
Tue, 28 Oct 2014 16:08:00 -0500

For quite a while, it has been possible to debug tabs on Firefox for Android devices, but there were many steps involved, including manual port forwarding from the terminal.

As I hinted a few weeks ago, WebIDE would soon support connecting to Firefox for Android via ADB Helper support, and that time is now!

How to Use

You’ll need to assemble the following bits and bobs:

  • Firefox 36 (2014-10-25 or later)
  • ADB Helper 0.7.0 or later
  • Firefox for Android 35 or later

Opening WebIDE for the first time should install ADB Helper if you don’t already have it, but double-check it is the right version in the add-on manager.

Firefox for Android runtime appears

Inside WebIDE, you’ll see an entry for Firefox for Android in the Runtime menu.

Firefox for Android tab list

Once you select the runtime, tabs from Firefox for Android will be available in the (now poorly labelled) apps menu on the left.

Inspecting a tab in WebIDE

Choosing a tab will open up the DevTools toolbox for that tab. You can also toggle the toolbox via the “Pause” icon in the top toolbar.

If you would like to debug Firefox for Android’s system-level / chrome code, instead of a specific tab, you can do that with the “Main Process” option.

What’s Next

We have even more connection UX improvements on the way, so I hope to have more to share soon!

If there are features you’d like to see added, file bugs or contact the team via various channels.

DevTools for Firefox OS browser tabs
https://convolv.es/blog/2014/10/14/debug-fxos-tabs/
Tue, 14 Oct 2014 13:29:00 -0500

We’ve had various tools for inspecting apps on remote devices for some time now, but for a long time we’ve not had the same support for remote browser tabs.

To remedy this, WebIDE now supports inspecting browser tabs running on Firefox OS devices.

Inspecting a tab in WebIDE

A few weeks back, WebIDE gained support for inspecting tabs on the remote device, but many of the likely suspects to connect to weren’t quite ready for various reasons.

We’ve just landed the necessary server-side bits for Firefox OS, so you should be able try this out by updating your device to the next nightly build after 2014-10-14.

How to Use

After connecting to your device in WebIDE, any open browser tabs will appear at the bottom of WebIDE’s project list.

Browser tab list in WebIDE

The toolbox should open automatically after choosing a tab. You can also toggle the toolbox via the “Pause” icon in the top toolbar.

What’s Next

We’re planning to make this work for Firefox for Android as well. Much of that work is already done, so I am hopeful that it will be available soon.

If there are features you’d like to see added, file bugs or contact the team via various channels.

WebIDE enabled in Nightly
https://convolv.es/blog/2014/08/18/webide-enabled/
Mon, 18 Aug 2014 10:44:00 -0500

I am excited to announce that WebIDE is now enabled by default in Nightly (Firefox 34)! Everyone on the App Tools team has been working hard to polish this new tool that we originally announced back in June.

Features

While the previous App Manager tool was great, its UX held us back when trying to support more complex workflows. With the redesign into WebIDE, we’ve already been able to add:

  • Project Editing
    • Great for getting started without worrying about an external editor
  • Project Templates
    • Easy to focus on content from the start by using a template
  • Improved DevTools Toolbox integration
    • Many UX issues arose from the non-standard way that App Manager used the DevTools
  • Monitor
    • Live memory graphs help diagnose performance issues

Monitor

Transition

All projects you may have created previously in the App Manager are also available in WebIDE.

While the App Manager is now hidden, it’s accessible for now at about:app-manager. We do intend to remove it entirely in the future, so it’s best to start using WebIDE today. If you find any issues, please file bugs!

What’s Next

Looking ahead, we have many more exciting things planned for WebIDE, such as:

  • Command line integration
  • Improved support for app frameworks like Cordova
  • Validation that matches the Firefox Marketplace

If there are features you’d like to see added, file bugs or contact the team via various channels.

Mozilla
https://convolv.es/blog/2013/08/25/mozilla/
Sun, 25 Aug 2013 01:27:00 -0500

In my last post (back in February…), I mentioned I was spending a lot of time on the side working on the developer tools in web browsers, particularly Firefox. I find that Mozilla’s values, which truly put the user first, are something I agree with wholeheartedly. Mozilla is in a unique position in this way, since all the other browsers are made by companies that, at the end of the day, are looking to make money to appease their shareholders. Mozilla’s values are even more important to emphasize these days, given the various forms of governmental spying that have been revealed in the last few months.

With that context, hopefully you can get an idea of how excited I am to say that this past Monday was my first day as a Mozilla employee, working on the Firefox Developer Tools team!

I am currently in Paris ramping up with the team for two weeks. After the first week, I am humbled to be able to say I am a part of this organization. There are so many people smarter than me here, and I am thrilled to have the opportunity to work with them. I know we will accomplish great things together.

There is so much potential in the web developer tools space. Every developer has their own workflow, favorite set of frameworks and tools, and new platform technologies and techniques are coming out at a rapid pace. While part of my job is certainly to help make all of this more efficient, there is a lot of room to shake things up by looking towards the future of web development and the tools that people don’t even know they need.

It’s going to be a blast! Hopefully I’ll have some fun things to share.

Into the Open
https://convolv.es/blog/2013/02/17/into-the-open/
Sun, 17 Feb 2013 22:52:00 -0600

I’m excited to start focusing what time I do have on the side towards open source projects. I’m always on the lookout for good projects to help out with, but these days I always seem to come back to web browsers.

In particular, I really enjoy working on tools that improve the lives of others every day. In that vein, I’d like to focus on improving the developer tools in browsers, even though they are vastly better today than they were a few years ago.

The main open source options as far as web browsers go are Chrome / Chromium and the various Mozilla projects, like Firefox. Near the end of 2011, I started ramping up on the Chromium project, mainly because Chrome is the main browser I used then.

However, now that Opera has decided to switch to Chromium for future versions of their browser, I’ve been reminded that Mozilla is the only party in the web development game that truly seems to be doing their best to fight for the user. The main reason I stopped using Firefox was due to Chrome’s impressive developer tools, so I’d like to help improve Firefox tools to bring them up to the level we’ve now come to expect and beyond.

Likely I’ll dabble in both Chromium and Firefox, but no matter what it should be an exciting time ahead!

Materials, Maps, and Surfacing
https://convolv.es/blog/2012/10/06/materials/
Sat, 06 Oct 2012 15:13:00 -0500

Overview

For this assignment, we learned how to apply materials to the objects we’ve been creating. There are many, many different ways to construct a material that you apply to a 3D object. They can be procedurally generated, they can pull from textures (or maps) you create, and you can manually tune many parameters as well.

Maps are a great way to control the appearance of an object because you can make something resemble a real object quite quickly by just taking a picture and doing a bit of editing.

Environment

For the environment, I used a variety of wood textures as maps to give the huts a rustic feel. The doors in particular look much more believable now. Also, the islands look much less like strange brains now that there’s a grassy appearance applied, instead of just the flat green.

Robot

For the robot, I gave him a weathered, rusty metal appearance that seems to tie in well with his supportive / charming look. Old, but still functioning just fine.

The eyes and mouth have a bit of an ethereal / floating feel to them thanks to the transparency.

It was fun to experiment with the various parameters and material types that can give a metallic appearance.

Splines, Loft, and Lathe
https://convolv.es/blog/2012/09/29/splines/
Sat, 29 Sep 2012 08:17:00 -0500

Overview

For this assignment, we learned a few new techniques, namely how to make use of 2D shapes to give more detail to our models. Beyond simple shapes like circles and squares is the very flexible spline, which gives you a lot of control over how a line is drawn, while still being purely analytical.

Shapes and Splines

For part of the homework, we added some shapes and splines to our existing models. Below you can see the island huts from before, but with a few additional decorations, such as some scary wiring / branches, as well as door knobs and roof ornaments.

Lathe and Loft

We also learned how to take 2D paths and morph them into 3D shapes in several interesting ways. You can use lathe to revolve the path around an axis, similar to a torus.

I used this technique to create a pot, an inflatable pool, and a spyglass.

We also learned about loft, which will take a shape and replicate it in 3D across whatever path you define. This is another very powerful feature, with many ways you can customize and tune its behavior.

I made several tracks and other shapes using this process:

Basic Game with Unity
https://convolv.es/blog/2012/09/22/basic-game-with-unity/
Sat, 22 Sep 2012 04:42:00 -0500

Overview

For this week’s assignment, we jumped out of the normal curriculum and went straight to using a game engine with our models from last time. We added a few common elements, like terrain and fog.

Unity

Unity is a lot of fun to work with! It’s quite easy to assemble something pretty cool, and yet it also has a lot of depth to allow you to refine details. I can definitely see myself working with this down the road, especially given the wide multi-platform support.

Game Level

For the level itself, I modified the environment to remove the static water I made before and surround the island with mountainous terrain. Then I added the animated water to get back to the original environment idea.

From there, I added some basic lighting and fog to match the skybox I chose. It was actually a bit tricky to map a collider to the island huts. For now I used spheres, but perhaps I’ll need to flatten those into a single mesh for Unity to represent them accurately.

There’s definitely room for improvement along many aspects, but it’s really exciting to see something playable come together so quickly.

3D Modeling: Robot in the Ruins
https://convolv.es/blog/2012/09/15/3d-modeling-robot-in-the-ruins/
Sat, 15 Sep 2012 05:43:00 -0500

Overview

For our first assignment, we had to build a robot and an environment for it to inhabit.

Robot

The robot had to use exactly 1000 polygons. I chose to create a kind of transport robot with a cute appearance. I used an elongated egg shape, and gave it two propellers to maneuver. The robot is shown at an angle to suggest movement. The propellers were fun to construct, especially with the primitives, since you have to think of clever ways to build what you want from just a few shapes. With two shapes and a few filters, I arrived at a convincing version of propeller blades.

Environment

For the environment, we had a budget of 10,000 polygons. I thought it would be fun to create a water / island environment. However, even a poor simulation of water quickly eats up a lot of polygons!

My thoughts here were that several of the robots might be used to transport various items between the islands as needed.

3D Modeling and Rendering 1
https://convolv.es/blog/2012/09/07/3d-modeling-and-rendering-1/
Fri, 07 Sep 2012 23:20:00 -0500

I’ve started taking classes at ACC to learn some visual design skills and generally extend my knowledge of how games and other media are built.

The first one is this 3D modeling class, and I’ll be maintaining a portfolio of my work as this class goes on. I’m excited to see what I can come up with! Undoubtedly my work won’t win too many awards, but hopefully I can pick up the principles and develop them further over time.

I think this will also give me a better understanding of the thought process that design and creative teams go through at the various places I’ve worked now and in the past.

Anyway, should be fun! :D

Exciting new blog thing
https://convolv.es/blog/2012/05/07/exciting-new-blog-thing/
Mon, 07 May 2012 23:28:00 -0500

…to replace what exactly? My LiveJournal from college…? I’ve never been great at maintaining any kind of journal / blog / whatever, but Octopress just looks like so much fun, so I had to give it a shot. Tonight I finally ripped the default theme up a bit to create a more minimalist look, which I think turned out quite well.

I’m hoping to start talking about all the crazy tech projects I’ve got brewing in my mind that I’ll one day sit down and make progress on, along with a few bits of life, work, and everything else mixed in.

Even writing this tiny thing is more than I’ve done in ages. Any guesses how long until the next one?
