Alloverse – Laying the foundation for the open Metaverse
https://alloverse.com

Commercial operations taking a break
https://alloverse.com/2023/06/08/commercial-operations-taking-a-break/ – Thu, 08 Jun 2023

Building Alloverse in the open is the most exciting job we've ever had. Turning it into a commercial venture, however, was perhaps a few years too early — as the rest of the market has also found out. We're very grateful to Icebreaker and others chipping in funding to get us this far, but alas, our coffers are now empty, and on the 15th of June we are taking a break from commercial operations.

However, this is not a goodbye. From the very beginning we've been an open source project with volunteer contributors, so we can – and will – keep going even without commercial backing! Thus, we transition from an "open core" commercial company with an open source core at its center, into a fully open source project. This transition is made easier by the work we did a year ago to become a more open and transparent organization.

What does all this mean for the Alloverse team's life together? Well, no more recurring meetings, and we no longer share a day job. But all of us — Nevyn, Tobes, Voxar and Domi — are continuing as volunteers on our open source project in our spare time. Mash in some code some evening here, maybe design some UI components over a weekend there, discuss a PR on the side. It's still the coolest project I can imagine, and we'll keep it alive.

On top of our core team, we have our adventurous Ambassadors, who continue to experiment with the future of XR and to spread the word about our magical platform. (On that note, we welcome Elena Palombini as our newest ambassador!)

As for the future: when the market and our product are more mature, we hope to come back from this commercial break to work on Alloverse full time again, and maybe even hire a larger development team. Who knows what the future will bring? But that's what we're wishing, and working hard for.

Do you believe in our mission? That the next generation 3D Internet we’ll be getting soon deserves to be open source, built for openness, inclusiveness and intuitiveness? Come join our community, and help us fight for it in our new era. Alloverse is always there for you to experiment with and build your crazy VR projects.

— nevyn

How to run Spotify, Discord, Netflix and other 2D Android apps in VR on the Meta Quest (Pro)
https://alloverse.com/2023/01/24/sideload-2d-quest/ – Tue, 24 Jan 2023

Have you ever tried using a VR headset on an airplane, instead of an iPad or laptop? With two tricks, it can work really well: side-loading Android phone apps, and turning off tracking.

I love experimenting with VR, so on my holiday break, I figured I’d experiment with using my Quest Pro as a laptop/iPad replacement. The first goal was to use it as in-flight entertainment, with Netflix and Spotify.

Why side-load Android 2D apps on a VR headset?

Discord and Netflix running side by side on a Quest Pro, overlaying passthrough video of my office

Most people don’t seem to know that you can use the Quest as basically an Android tablet! Since VR apps are often scaled down versions of mobile apps, with this method you can bring way more functionality into VR.

For example, by side-loading the full tablet Netflix app, you can download shows and movies, so you can watch without Internet, a must on a flight! (This is a feature omitted from their official VR app).

By side-loading the Spotify app and moving it over to the side, you can have music playing while you're browsing the web, writing notes, or even when using other VR apps. You can also sync music for offline listening, which you can't do from Spotify's web app.

Hook up a Bluetooth keyboard, and install Word, Notion, Discord, Slack, Jira or whatever productivity tool you're used to, and your VR headset starts to become an actual laptop replacement.

Or connect your Xbox gamepad over Bluetooth, and play whatever Android game you want, including RetroArch!

It gets particularly exciting when you enable the Quest’s multi-tasking feature, so you can use three apps side-by-side, as if you had three giant computer displays with an app each. Watch the attached video for a tutorial on how that works.

If you have a Quest Pro and turn on Passthrough, it gets even better: you can see these three displays together with your real environment in full color!

Getting 2D apps onto your Quest

Here’s my favorite method for sideloading 2D Android apps to your Meta Quest. With this method, you only need a laptop to get started, and can then be completely standalone. You can also watch it in video form:

  1. There’s a great sideloading tool for Mac, Windows and Linux called SideQuest VR. We’re going to use it to sideload Total Commander, so that you can do future sideloading directly from your headset and don’t need a computer. Head over to sidequestvr.com and follow the setup instructions for the “Advanced installer”. This’ll involve hooking up your Quest to your computer with a USB-C cable and enabling developer mode on your devie.
  2. You need a special version of the android app Total Commander that can itself install apps. Head over to their forum and download the APK for “arm64” (that’s the kind of CPU the Quest has).
  3. Fire up SideQuest on your computer. It should look like the image above, with a green dot to the left in the title bar indicating that it can detect your Quest being successfully connected to your computer.
  4. Press the “install apk” icon from the right of the title bar (circled in red above), and select the Total Commander APK from your downloads folder in the file browser that appears. Once the task completes, you’re done with the computer and can continue from inside your Quest!
  5. Disconnect the cable and put on your headset! Let’s install something more fun, like Netflix. Inside VR, open the Oculus web browser, and navigate to apkmirror.com. This is a website for downloading APK files from the Play Store, without using the Play Store.
  6. Search for Netflix, and download the “APK” variant of the latest version. It’s the one that says “nodpi”. It’ll download to the Downloads folder on your Quest.
  7. It’s time to install that APK using Total Commander. Go to your App Library, change it to “Unknown Sources”, and launch Total Commander.
  8. Go into the “Download” folder, and then click on the APK file, and then click Install.
  9. Confirm selections, and you’re good to go! You can now go back to Unknown Sources and start Netflix!
In SideQuest, make sure the dot is green. Press the “install APK” button to install Total Commander.
Opening Total Commander
Selecting the downloaded APK
Installing the selected APK
Hey presto, Netflix APK has been installed!

Turn off tracking

An airplane turns, hits turbulence, and moves in other ways that completely break the Quest's otherwise amazing tracking. By turning tracking off completely, you can use the headset even though your environment is a bit shaky.

If the Quest detects that your tracking is very shaky, it'll ask if you want to turn it off. To do it manually, go into Settings > System > Headset Tracking and tap the switch to turn it off. Just remember to turn it back on when you're on the ground again!

Finally…

It’s a lot of fun to explore novel ways to use VR! It’s still a new medium, with much remaining to explore, despite having been out for several years. I’m particularly excited to see how the Quest Pro can be used as an AR workstation to replace my laptop; what UX patterns will be established for multitasking in VR; and how to make the migration for 2D to 3D as seamless as possible.

What kind of unconventional ways have you used your VR headset?

Creating a Discord onboarding bot: meet Allfred
https://alloverse.com/2022/11/08/creating-a-discord-onboarding-bot-meet-allfred/ – Tue, 08 Nov 2022

In this blog post, we're going to talk about server onboarding on Discord, as well as how we at Alloverse designed and implemented our own bot for the purpose. Finally, we'll go into how you can get hold of our bot's source code and adjust it for your own use case.

Long-time readers know that we at Alloverse use Discord as the central hub for community outreach. Unfortunately, anyone who's joined a Discord server knows that first impressions can be pretty overwhelming. As a new user, you tend to land in the server's #welcome channel and get greeted by a pre-set welcome message that takes no account of your particular reason for joining. Due to its "one-size-fits-all" nature, said welcome message is generally a wall of text reminiscent of a terms of service, containing a plethora of links to more information on a wide variety of topics.

The opportunity to give the user a smooth onboarding is thus lost, and users run a high risk of losing interest solely because they weren't presented with relevant information. Even with our relatively meagre number of public channels, we too run a significant risk of falling into this trap unless we provide bite-sized information and quickly guide users to relevant contexts and conversations.

Typical examples of popular Discord servers' #welcome channels – very busy, with lots of branching paths.

So, we thought, how can we improve this user experience? There must be Discord bots out there that let you build a traversable decision tree to guide new users through progressive disclosure, right?

Wrong. We joined the ten or so biggest Discord communities and spent a full day searching for bots that would serve our purpose. Alas, to no avail.

So it must be because Discord's bot APIs don't support this kind of functionality, right?

Wrong. Cursory googling quickly reveals that it is indeed within the capability of the Discord API.

So here we are. Welcome to the journey of developing our own Discord Onboarding Bot: Allfred. (Named after Batman’s butler, with an Alloverse twist. Very creative, we know.)

Designing the decision tree

The Alloverse Discord gathers developers, designers and open source enthusiasts alike. Some are interested in the Alloverse platform itself, some are interested in building apps on top of it, and others are just curious about following our development. In order to provide each of them with a relevant onboarding experience, we needed to figure out what roles and channels are suitable to recommend for each of our target groups.

In our case, we’re focusing on populating our #welcome channel with a bot that can:

  1. Display the Alloverse elevator pitch
  2. Ask the new user what their skills are and what they want out of joining the server.
  3. Offer relevant options in response, such as:
    1. Provide roles like 👩‍💻 coder or 👨‍🎨 designer.
    2. Link to relevant channels like #app-development or #feature-suggestions.

Then, if need be, we repeat the cycle with new, contextual information and questions.

The very basic Alloverse bot interaction loop.

Additionally, on the admin side, we want a channel to ping us when someone interacts with the bot. That way we can personally reach out to that user in a context where we know what they’re looking for.

After having established this basic flow, we sat down to design the full decision tree, complete with all information, options and responses. This turned out to be a very useful exercise, and was iterated upon several times to ensure a flow that contained all vital information, yet was short enough to avoid tedium.

Allfred's decision tree, full and detailed view. Orange diamonds are prompts, yellow rectangles are points where the user is given a role, and white boxes signify the bot's responses to the user's input.

Implementing the bot

Our initial (pre-design) research had already revealed to us that there are several bot-building frameworks available for use. However, we soon found that most of them didn’t (yet) support the button-with-label component we needed for user input.

This narrowed our list of options to two: the JavaScript framework discord.js and the Python framework nextcord. After comparing their respective quick start guides we eventually chose the latter, as it became clear that nextcord is more helpful with server communication, leaving only the application code for us to worry about.

Why? Because nextcord utilises Python's async/await to make it really easy to get going and write asynchronous code in a synchronous manner, and decorators (the @bot.event and similar lines above the methods) to hook into the framework's event loop.

import os
import nextcord
from nextcord.ext import commands

bot = commands.Bot()

@bot.slash_command()
async def hello(interaction: nextcord.Interaction):
    await interaction.send(f"Greetings {interaction.user.mention}")

bot.run(os.environ["SECRET_DISCORD_TOKEN"])

The example above registers a ‘slash command’ (/hello) that responds with a mention of the user that invokes it.

We won’t go much further into specifics here since the code is available for anyone to check out, and we’re happy to answer questions about it in our Discord. But here are some resources that we found especially useful during development and might help you get started:

Result, and open source

In the end, we reached the goals we set out to reach, and we're quite happy with the result! Since you've read this far, you should come try it out yourself to get the full experience. Join our Discord and you'll find Allfred in the #welcome channel!

A screenshot of Allfred’s initial pitch and user prompt.

Alloverse is all about open source, and thus we'd love to share our bot with you! As it stands, however, the source code contains a variety of Discord IDs that are best kept secret, so we're unable to share the repository as-is. That said, if there's interest in the bot's source code, let us know in our Discord and we'll happily publish a "clean" version for you on our GitHub repository!

Thanks for reading! Have a nice day 🙂

How to build a VR retro arcade: do it yourself!
https://alloverse.com/2022/06/20/how-to-build-a-vr-retro-arcade-do-it-yourself/ – Mon, 20 Jun 2022

In part 1, you got an intro and team retrospective of the "retro arcade" hack week project, where the whole team banded together and built a fun project that really put Alloverse to the test. In part 2, you got an architecture overview, explaining the components involved in bringing a SNES emulator into a collaborative VR environment.

In this part 3, you'll get a step-by-step tutorial on building it yourself! This is going to be intermediate-to-advanced, and a lot of code. The point is to demo the capabilities of the Alloverse platform, and show you some of the really advanced stuff that is possible with LuaJIT and AlloUI. If that sounds appealing to you, strap in and let's go!

Let’s get to coding!

We start out by creating an AlloUI project somewhere on our computer. (This is the same as the first step in our Getting Started guide for the Lua language. If you want an easier starter project, I recommend following that guide!)

$ mkdir myarcade
$ cd myarcade
$ git init
$ /bin/bash -c "$(curl -fsSL https://raw.githubusercontent.com/alloverse/alloapp-assist/master/setup.bash)"
$ git commit -am "initial"
$ ./allo/assist run alloplace://sandbox.places.alloverse.com # just to confirm it worked

If you open up the Alloverse app and connect to sandbox.places.alloverse.com, you should now be seeing your app in there as a flat surface with a button on it.

Stubs and dummies

This flat slab is an eyesore. Let's spice it up and add some basic UI, to make it feel like a real object that the user will want to walk up to and interact with. Replace lua/main.lua with the code below. It instantiates stubs for Emulator and RetroMote; loads a fancier model file; creates the "main UI" view that everything will attach to; places game controllers at a good default location; and kicks off the runloop for the app.

local client = Client(arg[2], "myarcade")
app = App(client)
local Emulator = require("Emulator")
local RetroMote = require("RetroMote")

assets = {
    arcade = ui.Asset.File("models/220120-arcade.glb"),
}
app.assetManager:add(assets)

local main = ui.View(Bounds(0.2, 0.1, -4.5,   1, 0.2, 1))
main:setGrabbable(true)

local emulator = Emulator(app)

local cabinet = main:addSubview(ui.ModelView(Bounds.unit():scale(0.3,0.3,0.3), assets.arcade))

local controllers = cabinet:addSubview(View())
controllers.bounds:scale(5,5,5):move(0,5.6,-1.4)
emulator.controllers = {
    controllers:addSubview(RetroMote(Bounds(-0.15, -0.35, 0.6,   0.2, 0.05, 0.1), 1)),
    controllers:addSubview(RetroMote(Bounds( 0.087, -0.35, 0.6,   0.2, 0.05, 0.1), 2))
}

app.mainView = main

app:connect()
app:run(emulator:getFps())

You’ll need three more files before this runs.

1st file: Download and place the arcade cabinet model file into models/. This’ll be the fancy arcade cabinet model that the user will want to walk up to.

2nd file: Create lua/Emulator.lua, where we just stub out an Emulator class. We will later use this to wrap all of libretro in a self-contained class.

local class = require('pl.class')
Emulator = class.Emulator()

function Emulator:getFps()
    return 60
end

return Emulator

3rd file: Create lua/RetroMote.lua, which will be our game controller object that we can pick up in VR to control the retro game being played. For now, this is also just an empty stub subclassing ui.View.

local class = require('pl.class')

class.RetroMote(ui.View)

return RetroMote

Run it…

$ ./allo/assist run alloplace://sandbox.places.alloverse.com 

and you should have a blank cabinet in the Sandbox place, yay!

Using FFI to hook up libretro to Emulator.lua

Now to make the arcade machine actually DO something. This is going to be a bit of a journey: we’ll be using LuaJIT’s FFI (”foreign function interface”) to talk directly to libretro’s C API. This way, we’ll be able to tell libretro to emulate games, send controller input to it, and receive back audio and video from the emulated system.

The libretro API is thankfully all in a single header, but LuaJIT can’t read it as-is, since the LuaJIT FFI lacks a preprocessor (and also a few other C features). I’ve gone ahead and preprocessed the header for you using the advanced tool “my brain and hands”, so that LuaJIT can read it. It’s too long to paste here; instead, download it from here, and put it into lua/cdef.lua.

Head over to lua/Emulator.lua. Splat this at the top:

local class = require('pl.class')
local tablex = require('pl.tablex')
local pretty = require('pl.pretty')
local vec3 = require("modules.vec3")
local mat4 = require("modules.mat4")
local ffi = require("ffi")
local RetroMote = require("RetroMote")

ffi.cdef(require("cdef"))

(we’ll be needing all those other requires later, so might as well get them in there now).

That last ffi.cdef line is all that’s needed for you to be able to call all of libretro as if it was a lua library.

We just need to dynamically load it. Which means we need libretro somewhere on your system…

Installing libretro on your machine

Right, okay. If you’re on Linux:

sudo add-apt-repository ppa:libretro/stable && sudo apt-get update && sudo apt-get install retroarch
sudo apt-get install libretro-nestopia libretro-genesisplusgx libretro-snes9x
sudo apt install libavcodec-dev libavformat-dev libswresample-dev libswscale-dev

On a Mac:

  1. Install RetroArch from https://www.retroarch.com/
  2. Launch RetroArch, and use the menus to install the following cores: Snes9X, Genesis Plus GX, Nestopia.

On Windows:

It should be totally doable to get RetroArch installed in a way that's compatible with this project on Windows, at least if you're using MinGW or something similar; but it's not a setup I'm familiar with, so I can't provide a detailed guide here.

Initializing libretro

Let’s dynamically load our newly installed libretro. Below are some helpers that…

  1. locate the libretro dynamic library on your platform
  2. load it using ffi.load() so you can call functions from it!
  3. set all the callbacks to closures that call our own methods, so we can start reacting to events happening in libretro. (Just passing self._input_state directly as the callback wouldn't work, because it wouldn't be called with the correct self instance. So we capture self in a closure and call our own method through it.)
  4. and finally, set up controllers and init the library!

Go ahead and add this code to lua/Emulator.lua:

function os.system(cmd)
    local f = assert(io.popen(cmd, 'r'))
    local s = assert(f:read('*l'))
    f:close()
    return s:match("^%s*(.-)%s*$")
  end

function _loadCore(coreName)
    local searchPaths = {
        "~/.config/retroarch/cores/"..coreName.."_libretro.so", -- apt install path
        "/usr/lib/x86_64-linux-gnu/libretro/"..coreName.."_libretro.so", -- gui install path linux
        "$HOME/Library/Application\\ Support/RetroArch/cores/"..coreName.."_libretro.dylib" -- gui install path mac
    }

    for i, path in ipairs(searchPaths) do
        print("Trying to load core from "..path)
        local corePath = os.system("echo "..path)
        ok, what = pcall(ffi.load, corePath, false)
        if ok then
            print("Success")
            return what
        else
            print("Failed: "..what)
        end
    end
    error("Core "..coreName.." not available anywhere :(")
end

function Emulator:loadCore(coreName)
    if coreName == self.coreName then return end
    self.coreName = coreName
    self.handle = _loadCore(coreName)
    assert(self.handle)

		-- libretro uses this callback to poll us for settings it should use
    self.handle.retro_set_environment(function(cmd, data)
        return self:_environment(cmd, data)
    end)
	  -- libretro calls us asking what the state of the game controllers are
    self.handle.retro_set_input_state(function(port, device, index, id)
        return self:_input_state(port, device, index, id)
    end)
	  -- libretro has some image data for us
    self.handle.retro_set_video_refresh(function(data, width, height, pitch)
        return self:_video_refresh(data, width, height, pitch)
    end)
		-- libretro has some audio data for us
    self.handle.retro_set_audio_sample_batch(function(data, frames)
        return self:_audio_sample_batch(data, frames)
    end)
    self.handle.retro_set_controller_port_device(0, 1); -- controller port 0 is a joypad
    self.handle.retro_set_controller_port_device(1, 1); -- controller port 1 is a joypad
    self.handle.retro_init()
end

So as you can see, we're able to call C functions like retro_set_controller_port_device(unsigned port, unsigned device) directly from Lua now that we've cdef'd their interface. The only bummer is that we can't use the handy macros like RETRO_DEVICE_JOYPAD, so we have to send in the raw numbers that they map to. But at least we don't have to manually allocate FFI memory for the arguments! It's all handled automatically.
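If the magic numbers bug you, one optional convenience is to mirror the few constants you actually use in a plain Lua table. A small sketch (the values are the ones this tutorial already uses, as defined in libretro.h; add more as you need them):

-- Optional: name the handful of libretro macros we use, so call sites stay readable.
local RETRO = {
    DEVICE_JOYPAD                 = 1,   -- RETRO_DEVICE_JOYPAD
    ENVIRONMENT_GET_CAN_DUPE      = 3,   -- RETRO_ENVIRONMENT_GET_CAN_DUPE
    ENVIRONMENT_SET_PIXEL_FORMAT  = 10,  -- RETRO_ENVIRONMENT_SET_PIXEL_FORMAT
    ENVIRONMENT_GET_LOG_INTERFACE = 27,  -- RETRO_ENVIRONMENT_GET_LOG_INTERFACE
}

-- ...which would let the joypad setup in loadCore read:
-- self.handle.retro_set_controller_port_device(0, RETRO.DEVICE_JOYPAD)
-- self.handle.retro_set_controller_port_device(1, RETRO.DEVICE_JOYPAD)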

(It also blows my mind every time an FFI interface is able to assign a closure to a regular old C function pointer.)

Loading games

While we’re at it, let’s load the game too.

Emulator.coreMap = {
  sfc = "snes9x",
  smc = "snes9x",
  nes = "nestopia",
  smd = "genesis_plus_gx",
}

function Emulator:loadGame(gamePath)
		-- figure out which core to use based on file extension
		local ext = assert(gamePath:match("^.+%.(.+)$"))
		local core = assert(Emulator.coreMap[ext])
		self:loadCore(core)
		
		self.gamePath = gamePath
		
		-- this is how you allocate more complex data types, like structs. 
		self.system = ffi.new("struct retro_system_info")
		-- luajit assumes your reference is a pointer, so we can send it byref to populate it
		self.handle.retro_get_system_info(self.system)
		
		self.info = ffi.new("struct retro_game_info")
		self.info.path = gamePath
		local f = io.open(gamePath, "rb")
		local data = f:read("*a")
		self.info.data = data
		self.info.size = #data
		local ok = self.handle.retro_load_game(self.info)
		assert(ok)
		
		self:fetchGeometry()
end

We use ffi.new to allocate named C structures (which have previously been handed to LuaJIT with the ffi.cdef method). With retro_get_system_info we're passing the allocated structure in to get it filled out so we can use it later; and with retro_load_game we're passing in a structure that we have filled with data so that libretro can work with it. The data in question is the full game, just read from disk and mashed into RAM. (There are also APIs for streaming game data so that one could play bigger games, but that's overkill for what we're doing here.)
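As a quick sanity check, once retro_get_system_info has filled in the struct, you can read its C-string fields back into Lua with ffi.string. Something like this should work as a line dropped in at the end of loadGame (library_name and library_version are fields declared on retro_system_info in libretro.h):

-- const char* fields become Lua strings via ffi.string()
print("Loaded core:",
    ffi.string(self.system.library_name),
    ffi.string(self.system.library_version))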

Constructor

Later on we're going to want to render the game's graphics, so we need to know how big the emulated screen is going to be (its "geometry") — that's what fetchGeometry is for. Let's also get the class's constructor in place and finish up all the setup we need:

function Emulator:_init(app)
    self.app = app

		-- app will need to set speaker to a ui.Speaker and screen to a ui.VideoSurface
		-- to present audio and video. we'll get to that.
    self.speaker = nil
    self.screen = nil
		
    self.sample_capacity = 960*32
    self.audiobuffer = ffi.new("int16_t[?]", self.sample_capacity)
    self.buffered_samples = 0
    self.audiodebug = io.open("debug.pcm", "wb")
    self.soundVolume = 0.5
    self.frameSkip = 2 -- 1=60fps, 2=30fps, etc
    self.onScreenSetup = function (resolution, crop) print("warning: assign onScreenSetup to receive the screen dimensions") end
end

function Emulator:fetchGeometry()
    self.av = ffi.new("struct retro_system_av_info")
    self.handle.retro_get_system_av_info(self.av)
    print(
        "Emulator AV info:\n\tBase video dimensions:", 
        self.av.geometry.base_width, "x", self.av.geometry.base_height,
        "\n\tMax video dimensions:",
        self.av.geometry.max_width, "x", self.av.geometry.max_height,
        "\n\tVideo frame rate:", self.av.timing.fps,
        "\n\tAudio sample rate:", self.av.timing.sample_rate
    )

    self.resolution = {self.av.geometry.base_width, self.av.geometry.base_height}
    -- this callback is used to ask the UI layer to create a VideoSurface of the correct dimensions
    self.onScreenSetup({self.av.geometry.base_width, self.av.geometry.base_height}, {self.av.geometry.max_width, self.av.geometry.max_height})
end

Run-fix-repeat: The “environment”

At this point, it's easier to just try to run it and fix all the runtime errors than to read documentation and figure out exactly what needs to be configured to make things work. We'll be doing run-fix-repeat cycles now until we have a working emulator.

Let's make the code load up a game. Please legally acquire a NES or SNES ROM file for a game you enjoy, and put it in roms/. Then, change the bottom of main.lua to read:

emulator:loadGame("roms/rom.sfc") -- or whatever you named your game rom
app:connect()
app:run(emulator:getFps())

So, first try: where do we crash?

$ ./allo/assist run alloplace://sandbox.places.alloverse.com
Trying to load core from ~/.config/retroarch/cores/snes9x_libretro.so
Success
./allo/deps/luajit-bin//bin/linux64/luajit: ./allo/../lua/Emulator.lua:77: attempt to call method '_environment' (a nil value)

Okay. So immediately upon loading a game, libretro is asking us for “environment”, which means it’s asking us for runtime settings. Let’s implement it and check which setting it’s asking for:
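A minimal first stab only needs to print the setting number and decline to handle it, something along these lines (this is just the probe version; the real implementation follows below):

-- Probe version: log which setting libretro asks for, and answer "not handled".
function Emulator:_environment(cmd, data)
    print("Emulator is asking for setting #", cmd)
    return false
end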

$ ./allo/assist run alloplace://sandbox.places.alloverse.com
Emulator is asking for setting #	34
What does 34 mean? If we go back to libretro's API and just search for 34, we'll find it means RETRO_ENVIRONMENT_SET_SUBSYSTEM_INFO. We don't care about subsystems, so we can just return false to say "we have no environment value for setting 34, sorry".

If you’re feeling extra ambitious, you can now go ahead and run-fix-repeat for all the requested environment settings until it works… or you can copy-paste my implementation here 😅

function Emulator:_environment(cmd, data)
    if cmd == 27 then -- RETRO_ENVIRONMENT_GET_LOG_INTERFACE
        local cb = ffi.cast("struct retro_log_callback*", data)
        cb.log = self.helper.core_log
        return true
    elseif cmd == 3 then -- RETRO_ENVIRONMENT_GET_CAN_DUPE
        local out = ffi.cast("bool*", data)
        out[0] = true
        return true
    elseif cmd == 10 then -- RETRO_ENVIRONMENT_SET_PIXEL_FORMAT
        local fmt = ffi.cast("enum retro_pixel_format*", data)
        local fmtIndex = tonumber(fmt[0])
        local indexToFormat = {
            [0]= "rgb1555",
            [1]= "bgra", -- ?? supposed to be xrgb8
            [2]= "rgb565",
        }
        self.videoFormat = indexToFormat[fmtIndex]
        print("Emulator requested video format", fmtIndex, "aka", self.videoFormat)
        return true
    elseif cmd == 9 then -- RETRO_ENVIRONMENT_GET_SYSTEM_DIRECTORY
        local sptr = ffi.cast("const char **", data)
        sptr[0] = "."
        return true
    elseif cmd == 31 then -- RETRO_ENVIRONMENT_GET_SAVE_DIRECTORY
        local sptr = ffi.cast("const char **", data)
        sptr[0] = "."
        return true
    elseif cmd == 32 then -- RETRO_ENVIRONMENT_SET_SYSTEM_AV_INFO
        print("System av info changed")
        return false
    elseif cmd == 37 then -- RETRO_ENVIRONMENT_SET_GEOMETRY
        local geometry = ffi.cast("struct retro_game_geometry*", data)
        print("New geometry")
        self:fetchGeometry()
        return true
    end

    --print("Unhandled env", cmd)
    return false
end

You’ll note that we just return false for all the things we don’t care about. Some notes:

  • libretro MUST have a logging callback. And, it must be a vararg function. LuaJIT's FFI doesn't support that, so we're going to have to implement that part in C. BUMMER. We'll get to that later.
  • You'll note that data is a void pointer, so we have to cast it to whatever is appropriate for the given setting, and then set it with pointer dereferencing. Since pointers and single-value arrays are the same thing in C, we can dereference a pointer by referring to its first array value.
  • The rest of the settings should be fairly self-explanatory; if not, ping @nevyn on our Discord and ask him to explain them to you 😅

Run-fix-repeat: The logging helper

Running the code above will crash with "field 'helper' is a nil value". So let's implement the helper. Download libretro.h into c/libretro.h, then implement c/helper.c:

#include "libretro.h"
#include <stdio.h>
#include <stdlib.h>
#include <stdarg.h>

void core_log(enum retro_log_level level, const char *fmt, ...) {
    char buffer[4096] = {0};
    static const char * levelstr[] = { "dbg ", "info", "warn", "err " };
    va_list va;

    va_start(va, fmt);
    vsnprintf(buffer, sizeof(buffer), fmt, va);
    va_end(va);

    if (level == 0)
        return;

    fprintf(stderr, "[%s] %s", levelstr[level], buffer);
    fflush(stderr);

    if (level == RETRO_LOG_ERROR)
        exit(EXIT_FAILURE);
}

Write a Makefile to build it (in the project root):

.PHONY : clean

CFLAGS += -fPIC -g
LDFLAGS += -shared

SOURCES = $(shell echo c/*.c)
HEADERS = $(shell echo c/*.h)
OBJECTS=$(SOURCES:.c=.o)

TARGET=lua/libhelper.so

all: $(TARGET)

clean:
	rm -f $(OBJECTS) $(TARGET)

$(TARGET) : $(OBJECTS)
	$(CC) $(CFLAGS) $(OBJECTS) -o $@ $(LDFLAGS)

We’ll also have to load this new libhelper.so at runtime. In Emulator.lua, right after self.handle = _loadCore(coreName), add:

self.helper = ffi.load("lua/libhelper.so", false)

Try it out:

$ make all
$ ./allo/assist run alloplace://nevyn.places.alloverse.com
Trying to load core from ~/.config/retroarch/cores/snes9x_libretro.so
Success
Map_HiROMMap
[info] "Street Fighter2 Turbo1" [checksum ok] HiROM, 32Mbits, ROM, NTSC, SRAM:0Kbits, ID:B___, CRC32:D43BC5A3

Woah. It works?! It's even using our logging callback to print the game name, which means we're officially emulating a game inside VR (even though we can't see it yet).

Hey, you. Well done getting here! You deserve a fika break. Get a coffee and a cinnamon bun. I'm going to do that too, because the above was quite a handful. Hopefully by the time you're done, I'll have published part 4, and we can dig into displaying video, playing audio, and receiving controller input.
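If you want to poke ahead before part 4 and try actually stepping the emulator (for instance by calling self.handle.retro_run() once per frame from the app's update loop), you'll at least want the remaining callbacks we registered to exist, so they don't hit nil methods. Here's a do-nothing sketch (placeholders of my own, not the real implementations we'll build in part 4):

-- Placeholder callbacks: just enough to keep retro_run() from calling nil methods.
function Emulator:_input_state(port, device, index, id)
    return 0 -- report "nothing pressed" for every button on every controller
end

function Emulator:_video_refresh(data, width, height, pitch)
    -- part 4: convert `data` into a pixel buffer and push it to self.screen
end

function Emulator:_audio_sample_batch(data, frames)
    -- part 4: copy the samples into self.audiobuffer and feed self.speaker
    return frames -- tell libretro we consumed everything it gave us
end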

If you have any questions or feedback, please head over to our Discord and let us know!

GitHub Projects for open source project tracking
https://alloverse.com/2022/05/31/github-projects-for-open-source-tracking/ – Tue, 31 May 2022

Hi everyone! Alloverse PM Tobes here with another update on how we do open source product development. This time I'll be taking a closer look at the GitHub Projects beta and sharing some of my thoughts on it. Let's go!

First off: readers of Nevyn's latest post will recall this illustration of project tracking tools:

Some common project tracking tools, ordered by how structured they are

When we began working on Alloverse, Shortcut was our initial tool of choice. While very competent, it was perceived by the team as complex and overwhelming. Sometimes even the most basic things, like planning and viewing sprints, felt like a drag.

More importantly though, Alloverse’s focus is on developing in public – and Shortcut doesn’t allow for viewing boards from outside the organization. Thus, we had more than enough motivation to switch tools.

Enter GitHub Projects

GitHub is a central part of Alloverse. All our code is hosted with them, and the years they’ve spent creating the largest hosting platform for open source have generated a great deal of trust and goodwill. As such, the bar to entry for trying out another one of their tools was very, very low.

As some readers may already know, GitHub has supported simple project tracking for a long time. However, the tool has been very barebones and, honestly, quite lacking. In that light, it's very exciting that they're currently working on a major overhaul which, in a burst of exhilarating creativity, was dubbed "Projects Beta".

I brought it up to the Alloverse team, and we made the decision to get on board. Now, after having used it for a few months, here’s my short list of takeaways. I hope they can be of use for you.

The Good Stuff

  • Views (tabs with filters) can be made public. As previously explained, this is a hard requirement for us, and I’ve yet to see another tool that allows for this.
  • The ability to directly link a group of issues with a particular label (tag). For example, here’s a direct link to all Good First Issues for people new to the project. Pretty cool!
  • GitHub Projects is lightweight and, for most developers, familiar. Internally, we’re simply more likely to use it than other unfamiliar or more complex tools.
  • Because it’s built on the code hosting platform and all its usual functionality, GitHub Projects provides straightforward, logical couplings with a project’s ongoing issues.
  • Keyboard shortcuts everywhere make navigating and using the page a breeze.

The Not-So-Good Stuff

GitHub Projects is still very clearly in beta. It's not buggy, but it is lacking features. This leads to hacks, which in turn lead to frustration and increased complexity.

  • Unfortunately, milestones are solely available on a per-repo basis, not per-project. An Alloverse Milestone can involve any number of our 50+ repos, and it’s not realistic for us to create the same milestone 50+ times across all repos. 😬
  • There's no concept of Epics. As it stands, we're currently hacking this desired functionality together – but neither creation nor syncing of the related issues is automated, so it's never certain to be an exact representation of the actual work done.
  • There's no way to link issues to each other, like "A is blocked by B".

Hacking Milestones & Epics

To work around GitHub Projects’ lack of Milestones and epics, we’ve created an empty repo called open planning (and created a tab/view for it; “2022Q2Q3” below):

The Alloverse Q2Q3 Milestone view

In this repo, we've added a few issues (with a current Milestone label) whose titles begin with "EPIC", as such:

The Milestone’s contents: Five issues in a trenchcoat

Each of these quasi-epics, in turn, contains a list of the issues that relate to it. These have to be added, and linked to, manually, as part of the issue description.

The contents of the AlloUI design “epic” issue

It's a bit tedious, but it's the best solution we've got so far – feel free to check it out here. It's high on my wishlist for GitHub to add this functionality natively by providing us with:

  • The ability to create milestones on an organizational level.
  • A concept of an epic. Preferably, it would be able to automatically generate a description (such as our checklist above) based on which issues get linked to it.
    • Epics could have custom properties (why not use the already-existing concept of labels?) such as "research", "mockup", "design" etc. Said labels could then act as headers for their corresponding sub-lists of issues (so the solution above is achieved automatically).
    • When an issue is closed, the corresponding checkbox in the Epic gets checked. No manual checking of tickets allowed – it's important that the list is always tightly coupled to the actual issues.

What the Future Holds

Their new Projects feature is obviously a major focus for GitHub, and they're far from done with it. According to their own marketing team, several of our problems are actively being worked on right now. In addition, I'm excited to try out some all-new features also in progress, such as custom-made, automated workflows. These have the potential to be quite useful for keeping the project up-to-date with its actual progression, for example by automatically closing issues and setting their status to "Done" when their pull requests have been approved.

Conclusion

After having used GitHub Projects Beta for a few months, we’ve found it to be a good fit for us, and we have no plans of switching away from it. While it has some problems, the most important of them are actively being worked on by the GitHub team (although it would be nice to have access to a timeline…).

Over a transitional period, we’ll still be keeping Shortcut for large-scale milestone planning. Additionally, it serves as a bucket of backlog issues and ideas from the past couple of years which will slowly get migrated to GitHub issues as we move forward.

That’s all for now – thanks for reading! If you’d like to check out GitHub Projects Beta in action, take a look at our current sprint, or the Milestone we’re working toward.

The Alloverse Ambassador Program
https://alloverse.com/2022/05/20/the-alloverse-ambassador-program/ – Fri, 20 May 2022

Community is the backbone of any successful open source project. However, growing a community from zero is not easy. There are over 7 billion people in the world, so how do we best reach the ones that share our values? Social media can be great, but at the same time, algorithms can be moody and the battle for attention is hard fought. So, why not engage our own community of like-minded people to advocate for us? After all, isn't word of mouth how most of us usually find our favorite tools and projects?

With that in mind, we developed our very own Alloverse Ambassador Program. With access to an exclusive Discord channel, ambassadors are able to closely collaborate with each other and the Alloverse team when developing their own apps. In addition, every ambassador gets some physical swag (a super cool Alloverse hoodie!) and their own virtual AlloPlace, hosted by us.

With all these perks, of course, come some responsibilities as well. So, what exactly does it mean to be an ambassador?

It's quite simple, honestly. Your values align with ours: supporting open core, being friendly and making a lasting impact (you can read more about the ethos of our mission here). An ambassador is expected to spread the word about Alloverse IRL and online. Plus, we might reach out to you from time to time for feedback, or to help us test new features we're currently working on!

Since the launch of the program in late March, we've received quite a number of applications and already admitted a few amazing ambassadors into the program! Each one is a pretty awesome individual and we are very lucky to have them on board. The common denominator for joining was the belief in an open metaverse. Other reasons included the desire to build 3D apps specific to their domain expertise, and the fact that Alloverse is accessible and easy to get started with.

It’s been super fun to see our community grow! We can’t wait to see what our new Ambassadors will create and explore the ways in which we’ll all collaborate. Most importantly, though, we’re excited to see how the program develops!

So – does the Alloverse Ambassador program sound like something you’d like to get involved with? If so, read more and submit your application here: https://alloverse.com/ambassador/#application-form

How to build a VR retro arcade: the techy details
https://alloverse.com/2022/05/10/how-to-build-a-vr-retro-arcade-the-techy-details/ – Tue, 10 May 2022

We recently built a retro arcade as an Alloverse Place — complete with Street Fighter II, shuffleboard noises, and the smell of popcorn.

The key to this experience is of course the arcade machine itself — it’s an AlloApp, a piece of software emulating games from the 80s and 90s.

In part 1 of this blog post series, we focused on the process and team experience of building a fun hack project together.

This post, part 2, focuses on an overview of the juicy technical bits: how were we able to run a SNES emulator in a collaborative environment? Is this something you could do yourself with something like Unity/Unreal? Can we teach you how to do it?

The answers are: through magic; only with extreme effort; and yes we can and we will!

In part 3 and on, you’ll actually get a code tutorial to build your very own arcade.

Technical overview: What are we building?

As a recap, an AlloApp is a 3D thing running inside an Alloverse Place. Unlike other VR apps, an AlloApp doesn't take over your entire experience (think: Beat Saber, taking over the entire environment, soundscape, hand behavior, etc. to give you a holistic and immersive experience), but rather runs alongside your experience (think: a widget, tool, decoration or other thing in your environment). If you'd like to dive deeper into what an AlloApp is and how it works, try out our Getting Started guide, or perhaps our Architecture documentation.

Here’s an architecture block diagram of the project:

LibRetro/RetroArch

The first piece of tech involved is LibRetro from RetroArch. This is a project to consolidate all retro game console emulators under a single umbrella.

RetroArch is an end-user app you can run to emulate pretty much anything. They do this by wrapping all the regular emulators as libraries, and calling them "cores". For example, the popular Super Nintendo emulator Snes9X has been made into a core that can be loaded into RetroArch to play Super Nintendo games.

LibRetro, then, is the library implementing all the core abstractions. It can be reused to build your own emulator front-end. Which is what we're doing!

LibRetro is unusual in that it has two separate API surfaces:

  • It has an interface “downwards” into emulator cores. Each core, such as the Snes9X core, has to reply to API calls to “perform a step of emulation”, “give me the pixels on screen right now”, “Call this callback every time there is new audio to be played”, “these buttons are pressed on player 1’s controller”, “save your save games here”, and so on.
  • It has an interface "upwards" towards some sort of user interface. The standard user interface is of course RetroArch, which might be running on your computer, on a set-top box, or some portable Android-based game console. This interface doesn't have to know anything about any specific emulator. It'll just tell libretro, "please play this game the user selected, using the standard core for that kind of game, and here are the controllers. Give me the audio and video when the core has them". This is the same whether it's emulating NES, Amiga or Playstation 2.

Allonet/AlloUI

The next layer is the Alloverse specific libraries.

Allonet is our C library abstracting out network communication, audio and video transfer, and world syncing. We’ll be sending the audio, video and controller data through this pipe, as a bridge between libretro and the user.

AlloUI is an abstraction on top of allonet, providing something that looks like a high-level front-end UI framework; but it's actually just a layer to talk to Placeserv and visor apps, which are the pieces of software actually providing the user interaction. We'll be using this to build a user interface for the user.

Flynncade

Finally, the project “Flynncade” wraps LibRetro and builds an AlloUI interface on top of it.

The emulated screen's pixels are sent as a video stream to placeserv and then forwarded to all the users looking at the arcade.

The same goes for the emulated sound.

The user's controllers are interpreted by this code and converted to a SNES controller's input, which is then sent to LibRetro. In VR, we'll steal all the face buttons on the user's left and right Quest controllers, and map those to a standard "SNES/PlayStation"-style controller. In 2D, we'll steal the keyboard and use that for input.
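Conceptually, that mapping boils down to a lookup table from whatever input names the UI layer reports to libretro's joypad button indices. The sketch below is illustrative only: the numeric values are the standard RETRO_DEVICE_ID_JOYPAD_* constants from libretro.h, but the input names on the left are made-up placeholders, not actual AlloUI identifiers:

-- Illustrative only: hypothetical UI-layer input names mapped to libretro
-- joypad button IDs (values per RETRO_DEVICE_ID_JOYPAD_* in libretro.h).
local buttonMap = {
    ["right-hand/a"]    = 8,  -- RETRO_DEVICE_ID_JOYPAD_A
    ["right-hand/b"]    = 0,  -- RETRO_DEVICE_ID_JOYPAD_B
    ["left-hand/x"]     = 9,  -- RETRO_DEVICE_ID_JOYPAD_X
    ["left-hand/y"]     = 1,  -- RETRO_DEVICE_ID_JOYPAD_Y
    ["keyboard/return"] = 3,  -- RETRO_DEVICE_ID_JOYPAD_START
}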

Then there’s also code to make the arcade machine usable and pretty:

  • The 3D model of the arcade cabinet has a swappable texture to match the game being played
  • There are buttons to get help, choose game, change sound volume, read more about games, etc.

That's it for part 2! Next time, we'll get to building! But you're of course more than welcome to try it on your own and let us know how it went by joining our Discord!

Focusing on building: What it means for our product
https://alloverse.com/2022/04/26/focusing-on-building-what-it-means-for-our-product/ – Tue, 26 Apr 2022

When we incorporated Alloverse a bit over two years ago, we figured that in addition to developing the platform, we would also attempt to rent it out as a virtual meeting platform for businesses as a way to (partly) finance operations. As it turns out, however, there are a lot of virtual collaboration tools out there. Additionally, Alloverse's unique selling points – the ability to create your own apps and being open source – are not necessarily the most attractive features for corporate clients looking for a way to quickly improve work-from-home collaboration.

Thus, after the summer of 2021, we had to make a shift. The Alloverse founders joined a three-month consulting project to finance another year of development, and our sales-oriented CEO Julie stepped down to make room for our core founder and CTO Nevyn to embrace the role.

With the pressure to develop a market-facing product relieved (for now), we could focus 100% on improving Alloverse for the people who already believe in our mission, and so we reached out to our developers, supporters and enthusiasts for advice. During early fall, the feedback we received made it clear that Alloverse needed to return to its core of improving the platform and building a community. In a nutshell, here are the major points of feedback, along with the actions we took to respond to them over the last six months:


“We love Alloverse’s mission, but it’s not loud & clear enough”

  • To this end, we’ve updated our website to be more in line with our mission and our focus on community development.
  • Additionally, we’re posting to all our social media channels at least weekly, and started making weekly announcements in Discord.

“Make it easier for people to build AlloApps”

  • Most 3D apps have some 2D UI, and our research showed that building 2D interfaces in Alloverse was more complicated than it probably should be. To address this issue, we've added some helpful components (like TabView and StackView) to our API, which we'll continuously improve. Additionally, we're evaluating whether we should build a full-fledged visual editor for laying out AlloUI.
  • We're looking into the possibility of supporting another language. At the moment, we're leaning toward something that web developers would be comfortable with, such as TypeScript.

“Make it clear for people how they can contribute”

  • We've created a Discord bot that funnels new members through an onboarding flow with questions, giving them roles based on their responses and recommending related channels & people. You can try out the flow here, if you like.
  • We’ve moved all our project tracking to GitHub – even our weekly sprints are now all completely public, and anyone can add issues for consideration in the backlog.

“The more active the community, the better”

  • As a way for us to further support potential contributors and spread the word about Alloverse, we've recently launched the Alloverse Ambassador Program.

Looking Ahead

Now, six months later, Alloverse is in a better place than ever. The platform is more stable, the Ambassador program is taking off and we’re excited about the future:

  • The roadmap for spring & summer has been set, informed by user feedback – in addition to objectives we believe will have an impact in the even longer term, such as
    • Screen sharing to allow us to have all our internal meetings in Alloverse and thus do recurring wine tasting.
    • Collection of certain basic usage statistics in order to find bugs and inform how we develop tutorials.
    • Overall visual improvements for a consistent and more confidence-inspiring Alloverse.
  • When fall rolls around, the core team will briefly slow down our engagement with Alloverse in favour of another consulting round to keep our accountant happy.
  • In late fall, we believe an XR hype wave is coming. This will coincide with Apple releasing consumer-grade XR goggles and Meta & Microsoft getting further with their respective Metaverse Projects.

If you’re as excited as us about making sure the future of the internet and metaverse is open and inclusive, check out our plans, and come join our community!

Tools and processes for running a remote or distributed company
https://alloverse.com/2022/04/13/remote-work-principles/ – Wed, 13 Apr 2022

I've been part of running two distributed startups over 9 years, plus 4 years of work across timezones at Spotify before then. The first few years were rough, and both I and the orgs I was in fell into a lot of common pitfalls that took a long time to dig out of. I've been asked to share some learnings, so I'll try to distill the most important ones here. If you just want the tl;dr, scroll down to "communication principles".

My first experience with failed remote work was when Spotify's New York office popped into existence, and was supposed to work together with Spotify's Stockholm office. (I wasn't a manager then, so this is just my perspective as one engineer in a large org.) Within each office, people could talk day in and day out and create amazing plans with high fidelity; but all that connected the offices were some IRC channels and some product managers thrown into the deep end. The end result was tons of frustration, people thinking the others were ignorant or rude, duplicate work, etc.

Two boxes with "new york office" and "stockholm office" with two thin lines between them labeled "irc between engineers" and "cries for help over email"

So this is how I was introduced to communication bandwidth asymmetry, or, you know, "silos". People who were supposed to communicate between offices wouldn't, because it's easier to talk to the person next to you. And when you do, those not around you miss out on important information. This holds true, and the effect is even worse, when even just one person is in another location.

Two boxes with "london office" and "that one designer in Poland" with two thin lines between them labeled "pngs in a drive", and "cries for help over email"

From this experience at Spotify, and then the following first few years at Lookback, we distilled a rule for remote work.

If even only one worker is remote, you have to pretend that everyone is.

Nevyn’s first rule of remote work

This might sound like “if even one worker is remote, you have to decrease the productivity of everyone”. This is not true, because the way you efficiently work remotely is by becoming communication and documentation experts, which helps everyone in any organization.

Communication bandwidth VS permanence

So let’s talk about communication bandwidth, and the different trade-offs you can make to create the perfect communication environment for your org.

The highest communication bandwidth is a face-to-face meeting with another person. The lowest communication bandwidth is an SMS convo where you get a reply a week after deadline. Then there’s of course a huge spectrum in between.

In my mind there are seven relevant communication lines within a tech org, in decreasing order of bandwidth:

  1. Face-to-face
  2. Video calls (or VR calls if you’re fancy)
  3. Audio calls
  4. Chat, like Slack or Discord,
  5. Project tools, like Trello, Shortcut or Jira
  6. Email, which I detest so I’ll pretend it doesn’t exist really
  7. And then your company's documentation tool, such as Notion, a wiki or your drive full of documents.

Here they're laid out from highest to lowest communication bandwidth, as in capacity to get coworkers informed in the shortest amount of time possible:

The upside to high bandwidth is that you can collaborate and reach decisions fast. The downside is that it is impermanent: things you figured out face-to-face are documented nowhere. Things you wrote in the company wiki are stored forever.

So there’s another axis to the selection of tools: there’s both high vs low bandwidth, and ephemeral versus permanent.

This leads to the second major learning, a way of implementing the first rule:

Pick the tool as far towards permanence as you can, while still communicating in a way that makes you comfortable and productive.

Nevyn’s second rule of remote work

If you’re new to this, you might still prefer a call or a chat about most things; and that’s fine.

And if you're dealing with a problem that is time-sensitive, or hard to think or communicate about, you definitely need to pick a high-bandwidth tool as well. That's also fine.

But, it’s only fine if you also follow the third rule:

As soon as a conversation or task completes, document it in a more permanent location.

Nevyn’s third rule of remote work

For example, if product manager Holly runs into engineer Jane at the coffee machine and takes that opportunity to tell her that the button really needs to be green and not red, the information has been transferred, but the rest of the org doesn’t know, especially not the designer in Poland.

This is how it should go: Holly and Jane have a heated conversation in the kitchen, and then log the decision they’ve reached in the relevant card in Trello. Perhaps they even pinged the designer in Poland in the Trello comment; then she now knows, too!

This is especially true for things like a group conversation in Slack or a big video call where you think, "oh, everybody who needs to know already knows". Well, the you of three months from now certainly doesn't know anymore, and the new employee starting next week doesn't know either. So, go log it!

Tool choices

Okay, with the theory out of the way… Let’s get into the tools and talk about my personal preferences, and tool usage that has worked for me and my orgs.

Face-to-face, video call and audio call don’t really have a permanent effect in this system, so I’ll leave them out.

Chat

There are two big contenders in this space: Slack and Discord. Slack is great for organizations. Discord is great for communities. If you are running a tech company, you're likely already using one of these and aren't going to switch, so I'll leave it there.

Project tool

This is a big one. You might not even be using one, in which case I highly recommend you start. The important role of a project tool is to have somewhere to communicate about tasks, and to surface who is working on what. Is the endpoint for this going to be REST or GraphQL? Is the button red or green? Who's doing the copy for this feature anyway?

Most tools I’m used to are centered around some sort of agile workflow. I haven’t read the manifesto, and I have no strong preference for an exact working method, but working on small concrete tasks and iteratively figuring out what to do is really nice, and I encourage you to work like that.

Again, there are a bunch of different tools with different trade-offs. In this case, the trade-off is structured versus unstructured.

A graph going from left to right along the axis "unstructured-structured". From the left: Some random google doc, Notion, Trello, Github Projects, Shortcut, Monday, and finally Jira.

As your organization grows, your need for structure will grow. Therefore, you’ll likely start somewhere on the left and slowly migrate to the right. Or, if structure feels extra important in what you’re doing, start somewhere in the middle.

  • I really can’t recommend using a Google Doc. It lets you use any format you like, but it doesn’t help you at all in staying sane as things move around and happen in the org.
  • Notion is super flexible. You can type up a document in free text talking about what to do, and suddenly slam a to-do list in the middle. Then in the next section, you can add a full kanban board, inline in the middle of the social media planning meeting notes document. You can use it as a support tool to just add a little more structure to what you’re doing, or let it run your whole company with high fidelity. I love it.
  • Trello feels like it started out as the anti-Jira, or as one engineer getting tired of pointing a webcam at a whiteboard of post-its for their remote colleagues. It’s an old and refined tool that just lets you move cards between columns. That can be incredibly powerful, and is often just what you need; it’s my recommended starter tool for small organizations, or even one-person shops. However, when you become a larger organization and start feeling the need to group cards under “epics” or “milestones”, it starts falling short.
  • Enter Shortcut (formerly Clubhouse). This has been my favorite tool for years. It’s basically Trello, but with all the agile tools slapped on: stories in epics in milestones, projects, estimates, iterations, roadmaps, team assignments, what have you. The downside is that it’s no longer flexible enough to let you sort cat pictures into tiers; on the upside it does the heavy lifting for you in terms of keeping you organized according to agile principles.
  • If you’re a 100+ people organization, you are probably already using something like Monday or Jira, where you have a crazy amount of tooling and customization to fit any part of your organization. You probably also have one or even several people whose job is to maintain this tool.
  • Bonus mention is GitHub Projects, which recently got a big upgrade and is really competent, snatching a spot somewhere in between Trello and Shortcut. It’s in beta, and is currently getting a lot of attention and upgrades.

At Lookback we use Shortcut; and at Alloverse we use a combination of GitHub Projects and Notion.

Documentation tool

I’m leaving the best for last. This is going to basically just be a huge ad for Notion.

What you have right now is probably a shared Google Drive with all the documentation and supporting documents for your organization. Maybe you have your code of conduct and vacation policy and meeting notes tucked away neatly in folders somewhere there.

The problem is that the important information isn’t likely being surfaced to those who need it. If you need the vacation policy, you’re likely heading over to your email to find the announcement email with the link to the Drive, instead of going to the Drive and finding it quickly.

We found Notion at Lookback, and started using it as the documentation tool for everything: organization, processes, technical documentation, research, you name it. Fast forward a few years, and every new employee has told me: this is the fastest they’ve ever been onboarded, and the fastest they’ve understood a new organization. Everything is amazingly documented, and you can find exactly what you need when you need it.

At Alloverse, we continue to use Notion to great effect. We’re also an open source company, so I can actually share our Notion, since it holds far fewer secrets than Lookback’s. We’re not quite as well-organized, but it’ll do.

Screenshot of Alloverse's People Handbook in Notion

An important innovation over a drive is the sidebar. Every document is available in a hierarchy, which sort of surfaces the structure of the company. It’s easy to add more docs, and they’re all linked together like in a wiki, so it’s easy to cross-reference.

Use this to document all your processes, all your decisions, use it as a CRM, put all your meeting notes in there, write your blog post drafts in there, etc.

Example usage: Note taking

For example, we use a table to log our weekly check-ins:

Each row is its own full document:

Example usage: Projects

Here’s how a project might be documented.

A main page describes what the project is, mainly linking out to other documents that go into the details.

Communication principles

Finally, I want to share Alloverse’s internal “communication guidelines”, which is basically a summary of the above plus some things not directly related to communication bandwidth. Some, I’ve brought with me from shared learnings at Lookback. Some, we’ve formulated together ourselves at Alloverse. If you ignore all the above, and just bring these with you, you’ll be in a great place to run a distributed company.

  • Openness. Prefer an open discussion in a public channel over private messages if the subject is our product or business, unless it specifically concerns only the two people involved. This way, everybody can learn from the discussion, and others can chip in if they have relevant knowledge.
  • Praise in public, feedback in private. Spread the love so we can all celebrate awesomeness together! If you have personal feedback, give it in private to the person or to your boss so that it can be resolved, instead of shaming people in public.
  • Avoid communication silos. In a remote organization, it’s important to include everybody, or important information gets stuck in silos. If you have a conversation IRL or in voice/video/VR, take notes and log it in the appropriate tool (Discord, Shortcut, Notion, Drive) so that others can take part in your decisions, findings and ideas.
  • Use the right forum. Depending on the nature of your discussion, different tools might be relevant:
    • Transient and async collaboration: Discord. You’re collaborating, or notifying people about something that is relevant in the moment, but doesn’t need to be found or referenced in a week or two.
    • Tasks (product or management): Shortcut. Use comments and mentions in the correct card. If a discussion in Slack turns into insights, save those in Shortcut so they can be referenced later when you’re working on the thing.
    • Documentation: Notion. If things need to be referenced or found, write a document in Notion. If you’re collaborating with external partners, use Google Drive (but use real Drive documents, not docx, since support for it is buggy 😥).
    • Sync collaboration: Slack/Discord video, pop.com (great for pair coding) or goteam.video (great for quick hangouts). E.g. a meeting or pair programming. Don’t forget to summarize your findings in a text tool afterwards if that’s relevant.
    • Long-form discussion: Email
  • Manage your pings. Turn on Do Not Disturb outside of your working hours, and don’t check your work email outside of work. This way, people can ping you from their time zone or work hours without worrying about disturbing your personal life.
  • Respect people’s work-life balance. Don’t expect people to reply right away, and don’t expect people to reply at all outside of their work hours.
  • Set timing expectations. When asking for something, let people know how urgently you need it.
  • Decision making. Designate who really owns the task or project, so that after a conversation, it’s clear who makes the decision.

There are many other topics that I could cover, but this post is too long already. These include:

  • Remote presence
  • Team spirit, having fun together, “water cooler moments”
  • Avoiding feeling surveilled
  • How to make sure everybody helps out with keeping things organized and documented
  • Check-in cadence, text vs video
  • etc…

I hope this has been helpful! If you have any questions or comments, ping us on Twitter at @nevyn or @alloverse, or join the conversation over at our community Discord!

]]>
How to build a VR retro arcade: a hack week retrospective https://alloverse.com/2022/02/03/how-to-make-a-retro-arcade-in-a-week-101/ Thu, 03 Feb 2022 16:46:53 +0000 https://alloverse.com/?p=9715 New year, new projects!

But where do you get started after a holiday break? AND how do you keep yourself motivated?

For us, two things were happening:

  • We wanted to showcase what kinds of apps you as a developer can build on the Alloverse platform.
  • Plus we wanted something fun to kick the year off with after a well-deserved holiday rest.

It took us no time to realize that we could address both by doing something really nerdy, technical AND fun! Nevyn had previously experimented with using LibRetro* in an AlloApp, so it was a no-brainer that we should take his side-project experiment and turn it into a full-on multi-user VR retro arcade!

*LibRetro is an open source project and reusable software library for emulating and playing retro games -- anything from Super Nintendo to relatively modern PlayStation games.

In Nevyn’s experiment, two Alloverse users could play Turtles 2 for NES, albeit with a low framerate and a slightly broken color palette. So it was the perfect project to take on: hack on it for two weeks, show off how much you can accomplish in Alloverse in a relatively short amount of time, and have fun while at it!
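If you’re curious what “using LibRetro” involves at the lowest level, here’s a minimal sketch of the frontend side in C. To be clear, this is not the Alloverse arcade app’s actual code (a real AlloApp streams the frames and audio into the 3D scene rather than printing them); the core filename and ROM path below are hypothetical placeholders, but the exported entry points and callback shapes are the ones every libretro core provides.

```c
/* Minimal libretro frontend sketch (illustrative only, not the Alloverse app).
 * "snes_core.so" and "game.sfc" are hypothetical placeholders. */
#include <dlfcn.h>
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>
#include <stdio.h>

/* Callback types, matching the signatures in libretro.h. */
typedef bool    (*retro_environment_t)(unsigned cmd, void *data);
typedef void    (*retro_video_refresh_t)(const void *data, unsigned w, unsigned h, size_t pitch);
typedef void    (*retro_audio_sample_t)(int16_t left, int16_t right);
typedef size_t  (*retro_audio_sample_batch_t)(const int16_t *data, size_t frames);
typedef void    (*retro_input_poll_t)(void);
typedef int16_t (*retro_input_state_t)(unsigned port, unsigned device, unsigned index, unsigned id);

struct retro_game_info { const char *path; const void *data; size_t size; const char *meta; };

/* The core hands us one finished frame per retro_run(); a VR frontend would
 * upload this buffer to the texture shown on the arcade cabinet's screen. */
static void on_video(const void *data, unsigned w, unsigned h, size_t pitch)
{ (void)data; printf("frame %ux%u (pitch %zu)\n", w, h, pitch); }

/* Answering "false" to every environment query makes the core fall back to
 * defaults -- one easy way to end up with a "slightly broken color palette". */
static bool on_env(unsigned cmd, void *data) { (void)cmd; (void)data; return false; }
static void on_poll(void) {}
static int16_t on_input(unsigned p, unsigned d, unsigned i, unsigned id)
{ (void)p; (void)d; (void)i; (void)id; return 0; /* nothing pressed */ }
static void on_sample(int16_t l, int16_t r) { (void)l; (void)r; }
static size_t on_batch(const int16_t *d, size_t frames) { (void)d; return frames; }

int main(void)
{
    void *core = dlopen("./snes_core.so", RTLD_LAZY);
    if (!core) { fprintf(stderr, "%s\n", dlerror()); return 1; }

    /* Every libretro core exports these entry points. */
    void (*retro_init)(void) = (void (*)(void))dlsym(core, "retro_init");
    void (*retro_run)(void)  = (void (*)(void))dlsym(core, "retro_run");
    bool (*retro_load_game)(const struct retro_game_info *) =
        (bool (*)(const struct retro_game_info *))dlsym(core, "retro_load_game");

    /* Callbacks must be registered before initialising and running the core. */
    ((void (*)(retro_environment_t))dlsym(core, "retro_set_environment"))(on_env);
    ((void (*)(retro_video_refresh_t))dlsym(core, "retro_set_video_refresh"))(on_video);
    ((void (*)(retro_input_poll_t))dlsym(core, "retro_set_input_poll"))(on_poll);
    ((void (*)(retro_input_state_t))dlsym(core, "retro_set_input_state"))(on_input);
    ((void (*)(retro_audio_sample_t))dlsym(core, "retro_set_audio_sample"))(on_sample);
    ((void (*)(retro_audio_sample_batch_t))dlsym(core, "retro_set_audio_sample_batch"))(on_batch);

    retro_init();

    /* Some cores want the ROM bytes loaded into memory instead of a path;
     * a real frontend checks the core's need_fullpath flag first. */
    struct retro_game_info game = { .path = "./game.sfc", .data = NULL, .size = 0, .meta = NULL };
    if (!retro_load_game(&game)) { fprintf(stderr, "failed to load game\n"); return 1; }

    for (int i = 0; i < 600; i++)   /* ~10 seconds at 60 fps; a real app runs until quit */
        retro_run();

    dlclose(core);
    return 0;
}
```

The interesting engineering in the arcade app is in what replaces those printf stubs: getting the frame buffer, audio samples and controller state flowing into the Place so everyone can see and play.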

Our goals were as follows:

  • Re-do the hacky parts, so that color, audio, framerate and input are all smooth and comfortable to use
  • Let the user pick games from a list
  • Design a space where you can hang out and play together — the building, the bleep-bloop sounds, the smell of popcorn (or, the nostalgic memory of it at least! Although we highly recommend making some popcorn before you start playing 😉), all that jazz.
  • Put the emulator in a proper arcade cabinet, and skin it with art from the chosen game
  • Improve the controls, both when used from VR and when used from a computer

At the end of all that, our hope was (and is) that people (looking at you, developers) from around the Internet would find the thing we’ve built, try it out, realize how awesome our tech and APIs are, get an urge to build their own 3D apps, and try out our app building tutorial! And maybe join our Discord and tell us what they think of it 🙂

How did it all go? Well, honestly it exceeded our own expectations! Not only did we manage to check off all the things in that bullet list, we now have a beautiful fancy arcade space with plants and art in it (and of course retro arcade machines with many games to choose from), and we implemented A LOT of overall performance improvements for the platform.

Take a look! (remember: you’ll need the Alloverse app to try it out)

And if you’re curious, here’s a snippet of how the experience was for each of the Alloteam members:

Nevyn:

Being able to stream video from any random app into Alloverse really inspired me, and got me thinking about so many different fun applications. I’m also a huge retro tech nerd, so building a SNES emulator for VR was unavoidable once the idea got into my head.

Doing a hack project with the whole team after three weeks of winter holidays was a great choice. After a lot of rest, the project really kick-started our brains with creativity, teamwork, and just… fun!

The things I’m the most happy with are:

  • The competence of our platform and APIs. Without adding any platform features, it’s apparently possible to build a working multiplayer game, integrating with a C library, with live-streamed video, audio and two-handed controller input. That would be really hard to build in two weeks on pretty much any other platform I can think of.
  • Working together intensely on a project, sharing creativity from our own fields of expertise, is super rewarding. The whole is so much better than if any one person had built it, and we’re pushing each other to explore our skills even further.

The things that could be improved:

  • Performance of the Alloverse platform is a little too low to have more than 2 arcade machines and multiple players, so that’s a bit disappointing. However, we’re spending time this week to improve it, so this “dogfooding” exercise actually ends up making the platform better for everyone, even though we “just” made a fun side project.
  • My brain was in a bad way for several days after the first week, because I’d worked too much. Even when work is extremely fun, it’s important to take breaks and find ways to rest (especially when your normal “resting activity” is writing fun code 😅)
  • Now that we’re mindfully designing a whole Place and not just an app, it’s becoming really clear that there needs to be a way to save a “layout”: a set of apps and decorations for a space, so you can set it up once and then come back to it later (e.g. if strangers come in and “mess up” your place, or if apps crash and lose their setup)

Domi:

Working on a big project together was a great way to start the year and easily find focus after the holiday break. Bonus points: it’s been REALLY fun! I loved the world building aspect of it, from imagining the arcade to translating it to paper and visualizing it, and then explaining what I see in my head to everyone else. The amount of progress I made in navigating and using Blender is crazy; the skills were pretty much there already, but my confidence level in using the tool got a massive boost. I do wish I had worked more with everyone else (even if just by chilling in the General Voice chat room when everyone was there), because the times I did, I felt like I was getting more done than when working separately. But we’ve already established a while back that working together works better for us than working individually. So now this team member just has to remember to hop into General when working, so she doesn’t get lost in her own head and give in to frustrations.

I got hit with allergies early on which messed up my flow a bit (not being able to look at screens is no fun), feeling like I was behind with my side of the project compared to everyone else. But the end result absolutely exceeded my expectations and is so much cooler than what I imagined we would have at the end of the two weeks.

Most important takeaway? It was a very successful experiment, beautifully showcasing how much you can actually accomplish in Alloverse in just two weeks. (No surprises there, but it still feels amazing.)

Voxar:

Building a defined place was a super inspirational way to work, and gave us a much clearer goal of what we really needed to do, instead of random apps floating around in nothingness.

Tobi:

The decision to start a new, fun-oriented team project right after the winter break was fabulous. Not only did it turn out to be a great way to de-escalate the angst of getting back to work, but it came with additional (un)expected benefits as well:

  • By making sure the project is multidisciplinary, it promotes internal communication & collaboration. There were days when I worked with 2D art, 3D modelling, and coding it all together. It’s been a while since I felt like I learned so much.
  • Dogfooding leads to a slew of downstream benefits – but finding a way to do it in a genuine, joyful way has been challenging. Building this Arcade was far and away the most successful approach yet.

Technically, a few issues became apparent with the building of a complex environment such as this one:

  • Transparency rendering is an issue. Now that we’re building more complex environments, we’re realising how tricky transparency and rendering order can get (see the sketch after this list).
  • Serious performance issues can no longer be ignored. They’re suddenly make-or-break.
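To make the “rendering order” part concrete: with standard alpha blending, translucent surfaces only composite correctly if they are drawn after all opaque geometry and sorted back-to-front from the camera. Here’s a small, generic C sketch of that classic approach; it illustrates the technique in general, not Alloverse’s actual renderer, and the entity struct and draw_mesh stub are made up for the example.

```c
/* Generic back-to-front transparency sorting sketch (not Alloverse's renderer). */
#include <stdio.h>
#include <stdlib.h>

typedef struct { float x, y, z; } vec3;

typedef struct {
    vec3 position;
    int  is_transparent;
    /* ...mesh, material, etc. would live here in a real engine */
} entity;

static vec3 g_camera;   /* camera position for the frame being rendered */

static float dist2(vec3 a, vec3 b)
{
    float dx = a.x - b.x, dy = a.y - b.y, dz = a.z - b.z;
    return dx * dx + dy * dy + dz * dz;
}

/* Farthest-from-camera first, so nearer surfaces blend over the ones behind them. */
static int back_to_front(const void *pa, const void *pb)
{
    const entity *a = *(const entity *const *)pa;
    const entity *b = *(const entity *const *)pb;
    float da = dist2(a->position, g_camera), db = dist2(b->position, g_camera);
    return (da < db) - (da > db);
}

/* Stand-in for a real draw call. */
static void draw_mesh(const entity *e)
{
    printf("draw (%.1f, %.1f, %.1f)%s\n", e->position.x, e->position.y, e->position.z,
           e->is_transparent ? " [blended]" : "");
}

static void render_frame(entity **entities, size_t count, vec3 camera)
{
    g_camera = camera;

    /* 1. Opaque geometry first, in any order; the depth buffer sorts it out. */
    for (size_t i = 0; i < count; i++)
        if (!entities[i]->is_transparent)
            draw_mesh(entities[i]);

    /* 2. Collect transparent entities and sort them back-to-front. */
    entity **transparent = malloc(count * sizeof *transparent);
    size_t n = 0;
    for (size_t i = 0; i < count; i++)
        if (entities[i]->is_transparent)
            transparent[n++] = entities[i];
    qsort(transparent, n, sizeof *transparent, back_to_front);

    /* 3. Draw them last (typically with depth writes off). */
    for (size_t i = 0; i < n; i++)
        draw_mesh(transparent[i]);

    free(transparent);
}

int main(void)
{
    entity glass  = { { 0, 0, -2 }, 1 };   /* near, transparent */
    entity window = { { 0, 0, -4 }, 1 };   /* far, transparent  */
    entity wall   = { { 0, 0, -5 }, 0 };   /* opaque            */
    entity *scene[] = { &glass, &wall, &window };

    render_frame(scene, 3, (vec3){ 0, 0, 0 });   /* draws wall, then window, then glass */
    return 0;
}
```

Even per-object sorting like this breaks down when transparent objects intersect or wrap around each other, which is part of why transparency and rendering order get complex in a full environment.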

Pleasant surprises:

  • Alloverse’s spatial sound works wonders for ambiance. The bleeping and blooping from several arcade machines feels very genuine.
  • The fact that we were able to materialize this whole project in a virtual world is crazy.
]]>