OpenSimulator Community Conference (OSCC)
https://omigroup.org/opensimulator-community-conference-oscc/
Wed, 14 Jan 2026

OpenSimulator held its annual community conference about a month ago, on 6-7 December 2025.

It is a virtual conference held in OpenSimulator, with a connected Expo area that remains accessible year-round. Their hypergrid address is http://cc.opensimulator.org:8002/

The OMI community was well represented.

humbletim joined a Viewer panel to share the Firestorm viewer VR mod, alongside puffball, who is working on the Benthic viewer. The video is available on YouTube.

Keyframe presented "The Teleportal: An interactive gateway of the Virtual Worlds Museum™". The video is available on YouTube.

indiebio presented on "MetaCulture = Metagaming + Integrated knowledge infrastructures". The video recording is available on YouTube, and the talk is also written up as a blogpost.

OMI also had a panel titled "Nurturing the Health of the Metaverse Ecosystem", which was well attended. The recording is available on YouTube.

We spoke about our work and our experiences over the past few years. The panel was composed of indiebio, Aaron Franke and zodiepupper, who shared their own work along with experiences previously collected from the wider community. Aaron and zodie also had a lot of fun sharing their experiences importing their custom avatars.

The energy from the conference has continued to ripple outwards. For OMI, the glTF avatar work and vehicle extensions have received a boost of contributors and conversations that is still continuing. The Virtual Education Journal (VEJ) presented, and invited contributions for their latest issue, which is focused on community; OMI is submitting a piece. Special thanks to avatarjoy for not only being one of the main organisers of OSCC, but also for helping OMI members feel welcome and supporting people new to the OpenSimulator experience.

All the talks from OSCC are now available: https://www.youtube.com/@AvaConOrg

Building the ecology of the Metaverse
https://omigroup.org/building-the-ecology-of-the-metaverse/
Thu, 08 Jan 2026

This post is about a cluster of interests shared by a group of OMI members. It does not represent all OMI members, or all of OMI's interests.

We are a collective using digital technologies to address the challenge of interacting with complex knowledge ecosystems. We use people-centric technology development to empower informed, accountable action from the bottom up. Using digital hyper-connected, integrated technologies - what we understand as the Metaverse - potentially gives grassroots initiatives the ability to contribute to the growing integrated knowledge ecosystem - the Metaverse as an ecosystem - and thereby scale their efforts and networks globally. We are looking for consortium members to pursue research funding, or strategic partnerships with companies interested in the commercial potential of these approaches. Please find some use cases with more information below. We also welcome anyone to join the journey; find us on Discord - the Open Metaverse Interoperability Group (OMI). Interested? Email [email protected]

Common needs observed

There is a lot of information out there. However, there is a lack of globally accessible, user-friendly resources for outsiders - visitors, educators and researchers - to access this information in a way that makes sense. The complex information available about our interactions with nature (for example in biosphere reserves) or with natural resources (for example in water resource management), including key characteristics, infrastructure and services, is virtually invisible to anyone outside of those directly working with the data. What's worse, as outsiders it is hard to even find out how to get help finding the information!

The lack of integrated information about complex environments, e.g. biosphere reserves or urban resource management, hampers research on human usage patterns and on how experiences affect conservation and local development. The result is a noticeable gap in studies on user perceptions and behaviours, particularly concerning digital tool adoption and trade-off evaluations.

Opportunity presented

A unified knowledge infrastructure - which may include a platform with searchable data, interactive maps, and clear support channels - would greatly improve the discovery and exploration experience.

Further, an open, integrated, user-friendly knowledge infrastructure like this has great potential in, for example, education, or even in extending the game development ecosystem to include entertainment games that incorporate physical-world assets.

Technologies used

System-of-systems data integration

Systems-of-systems integration combines data sources (e.g., environmental, terrestrial, meteorological and marine) at multiple levels (syntax, semantics or concepts), including sources from different sectors such as the private sector or government. This allows a more human-centric approach to data integration: meeting people where they are, rather than trying to get everyone to agree on a new approach or standard.

Rather than integrating each system on a one-to-one basis, this approach achieves interoperability between different data sources through an Evolutionary Architecture described by IPSME (Nevelsteen & Wehlou, 2021). Instead of only gathering diverse stakeholders around a table in an attempt to reach a common approach to data management, IPSME is conducive to rapid prototyping and iterative development. In the Evolutionary Architecture, stakeholders can be added to and removed from the ecosystem dynamically, at runtime, with no downtime for the systems being integrated.

Rather than re-engineering existing data sources, IPSME supports legacy systems, i.e., systems where the technical knowledge has been lost, or where the risk of introducing errors into long-running systems would be detrimental.
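
To make the pattern concrete, here is a minimal sketch of the core idea in TypeScript. This is not the IPSME SDK; the names and message shapes are invented for illustration. Each system speaks only its own native schema, and translators external to both systems rewrite messages between schemas, so participants can join or leave at runtime.

```typescript
// Hypothetical sketch of an IPSME-style messaging environment.
// Systems publish messages in their own native schema; *external*
// translators -- attachable and detachable at runtime -- rewrite messages
// between schemas. Neither system is re-engineered when a protocol changes.

type Message = { schema: string; payload: unknown };
type Handler = (msg: Message) => void;

class MessagingEnvironment {
  private subscribers: Handler[] = [];

  subscribe(handler: Handler): () => void {
    this.subscribers.push(handler);
    // Returning an unsubscribe function lets participants leave at runtime.
    return () => {
      this.subscribers = this.subscribers.filter((h) => h !== handler);
    };
  }

  publish(msg: Message): void {
    for (const h of [...this.subscribers]) h(msg);
  }
}

const env = new MessagingEnvironment();

// System A only understands its own schema.
env.subscribe((msg) => {
  if (msg.schema === "systemA/v1") console.log("A received:", msg.payload);
});

// An external translator bridges System B's schema to System A's.
// It can be attached or removed without touching either system.
const detachTranslator = env.subscribe((msg) => {
  if (msg.schema === "systemB/v1") {
    env.publish({ schema: "systemA/v1", payload: msg.payload });
  }
});

// System B publishes natively; the translator relays it to System A.
env.publish({ schema: "systemB/v1", payload: { teleportTo: "region-42" } });
detachTranslator(); // stakeholders removed at runtime, with no downtime
```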

AI knowledge assist modules

Artificial Intelligence (AI) will be employed to implement interoperability in IPSME and to assist in analysing and manipulating data, external to the systems being integrated, for pattern recognition in decision-making or for different visualisations of the data. Machine learning models, colloquially known as “cooperative AI”, can assist decision-making in complex systems (Wang et al., 2024).

The function of the language model-based AI is to serve as a knowledge base and reduce misunderstanding among stakeholders. It is especially useful when complex tasks need to be decomposed into smaller pieces. Multi-agent coordination algorithms can then be employed to provide comprehensive model outputs.

Several pilot sites are crucial for providing access to data, information, infrastructure, tools, etc. Google’s NotebookLM has become the prevailing tool for building a local knowledge base for each pilot. It works like retrieval-augmented generation (RAG), but is more powerful and intelligent when handling multi-source data.
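
As a rough illustration of the retrieval step behind such a knowledge base (NotebookLM's internals are not public, so this is only the generic RAG pattern): documents are embedded as vectors, a query retrieves the closest passages, and those passages are handed to the language model as grounding context. The embeddings below are hand-made toy vectors standing in for a real embedding model.

```typescript
// Toy sketch of RAG-style retrieval over a pilot site's documents.
type Doc = { text: string; embedding: number[] };

function cosine(a: number[], b: number[]): number {
  const dot = a.reduce((s, x, i) => s + x * b[i], 0);
  const norm = (v: number[]) => Math.sqrt(v.reduce((s, x) => s + x * x, 0));
  return dot / (norm(a) * norm(b));
}

// In a real pilot these embeddings would come from an embedding model.
const corpus: Doc[] = [
  { text: "Zoning rules for the core conservation area", embedding: [1, 0, 0] },
  { text: "Public transport routes to the reserve", embedding: [0, 1, 0] },
  { text: "Guided activities and local services", embedding: [0, 0.4, 0.9] },
];

function retrieve(queryEmbedding: number[], k = 2): Doc[] {
  return [...corpus]
    .sort(
      (a, b) =>
        cosine(queryEmbedding, b.embedding) - cosine(queryEmbedding, a.embedding)
    )
    .slice(0, k);
}

// A visitor's question about getting there sits near the "transport" axis.
const hits = retrieve([0, 0.9, 0.3]);
console.log(hits.map((d) => d.text));
// The retrieved passages would be prepended to the model prompt as context.
```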

Deep learning and machine learning, as well as wider areas of AI application (chat, wearable devices, computer vision), AI policies, governance models, and AI-driven human-nature interaction methods are also of interest.

Data visualisation / extended reality visualisations

Visualisations fall into two categories: frameworks and platforms. A "framework" is a toolbox for developers to build things. Members of our collective are developing new frameworks that aim to be more accessible to a wider range of users, able to do different things, and more easily interoperable.

A "platform" is a finished place for users to do things. Frameworks are used to build a platform, and some framework builders have their own platform as a "reference implementation". The apps and websites we are most familiar with - Open Simulator, SecondLife, VRChat, Facebook ... are all platforms. Implementing solutions on existing platforms is currently contracted out to companies external to the collective.

XR prototyping

WebXR prototyping with a focus on agency preservation (the collective's concerns about privacy, open source, etc.). The basis of this approach is to rely on files for interoperability. What is revolutionary is that it is very boring, yet it works. Example video: XR Experiences by Fabien Bénétou (The Future of Text ’25) (12 min).

Knowledge with agency

Applying so-called Metaverse methods and protocols - including systems-of-systems data engineering, AI knowledge assist modules, and extended reality visualisations - to integrate diverse knowledge infrastructures through a people-centric lens enables the development of technology that encourages individuals and groups to engage critically with information.

Use cases

AquaSavvy: Participatory Urban Water Management

https://aquasavvy.eu/ Bringing water-sensitive urban design into digital twins of cities, creating a visual canvas tool through and for participatory planning perspectives. This is the first funded prototype of our approach.


Biosphere Metaverse: Contributing research on human-nature interactions facilitated by a digitalised knowledge system

Current proposal in development for the NFRFI2026 funding call.

1) It is difficult for visitors (especially international visitors) and researchers to find integrated information about each biosphere reserve (type, size, climate, zoning, accessibility, etc.). They also don’t know where to get help with acquiring information. A basic infrastructure of databases, search tools and interactive maps should allow different users to easily find and explore, e.g., the types of ecosystems, zoning, accessibility, public transport, activities, local services, etc.

2) Consequently, there seems to be limited research on how people actually use and experience biosphere reserves, and how this affects conservation and local development. Much work on biosphere reserves focuses on ecology, land use or governance structures, but less on everyday practices and lived experiences of users. There are relatively few systematic studies on how different groups perceive trade-offs between conservation and development, and how digital tools shape access and engagement. 

3) Given the current state of technology, there is considerable potential for AI-supported tools. For example, systems that help visitors plan trips based on their needs and constraints, multilingual conversational interfaces that explain rules and ecological values, and analytical tools that combine ecological and social data to better understand socio-ecological dynamics.

Working Google doc


Building a cybernetic library network

Weaving the knowledge commons

in draft, currently included in the biosphere project as a complementary use case.


Education as an Emergent game, where Metagaming is core to play

"Exploring the world is great and all that, but poking fun at exploring the world? Oh hell yes." - Mike Sowden

What is an emergent game?

Emergent games refer to a type of gameplay that arises from the interactions and behaviors of players within a game system, rather than being predetermined by the game designers. This type of gameplay is often characterized by complex, dynamic systems and open-ended rulesets that allow players to create their own experiences and stories within the game world. The sandbox concept is one application of this.

Applying the emergent game context to education can create an ecosystem where creativity can thrive, for learners as well as educators. This approach shifts the focus to the player: building a playground with relatively open goals, plus the verbs and game mechanics for players to find their own fun and their own solutions to the problems presented. As the ample literature on using Minecraft for education shows, this is a very promising area for education, but it is still often a limiting and frustrating experience.

We can improve on this in one way by extending the ecosystem beyond a specific platform. That is part of our interest in the Metaverse, and the focus of the interoperability aspect. If you choose to have your community, and your virtual education, in Second Life, for example, and you want to join up with another community in OpenSimulator, you should be able to do that. And in Roblox, or Minecraft, or wherever you want. Perhaps you have a unique platform developed for your specific needs, or someone has made an industrial simulation available for education; those should interoperate seamlessly with your activities or curriculum.

What is Metagaming in this context?

In traditional role-playing games, metagaming refers to the use of real-world information that the player character should not know about. A metagame can also refer to achievement systems and other official elements outside the actual core game. While the use of the term varies by context, the different meanings have in common a reference to something beyond the game itself. In our modern world where Wikipedia was first scoffed at as a resource, then reluctantly embraced, and with the current challenges and opportunities with AI, education needs to move from ring-fencing the "game" of education, to teaching how to engage critically with the wealth of information (and misinformation) that exists beyond the education sandbox.

In their article “A Typology of Metagamers: Identifying Player Types Based on Beyond the Game Activities”, Kahila’s group investigated how school learners engaged with games for educational purposes, looking at what happens beyond the game. They describe three distinct profiles of players - versatile metagamers, strategizers, and casual metagamers - and map their metagame activities into the main categories of game-enabling, strategizing, discussing, information-seeking, creating and sharing, and consuming activities.

Moving education to be responsive to modern needs

Advances in structuring data, for example in geospatial mapping and in knowledge infrastructures more generally, unlock the potential for using physical-world assets in games. Using physical-world assets in emergent approaches to game design is well suited to allowing players to interact with their game worlds in varied ways. Exploring playing with the physical world – morphing and changing it – through games can allow us to learn about the world not through top-down education, but through a curiosity that does not even have to involve the truth. Through these games we can build a new sense of belonging and a common language across polarised opinions, because it’s just for fun, after all.


Including physical world assets into entertainment games - play for play's sake

in draft


Transformative management

in draft


The human side of open source work
https://omigroup.org/the-human-side-of-open-source-work/
Wed, 05 Nov 2025

A few weeks ago a demo by zodiepupper expanded into casual conversation and touched on various challenges and reasons for being involved in open source. As the weekly community meetings are recorded, we were able to capture and summarize the conversation here.

Beyond the Code: Authenticity, Decentralization, and the indie hustle at OMI

While many conversations about open technology focus strictly on code, our recent conversation highlighted that creating a resilient metaverse is equally about human interoperability, community, and sheer creative willpower—even when survival means living on rice and vegetable dip.

Attendees included indiebio, Zodiepupper, AvatarJoy, and Aaron Franke (Godot Engine), who together explored conference strategy and the realities of academic funding, all while demonstrating cutting-edge decentralized VR development.


1. The Human Layer: Finding Power in Authenticity

The meeting opened with a candid discussion on networking and community building. Members reflected on the struggle to balance genuine personality with the expectations of professional engagement.

Zodiepupper noted that sincerity naturally fosters strong connections: “People really do appreciate, like, just being yourself. Just being authentic. A lot of people perceive that as very charismatic, because then the consistency is important.”

Indiebio agreed with the necessity of authenticity, but added a note of pragmatism, acknowledging that personal traits sometimes need modulation for effective collaboration: “I 100% believe in authenticity, but I think tailoring those [interactions] because my natural personality is actually very overpowering and like, borderline rude. If I was authentic, I think that would just be too much for a lot of people. It’s a balance.”

This sentiment underlines a core tenet of open source collaboration: success isn't just about code quality; it's about mutual respect and sustainable human connection.


2. OMI: Philosophical Foundation for Concrete Action

The group briefly addressed a recent tongue-in-cheek comment describing OMI as "more philosophical than action-oriented." This led to a clarification of OMI’s core mission:

“We don’t build the projects as OMI—we support people building projects.”
indiebio

This philosophy frames OMI not as a centralized development organization, but as a supportive ecosystem where individuals and teams can pursue groundbreaking open projects.

This mandate ties directly into the group’s plans for the upcoming OpenSimulator Community Conference (OSCC), where indiebio is coordinating a panel submission. The working title, “Nurturing the health of a metaverse ecosystem,” perfectly reflects OMI's focus on sustainable community design.


3. The Precarious Reality of the Open Developer Hustle

The financial realities of independent and academic open source work formed an unexpectedly humorous interlude. Indiebio shared the challenging paradox of securing a large (>$1M) research grant that includes no personal salary; as they are not formally employed by the lead institution, this necessitates applying for external post-doctoral funding and exploring other avenues.

The discussion quickly turned into a relatable moment of solidarity regarding the "indie developer diet."

“Yeah, eating and having the ability to pay your rent is always nice.”
AvatarJoy

“Honestly, lately, I’ve been eating nothing but rice and, like, blended veggie dip stuff. Rice is good.”
Zodiepupper

This lighthearted moment underscores the deep personal sacrifice and determination required by contributors who prioritize open standards and creative freedom over conventional corporate stability.


4. Project Spotlight: Bark VR’s Decentralized Resilience

The second half of the meeting pivoted to action, featuring a live demo of Zodiepupper’s open-source Godot-based VR environment, Bark VR. This project serves as a compelling proof point for OMI's goals, focusing heavily on decentralization and user resilience.

Driven by frustrating experiences with centralized platforms (such as a community meltdown in NeosVR that shattered social support networks), Bark VR is engineered for uncensorable, user-owned social space.

Zodiepupper explained the architecture:

  • Networking: the system uses Matrix for user management (providing protocol bridging and decentralized identity) and the Iro library for peer-to-peer session communication, eschewing central game servers.
  • Persistence: the network uses Paxos-style distributed state machine logic, ensuring that if any single user disconnects (even the session originator), the world state is maintained across the remaining clients; a simplified sketch of this idea follows below.
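
To illustrate the persistence idea (a drastically simplified sketch, not Bark VR's actual code, and it elides the proposal-numbering machinery real Paxos needs): every peer holds a full replica of the world state, an update commits once a majority of currently connected peers accept it, and no peer is special, so any one of them can drop out without losing the world.

```typescript
// Simplified majority-commit replication among equal peers (illustrative).
type WorldState = Record<string, unknown>;

class Peer {
  state: WorldState = {};
  constructor(public id: string) {}
  accept(_update: WorldState): boolean {
    return true; // a real acceptor would check Paxos proposal numbers here
  }
  apply(update: WorldState): void {
    Object.assign(this.state, update);
  }
}

class Session {
  private peers = new Set<Peer>();
  join(peer: Peer): void { this.peers.add(peer); }
  leave(peer: Peer): void { this.peers.delete(peer); } // nobody is special

  propose(update: WorldState): boolean {
    const live = [...this.peers];
    const acks = live.filter((p) => p.accept(update)).length;
    if (acks <= live.length / 2) return false; // commit needs a majority
    live.forEach((p) => p.apply(update));
    return true;
  }
}

const session = new Session();
const [a, b, c] = [new Peer("a"), new Peer("b"), new Peer("c")];
[a, b, c].forEach((p) => session.join(p));
session.propose({ "door/1": "open" });
session.leave(a); // the session originator disconnects...
session.propose({ "lamp/3": "on" }); // ...and the world keeps evolving
console.log(b.state); // { "door/1": "open", "lamp/3": "on" }
```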

“The main reason that I came on [Bark VR] was to make something that can’t happen to [others]. I just want it to basically never be able to be taken down.”
Zodiepupper

A Push for Universal Accessibility

A major design priority highlighted in the demo was accessibility. Zodiepupper emphasized the need for XR interfaces that require only one hand, referencing friends and clients who face physical disabilities.

The discussion broadened to a current oversight in XR development: “The fact that someone who’s blind still has to strap two heavy displays onto their face just to be able to participate in VR… those are the people who should be able to be present.” (Lyuma)

Indiebio summarized the powerful universality of inclusive design:

“Honestly, designing for people who need accessibility considerations makes it better for all of us, because I struggle in VR and I would like to have things that are a little bit more relaxed. It would make it better for all of us.”
indiebio


Get Involved

The OMI group is a welcoming space for anyone interested in the technical, philosophical, and cultural components of building an open metaverse. Whether you are leading a million-euro project, maintaining core open standards, or just getting by on rice and veggie dip while pursuing a dream, your contributions and insights matter.

Next Steps & Opportunities:

  1. OSCC Proposals: Submissions are closing soon for the OpenSimulator Community Conference (OSCC) on December 6th & 7th. Contact the group if you wish to participate alongside OMI. We also organise events occasionally, and have informal demos each week.
  2. Contribute: Projects like Bark VR, the glTF extensions and the Godot project are actively seeking contributors and collaborators to accelerate development and testing.

Join the conversation and help us ensure the open metaverse is built on foundations of resilience, authenticity, and universal access.

Building the Metaverse with Legacy in mind
https://omigroup.org/building-the-metaverse-with-legacy-in-mind/
Thu, 06 Feb 2025

“Legacy is a foundational principle to the creation of the infrastructure supporting a communications commons” - Astral_Druid

OMI, in collaboration with the Russian language groups RU, VRChatRU and the RockVR Club, held a virtual presentation, "VRConf", in VRChat on Tuesday 28 January 2025 at 17:00 UTC. We also bridged the event to our Discord voice channel to expand accessibility. The event was planned on GitHub, where the page remains open for comments and feedback. The event was also live-streamed and is available on YouTube.

We hope to improve collaborations across time zones and languages, and this was an example of a multi-lingual event, with its own set of challenges (some thoughts at the bottom).

Speakers

Julian Reyes (keyframe)

Julian Reyes is the Founder and Director of the Virtual Worlds Museum, whose mission is to explore, preserve, and share the evolution of immersive worlds. In recognition of his contributions to the XR industry and digital preservation, he was awarded the AUREA Impact Award 2025, alongside Lisa Egger of Arrival Space.

Dr Karol Suprynowicz (74hc595)

Known locally as 74, they are one of the five co-founders of Overte, as well as a 3D artist, programmer and open source enthusiast. With a PhD in biomechanics, 74’s areas of interest are avatars and collaborative creation in virtual worlds.

Astral_Druid

Astral is the Founder and Creative Director of OurSpace, a not-for-profit commonwealth Metaverse platform being built by V-Sekai, an open source project team contributing to Godot Engine. Astral is working on developing a new frame for building the Metaverse called the Transplanar Ecological Society Model.

Notes from conversation
After the speakers presented their projects, they were asked to share what in their view are three aspects important to thinking about legacy:

Julian:

  1. Anthropological: what is the community doing? Different worlds have different cultures, ways of doing things, reasons for doing it.
  2. Technology: What is the technology stack, what are the assets etc
  3. Creative aspect: What art is being created?
    Julian asks people to take care to document their projects with regard to these three areas, and more generally to contribute to the characterisation and documentation of the worlds or initiatives they care about or are building.
    Initiatives end, but the lessons, wisdom, assets and infrastructure are still valuable.

Relatedly, there has not been enough tracking of where contributors go when initiatives shut down. The migratory patterns of contributors are important for understanding how to help knowledge survive as it moves through the Metaverse landscape.

Dr Karol Suprynowicz (74hc595), or simply 74, talked about the ecosystem of building and maintaining an initiative. They too emphasised the importance of community, as well as the value of keeping knowledge from past initiatives, using the development of Overte from previous initiatives as an example, and the use of Godot, VRM and the community contributions to glTF as examples of wider community participation, working together in their own ways.

74’s emphasis on open culture is a key philosophy for maintaining legacy. They warned about code rot and the need to re-invigorate code bases, which requires community input.

Astral_Druid complemented the points already made by re-iterating that building for legacy is a foundational principle of the Metaverse commons, something to incorporate at the planning stages, rather than something to think about later.

Astral noted that we are in a post-human, post-digital landscape, where the human and digital components are deeply intertwined. In that respect the Metaverse, and building its legacy, should be considered as part of the physical fabric of our existence and the needs, challenges and opportunities that this presents.

Astral noted that play forms a core part of what makes us human, and so building for legacy needs to be done according to game design, following principles of play, for example our desire for acknowledgement, “likes”, and the social, trans-boundary aspects of forming families and trust networks by choice.

Continue the conversation:

Github comments: https://github.com/omigroup/omigroup/discussions/505
(feedback and suggestions for future events may also live here)

Join OMI: Discord server: OMI - Open Metaverse Interoperability Group
chat: #omi-general
audio bridge: Weekly Meeting
VRChatRU group on VRChat: DM @VAV1ST for details.

We also had an ice-breaker casual conversation that preceded the event, which had very interesting contributions:

It is the year 2337
You found a box full of media from 2025 that contains something that can make you rich and make a lot of people’s lives better.

How do you access the media?

The ice-breaker was discussed in the Discord #omi-general channel, and when it was asked in the VRConf room, the same conclusions were reached - which, considering how far out the conclusion is, was pretty humorous.

iFire started off by saying that a time capsule made from a Blu-ray writer with a USB 3.0 data connector and a USB 3.0 power plug, plus a bunch of Blu-ray discs, would be their choice. Aaron countered with the risk of optical media degrading over time. While M-DISCs are rated for perhaps 1,000 years, the group agreed that the trade-off between useful alternatives and long-lasting alternatives is a tough one. Maximus recommended the Rosetta Stone approach: include data in a variety of formats, along with different physical adaptors. This conversation then branched off into more technical considerations around which data formats to include, and how this could apply to 3D assets as well (read about it on the channel). The group converged on the view that the main challenge will be how to preserve data in a durable format - and this is where DNA makes an appearance, which is also where the VRConf chat ended up! Julian noted that if the data is encoded in DNA, that removes the need for the Rosetta Stone, as the data can be translated into any format.

However, there was also general agreement that relying on technology alone will not be sufficient. "The regular advice is an online storage system that is upgraded as long as people care" - as was the common thread in the event overall: people need to care.

Meta considerations: Multi-lingual, cross-cultural events

This event was the first collaboration between two groups who hardly even share a language. OMI has members across Europe and North America, so time zones are already an issue. The cultures are different: RU seemed to have a more formal, structured way of approaching the VRConfs, while OMI approaches them more informally, similar to the casual conversations that have happened on Spaces until now. This led to some misunderstandings that were only cleared up at the last minute, and predictably caused some stress.

The event was moderated by a native English speaker from OMI, and the conversation ran at length in English, which a large portion of the audience had to sit through without translation. While the choice between simultaneous and asynchronous translation is a constant problem for multilingual events, the larger point is to think about the event in an international sense. This includes being considerate of how long any person speaks in any language, as well as the topics that are covered. Each side of the organising team should provide for its own language community where needed, to improve the "quality of life" of the meeting participants. Rather than having in-depth conversations, shorter conversations or topic hooks that lead people to content outside the event may work better.

A participant commented that while the topic was interesting, what they liked most was learning about the interesting people presenting. This could be a very good angle for an international event: having speakers each from a different country, who may be well known in their own country but less so outside it. They can present in whatever language they are comfortable in, and this forces a constructive appreciation of the language barriers.

Ultimately, we hope that the VRConf venue can host different events from different groups passionate about the Metaverse, in different languages, at a reasonable frequency, fostering international community.

If this interests you, the planning for VRConf events happens in our #media channel, in the VRConf thread. You're welcome!

OpenBrush: where it's been, and where it's going
https://omigroup.org/openbrush-where-its-been-and-where-its-going/
Thu, 03 Oct 2024

On Tuesday 24 September we had a casual townhall chat on X/Spaces with @andybak and @mikeskydev from OpenBrush. It was a fantastic discussion that kept giving for 2 hours! Here are some AI-generated summaries, as well as the original recording.

A quick note on our use of AI: AI is a tool; the final accountability still rests with us. We hosted this conversation, we have listened to the summary, and we have read the highlights. We are confident in the raw data, and think the AI summaries are a decent snapshot, but not a 100% reflection. For a deep dive, the original recording is available, and please join us on Discord for a chat. Any issues, please get in touch!

A quick note on X/Spaces: We know it's gone bad. But it is still the best place we know of to have low-friction, larger reach conversations to draw more members. If you have better ideas, please join the conversation on Discord, #media channel.

14-minute summary in podcast format, generated by NotebookLM AI based on the original recording:

Summary of the Spaces conversation about OpenBrush, using NotebookLM fed with the transcript.

Listen (original recording, 2 hours): https://x.com/i/spaces/1mnxeAnNoQnxX
Thread: https://x.com/open_metaverse/status/1838360714970374483
Open Brush App: https://openbrush.app/
Join the conversation on Discord: https://discord.gg/Wqt4ZC4zjF

[05:45] History and current status of Open Brush, an open source fork of Tilt Brush originally created by Google
[10:35] Integrating Open Brush with Google Blocks, which was also recently open sourced
[12:40] Developing Open Brush for Quest 2, working through some rendering/shader bugs
[14:00] Open Brush real-time interactive scripting API that allows creating new tools and capabilities
[15:50] Import/export pipelines and interoperability, using glTF as a core format
[18:15] Concerns about USD as a 3D format compared to glTF
[21:40] Open Brush WebGL port that is in progress
[24:50] Multiplayer support being worked on for Open Brush
[31:20] Icosa 3 Gallery project to replace Google Poly and host Open Brush and 3D model content
[36:00] Using Open Brush as a "whiteboarding" and brainstorming tool, especially with multiplayer
[1:27:00] Experimenting with using AI/Stable Diffusion to "re-render" Open Brush scenes based on the geometry
[1:35:00] Potential for WebXR development using Open Brush as an authoring tool
[1:38:30] Integration between Open Brush and Blender via glTF and grease pencil
[1:44:00] Dream textures and point cloud rendering as an alternative to traditional 3D modeling
[2:06:00] glTF extensions work being done by the OMI group to expand capabilities
Please note that these timestamps are approximate and the topics might be discussed at multiple points during the conversation.

Summary
The Open Metaverse Interoperability (OMI) group recently hosted a Twitter Space with the developer of Open Brush, an open-source fork of the popular VR painting application Tilt Brush, originally created by Google. The discussion delved into the current state and future of Open Brush, the importance of interoperability and open standards in the 3D creation space, and the exciting possibilities that arise from integrating Open Brush with other tools and platforms.

During the Twitter Space, the Open Brush developer shared insights into the ongoing development efforts, including Quest 2 support and the implementation of multiplayer functionality. The conversation also touched on the significance of interoperability and the use of glTF as a core format for 3D assets. The OMI group expressed their concerns regarding the USD format and highlighted the advantages of glTF, as well as the ongoing work on glTF extensions to expand its capabilities.

The Twitter Space also explored the potential integrations between Open Brush and other tools, such as Blender, through the use of glTF and grease pencil. Additionally, the participants discussed the possibilities of using Open Brush for WebXR development and the Icosa 3 Gallery project, which aims to replace Google Poly as a platform for hosting and sharing Open Brush creations and other 3D models.

One of the most exciting topics covered in the discussion was the experimentation with combining Open Brush and AI tools like Stable Diffusion to re-render scenes based on the generated geometry. The developer also mentioned the potential of using Open Brush as a "whiteboarding" and brainstorming tool, especially in a multiplayer setting. Other innovative approaches, such as dream textures and point cloud rendering, were discussed as alternatives to traditional 3D modeling.

The Twitter Space with the Open Brush developer highlighted the importance of open-source tools and interoperability in shaping the future of 3D creation. The OMI group encourages readers to explore Open Brush, join the community, and contribute to the development of open standards. By fostering collaboration and innovation, we can unlock new possibilities and create a more accessible and interconnected 3D creation ecosystem.

Notes from glTF Interactivity Extension
https://omigroup.org/notes-from-gltf-interactivity-extension/
Wed, 26 Jun 2024

Khronos recently released a glTF interactivity specification requesting public comment. We had some concerns about the quality of the spec and the availability of sample files to test against, so we reached out to Ben Houston and glTF 3D on Twitter/X to obtain clarification and determine the best way to contribute going forward.

Core take-aways for us included:

  • the emphasis on keeping the spec simple, or "boring". Rather than trying to include every edge case, the use of events and translators adds functionality without complicating things.
  • The overall approach of the system having three components - events and variables, the behaviour graph, and the runtime object model - is very powerful; a paraphrased sketch of how these fit together follows below.
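
To make the three-part split tangible, here is an illustrative sketch only: the field names below are paraphrased from the public discussion, not the normative Khronos schema, so treat every key as an assumption. Events and variables feed a behaviour graph, whose nodes read and write the glTF through an object model of pointers (building on the Animation Pointer extension).

```typescript
// Illustrative shape of an interactivity graph (field names are invented
// paraphrases, NOT the normative KHR schema).
const interactivityGraphSketch = {
  variables: [{ name: "isOpen", type: "bool", value: false }],
  events: [{ name: "onDoorSelected" }],
  nodes: [
    // A trigger node fires when the custom event is received...
    { id: 0, type: "event/receive", event: "onDoorSelected" },
    // ...a logic node flips the variable...
    { id: 1, type: "variable/toggle", variable: "isOpen", after: 0 },
    // ...and an action node writes into the glTF via the object model,
    // addressing the target with a JSON-pointer-style path.
    { id: 2, type: "pointer/set", pointer: "/nodes/4/rotation", after: 1 },
  ],
};
console.log(JSON.stringify(interactivityGraphSketch, null, 2));
```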

From a community perspective, it was noted that the process of taking in ideas from different members, especially more independent developers, should be improved. There needs to be engagement in publicly accessible ways. We were very pleased with the engagement in the X Spaces event, and hope to have more in future.

There is a Discord specifically for Khronos glTF, but content from there does not often make it into the (closed-room) meetings. The participants agreed that more blogs about use cases would help with exposure, and would allow discovery by interested parties.

Relevant links

The detailed notes of the discussion can be found below.


Key points from the conversation on Thursday and Friday, 20 and 21 June 2024

We had a dry-run on the Thursday to figure out how Spaces work, but ended up having quite a productive conversation anyway. These notes combine the conversations from both Thursday and Friday:

  • The interactivity spec originated from Adobe's work on trigger-action lists for Adobe Aero and USDZ.
  • There was debate between trigger-action lists, behavior graphs, and WASM approaches, with behavior graphs ultimately chosen.
  • The spec development process involved studying existing systems like Unreal Blueprints and Unity Visual Scripting.
  • Security considerations were a major factor in the design, leading to a more constrained system than arbitrary JavaScript or WASM.
  • The goal was to create a "boring" standard that consolidates best practices rather than innovating.
  • The spec introduces concepts like the glTF object model, events, and variables that can be built upon by future extensions.
  • It's designed with a layered approach, allowing different levels of capability and security.
  • The spec builds on the Animation Pointer extension for referencing parts of the glTF.
  • Custom events allow glTFs to send and receive messages, potentially enabling communication between nested glTFs.
  • There are ongoing efforts to finalize related extensions like audio and physics.
  • The spec is designed to be flexible but with performance considerations in mind.
  • Implementations are expected to limit execution time to maintain performance.
  • There are concerns about the spec being incomplete, lacking examples, JSON schemas, and other expected components.
  • The current implementation is limited in terms of data types and capabilities compared to full programming languages.
  • There's discussion about potential future work, including adding more complex data types and operations.
  • Google is working on implementing the spec for use in Google Maps and other products.
  • The Godot engine team expressed interest in the spec but also raised concerns about other needed features, like consistent UUIDs for nodes across exports.
  • There was discussion about the challenge of maintaining unique identifiers for nodes when optimizing or merging assets.
  • The Blender team is exploring how to integrate the spec with their geometry nodes system.
  • There are some concerns about the expansion of glTF's scope and potential performance implications.
  • The importance of having multiple implementations before ratification was emphasized.
  • The community was encouraged to contribute to the implementation efforts, particularly in projects like three.js.
  • The spec is not intended to replace game engines but to enable interactivity for simpler use cases.
  • There's a need for more examples and supporting materials to help people understand and implement the spec.
  • The working group is considering how to best respond to community feedback and concerns about the ratification timeline.

Community Perspectives: What do game worlds and medicine have in common?
https://omigroup.org/what-do-game-worlds-and-medicine-have-in-common/
Wed, 08 May 2024

This post shares one member's perspective on how to address challenges in interoperability. If you have a story of your own to tell about your work within the open metaverse, we'd love to hear from you!

Using the same email account on different computers, on your desktop and on your phone; being able to make a call between different brands of phones, between different cellular operators, across countries … these are all examples of things that must operate across boundaries - in other words, that need to be interoperable. The Metaverse supercharges this requirement. As a wild example, imagine taking a character from your favourite movie and porting them into your favourite game. Imagine seeing a dress your favourite actor is wearing and porting it into your favourite dress-up game, or trying it on in the mirror through your phone. Imagine taking a couch from a shop website and moving it into place in your home with your phone or a virtual reality headset. What about chatting with your favourite character on that couch, wearing your beautiful dress, in your favourite game or virtual world, and then having friends from other virtual worlds join you there? And it all happens seamlessly.

This is a challenge for games, virtual worlds and the metaverse - hence the proliferation of metaverse standards groups - but it has been a challenge in many areas for a long time. OMI member Dr Kim Nevelsteen and Martin Wehlou M.D. faced this challenge in 2003 in the medical sector, when trying to achieve interoperability between various health care applications, e.g., the hospital patient journal system and the software of specialist doctors.

Initiatives typically try to address the need for interoperability by attempting to force everyone to use the same thing: a common set of standards, either existing ones or newly created ones. This doesn't work very well, because different implementations have different needs, people have egos, and people work in isolation and develop differently. Changing to a common standard then carries a huge cost - time, effort, resources, system downtime - and whose common standard gets chosen anyway? In addition, every time the protocols are updated, this re-engineering cost is repeated.

Interoperability should not rely on standards

"Do not avoid standards. Avoid the need for standards. As soon as I see a project that says, we first have to decide on a standard … you're doomed"

-- Martin Wehlou, co-author of IPSME

Rather than aiming to create a set of standards for the medical industry, they considered a different approach, which Dr. Nevelsteen has now applied to the challenge of interoperability in the metaverse. An alternative order of events could be:

  1. Decide what functionality the systems will share (e.g., for virtual worlds: authorisation, the ability to teleport, the avatar, certain assets);
  2. Have all participating systems create an API in their native language/system to allow for that functionality (this is the lowest-cost option, in contrast to implementing a standard protocol);
  3. Determine where the inefficiencies are, and only then:
  4. Decide on standards to take care of those inefficiencies.

This is the approach that Dr Kim Nevelsteen's IPSME specification follows. IPSME, which stands for Idempotent Publish/Subscribe Messaging Environment, specifies that integrations are external to the systems being integrated, which saves the cost of re-engineering and downtime when a protocol is updated. In other words, if integrations are external, each system must have an API in its native language/system, with external translations that are updated when protocols change.

For example, Second Life has an API for login and teleporting in, but it does not have an API for importing an avatar or an asset; that must be done through the UI. The least cost for them would be to build an API (in whatever language) to the existing code base they have for importing an avatar/asset. It would cost a lot more if there were a demand to conform to a particular protocol (e.g., a requirement to support glTF or VRM); they would have to build in that integration instead of just exposing an interface to the code they already have.

IPSME is the basis for a concept called the Industry of Integrations (IOI), where, when any two system interfaces (APIs) are known and accessible (via the conventions of IPSME), a translation can be created integrating the two APIs; a hypothetical sketch of such a translation follows below. If desired, that translation can then be monetized. The principle of IOI, underpinned by IPSME, is that creating integrations for the Metaverse could be a grassroots community endeavour, open to everyone in keeping with OMI principles, rather than being dictated by the large corporations.
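
As a toy illustration of what "a translation integrating two APIs" could look like (the types and field names here are invented for the example, not real Second Life or VRM schemas): two systems each expose only their own native avatar shape, and a third party ships the mapping between them. Neither system is re-engineered; when a format changes, only the translation is updated.

```typescript
// Hypothetical native avatar shapes for two worlds (invented for illustration).
interface WorldAAvatar { displayName: string; meshUrl: string }
interface WorldBAvatar { name: string; model: { uri: string } }

// The translation lives outside both systems -- and, in the IOI vision,
// could itself be a product someone maintains and monetizes.
function worldAToWorldB(avatar: WorldAAvatar): WorldBAvatar {
  return { name: avatar.displayName, model: { uri: avatar.meshUrl } };
}

const imported = worldAToWorldB({
  displayName: "indiebio",
  meshUrl: "https://example.org/avatar.glb",
});
console.log(imported); // ready to hand to World B's native import API
```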

IPSME has been a volunteer-driven project since 2018, with its first scientific publication in July 2021. For IPSME and the IOI to thrive and grow, they need to find a host organisation and funding. If this interests you, please get in touch with Dr Nevelsteen at [email protected].

Further reading

IPSME was published to the scientific community in July 2021 and can be found here: https://dl.acm.org/doi/10.1145/3458307.3460966

A website with the conventions has been created and can be found here: https://ipsme.dev

Videos introducing IPSME: https://ipsme.dev/ipsme/0.1/introductions.html
Initial SDKs implementing IPSME on the various platforms can be found here: https://ipsme.dev/ipsme/0.1/repos.html

A YouTube playlist of all IPSME integrations can be found here.

The future of text in webXR
https://omigroup.org/the-future-of-text-in-webxr/
Wed, 17 Apr 2024

post written by jimmy6dof

Imagine stepping into a world where reading is no longer a linear journey, confined to the boundaries of static pages or even screens. This is the exciting future that "The Future of Text" foundation recently discussed at the WebXR monthly meetup.

The shift from 2D text interfaces to 3D environments marks a significant moment in the evolution of spatial computing. While traditional 2D platforms have served us well, they limit our cognitive potential, merely scratching the surface of our brain's spatial capabilities. Here is where VR steps into the spotlight, providing a conduit for exploring the world of text in 3D interfaces.

As Frode says: "You know, there's a lot at stake here ... in the near future, let's say in 10 years, everybody will have a headset in their bag. But whether it's going to be useful to work with like a thinking cap, which is how we think of it now, or if it's going to be for watching movies, playing games, and so on, that's entirely up to people like us."

Introduction

The Future of Text group is kicking off an ambitious new project developing answers to questions like "How will the advent of dynamic embodied 3D spatial computing affect our ability to interact with text and documents in general?" The goal is to develop and experiment with how to produce, format, and consume text in environments like VR and AR (collectively known as "extended realities", or XR) using the open web standard WebXR. Dene Grigar and Frode Hegland are co-PIs of the initiative, made possible by the Alfred P. Sloan Foundation. One key motivation is to deliver immersive spaces without relying on proprietary software on the user side. By embracing open standards, the team aims to foster widespread innovation and collaboration in the field of digital text and spatial computing.
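
Since the project leans on WebXR rather than proprietary runtimes, the entry point for any such experience is the browser's WebXR API. The sketch below shows the standard session handshake (browser-side TypeScript; it assumes WebXR type definitions such as @types/webxr are available, and the feature names in optionalFeatures are ordinary WebXR features, not project-specific ones).

```typescript
// Minimal WebXR entry point: feature-detect, then request an immersive session.
async function enterReadingSpace(): Promise<void> {
  if (!navigator.xr || !(await navigator.xr.isSessionSupported("immersive-vr"))) {
    console.warn("WebXR immersive-vr not available; fall back to 2D reading.");
    return;
  }
  const session = await navigator.xr.requestSession("immersive-vr", {
    // Hand tracking matters for text work that should not require controllers.
    optionalFeatures: ["local-floor", "hand-tracking"],
  });
  session.addEventListener("end", () => console.log("Left the reading space."));
  // From here a renderer (e.g. Three.js) would lay out text panels in 3D.
}
```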

The project involves three main approaches:

  1. Developing software: Creating innovative applications and tools that push the boundaries of how text can be manipulated, navigated, and experienced in XR environments.
  2. Metadata as connective tissue: Exploring how metadata can act as a bridge between different systems, enabling seamless integration and interoperability of text-based experiences across various platforms. See the section on Visual Meta
  3. Dialogue and community building: Facilitating discussions, organizing symposiums, and publishing books to bring together researchers, developers, and thought leaders in this emerging field.

How to Get Involved

Step 1: Explore the Prototypes

To get a hands-on understanding of the project's goals, you can explore the prototypes that Dene and Frode have developed so far:

  • Try out the word processor and PDF viewer built natively for the Apple Vision Pro and Quest 3 headsets.
  • Experience the Future Text Lab, a cutting-edge research environment focused on academic reading and authoring in XR.

Step 2: Attend the Annual Symposium

Dene and Frode invite you to attend the annual Future of Text symposium, where researchers, developers, and industry experts gather to discuss the latest advancements and challenges in the field of digital text and spatial computing.

One of the primary challenges is the need to rethink the way we interact with and navigate text in a 3D space. In a 2D environment, we are familiar with scrolling, clicking on hyperlinks, and using menus and toolbars. However, in an XR environment, these traditional interactions may not be as intuitive or effective.

By embracing the potential of XR technologies, the Future of Text project aims to redefine the way we think about and interact with text, paving the way for a future where immersive environments become an integral part of academic work and communication.

OMI elects new chairs
https://omigroup.org/omi-elects-new-chairs/
Tue, 02 Apr 2024

The Open Metaverse Interoperability (OMI) group recently elected new chairs. Jesse Alton, aka mrmetaverse, one of the founders of OMI was re-elected, and Bernelle Verster, aka indiebio, was elected as a new chair.

OMI was started in 2021 as a grassroots group interested in all the components that make the metaverse a reality. The group is home to a diverse community of people passionate about interoperability, accessibility, systems-of-systems integration, useful AI agents, music in the metaverse, fashion in the metaverse, virtual worlds and curating their history in the virtual worlds museum, avatar builders, and extending game engine usability for metaverse applications, for example through the glTF spec extensions and V-Sekai, and more (here is an example of our members in the GitHub notes of a community meeting).

Bernelle is a researcher and joins with a focus on physical world interactivity with game worlds, including digital twins, geospatial interoperability, and particularly incorporating urban resource flows into the metaverse and exploring digital community governance. Being a member of OMI has meant meeting people with complementary skill sets and experience, and Bernelle is excited to grow the group and extend these benefits further.

Jesse is passionate about open interoperability and extended reality. His interest is in helping founders incorporate open protocols for interoperability into their products and business models. He is the co-founder of MagickML, an agent creation tool. In parallel, he runs AngellXR to support and grow the wider community to make the open metaverse happen.

OMI actively collaborates with other groups relevant to the area. OMI is part of the World Wide Web Consortium (W3C), where we are called the Metaverse Interoperability Community Group. OMI also actively collaborates with the Metaverse Makers (M3), the Open Metaverse Foundation (OMF), OMA (OMA3) and other groups, and has a voting role in the Metaverse Standards Forum (MSF). OMI membership of the MSF in particular offers a way for individuals to have a say in a corporate membership institution.

OMI welcomes everyone. Members of OMI are encouraged to bring their own projects and passions, and any level of experience or interest is welcome. Get in touch via Discord or join our W3C mailing list. Our recent meetings are recorded and archived on GitHub.

OMI is growing up!
https://omigroup.org/omi-is-growing-up/
Tue, 19 Mar 2024

OMI has a new Discord server

The time has come for OMI to leave the incubator AngellXR. Since its formation in 2021, OMI has been hosted by AngellXR, which has provided server space, infrastructure and financial resources to grow OMI. We will always be most grateful for that, and hope to maintain the friendship forever. At two years old now, and preparing to proactively grow the membership, it's time for OMI to spread its wings.

This also means OMI is moving to its own Discord server, and while this is understandably disruptive, it is also a good time to review our communication, clean up and update our websites, coordinate groups better, and revisit what we stand for and what we do. Join the new server, called "OMI: Open Metaverse Interoperability Group", here: https://discord.gg/2QXdAhkFCn

What does OMI stand for?

Apart from the Open Metaverse Interoperability Group, which is literally what OMI stands for, OMI also stands for grassroots participation. We are a loose collective of people with a shared interest in the Metaverse, and shared goals of creating assets, standards, protocols, knowledge and energy for the Metaverse. We represent a social connection between different metaverse groups.

What does OMI do?

In a way, whatever you want. OMI members are a global group of developers, designers, researchers, artists, and media makers working on standards, protocols, and open-source software for interoperability between 3D worlds, as well as those interested in governance and organizing communities in the open Metaverse. While there are a few more formal working groups, OMI is designed for participation that includes everyday people - or "normies", university groups, students, hobbyists, anyone. While we do prefer open source, we are more concerned about building an open ecosystem.

OMI hosts weekly community meetings in our Discord voice channel, development working groups, member-led metaverse/virtual world tours, and show-and-tell gatherings where members can share what they are currently working on. Being part of OMI also offers opportunities to participate in Metaverse standards groups and exhibit at professional events. If you wish to create a focus group, start in the #omi-experiments channel on our Discord and get in touch at our community meetings (calendar).

An example of an active working group is the glTF group, which develops standards and protocols for interoperability between open 3D worlds. This includes interoperability between games, game platforms, game engines, and non-game 3D content. This kind of scope is what the M in OMI stands for: Metaverse, meaning a universe of interoperable 3D content with no central point of control. This is in the same vein as the standards for HTML/CSS/JS that allow the world wide web to provide a universe of interoperable 2D content compatible with many websites, web browsers, operating systems, etc.

OK but what is the Metaverse, anyway?

You know what, whatever. While we feel rather strongly that one virtual world is not the Metaverse (not even if it's Facebook, er, Meta), and not even a collection of virtual worlds, we're OK with however you describe the Metaverse. We're about building whatever bits of this Metaverse thing interest you, together. We have some members passionate about virtual worlds, some about integrating data, some about open protocols, some about the social aspects, some about integrating physical world assets ... whatever floats your boat.

Get in touch!

Join our new Discord server: https://discord.gg/2QXdAhkFCn
We are also reviving our W3C mailing list: https://lists.w3.org/Archives/Public/public-metaverse-interop/
If you have a strong preference for a different way to communicate, e.g. Matrix, please let us know.

In upcoming blogposts, we will share the thinking behind our logo, introduce our new chairs, and share some ideas about the type of engagement we are pursuing and how we plan to implement it.
