DNARoma's DevOps Blog (https://blog.dna.moe/)

Rust Programming Language Learning Note - Part 1
https://blog.dna.moe/2022/07/19/learning-note-rust-1/
Mon, 18 Jul 2022 22:07:01 GMT

Rust is a modern compiled programming language. It is fast and secure, it is strongly typed, and it can be used to build any type of program. Thanks to its tiny binary output, it is well suited for building tiny Docker images to run in the cloud.

I want to learn it because I want to replace some JS programs running on my private cloud with Rust programs. The JS runtime is heavy and neither type-safe nor memory-safe, and the Docker images are heavy too (up to 500 MB with dependencies). The Rust programming language seems to solve all of these pains, so let me try it.

The learning material I used is the Rust Course, which is written in Chinese. For English readers, I recommend the official book, The Book. All code runs on a 2020 M1 Mac Mini (16+512) under macOS Monterey (12.4).

Setup Rust

On macOS, setting up the Rust environment is very simple with Homebrew. Just install rustup with

$ brew install rustup-init
$ rustup-init

and follow the prompts to finish the installation. This installs the whole Rust toolchain, including the compiler rustc and the package and project manager cargo. A C compiler may also be necessary; install the Xcode command line tools with

$ xcode-select --install

As always, Hello World

It is extremely simple to create a "Hello World" program in Rust:

$ cargo new world_hello

and you will have a folder named world_hello containing an already well-written "Hello World" Rust program. Just run it with

$ cargo run

and you will see the following output:

   Compiling world_hello v0.1.0 (/***/world_hello)
    Finished dev [unoptimized + debuginfo] target(s) in 0.42s
     Running `target/debug/world_hello`
Hello, world!

And boom, a Rust "Hello World" is done, fast and simple.
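For the curious, the src/main.rs that cargo new generates contains nothing but the classic entry point:

```rust
// src/main.rs, as generated by `cargo new world_hello`
fn main() {
    println!("Hello, world!");
}
```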

But you may notice debug in the path, which indicates that the compiled target is a debug build. To obtain a release build, just append --release to the cargo run command:

$ cargo run --release
   Compiling world_hello v0.1.0 (/***/world_hello)
    Finished release [optimized] target(s) in 0.42s
     Running `target/release/world_hello`
Hello, world!

Another great thing is that cargo new also initializes the folder as a git repository and adds a default .gitignore for you, which is very convenient.

All the metadata about this "Hello World" program is recorded in the Cargo.toml file:

[package]
name = "world_hello"
version = "0.1.0"
edition = "2021"

The edition field indicates that this program will be compiled against the 2021 edition. Rust releases a new edition every three years; the previous editions are 2015 and 2018, so 2021 is the latest.

Summary

What I learned from this part:

  1. How to set up the Rust development environment
  2. How to create a new Rust project
  3. How to run a Rust project
  4. How to find basic Rust project metadata

Next, I will dig deeper into the syntax and features of the Rust programming language.

Result of Sekai Viewer's first anniversary survey
https://blog.dna.moe/2021/10/16/sekai-viewer-first-anniversary-survey-results/
Sat, 16 Oct 2021 19:20:32 GMT

Before the first anniversary of Sekai Viewer, a survey was run, collecting replies until 2021/10/07 11:58 PM (JST); 101 out of the 104 collected replies are valid. Now I would like to share what the replies look like.

Most users seem to have played Project Sekai for a long time: 87.1% of all users have played for more than 6 months.

How long have you played Project Sekai?

And about 70% of them have also used Sekai Viewer for more than 6 months. We also have 3 new users who discovered Sekai Viewer within the last four weeks.

How long have you used Sekai Viewer?

The fans of "25ji, Night-cord de" and "Wonderlands X Showtime" stand as equals: 26 players voted for WxS while 25 voted for 25ji. "Virtual Singer" is the least favored unit, with only 9 fans here.

Your favorite unit

The favorite character is the "mystery" member of 25ji, Akiyama Mizuki. Another 25ji member, Shinonome Ena, ties for second place with the titular virtual singer Hatsune Miku. The two male characters from WxS, Tenma Tsukasa and Kamishiro Rui, almost share third place.

Your favorite character(s)

The answers to the question "Your favorite song(s)" cover almost every song in Project Sekai. The most frequent answer is ID SMILE; other popular songs are KING and Kanade Tomosu Sora.

Project Sekai players also love to challenge difficult charts: 6 of them can survive charts of difficulty 33. Only four songs have a Master chart at that difficulty: The Disappearance of Hatsune Miku, The Intense Voice of Hatsune Miku, Roku-chou Nen to Ichiya Monogatari, and Machine gun poem doll. Another 42 players can survive charts with difficulty over 30, which is no easy job. Most players can handle "Expert" charts.

The chart difficulty you can survive

Players also compete to reach high rankings in the events. 12 of them reached the top 1000; I must give them my respect, as they must have played very intensively during the event. About half of the participants got rankings within 10k; nine players never ranked within 100k, which means they missed the title reward.

The highest rank you ever got in an event

The most used tool on Sekai Viewer is, without doubt, the Story Reader. It lets you read various stories, talks, etc. from Project Sekai. Translation of the story lines is one of the most wanted features, and I will build it. Another popular tool is the Event Tracker: users love to watch the scores before an event ends to secure their rankings.

Which tools do you use most?

The least used tools are the Song Recommender and the Event Planner: users are confused by them, and the tools are also not very user-friendly.

Which tools do you use least?

The other answers are very detailed and contain some sensitive data, so I would rather not show them.

Last but not least, 60 participants entered the giveaway campaign. After filtering out invalid participants whose Twitter accounts do not show up in @SekaiViewer's follower list, 52 participants are eligible for the gift distribution. There are 2 Colorful Passes and 1 Premium Mission Pass to distribute. The winners have been drawn with Random Name Picker; @SekaiViewer will contact the winners via DM. After the gift distribution is complete, I will make the winners' Twitter names public.

Sekai Viewer's first anniversary
https://blog.dna.moe/2021/10/01/sekai-viewer-first-anniversary/
Fri, 01 Oct 2021 19:02:12 GMT

Hello everyone, DNARoma here. On 9/30, just a few days ago, Project Sekai feat. Hatsune Miku celebrated its first anniversary. On 10/08, just a few days from now, the database and tools site Sekai Viewer, created by me together with countless contributors, will celebrate its first anniversary too.

How Sekai Viewer started

First, I must give special thanks to Burrito, whose Bestdori encouraged me to create Sekai Viewer, and to RayFirefist, who created SekaiDB before me and decided to cooperate with me.

I noticed the mobile rhythm game about Hatsune Miku very early, because I'm a big fan of her. It was planned for release in 2020/04, but due to the Covid-19 pandemic the release was postponed to 2020/09/30. Since it is very similar to Bandori, I spent some time analyzing the master data and file structure, and wrote a first version of Sekai Viewer. At that time there were only a few information pages, like Cards, Songs, Gachas and Events. The first tool available was the Event Tracker, since the first event was about to start.

The first commit of Sekai Viewer was pushed on 2020/10/07. The next day, 2020/10/08, the website was published through GitHub Pages. At that time the domain was https://sekai-world.github.io. After a discussion and a short vote in the Discord server, the current domain https://sekai.best was decided. I had still wanted sek.ai, but it cost too much.

As the slogan I used in the first tweet of @SekaiViewer says, Sekai Viewer is a website made by fans (me and many contributors) for fans (users like you, reading this).

First anniversary and the survey

This year, the Covid-19 pandemic did not stop as expected; the variants made the situation even worse. It has not been an easy year for any of us. But still, things are getting better, and time flies. It feels so sudden that Project Sekai and Sekai Viewer have reached their first anniversary. I still feel like the game and my website launched just yesterday.

This year, I was not alone. I met many people by creating this website and by playing this game. Although we cannot travel around the world as usual, we are connected in this "Sekai". Here at the first anniversary, I want to hear more from you, so I prepared a survey to collect user feedback and run a giveaway campaign.

You can fill in the survey at this link: https://forms.gle/3vg7E1XEbk85p7oF8. To ensure the quality of the survey, there are two simple verification questions in the first step. Please feel free to answer the questions; I need your real feedback to help me improve Sekai Viewer. The survey runs from 2021/10/02 00:00 (JST) to 2021/10/07 23:59 (JST).

At the end of the survey, you can choose whether to enter the giveaway campaign. If you decide to enter, you must follow @SekaiViewer and fill in your account in order to have a chance to receive a gift. The giveaway gifts will include several Colorful Passes (amount depending on the number of attendees) and possibly Premium Mission Passes (not guaranteed, amount also depending on the number of attendees). I will announce them after the surveys are collected.

The future about Sekai Viewer

The update frequency of Sekai Viewer has slowed down because I have had little time to work on it. After this period, I should have more time for the project and will keep it up to date.

This is why I am holding this survey at the first anniversary. Your suggestions, ideas and support encourage me a lot to make Sekai Viewer better. The first anniversary is a milestone, but there is still a long way to go.

The end

Last but not least, I want to say THANK YOU to all of you, for your kind help, contributions and support!

Sekai World releases its Live2D plugin
https://blog.dna.moe/2021/01/11/sekai-world-live2d-plugin/
Mon, 11 Jan 2021 21:52:43 GMT

TL;DR

Recently, Sekai Viewer released the Live2D Viewer function. I forked a plugin package called find-live2d3, made some modifications, and renamed it to @sekai-world/find-live2d-v3. Releases of my package can be found on npmjs and GitHub.

What is Live2D?

Live2D is a software technology that allows you to create dynamic expressions that breathe life into an original 2D illustration.
Live2D is a technique of generating animated 2D graphics, usually anime-style characters, using layered, continuous parts based on a single illustration, without the need of animating frame-by-frame or creating a 3D model.
What is Live2D? | Live2D Cubism

Live2D in Project Sekai

Project Sekai uses Cubism 3.3 to make its Live2D models. As Cubism promises, all Cubism Live2D SDKs from version 3 on are backward compatible. Although the only downloadable SDK is version 4.0, it works well with models from 3.3.

But unlike other Unity games that also use Live2D, Colorful Palette made some modifications. To extract the Live2D models from the Project Sekai asset bundles, I first tried Perfare's UnityLive2DExtractor; unfortunately it doesn't work. Then I realized a painful fact: I had to extract the motions and expressions from the bundle files by myself.

Then I dived into the code of UnityLive2DExtractor to figure out how it works; I trusted that code because it works with many other games. After working out how to reconstruct the motion JSON files, I wrote some Python code, and amazingly it worked! But one big problem was still blocking my way: the Parameter field is a number, not a string, and I didn't know what it meant. In other games, a Live2D motion bundle has a Unity GameObject component in it; it doesn't appear in the Project Sekai files. Colorful Palette either hid the GameObject somewhere in other asset bundles, or used another way to determine the Parameter field.

After a lot of research (which cost me about two days), I found some clues in the Cubism Live2D Unity examples. From Perfare's code I knew that the Parameter number is a CRC32 computed from certain strings (usually from the GameObject), but I wasn't sure how the strings were composed. I did some tests and found that I had just missed a prefix of the strings. However, I still didn't know which parameters were available for those motion files.
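The decoding idea can be sketched in Python. Note that the "Parameters/" prefix below is a made-up placeholder standing in for whatever prefix the tests eventually revealed; the real value is not given in this post:

```python
import zlib
from typing import List, Optional

def parameter_hash(name: str, prefix: str = "Parameters/") -> int:
    """CRC32 of the prefixed parameter name.

    The "Parameters/" prefix is a hypothetical placeholder; the real prefix
    had to be discovered by trial and error, as described above.
    """
    return zlib.crc32((prefix + name).encode("utf-8")) & 0xFFFFFFFF

def resolve(param_number: int, known_names: List[str]) -> Optional[str]:
    """Map a numeric Parameter field back to a name by brute-forcing CRC32s."""
    for name in known_names:
        if parameter_hash(name) == param_number:
            return name
    return None
```

With a list of candidate parameter names in hand, each numeric Parameter field can be matched by hashing every candidate and comparing.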

I took a look at the Cubism Live2D Web SDK; it contains a core module that can decode moc3 files and read out the parameter list. This module is compiled from C/C++ code through Emscripten into JS. I wasn't about to decode machine code, so as a last hope I opened a moc3 file in a hex editor. I felt lucky, because the structure of parameters in a moc3 file is pretty simple. The start of the file contains chunks like an offset table, giving the offset of each part. The position of the parameter-table entry within the offset table is constant, so I can always read the same position to get the offset of the parameter table. The parameter table consists of many strings giving the names of the parameters. Tada, I was on the right track.

I added some simple code to my Python module, and most of the parameters were recovered. After some smoke tests of my extracted motion JSONs I could finally move on.

Implement Live2D in React

I have to say, Cubism's documentation for the WebGL SDK is really poor. Luckily they wrote a good, production-ready working example of the SDK, but it still didn't meet my needs. After searching on npmjs, I found a package called find-live2d3. It's really amazing: it transformed the official example into a plugin package that can be used out of the box. I used this package as a starting point.

This package did not provide a GitHub repo address, so I installed it locally and copied everything from the node_modules folder. I had to modify the package because:

  • The original package did not allow me to specify a canvas container; it creates a full-screen container by default.
  • The model JSONs of Project Sekai do not contain motions and expressions; I have to load them another way.
  • It uses an outdated core and framework, which breaks the display of Project Sekai models; I have to update and patch them.

Therefore I added an el argument to the initialize function, allowing me to pass my own canvas container element. I also added a loadMotions function to the model instance, which allows me to load motions after loading the model.

To help others who may also need a Live2D plugin package, I made it open source. The documentation is not done yet, so I suggest you read the code.

Conclusion

As with my other datamining work, I learned some useless knowledge again. I figured out how to read a moc3 file, how to reconstruct motion JSON from the AnimationClip of a Unity Live2D bundle, and how to decode the Parameter field of a Unity Live2D bundle. Furthermore, during the implementation of the Live2D Viewer, I learned how to resize a canvas on window resize and how to save an image from a WebGL context. I also tested showing two models at once, which will help the development of the Live2D Story Reader.

Analysis of the Project Sekai trial version
https://blog.dna.moe/2020/09/08/analyse-project-sekai-trial-version-md/
Tue, 08 Sep 2020 21:05:08 GMT

Sega's new Vocaloid-related game Project Sekai finally has a trial version, which also shows off the skill of Sega's chart designers (the Master difficulty of Hibana is terrifying). The rhythm-game part closely resembles Sega's own arcade rhythm game Chunithm: a note can span multiple lanes, and tapping any lane it covers scores it. Since both this game and Bandori are developed by Craft Egg, it was predictable that this would be a reskinned Bandori, and the analysis confirmed it.

About the game

Project Sekai: Colorful Stage! feat. Hatsune Miku is set mainly in a real-world area near Shibuya, Japan. Hatsune Miku and the others here are just as we know them in the real world: virtual singers performing many songs written by creators. The other stage, the "SEKAI", is a place born from feelings, where many songs are also created. One of the protagonists, Hoshino Ichika, comes to the "SEKAI" by chance and meets Miku there.

Analyzing the client

The analysis itself was nothing special: the old il2cppdumper + IDA + Fiddler routine. Especially after looking at the extracted dump.cs, a familiar feeling washed over me. It still uses AES encryption, and the way to obtain the key is almost identical to Bandori's. The places where the API address and the download server address are obtained are also almost exactly the same as in Bandori. The difference is that it uses MessagePack as the inner message encoding, while Bandori uses Protobuf.

In theory Protobuf is more space-efficient than MessagePack, because the former does not carry structural information in the message (nor does it need to, since the structure is essentially a contract between server and client), while the latter is just JSON in a different encoding. Bandori's Protobuf-encoded MasterDB message is already close to 10 MB, which recently forced Craft Egg to add a layer of compression to reduce the message size. It is foreseeable that Project Sekai will also have to compress its MasterDB, otherwise its size will grow much faster than Bandori's.

Asset files

The download URLs for this game's assets are constructed much like Bandori's, except that the URL contains a GUID component, obviously returned by the server, since it is not hard-coded in the client. A later look at the API revealed that requesting the API server's root path with no parameters returns the game configuration currently supported by the server, which includes the hash for this GUID.

After downloading two files, I found they could not be read directly in AssetStudio. Opening them in a hex editor showed that the header was not a normal AssetBundle header: the letters FS were visible in plaintext, and the first four bytes did not seem to be part of an AssetBundle at all. After talking with 双草爷爷, I learned the files are XOR-obfuscated. Since FS is preserved at the start, one can deduce that only five bytes are XORed, then a few bytes are skipped, repeating. After dropping the first four bytes and XOR-restoring the header, the files finally parsed. So I wrote a script to download all the assets in the trial version and unpacked them with my own Unity unpacking script. The extracted data turned out to be more than what the game actually shows; it seems Craft Egg was sloppy, as some AssetBundles contain things that should not be there. Their file management is messy enough.

I expected to spend some time finding the audio decryption key, but Bandori's key decrypted the audio into playable files directly, saving that step. However, the way the game sets the key has changed; I tried tracing it but found no way to read the key from memory. My current laptop is also quite weak, and analyzing libil2cpp.so would probably take a whole day, so I gave up.

Opening the chart files, I nearly spat blood. They are still in BMS format, but the header contains this line:

This file was generated by Ched 2.6.2.0.

Searching for that line led me to an open-source tool called Ched, whose purpose is... making fan charts for Chunithm. Building a rhythm-game mode similar to Sega's own arcade game, for Sega, yet not getting Sega's charting tools and having to use an open-source one instead: there is some dark humor in that.

Summary

Calling this game a reskin is not entirely accurate (compared to a certain Music Time): it is rather a new game rebuilt on their own framework, with the rhythm-game part and more rewritten from scratch, so plenty of work went into it. Even better is the Virtual Live feature, which enables genuine online cheering; AR/VR support would make it even better. Still, I will probably stick to the Virtual Singer mode; whether I touch the band side is up to fate.

First Kubernetes deployment with microk8s and cert-manager
https://blog.dna.moe/2020/07/31/study-k8s-with-microk8s/
Fri, 31 Jul 2020 20:10:57 GMT

Docker is an amazing container platform: it enables fast deployment and scaling of services across multiple servers (called a cluster). But Docker is not good at managing instances on different servers, so DevOps needs new software. Thanks to Google, Kubernetes (K8s) is well suited for

automating deployment, scaling, and management of containerized applications.

In this post I will share my first impressions of Kubernetes, show how I set up a deployment of MySQL + PhpMyAdmin + Nginx with an SSL certificate assigned automatically by cert-manager, and walk through my troubleshooting steps.

Kubernetes and MicroK8s

The official Kubernetes distribution is mainly for cloud platforms that need to manage many clusters, and by default the control-plane node (also called the master node) is not recommended, and not allowed, to run workload containers. For a bare-metal server (like a VPS without upstream K8s support) it is too heavy. A lightweight K8s distribution like MicroK8s is the best choice.

MicroK8s is developed by Canonical, the company behind Ubuntu, and is

The smallest, simplest, pure production K8s.
For clusters, laptops, IoT and Edge, on Intel and ARM.

Installation on an Ubuntu system is very simple once snapd is installed:

sudo snap install microk8s --classic

and it's done. You can watch the installation status with microk8s status --wait-ready if you want. For more detailed installation information, see the official docs.

For convenience, I recommend running the following commands:

alias kubectl='microk8s kubectl'
microk8s enable dns helm # helm3 is also available
alias helm='microk8s helm' # replace helm with helm3 if you enabled helm3 above

Secret? Volume? Service? Pod? Ingress?

Unlike with Docker, you'll face lots of new concepts just to start a single service:

  • ConfigMap for providing configuration
  • Secret for storing private or sensitive information
  • Volume for providing storage space
  • Deployment for deploying and scaling a service
  • Pod for running the service instance in a container
  • Ingress for exposing a service to the public network

These descriptions are based on my own understanding and may not be fully accurate. It is hard to grasp how they work at the beginning, but they really help separate configuration from instances, letting you generate different configs from a template for deployment on different nodes in a cluster. But talk is cheap; let me show you how I set up MySQL, PhpMyAdmin and Nginx.

Deploy first service

PersistentVolume and PersistentVolumeClaim

Let's set up a MySQL service as a first try. Since a container loses its data after shutdown, I need a PersistentVolume to persist the database. A PersistentVolume is like a disk for containers; every container can claim some space from it, so an additional PersistentVolumeClaim is needed.

All configuration files are written in YAML, and one YAML file can contain multiple resources. The following shows a PersistentVolume and a PersistentVolumeClaim for the MySQL database storage:
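A minimal pair of resources matching the names and sizes in the kubectl output below (mysql-pv-volume / mysql-pv-claim, 2Gi, ReadWriteOnce, storage class manual) might look like this sketch; the hostPath location is an assumption, not taken from the original post:

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: mysql-pv-volume
  labels:
    type: local
spec:
  storageClassName: manual
  capacity:
    storage: 2Gi          # matches CAPACITY in the output below
  accessModes:
    - ReadWriteOnce       # shown as RWO
  hostPath:
    path: /mnt/data       # assumed location on the node
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mysql-pv-claim
spec:
  storageClassName: manual
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 2Gi
```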

Assuming the above is written to mysql-pv.yaml, run the following command to create the actual resources:

kubectl apply -f mysql-pv.yaml

and check whether the resources were created successfully:

$ kubectl get pv
NAME              CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                    STORAGECLASS   REASON   AGE
mysql-pv-volume   2Gi        RWO            Retain           Bound    default/mysql-pv-claim   manual                  1m

$ kubectl get pvc
NAME             STATUS   VOLUME            CAPACITY   ACCESS MODES   STORAGECLASS   AGE
mysql-pv-claim   Bound    mysql-pv-volume   2Gi        RWO            manual         1m

Deployment and Service

The next step is to create a Deployment configuration and a Service configuration. The PersistentVolumeClaim created above will be mounted into the Deployment. Since a database is a stateful application, we don't need to deal with the scaling problem here.
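A Deployment and Service sketch consistent with the labels and output shown below (app=mysql, a ClusterIP service named mysql on port 3306, image mysql:5.6, and the root password taken from mysql-secret); the mount path and Recreate strategy are standard choices, not confirmed by the original post:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: mysql
  labels:
    app: mysql
spec:
  ports:
    - port: 3306
  selector:
    app: mysql
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mysql
  labels:
    app: mysql
spec:
  selector:
    matchLabels:
      app: mysql
  strategy:
    type: Recreate          # a single stateful replica; never run two at once
  template:
    metadata:
      labels:
        app: mysql
    spec:
      containers:
        - name: mysql
          image: mysql:5.6
          env:
            - name: MYSQL_ROOT_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: mysql-secret
                  key: root_password
          ports:
            - containerPort: 3306
          volumeMounts:
            - name: mysql-persistent-storage
              mountPath: /var/lib/mysql
      volumes:
        - name: mysql-persistent-storage
          persistentVolumeClaim:
            claimName: mysql-pv-claim
```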

Secret

To keep the root password safe, it is not written directly in the env section, but retrieved from mysql-secret, a Secret resource created from the following config:
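A matching Secret sketch; the password value here is a placeholder (the base64 encoding of a hypothetical 16-byte password, in line with the 16 bytes shown by kubectl describe later):

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: mysql-secret
type: Opaque
data:
  # base64 of a hypothetical password: `echo -n 'changeme12345678' | base64`
  root_password: Y2hhbmdlbWUxMjM0NTY3OA==
```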

Note that the value of root_password must be base64 encoded.

Deploy MySQL service

Assuming the two configs above are saved as mysql-deployment.yaml and mysql-secret.yaml, apply them with:

kubectl apply -f mysql-secret.yaml
kubectl apply -f mysql-deployment.yaml

Creating the Deployment resource also creates a Pod to run the instance. The Pod holds the real container, backed by containerd by default.

$ kubectl describe secret mysql-secret
Name:         mysql-secret
Namespace:    default
Labels:       <none>
Annotations:
Type:         Opaque

Data
====
root_password:  16 bytes

$ kubectl get deploy -l app=mysql
NAME    READY   UP-TO-DATE   AVAILABLE   AGE
mysql   1/1     1            1           3m

$ kubectl get pod -l app=mysql
NAME                    READY   STATUS    RESTARTS   AGE
mysql-75b7c7dcb-qmxqg   1/1     Running   1          3m

$ kubectl get svc -l app=mysql
NAME    TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)    AGE
mysql   ClusterIP   10.152.***.***   <none>        3306/TCP   3m

Now try connecting to the mysql instance to see if the deployment succeeds.

kubectl run -it --rm --image=mysql:5.6 --restart=Never mysql-client -- mysql -h mysql -p[your password]

This runs a temporary Pod with MySQL 5.6 and calls the mysql client to connect to the local mysql service. If you see an error message like Unknown MySQL server host 'mysql', your deployment is not correct or the dns addon is not enabled; follow the steps in the official guide to check the dns addon's working state. When everything works, you will see the following prompt:

If you don't see a command prompt, try pressing enter.

mysql>

You can execute some MySQL commands here to check that the MySQL server really works.

Deploy PhpMyAdmin

It's time to deploy more services. The next one is PhpMyAdmin, a popular database management web application written in PHP. I recommend using the default Docker image variant instead of the fpm variant; at least I didn't manage to get the fpm image working properly.

With the following config you can deploy a pma service in one file. Remember to set PMA_ABSOLUTE_URI to the real public URI you want to use in development or production.
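A single-file sketch of such a deployment, consistent with the names used below (label app=pma, a service pma-service on port 80); the phpmyadmin/phpmyadmin image and the PMA_HOST=mysql wiring are assumptions:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: pma
  labels:
    app: pma
spec:
  selector:
    matchLabels:
      app: pma
  template:
    metadata:
      labels:
        app: pma
    spec:
      containers:
        - name: pma
          image: phpmyadmin/phpmyadmin   # assumed default (non-fpm) image
          env:
            - name: PMA_HOST
              value: mysql               # the MySQL service deployed above
            - name: PMA_ABSOLUTE_URI
              value: https://[your hostname]/
          ports:
            - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: pma-service
spec:
  selector:
    app: pma
  ports:
    - port: 80
```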

Then run the following command to apply the configuration:

kubectl apply -f pma-deployment.yaml

and check the status of this deployment:

$ kubectl get deploy -l app=pma
NAME   READY   UP-TO-DATE   AVAILABLE   AGE
pma    1/1     1            1           1m

$ kubectl get svc pma-service
NAME          TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)   AGE
pma-service   ClusterIP   10.152.***.***   <none>        80/TCP    1m

Deploy Nginx with Nginx-Ingress and secure with cert-manager

So far the deployed services cannot be accessed from the external network, not even from localhost. To allow external access, an Ingress resource will be created. Furthermore, the web service will be secured with an SSL certificate.

Use Nginx-Ingress to deploy nginx service

Nginx-Ingress lets you deploy an nginx service with a few simple commands; it is based on Kubernetes Ingress and uses a ConfigMap to configure nginx automatically. All you need to do is install it and write an Ingress config file.

I recommend installing Nginx-Ingress with helm, enabled via microk8s enable helm (replace helm with helm3 if you want to use helm3). You will also need a tiller service account for helm:

$ kubectl create serviceaccount tiller --namespace=kube-system
serviceaccount "tiller" created

$ kubectl create clusterrolebinding tiller-admin --serviceaccount=kube-system:tiller --clusterrole=cluster-admin
clusterrolebinding.rbac.authorization.k8s.io "tiller-admin" created

$ helm init --service-account=tiller
$HELM_HOME has been configured at /Users/myaccount/.helm.

Tiller (the Helm server-side component) has been installed into your Kubernetes Cluster.

Please note: by default, Tiller is deployed with an insecure 'allow unauthenticated users' policy.
To prevent this, run `helm init` with the --tiller-tls-verify flag.
For more information on securing your installation see: https://docs.helm.sh/using_helm/#securing-your-helm-installation
Happy Helming!

Then run helm repo update to update the official repo. Assuming the release name is nginx, run helm install stable/nginx-ingress --name nginx to install the Nginx-Ingress controller. By default it requires a LoadBalancer to assign an external IP to the controller, but on a bare-metal server the provider will not give you upstream LoadBalancer support. If you really want a LoadBalancer, you can install MetalLB, which is still in beta and needs some spare IPs. I recommend NodePort mode instead for convenience.

Helm supports configuration overrides via values; the configurable values are listed on the chart's Helm Hub page. Values can be set on the command line, like --set controller.service.type=NodePort, or in a file:
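A values file for this setup might look like the following sketch; the keys follow the stable/nginx-ingress chart layout, and the node ports are the ones appearing in the output below:

```yaml
controller:
  service:
    type: NodePort
    externalIPs:
      - "[your server ip]"
    nodePorts:
      http: 30123
      https: 30456
```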

Remember to assign the external IP to your server IP. Check controller service status:

$ kubectl get svc -l app=nginx-ingress
NAME                                  TYPE        CLUSTER-IP       EXTERNAL-IP        PORT(S)                      AGE
nginx-nginx-ingress-controller        NodePort    10.152.***.***   [your server ip]   80:30123/TCP,443:30456/TCP   1m
nginx-nginx-ingress-default-backend   ClusterIP   10.152.***.***   <none>             80/TCP                       1m

You can access http://[your server ip]:30123 and get a default backend response with 404.

Expose PhpMyAdmin using Ingress

The Nginx-Ingress controller supports exposing a service with an Ingress configuration, where I just need to specify the desired host, path, and backend (reverse proxy target). A sample YAML is shown below:
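A reconstruction of pma-ingress.yaml consistent with the rest of the post (placeholders as used elsewhere); the line numbers referenced in the text refer to the author's original file and may not line up exactly with this sketch:

```yaml
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: pma
  annotations:
    kubernetes.io/ingress.class: nginx        # must match the Issuer's solver class
    cert-manager.io/issuer: letsencrypt-prod  # "line 6": comment out to disable TLS
spec:
  rules:
    - host: "[your hostname]"
      http:
        paths:
          - path: /
            backend:
              serviceName: pma-service
              servicePort: 80
  tls:                                        # "lines 18-21": comment out to disable TLS
    - hosts:
        - "[your hostname]"
      secretName: pma-tls
```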

Comment out line 6 and lines 18-21 to disable TLS for now. This config exposes the backend service pma-service at the given host, reverse-proxying any request sent to that host (domain) to port 80 of pma-service. Apply it with:

kubectl apply -f pma-ingress.yaml

and check ingress status:

$ kubectl get ingress
NAME   CLASS    HOSTS             ADDRESS            PORTS     AGE
pma    <none>   [your hostname]   [your server ip]   80, 443   24h

The address may stay pending for a while, because the Ingress sends its config to the Nginx-Ingress controller and waits for it to activate. Once your server IP shows up in the ADDRESS field, you can access the host you set in pma-ingress.yaml to test whether the ingress works. Remember to point the host to your server IP at your DNS provider.

Secure Web Application with SSL

Normally I use Let's Encrypt to secure connections, using the acme.sh script and importing the private key and certificate into the Nginx virtual host config. With Kubernetes, I can use cert-manager to automate this process.

First, run the following commands to install cert-manager with helm:

kubectl create namespace cert-manager
helm repo add jetstack https://charts.jetstack.io
helm repo update

# if you use helm3, delete the '--name' flag
helm install \
  --name cert-manager \
  --namespace cert-manager \
  --version v0.16.0 \
  jetstack/cert-manager \
  --set installCRDs=true

Verify the installation with:

$ kubectl get pods --namespace cert-manager
NAME                                       READY   STATUS    RESTARTS   AGE
cert-manager-c456f8b56-4wkq7               1/1     Running   0          1m
cert-manager-cainjector-6b4f5b9c99-tqp5c   1/1     Running   0          1m
cert-manager-webhook-5cfd5478b-kd69h       1/1     Running   0          1m

Then create two Issuers, a new resource type introduced by cert-manager: one for testing against the staging ACME server, and one for production against the real ACME server.

The class name defined at line 18 must match the class name set at line 5 of pma-ingress.yaml. The generated cert will be stored in a Secret whose name is defined at line 13 (privateKeySecretRef.name). Apply them using:

kubectl apply -f le-staging.yaml
kubectl apply -f le-prod.yaml

You can check the status of issuer with kubectl describe issuer letsencrypt-staging or kubectl describe issuer letsencrypt-prod.

Sign SSL Certificate and Deploy

Uncomment line 6 and lines 18-21 in pma-ingress.yaml and set the value at line 6 to letsencrypt-staging for testing purposes. Then apply the ingress config again and check the status of the certificate:

$ kubectl get certificate
NAME      READY   SECRET    AGE
pma-tls   True    pma-tls   3m

until the READY field becomes True. If it stays False, you can check detailed information about the certificate using kubectl describe certificate pma-tls.

The certificate is expected to be stored in pma-tls:

$ kubectl describe secret pma-tls
Name:         pma-tls
Namespace:    default
Labels:       <none>
Annotations:  cert-manager.io/alt-names: [your hostname]
              cert-manager.io/certificate-name: pma-tls
              cert-manager.io/common-name: [your hostname]
              cert-manager.io/ip-sans:
              cert-manager.io/issuer-kind: Issuer
              cert-manager.io/issuer-name: letsencrypt-staging
              cert-manager.io/uri-sans:

Type:  kubernetes.io/tls

Data
====
tls.crt:  3558 bytes
tls.key:  1679 bytes

Try accessing https://[your hostname]/; you will get a certificate warning. That's expected because the certificate is signed by the staging ACME server, and it confirms that the certificate issuer is working.

Replace letsencrypt-staging with letsencrypt-prod at line 6 in pma-ingress.yaml, delete the secret pma-tls using

kubectl delete secret pma-tls

and apply pma-ingress.yaml again. Then wait a few minutes until the new certificate is ready.

Now you should be able to access https://[your hostname]/ without any certificate warning. Otherwise, check whether you forgot to delete the old pma-tls secret, or whether certificate issuance failed (run kubectl describe certificate pma-tls to check the status).

Afterword

At the very beginning, Kubernetes seemed a little scary and complicated: I had to write several yaml configuration files to set up just one service. But it pays off: I don't need to set every config by hand, write nginx configs, run acme.sh commands, and so on. And I can deploy another cluster with the same configuration files in just a few minutes. With kustomize it's quite easy to generate and reuse configurations among clusters (see the GitHub repo and this blog post).

An obvious disadvantage is the relatively high memory usage; for example, my Kubernetes configuration eats up to 1.5 GiB of memory. The recommended memory, according to the microk8s official docs, is 4 GiB. But anyway, Kubernetes is worth a try.

]]>
DevOps kubernetes microk8s cert-manager mysql phpmyadmin nginx letsencrypt https://blog.dna.moe/2020/07/31/study-k8s-with-microk8s/#disqus_thread
Coming breaking changes about Bandori Database, the future and more https://blog.dna.moe/2020/07/16/coming-changes-bandori-database/ https://blog.dna.moe/2020/07/16/coming-changes-bandori-database/ Thu, 16 Jul 2020 12:07:23 GMT <h2 id="TL-DR"><a href="#TL-DR" class="headerlink" title="TL;DR"></a>TL;DR</h2><ul> <li><code>Bandori Database</code> will be <strong>renamed</strong> to <code>Bandori Top</code> with the new domain <a href="https://bandori.top/">bandori.top</a>.</li> <li>The old domains <em>bandori.ga</em> and <em>bangdream.ga</em> will be abandoned in the near future.</li> <li>All requests to the old domains will get a <code>301 permanent redirect</code> to the new domain.</li> <li>More stable and robust auto update of data and assets.</li> <li>New design of pages and offline cache.</li> <li><a href="https://www.transifex.com/bandori-top/bandori-top-website/">Crowdin Translation</a> support.</li> </ul>

TL;DR

  • Bandori Database will be renamed to Bandori Top with new domain bandori.top.
  • The old domains bandori.ga and bangdream.ga will be abandoned in the near future.
  • All requests to the old domains will get a 301 permanent redirect to the new domain.
  • More stable and robust auto update of data and assets.
  • New design of pages and offline cache.
  • Crowdin Translation support.

Domain and name changes

I configured automatic renewal of the certificates for my domains, but about two weeks ago, someone reported that the cert of my API server had expired. I thought it must be the scripts failing again. Unfortunately, when I ran the command manually, an error occurred. The log showed a strange error message:

{"error": "You cannot use this API for domains with a .cf, .ga, .gq, .ml, or .tk TLD (top-level domain). To configure the DNS settings for this domain, use the Cloudflare Dashboard."}

It was a real shock for me. After some googling I realized that Cloudflare had banned free domains from using the API. I mainly use the Cloudflare API to issue wildcard certificates, and from now on I can't do that anymore with my "free" domains. It reminds me that anything free may not be stable. OK, it's time to say goodbye to free domains.

After some searching across different domains and domain registrars, I picked bandori.top. I will move all Bandori Database services to this domain, and, to match the new domain, I decided to rename it to Bandori Top.

Website changes

It's been quite a long time since I changed anything on Bandori Database except bug fixes. I planned some features before, but they are still on the pending feature list. To tell the truth, I'm not good at design, and the user experience of Bandori Database is far from convenient. Sometimes the auto update feature also fails. In the past months I made some backend changes to make it more reliable and robust, but they are not public at this moment. Since the changes are so breaking, I have to modify a lot of frontend and API code, so I decided to improve Bandori Database and introduce some new features.

The following parts contain some technical details and are not aimed at normal users.

Backend and API

Bandori Top API Service is a RESTful API service. The current version, unchanged since the start of the service, is v1. But because the backend will directly use the proto definitions extracted from the code, many entries and key names have changed, and most APIs will get a new version, v2. The v1 endpoints may still work, but some keys may have changed, so you should follow those changes to prevent crashes in programs that use the Bandori Top API Service.

New page design

I have to say, I'm not good at presenting data in a "beautiful" way. I decided to redesign the pages again to make them clearer and to display more useful information. Gachas will be removed from the home page and moved to a separate page. Event history will be presented on a new page supported by a backend database. The card detail and song detail pages will also show more information.

Bug fixes

Some bugs still make important features unavailable, like Live2D and the music chart viewer. They are related to the backend and will be fixed with the new backend version.

Crowdin Translation

Bandori Top supports multiple languages, but some translations are missing in Korean and Japanese. Now with the new Crowdin Translation feature, you can help improve the translations! For more info see the project page. You may need an account to submit new translations, and I can invite you to collaborate.

Offline cache with IndexedDB

IndexedDB is a useful tool for Progressive Web Applications (PWA). A PWA aims to create an "experience like native apps", so offline use and offline caching are important for such apps, and IndexedDB is exactly the tool for this task. With offline caching, your access to Bandori Top will be blazing fast because most data will be retrieved from the cache. I will do some optimizations to keep cached data up to date, so you will not see stale data while online.

]]>
Website Bandori Database Breaking Changes Bandori Top https://blog.dna.moe/2020/07/16/coming-changes-bandori-database/#disqus_thread
Generating Proto File For Bangdream 4.0.0 (V2) https://blog.dna.moe/2020/03/16/auto-gen-bang-proto-v2/ https://blog.dna.moe/2020/03/16/auto-gen-bang-proto-v2/ Mon, 16 Mar 2020 11:06:34 GMT <p>You can check <a href="/2018/09/15/auto-gen-bang-proto/">this post</a> for some background. BangDream now uses “on demand” delivery and I can only get arm64 blobs now. They are so different from the old armv7 instructions that I have to rewrite some code to get the correct tag numbers.</p> <p>At first, use the <strong>latest</strong> <code>il2cppdumper</code>, or you may have some errors when running the script. I tried immediately with my old script, and all tag numbers were reported as <em>None</em>. It’s pretty annoying but it should have something to do with the new instructions of arm64 (aarch64). Now let’s check what happened.</p>

You can check this post for some background. BangDream now uses "on demand" delivery, and I can only get arm64 blobs now. They are so different from the old armv7 instructions that I had to rewrite some code to get the correct tag numbers.

At first, use the latest il2cppdumper, or you may get errors when running the script. I tried immediately with my old script, and all tag numbers were reported as None. It's pretty annoying, but it should have something to do with the new arm64 (aarch64) instructions. Now let's check what happened.

Disassembly

As in armv7, protobuf-net code is compiled into two kinds of instruction sequences. Take UserAuthRequest as an example. userId is compiled into:

08 04 40 F9 E1 03 00 32 E2 03 1F AA 00 01 40 F9 6D 70 78 14

With an online disassembler you get the following instructions:

0x0000000000000000:  08 04 40 F9    ldr  x8, [x0, #8]
0x0000000000000004:  E1 03 00 32    orr  w1, wzr, #1
0x0000000000000008:  E2 03 1F AA    mov  x2, xzr
0x000000000000000c:  00 01 40 F9    ldr  x0, [x8]
0x0000000000000010:  6D 70 78 14    b    #0x1e1c1c4

and attestationErrorMsg is encoded as:

F3 0F 1E F8 FD 7B 01 A9 FD 43 00 91 08 04 40 F9 21 01 80 52 ...

and processed by the disassembler:

0x0000000000000000:  F3 0F 1E F8    str  x19, [sp, #-0x20]!
0x0000000000000004:  FD 7B 01 A9    stp  x29, x30, [sp, #0x10]
0x0000000000000008:  FD 43 00 91    add  x29, sp, #0x10
0x000000000000000c:  08 04 40 F9    ldr  x8, [x0, #8]
0x0000000000000010:  21 01 80 52    movz w1, #0x9

The most important instructions are:

E1 03 00 32    orr  w1, wzr, #1
# or
21 01 80 52    movz w1, #0x9

Instruction Encoding

But how are the immediates encoded? By checking the reference and the instruction encodings I figured out that MOVZ uses a direct immediate and ORR uses a bitmask immediate. Although these are aarch64 instructions, both use 32-bit immediates here.

The direct immediate is indeed direct: just read the 16-bit field and you're done:

x101 0010 1xxi iiii iiii iiii iiid dddd

But what about the bitmask immediate? It is like the rotated immediate encoding in armv7 but with some differences. The ORR immediate instruction looks like this:

x011 0010 0Nii iiii iiii iinn nnnd dddd

N together with the first x (also known as sf) determines the bit length (sf == 0 AND N == 0 => 32 bit, sf == 1 => 64 bit); the first six binary digits of the immediate are immr and the last six are imms. You can find the code to decode the bitmask immediate in the ARM official reference:

// DecodeBitMasks()
// ================
// Decode AArch64 bitfield and logical immediate masks which use a similar encoding structure

(bits(M), bits(M)) DecodeBitMasks(bit immN, bits(6) imms, bits(6) immr, boolean immediate)
    bits(M) tmask, wmask;
    bits(6) levels;

    // Compute log2 of element size
    // 2^len must be in range [2, M]
    len = HighestSetBit(immN:NOT(imms));
    if len < 1 then UNDEFINED;
    assert M >= (1 << len);

    // Determine S, R and S - R parameters
    levels = ZeroExtend(Ones(len), 6);

    // For logical immediates an all-ones value of S is reserved
    // since it would generate a useless all-ones result (many times)
    if immediate && (imms AND levels) == levels then UNDEFINED;

    S = UInt(imms AND levels);
    R = UInt(immr AND levels);
    diff = S - R;    // 6-bit subtract with borrow
    esize = 1 << len;
    d = UInt(diff<len-1:0>);

    welem = ZeroExtend(Ones(S + 1), esize);
    telem = ZeroExtend(Ones(d + 1), esize);
    wmask = Replicate(ROR(welem, R));
    tmask = Replicate(telem);

    return (wmask, tmask);

But it's pretty hard to understand. LLVM provides code that is much clearer:

static inline uint64_t decodeLogicalImmediate(uint64_t val, unsigned regSize) {
  // Extract the N, imms, and immr fields.
  unsigned N = (val >> 12) & 1;
  unsigned immr = (val >> 6) & 0x3f;
  unsigned imms = val & 0x3f;

  assert((regSize == 64 || N == 0) && "undefined logical immediate encoding");
  int len = 31 - countLeadingZeros((N << 6) | (~imms & 0x3f));
  assert(len >= 0 && "undefined logical immediate encoding");
  unsigned size = (1 << len);
  unsigned R = immr & (size - 1);
  unsigned S = imms & (size - 1);
  assert(S != size - 1 && "undefined logical immediate encoding");
  uint64_t pattern = (1ULL << (S + 1)) - 1;
  for (unsigned i = 0; i < R; ++i)
    pattern = ror(pattern, size);

  // Replicate the pattern to fill the regSize.
  while (size != regSize) {
    pattern |= (pattern << size);
    size *= 2;
  }
  return pattern;
}
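For a cross-check, the LLVM routine ports almost line-for-line to Python. The sketch below is my own (the name decode_logical_imm32 is mine), restricted to the 32-bit, N == 0 case that appears in these binaries; the input is the 13-bit N:immr:imms field, just as in the C code:

```python
def decode_logical_imm32(val):
    """Decode a 32-bit AArch64 logical (bitmask) immediate.

    val is the 13-bit N:immr:imms field, as in LLVM's
    decodeLogicalImmediate; only regSize == 32 (N == 0) is handled.
    """
    n = (val >> 12) & 1
    immr = (val >> 6) & 0x3F
    imms = val & 0x3F
    assert n == 0, "only 32-bit immediates (N == 0) are handled"

    # len = index of the highest set bit of N:~imms (6 bits)
    length = ((n << 6) | (~imms & 0x3F)).bit_length() - 1
    assert length >= 0, "undefined logical immediate encoding"
    size = 1 << length                  # element size: 2, 4, 8, 16 or 32
    r = immr & (size - 1)
    s = imms & (size - 1)
    assert s != size - 1, "undefined logical immediate encoding"

    pattern = (1 << (s + 1)) - 1        # S+1 ones
    # rotate the element right by R bits within `size` bits
    pattern = ((pattern >> r) | (pattern << (size - r))) & ((1 << size) - 1)
    # replicate the element to fill 32 bits
    while size != 32:
        pattern |= pattern << size
        size *= 2
    return pattern
```

For the orr w1, wzr, #1 example above, N, immr and imms are all zero, and decode_logical_imm32(0) returns 1 — exactly the tag number.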

OK, we can now rewrite our code to get the correct tag numbers.

New getTag Function

Note that the following code reads instructions as little-endian. In our case N and sf are always 0, so we don't have to care about the length; it's fixed to 32 bits.

def getTag(address):
    offset = address & 0xFFFFFFFF

    # prog is the opened libil2cpp.so
    prog.seek(offset)
    inst = int.from_bytes(prog.read(4), byteorder='little', signed=False)
    if inst == 0xf9400408:    # ldr x8, [x0, #8]
        prog.seek(offset + 4)
        inst = int.from_bytes(prog.read(4), 'little', signed=False)
    elif inst == 0xf81e0ff3:  # str x19, [sp, #-0x20]!
        prog.seek(offset + 16)
        inst = int.from_bytes(prog.read(4), 'little', signed=False)
    else:
        print(hex(inst), hex(address))
        return None
    if inst >> 24 == 0x52:    # movz: direct 16-bit immediate
        return (inst >> 5) & 0xFFFF
    elif inst >> 24 == 0x32:  # orr: bitmask immediate
        immr = (inst >> 16) & 0x3F
        imms = (inst >> 10) & 0x3F
        clz = lambda x: "{:032b}".format(x).index("1")
        _len = 31 - clz((0 << 6) | (~imms & 0x3F))  # N is always 0 here
        _size = 1 << _len
        R = immr & (_size - 1)
        S = imms & (_size - 1)
        ret = (1 << (S + 1)) - 1
        # rotate the element right by R bits within _size bits
        ret = ((ret >> R) | (ret << (_size - R))) & ((1 << _size) - 1)
        while _size != 32:    # replicate the element to fill 32 bits
            ret |= ret << _size
            _size *= 2
        return ret
    return None
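As a quick sanity check, here is a standalone sketch of my own (it duplicates the immediate decoding so it needs no binary file) run against the two example instruction words from the disassembly above:

```python
def imm_from_inst(inst):
    """Extract the protobuf tag from a MOVZ or 32-bit ORR-immediate word."""
    if inst >> 24 == 0x52:              # movz wN, #imm16
        return (inst >> 5) & 0xFFFF
    if inst >> 24 == 0x32:              # orr wN, wzr, #bitmask-imm
        immr = (inst >> 16) & 0x3F
        imms = (inst >> 10) & 0x3F
        # N is always 0 for 32-bit immediates
        size = 1 << ((~imms & 0x3F).bit_length() - 1)
        r, s = immr & (size - 1), imms & (size - 1)
        pattern = (1 << (s + 1)) - 1
        # rotate right by R within the element, then replicate to 32 bits
        pattern = ((pattern >> r) | (pattern << (size - r))) & ((1 << size) - 1)
        while size != 32:
            pattern |= pattern << size
            size *= 2
        return pattern
    return None

# E1 03 00 32  ->  orr w1, wzr, #1   (userId, tag 1)
# 21 01 80 52  ->  movz w1, #0x9     (attestationErrorMsg, tag 9)
print(imm_from_inst(0x320003E1))  # 1
print(imm_from_inst(0x52800121))  # 9
```

Both results match the tags recovered from the disassembly, so the decoding logic is consistent.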

For the whole code, please see this gist.

Happy hacking!

]]>
Hack Bandori unity il2cpp protobuf automation https://blog.dna.moe/2020/03/16/auto-gen-bang-proto-v2/#disqus_thread
Generating Proto File For Bangdream https://blog.dna.moe/2018/09/15/auto-gen-bang-proto/ https://blog.dna.moe/2018/09/15/auto-gen-bang-proto/ Sat, 15 Sep 2018 12:04:21 GMT <p>This post is encouraged by <a href="https://estertion.win/2018/04/bang-dream-proto%E7%9B%B8%E5%85%B3/">esterTion’s post</a>. Before, I only extracted the proto file by hand, but it gives me an easy way to automate it. For what a <em>proto file</em> and <em>protobuf</em> are, please check <a href="https://developers.google.com/protocol-buffers/">Google’s documentation</a>.</p>

This post is encouraged by esterTion's post. Before, I only extracted the proto file by hand, but it gives me an easy way to automate it. For what a proto file and protobuf are, please check Google's documentation.

Background

Bangdream (full name: BanG Dream! Girls Band Party!, Japanese: バンドリ! ガールズバンドパーティ!) is a rhythm card game released by Bushiroad and developed by Craftegg with Unity. As of today, Unity is becoming more and more popular among game developers. In Unity 4.x.x, there's nearly no protection of the C# source code: with dnSpy or ILSpy it's very easy to read the code or do some hacking.

Since Unity 5, a method called "il2cpp" can be applied to convert C# bytecode (IL) to native code. If the target is Android, a libil2cpp.so can be found under the lib directory. Thanks to Perfare's amazing tool Il2CppDumper, reading the code is as easy as before.

Il2CppDumper generates several files; if you don't want to read the source code with IDA, just take dump.cs, which includes all classes, methods and variables. For a game like Bangdream, you can get rich information from this file, like the AES key and the protobuf definitions. I think Craftegg wanted to avoid unnecessary files, so they use protobuf-net to write the protobuf definitions in C# code.

Analyse

The method names in dump.cs are quite reasonable. The protobuf definition is always stored in classes named like "GetResponse" or "PostRequest" prefixed by the API path, so "SuiteMasterGetResponse" is the class storing the protobuf definition of the game master data (the game database). After reading esterTion's post and gist code, it doesn't seem hard to extract a proto file from the protobuf definition in the il2cpp file. The only problem is how to get the tag numbers, which can only be found in the binary.

Read and rewrite code

esterTion's original code is written in PHP. First it reads dump.cs and extracts the basic structure of the protobuf, then generates a valid "proto2" file. The best parts are the regular expressions extracting the class body and ProtoMemberAttribute (= proto message member), but sadly they only run on a PCRE engine (like PHP's). Some of that grammar is invalid in Python (subroutines and atomic groups), and it took me hours to construct a working regex for Python.

Now I get the proto file, but with all tags as hex addresses. The original getTag method doesn't work. Is the original code wrong? Probably not: esterTion seems to use the Mach-O binary, which has different code from the Android library. It needs some adjustments.

Get the right Tag

Let's open libil2cpp.so with a hex editor and jump to the address. The hex code for members looks like: 04 00 90 E5 01 10 A0 E3 ... 04 00 90 E5 02 10 A0 E3 .... One member begins with 04 00 90 E5, and the following number keeps changing. Verify it and you will find that this is exactly the tag number you want. Notice that the number is stored in 12 bits, so don't extract only one byte.

But that's only one type. Another type begins with 10 4C 2D E9, where the tag number is 12 bytes away. So replace the original getTag code with this:

# Only for Python 3
def getTag(address):
    offset = address & 0xFFFFFFFF

    prog.seek(offset)
    inst = prog.read(4)
    inst = int.from_bytes(inst, byteorder='little', signed=False)
    if inst == 0xe5900004:  # caution! little-endian
        prog.seek(offset + 4)
        return int.from_bytes(prog.read(2), 'little', signed=False) & 0xfff
    elif inst == 0xe92d4c10:  # caution! little-endian
        prog.seek(offset + 12)
        return int.from_bytes(prog.read(2), 'little', signed=False) & 0xfff
    else:
        print(hex(inst), hex(address))
        return None

A small problem: the tag number 300 (0x12c) is stored as 3915 (0xf4b). I couldn't figure out the reason, so I hardcoded it.
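The 0xf4b value is actually explained by ARMv7's standard rotated-immediate encoding: the 12-bit field holds a 4-bit rotation count and an 8-bit value, and the immediate is imm8 rotated right by twice the rotation count. A small decoding sketch (decode_arm_imm is my own helper name):

```python
def decode_arm_imm(imm12):
    """Decode an ARMv7 data-processing immediate: imm8 ROR (2 * rotate)."""
    rotate = (imm12 >> 8) & 0xF
    imm8 = imm12 & 0xFF
    r = 2 * rotate
    # rotate right by r within 32 bits (a no-op when r == 0)
    return ((imm8 >> r) | (imm8 << (32 - r))) & 0xFFFFFFFF
```

decode_arm_imm(0xF4B) rotates 0x4B right by 30 bits (i.e. left by 2), giving exactly 300, so the hardcoding can be replaced by this decoder.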

Proto supports map

Proto now supports map definitions, but a map is compiled to a repeated structure of entry objects holding the map key and value. For example, the following protos are equivalent:

message test {
    map<uint32, string> entries = 1;
}

message testEntry {
    required uint32 key = 1;
    required string value = 2;
}
message test {
    repeated testEntry entries = 1;
}

The original code only emits the second form. The Python protobuf runtime can generate a map structure from the first definition, so the first form is preferable for me, and I made some changes to emit the cleaner map definition.
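The equivalence is visible on the wire: a map entry serializes exactly like an embedded message whose key is field 1 and whose value is field 2. A hand-rolled sketch of the wire bytes (my own minimal encoding helpers; no protobuf library required):

```python
def varint(n):
    """Encode a non-negative int as a protobuf varint."""
    out = bytearray()
    while True:
        b = n & 0x7F
        n >>= 7
        if n:
            out.append(b | 0x80)
        else:
            out.append(b)
            return bytes(out)

def tag(field, wire_type):
    """Encode a field tag: (field number << 3) | wire type."""
    return varint((field << 3) | wire_type)

# map<uint32, string> entry {5: "abc"} for the field `entries = 1`
entry = tag(1, 0) + varint(5) + tag(2, 2) + varint(3) + b"abc"  # key + value
msg = tag(1, 2) + varint(len(entry)) + entry  # same bytes as `repeated testEntry`
print(msg.hex())  # 0a0708051203616263
```

Both the map form and the repeated testEntry form decode this byte string identically.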

Final words

You can find the gist code here. I've tested the generated proto file, and it fits the game master data. It's time to say goodbye to manually writing proto files.

]]>
Hack Bandori unity il2cpp protobuf automation https://blog.dna.moe/2018/09/15/auto-gen-bang-proto/#disqus_thread