Inspiration: We started with a focused approach to Challenge 1: Data Visualization, driven by our fascination with transforming complex SAP data into intuitive, modern interfaces. The opportunity to reimagine traditional SAP GUI screens and create a more accessible user experience for purchase order data immediately captured our interest. As we delved deeper into the data structures and SAP ecosystem, our curiosity grew about the underlying data sources themselves. This natural progression led us to explore Challenge 2: Archive Decryption, where we sought to understand how SAP stores and structures its archival data. What started as a single-challenge endeavor evolved into a comprehensive full-stack solution that addresses both data presentation and data source investigation.
Approach: Our approach began by testing the skeleton decoder provided in the challenge documentation. As expected, the script failed to decompress the given .ARCHIVE file, which indicated that the file wasn’t encoded with a standard compression method. Rather than rewriting the entire program, we added error handling and diagnostic printouts to better understand where and why it failed.
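A minimal sketch of the kind of diagnostic wrapper we mean — the file path and exact printouts are illustrative, not our actual script:

```python
import gzip
import zlib

def try_decompress(path):
    """Attempt standard decompression methods and report why each fails."""
    with open(path, "rb") as f:
        data = f.read()
    print(f"Read {len(data)} bytes; first 8 bytes: {data[:8].hex()}")
    for name, fn in [("gzip", gzip.decompress), ("zlib", zlib.decompress)]:
        try:
            out = fn(data)
            print(f"{name}: OK, {len(out)} bytes decompressed")
            return out
        except Exception as exc:
            print(f"{name}: failed ({exc})")
    return None  # nothing standard worked; the file needs deeper inspection
```

Running this against the .ARCHIVE file fails on every method, which is exactly the signal that pointed us toward inspecting the raw bytes instead.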
Next, we used a hex viewer to inspect the file’s internal structure. The file did not contain recognizable compression signatures such as GZIP, LZ4, or ZSTD magic bytes. This confirmed that the archive was not a typical compressed file. After reviewing the SAP documentation linked in the challenge, we discovered that these .ARCHIVE files are created using the SAP Archive Development Kit (ADK) — a proprietary system that stores business data as structured binary segments rather than compressed byte streams.
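The magic-byte check can be reproduced in a few lines; the signatures below are taken from the published format specifications, not from the archive itself:

```python
# Leading magic bytes for common compression formats
# (per the GZIP, Zstandard, LZ4-frame, and XZ specifications).
MAGIC = {
    "GZIP": bytes.fromhex("1f8b"),
    "ZSTD": bytes.fromhex("28b52ffd"),
    "LZ4":  bytes.fromhex("04224d18"),
    "XZ":   bytes.fromhex("fd377a585a00"),
}

def identify(header: bytes) -> str:
    """Match a file's leading bytes against known compression signatures."""
    for name, magic in MAGIC.items():
        if header.startswith(magic):
            return name
    return "unknown"
```

The .ARCHIVE header matched none of these, ruling out the common compressed-stream formats.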
Once we understood that the file followed SAP’s ADK structure, we developed a custom function to scan for segment headers and extract them automatically. The scan revealed one segment whose header fields were plausible for an ADK-style binary segment:
Segment ID: 0065, Length: 585217 bytes, Offset: 2
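A sketch of the header scan. The field layout assumed here (a 2-byte segment ID followed by a 4-byte big-endian length) is purely illustrative — the real ADK layout is proprietary and this is not the documented format:

```python
import struct

def scan_segments(data: bytes):
    """Walk the buffer looking for plausible segment headers.
    Assumed layout (illustrative only): 2-byte segment ID, then a
    4-byte big-endian payload length that must fit inside the file."""
    segments, offset = [], 0
    while offset + 6 <= len(data):
        seg_id, length = struct.unpack_from(">HI", data, offset)
        if 0 < length <= len(data) - offset - 6:
            segments.append({"id": f"{seg_id:04d}", "length": length, "offset": offset})
            offset += 6 + length  # skip past this segment's payload
        else:
            offset += 1  # no plausible header here; slide forward one byte
    return segments
```

The sanity check on the length field is what keeps a sliding scan like this from reporting a false header at every offset.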
This confirmed the presence of ADK-style binary segments. We then analyzed the raw data and detected a repeating 16-byte pattern, suggesting that the data was organized in fixed-length records.
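The repeating-pattern detection can be sketched as a simple autocorrelation over candidate record lengths; the scoring heuristic is our own, not part of any SAP specification:

```python
def best_stride(data: bytes, max_stride: int = 64) -> int:
    """Score each candidate record length by how often a byte equals the
    byte one record earlier; the best-scoring stride suggests the record size."""
    scores = {}
    for stride in range(2, max_stride + 1):
        matches = sum(data[i] == data[i - stride] for i in range(stride, len(data)))
        scores[stride] = matches / (len(data) - stride)
    return max(scores, key=scores.get)  # first stride with the highest score
```

On the archive's segment payload, a heuristic like this peaks at a 16-byte period, consistent with fixed-length records.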
Finally, we built functions to extract ASCII-readable text, guess field types, and interpret potential record boundaries. We exported the parsed data as structured JSON files and integrated the results directly into our web dashboard under a new “Decryption Results” tab. This made it possible to visualize the decoded findings alongside other SAP-related data. Through this process, we concluded that the .ARCHIVE file was not encrypted or compressed in a conventional way but was instead serialized using SAP’s ADK binary format.
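A sketch of the text-extraction and JSON-export step; the record contents in the usage example are made up for illustration, not actual archive data:

```python
import json
import string

# Printable ASCII bytes, excluding whitespace control characters.
PRINTABLE = set(string.printable.encode()) - set(b"\t\n\r\x0b\x0c")

def extract_ascii_runs(record: bytes, min_len: int = 3):
    """Pull printable-ASCII runs of at least min_len bytes out of a record."""
    runs, current = [], bytearray()
    for b in record:
        if b in PRINTABLE:
            current.append(b)
        else:
            if len(current) >= min_len:
                runs.append(current.decode("ascii"))
            current = bytearray()
    if len(current) >= min_len:
        runs.append(current.decode("ascii"))
    return runs

def records_to_json(data: bytes, record_size: int) -> str:
    """Split data into fixed-length records and serialize readable text as JSON."""
    records = [data[i:i + record_size] for i in range(0, len(data), record_size)]
    return json.dumps(
        [{"offset": i * record_size, "text": extract_ascii_runs(r)}
         for i, r in enumerate(records)],
        indent=2,
    )
```

The JSON output from a step like this is what feeds a dashboard tab directly, with the offset field preserved so each finding can be traced back to its position in the archive.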
Challenges:
Challenge 1: Misdiagnosis of Compression
Problem: Initially assumed standard compression algorithms
Solution: Conducted thorough binary analysis and SAP documentation review
Learning: Not all archival formats use conventional compression; proprietary systems require specialized knowledge
Challenge 2: Lack of Clear Documentation
Problem: Sparse documentation on ADK binary format specifics
Solution: Reverse-engineered through pattern recognition and iterative testing
Learning: Developed skills in binary analysis and format discovery
Impact and Abilities: This project demonstrates our ability to:
Tackle complex, multi-faceted technical challenges
Adapt and learn new technologies rapidly
Bridge the gap between legacy enterprise systems and modern user expectations
Deliver end-to-end solutions that address both data access and data presentation
