Working towards a foundation

The last month has been filled with a lot of development on Kestrel whilst commuting between home and work.

The office in which Kestrel is being developed – it’s often late, noisy, and sometimes I don’t even have a seat to make working comfortable.

I am also getting some development done at home, but the majority is being done on the train.

So what has been happening?

Lots. Let’s start things off with a small screenshot.

I did say a small screenshot.

This represents the progress of a month of development. It doesn’t look like much but it does show a lot.

Where are we at?

When I last posted anything there was a very basic render pipeline set up in Metal. It rendered a yellow square. Now the engine is actually rendering stuff and it looks a lot more interesting.

I should probably point out that the engine is being developed to be compatible with EV Nova in terms of resources and data. The end game and scenario will be that of EV Override, but in order to test certain features I need to use Nova.

A shuttle flying past Jupiter – spot the bugs in this scene.

Currently I am working on implementing ships. I know this sounds like a fairly basic thing, but given that space ships are sort of the very heart of EV, it is a big thing to get right, and something that will get its own post in the future. This is also requiring a change to the rendering pipeline to allow for additive light blending. The lights and the glow on the shuttle do not blend correctly by default and have weird black halos around them.

The status bar is being developed bit by bit. It’s got no real function right now, other than to occupy that bit of the screen.

Additionally I have added in asteroids to the system, as seen in the small screenshot. This means that the systems feel a bit more alive.

What are the problems?

This is the first time I’ve really worked with GPU based rendering (as in Metal or OpenGL). I’ve used heavily abstracted things that ultimately end up on the GPU, but I’ve never really done it directly. This means I have a lot of learning to do with regards to it, and in general it has been fun.

However, getting used to render pipelines and the different coordinate system has been interesting, and has led to some interesting bugs. In fact, up until recently the distances between everything in the engine were squashed. In the “Sol” system, you could easily see both Earth and Mars on the screen together whilst parked at Earth, when Mars should almost always be off screen in this scenario.

There is also the blending issue, and trying to determine how best to handle it. Lots of little things like these add up into a bit of a headache.

Ship movement is also another issue. Trying to determine how a ship moves in Nova seems like a straightforward exercise, but the exact mechanics are tricky. The ships can’t feel too sluggish or too zippy otherwise it will be frustrating to returning users.


I’ve decided recently to document some of the additions being made to the engine by adding videos/demonstrations on YouTube. I may add other Kestrel related things as well over time.

You can find the “Kestrel Development” playlist here.

A short demonstration of Asteroids in the Kestrel Engine – Using Nova Data Files.

This is probably not the most exciting of updates, but I just wanted to get a little bit of an update out to help clear my mind before getting back to development.

I will say I have a couple of posts planned (probably over the next month or so):

  • Ship Movement in the Escape Velocity games
  • Reverse Engineering EV Nova

Hopefully they will be interesting ones for people to read, and for me to write!

Plugin plans in Kestrel

In my last post I talked about how the EV: Override revival project is now a thing and how the engine is being called Kestrel. In that post I also mentioned the plans for plugin development and how maintaining backwards compatibility is an important factor. At the same time it will be important to expand the capabilities and features afforded to plugin developers.

This post will explain how this will be done and my current plans on accomplishing it.


I have talked in the past about ResourceForks and how they are a central aspect of how the old Escape Velocity games worked. Everything was stored in ResourceForks, and that allowed the engine to quickly locate the required resources, and plugin developers to quickly identify, modify and replace existing resources in the game.

This functionality will be included in Kestrel and will allow it to load all of the old legacy data files and plugins. Plugins using this format will be constrained by the same limitations as on the original engines, as some of those constraints actually came from the format itself.

Newer content should be able to make use of a custom extension on ResourceForks.

Extended ResourceForks

So this is a purely custom thing, based upon the ResourceFork format. The format will allow for larger offset numbers and Resource IDs, thus removing (for all practical purposes) the limitations of the format. These values will likely become 64-bit values, rather than 16-bit values.

This will be the extent of the change to the ResourceFork format.

Distinguishing between the two formats

So how will the engine distinguish between the formats, and what will it mean for plugin developers?

Well, it won’t really mean anything for plugin developers, or even much for the engine itself. Most of the heavy lifting will be handled by libResourceFork, which is part of the Diamond Project. That will handle loading from both formats and then provide back a common set of data structures and interfaces.

Internally though there will be a small addition to the header of the ResourceFork to denote that it is in the extended format.

Let’s take a look at the header (preamble) of the original ResourceFork.

Standard ResourceFork Preamble

This provides information about the layout of the ResourceFork. We can visualise this slightly with the following diagram.

The basic layout / structure of a ResourceFork

The two offset values represent the position within the file for that particular structure. The “data_offset” being where the actual data for all of the resources is located, and the “map_offset” being where all of the resource meta data is located (including types, ids, names, flags, etc). The corresponding sizes simply state how much data of each there is.

Note: Technically the resource map could be located before the resource data, but everything I’ve seen has this ordering.

So let’s take a look at the proposed preamble for the extended format.

Extended ResourceFork Preamble

There are two big changes. The first is an extra field at the start of the structure, encoding information about the format.

Due to the extremely unlikely scenario of the “data_offset” and “map_offset” fields having a combined value of 1, this is the value we expect “format_version” to hold. Anything else causes the file to be treated as a standard ResourceFork.

The second change is the use of 64-bit values, effectively making the capacity of the ResourceFork infinite (unless you have access to 18.5EB of storage and can fill it.)

So why/how does this “format_version” field trick work?

Let’s take a look at the initial 20 bytes of a standard ResourceFork.

A standard ResourceFork preamble

Each grouping of 8 digits represents a single 32-bit value (each pair of digits is a single byte). Using the above structure for the standard preamble we can see that the “data_offset” field will have a value of “00000100”, which translates to 256. We can also see that the “map_offset” field will have a value of “00007520” which translates to 29,984.

Now if we apply the extended preamble format we need to take 16 digits in order to have a 64-bit value. This will give our “format_version” field a value of “0000010000007520”, which translates to 1,099,511,657,760 – quite a bit bigger than 1!

Because the offsets are from the beginning of the file, we can be certain that neither offset is going to be 0, and any value greater than 0 in “data_offset” will cause “format_version” to have a value of at least 4,294,967,296.

So now that we have a method of distinguishing between standard and extended ResourceForks, let’s take a look at the resource map, and limitations that it imposes.

A diagram illustrating the layout of the resource map

This whole structure represents all of the meta data in the ResourceFork. Resource types, resource IDs, sizes, flags, resource names, etc. It is this structure that causes us most of our headaches and limitations.

The data structure representing the “resource map layout”

This data structure uses only 16-bit fields! Given that both offsets included are from the beginning of the resource map, this imposes a limitation on the number of resources that can be included in the ResourceFork. Obviously we can split the resources across multiple files to get around this issue, but that is not always a nice or elegant solution (plus I’m not totally certain of the internal limitations of the Resource Manager, so there may be further limitations there!)

So let’s work out, best case scenario, how many resources we could house in a single ResourceFork. We’ll assume that we have just a single resource type for this. Let’s take a look at the structure of the type list.

The resource type list contains a count of how many types are included in the file, followed by a list of type definitions, which in turn point to lists of resource definitions.

This structure may seem complex, but in reality it is not that bad. The type count field at the start is a 16-bit value, which means a total of 65,536 types can be included in a single ResourceFork. For all practical purposes this would never be reached.

Each type entry is 8 bytes long: 4 bytes denoting a type code such as ‘PICT’, 2 bytes for the offset to the first resource of that type, and a further 2 bytes representing how many resources of that type are included.

The resource entry is 12 bytes long. 2 bytes for the resource ID, 2 bytes for the offset to the resource name, 1 byte for the resource flags, 3 bytes for the offset to the resource data and then 4 ignored bytes.

Note: These bytes are not actually ignored; they are used by the system for an unknown purpose, likely storing a reference value.

So we started with a total of 65,536 bytes of space. Immediately we can take away 6 bytes for the resource map structure, followed by a further 2 bytes for the resource type count, and then finally another 8 bytes for our type definition.

This gives us a total of 65,520 bytes of space to keep track of our resources, which, divided between resource definitions, allows us to store a maximum of 5,460 resources in a single ResourceFork! This means that a game such as EV Nova requires its data to be split across multiple files. That also assumes those resources do not need more than 16MB of data (due to the 3-byte data offsets the format uses.)

In reality memory constraints on earlier systems will have been a much more limiting factor.

How the Extended ResourceFork will change things.

I already mentioned the changes to the preamble structure at the start of the ResourceFork. This is purely to help differentiate the two formats. The next change will be to make all of the 16-bit (2 byte) values into 64-bit values.

In addition, all resource names will be UTF-8 encoded rather than Mac Roman encoded, allowing for modern representation of text in resource names.

This new format will need a proper specification at some point, but keen plugin developers will have noticed an issue with this proposal: it will break existing plugin editors!

The Kestrel Plugin Development Kit (KPDK)

This is a tangential project to Kestrel and will be extremely barebones, likely requiring the community to make it nice (kind of like how Nova Tools and Mission Computer made EV Nova plugin development nice).

So what is the plan with this? What is it?

Many people have expressed concern over the use of resource forks, and rightly so. They are an old format, and if backwards compatibility were not a concern they would not be getting used. But this does not mean plugin developers should be subjected to them. For this reason the KPDK will be a compiler of sorts that compiles simple text-based definitions into plugins compatible with Kestrel or EV Nova.

Here is a simple draft definition for a plugin.

; Define the planet Earth
StellarObject {
    New(id = #128, name = "Earth") {
        PlanetType = 0
        Government = #128
        ; Other fields for the stellar object.
    }
}

; Define a planet description for Earth
Description {
    New(id = StellarObject("Earth"), name = "Earth Landing Description") {
        Body = "Welcome to Earth. The home of Humanity."
        LandingPicture = PNG(/path/to/image/file.png)
    }
}

The compiler would know how to parse this and how to build the appropriate resources. It would be able to synthesise the appropriate resources and reference resources within definitions, allowing a plugin developer to not worry about remembering specific ID ranges or how to link things.

There will be more information on this KPDK later on once it has been fleshed out more, and once Kestrel is actually loading some data files.

But I do plan to have the compiler target both Kestrel and EV Nova, as well as provide warnings on issues in the definitions being compiled, and to be able to decompile a Kestrel or EV Nova plugin into this definition script.

It should also mean that the tooling and features available to plugin developers can evolve and expand over time without being tightly bound to and dependent on the engine.

Wrap up

There was a lot of information in this post. I hope you enjoyed reading it, and it gives some insight into the direction I hope to go with regards to plugins.


Kestrel – EV: Override Remaster

In case you are not a frequent visitor of the r/evnova subreddit (or just missed it), Peter Cartwright recently announced a project to revive and remaster Escape Velocity Override. So what does this mean for OpenNova? Well, OpenNova will become the foundation for the new version of EV: Override. Additionally, the engine itself will be adopting the name Kestrel.

Open Source

Good news to everyone who has been following OpenNova for a while: Kestrel will be open source. Whilst the exact license has not yet been decided, it will be a generally permissive license, so as to allow people to use the main engine as a basis for their own games.

The biggest questions in deciding on a license are around forks pushing changes back to the main Kestrel project, commercial limitations, and attribution requirements. For the time being these are still being considered and weighed up.

Kestrel will not be source available during the initial run of development. This is mainly because I may decide to make sweeping changes to the foundation of the game and I don’t want to worry about managing the project and merge conflicts whilst I’m in this initial phase.

What will become of this blog?

This blog will become the primary development blog for Kestrel. I’ll discuss design decisions, structure and implementation details of Kestrel. Some of the posts will be more technical than others, but I’ll generally try and keep them understandable.

I’m also going to try and aim for weekly/bi-weekly updates. In the past OpenNova has been an infrequent side project for me and as a result has not allowed for many posts to be made. Kestrel on the other hand will be much more active, with me dedicating a couple of hours a day (where possible) to it.

The structure and format of this blog may change over time and evolve as the project develops and the community grows.

Some information about Kestrel

Disclaimer: The project is still in its very early stages, so do not take all of this information as being set in stone. Things may change and/or evolve.

Kestrel is being developed from the ground up as an engine dedicated to accurately representing the Escape Velocity experience. Further to this it will maintain backwards compatibility with all of the old plugins and data files, whilst bringing in some new features. There will be a follow up post on the future of plugin compatibility in Kestrel in the coming days.

As it stands at the moment, I am developing Kestrel in C. This may evolve into C++ in the future, but for the moment I want to keep it to C. This is mostly personal preference, plus the fact that C can compile for practically anything.

The result of this is that there will be several components of the project.

  1. The core library of Kestrel
    This will include all of the graphics layer abstraction, basic engine functionality etc. This library should generally be platform agnostic.
  2. The main Kestrel game library
    This library is the main functionality of Kestrel. The actual game itself. This library should be completely platform agnostic.
  3. The platform specific binaries
    These are the foundational code bases that are used to make an actual executable or application bundle. The core library and main game will be linked into these to produce an actual functional game.

    For example, one of the macOS versions of the game will use Metal to render any graphics. The platform-specific game will set up the environment and Metal for use, and provide any hooks that the core library will use to render sprites to the screen and receive events.

There will be some overlap between these components, but in general it should be minimal. Once again there will be a blog post about the architecture and structure of the engine in the future.

The functionality for old Macintosh features such as ResourceForks, QuickDraw, etc. will be coming from the Diamond Project rather than being replicated directly in Kestrel.

Current State

So what is the current state of Kestrel?

Well, I’m currently getting the Metal rendering pipeline and functions set up for macOS, and dispatching rendering commands from the main Kestrel game library.

Humble beginnings for Kestrel – A test of the render pipeline using Apple’s Metal Graphics API.

This may not look like much, and to an extent it is not, but it actually fulfils much of Kestrel’s graphical requirements: the ability to render quads and textures (sorry that the texture is just a yellow square), move objects around, etc. The only missing aspects are text rendering, key presses and mouse interaction.

Once these aspects are implemented, it will be on to developing the actual game.

Something big is brewing…

So this is a big one… I may have finally lost my mind. My growing frustrations with the state of legacy software on macOS are making me think of ambitious projects. Not to mention my desire to do this stuff full time and make a career out of it.

Before we get into the big thing, let’s take a look at these frustrations.

  • 32-bit apps no longer run on Catalina (Sure there are VMs)
  • Classic Mac OS stuff is basically out of reach unless you have old hardware.
  • SheepShaver is crap. I use it and I like it, but damn it is unreliable.
  • I don’t have masses of time to work on this stuff.

For some people these may not be the big issues they are for me. But personally they are issues that frustrate me, and I want to do something about them.


I introduced the Graphite library last week. This is an open source framework which will include a number of modern reimplementations of old technologies such as QuickDraw, ResourceForks, etc. Well this is the underpinning of my larger idea.

That idea is to build a technology that will run old classic mac software directly in a modern environment, similar to what WINE does.

OK so this is a big bit of work, and this aspect of the project will not be Open Source. I plan to make this a product that I can market and sell (though still maintaining a free aspect to it). The underlying Graphite technologies will remain open source and free for people to make use of though.

A Kickstarter

As I said, I want to do this full time. I’m fed up with life as an iOS developer. It’s not interesting to me, and I love working on this stuff.

My plan is to put together a proof of concept and a plan so that I can set up a Kickstarter to get this thing moving properly. The end result is likely to be something that takes old Mac OS software and runs it alongside your modern software, emulating the old 68k or PPC code and providing functionality for the APIs it calls.

It won’t be emulating OS 9 itself, or taking any aspect of Classic Mac OS directly (such as ROMs or System files).

This proof of concept may come to nothing. If that’s the case then I will continue work on Graphite anyway.

This is a project/product that will take a lot of development. If you are interested in becoming involved in it and Graphite then head over to the Discord that I have set up for it. The intention for it is to discuss the development of Graphite and eventually the details of this new product.

What does this mean for the EV Nova Clone?

I will continue to work on it as I have done thus far. I still want to complete this project. But it can never pay the bills (unless I was able to get the rights to the name, etc). Graphite will remain open source, and it will be used to build an EV Nova clone, even if it takes a number of years.


Here we go… time to start properly building this thing, and the first step of the journey is going to be Graphite.

What is Graphite?

Graphite is going to be a Swift package containing all the modern implementations of old legacy code found on the Macintosh platform in the Carbon era. It is fully written in Swift, and the name is a play on the name Carbon, of which Graphite is a type.

The early versions of Graphite will basically provide enough functionality for me to get my EV Nova clone up and running. Read resource files, handle some QuickDraw media and play some sound clips. Certainly not enough to provide a fully compliant Carbon environment. Maybe it’ll get to that point one day, but I certainly don’t see a need for that.

This means it is effectively a combination of ResourceKit and ClassicKit, two earlier frameworks that I developed for accomplishing these goals. However, they were written in Objective-C and strongly coupled to Apple’s Cocoa environment.

Graphite, on the other hand is written in Swift and, in theory, is not coupled to Apple’s ecosystem.

I’m not quite ready to set up the Open Source project page for Graphite just yet, as I want to get some tests in place and ensure the code and project are nicely documented. However I hope to get this up soon.

The end of an era…

Next week is WWDC, the hyped and supposedly exciting event for Apple developers! New versions of macOS, iOS, tvOS and watchOS. New APIs. New things to take advantage of. Except this year I can’t help but feel down…

Last year Apple made it clear that macOS Mojave would be the end of the line for 32-bit applications. This means that the next version of macOS will have stripped out all 32-bit functionality from the Kernel and any frameworks that still have it, and most likely Carbon will be finally put to rest.

From the point of view of a developer, this is great! Refinement, cleaning up and optimisation in the tools I use every day is a great thing and makes my job easier. As someone who enjoys playing older games, such as EV Nova, this is not so great.

The latest versions of the EV Nova binary only contain code for the i386 architecture (32-bit Intel). Earlier versions also contain 32-bit PowerPC, but that’s even more obsolete! From now on, it looks like if I ever want to play EV Nova I’ll be firing up a VM in order to do so.

When I first began investigating the ResourceFork format back in 2014, I was aware that Carbon was already old news and utterly deprecated. I had hoped to be able to complete the entire EV Nova clone project over the course of a few years. I had not really anticipated how much other events in my life would begin to consume my time, thus causing the project to take longer.

Now here we are, more or less the end of the road for the technologies of the old world, and I’ve not been able to get the game completed or really even started in any meaningful capacity. I just have a bunch of research and information on those old formats and technologies.

Forging a new path

Over the past week I have been thinking (arguably a dangerous thing) about the future direction of the project. The point of this whole thing, and this project is to allow EV Nova to survive. Immortalised as an Open Source project that can be updated when required.

This means I need to do it right. I need to make it future proof. I need to make it portable, and detached from core technologies of any given operating system. So I’m going back to ground zero. I’m going to rebuild ResourceKit and ClassicKit. I’m going to introduce another framework NovaKit, which will be responsible for handling EV Nova resources and the likes. Each of these will be built in Swift using the package manager allowing them to be easily ported to Linux or Windows.

They will then be used by the main engine which will also be Swift, using the package manager again to provide and construct the core functionality of the game, sans any kind of graphical output. This is to ensure it has no platform specific functionality.

Finally, a “rendering layer” will provide specific functionality for producing the game’s output and handling input for a given platform. Different platforms would include different rendering layers, i.e. macOS might make use of SpriteKit or Metal, whilst Linux might make use of OpenGL, and Windows DirectX.

Player Data – The NpïL resource

One of the more irritating resources to deal with is the player data resource, or the NpïL resource. Before we get into how this resource can be read, we need to first understand some of the complexities and issues with it.

  1. For whatever reason AmbrosiaSW never migrated this particular file to flattened ndat files. This means that the actual NpïL resource is inside a true resource fork. This is illustrated by the fact that when we drop the file into a hex editor, we see no content at all, despite Finder reporting an 87KB file size.

    Screenshot 2019-05-27 at 07.46.05.png

    This is a problem as none of the Cocoa or POSIX APIs can open or read from the resource fork data. The only APIs that can are the Carbon APIs. Thankfully the macOS virtual file system provides us a “hack” to read the resource fork as a data fork (though who knows for how much longer). We’ll take a look at this later.

  2. The second issue is that the resource data itself is encrypted. There is a document online covering the actual data layout of the pilot resources (I believe by guy). However I cannot currently find it, so I’m not able to link to it. I have attached two screenshots beneath of the code structures representing the two NpïL resources.

    Screenshot 2019-05-27 at 07.56.14.png
    NpïL – 128 “Pilot Data” – structure

    Screenshot 2019-05-27 at 07.56.21.png
    NpïL – 129 “Ship Name” – structure

    Yep, both resources have a different format internally, which means they can’t be read with exactly the same process. This is rather annoying. On top of that, if we parse the resources using these structures the data is completely garbled and insane due to the encryption. We’ll get into the decryption in a bit.


The Contents of a Pilot File

Screenshot 2019-05-27 at 07.51.47.png
Good old ResEdit and Mac OS 9 to the rescue

Nothing on modern macOS will allow us to look at the contents of a resource fork easily. It just doesn’t exist. The APIs required were deprecated 10 years ago, and do not function correctly under 64-bit.

This means that liberal use of SheepShaver (a PPC emulator for Classic Mac OS) is required. As you can see, this is not easy to look at – even unencrypted binary data is hard to read, and this especially so. As far as I can tell there is zero documentation about this encryption online, or at least zero surviving documentation. This means that the only way to deal with it was by reverse engineering EV Nova itself.

Screenshot 2019-05-27 at 08.11.57.png
The routine inside EV Nova responsible for Loading Pilot Data

This is Hopper Disassembler, a tool used for investigating the contents of application binaries. I’m using it to reverse engineer parts of EV Nova itself. In addition to finding the _LoadPilotData routine, I also ran it through Hopper’s decompiler. The result can be found here.

The actual encryption algorithm is listed below. Being the result of a disassembly and decompilation, it is annoyingly cryptic and difficult to follow.

Screenshot 2019-05-27 at 08.21.34.png
EV Nova’s pilot encryption algorithm

To be fair, I’ve written the code out in much the same way; many of the original variable names are gone thanks to the original compilation of EV Nova. That’s fine – we don’t technically need them, they just make it easier to read.

The 3 arguments in the function are as follows:

  1. arg0: The data to encrypt/decrypt
  2. arg1: The size of the data to encrypt/decrypt
  3. arg2: The encryption key (which is hex value 0xb36a210f)

So once you have all of this sorted out and resolved you are ready to read the contents of a Pilot File!

Not quite. Pilot resources aren’t in ndat files for some reason.


The ResourceFork under modern macOS

There is something fundamentally wrong with this, and it honestly surprised me. Why were pilot files not migrated to ndat files? My assumption is the existing pilot files out in the world and the desire not to render them dead and forgotten. That said, I’m not sure that argument would have been valid, as the same resource loading APIs are used for both ndat and pilot files alike.

As I mentioned earlier, it is impossible to load a resource file using modern, non-deprecated APIs. Carbon provides the only way to do this, and it is 32-bit only. It will not work under 64-bit.

However the macOS virtual file system does provide a means of reading resource fork data into a separate data file, or via a pipe.

$ cat /path/to/resource-fork/..namedfork/rsrc

By appending the /..namedfork/rsrc to a file, you can read its ResourceFork.

Let’s return to Hex Fiend, and attempt to load the pilot file again, but using this command instead.

Screenshot 2019-05-27 at 08.36.33.png
At the bottom you can see the ship name!

It all worked successfully. Armed with all these tools, it is possible to fully read and handle pilot files from the original EV Nova engine.

An update on very slow progress

So it’s been quite a while since my last update on here. That said I’ve been posting a little more frequently on the /r/evnova subreddit. The last year or so has been hectic, leaving me with very little time to work on this project (moving, renovating, and my partner and I are expecting our first child very soon).

That said I do work on this whenever I can, or have the motivation to try and conquer some of the stubborn formats used. I’m now getting closer and closer to having everything parsed, and I think I have all of the documentation required to do so.

Since my last post I have managed to determine/decode the following formats (for which I’ll provide some documentation soon.)

  1. The character file encryption. The npïl resource was never heavily documented to begin with, but information does exist in vanishingly few places. However, at some point encryption was added to the npïl data, making it impossible to edit as easily as before. I’ll make a post about this soon.
  2. The pattern resource, or ppat, is used to provide the static on the status bar in the mini map. It’s rather annoying because it probably could have been provided as something else and been easier to work with… well, back then I suppose APIs existed to do just that!
  3. The sound resource, or snd, is a cocktail of varying formats, and the extension ‘snd’ has been used for so many different file formats over the years that it is almost impossible to determine how to decode this thing. I’ve found old Apple documentation about ‘Type 1’ and ‘Type 2’ sounds, but they are very limited on details. Hopefully it gives a starting point for writing a full parser.
  4. The color icon, or cicn, is another complex format that I just need to get done. I have documentation for it, as with ppat and snd, but it covers only high-level details and doesn’t go into the exact structure.
  5. DLOG/DITL. These are used to define window layouts and Nova uses them to define the layout of windows such as Shipyard, Mission Computer, Landing Screen, etc… all of them in fact.

Well that’s all for now. Hopefully I’ll begin to have more things to show soon.

Starting on the Engine

OpenNova on an iPad

This is by no means a full engine yet. It handles the opening sequence and gets the main menu to this state. Utterly useless to a player of the game. However this represents a major milestone.

For this screen to be rendered properly using nothing but the Nova data files, it needs to be able to decode rlëD and PICT resources, as well as understand bespoke Nova resources such as cölr and spïn. There are still a lot of other resource types that need parsing to make the engine complete. Not to mention replicating all the game logic!

I’m building the game in SpriteKit. This means that it should be possible to get it to run on macOS, iOS and tvOS all with the same basic codebase, although control systems are going to be more problematic. EV Nova’s built in system only really works with a Mac.

I think I’m probably going to make the UI (things like interaction, new pilot, settings, etc) be a little more friendly to touch environments on iOS, whilst leaving it as is on macOS.

Run Length Encoding – RLË Resources

EV Nova encodes all of its sprites in the RLË format. Specifically split between rlë8 and rlëD in the Nova engine. The 8 and D correspond to the colour/bit depth of the sprite images. For modern computers we’re only ever going to be concerned about rlëD.

RLË, as it turns out, is very similar to, but not as complex as, the PICT format. It uses a series of opcodes to instruct the decoder on how to produce an image. Luckily, it turns out I made a crude prototype of a decoder quite a while ago. All I had to do was update the code and it worked.

The Starbridge. The icon of EV Nova, literally.

Well as it turns out, we’re now loading in all graphical assets except for the CICN and PPAT assets. Everything is just game data.