Beta 1.0.5 version 2

There’s a new beta available for 1.0.5. This is version 151214.

Thanks very much to everyone who has been testing and sent bug reports and feedback.

Important!

If you have a mod that has fonts in it, and you built your mod using Beta 1.0.5 build 151026, this version is going to crash when loading it. In fact, loading fonts from 1.0.4 with the previous build would also crash, which stopped nearly all translations from working.

Change List

Here are the changes to this build:

  • Fixed a bug that caused fonts from 1.0.4 to not load in 1.0.5. A UCS2 to UTF8 conversion wasn't performed properly.
  • Fixed a bug that caused dropped resources (from citizen death/task cancellation) to drop in invalid places.
  • Fixed a bug that caused invalid data access and/or data corruption in orchards if a citizen tried to harvest a tree, but the tree died before he got there.
  • Fixed a bug that caused potential memory corruption when cutting down an orchard's trees.
  • Fixed a bug that caused a crash if game startup failed before memory allocation was available or memory was corrupt. It now properly displays an error.
  • Added a better error message if the game runs out of memory due to too many mods being loaded.
  • Fixed a bug that caused a crash when loading old mods that had custom materials. The game will no longer crash; however, objects with those materials will not display. To fix this issue, mods should be updated with the newest mod kit version and have their materials updated.

How to get the build

If you are using Steam, go into your game library and right click on Banished. Select Properties, and in the window that opens, select the BETAs tab. Open the drop down and pick Beta Test for 1.0.5.

If you don’t use Steam, you can download the patch here: BanishedPatch_1.0.4_To_1.0.5.151214.Beta.zip. Note that you need to apply the patch to version 1.0.4. Previous versions of the game won’t work with this patch. Once downloaded, just unzip the archive into the folder where you have Banished installed. This is usually C:\Program Files\Shining Rock Software\Banished\.

If you’re into modding, you can get the beta mod kit here: BanishedKit_1.0.5.151214.Beta.zip.

When will this build not be a Beta?

I’ve fixed all the bugs that were reported to me, so if there aren’t any serious bug reports in about a week, I’ll push this build live to everyone.

As before, if you find a problem, I’d like to hear about it. You can submit bugs on the forum in the new beta sub forum. Or through the regular Support methods.


Graphics Drivers

Gah. So if you saw the last post I made about OSX, you may remember it was running at 1 FPS.

I spent a lot of time thinking about this issue and quite a bit of time trying to code solutions. Despite OpenGL being a ‘cross platform’ library, at this point I’m pretty sure each platform that uses it is going to have to be tailored to that platform’s specific graphics drivers.

Here’s my debugging method. (This is going to sound elegant as I type this out, but there was a lot of stumbling and double and triple checking things…)

One Frame Per Second

So I’m sitting there looking at the game chug along at 1FPS, and thinking: the loading screens run fast, but the title screen runs miserably. The loading screens have 1-3 draw calls per frame, whereas the title screen has hundreds, if not thousands. Something per draw call must be going slow.

Sure enough, if I don’t make any draw calls, things run fast, but this is mostly useless, since I can’t see anything.

A few thoughts enter my mind.

Hypothesis

  1. The graphics driver is defaulting to software rendering or software transformations.
  2. I’m doing something that’s not OpenGL 3.2 compliant, or doing something causing OpenGL errors.
  3. The GPU is waiting on the CPU (or vice versa) for something.

The first idea just shouldn’t be possible, as the pixel format I selected on OSX (an OpenGL thing that specifies what kind of rendering you’ll be doing) requires hardware acceleration and no software fallback. But I’ll double check.
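As a rough sketch (not the game’s actual startup code), requesting a hardware-only 3.2 core format through the CGL C API looks something like this:

    #include <OpenGL/OpenGL.h>   // CGL - the C interface for pixel formats/contexts on OSX

    // Sketch only: request an OpenGL 3.2 core profile that must be hardware accelerated,
    // with no software renderer fallback allowed.
    bool CreateHardwareOnlyContext(CGLContextObj* outContext)
    {
        CGLPixelFormatAttribute attribs[] =
        {
            kCGLPFAOpenGLProfile, (CGLPixelFormatAttribute)kCGLOGLPVersion_3_2_Core,
            kCGLPFAAccelerated,                 // hardware rendering only
            kCGLPFANoRecovery,                  // disallow the software fallback
            kCGLPFADoubleBuffer,
            kCGLPFAColorSize, (CGLPixelFormatAttribute)24,
            kCGLPFADepthSize, (CGLPixelFormatAttribute)24,
            (CGLPixelFormatAttribute)0
        };

        CGLPixelFormatObj pixelFormat = nullptr;
        GLint formatCount = 0;
        if (CGLChoosePixelFormat(attribs, &pixelFormat, &formatCount) != kCGLNoError ||
            formatCount == 0)
        {
            return false;                       // no hardware accelerated format available
        }

        CGLError error = CGLCreateContext(pixelFormat, nullptr, outContext);
        CGLDestroyPixelFormat(pixelFormat);
        return error == kCGLNoError;
    }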

The second idea is somewhat likely, but I worked very hard to make the Windows renderer OpenGL 3.2 compliant and it doesn’t show any errors. But I’ll check anyway since it’s a different driver and different GPU using the same code.

Third idea? Let’s hope it’s not that.

Testing

How do you check something like this? There are some sorta-ok GPU debugging tools available for OSX, so I downloaded them and started them up. After a little documentation reading, I got them working. You can set OpenGL breakpoints which will stop the program and give a bit of information if there’s an error or if you encounter software rendering.

[Image: OpenGL Profiler with breakpoints set]

Of course nothing is easy. No OpenGL errors, no software rendering. This immediately discounted ideas #1 and #2. So it’s probably #3. Something is syncing the CPU and GPU. Blah.

Next I looked at what OpenGL calls were being made and how long they were taking.

[Image: OpenGL call statistics showing slow draw calls]

Ah ha! You’ll notice the highlighted lines (which are draw calls), and that OpenGL calls are taking up a crazy 98% of the frame.

Looking closely at individual calls, the huge time differences can be seen between glDraw calls and other API calls…

[Image: trace showing a single slow draw call]

Having written low level code for consoles that don’t really have a driver has given me a good understanding of what goes on when the CPU sends commands to the GPU, and what can cause a stall. Generally a stall happens either when the CPU wants to update dynamic resources that the GPU is currently using, or when the CPU is waiting for the GPU to finish some rendering so it can access a rendered or computed result.

I only have three places in code that might cause this. The first one I looked at is updating the vertex and index data used for dynamic rendering, which is used for particle systems, UI, and other things that change frame to frame.

The (abbreviated) code looks like this:

    GLbitfield flags = GL_MAP_WRITE_BIT;
    if (_currentOffset + bytes > _bufferBytes)
    {
        // at the end of the buffer, invalidate it and start writing at the beginning...
        flags |= GL_MAP_INVALIDATE_BUFFER_BIT;
        _currentOffset = 0;
    }
    else
    {
        // there's still room, write past what the GPU is using and notify that there's no
        // need to stall on this write.
        flags |= GL_MAP_UNSYNCHRONIZED_BIT;
    }
        
    glBindBuffer(GL_ARRAY_BUFFER, _objectId);
    void* data = glMapBufferRange(GL_ARRAY_BUFFER, _currentOffset, bytes, flags);    

    // write some data ....

    glUnmapBuffer(GL_ARRAY_BUFFER);

    // draw some stuff with the data at _currentOffset.

    _currentOffset += bytes;

It’s set up so that generally you’re just writing more data while the GPU can use data earlier in the buffer as it’s needed. Occasionally, when you run out of room, you let the driver know you’re going to overwrite the buffer. (This can be done better with multiple buffers, but I didn’t want to overcomplicate this example code; a rough sketch of that variant follows.)
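In case it helps, here’s what that multi-buffer variant might look like, assuming the engine’s int32 typedef and the same _bufferBytes member as above; this is a hypothetical sketch, not the engine’s actual code:

    // Hypothetical multi-buffer variant: rotate through a small ring of buffers so an
    // invalidate never targets a buffer the GPU may still be reading this frame.
    static const int32 BufferCount = 3;
    GLuint _objectIds[BufferCount];
    int32 _currentBuffer = 0;

    void* MapForWrite(int32 bytes)
    {
        GLbitfield flags = GL_MAP_WRITE_BIT;
        if (_currentOffset + bytes > _bufferBytes)
        {
            // out of room - advance to the next buffer in the ring instead of
            // invalidating the one the GPU is probably still consuming.
            _currentBuffer = (_currentBuffer + 1) % BufferCount;
            _currentOffset = 0;
            flags |= GL_MAP_INVALIDATE_BUFFER_BIT;
        }
        else
        {
            flags |= GL_MAP_UNSYNCHRONIZED_BIT;
        }

        glBindBuffer(GL_ARRAY_BUFFER, _objectIds[_currentBuffer]);
        return glMapBufferRange(GL_ARRAY_BUFFER, _currentOffset, bytes, flags);
    }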

This didn’t seem to be the problem, as nearly every draw call was slow. Drawing that used fully static data was slow too. Static data is set up with code that looks like this:

    glGenBuffers(1, &_objectId);
    glBindBuffer(GL_ARRAY_BUFFER, _objectId);
    glBufferData(GL_ARRAY_BUFFER, bytes, data, GL_STATIC_DRAW);       

That data isn’t ever touched again, and hopefully the GPU takes the hint that it can reside in GPU memory so no problem there.

But then I noticed that not every draw call was slow. Using the OpenGL Profiler trace I could see that sequential draw calls without any changes to any render state in-between did not stall.

[Image: trace showing sequential draw calls running fast]

Hmmmm….

What’s the most common thing that changes between draw calls? If it’s not the material on the object, it’s the location where that object is drawn. Its transformation: position and orientation. Transformations are generally stored in a very fast (and fairly small) section of GPU memory meant just for this purpose. It’s also where the camera location, object color, and other variable properties are stored. We call this data ‘uniforms’. Or in my engine, ‘constants’.

In OpenGL 3.2 I used uniform buffer objects, since it most closely matches my engine architecture and that of DX10/11. DX9 fits the concept as well, since you can specify the location of all uniforms. Seems like a good fit.
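For context, the pre-configuration mentioned below is roughly this one-time setup per shader program and uniform block; a sketch with illustrative names (programId, bindingPoint, and "PerObjectConstants" are mine, not the engine’s):

    // Sketch of the typical one-time setup for a GL 3.2 uniform block. Done once per
    // program/block, not per frame.
    GLuint blockIndex = glGetUniformBlockIndex(programId, "PerObjectConstants");
    glUniformBlockBinding(programId, blockIndex, bindingPoint);

    // Create the buffer that backs the block and attach it to the same binding point.
    glGenBuffers(1, &_objectId);
    glBindBuffer(GL_UNIFORM_BUFFER, _objectId);
    glBufferData(GL_UNIFORM_BUFFER, bufferBytes, nullptr, GL_DYNAMIC_DRAW);
    glBindBufferBase(GL_UNIFORM_BUFFER, bindingPoint, _objectId);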

After some pre-configuration, sending uniforms to the GPU for vertex and pixel programs to use is really easy. It looks like this:

void ConstantBuffer::Bind(Context& context, void* data, int32 offsetBytes, int32 bytes)
{
    glBindBuffer(GL_UNIFORM_BUFFER, _objectId);
    glBufferSubData(GL_UNIFORM_BUFFER, offsetBytes, bytes, data);
}

To my knowledge this should be crazy fast. On some hardware (way down at the command stream level) this data is part of the command buffer and gets updated just before the vertex and pixel shaders are invoked. Worst case, if it’s actually a separate buffer the GPU uses, and/or the driver supports reading this data back on the CPU, the driver needs to copy it off somewhere until the GPU needs it, so the last set values can be read back by the CPU without any stall…

But you never know….

I read the OpenGL docs again, and sure enough, glBufferSubData can cause a stall: the call has to wait for previous commands to finish consuming the previous values.

“Consider using multiple buffer objects to avoid stalling the rendering pipeline during data store updates. If any rendering in the pipeline makes reference to data in the buffer object being updated by glBufferSubData, especially from the specific region being updated, that rendering must drain from the pipeline before the data store can be updated.”

Really? Why? Setting uniforms HAS to be fast. You do it almost as often as issuing draw commands!!! This has been true since vertex shader 1.0. (Yeah I know, this doesn’t have to be quite true for some of the newest GPUs and APIs)

So for kicks, since there’s more than one way to modify buffer data in OpenGL, I changed the ConstantBuffer update to:

void ConstantBuffer::Bind(Context& context, void* data, int32 offsetBytes, int32 bytes)
{
    glBindBuffer(GL_UNIFORM_BUFFER, _objectId);
    void* destData = glMapBufferRange(GL_UNIFORM_BUFFER, offsetBytes, bytes, GL_MAP_WRITE_BIT);
    memcpy(destData, data, bytes);
    glUnmapBuffer(GL_UNIFORM_BUFFER);
}

And while in my mind there really shouldn’t be any difference, the statistics on OpenGL commands change to this:

[Image: statistics showing the wait time moved to glMapBufferRange]

Huh, there’s all that wait time again, but it’s moved to setting uniforms. Now I’m getting somewhere. I figure I’m just not using the API correctly when setting uniforms.

Experimentation

So I tried a bunch of different things.

I tried having a single large uniform buffer using the GL_MAP_INVALIDATE_BUFFER_BIT / GL_MAP_UNSYNCHRONIZED_BIT and glBindBufferRange() so that no constants were overwritten. This was slower. And yes, you can get slower than 1 FPS.
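For reference, that attempt looked something along these lines; this is a reconstruction under the same assumptions as the dynamic vertex buffer code above, not the actual engine code:

    // Reconstruction of the single-large-uniform-buffer experiment: each draw call's
    // constants are appended to one big buffer and the shader's uniform block is
    // rebound to just that range.
    glBindBuffer(GL_UNIFORM_BUFFER, _objectId);
    void* dest = glMapBufferRange(GL_UNIFORM_BUFFER, _currentOffset, bytes,
                                  GL_MAP_WRITE_BIT | GL_MAP_UNSYNCHRONIZED_BIT);
    memcpy(dest, data, bytes);
    glUnmapBuffer(GL_UNIFORM_BUFFER);

    // Point the block's binding point at the freshly written range. In practice the
    // offset has to be rounded up to GL_UNIFORM_BUFFER_OFFSET_ALIGNMENT.
    glBindBufferRange(GL_UNIFORM_BUFFER, bindingPoint, _objectId, _currentOffset, bytes);
    _currentOffset += bytes;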

I tried having a uniform buffer per draw call so they were never overwritten, except between frames. This was slower, using either glMapBuffer or glBufferSubData.

I tried changing the buffer creation flags. No change.

I read about other coders running through their entire scene, collecting uniforms, updating a uniform buffer once at the beginning of the frame, and then running through the scene again just to make draw calls. This is stupid and slow.

I wished I could use a newer version of OpenGL to try some other options, but I’m using 3.2 for maximum compatibility.

Eureka!

Then I got a sinking feeling in my stomach. I knew the answer (actually was pretty sure…) but I didn’t want to code it. Ugh.

Back before OpenGL 3.0 / DirectX 10, there weren’t any uniform buffers. Uniforms were just loose data that you set one at a time using functions like glUniformMatrix4fv and glUniform4fv.

What isn’t great about the old way is that every time you change vertex and pixel programs, you need to reapply all the changed uniforms that the next GPU program uses. OpenGL 3.2 doesn’t let the shader pick where uniforms go in memory, so you always have to look the locations up, and the location of each uniform variable can change from shader to shader.
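Concretely, that lookup means querying and caching every uniform’s location after each program links, since the location can differ per program; a small sketch (names are illustrative, not the engine’s):

    // Locations of loose uniforms are only known after linking and can differ between
    // programs, so they get queried once and cached per program.
    struct UniformLocation
    {
        const char* _name;
        GLint _location;   // -1 if this program doesn't use the uniform
    };

    void CacheUniformLocations(GLuint programId, UniformLocation* uniforms, int count)
    {
        for (int i = 0; i < count; ++i)
            uniforms[i]._location = glGetUniformLocation(programId, uniforms[i]._name);
    }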

With uniform buffers, if you set some values once and it doesn’t change the entire frame there’s nothing else to do.

So I went about changing the engine to use the old old way.

  1. First I had to change all the shaders to not use uniform buffers. Luckily I have the shader compiler, so this was a few lines of code instead of hand editing hundreds of shaders.
  2. Then I sat around for a few minutes while all the shaders regenerated and recompiled.
  3. Next I had to record, per vertex/pixel program combination, which uniforms were used and where they needed to be uploaded. This was a non-trivial amount of code to write.
  4. Then, any time a shader changed, I had to change the code to dirty all uniforms so they’d be reapplied.
  5. Then I had to write a new uniform binding function.

Here’s the new constant binding function. Pretty messy memory-wise, and many more calls to the GL API per frame.

void ConstantBuffer::Bind(Context& context, void* data, int32 offsetBytes, int32 /*bytes*/)
{
    _Assert(offsetBytes == 0, "can't upload with non-zero offset");
        
    const VideoProgram* program = context.GetVideoProgram();
    const Collection::Array& upload = program->GetUploadInfo(context.GetDetailLevel(), _ordinal);
        
    for (int32 i = 0; i < upload.GetSize(); ++i)
    {
        const VideoProgram::UploadInfo& uploadInfo = upload[i];
        switch (uploadInfo._type)
        {
            case GL_FLOAT_MAT4:
                glUniformMatrix4fv(uploadInfo._index, uploadInfo._size, 
                                   false, (float*)data + (uploadInfo._offset * 4));
                break;
            case GL_FLOAT_VEC4:
                glUniform4fv(uploadInfo._index, uploadInfo._size, 
                            (float*)data + uploadInfo._offset * 4);
                break;
        }
    }
}

Success

Finally I watched the game run at 60 FPS. So now the statistics are nicer. And only 5% CPU time spent in OpenGL. Woot.

[Image: statistics after the fix]

Graphics Drivers

Ok, so the driver is optimized to set loose constants very quickly, but when presented with a block of them it just stalls waiting for the GPU to finish? I don't get it. The Windows drivers seem to handle uniform buffers properly. I understand writing the driver to the OpenGL spec - but geez, this makes uniform buffers mostly useless. It's known to be a uniform buffer, the calling code is updating it, it's marked as DYNAMIC_WRITE, so why isn't the driver doing exactly the same thing my manual setting of each uniform value does???? Arhghghghg.

I'm sure someone has a good answer as to how to update uniform buffers on Mac OSX, but I couldn't find it. Or maybe the answer is upgrading, or not using them? But this was debugging hours I didn't need to spend. Actually I take that back. Tracking down issues like this is pretty satisfying...

So I can just keep the code the way that works on Mac, but uniform buffers are so much more elegant. Plus what if Linux runs faster with uniform buffers instead of loose uniforms? Or if Windows does? Then I have to generate two different OpenGL shaders, and have different code per platform to get the same data to the GPU. Now I'm not so worried that the Windows OpenGL implementation was slightly different from OSX, because I can see the implementations are going to be driver dependent anyway...

OpenGL is cross platform? Sorta. Yikes.


Quick Update…

Things are progressing on both the Mac port and the current Beta.

OSX

I did figure out the slowdown with OSX running at 1 FPS. Apparently there are certain functions in OpenGL 3.2 that are just unusable in the OSX driver because they cause a full GPU pipeline flush and the CPU just waits around doing nothing while it happens.

Really this just means that the OSX version of the OpenGL renderer now diverges from the PC (and possibly Linux) version. That's okay; it just means maintaining the renderer over time is slightly more annoying, and that my shader compiler now outputs different vertex and pixel shaders for OpenGL depending on the target platform. Gah!

I’ll probably write more on this later after testing a few more GPU/OS configurations to make sure the PC version doesn’t have to change.

Beta Version

As for the beta, there will most likely be an update soon to fix a few issues. There are two very common bugs that people are reporting.

The first is for mods that had custom materials built. In the developer build, those materials would just fail to draw anything, but in the release build the material validity check was skipped (supposedly for performance reasons), and as soon as those materials ended up on screen, a crash would occur. That was an easy fix.

The other issue occurs when cutting down trees in an orchard. There’s a bug where the game tries to access the cut down tree after it’s removed, causing a potential crash. Also an easy fix.

Windows 10

What’s stopping me from updating the beta right now is that there are bugs I need to dig into, but can’t. Unfortunately Windows 10 (I think) is causing me issues, and I can’t currently use many of the crash dumps that people have sent.

What’s happening is that when the game detects the errors I added additional checking for, it forces an exception so that a proper debug crash dump can be output. The instruction at the top of the call stack happens to be in a system DLL when this occurs. If the crash dump is generated on Windows 10, my Windows 7 machine doesn’t have debug information for (or even a copy of) the newer updated DLL and therefore can’t generate a proper stack frame to begin walking the stack to see where the error occurred.

Basically this means I can’t read these crash dumps, and I just need to upgrade my development machines to Windows 10.

I’ve been avoiding this because I hate doing OS reinstalls. There’s a lot of software to install to get up and compiling and developing, and I have a lot of computers to update. I end up half-working on other machines but mostly looking at progress bars. There’s also a big chunk of time spent making sure there’s nothing local to the hard drives that isn’t already on the server or NAS before they get wiped.

What I’ll probably end up with is my main desktop and new laptop on Windows 10, a desktop machine that dual boots Windows 7 and Linux, and my old development laptop becoming a Linux laptop.

Time to wait on HDD formats and progress bars…

Edit: Thanks for the tip about using the MS symbol server… it works well. Goes to show there’s always something new you haven’t used or known about, even after 20-some years of programming. Windows 10 is still a good idea though, as I’ve had a few Windows 10 specific bug reports…
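(For anyone else hitting this: pointing the debugger at Microsoft’s public symbol server is standard Windows debugging setup, typically done through the _NT_SYMBOL_PATH environment variable; the local cache folder is whatever you prefer.)

    _NT_SYMBOL_PATH=srv*C:\Symbols*https://msdl.microsoft.com/download/symbols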


OSX Progress

This was a pretty good week of coding. After implementing some core platform specific code and moving the Windows OpenGL code to Mac OSX, the game rendered on the screen without any issue.

Tada!


[Image: Banished running on OSX]

Well, I shouldn’t say without any issue. It runs at less than 1 frame per second and water isn’t rendering correctly. There’s still no sound, no input, and no Steam integration, and it lacks the ability to resize the window and shut down. But I’m getting there. It’s good progress.

Working on Mac and doing this port has been a good experience. This week I hope to head back and spend time on the current Beta version a bit to work out some issues, then get back to the Mac version.

In the meantime, here are some thoughts on working on Mac. I’ve been using Windows exclusively (except for console programming) for a long time, and OSX is a new system to me, so don’t be offended if I get things wrong. :)

Using a Mac

I don’t know anything about OSX. When I started I could open the web browser, and that’s about it. I stumbled with the user interface for a bit and couldn’t figure out how an ‘All Files’ category was useful when browsing Finder windows, but then I realized I could open a terminal window and that I was really using a unix-like system with a user interface that wasn’t X Windows. Good deal, I can do that.

So after that, file organization and using the machine was easy, and I was comfortably editing files using vi. Apparently my vi command muscle memory hasn’t totally faded.

Development Environment: Xcode

Xcode isn’t too bad. It was fairly intuitive to bring the Banished source into it, set up the required compiler settings, and get to work. I mostly turned off its auto-formatting since it does things that don’t match my code style, and the code completion is a bit overdone, but overall it’s an IDE that gets the job done.

Languages: Objective-C

When I look at Objective-C and the general Cocoa libraries, I feel like I’m looking at a foreign language I haven’t used in 20 years. I bought a book on it to help out. I mostly get it, but I just feel like I’m missing a fundamental knowledge base. Luckily there’s not that much of it I have to write or use before it jumps straight into the C++ code that’s common to all platforms.

I just need to spend more time with it. And read the book.

Code portability

I had the best intentions of writing my game engine using portable C++. I didn’t do anything crazy with the language, and any platform specific chunks were tucked away in their own files to be replaced per platform.

While that was a good start, I don’t think it’s possible to actually write cross platform code until you’re compiling the code with multiple compilers. Even before I could get to writing the OSX specifics, I had many errors and warnings that clang presented that the Microsoft compiler just overlooked.

Most of these had to do with templates, which the Microsoft compiler doesn’t fully check until they’re instantiated, while clang checks them at definition time. There are command line flags for a compatibility mode, but I’d rather just fix the issues so that other compilers don’t hit the same errors.
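An illustrative example of the sort of thing that slips past the Microsoft compiler (this snippet is mine, not from the engine):

    // clang performs two-phase lookup and checks templates at definition time; older
    // Microsoft compilers deferred much of this until instantiation.
    template<typename T>
    struct Holder
    {
        // A dependent type needs 'typename'; MSVC historically accepted it without.
        typename T::ValueType _value;

        void Broken()
        {
            // A non-dependent name must resolve when the template is defined.
            // clang errors here immediately; MSVC only complained if this function
            // was ever instantiated.
            functionThatDoesNotExist(42);
        }
    };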

Once the common code was compiling cleanly, I started writing things specific to OSX: memory management, file I/O, timers, date handling, threading, etc. What’s nice is that, being a unixy environment, a ton of the platform specific code will also work on Linux and Steam Box. When I get to working on Linux more fully, it should go quickly.

I did have a few issues that required changes to both the Windows and Mac code bases to make sharing more code possible, but it was an easy refactor.

I’m glad I spent the time to write a common shader language, switch to UTF8, and make other changes to the engine to make porting easier. I could have just ported as the code was, but it would have been a lot more labor intensive and bug prone.

Currently the toolset for Banished only works on Windows, so it’s really just the engine and game code that are being written for OSX. This is a little annoying, as all the data has to be compiled on the PC and then the Mac just reads it. I should convert everything to work on all platforms so that I don’t have to flip-flop between machines, but that would take a lot of additional effort.

OpenGL

The OpenGL code I wrote for Windows came over smoothly, without any compile errors, although I did have to write the platform specific startup code to create pixel formats and OpenGL contexts. I’m not using SDL or any other library to hide those platform differences.

I unfortunately have two different sets of the same GL code now. The Windows code is nearly identical, but since Windows supports multiple renderers (DX9/DX11/GL/etc.) there are some abstractions and separations of data that don’t exist on Mac.

Copy-pasted-slightly-edited code like this annoys me a bit but at the moment I don’t have a good way to abstract away the differences. It’s not terrible, but a little frustrating to have to update two sets of the same code when bug fixes are made. At least the Linux build should share the code 100%.

If nothing else, it was satisfying to actually see the title screen come up on the first try after bringing over the OpenGL code. If I was writing for a console (or other graphics API) it’d probably take an entire week just getting the first triangle on the screen.

The fact that it’s running at 1 FPS is a little disheartening, since I know the GPU is fast enough. I’ve got a Windows machine with an Intel Iris Pro 5000 that runs Banished just fine, and that’s the same graphics hardware that’s in my MacBook Pro. I’ve got my suspicions as to what’s going on, but I have a bunch of testing ahead of me to make sure I fix the issue properly.

More Coding

I’ve got a bit more coding and learning about Mac to do before the port is done. I don’t expect the remaining tasks to be too arduous – playing sounds, reading input, and compiling the Steam library should be easy. ish. Maybe.


Beta Build!

I just uploaded the new beta build (This is version 1.0.5 Beta, build 151026) to Steam, and also a patch for those of you that have the stand alone version. I’m going to be providing a bit of detail here for people that want to beta test and modders that want to try out the changes to the mod kit.

So first things first.

Where do I get the beta build?

If you have the game on Steam, go into your game library and right click on Banished. Select Properties, and in the window that opens, select the BETAs tab. Open the drop down and pick Beta Test for 1.0.5.

If you don’t use Steam, you can download the patch here: BanishedPatch_1.0.4_To_1.0.5.151026.Beta.zip. Note that you need to apply the patch to version 1.0.4. Previous versions of the game won’t work with this patch. Once downloaded, just unzip the archive into the folder where you have Banished installed. This is usually C:\Program Files\Shining Rock Software\Banished\.

If you’re into modding, you can get the beta mod kit here: BanishedKit_1.0.5.151026.Beta.zip.

What’s changed in this beta build?

This beta build is mostly to test major engine changes that have been made to support multiple platforms for Banished and future games. It also contains a few bug fixes, and some changes to the mod kit. Note that save games and mods made with 1.0.5 beta will not work in 1.0.4. In fact they’ll probably crash the game if you try to move saves or mods over. Old mods and saves should work fine with the new version.

Here’s the change list:

  • UTF8 is now used instead of UCS2.
  • Resource files can be in UTF8, UCS2, or UTF16, big or little endian. They’ll be converted to UTF8 on load.
  • The memory usage allowance has been increased to 1 gigabyte, which should allow for larger mods.
  • All materials now use the custom shading language SRSL instead of HLSL.
    • Any mods with custom materials will need to be modified to point to the new shaders and/or use SRSL.
  • The math library can now be compiled without the need for SIMD instructions.
  • OpenGL is now supported (but isn’t currently being released with the PC version).
  • Data compilation is now in a separate DLL – CompileWin.dll – which can be swapped out for other platforms (consoles, Mac, Linux, etc.).
  • The shader compiler is now in its own DLL. The Video DX9/DX11/GL DLLs are no longer required for compiling shaders.
  • Added safety code to check for invalid and dangling pointers – this should make catching hard-to-find and rare issues easier.
  • Sped up the mod details dialog for massive mods that include tens of thousands of files. This should make looking at conflicts and uploading to the Steam Workshop easier.
  • Beta mods and mods newer than the currently released version can no longer be uploaded to the Steam Workshop.
  • Nvidia and AMD GPUs in laptops should now be auto-selected for use, instead of an Intel integrated card.
  • The Textile limit is now available for modders to use.
    • Crop fields, Fishing, Forester, Hunters, Orchards, and Pastures now have a configurable resource limit.
    • Livestock has a resource limit for the byproduct they make (eggs, wool, milk, etc.). Note that if a byproduct isn’t created because of the resource limit, the icon won’t appear above the building.
    • Added Textile to the Status Bar, Resource Limit window, and Town Hall UI.
    • Added graphs for Textiles to the Town Hall UI.

What if I find a problem with the Beta Build?

If you find a problem, I’d like to hear about it. You can submit bugs on the forum in the new beta sub forum. Or through the regular Support methods.

One major issue I’d like to know about if it occurs is a new debugging feature to help track down object reference errors. If you get an error that looks like one of the following images, I really want to know about it.


[Image: fatal assert dialog]
or
[Image: fatal error dialog]

It’s pretty important that I fix these if they occur, otherwise the game state can become corrupt and possibly make save games invalid after load, causing crashes. If it happens, please include the crash.dmp and your save game, especially if you can reproduce the problem reliably.

This additional error checking might slow down the game 5-10%, but it will go away for full release builds.

What changes happened in the mod kit?

First, Steam Workshop uploads are disabled in the beta build. Since 1.0.5 mods don’t work with version 1.0.4, I won’t be allowing uploads until 1.0.5 is updated for everyone.

Second, Textile is now a working resource type and limit.

All resource producing buildings can now specify which resource limit makes them stop producing. This allows a crop field to be limited by the Textile limit and stop producing cotton, while a food producing crop field keeps working. It looks like this:

CropFieldDescription cropfield
{
	ResourceLimit _resourceLimit = Textile;
	float _growthPercentOnTend = 0.007;
}

The _resourceLimit variable now exists for Crops, Fishing Huts, Foresters, Hunters, Orchards and Pastures.

For pastures that have an animal that produces a byproduct (like sheep producing wool), you can now specify a resource limit on the animal. For example, the sheep resource looks like this now:

LivestockDescription livestock 
{ 
	ComponentDescription _additionalRawMaterial = "Template\RawMaterialWool.rsc";
	float _additionalCreateInMonths = 3.0;

	// this should probably be Textile, but is being left as food so the base game
	// is unchanged. Modders will definitely want to change this...
	ResourceLimit _resourceLimit = Food; // Textile 
}

If your mod uses new materials, there will be some work to update them. Instead of HLSL, the engine now uses a custom shading language. Old materials should still load and work, but to build a new mod you’ll have to rework them. There are some details of the new shading language here.

What does the future hold?

My current plan is that while this build is being tested by the community (big thank you!), I’ll be working on the Mac and Linux builds pretty much exclusively. With the Mac and Linux builds I’ll hopefully also be working on some bug fixes and performance issues.

If this current beta build comes back clean after a few weeks (or after a few fixes), I’ll push it out as an official build.

Once the Mac and Linux builds are ready there will be a Beta test for them as well. For modding, the only thing that may change is that you may have to recompile mods to get new audio to work on other platforms – but otherwise existing mods should load on other platforms. I know there are requests for a few other changes to the mod kit, and I’ll be looking into some of them to determine how easy (or difficult) the more desired ones are.
