Unity Cloud Build automation pipeline with webhooks, and querying the Build API

Once you’ve figured out a general process for deploying game releases, you certainly don’t want to be manually completing every step every time there is a release. The time spent copying build releases, updating websites with new build information, sending out notifications or change logs, etc… really adds up in the long run. Unity Cloud Build helps solve some of this, but there is more that can be done…

Unity Cloud Build already allows you to hook up your Git (or other) repository to the Cloud Build platform, and have automated builds of your project created whenever you push code up to your repository. This is great, and allows you to automate builds for various platforms – PC, Mac, Mobile, etc. – all by just updating your source repository. You can even configure it to watch a specific branch and create builds from that branch, effectively allowing you to branch off of the trunk/master, work in feature branches, then merge work back into ‘release’ branches.

The question then is, how do you continue this automation pipeline after Unity Cloud has finished your build?

The answer is to use a webhook.

A webhook is an HTTP POST request that Unity Cloud Build sends to an API endpoint you specify whenever a particular event in your build pipeline occurs – for example, the successful completion of a build (the ProjectBuildSuccess event).

You can choose the content type that the webhook payload is delivered with (one that your API understands, like application/json). Let’s look at an example of configuring a webhook that sends a POST request to an API endpoint I manage/own whenever a build of my Unity project completes successfully.

For this I will be running a simple Node.js Express web server with the body-parser middleware. In my Node.js server, I have set up a POST route called ‘/builds’. When a POST request is sent to this endpoint, I simply log the content of the POST body to the console to show the information that Unity Cloud Build sends along.

After setting up a Unity Cloud Build step, I defined a webhook in the “Notifications” tab. I provided the public URL for my endpoint where the HTTP POST request will be sent, set the content type to application/json, and chose the event I want it to fire on. For testing I didn’t bother providing a secret to help secure the POST body, as verifying it would then need additional processing of the content on the receiving side.


Note: if you are doing this yourself as a test/proof of concept and don’t have SSL on your domain, you should disable the SSL verify option. (You should always secure with SSL if you are using this in production though, as you don’t necessarily want information about builds being passed around unencrypted.)

With the webhook defined and saved, I test it by simply kicking off a build of the Unity project. (You can start a build manually if you don’t want the hassle of making changes to your repository.)

After the build completed, it sent the POST request to my API’s /builds endpoint as expected. (You may want to click to expand the image below to actually see the details the POST drops off in the body!)


From this point onward, I could then do whatever I needed with the information sent across in the body. Here is a quick example. Let’s say there is some interesting information I wanted out of the actual build log that is generated from a build. Say I wanted to publish some of that information to another website after a build completed.

I would take the build number from the webhook POST request’s body when it hits my API on the /builds endpoint, then craft a new request of my own to the Unity Cloud Build v1 API, supplying the orgid, projectid, and buildtargetid parameters, plus the build number I got from the webhook. This is what the request would look like for a ‘get build log’:
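As a sketch, the ‘get build log’ call is a simple read against the Build API, authenticated with your Unity Cloud Build API key via Basic auth. The IDs below are placeholders for your own org/project/build target:

```javascript
// Build the Unity Cloud Build v1 API URL for a given build's log.
function buildLogUrl(orgId, projectId, buildTargetId, buildNumber) {
  return 'https://build-api.cloud.unity3d.com/api/v1' +
    `/orgs/${orgId}/projects/${projectId}` +
    `/buildtargets/${buildTargetId}/builds/${buildNumber}/log`;
}

// Usage (not run here) - fetch the log with your API key:
// const res = await fetch(
//   buildLogUrl('my-org', 'my-project', 'my-buildtarget', 42),
//   { headers: { Authorization: 'Basic ' + myApiKey } });
// const logText = await res.text();
```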


I fire that off, and the Unity Cloud Build API should respond with my full build log. I can then parse that log for the information I need, and update my site with it.


Once that is all set up, all I need to do is update my Unity project source code in version control. Unity Cloud Build builds the project and, on success, POSTs some information about the build to my own API. My API in turn queries the Build API for the specific build log relating to that build, parses the response for the interesting information, and, through some other mechanism, updates a website – which ends the whole automation pipeline.

Hopefully this gives you some interesting ideas or inspiration to automate your own builds. The sky is really the limit. RESTful APIs make automation and interoperability of systems so easy!



Dropship – a 2D procedurally generated survival game – update post


I started working on Dropship (the temporary name for this project) back in January, when I was on holiday in South Africa. It is a 2D ‘Minecraft’ style, sandbox survival game, drawing inspiration from Minecraft and Starbound, two of my favourite survival/sandbox games. After getting back from holiday I kind of just left it in its git repository, not doing too much, gathering dust. Recently I had the motivation to pick up development again in my spare time.


Over the last week or so, I’ve added the following new features.

  • JSON serialization save/load system. Save your progress and quit, then load up your saved world again later. All of the game, chunk, and player-built items can now be written to JSON and persisted to disk.
  • Fishing and living fish colonies that are able to breed/multiply in water blocks
  • Cooking with fires + nutrition attributes on food
  • Basic survival stats and UI for health, food, and energy
  • Self-healing blocks – similar to Minecraft where they replenish ‘hits’ if you stop hitting them
  • Water/lava with surface settling effects (liquid blocks move down and fill out voids, and once settled, surface effects are created along the surface, such as lava bubbles, or jumping fish)


I’d like to work on the JSON serialization a bit more though. There are some items that still need to be accounted for when being re-generated from JSON data on load, and there are many optimisations I could still make to reduce file size and improve the speed of the save/load system.
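For anyone curious about the general shape of such a system, here is a rough sketch using Unity’s built-in JsonUtility. All type and field names here are hypothetical stand-ins, not Dropship’s actual code:

```csharp
using System.IO;
using UnityEngine;

[System.Serializable]
public class ChunkData
{
    public int chunkX;
    public int chunkY;
    public int[] blockIds; // flattened grid of block type IDs
}

[System.Serializable]
public class WorldSaveData
{
    public int seed;
    public Vector2 playerPosition;
    public ChunkData[] chunks;
}

public static class SaveSystem
{
    static string SavePath
    {
        get { return Path.Combine(Application.persistentDataPath, "world.json"); }
    }

    // Serialize the whole world state to a JSON file on disk.
    public static void Save(WorldSaveData data)
    {
        File.WriteAllText(SavePath, JsonUtility.ToJson(data));
    }

    // Rebuild the world state from the saved JSON file.
    public static WorldSaveData Load()
    {
        return JsonUtility.FromJson<WorldSaveData>(File.ReadAllText(SavePath));
    }
}
```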

If you’d like to read a little more and see some videos of the game (taken before some of the above new features were added), take a look at the game page here.

Character creator with Amazon DynamoDB integration in Unity3D


In this tutorial, we’ll be covering building your very own Amazon AWS DynamoDB integrated Unity3D game. We’ll be using two public cloud based technologies provided by Amazon AWS, namely DynamoDB (a NoSQL database service), and Cognito (identity management and data synchronization).

We’ll be building a character creator with Amazon DynamoDB integration in Unity3D. This will be a modular character creator that allows you to configure the look of your character, as well as his/her stats. The screen will pull any saved characters down from your DynamoDB table, and allow you to save new characters back to the cloud.





Before we get started, here are two very quick primers for you if you are not already familiar with DynamoDB and/or NoSQL technologies.


NoSQL covers a group of database technologies that differ from traditional relational database types. They were developed to combat the rise in the volume of information stored about users, products and objects in general, as well as the performance requirements for accessing this information at very large scale. There are four main types of NoSQL ‘stores’ around:

  • Document databases
  • Graph stores
  • Key-value stores
  • Wide-column stores

Without going into detail about the types, the main thing to keep in mind is that they differ from traditional relational databases, and certain types of operations are much faster in NoSQL stores than they are in relational-style databases.


DynamoDB is a fully managed NoSQL database service from the Amazon AWS cloud stack that provides quick, predictable performance with automatic scalability based on the provisioned throughput settings you apply to your tables. DynamoDB tables can store and retrieve any amount of data and serve any level of request traffic you require for your applications, running across multiple servers in the AWS cloud, backed by SSD storage.

Getting started

We’ll be using Unity3D version 5 for this tutorial, so the UI is the new Unity UI that became available in version 4.6 and above. Hopefully you are already familiar with creating a basic UI, as I’ll be skipping that in this tutorial, otherwise it would just take up too much space! Download the starter project here, which includes all the UI setup for you, along with some fancy scrolling background, and base GameObject items created, waiting for you to attach new scripts to.

CharacterCreatorAWSDynamoDB Starter Unity3D Project

I must point out that the awesome sprites you see in this tutorial are all courtesy of kenney.nl.

Download the Amazon AWS Unity3D SDK

Grab the latest version here. I was using the latest version available at the time of writing for this tutorial. Extract the .zip file to a convenient location.

Importing the SDK

Open the starter project in Unity3D, and go to Assets -> Import Package -> Custom Package. Navigate to the extracted Amazon SDK folder and locate the “unitypackages” sub-folder. Choose the “aws-sdk-unity-dynamodb-” file and import all items into the project.

Browse the Project Assets hierarchy, and locate the AWSPrefab prefab under the “AWSSDK -> src -> GameObjects” folder. Drag and drop this prefab into your scene hierarchy. This GameObject is required in your scene to initialise everything we need to get started using the AWS SDK. The prefab should have a “UnityInitializer” script attached to it.

Creating and configuring the required Amazon services

First of all you’ll of course need an AWS account. Register for one if you don’t already have one. You can use the free tier for 12 months on a new account which includes everything we need. Once configured, sign into your AWS Console.

Now we need to set up Cognito for identity management. This is under “Mobile services” in the main console. We will only be using it in an “unauthenticated” user capacity, however it has some great features, like user sync across devices and identity management for user persistence, if you want to dig further than this tutorial’s scope.

Create an identity pool and give it a name like “CharacterCreatorAWS”. Ensure you select “Enable access to unauthenticated identities“.



The next screen asks whether you would like Cognito to create some default roles with limited permissions. Make sure you click “Allow” here for the default roles to be set up. While you are still in the Cognito dashboard, select “Edit identity pool” and copy your pool ID down into a text document. We’ll need this later.


Now we need to create a DynamoDB table. Go back to the main AWS console, and choose “DynamoDB“. Once the dashboard appears, click “Create Table“. Give the table a name of “CharacterCreator” and enter “CharacterID” as the Hash Attribute Name, making sure you select “String” as the type. Click “Continue”.


The next wizard screen optionally lets you add indexes. We won’t be adding any, so click “Continue” to skip this, and move to the Provisioned Throughput Capacity screen. Here we choose how many read and write capacity units we require. For this tutorial you can choose 1 or 2 of each, but you would need to size these properly using the capacity calculator if you were deploying a game that would see large numbers of characters being created in and read from your database!

The next wizard screen offers to set up a basic alarm for table request rates that exceed 80% of your provisioned throughput within 60 minutes. This is a good idea if you wish to be notified of any potential utilisation issues. Enter your e-mail here if you wish to be notified in this case, then click “Continue”.


Finish the table creation wizard, and when done, select your new table and click the “Details” tab. Copy out the ARN (Amazon Resource Name) for your table and note it down. We’ll now create a custom role policy using this ARN and assign it to our Cognito identity pool, which will in effect give your users access to your newly created table.


So, our next step is to use Identity and Access Management (IAM) to apply a custom role to allow unauthenticated users that run your Unity3D game access to the DynamoDB table that will store character configurations.

Go back to the main AWS console and click “Identity and Access Management“. Click “Roles” on the side menu, and then locate and click on the “unauth” role that was automatically created by Cognito earlier. It should be named something like “Cognito_CharacterCreatorUnauth_Role”. Look for the Inline policy section and click “Create role policy”.



Select “Custom policy” when asked what type you would like to create, and then click “Select” to use the policy editor.

Give your new policy a name like “AllowDynamoDbTableAccess” and use the template provided below. Policies are formatted in JSON, and you’ll need to change the resource value in this template to the ARN you copied for your DynamoDB table you created earlier. Here is the policy template you can use:
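A policy along the following lines does the job. The account ID in the Resource ARN below is a placeholder – replace the whole ARN with the one you copied for your table (the listed actions cover everything this tutorial performs; tighten them further for production):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "dynamodb:GetItem",
        "dynamodb:PutItem",
        "dynamodb:UpdateItem",
        "dynamodb:DeleteItem",
        "dynamodb:Query",
        "dynamodb:Scan",
        "dynamodb:DescribeTable"
      ],
      "Resource": "arn:aws:dynamodb:us-east-1:123456789012:table/CharacterCreator"
    }
  ]
}
```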

Click “Validate” and once validated, click “Apply policy” to assign this policy to your Cognito unauth role. You now have all the groundwork for configuring your AWS services done. Well done! Let’s move back to Unity3D finally.

Unity3D and AWS code

Now that all the AWS setup is complete, let’s begin adding our integration with AWS. The starter project you downloaded above has all of the UI groundwork complete for you. If you run it now, you’ll get a character selection screen where you can change the look and configuration of your character, but you are not yet able to save it to the database, or load and change any existing characters. This is what we will add now.

Start by adding some using statements at the top of the CharacterCreator.cs file. These will reference some of the Amazon SDK namespaces and allow us to use Amazon specific classes and services in our CharacterCreator script.
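The using statements will likely include the following (check them against the SDK version you imported, as namespaces have shifted between releases):

```csharp
using System.Collections.Generic;
using Amazon;                         // RegionEndpoint
using Amazon.CognitoIdentity;         // CognitoAWSCredentials
using Amazon.DynamoDBv2;              // AmazonDynamoDBClient
using Amazon.DynamoDBv2.DataModel;    // DynamoDBContext and attributes
using Amazon.DynamoDBv2.DocumentModel; // Table, ScanOperator
using Amazon.Runtime;
```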

Now we’ll need to add a class for our characters to use with the DynamoDB data model concept.  In DynamoDB, a database is a collection of tables, and each table is a collection of items with each item being a collection of attributes. This class that we create will represent the items in our CharacterCreator table.

Create a new class called CharacterEntity in the Unity3D editor under the scripts folder, and open it up in your editor. Remove the inheritance from MonoBehaviour, as this is a plain data model class and doesn’t need it. Copy and paste the below into your CharacterEntity class.
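A sketch of such an entity class, based on the property and attribute names described in this tutorial (the exact set of sprite-name properties in the finished project may differ):

```csharp
using Amazon.DynamoDBv2.DataModel;

// Maps this class to the "CharacterCreator" table.
[DynamoDBTable("CharacterCreator")]
public class CharacterEntity
{
    [DynamoDBHashKey] // the table's hash (primary) key - a string, as defined on the table
    public string CharacterID { get; set; }

    [DynamoDBProperty]
    public string CharacterName { get; set; }

    [DynamoDBProperty]
    public int Age { get; set; }

    [DynamoDBProperty]
    public int Strength { get; set; }

    [DynamoDBProperty]
    public int Dexterity { get; set; }

    [DynamoDBProperty]
    public string BodySpriteName { get; set; }

    [DynamoDBProperty]
    public string HairSpriteName { get; set; }

    [DynamoDBProperty]
    public string ShirtSpriteName { get; set; }
}
```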

As you can see, this class contains various properties to store each character’s configuration, from the stats like Age, Strength and Dexterity, to what the character parts are composed of (“ShirtSpriteName”, “BodySpriteName”, etc…).

Each property has an attribute applied to it, most indicating that the property is a DynamoDBProperty. Note however the first property, “CharacterID“. This is the same as the hash key we created earlier when we set up our table in DynamoDB, and it is a string value, as we dictated when we created the table. It has the DynamoDBHashKey attribute applied to it, to tell the table that this property is our primary key. To quote the AWS SDK documentation on this Hash Type Primary Key:

the primary key is made of one attribute, a hash attribute. DynamoDB builds an unordered hash index on this primary key attribute. Each item in the table is uniquely identified by its hash key value.

We also have a DynamoDBTable attribute applied to the class itself; its value must match the name of your table. If you created your table as “CharacterCreator” then the above is fine. Save your CharacterEntity.cs class and open the CharacterCreator.cs script next.

Now we will add some public and private fields to the top of this class. These will store our Cognito AWS credentials, a reference to our DynamoDB client, and a context for DynamoDB to use. They will also store a list of characters pulled from the table when the scene loads, and store the currently selected character index value. Add these just below the comment “// Add AWS specific variables here.” on line 57.
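Assuming field names of my own choosing (the starter project’s originals may differ), these additions might look like:

```csharp
// Add AWS specific variables here.
public string cognitoIdentityPoolString;           // set via the Inspector

private CognitoAWSCredentials _credentials;        // Cognito AWS credentials
private AmazonDynamoDBClient _client;              // DynamoDB client reference
private List<CharacterEntity> characterEntities =  // characters pulled from the table
    new List<CharacterEntity>();
private int currentCharacterIndex = 0;             // currently selected character
```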

We’ll now add a Context property with a getter that returns our DynamoDB context each time we need it, by creating an instance and passing in our DynamoDB _client reference. Add this below the “allSprites” Sprite[] array field, just before the Awake() method.

The context is used as an entry point to your DynamoDB database. It provides a connection to your database, and enables you to perform various operations against your tables, mostly of the CRUD type (create, read, update, delete).
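A sketch of that property, constructing a DynamoDBContext around the _client reference on demand:

```csharp
private DynamoDBContext _context;

// Entry point for all high-level (object persistence) DynamoDB operations.
private DynamoDBContext Context
{
    get
    {
        if (_context == null)
            _context = new DynamoDBContext(_client);
        return _context;
    }
}
```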

Now that we have a context setup, we need some methods to load CharacterEntity objects that are pulled from our table, and to switch between loaded characters in our UI. Add the following three methods to your CharacterCreator class.

The LoadCharacter method will take a CharacterEntity passed to it, and update the UI values to display the properties stored in the entity, in our UI. The Cycle methods will be assigned as listeners to our Next/Previous character buttons, so that when you click these, the character selection in the UI updates and changes to each character that was loaded from the DynamoDB table.
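A sketch of the three methods (the UI control references here are hypothetical names standing in for the starter project’s fields):

```csharp
private void LoadCharacter(CharacterEntity character)
{
    // Push the entity's stored values into the UI controls.
    nameInput.text = character.CharacterName;
    ageInput.text = character.Age.ToString();
    strengthInput.text = character.Strength.ToString();
    dexterityInput.text = character.Dexterity.ToString();
    // ...also apply BodySpriteName / HairSpriteName / ShirtSpriteName
    // to the relevant character part sprite renderers...
}

private void CycleNextCharacter()
{
    if (characterEntities.Count == 0) return;
    currentCharacterIndex = (currentCharacterIndex + 1) % characterEntities.Count;
    LoadCharacter(characterEntities[currentCharacterIndex]);
}

private void CyclePreviousCharacter()
{
    if (characterEntities.Count == 0) return;
    currentCharacterIndex =
        (currentCharacterIndex - 1 + characterEntities.Count) % characterEntities.Count;
    LoadCharacter(characterEntities[currentCharacterIndex]);
}
```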

Add the following listeners to the top of your Awake() method call.

You may notice that we do not yet have the CreateCharacterInTable and FetchAllCharactersFromAWS methods created, and these are referred to by our create and refresh Operation fields. These will map to the create and refresh buttons in our UI. Let’s get started on those next.
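The wiring described above might look like this (the button field names are hypothetical stand-ins for the starter project’s UI references):

```csharp
void Awake()
{
    nextButton.onClick.AddListener(CycleNextCharacter);
    previousButton.onClick.AddListener(CyclePreviousCharacter);
    createButton.onClick.AddListener(CreateCharacterInTable);
    refreshButton.onClick.AddListener(FetchAllCharactersFromAWS);

    // ...rest of the existing Awake() logic...
}
```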

First we’ll create the Load method. This will load our DynamoDB table asynchronously, and once done, execute a callback method that will use the Context to do an asynchronous scan of our table for all CharacterEntity objects that meet the condition of “Age” is greater than 0. In other words, all characters should be returned.

This is fine for the tutorial, but if you were working with large sets of data, a scan operation is not the most efficient. You would rather use a query operation to zero in on more precise bits of data you require. The SDK documentation has lots to read about this, so feel free to explore that later!

Once the async scan operation completes, it assigns the results (which will be a collection of CharacterEntity objects) to our characterEntities field. This allows us to then iterate over them and load them / cycle through them in our UI. Drop this method into your CharacterCreator.cs script.
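A sketch of that method, following the async callback style of the AWS Mobile SDK for Unity at the time (exact callback signatures may differ between SDK versions, so check against the samples in the package you imported):

```csharp
private void FetchAllCharactersFromAWS()
{
    // Load the table first, then scan it for all characters with Age > 0.
    Table.LoadTableAsync(_client, "CharacterCreator", (loadResult) =>
    {
        if (loadResult.Exception != null)
        {
            Debug.LogError(loadResult.Exception);
            return;
        }

        Context.ScanAsync<CharacterEntity>(
            new List<ScanCondition>
            {
                new ScanCondition("Age", ScanOperator.GreaterThan, 0)
            },
            (scanResult) =>
            {
                if (scanResult.Exception != null)
                {
                    Debug.LogError(scanResult.Exception);
                    return;
                }

                characterEntities = new List<CharacterEntity>(scanResult.Result);
                if (characterEntities.Count > 0)
                    LoadCharacter(characterEntities[0]);
            });
    });
}
```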

Now we need our CreateCharacterInTable() method. Drop the following method into the same script.

This will create a new instance of CharacterEntity type, and assign the properties with the values entered into the UI fields, like Age and Strength. It will also grab the values assigned to the fields that keep track of what character components are selected in the UI, like “selectedHair” and “selectedShirt”. Finally, it will use our context to DynamoDB to async save our CharacterEntity to the table. Once this operation completes, the table will hold an entry for the character that was configured in the UI!
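A sketch of the method (again, the UI field names and “selected” part fields are hypothetical stand-ins for the starter project’s own):

```csharp
private void CreateCharacterInTable()
{
    var newCharacter = new CharacterEntity
    {
        CharacterID = System.Guid.NewGuid().ToString(), // unique hash key per character
        CharacterName = nameInput.text,
        Age = int.Parse(ageInput.text),
        Strength = int.Parse(strengthInput.text),
        Dexterity = int.Parse(dexterityInput.text),
        BodySpriteName = selectedBody,
        HairSpriteName = selectedHair,
        ShirtSpriteName = selectedShirt
    };

    // Persist the new character to the table asynchronously.
    Context.SaveAsync<CharacterEntity>(newCharacter, (result) =>
    {
        if (result.Exception != null)
            Debug.LogError(result.Exception);
        else
            FetchAllCharactersFromAWS(); // refresh the list to include the new character
    });
}
```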

Before we can run any of this code though, we need to add some initialisation logic to our Start() method. This runs only once, when the scene starts up. The following code creates Cognito AWS credentials from your identity pool string and a RegionEndpoint specification. It then asynchronously fetches an identity ID and, after creating a DynamoDB client, assigns it to the _client field, which is used while the scene runs to fetch and create characters. After _client is assigned, the FetchAllCharactersFromAWS() method is executed to load all the characters from the table into the scene. Add the following code to your Start() method in the CharacterCreator script.
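A simplified sketch of that initialisation (the async identity call here follows the Unity SDK’s callback pattern; verify the exact method names against your SDK version, and remember to swap RegionEndpoint.USEast1 in both places if your pool lives in another region):

```csharp
void Start()
{
    // cognitoIdentityPoolString is the pool ID you noted down earlier,
    // entered via the Inspector.
    _credentials = new CognitoAWSCredentials(
        cognitoIdentityPoolString, RegionEndpoint.USEast1);

    _credentials.GetIdentityIdAsync((identityResult) =>
    {
        if (identityResult.Exception != null)
        {
            Debug.LogError(identityResult.Exception);
            return;
        }

        _client = new AmazonDynamoDBClient(_credentials, RegionEndpoint.USEast1);
        FetchAllCharactersFromAWS(); // pull existing characters into the scene
    });
}
```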

You’ll notice that the CognitoAWSCredentials object is created by passing in a string called cognitoIdentityPoolString. This is a public string that you need to assign a value to in the Unity editor. Go back to your scene, and select the CharacterCreator GameObject in the hierarchy. Locate your Cognito Identity Pool Id that you hopefully noted down earlier (don’t worry if you forgot, just go back to your AWS console, load the Cognito dashboard, and edit your identity pool to find this ID). Enter the Id into the field on your GameObject.


One other thing to check here – make special note of the region you are using for your Cognito Identity Pool in the AWS console. If it is not US East 1, then you’ll need to change the code in a couple of places in the Start() method to specify your RegionEndpoint accordingly. Auto-complete will show you the other regions you can use on the RegionEndpoint. The two places to change are on creation of the credentials object, and the ddbClient object. Lastly, make sure the TableName specified in the Start() logic matches the name you used for your table (it should be “CharacterCreator” if you followed the naming convention when setting it up).

Ensure all your scripts and your scene file are saved, then give it a run from the Unity3D editor. If everything is set up correctly, after a few moments the async calls should complete and the table should be initialised. There is a bit of debug text you can view in the scene that the various DynamoDB calls log to (you may need to increase the opacity of the font colour to view it; it is positioned just under the Refresh button).

No characters will be loaded at first, as we have not yet created any. Enter a name and some stats for a new character and adjust his/her clothing and body types using the UI controls. Click “Create new” when you are done, and the character should be saved to your DynamoDB table in the cloud!




Note that when a new CharacterEntity is created, we give the CharacterID a new GUID value as the Hash ID. This ensures that every character created has a unique Hash ID value. If you wish to perform super fast lookup queries on your table you can create queries and search for these ID values.

Go to your DynamoDB console, select your table and click “Explore”. You should see your new character entry and see the GUID value that was assigned as its Hash ID. You’ll also see the properties that you selected in the UI saved into each row for each character you create.


Moving on from here

That completes the tutorial. There was initially a fair amount of groundwork required to set up our AWS services, including Cognito, a DynamoDB table, and a custom IAM role, however using the Amazon SDK in our Unity3D project after this was done was relatively straightforward. The SDK provides plenty of async method calls for the various actions available on DynamoDB tables. Remember that all the async calls can have a callback handler assigned, so that you can execute code or update values when the calls complete.

We didn’t use any table queries, or look into deleting entries, however the SDK documentation has lots of examples for you to try out if you wish to explore these areas further.

Going forward, I’m sure you can think of tons of use cases for DynamoDB and Cognito. One simple change I could think of is that you could identify users by unique login and store all of their own personal characters in the DynamoDB table. When they login using Cognito (you could provide a login dialog box in the UI), you could use Cognito to generate them temporary credentials to pull down their characters, and save/modify them on a per-user basis.

If you didn’t know too much about NoSQL databases, then hopefully this article also helped you out there. There are lots more out there. I have personally also tried out MongoDB, running in the Microsoft Azure cloud, and found working with it just as easy as DynamoDB, although the way documents are stored is slightly different. It wouldn’t take much to change this project to use MongoDB if you were feeling adventurous and preferred to try that out instead.

Finally, here is a link to download the complete Unity3D project, or to get the source from Github if you wish to get to the end point. Don’t forget to fill in the Cognito Identity Pool String on the CharacterCreator GameObject though, as I have removed it for the download due to it pointing to my own personal identity pool!

Finished Unity3D project archive


Creating a basic 2D runner game with Unity – GGJ14 solo session

Being quite busy at work meant that I didn’t really have any time to prepare a team or attend the Global Game Jam (14) this year, so I decided to “unofficially” participate from home. I used Unity3D (Unity2D to be precise) as my engine of choice, and created a very basic runner game. Being a solo effort, this of course meant I did the coding and artwork myself.

The tools involved were the following:

  • Unity and C# scripting
  • Photoshop
  • Wacom Bamboo
  • For the main character (a knight), I downloaded this asset from gsanimator.com

This video is a time lapse of the 8-10 hours I spent on the project. I didn’t manage to finish, so there is still more to do, but I have the following basics done:

  • Runner with basic jump/double-jump controls. The character speeds up the longer the game goes on, making it more and more difficult
  • Box obstacles that drop and get in the way
  • End game is when you fall off the edge of the screen
  • Bullet-time-like effect every now and then – the game slowly drops into slow motion and fades to black. As the darkness hits, the landscape turns into a post-apocalyptic scene, the idea being that more dangerous obstacles come down in this mode. Afterwards, time slowly speeds up again and the landscape turns back to normal.
  • Managed to create some custom artwork – two different backdrops for the scrolling parallax background, some debris, etc… The knight character was found for free on gsanimator.com

If anyone is interested in source or how I achieved a certain effect, please just drop a comment below.

PowerShell script to create Advanced Installer updates.ini configuration file with a TeamCity build step

Recently, in my day to day job I have been doing a lot of automation and customisation using JetBrains’ TeamCity product, specifically around automated project builds, unit testing, installer builds and installer updates.

One process I recently created a custom build step for on our TeamCity server was to create an updater configuration file for one of our products, which uses Advanced Installer for its installation package. The product has an item in the main menu that lights up when a newer version is available. Clicking this launches the Advanced Installer updater.exe program, which is injected into the application’s installer when the installer is built in another step. The updater program reads the updates.ini file to get info on the new version update as well as its location, which allows the user to update the application directly from the main menu.

To automatically create this updates.ini file each time a build is run on TeamCity, I decided to go with PowerShell. I needed two bits of information for a start:

  • The assembly version number (which for this program is partly generated using TeamCity’s build counter incrementer)
  • The file size of the installer package once built

As a side note, the application’s assembly versions are handled by TeamCity’s “AssemblyInfo Patcher”, which is an extra feature you can add to your builds under the “Additional Build Features” area for your TC project. This ensures that the application gets the same version number that TeamCity assigns to each build, and therefore we insert the correct build number from TeamCity into our updates.ini file too.

To start, I created a new build step for the TC project, right at the end of the build process, and setup an initial Advanced Installer “template” updates.ini file in a specific location that the script refers to. I set the build step to use PowerShell, version 3.0, 64bit and changed the “Script” option to “Source code”, allowing me to paste my script directly into the build step page. Here is the actual script. Note the few variables at the top to define first – paths to the initial updates.ini template file, etc…


# Setup variables

$BuildArtifactSetupFile = "%system.teamcity.build.workingDir%\TestProject.Installer\TestProject.Installer\Express\DVD-5\DiskImages\DISK1\setup.exe"
$InstallerDestination = "C:\Sean\Dropbox\Public\Updates" 
$FileSizeBytes = (Get-Item $BuildArtifactSetupFile).Length
$VersionNumber = "0.1.0.%build.counter%"
$UpdateConfigLocation = "C:\Sean\Dropbox\Public\Updates\updates.ini"
$UpdateConfigLocationRedirect = "C:\Sean\Dropbox\Public\Updates\updates.ini"
$UpdateConfigTempLocation = "C:\Sean\Dropbox\Public\Updates\updates-temp.ini"

# Do the regex search and replaces and update the configuration file

$contentStep1 = (Get-Content -Path $UpdateConfigLocation | Out-String) -replace "(?<=Version = )([0-9]+\.[0-9]+\.[0-9]+\.[0-9]+)", $VersionNumber
$contentStep2 = $contentStep1 -replace "(?<=Size = )(\d+)", $FileSizeBytes

# Place the modified regex search and replace content into the redirection file for now
Set-Content $UpdateConfigLocationRedirect $contentStep2

#  Remove any newline characters or blank lines from the content in the redirection file and place revised content into temp file
Select-String -Pattern "\w" -Path $UpdateConfigLocationRedirect | ForEach-Object { $_.line } | Set-Content -Path $UpdateConfigTempLocation

# Remove the original updates.ini file
Remove-Item $UpdateConfigLocation

# Move the temp updates.ini file back over the original file name.
Move-Item -Path $UpdateConfigTempLocation -Destination $UpdateConfigLocation

# Now copy the build artefact installer file to target location...
Copy-Item -Path $BuildArtifactSetupFile -Destination $InstallerDestination


So, to explain the above, the high level process is as follows:

  • Set up our initial variables for the script – we assign the TC %build.counter% variable to our PowerShell $VersionNumber, and set up the locations of our updates.ini and temporary updates.ini files, as well as the location that TeamCity builds and outputs an installer package to each time it builds a pushed commit. We also get the size in bytes of this setup.exe package and specify a destination to copy the installer to, so that the updater utility in our application can access the new version’s installer when users update
  • Regex search and replace – we use $contentStep1 to replace the template updates.ini file’s “Version” line with our new version number. We then use $contentStep2 to assign the correct file size for the setup.exe installation package to our updates.ini file on the “Size” line.
  • Next, we remove any empty lines in the file by searching for lines in our updates.ini file that contain word characters (letters, numbers, underscores), using “\w” as the pattern. Any lines matching this pattern (i.e. not the blank lines) get piped into our temporary updates.ini file using Set-Content. A note here: I found that using the Out-File cmdlet instead of Set-Content messes with the formatting of the file, and the Advanced Installer updater utility can’t read the updates.ini file correctly in that format, so Set-Content worked just fine. The drawback is that Set-Content places a lock on the file, whereas Out-File does not, hence the reason we use a temporary staging file, remove the original updates.ini file, and then rename the temporary file back to the original file name.
  • As explained above, we then remove the older updates.ini file (used on previous builds), and rename our temporary updates.ini file to “updates.ini”
  • Finally, we copy the built setup.exe installer file from our TeamCity project’s output directory and place this in the same location that our updates.ini file exists in. This installer package will also be tagged with the same version number present in our updates.ini file. Hint: you can get Advanced Installer to tag installation package version numbers with an .exe / assembly version number in the Advanced Installer product – this can be done in the Product Information settings (ProductVersion property) using the AI GUI.
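The steps above can be sketched as a single pass over the template file. This is an illustrative sketch only – the paths, the exact ini key format (`Version = …`, `Size = …`) and the `$contentStep1`/`$contentStep2` regex patterns are assumptions based on the fragments shown in this post, so adjust them to match your own updates.ini:

```powershell
# Illustrative sketch of the full build step described above (assumed paths/key names).
$VersionNumber = "%build.counter%"                       # injected by TeamCity
$UpdateConfigLocation     = "C:\Updates\updates.ini"
$UpdateConfigTempLocation = "C:\Updates\updates.tmp.ini"
$BuildArtifactSetupFile   = "C:\TeamCity\output\setup.exe"
$InstallerDestination     = "C:\Updates\setup.exe"

# Size in bytes of the built installer, for the "Size" line.
$FileSize = (Get-Item $BuildArtifactSetupFile).Length

# Steps 1 & 2: regex replace the Version and Size lines in the template.
$contentStep1 = (Get-Content $UpdateConfigLocation) -replace '^Version\s*=.*', "Version = $VersionNumber"
$contentStep2 = $contentStep1 -replace '^Size\s*=.*', "Size = $FileSize"

# Step 3: keep only lines containing word characters, writing via Set-Content
# (Out-File broke the formatting for the Advanced Installer updater).
$contentStep2 | Where-Object { $_ -match '\w' } | Set-Content $UpdateConfigTempLocation

# Steps 4 & 5: swap the temp file in, then copy the installer alongside it.
Remove-Item $UpdateConfigLocation
Rename-Item -Path $UpdateConfigTempLocation -NewName (Split-Path $UpdateConfigLocation -Leaf)
Copy-Item -Path $BuildArtifactSetupFile -Destination $InstallerDestination
```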

Here is how my build step looks:



This is the resulting updates.ini that gets automatically created by the above build step each time a build is run on the TeamCity build server:


The completed TeamCity build summary showing the build number:


Notes / improvements:

  • You shouldn’t necessarily leave your updates.ini file on your TeamCity server to be read by your product’s updater.exe utility – it could instead live on a remote web server, and your PowerShell script could update the updates.ini file on that server from your TeamCity build server.
  • You could parameterise the script and pass in TeamCity environment variables, rather than keeping it as the flat script above – think about creating functions that you can re-use elsewhere.
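As a sketch of that last suggestion, the script body could be wrapped in a reusable function that takes the TeamCity values as parameters. The function name and parameter names here are hypothetical, and the temp-file shuffle from the main script is omitted for brevity:

```powershell
# Hypothetical reusable wrapper around the build step above.
function Update-ReleaseConfig {
    param(
        [Parameter(Mandatory)] [string]$VersionNumber,   # e.g. TeamCity %build.counter%
        [Parameter(Mandatory)] [string]$ConfigPath,      # path to updates.ini
        [Parameter(Mandatory)] [string]$SetupFilePath    # path to the built setup.exe
    )

    # Installer size in bytes for the "Size" line.
    $size = (Get-Item $SetupFilePath).Length

    # Rewrite Version/Size lines and strip blank lines in one pipeline.
    (Get-Content $ConfigPath) `
        -replace '^Version\s*=.*', "Version = $VersionNumber" `
        -replace '^Size\s*=.*', "Size = $size" |
        Where-Object { $_ -match '\w' } |
        Set-Content $ConfigPath
}

# Called from a TeamCity build step, e.g.:
# Update-ReleaseConfig -VersionNumber "%build.counter%" -ConfigPath "C:\Updates\updates.ini" -SetupFilePath "C:\out\setup.exe"
```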

Getting started with “PowerCLI” development (C#) – setting up your IDE

Recently I started toying with the idea of creating some of my own PowerShell cmdlets from scratch. I followed some Microsoft documentation and soon had a basic version of “Get-Process” implemented, as well as my own “Stop-Process” cmdlet. Next in my sights is to create a few cmdlets that can interface with vSphere – à la PowerCLI. This is proving to be a little more difficult than I imagined, but I was able to create a basic cmdlet that could output some properties of the vCenter connection(s) by passing the $global:DefaultVIServers VIServer[] array object into my custom cmdlet once a connection was made using PowerCLI. This wasn’t ideal though, as I want my cmdlet to “intercept” the current connection(s) made by PowerCLI, if any, and otherwise prompt the user to initiate a connection.
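For reference, the connection state that PowerCLI exposes can be inspected from the shell before handing it to a custom cmdlet. This assumes PowerCLI is installed; the server name is a placeholder:

```powershell
# Load the PowerCLI snap-in and connect to a vCenter server (hypothetical host name).
Add-PSSnapin VMware.VimAutomation.Core
Connect-VIServer -Server vcenter.example.local -Credential (Get-Credential)

# PowerCLI tracks active connections in a global VIServer[] array;
# a custom cmdlet can read this to "intercept" existing sessions.
$global:DefaultVIServers | Select-Object Name, Port, IsConnected
```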

powershell cmdlet development

To get started I wanted to have some basic debugging capability available, to allow me to inspect objects on the fly – this would help me determine what sorts of properties and methods are available on various objects, as well as giving me the ability to debug code easily – essential for any developer. In Visual Studio this is normally a very easy task – just hit F5, or click the debug start button, and away you go. With a cmdlet, however, you are essentially building an assembly (.dll file) from scratch, and you first need to load that into your environment and then debug from there.

I found this great post on amido.co.uk that explains how to debug regular PowerShell cmdlet development, but what about PowerCLI? Well, a simple change can be made to get this to work. I have also made some modifications to my own build event settings in my PowerCLI cmdlet project. These copy the built assembly into the PowerShell modules folder so that I can then load it using the Import-Module cmdlet.

The first change I made to the amido blog settings was to the command-line arguments used when launching powershell.exe as an external program.

-NoExit -Command "& {add-pssnapin VMware.VimAutomation.Core; Import-Module ShoganPowerCLI}"

First, I added a call to Add-PSSnapin – loading the PowerCLI snap-in into the base PowerShell environment. Then I changed the original Add-PSSnapin call into an Import-Module call, loading my assembly (in this case called “ShoganPowerCLI”). This of course happens after the build events, which I will explain next.

In your project’s properties (Alt-enter on the project name in the IDE), go to build events, and as per the amido blog post I linked above, use the following:

Pre-build event command line:

IF EXIST "$(TargetPath)" (%WinDir%\Microsoft.NET\Framework\v4.0.30319\installutil /uninstall "$(TargetPath)")

Post-build event command line:

cd $(ProjectDir)
del C:\Users\Shogan\Documents\windowspowershell\modules\$(ProjectName)\ShoganPowerCLI.dll
copy /y bin\debug\$(ProjectName).dll C:\Users\Shogan\Documents\windowspowershell\modules\$(ProjectName)\
%WinDir%\Microsoft.NET\Framework\v4.0.30319\installutil "$(TargetPath)"

Ensure your “Run the post-build event” option is set to “On successful build”.

pre and post build events for powershell cmdlet development

What this post-build event does is: first, it changes directory to your project folder; it then attempts to delete any older version of the cmdlet assembly left over from previous debug sessions (the del line); after this, it copies the debug build of your assembly .dll into a sub-folder named after your project in the WindowsPowerShell modules folder, overwriting any existing version if it is still there. The last line is one from the amido blog post, running installutil.

Once this is all setup, compile and run your code in debug mode after setting a few breakpoints. Issue your custom cmdlet and watch in glee as the debugger halts on your breakpoint and allows you to inspect various objects/properties/etc…

P.S. if any of the PowerCLI dev team come across this post, I would love to get in touch to ask a few basic starting out questions 🙂

Working on a space sim / game with Unity3D

Over the past weeks I have been putting a bit of time into developing a space sim / game with Unity 3D. Space and science fiction fascinate me, and are always something I enjoy working on or interacting with. When deciding on a subject matter for a game, it is always the first thing I think of – (just take a look at Cosmosis, a 2D space shooter I created for iOS).

At this stage I am not sure if I’ll take this further or not, as developing a full-on game in this kind of setting requires a lot of work – something I just don’t have the time for as a hobbyist games developer. However, so far I have managed to implement the following:

  • Perlin noise based procedural planet generation (three types so far: Earth, Ice and Fire like)
  • Planets generated with various properties to determine their name, population, random asteroid belts, etc…
  • Fully controllable player ship with starfield movement and thruster effects
  • One weapon – simple dumb fire missiles
  • Planet stats overlay GUI when hovering over planets
  • Destructible planets and asteroids
  • Basic resource mining via a mining laser – not sure if I am going to use this for anything at this stage or not though.

I need to have a think about direction and gameplay mechanics if I really do start to get serious about developing this further, though. Here is a demo of some of what I have achieved so far. The player spaceship model I am using is a freebie I found courtesy of http://www.solcommand.com/.



If anyone is looking for skyboxes for their own games/projects, the skybox galaxy scene you see in-game is one of the skyboxes I created and am selling here.

Announcing the Deep Space Skybox Asset Pack for Unity 3D

I have been working with Unity more and more in my spare time lately, and found that with my love of space and all things sci-fi, my direction has naturally moved to working on a small space sim prototype.

With that came the idea of creating my own custom skyboxes. I figured out the process of generating cube maps, and the order/way of flipping the textures to create a perfect skybox, and proceeded to create three unique space-themed skyboxes.

I have finished all three, and published them to the Unity Asset Store. The pack of three is available for anyone to purchase now for just $5. You can also buy the skyboxes individually for just $2 each. Here are some screenshots which feature sections of each skybox.




You can purchase direct, or find out more information about my skyboxes, using the navigation menu at the top of this site or by clicking here. Alternatively, buy direct from the Unity Asset Store via my Asset Store page.

Jumper – a simple platform jumping game made with Unity 3D

After a period of little activity (moving house + renovations), I have finally got back to game development in my spare time. I decided to take a look at Unity, and Jumper is the result of my “first go” using this framework. This is my “August” entry for One Game a Month (1GAM)!

You can play the game directly from my other site (hosted WordPress doesn’t allow Unity Player to be embedded) here:

Play Jumper Now!



Jumper is a basic platform jumping concept, where you try to reach the highest possible height without falling down. Platforms are destroyed with some great volumetric explosion effects as you bounce through them. There are also booster power-ups to collect, and highscores are logged to a simple online MySQL database I created to keep track of them.

You can view the top 7 or 8 highscores from the main menu, or use a query in your web browser such as http://www.shogan.co.uk/JumperScores/get_scores.php?count=100 to get more.

Needless to say, I won’t be meeting my goal of 12 games for the year in the One Game a Month challenge I have been aiming at. It is just too difficult for me to work my full-time job, handle house renovations (of which there are many at the moment), and still fit the rest of life in around that. But I will still be pushing out games whenever I can. I am getting better at the whole coding and graphics side of things, so they are happening quicker than I used to be able to manage!