Building a Continuous Integration Environment for Sitecore Part 11 – Deploying to Sitecore Multi-Instance Configuration

This is part 11 of the series of posts that I am doing about building a Continuous Integration Environment using Sitecore, TeamCity, Octopus Deploy, and Unicorn.

Part 1 – Setting Up TeamCity
Part 2 – Setting up OctopusDeploy
Part 3 – Setting up SQL Server for Sitecore
Part 4 – IIS
Part 5 – Octopus Environment and Deployment Configuration
Part 6 – TeamCity Project
Part 7 – OctopusDeploy Project
Part 8 – Sitecore Item Synchronization using Unicorn
Part 9 – Displaying Build number on Sitecore Home Page & Tagging Git
Part 10 – Config transformations using Octopus Deploy
Part 11 – This Post

Process so far

When I last worked on my Octopus deployment process, I had it deploying to single-server CI and QA environments. By single server I mean a single web server containing a combined Sitecore CM/CD instance, with a separate SQL Server.

Recently, I have been working on updating the deployment process to deploy to UAT and production, on dedicated hardware at Rackspace. These environments use a standard Sitecore multi-instance configuration: a single CM server and two load-balanced CD servers.

This post may contain a few rants; it lists the problems I experienced and how I went about resolving them. I won't detail how to install Octopus Tentacles on each server, as I covered this in previous posts.

Gotcha

With my three servers, when Octopus Deploy runs, each step is executed on every machine before proceeding to the next step. However, in a multi-instance Sitecore configuration, you probably want to deploy to your CM server(s) before deploying to each of your CD servers in turn, and to perform multiple steps on one deployment target before proceeding to the next.

Octopus has a Rolling Deployment option. This looks perfect: it can be configured so that the process runs to completion on one machine before running on the next. However, a rolling deployment must be built using child steps; otherwise it will still process each task on each server in turn.

Another thing with a rolling deployment is that you cannot configure the order in which servers are processed. Therefore the only way to ensure that CM servers are processed first is to have two deployment process steps, one per role.

A bugbear is that if you have already created your process, there is no way to move a step to become a child step; you can only re-order steps. Therefore I had to recreate every step that I wanted to be part of the rolling deployment as a child step. Fortunately, as I had created step templates, this was not too bad, but it is still a pain to recreate each step and assign the correct variables. There is an open suggestion to implement this feature, https://octopusdeploy.uservoice.com/forums/170787-general/suggestions/5596942-support-moving-steps-in-and-out-of-rolling-deploym, which I upvoted.

Learn from my mistake. If you are building a deployment process that will eventually deploy to a Sitecore multi-instance configuration, plan from the start for a minimum of two rolling steps with child steps, and expect a lot of duplication.

Before I started, my deployment process was:

  1. Windows – Ensure Hosts File Entry Exists
  2. IIS AppPool – Stop
  3. Close process locking deployment folder
  4. File System – Create Web Folder
  5. File System – Create Data Folder
  6. Copy Sitecore Licence File
  7. Deploy Base Sitecore Website
  8. Deploy Site
  9. Move Unicorn files out of webroot
  10. IIS AppPool – Create
  11. Deploy Sitecore Modules
    1. Load Site
    2. Deploy Sitecore Module-Package TransPerfect
    3. Deploy Sitecore Module-Package Gmap Location Picker Field
  12. Unicorn Sync

Each step was configured to run on servers that had the CD or CM role. Once I had reworked the process, I had the following:

  1. Windows – Ensure Hosts File Entry Exists
  2. IIS AppPool – Stop
  3. Close process locking deployment folder
  4. File System – Create Web Folder
  5. File System – Create Data Folder
  6. Copy Sitecore Licence File
  7. Deploy Base Sitecore Website
  8. Deploy Site CM
    1. Deploy Site
    2. IIS AppPool – Create
    3. Load Site
    4. Deploy Sitecore Module-Package TransPerfect
    5. Deploy Sitecore Module-Package Gmap Location Picker Field
    6. Unicorn Sync
  9. Deploy Site CD
    1. Deploy Site
    2. IIS AppPool – Create
    3. Load Site
    4. Deploy Sitecore Module-Package TransPerfect
    5. Deploy Sitecore Module-Package Gmap Location Picker Field

Both “Deploy Site CM” and “Deploy Site CD” were configured as rolling deployments, targeting servers configured with the CM and CD roles respectively.

From the modified process, you can see that several steps are duplicated. It would have helped if Octopus had an option to duplicate a step and place the copy anywhere in the overall process. Templates helped with this, but it is still annoying.

Previously, when defining servers, I would give them both the CM and CD roles. Now you have to ensure you give each machine either the CM role or the CD role, but not both.

Blue / Green Deployments

People have different definitions of what a blue/green deployment is. One accepted convention is to have two deployment folders: one is always active, and the other is the backup. Octopus also has some posts detailing how to use this type of blue/green deployment.

That is one way of doing things; I did something different. In my project I had a variable #{base.folder}, initially configured with a value of D:\Websites\#{website.name}, where #{website.name} is #{Octopus.Project.Name}.#{Octopus.Environment.Name}.

I created a new #{base.folder} value of D:\Websites\#{Octopus.Project.Name}.#{Octopus.Release.Number}. This value was scoped to machines in the QA, UAT, and Production environments; the original value was now only used by machines in the CI environment.
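So, for reference, the variable ends up scoped something like this (the environment names are from my setup):

```
base.folder = D:\Websites\#{website.name}                                     scope: CI
base.folder = D:\Websites\#{Octopus.Project.Name}.#{Octopus.Release.Number}   scope: QA, UAT, Production
```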

I did this so that every time a version was promoted to the next environment, it was deployed to a new folder, which felt cleaner to me. I know that with this type of deployment you could fill up a hard disk very quickly, but my thinking was as follows. The only environment that I deploy to often is CI, and that still uses the original #{base.folder} variable. QA may only see one or two deployments per sprint. UAT will probably be once per sprint, which with two-week sprints is not a problem. Finally, production will only be deployed to at the end of a project. I will be monitoring disk usage, and if too many deployments start consuming disk space rapidly, it is easy enough to switch back to the original #{base.folder} value by changing the scope. With all that in mind, I was satisfied that I would have no problems. In any case, I set up some MOM alerts to warn me about disk space, just in case.

I did look into ways to automate the deletion of old deployments, but unfortunately the built-in functionality of Octopus will not delete previous deployments if you have used a custom deployment folder, which I have.
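If you did want to automate it yourself, a scheduled cleanup along these lines could work. This is a hypothetical sketch: the folder-name pattern matches my D:\Websites\Project.ReleaseNumber convention, the function name and keep-count are my own inventions, and the actual deletion is left commented out so it can be dry-run first.

```powershell
# Hypothetical sketch: given the folder names under the websites root, return the
# release folders that fall outside the newest $KeepCount - candidates for deletion.
# Assumes folders are named "<Project>.<version>".
function Get-StaleReleaseFolders {
    param(
        [string[]]$FolderNames,   # directory names, e.g. under D:\Websites
        [string]$Project,         # the Octopus project name
        [int]$KeepCount = 3       # how many recent releases to keep
    )
    $FolderNames |
        Where-Object { $_ -match "^$([regex]::Escape($Project))\.\d+(\.\d+)*$" } |
        Sort-Object { [version]$_.Substring($Project.Length + 1) } -Descending |
        Select-Object -Skip $KeepCount
}

# With four releases on disk and the default keep-count of 3, only the oldest is returned
Get-StaleReleaseFolders -FolderNames @('MyProject.1.1', 'MyProject.1.2', 'MyProject.1.3', 'MyProject.1.4') -Project 'MyProject'
# The returned names could then be deleted, e.g.
#   ... | ForEach-Object { Remove-Item "D:\Websites\$_" -Recurse -Force -WhatIf }
```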

Deploying to a hosted environment (Rackspace)

For this customer, we were deploying to Rackspace. Initially, all the servers could only be accessed after logging into a VPN; none of the individual servers had a public-facing IP address. The CD servers were behind a load balancer, and the CM server was only accessible once you had logged into the VPN. After days of trying to install and configure the Cisco VPN software on the Octopus server, keep it active and logged in, and get the Octopus Server and the Tentacles communicating, we had to ask Rackspace to supply public IP addresses and DNS entries for each server. The security put in place was that only specific IP addresses were able to access each server on the default Octopus port. Once this was done, Octopus was able to communicate successfully with the Listening Tentacles that were installed.

The next issue, also regarding the deployment process, was that even though each release was deployed to a new folder, I still wanted to take each CD server out of the load balancer before doing any work. There is a script in the Octopus Community Library for taking a site out of a Rackspace load balancer, but it was not relevant, as it only works for cloud-based load balancers and this customer has a dedicated hardware load balancer. The initial response from Rackspace was to phone them to take a server out of the load balancer, and then phone again to put it back in. Not what you want to hear when you are building an automated deployment process.

What was settled on was having the load balancer search for a specific file in the web root that contained a specific word. If the file didn't exist or contained the wrong phrase, the server was removed from the load balancer.

The next issue to overcome was deleting the file. This proved to be a bit more difficult than first thought.

There are Octopus variables #{Octopus.Release.Previous.Id} and #{Octopus.Release.Previous.Number}. I looked into these to see if they would tell me the previous release deployed on that deployment target, but unfortunately they return the details of the previous deployment overall.

If I had the following releases across the environments:

CI    QA    UAT   Production
1.1   1.1   1.1   1.1
1.2
1.3
1.4

When the 1.4 release was promoted to QA, I was hoping that the previous-release variables would refer to release 1.1, but unfortunately that was not the case: it returned 1.3 as the previous release. So, to be able to amend or remove the file that the load balancer looks for, I needed to run a PowerShell script that queries IIS and returns the physical path for a specific website.

This was performed using the following PowerShell script:


# Requires the WebAdministration module for Get-Website
Import-Module WebAdministration

$hostName = $OctopusParameters['website.hostname']
$sourcePath = (Get-Website -Name "$hostName").physicalPath
$sourceItem = "$sourcePath\specialfile.txt"

Write-Output "sourceItem: $sourceItem"

if (Test-Path $sourceItem) {
    Remove-Item -Path $sourceItem
}

<# Have been informed that the load balancer polls the site every 5 seconds to
   determine whether it is up. Pause the script for 30 seconds to ensure that the
   machine has been removed from the load-balanced environment. #>
Start-Sleep -Seconds 30

Once the deployment process was complete, all that was required was to create this file in the new folder location and wait for the server to be placed back into the load balancer. Once again, a PowerShell script creates the file with the relevant keyword:


$sourcePath = $OctopusParameters['base.folder']

Write-Output "sourcePath: $sourcePath"

New-Item "$sourcePath\Website\specialfile.txt" -ItemType File -Force -Value "KEYWORD"

<# Have been informed that the load balancer polls the site every 5 seconds to
   determine whether it is up. Pause the script for 2 minutes to ensure that the
   machine has been correctly identified by the load balancer. #>
Start-Sleep -Seconds 120

Both of these steps were added to the “Deploy Site CD” process, created as step templates first, of course.

Using the installed SSL certificates

Certificates were installed on all web servers at Rackspace. The Octopus deploy-package step allows you to automatically bind a certificate when it creates a site: you supply the certificate thumbprint, and when the site is deployed, all https bindings are automatically assigned the correct certificate.

URL Rewrite was installed on the servers, so I modified the web.config transformation files to insert a new rule redirecting all traffic to https:


  <system.webServer>
    <rewrite xdt:Transform="Insert">
      <rules>
        <rule name="serve specialfile" stopProcessing="true">
          <match url="(.*)" />
          <conditions logicalGrouping="MatchAll" trackAllCaptures="false">
            <add input="{REQUEST_FILENAME}" pattern="specialfile.txt" />
          </conditions>
          <action type="None" />
        </rule>
        <rule name="https redirect">
          <match url="(.*)" ignoreCase="false" />
          <conditions>
            <add input="{HTTPS}" pattern="off" ignoreCase="false" />
          </conditions>
          <action type="Redirect" redirectType="Found" url="https://{HTTP_HOST}{REQUEST_URI}" />
        </rule>
      </rules>
    </rewrite>
  </system.webServer>

In addition, the first rule makes sure that the specialfile.txt the load balancer checks for is still served.

This then proved a problem with how the load balancer was configured to determine whether a server was available, as the check was hitting the default website, not the website that had just been created. Within Octopus, with the Deploy Package option, you can specify which host headers you want to use, but no matter what combinations of variables we tried, I was not able to configure the step to create a binding with a blank host name.

All that was needed was a new step template that adds any additional bindings required, using this PowerShell script:


New-WebBinding -Name "$websiteName" -HostHeader "$hostHeader" -Protocol "http" -Port 80 -IPAddress "*"
New-WebBinding -Name "$websiteName" -HostHeader "$hostHeader" -Protocol "https" -Port 443 -IPAddress "*" 

To make the new site act as the default site (the site with no host name), just pass an empty string for $hostHeader.

Deploying Unicorn files using their own NuGet package

The original nuspec file included the Unicorn folder, and a task was then executed to move the Unicorn folder out of the web site and into the data folder. I amended this by creating a new nuspec file that contains only the Unicorn files. It was a one-line change to the existing TeamCity step to include the additional nuspec file in the list of specification files.
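For illustration, the Unicorn-only nuspec could look something like the following; the package id, paths, and metadata here are placeholders, not my actual values:

```xml
<?xml version="1.0"?>
<package>
  <metadata>
    <id>MyProject.Unicorn</id>
    <version>1.0.0</version>
    <authors>MyTeam</authors>
    <description>Unicorn serialization files only, deployed to the data folder</description>
  </metadata>
  <files>
    <!-- Include only the serialized Unicorn items, not the web site files -->
    <file src="Unicorn\**\*" target="Unicorn" />
  </files>
</package>
```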

[Screenshot: TeamCity NuGet Pack step with the additional specification file]

Then ensure that the new NuGet package is published in the Publish to NuGet step.

[Screenshot: TeamCity Publish to NuGet step]

The new package is then available for use in Octopus. As with everything, I created a step template to deploy it.

The step template is based on the Deploy a package step, with only the Custom installation directory feature enabled.
To use the template, two variables must be set:

  • #{UnicornPackageName}
  • #{UnicornDeploymentFolder}
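For example, in my setup the two parameters were set along these lines (the values here are illustrative, not my real ones):

```
UnicornPackageName      = MyProject.Unicorn
UnicornDeploymentFolder = #{data.folder}\Unicorn
```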

Here is the template to import.


{
  "Id": "ActionTemplates-109",
  "Name": "Deploy Unicorn Files to Data Folder",
  "Description": "Deploy Unicorn Files to Data Folder",
  "ActionType": "Octopus.TentaclePackage",
  "Version": 1,
  "Properties": {
    "Octopus.Action.Package.AutomaticallyRunConfigurationTransformationFiles": "False",
    "Octopus.Action.Package.AutomaticallyUpdateAppSettingsAndConnectionStrings": "False",
    "Octopus.Action.Package.DownloadOnTentacle": "False",
    "Octopus.Action.Package.NuGetFeedId": "feeds-builtin",
    "Octopus.Action.Package.NuGetPackageId": "#{UnicornPackageName}",
    "Octopus.Action.EnabledFeatures": "Octopus.Features.CustomDirectory",
    "Octopus.Action.Package.CustomInstallationDirectory": "#{UnicornDeploymentFolder}"
  },
  "Parameters": [
    {
      "Name": "UnicornPackageName",
      "Label": "Unicorn Package Name",
      "HelpText": null,
      "DefaultValue": null,
      "DisplaySettings": {
        "Octopus.ControlType": "SingleLineText"
      }
    },
    {
      "Name": "UnicornDeploymentFolder",
      "Label": "Deployment folder (data folder)",
      "HelpText": "The data folder to deploy the Unicorn files to. This should be the data folder, not the web root.",
      "DefaultValue": null,
      "DisplaySettings": {
        "Octopus.ControlType": "SingleLineText"
      }
    }
  ],
  "$Meta": {
    "ExportedAt": "2016-09-12T14:33:54.908Z",
    "OctopusVersion": "3.3.1",
    "Type": "ActionTemplate"
  }
}

Sitecore Scalability

If you have read the Sitecore Scaling Guide, there are multiple steps that you need to perform. For this installation, the following actions were performed:

  • Enable scalabilitysettings.config
  • Disable file based media
  • Configure HTML cache clearing
  • Configure a Machine Key
  • Remove all references to the master database from CD servers
  • Restrict access to client interfaces
  • Configure the Default Analytics Definition Database

scalabilitysettings.config

ScalabilitySettings.config.example is included within the Sitecore NuGet package I created. All I did was create a new script step to rename the file:


{
  "Id": "ActionTemplates-110",
  "Name": "Enable scalabilitysettings.config",
  "Description": null,
  "ActionType": "Octopus.Script",
  "Version": 1,
  "Properties": {
    "Octopus.Action.Package.NuGetFeedId": "feeds-builtin",
    "Octopus.Action.Script.Syntax": "PowerShell",
    "Octopus.Action.Script.ScriptSource": "Inline",
    "Octopus.Action.RunOnServer": "false",
    "Octopus.Action.Script.ScriptBody": "$websiteFolder = $OctopusParameters['website.folder']\r\n\r\nWrite-Host \"Rename-Item \"\"$websiteFolder\\App_Config\\Include\\ScalabilitySettings.config.example\"\" \"\"ScalabilitySettings.config\"\" \"\r\n\r\nRename-Item \"$websiteFolder\\App_Config\\Include\\ScalabilitySettings.config.example\" \"ScalabilitySettings.config\""
  },
  "Parameters": [],
  "$Meta": {
    "ExportedAt": "2016-09-12T14:36:43.831Z",
    "OctopusVersion": "3.3.1",
    "Type": "ActionTemplate"
  }
}

Disable file based media

I updated my project.UAT.config and project.production.config files to include a new entry that disables file media:


      <setting name="Media.DisableFileMedia" xdt:Transform="Insert">
        <patch:attribute name="value">true</patch:attribute>
      </setting>

Configure HTML cache clearing

Another simple one to complete. All I did was add the following to my project.UAT.config and project.production.config transformation files:


    <events>
      <event name="publish:end">
        <handler type="Sitecore.Publishing.HtmlCacheClearer, Sitecore.Kernel" method="ClearCache" xdt:Transform="Insert">
          <sites hint="list">
            <site>MyProject</site>
          </sites>
        </handler>
      </event>
      <event name="publish:end:remote" xdt:Transform="Insert">
        <handler type="Sitecore.Publishing.HtmlCacheClearer, Sitecore.Kernel" method="ClearCache">
          <sites hint="list">
            <site>MyProject</site>
          </sites>
        </handler>
      </event>
    </events>

Configure a Machine Key

This one took a little bit more work. I created a site in IIS, and then from the IIS Manager console selected the Machine Key feature.

[Screenshot: IIS Manager – Machine Key feature]

Click Generate Keys, and IIS will create the validation and decryption keys for you. Copy each value in full, except the trailing “,IsolateApps”.

I generated unique keys for each environment.

All that is required is to add the following to each web.config transformation file:


  <system.web>
    <machineKey xdt:Transform="Insert" decryptionKey="YOUR DECRYPTION KEY" validationKey="YOUR VALIDATION KEY" />
  </system.web>
  

Remove all references to the master database from CD servers

The easiest way to do this is to download SwitchMasterToWeb.config. I placed this on my network share, and then created a new task to copy the file to the Include folder on CD instances only.

I created this PowerShell script to accomplish that:


$sourcePath = $OctopusParameters['NetworkShare']
$destinationPath = $OctopusParameters['base.folder']

Write-Output "sourcePath: $sourcePath"
Write-Output "destinationPath: $destinationPath"

# The ZZZProcessLast folder ensures this patch file is processed after all other includes
New-Item -ItemType Directory "$destinationPath\Website\App_Config\Include\ZZZProcessLast"

Copy-Item -Path "$sourcePath\SwitchMasterToWeb\SwitchMasterToWeb.config" -Destination "$destinationPath\Website\App_Config\Include\ZZZProcessLast\SwitchMasterToWeb.config"

Restrict access to client interfaces

The Sitecore documentation, including the Security Hardening Guide, discusses disabling anonymous access to the Sitecore folders on all CD servers, and additionally restricting access by IP address. For all my CD servers, I still wanted to be able to access Sitecore, but only after I had RDP'ed into the server and was accessing it as localhost. When I disabled anonymous access, no one could access Sitecore, even from localhost. My solution was:

  1. Set the deny action for unspecified clients to Forbidden
  2. Add 127.0.0.1 as an allowed entry

This was done to the following folders:

  • sitecore/admin
  • sitecore/debug
  • sitecore/login
  • sitecore/shell/WebService

This is the template I created to perform this action


{
  "Id": "ActionTemplates-103",
  "Name": "Restrict Access by Domain and IP to specific folders",
  "Description": "Implement domain and ip restrictions on the folders supplied",
  "ActionType": "Octopus.Script",
  "Version": 12,
  "Properties": {
    "Octopus.Action.Package.NuGetFeedId": "feeds-builtin",
    "Octopus.Action.Script.Syntax": "PowerShell",
    "Octopus.Action.Script.ScriptSource": "Inline",
    "Octopus.Action.RunOnServer": "false",
    "Octopus.Action.Script.ScriptBody": "$iisAppName = $OctopusParameters['website.hostname']\r\n$folders = $OctopusParameters['folders'] -split \",\"\r\n\r\nforeach ($item in $folders) {\r\n\t#Set-WebConfigurationProperty -Filter \"/system.webServer/security/authentication/anonymousAuthentication\" -Name Enabled -Value False -PSPath \"IIS:\\Sites\\$iisAppName\\sitecore\\$item\"\r\n\t\r\n\tWrite-Host \"Set-WebConfigurationProperty -Filter /system.webserver/security/ipsecurity -Name allowUnlisted -Value \"\"false\"\" -Location \"\"$iisAppName/$item\"\" -PSPath \"\"IIS:\\\"\" \"\r\n    Set-WebConfigurationProperty -Filter /system.webserver/security/ipsecurity -Name allowUnlisted -Value \"false\" -Location \"$iisAppName/$item\" -PSPath \"IIS:\\\"\r\n\r\n\tWrite-Host \"Add-WebConfiguration -filter /system.webServer/security/ipSecurity -location \"\"$iisAppName/$item\"\" -value @{ipAddress=\"\"127.0.0.1\"\";allowed=\"\"true\"\"} -PSPath \"\"IIS:\\\"\" \"\r\n\ttry {\r\n    \tAdd-WebConfiguration -filter /system.webServer/security/ipSecurity -location \"$iisAppName/$item\" -value @{ipAddress=\"127.0.0.1\";allowed=\"true\"} -PSPath \"IIS:\\\"\r\n\t}\r\n\tcatch {\r\n\t    Write-Host \"IP already allowed\"\r\n\t}\r\n\t\r\n}\r\n\r\n\r\n"
  },
  "Parameters": [
    {
      "Name": "folders",
      "Label": "Folders to have ip security restrictions applied to",
      "HelpText": "List each folder that is to have domain restrictions applied to. These paths will only be available from 127.0.0.1. Each value is to be separated by a comma",
      "DefaultValue": null,
      "DisplaySettings": {}
    }
  ],
  "$Meta": {
    "ExportedAt": "2016-09-12T14:48:27.438Z",
    "OctopusVersion": "3.3.1",
    "Type": "ActionTemplate"
  }
}

What is important is passing in the list of folders to apply the action to, separated by commas. My variable was defined as:

sitecore/admin,sitecore/debug,sitecore/login,sitecore/shell/webservice

I could not figure out how to determine whether an IP address had already been added, so I had to wrap the statement in a try/catch. If I ever figure out how to read which allowed IP addresses have been defined, I will update my script.

This is the script that does the work:


$iisAppName = $OctopusParameters['website.hostname']
$folders = $OctopusParameters['folders'] -split ","

foreach ($item in $folders) {
    Set-WebConfigurationProperty -Filter /system.webserver/security/ipsecurity -Name allowUnlisted -Value "false" -Location "$iisAppName/$item" -PSPath "IIS:\"

    try {
        Add-WebConfiguration -filter /system.webServer/security/ipSecurity -location "$iisAppName/$item" -value @{ipAddress="127.0.0.1"; allowed="true"} -PSPath "IIS:\"
    }
    catch {
        Write-Host "IP already allowed"
    }
}
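One possible improvement, which I have not tested on these servers, is to read back the existing entries with Get-WebConfiguration instead of relying on the try/catch. The helper below is a sketch; Test-IpAlreadyAllowed is my own hypothetical function name, and the commented usage shows how it might slot into the loop above.

```powershell
# Hypothetical helper: given the entries returned by Get-WebConfiguration for
# /system.webServer/security/ipSecurity/add, report whether an IP is already listed.
function Test-IpAlreadyAllowed {
    param($ExistingEntries, [string]$IpAddress)
    # True when any existing entry has a matching ipAddress attribute
    [bool]($ExistingEntries | Where-Object { $_.ipAddress -eq $IpAddress })
}

# Untested usage inside the loop above:
#   $existing = Get-WebConfiguration -Filter "/system.webServer/security/ipSecurity/add" -Location "$iisAppName/$item" -PSPath "IIS:\"
#   if (-not (Test-IpAlreadyAllowed $existing "127.0.0.1")) {
#       Add-WebConfiguration -filter /system.webServer/security/ipSecurity -location "$iisAppName/$item" -value @{ipAddress="127.0.0.1"; allowed="true"} -PSPath "IIS:\"
#   }
```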

Configuring the Default Analytics Definition Database

Another easy one. Add the following to the project.UAT.config and project.production.config transformation files:


<sitecore>
  <settings>
    <setting name="Analytics.DefaultDefinitionDatabase" value="#{Sitecore.Analytics.DefaultDefinitionDatabase}" xdt:Transform="Insert"/>
  </settings>
</sitecore>
 
 

Now that everything has been completed, I have the following deployment process, which works on both single-server and multi-instance configurations:

  1. Windows – Ensure Hosts File Entry Exists
  2. IIS AppPool – Stop
  3. Close process locking deployment folder
  4. File System – Create Web Folder
  5. File System – Create Data Folder
  6. Deploy Unicorn Files to Data Folder
  7. Copy Sitecore Licence File
  8. Deploy Base Sitecore Website
  9. Deploy Site CM
    1. CM – Deploy Site
    2. CM IIS AppPool – Stop
    3. CM – IIS AppPool – Create
    4. CM – Load Site
    5. CM – Deploy Sitecore Package TransPerfect
    6. CM – Deploy Sitecore Package Gmap Location Picker Field
    7. CM – Unicorn Sync
  10. Deploy Site CD
    1. Take Site out of Load Balanced Environment
    2. CD – Deploy Site
    3. CD – IIS AppPool – Stop
    4. CD – Copy Master to Web Conversion file
    5. CD Enable scalabilitysettings.config
    6. CD – IIS AppPool – Create
    7. CD – Load Site
    8. CD – Deploy Sitecore Package Gmap Location Picker Field
    9. CD – Sitecore Folders Restrict Access by IP Address
    10. CD – Create Default Site Bindings
    11. CD – Copy Load Balance File
    12. CD – Ensure webserver back in Load Balancer

I timed this. From check-in, which automatically builds and deploys to CI, through manually promoting to QA, UAT, and finally Production, it takes 25 minutes. That could be cut down, as it includes all the delays put in place to ensure that servers have been removed from, and then returned to, the load-balanced environment.

Note

Not all of the new tasks created were deployed to every environment; a lot of them only ran on the UAT and production environments. What was suitable for me may not be relevant in your environment.

2 comments on “Building a Continuous Integration Environment for Sitecore Part 11 – Deploying to Sitecore Multi-Instance Configuration”
  1. M J says:

    Hi Darren,

    Many thanks for your quick response to my previous comment. I have finished setting up the environment based on your guide, which I am really grateful for. However, I am having a few issues loading the site in the live environment, some of which are caused by the changed location of the data folder for the Sitecore website: right now the data folder sits directly in the drive directory D:\[project]\Data, while the deployed Sitecore website sits in D:\Websites\[Project] with the web files in it. I was wondering if you could share a snapshot of how the website structure would look in the live environment?

    • Darren Guy says:

      The first thing I would suggest you look at is whether you are correctly transforming the location of the data folder. In my custom include file in the App_Config folder I have the following:

      <sc.variable name="dataFolder">
      <patch:attribute name="value">/Data</patch:attribute>
      </sc.variable>

      Then within my environment transformation files I have the following ( e.g. Z_ProjectName.Production.config )

      <sc.variable name="dataFolder" xdt:Transform="Replace" xdt:Location="Match(name)" set:value="#{data.folder}" />

      #{data.folder} is an Octopus variable. Next, check your variable in Octopus, and what scope it is set to; hopefully that will help solve your problem. I found that nearly every time I had a similar issue, it was the environment scope.

      All conditions on the environment scope have to be true before the variable is used.

      As for me, this is the folder structure that I use:
      D:\Websites\Project.Name\Data
      D:\Websites\Project.Name\Website
