Line-of-business apps at Microsoft

  • June 15, 2019


We have migrated almost all of those to Azure, using Azure PaaS and serverless technologies. So this is the basic architecture that we follow for the order management and fulfillment applications that we have. We have lots of partners in our supply chain, and they connect to the endpoints we expose through API Management (APIM) in Azure. In APIM, we redirect the requests to the web app, and for load-balancing purposes we are using Azure Traffic Manager.


So the API app here is a web app, deployed to two different regions. Once the message is with the API app, it goes into storage and the Service Bus. The Service Bus has just the metadata and storage has the payload, and then the processor comes in, either a Logic App or a Function App.

We switch between those based on the service requirement. The Logic Apps and Function Apps are just the processors that do the processing of the messages.

>> So you have a bunch of CI in a bunch of different PaaS services. So what does that look like? >> So we have PaaS services, right? This is what it is.

These are all the components that it has. >> So what do your CI and CD pipelines look like on this? >> Yeah, I can go through that.

So for all of these components we have enabled gated check-in: developers can check in only if the unit tests pass and the solution builds. This is the daily build that we run; we have created it in VSTS. If I go to the edit view here, we can see all the steps that are in it. StyleCop just makes sure the coding style the developers are using is consistent, and then we restore the NuGet packages that the solution uses. The certificate step is interesting: we had to introduce it because of CredScan, so that we don't have to check in certificates and upload them to the build machines. So we started using PowerShell to create the certificates on the fly in the build, keep them in the drop folder, run unit tests on top of that, and create a drop folder on the build machine.
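As a rough illustration of that certificate step, here is a minimal PowerShell sketch of the kind of script a build task could run; the subject name, password variable, and drop-folder path are placeholders for illustration, not the actual values from the pipeline:

    # Create a short-lived self-signed certificate on the build agent (assumed subject name).
    $cert = New-SelfSignedCertificate -Subject "CN=ci-functional-tests" `
        -CertStoreLocation "Cert:\CurrentUser\My" `
        -KeyExportPolicy Exportable `
        -NotAfter (Get-Date).AddDays(2)

    # Export it as a PFX into the artifact staging directory so the release can pick it up.
    # CERT_PASSWORD is an assumed secret build variable, not one named in the talk.
    $password = ConvertTo-SecureString -String $env:CERT_PASSWORD -AsPlainText -Force
    $pfxPath  = Join-Path $env:BUILD_ARTIFACTSTAGINGDIRECTORY "ci-functional-tests.pfx"
    Export-PfxCertificate -Cert $cert -FilePath $pfxPath -Password $password | Out-Null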

The release then uses all of those packages. This one is for the web app and this one is for the Function App; we have two different Function Apps, which is why you see two packages for the Function App, and then we publish that drop folder. Once the daily build has run, and we have set it to run every night, a release is triggered based on that daily build.

You see different environments here. In Azure, we have three resource groups for each of the services that we are building. One is CI, continuous integration, which is where the daily builds get released.

They are in two different regions, which is why you see these two. The next environment is UAT: once the feature is built and tested, functional tests and all of that done, it goes to user acceptance testing. But there are approvals set for this; somebody has to approve before the deployment happens there.

Then obviously production: once user acceptance testing passes, we go to the production environment. So all of these are chained in our release pipeline. CI-WestUS is the first one that gets deployed. You can see there are around 30 tasks, and what they are doing is a combination of things.

The whole release is a combination of running PowerShell scripts and deploying with ARM templates. So for the resource group, it is creating the resources. I can go into detail on any of these steps.

If somebody has any questions. But at a high level, it is a Service Bus deployment. This is also interesting: once the Service Bus has been created, we need to know its connection string and use it, for example to set up the functional tests. So in the VSTS release we can create a VSO variable at runtime which will actually hold the connection string to that particular Service Bus.
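As a hedged sketch of that idea, a release-time PowerShell task could read the Service Bus key and surface it as a pipeline variable; the resource names are placeholders, and the original pipeline used the Azure PowerShell modules of the time rather than necessarily these exact Az cmdlets:

    # Read the primary connection string for an (assumed) authorization rule on the namespace.
    $key = Get-AzServiceBusKey -ResourceGroupName "ci-westus-rg" `
        -Namespace "ci-orders-sb" -Name "RootManageSharedAccessKey"

    # Expose it to later tasks (e.g. the functional tests) as a secret VSO variable.
    Write-Host "##vso[task.setvariable variable=ServiceBusConnectionString;issecret=true]$($key.PrimaryConnectionString)"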

So all of that can be enabled through the ARM Template. If you look into the ARM Template, we are using a VSTS task called Azure Resource Group Deployment. We are saying that this is the template that I have to use from that drop folder. These are the parameters for that particular template.

All of that, we have defined in our Checked-in solution. Then this is again the same task, but for the Event Hub. Then we are storing the Event Hub connection string.

The storage account and then the containers need to be deployed in the Blob storage account that we are using. We also upload certain files, configuration files, to the Blob; that step is there. So each step is well separated out, and we keep storing the connection strings or keys that we need for the functional tests to run. For Cosmos DB, or DocDB, we have the same kind of step. Then in Cosmos DB, we need to create a collection, so there is a PowerShell script that we have written for that.
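That collection-creation script might look roughly like the following; it assumes the newer Az.CosmosDB cmdlets and placeholder names, whereas the original script targeted the APIs available at the time:

    # Create a SQL (DocumentDB) container with an assumed partition key and throughput.
    New-AzCosmosDBSqlContainer -ResourceGroupName "ci-westus-rg" `
        -AccountName "ci-orders-cosmos" `
        -DatabaseName "Orders" `
        -Name "PurchaseOrders" `
        -PartitionKeyKind Hash `
        -PartitionKeyPath "/purchaseOrderId" `
        -Throughput 400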

Then we are also using Application Insights, so you'll see those steps for that: the ARM deployment as well as a PowerShell script to get the variable out of it. For the Function Apps, there are two function apps, as I had said. This is interesting: this is Tokenize with XPath/Regular expressions. What it does is, once you have a config file checked in to your solution with very specific semantics, a token wrapped in double underscores matching a VSO variable name, the out-of-the-box tokenize task will just put the values into that configuration. We are using Key Vault, so all those storage connection strings and Service Bus connection strings are actually stored in Key Vault.
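For illustration, this is roughly what such a token substitution does; a minimal PowerShell sketch, assuming a config file containing placeholders like __ServiceBusConnectionString__ and assuming the matching variables are available to the task as environment variables:

    # Replace __Name__ tokens in a config file with the values of matching environment variables.
    $configPath = "FunctionalTests.dll.config"   # assumed file name
    $content = Get-Content -Path $configPath -Raw

    foreach ($m in [regex]::Matches($content, '__(\w+)__')) {
        $value = [Environment]::GetEnvironmentVariable($m.Groups[1].Value)
        if ($null -ne $value) { $content = $content.Replace($m.Value, $value) }   # leave unknown tokens untouched
    }

    Set-Content -Path $configPath -Value $content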

So this is cool. Then this Function App task is for the other function app that I was talking about. Then the web app, which is getting deployed to the primary region; we are using the ARM template and a PowerShell command to update the config for the web app. Then Upload Certificate: the build created the certificate, so this step just uploads it.

Then this is the functional test config, where we are just using a PowerShell script to update the config file. Then we run the functional tests using the Visual Studio Test task. Once the functional tests pass, that means we are good with the CI environment, and the deployment to the next environment starts.

>> So, I've got a couple of questions here. You mentioned CredScan. Is that an external or internal tool? What is CredScan?

>> CredScan is internal. >> Okay. It's an internal tool that I think we use to go check code that's being checked in. >> Yes. >> Basically, to make sure that you're not storing any secrets.

>> Any secrets or certificates, yes. >> I think it's an org-level policy that for every check-in you make, it makes sure that you're not checking in any credentials. So right now in our organization, when you check something in, that particular build is triggered and it checks for credentials in that particular check-in, right?

So, I think that's basically an organization-wide policy right now, right? >> This is something that VSTS does. >> Yeah. >> That's kind of nice.

I had a question for the three of you. You talked a bit about how the ARM template deployments you're using at this point are fairly narrow; you can have an ARM template that has everything under the sun in it. So what have you found are the pros and cons, the balance, especially when you start to talk Service Fabric or data warehousing? What are the pros, and what's the balance that you found?

>> I can answer that. So, we started with that. We created an ARM Deployment Project in Visual Studio and we kept all our ARM templates in it. We had just one task in the VSTS release, and everything got deployed with that one ARM deployment.

But the problem with that was that whenever there was a single failure with a particular component, we didn't have much control through the release template. That's when we decided to try out separate ARM templates as individual tasks in the VSTS release. That just worked for us, but if you have a very simple deployment, I would suggest the ARM Deployment Project: just have that one task and it does everything.

That's the best, if you don't have to change your components frequently and you do not do very frequent deployments. >> Okay. So, we've walked through that, really. >> Yeah. I would just say I totally concur with what Heena said.

It has always depended on the project size for us, especially in data warehouse work; we want to bucketize stuff, so we want to know exactly which step failed and why it failed. One additional thing, from a data warehouse or big data point of view, is that something may not be available out of the box. Again, VSO allows you to write your own components, use them, and then publish them org-wide so that others can use them. That's an amazing feature which helps us leverage work across all of Azure.

>> Do you want to show us some of your stuff now? >> Sure. So, I can showcase and talk about the data warehouse, the CI we have implemented in the data warehouse. This project is actually a SQL data warehouse, and this basically shows you it's not easy to be green all the time.

So, you'll see some of the tasks are orange, and each of these columns depicts an environment, right? This is where the production deployment has happened, so you'll see all four green, but on the rest of the dates, only the integration is happening. So some of the tasks might have failed, probably a unit test or two, and that is a good way to make sure whether your code is completely in place or not: with your check-ins, are you breaking something or not, right?

So, as an engineering team, that keeps us honest. Let me go open one of the releases. >> I can imagine yours is the team that has "it's not easy being green", Kermit the Frog, up on the wall somewhere. >> Yeah. So, we have three environments, technically, if you look at it: there's one integration.

But you will see the integration environment is in two parts, and there is a reason we had to do that. Then we have the end user environment, or what we call UAT; it's a hybrid environment. We were able to cut down the number of environments because of the ARM and CI efforts we have put in, and then there is production. Now, if you look at integration environment part one, what it is doing is actually deploying the entire data warehouse from scratch.

So the steps here are: we take the build once it is created after a code merge. "Azure Key Vault": as Heena mentioned, all our secrets are in Key Vault, not compromised in any way, so we fetch all the secrets from Key Vault. Then this step is about procuring the SQL Azure server. This is a true PaaS implementation where the environment doesn't exist beforehand; we create it on the fly, and after deploying and testing everything, we decommission it.

So this way, we procure our hardware, and then we copy the database from production. In a data warehouse, one of the biggest challenges is how you can really be sure that whatever you have done is really working, because you need production-equivalent data, right? But then you have a lot of compliance requirements where you cannot copy data from production to pre-production environments.

Well, this is the answer: the environment doesn't exist permanently and no one has access to it; only a service principal has access. It creates the database, copies the database from production, and runs all the test cases.
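The procure-and-copy steps could be sketched in PowerShell roughly as follows; the server, database, and resource group names are placeholders, and the original scripts used the Azure modules of the time rather than necessarily these Az cmdlets:

    # Procure a throwaway logical SQL server for the integration run (assumed names).
    $adminCred = Get-Credential   # in the pipeline this would come from Key Vault, not a prompt
    New-AzSqlServer -ResourceGroupName "dw-int-rg" `
        -ServerName "dw-int-$(Get-Date -Format yyyyMMdd)" `
        -Location "West US" `
        -SqlAdministratorCredentials $adminCred

    # Copy the production database onto the new server so tests run against prod-equivalent data.
    New-AzSqlDatabaseCopy -ResourceGroupName "dw-prod-rg" `
        -ServerName "dw-prod" -DatabaseName "Warehouse" `
        -CopyResourceGroupName "dw-int-rg" `
        -CopyServerName "dw-int-$(Get-Date -Format yyyyMMdd)" `
        -CopyDatabaseName "Warehouse"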

One of the important things is that some capabilities are not available out of the box. For example, most of us would know about DACPAC deployment, which is a very, very powerful thing: it creates a diff and deploys it on top of the existing database, so you can hardly do without it. But it has a limitation.

Let's say I have a column which gets renamed for business reasons, so the schema has changed. The DACPAC has a problem, or a gap, there today: it thinks the existing column was dropped, so it drops that column and creates the new column again.

So then, your data is lost. There are a few steps we take care of, like renaming the column before we actually go and deploy the DACPAC. We use a PowerShell script to rename the column so that when the DACPAC deploys on top of it, you don't lose data. Then there are a bunch of other things, like scaling up the server.
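That pre-deployment rename could be as simple as the following sketch; the table and column names are placeholders, and it assumes the SqlServer module's Invoke-Sqlcmd is available on the agent:

    # Rename the column ahead of the DACPAC diff so it is not treated as a drop + add (data loss).
    $query = "IF COL_LENGTH('dbo.Orders', 'CustomerRef') IS NOT NULL
                  EXEC sp_rename 'dbo.Orders.CustomerRef', 'CustomerReference', 'COLUMN';"

    Invoke-Sqlcmd -ServerInstance "dw-int-$(Get-Date -Format yyyyMMdd).database.windows.net" `
        -Database "Warehouse" `
        -Username $env:SQL_ADMIN_USER -Password $env:SQL_ADMIN_PASSWORD `
        -Query $query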

So, when we copy the database, if production is running a Premium P2 edition, we want to run our test cases faster. So what we do is scale it up, using Azure's capability, to make sure everything runs fast. We scale it up to a P4 or P6 tier, and after doing a bunch of other operations, where we have a lot of encryption and decryption pipelines, all of those things get deployed. Then finally, we run our unit tests. Unit tests are very, very important for any project, and in the data warehouse world we have always been lagging behind.
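Scaling the copied database up for the test run is essentially a one-liner in PowerShell, roughly like this sketch (placeholder names, and the service objective here is just an example tier):

    # Bump the integration copy to a higher performance tier so the test run finishes faster.
    Set-AzSqlDatabase -ResourceGroupName "dw-int-rg" `
        -ServerName "dw-int-$(Get-Date -Format yyyyMMdd)" `
        -DatabaseName "Warehouse" `
        -Edition "Premium" `
        -RequestedServiceObjectiveName "P4"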

So, this is an honest effort to have unit test cases in place even here. By the time this integration environment is done with all its unit test cases, you are pretty sure that whatever code has been added is pretty much tested. Then obviously we do the Azure Data Factory deployment, since all of our jobs that port data from one point to another are Azure Data Factory jobs. If you look at it, there is a reason we have forked it into part two. One core requirement for any platform is to make sure that your functional test cases are done. Now, the functional test cases in a data warehouse can be a little tricky, in that you may not be able to achieve one hundred percent green functional test cases, right?

As you have seen, if even one test case fails, you turn orange. But the unit test cases should be one hundred percent, right? So we wanted to make sure that on any day, even if one unit test case fails, we get to know about it without really digging in, so we forked it. Our functional test cases are part of part two, whereas the unit test cases are under part one. So any day we have an orange on our very first environment, we know our UTs have failed, and the developers jump in to fix it.

Whereas with the functional test cases, sometimes there is a data discrepancy, the source system has probably been refreshed by the time we deployed, so some of the test cases might still fail. But then you know what is really going on, and that's why it is forked. As you see, at the end of it, we decommission the whole environment. So this environment stays up for about four to six hours, depending on how much data we have, and we are able to use CI to build a new environment every day with the new code we check in. >> So, having the environment only up for a limited amount of time, with only a managed service identity having access, gets you covered on the security and release side.

All of those things reduce your risk. >> Exactly. >> When you said you scale it up, then how do you do your regression analysis when you compare?

Is it purely based on the prior runs? Because you can't really compare your tests at UAT to production if your test environments are P4 and your production is P2. So how do you resolve that? Is it purely based on past tests? >> So, we have kept that separate. The idea here is how fast you can build the whole environment. The regression test, or performance test, is a separate operation in itself.

So, we are in a DevOps model, and we have one person, a DRI or Directly Responsible Individual, whose job is to keep a good track of performance, and Azure provides a lot of performance indicators on its own. We try to cut down on custom implementations and keep an eye on our performance using those. So we don't try to do regression through this whole process, because, by the way, I probably didn't mention that we have a zero-downtime data warehouse system, which is very difficult to get to.

The reason we are able to do it, again, is that our job is to make sure that we deploy faster than anyone can imagine from a customer point of view. So we have not kept the performance indicators as part of the whole thing; it is a separate entity in itself. >> So, you really just scale up to make sure that you can get that deployed as quickly as possible, and then bring it back down afterwards. >> And it's very important, when you talk about DevOps, I see four pillars of it. The very first is the planning, when you're doing the coding; the second is CI; the third is CD, which is what we are talking about here; and the last one is monitoring. So, I think the concept of DevOps is that it's okay to fail, but it's very important to recover from it: you fail fast, but you recover from that failure quickly.

I think that's the beauty of DevOps. Using all these technologies, all this automation, using VSTS, we are able to achieve those things. >> That's true. And your team is doing Service Fabric. >> Yeah. >> So, what are the things that you'd like to show that are different when it comes to doing CI/CD for Service Fabric?

>> Yeah. One year back our application was, and it still is, monolithic, but over the last year we have been splitting it into smaller microservices. As I was saying, it is very important to shorten your life cycle there, right? If I ask you how much time it takes for one piece of code change to get into production, that's the life cycle we want to reduce; that is what we want to achieve here as per the DevOps methodology. So, let me show our CI build definitions. We are using Azure Service Fabric here, and this is our daily build definition. It's pretty straightforward: you can see we are building the solution here and running our unit tests here.

With every check-in, basically, we are making sure that you are not corrupting our source code. This is a very important step in the build definition when you are using Azure Service Fabric: it creates a service package for you, which is a combination of code and configuration that you will use in your release definition to deploy to a particular cluster, right? These two tasks are important because, during our build, we use a flag called the deterministic flag. This flag makes sure that whatever input your compiler gets, it is going to create the same output. So, if there is no change in your code, your binaries will be exactly the same as before. We want to use this feature later so that we do not deploy any service which has no change, right?

Because we don't want to waste resources there. You can see these two tasks are needed because PDB files always change whether you are using the deterministic flag or not, so we have to delete the PDB files.
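As a rough sketch of those two pieces: the deterministic flag is an MSBuild property, and the PDB cleanup can be a tiny PowerShell step. The paths and the way the property is wired in below are assumptions, not the exact build arguments from this pipeline:

    # The build task would pass something like /p:Deterministic=true to MSBuild so identical
    # inputs produce identical binaries (assumed to be set via the MSBuild Arguments field).

    # Strip PDBs from the packaged output, since they differ on every build even with
    # deterministic compilation and would defeat the "changed or not" comparison.
    $packageRoot = Join-Path $env:BUILD_ARTIFACTSTAGINGDIRECTORY "ApplicationPackage"   # assumed path
    Get-ChildItem -Path $packageRoot -Filter *.pdb -Recurse | Remove-Item -Force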

So, this is important for this particular step. In this step, what we are doing is updating the versions in the manifests of the services which we want to deploy to a particular cluster. The idea is that if there is no change, there should not be any version increment.

Right. So this task makes sure that if you check this box, it will only update the version if there is a change. But for this task to work, the deterministic flag is very important.

So make sure that you're using those things. And then there are some Fortify scans which we run on our codebase to do static code analysis. We are using a third party here, Fortify servers, which scan all the code and give you a report that everything is in good shape.

We also run CredScan during our daily builds, and ultimately we copy our artifacts so that they can be picked up by the release definition to kick off the CD part. So this completes the CI part, and we have the package ready for our release pipeline. Let me go to the release pipelines. You can see we have continuous deployment enabled for this particular daily build. We have three environments, as you know: the development environment, then UAT, and then ultimately we go to production.

You can see it's very simple, with a very low number of tasks. First, as Heena and Naval were mentioning, we are using Key Vault for storing our secrets. So the only thing we put into our services is the thumbprint of the particular certificate for a particular environment; that's the only part we need to do. The rest, the services will fetch when they run. This next task is very important for Service Fabric.

This task deploys the packages you produced in your build definition to a particular cluster. There is a field called Cluster Connection where you have to specify the cluster you want to deploy these services to. There are several ways you can create these connections. Right now we are using a certificate-based connection, but you can also use AAD authentication and service principals; you can use those connections as well.

Then you just have to specify the cloud publish profile and the application parameters file, and that's it: it uses that application package to deploy those services to your cluster. Then we simply run our BVTs against that service deployment, and we send out a report notification saying whether all the tests passed or whether there were failures. This last task is also very important: every time you deploy something to an environment, we make sure that all the cloud resources are secure enough and follow the organization's policy from a security perspective.
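Under the hood, a certificate-based deployment to a cluster looks roughly like this PowerShell sketch using the Service Fabric SDK cmdlets; the endpoint, thumbprint variable, names, and version are placeholders, and the VSTS task essentially wraps calls of this kind for you:

    # Connect to the cluster with a client certificate (assumed endpoint and thumbprint).
    Connect-ServiceFabricCluster -ConnectionEndpoint "mycluster.westus.cloudapp.azure.com:19000" `
        -X509Credential -FindType FindByThumbprint -FindValue $env:CLUSTER_CERT_THUMBPRINT `
        -StoreLocation CurrentUser -StoreName My `
        -ServerCertThumbprint $env:CLUSTER_CERT_THUMBPRINT

    # Copy the application package to the image store, register the type, and run a rolling upgrade.
    Copy-ServiceFabricApplicationPackage -ApplicationPackagePath ".\ApplicationPackage" `
        -ImageStoreConnectionString "fabric:ImageStore" -ApplicationPackagePathInImageStore "MyApp"
    Register-ServiceFabricApplicationType -ApplicationPathInImageStore "MyApp"
    Start-ServiceFabricApplicationUpgrade -ApplicationName "fabric:/MyApp" `
        -ApplicationTypeVersion "1.0.42" -Monitored -FailureAction Rollback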

There is a task provided by the VSTS team which scans a particular resource group and checks whether there are any security vulnerabilities in those resources. You can see I specify two resource groups here, which hold all my resources. The reason for having two resource groups is that we want to keep the cluster-specific resources in one resource group and all the dependencies in the other. By dependencies I mean things like Key Vault or DocumentDB, the dependencies which hold the data.

We kept them separate so that if, tomorrow, we want to purge the cluster itself, we can do that without worrying about our dependencies, which in our case are Key Vault and DocumentDB. That also helps us run the security scans for those cloud resources. So this was all about having CI/CD pipelines for Service Fabric, and it's very useful. Because right now, whenever we go to production, it's not a ceremony for us. One year back, with the monolith, it used to be a ceremony.

We had to ask for downtime, saying, "Okay, we are deploying things here and it's going to be down for two or three hours." But now, with these microservices, there is zero downtime. Every release is a rolling upgrade, and no downtime is visible in the applications themselves.

>> So I've got a few questions for each of you then. That's interesting, especially with Service Fabric, which is built to be up all the time. When you're doing your delivery, your release into production, Heena, for your application, or for some of the data warehousing stuff, what is that level of? Is there a few seconds or a few minutes of downtime? What does that mean for your world? >> There's no downtime.

There is a staging environment where the bits go. If the functional tests pass, they are promoted to the production environment in Azure. But if they fail, it doesn't go to the production environment. >> Okay.
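If that staging-to-production promotion uses App Service deployment slots, which is one natural reading of the setup described here, the promotion step could be sketched like this; the slot and resource names are placeholders, and the release may equally use the built-in slot swap task rather than a script:

    # Swap the staging slot into production once functional tests against staging have passed.
    Switch-AzWebAppSlot -ResourceGroupName "prod-westus-rg" `
        -Name "orders-api-app" `
        -SourceSlotName "staging" `
        -DestinationSlotName "production"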

Well, schema changes are the things that people have to know about. >> Yeah. >> So what about those? >> So how we actually deal with this is that we build a parallel environment while the current environment is being accessed by the customers. The parallel environment gets built.

So today, we tell our customers it's a zero-downtime deployment. We still give a window of 15 to 30 minutes where we say there may be an intermittent issue, because once the deployment is over, we have to swap. We have been doing it for the last year, and to date, none of the customers has reported that their reporting got interrupted, because we try to do it in those hours when probably no one is using it. So technically, in that sense, we have zero downtime, but the swap itself could still take a couple of minutes. >> Something that I'm interested in.

I think it probably still helps if you have an agreement with the business and the other teams that, "Hey, on a daily or weekly recurring basis, in this time slot, there might be an intermittent issue." That way there is still planning, because I think one of the biggest misnomers is that DevOps means there is less planning once you are doing things in DevOps. There is still planning involved with everything that you do; you're just releasing and building quite a bit more often.

>> Definitely for the production environment, yes. >> One of the other questions that came up as well: Vikram, you were talking about your Service Fabric setup and how you could wipe out a cluster. So I'm curious how you handle that.

If you're going to wipe out the cluster, it had better be all stateless services. So do you have stateful services, and what are you doing for those? >> Yeah, that's a great question. So, we do have stateful services.

Initially, when we planned it, we kept this resource group separate. But when we launched the stateful services, we realized, "Okay, we cannot purge the resource group itself, because now we have state with the services." So the approach we are using right now is to take a dump of it: in case we want to purge the environment, we have a backup so that we can restore all those transactions from that backup. >> Okay.

>> Moreover, at the recent Microsoft Build, we saw that the Service Fabric team is providing more backup options now, which are faster. You no longer have to go to external storage; you can use that for storing all your backups. So I think with all those capabilities, we can still achieve what we want, to purge the environment and all that.

>> So part of your deployment could be to back that state up or move it off, then wipe the cluster, rebuild the cluster, and move your transaction state back on. >> Yeah. >> Okay. One of the other things that all three of you touched on was security and Key Vault.

I know Azure Key Vault has made a lot of changes in the last two years, especially within the last year with Managed Service Identities, and a lot of the other build tasks making things easier. So, can you talk a little bit about Key Vault and secrets management? What are the things to be aware of, or what have you had to do in your environments? >> Yeah, I can talk about that. So earlier, we were using the app.config to keep the Key Vault connection strings and keys.

But we had to encrypt them with a certificate so that they were not visible. With the current tighter security, we are using the MSI option, and it's very straightforward. There was a little bit of code change required for us to get that MSI token, but apart from that, we no longer have to use certificates just to encrypt those keys and keep them there. So that is one learning that we had.
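As a hedged illustration of what the MSI flow looks like, here is a PowerShell sketch of requesting a token from the instance metadata endpoint and reading a secret; the actual change described here was in the application code, the exact endpoint differs between VMs and App Service, and the vault and secret names are placeholders:

    # Ask the local managed identity endpoint (IMDS, as exposed on Azure VMs) for a Key Vault token.
    $tokenResponse = Invoke-RestMethod -Headers @{ Metadata = "true" } -Uri `
        "http://169.254.169.254/metadata/identity/oauth2/token?api-version=2018-02-01&resource=https%3A%2F%2Fvault.azure.net"

    # Use the token to read a secret over the Key Vault REST API (assumed vault and secret names).
    $secret = Invoke-RestMethod -Uri `
        "https://orders-kv.vault.azure.net/secrets/ServiceBusConnectionString?api-version=7.0" `
        -Headers @{ Authorization = "Bearer $($tokenResponse.access_token)" }
    $secret.value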

>> Just to add on, this MSI feature is still not available for Azure Service Fabric. So if you're deploying to Azure Service Fabric, you're still waiting; I think that feature is coming. But right now, what we are doing is storing the thumbprint of the certificate, so we are not storing any secrets in our definitions.

But the clusters have the certificate installed in the cluster itself, and then it takes care of everything. So we are still safe: we are not storing any secrets in our release definitions, and we are not storing any secrets in our code base. But I think in the future those features are coming up on the roadmap for Azure Service Fabric. >> Okay, that will just make it easier.

>> Just to add, we use the same approach, in fact, to make it more secure. Today, from the VSTS point of view, from the release variables point of view, we use service principals to install everything. VSTS still has a limitation where it depends on a key instead of a certificate.

Though it's pretty secure, my team has been trying to look into how to get rid of that kind of dependency and not have those keys, because those keys are mostly valid only for a year or two. So that's the only area where things could probably change in the coming days. But as of now, end to end, all our secrets and passwords are secured in Key Vault without even the engineers having access to them. >> Another question I had, since a couple of you talked through your different environments, the nightly, UAT, prod, and pre-prod: what does your gating and branching strategy look like for your apps? What does that side of this look like?

>> For us, every developer has a local branch that they can publish to the server whenever they want to keep their code safe. But once they are ready with their code, all the unit tests, at least one happy-path unit test and a functional test, are there; that's the minimum we ask all the developers to follow. Then they can merge with the main branch, which we call the develop branch. That's where the gated check-in build gets triggered, and they are able to check in only if the solution builds and the unit tests pass, so that we have sanity on the code which is in the develop branch. So that's the branching strategy. >> When Dev is develop, then, when do things go from develop to main, or is that?

>> Yes. Once a feature is built, that’s where we want it to go to production. >> Okay. >> That’s where there are checks about the functional tests and unit tests being complete.

They are not fully automated yet, but at least the test automation is there. Somebody has to verify that all the test cases are covered, especially the functional tests. Once they are there, that's when we push from the develop branch to the master branch. The master branch is the one which gets deployed to all the environments. >> Okay. So essentially, you have local devs on their branches merging to develop, which is your integration point for all the devs, and that is building every night.

So every night, that builds to make sure nobody is breaking everybody else's stuff. Then from time to time, when the business and the engineering team deem it appropriate, you essentially use Release Management to move that out to production. >> Yes. From the developer's perspective, we take care of the functional tests covering everything. But obviously there is UAT, user acceptance testing, where the user has to say, "Okay, yeah, I'm good." We are only involved there if there are any issues.

Mostly our PMs interact with the partners, or the internal teams, and make sure that everything is passing and working functionally, and then we approve the production deployment after that. >> I think when we talk about the DevOps philosophy, there is nothing called "production": every piece of code which you are checking in should be production-ready code. So, for us, when we do feature development, we create a feature branch and everybody starts working on that. Eventually, whenever somebody wants to check in, it has to be merged with master with all the sanity checks: you're done with the unit tests, and you're not checking in any credentials.

So, all those minimum-level checks are there while you're checking in to master. I think this is very important because it's about changing the mindset. Think of it as, while you're checking in, you're checking in to production right away, because you have the automated build pipelines with you and the code is going to snap into production right away.

So, I think this is very important: with all this automation, with all these tools, VSTS and everything, we want to achieve that, so that there is continuous delivery of value to the end users. >> So your team, the developers on the Service Fabric stuff, are committing right into a particular feature branch, which is actually building and deploying to some sort of environment on a nightly basis. Then when do you make the call that your feature branch gets merged with master, and what does that look like for you? >> Yeah. So, I think we have gated release pipelines.

We run our functional tests and unit tests, and we also check whether there are any exceptions in our system while moving from one environment to another. We have gates for that: if we find exceptions of a particular type within a period of time, we block that transition to the next environment. Those features are there in VSTS and we are leveraging them, so that ultimately the code which is getting merged and built in a particular environment is good enough to go into production. Moreover, with these automated builds, we are getting very rapid feedback from our stakeholders as well. We have two-week sprints, and every sprint we have a review with our stakeholders, and they provide quick feedback to us, so that we can quickly apply that feedback using this automation.

Because imagine if we didn't have these release pipelines, we couldn't achieve those things right away. So, this is very important when getting into the DevOps methodology. >> Yeah.

So one of the other things that I haven't really mentioned is the fact that, for all of our line-of-business applications, we all live in the same project, in the same VSTS instance. That means any of us checking in code can go search everybody else's Git repository to find things, which I think is one of the other important accelerators. It means that all the thousands of engineers in Core Services that are developing line-of-business apps can see each other's code, can go fix things, can run scans, can go look and say, "Who are the teams that are doing Service Fabric," by doing a code search, and go learn from those teams.

But I think one of the other really important things there, in our VSTS instance for example, is that we have some federal-level rules: you have to do this, there are no exceptions. Then there are state- and city-level rules, so each of the applications, the "cities", has autonomy. But at the federal level, we've said, "Hey, everybody is going to live in the same project"; I'm sure on the screens you've seen that little moniker, One ITVSO. At the federal level, we've set up iterations in a default two-week cadence across everyone. So essentially, at our federal level, you can't go around this: everybody has to use Git.

The two-week sprints are set up for everyone. If for some reason a team needs longer than two-week sprints, they're free to change that, but two weeks is the default for everyone. Then the people that run our VSTS instance are really good about running credential scans, scans looking for secrets, and other checks on every single build that goes through our system. I don't know how many Git repositories there are today, but I know we do over 1,000 releases all the way through the pipelines per day, and over 1,000 builds per day, and it's all completely automated. So, I want to get back to: what does your branching strategy look like for the data warehousing stuff? Is there anything unique in there?

>> It's more or less along the same lines. The only variation is that since we don't have too many pre-production environments, we just have one hybrid environment which we expose to the end-user customers; we do not have a develop branch.

So, as an engineer, every sprint I branch out from master and work on my story. As Heena mentioned, we have gated check-in: the moment you create a pull request, the CI build will fire and everything gets checked, whether the build compiles and so on. The reviewers intentionally look for a couple of things: how we cover the unit test and functional test scenarios, apart from reviewing the code itself; that's part of our behavior now. So if we don't find the UTs or the functional test cases covering the code being written, we reject it.

>> Are you talking about the pull request feature in VSTS? >> Yeah, I'm talking about the pull request, since we are doing the gated check-in thing. Then, once everything is approved, it gets merged with master.

Once it is in master, every second week, which is every sprint, we deploy to our pre-production UAT environment, the hybrid environment, which is open for our product owners and even the end users to go and test. That way we have tried to keep it very lean. Everyone knows how to branch out and merge back, and what it means when you merge back: it means it's going to go to UAT. >> So essentially, instead of things deploying to production on a nightly basis, everybody knows, "Hey, at the end of every two weeks, the UAT environment is going to be updated, and whatever was in the prior UAT environment may be rolling to production at some point."

I told myself I wouldn’t have any notifications up and then one popped up. Great. Thanks, I’m glad we’re up on this.

So, one of the other questions I wanted to ask is about VSTS itself. Heena, you talked a little bit about the tokenization. What are some of the other things that VSTS has released in, say, the last year, where maybe it's changed what you've done, made things easier, or might help somebody else going down this path? Maybe a little learning here and there will help them. >> I think I can talk about one of those things. Let me show you: recently, in the release pipeline, they came up with something called Gates.

As Heena was mentioning, you can have pre-deployment approvals, which are basically manual approvals; you can assign somebody who can approve, who will check the sanity of your artifacts and then allow you to deploy those artifacts to a particular environment. On top of that, there are features called Gates, which they recently introduced in VSTS. Right now we are using one of the Gates, called "Monitoring alerts".

Basically, what we are saying is that after the UAT environment is done, if I see no issues in my App Insights, which is my telemetry, it checks at something like 15-minute intervals, and if everything looks good, it says, okay, I'm good to deploy to my next environment. That is one kind of Gate, but there are several types of Gates. You can also use an external API. For example, at Build they gave an example like: imagine your product owners usually have to give you a go-ahead that you are good to go to production; they were using DocuSign for that.

So once the product owner signs off on your particular release, the gate calls that API to see whether it has been approved or not, and then you start deploying to the production environment. These features are very useful in some scenarios; this is something VSTS recently provided in the release definition itself, and it's a very useful feature you can apply to your release pipelines. >> Then, one of the other things that I wanted to touch on briefly, and Heena, I think you spoke about this in your Build talk: who is the target audience for Application Insights, and who is the target audience for Azure Log Analytics, OMS, for your app?

So, how do you distinguish who your consumers are for those? >> For Application Insights, it's mostly the reports that we create for the business people, as well as the developers, the DRI who will look into it. So it's serving two purposes. How?

Because the telemetry that our app publishes includes not just the infrastructure-related data but also the business data. All of that goes into the custom dimensions of Application Insights at every stage we pass through in our code. As a DRI, I can come and write queries on top of Application Insights and say, "Okay, I am interested in this particular order ID, or delivery order ID, or purchase order ID," and I can get the data. Because whenever the business comes back to us, or an incident is raised, it's always based on a unique business ID, a purchase order ID: I want to know what happened to this one.

Right? Or the other way around: there was an error and we got an alert. In that error itself, we log those unique IDs so we can query them efficiently in Application Insights.
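As a rough sketch of that kind of DRI lookup, the Application Insights REST API can be called from PowerShell with a Kusto query over the custom dimensions; the application ID, API key, dimension name, and order ID below are placeholders for illustration:

    # Query App Insights for all traces tagged with a specific purchase order ID (assumed dimension name).
    $appId  = $env:APPINSIGHTS_APP_ID
    $apiKey = $env:APPINSIGHTS_API_KEY
    $kusto  = "traces | where tostring(customDimensions.PurchaseOrderId) == '4500012345' | order by timestamp asc"

    $result = Invoke-RestMethod -Headers @{ "x-api-key" = $apiKey } -Uri `
        ("https://api.applicationinsights.io/v1/apps/$appId/query?query=" + [uri]::EscapeDataString($kusto))

    $result.tables[0].rows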

That Application Insights data is also used to create reports for the business side of things, so they can actually see how many orders came into our system per day or per hour; they have all those options. >> So, App Insights is really for the business and for your DRIs, the Directly Responsible Individuals, and then Log Analytics is really used by the operations side of the DevOps teams to go figure out what's happening.

Is that fairly similar for everyone? >> Yes. I think the point here is proactiveness. Instead of the customer telling you they are facing a problem, when you are doing DevOps you need to proactively look into those insights, figure out whether someone is facing a problem, and fix it. So, these things are really useful when you are investigating that kind of issue.

>> Just as a variation, for a data warehouse kind of project, App Insights is not the default choice; there's not too much logging you can do with App Insights. Most of the time, if you're on Azure PaaS or any of the other Azure offerings for big data, you get a lot of logging available through the portal.

Obviously, you can also plug it into Power BI. But it still being a warehouse system, you want to know how your queries are performing from one stage to another. Some of these things are available out of the box with SQL, where it gives you performance counters, and you get to know.

But as an engineering team, you want to know how much time each of these code pieces or code blocks is taking, so we do custom logging for that. We have an internal tool for our IT organization called Unified Telemetry, which ports all this logging data to App Insights, and through that we get to know about it.

So for us there is a little twist, but we still use App Insights by migrating those logs into it. >> Besides tokenizers and how we use App Insights, are there any other things that, if you were talking to somebody today who said, "Hey, we're looking at Azure, or we want to start going down the Azure path," and they're starting to look at setting up CI/CD and doing everything, what kinds of tips would you give somebody about to embark down the same path that your teams have just gone down? >> I would say it's not easy to on-board.

It's going to take a lot, in terms of what you want to do with all these implementations. But it's worth every penny you put in; it's going to give you a lot of return later on once you have it, because a lot of these are repetitive processes which are very important for making sure the quality of your product is taken care of. So I would say, if you initially plan carefully what you need to do and design it, it's going to pay off big time.

So I would say definitely look out for these opportunities from a modern engineering implementation point of view. I have seen Visual Studio Team Services adding more and more of these features. For anything new you are using in Azure or cloud computing, there's a lot of out-of-the-box support coming in.

So it becomes very easy for you to on-board to those and be very productive. >> Just to add to what Naval said: most developers think that when you talk about VSTS, you're talking about Microsoft technologies only, right? But that's not the case; VSTS supports all the open source stacks. We are using Angular applications today, and I was amazed to see all the built-in tasks in VSTS which we can use to build those applications. Moreover, you can create your own tasks.

If you have your own custom things to do, you can always do that in VSTS. So, I think this tool very much supports your journey out of the box. >> Yeah. I know one of the teams that I've just been talking to, which will have a blog post on the main Azure blog soon, is doing Linux VMs: they're using containers and Kubernetes with Linux right now, and they're actually switching over to the Azure Kubernetes Service with Linux. So, even though we're Microsoft, we still use quite a bit of Linux and open source internally.

Heena, is there anything that you'd add? >> Yes, the same thing. There are tons of options, so we have to do some trial and error. In fact, in our organization, I'm working only in the order management and fulfillment space.

But we also have the returns space as well as planning. All these spaces use different kinds of solutions and Azure components. We learn from each other and we say, "Oh, you know what, this works best," or, "This is the design pattern that we should follow as an organization."

We standardize it then, so there is shared learning. On top of that, I've observed that almost daily, or at least every six months, there is something new coming up in Azure, and we always make sure that we try those things out and keep them in mind for future consideration of how we can use those capabilities in our solutions. There are a few dedicated people who actually do those kinds of POCs in Azure, and they give knowledge-transfer or POC sessions to others. That's how we learn about those things and start incorporating them into our new designs as and when they are available. >> It has been amazing to see that the VSTS and Azure teams are working hand in hand.

>> Yeah. >> Whenever you have a feature available in Azure, you see the corresponding capabilities in VSTS to get it into the CI/CD pipelines and see the benefit of that Azure resource right away. So I think that's great. >> Awesome. I know we sent out a link during this webcast, Naval, to a blog post that you wrote on modern data warehousing with continuous integration, so the audience has that.

Please go take a look at that, especially if you're looking at the data stuff. So we're at the end of our time. Thanks, everyone.

Well, we already talked about key takeaways, so thanks for sharing those. I think the general consensus is that it's completely worth the time. It's worth not deploying from your laptop and having to deal with that, and it also covers your compliance and security and all of those things. >> Do the best of what you do.
