January 23, 2018

Exploring ASP.NET Core and Angular Applications with Docker

What's the secret to software that serves business needs indefinitely? Containerizing. Containers provide build, test, and deploy environments for .NET technology.

Rosen Kolev

When embarking on large software projects, developers weigh their options for front-end frameworks: Angular, React, Vue, Polymer, and Ember, to name a few. On the backend, their choices range from ASP.NET to ASP.NET Core, Node.js, Ruby on Rails, Django, and beyond. Angular and .NET technology, such as .NET Core, are popular solutions for many teams. These development frameworks are backed by companies at the forefront of new technologies: ASP.NET and .NET Core by Microsoft, Angular by Google.

Thanks to the support infrastructure of the companies behind them, .NET technology and Angular allow teams to write powerful web applications that, when managed properly, can serve business needs indefinitely. Projects with highly complex architecture demand frequent yet seamless testing and build deployments. These are essential to low downtime and excellent experiences for end users. How do the best teams build high-performing software solutions for the business?

Use Docker to Improve Your Angular, ASP.NET Core Apps

Development teams build more successful applications with Angular or .NET technology when they use Docker. Dockerizing applications is key to testing and building web applications in a lightweight yet rapidly executable way.

Why Should .NET Technology Teams Know About Docker?

Docker is a platform that facilitates the deployment of software in containers.
The Docker website provides a helpful definition of containers and the important role they play in software development:

"A container image is a lightweight, stand-alone, executable package of a piece of software that includes everything needed to run it: code, runtime, system tools, system libraries, settings. Available for both Linux and Windows-based apps, containerized software will always run the same, regardless of the environment."

A container is a software component that sits on top of another piece of software in an isolated environment. The container can be used to create an isolated environment in which to develop, test, and launch software. Then, whether deployed on Linux or Windows-based machines, containerized applications will run smoothly despite any settings that differ from the environment in which their code was written and tested. Note that a virtualization technology such as Hyper-V or VMware must be enabled in order to use containers.

How Do Teams Benefit by Using a Container?

There are a few reasons why developers prefer containers over conventional virtual machines (VMs).

Environment consistency. Containers encapsulate all necessary application files and software dependencies. They serve as a building block that can be deployed on any compute resource regardless of software, operating system, or hardware configuration. When you run an application in a container, what runs in your local environment runs the same way in QA, staging, and production.

Developer productivity. A developer no longer needs to install SDKs, track library versions, set up registry keys, and configure hosts, not to mention all the extra work normally required when onboarding to a project. The only things the developer needs are the Docker and Docker Compose files; with the container, everything else is pre-configured. This leads to faster integration of new team members and better overall team productivity.
Easy delivery and integration. Docker is useful during the continuous integration (CI) and continuous deployment (CD) processes. Containers allow developers to track versions of application code and their dependencies. With a single command, teams can generate new environments for different software versions, roll back versions, deploy a container to cloud providers, or implement containers in CI pipelines.

Operational efficiency. Containers allow technical teams to run multiple applications on the same instance, a major boost in the efficient use of computing resources. With containers, one can specify the exact amount of memory, disk space, and CPU a container may use.

Team efficiency. When several teams must develop multiple applications and services for the same product, containers allow them to focus their energy on more complicated tasks. An Angular team working on Linux can start a development environment with the same .NET and PHP services, configured exactly as they will run in production. Meanwhile, another team can create APIs, databases, web applications, storage, or other solutions on Linux or Windows. This is a big benefit when one system involves multiple technology stacks, and it helps reduce the substantial cost of getting new software up and running.

Portability between on-premise and cloud providers. If an application runs in a container, support teams can easily switch operations from an on-premise solution to a cloud provider, or vice versa: start a new instance on the provider and redirect any existing domains. Since it is relatively easy to switch from local servers to the cloud, there is no reason teams could not switch between cloud providers as well. This is an effective way for product owners to control the cost of services while deriving the most value from storage and support offerings.
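The version-rollback point above can be illustrated with a short sketch. Assuming hypothetical service and image names (not from any specific project), pinning image tags in a docker-compose.yml makes switching or rolling back a release a one-line change:

```yaml
# Hypothetical compose fragment; the service and image names are assumptions.
version: '3'
services:
  run-api:
    # The tag fully identifies the code and its dependencies;
    # roll back by redeploying with an earlier tag, e.g. myapp-api:1.1.0.
    image: myapp-api:1.2.0
    ports:
      - "8080:80"
```

Because the image tag captures the application and everything it depends on, running docker-compose up -d with an older tag restores the previous environment identically on any host, local or cloud.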
Docker isn't the only containerizing service available on the market.

What is the Structure of a Container?

To answer this, imagine a team wants to run an Angular application with nginx in a container. They deploy their application in a container image, my-app, that uses nginx as its base image; nginx in turn uses debian as its base image. Relying on base images ensures that the application runs in the same environment every time. The resulting layer chain looks like this:

debian:stretch-slim → nginx:1 → my-app:1

How Do You Dockerize an Application?

Below is a demonstration of how to Dockerize (containerize) an application: a simple to-do list written in Angular, backed by a .NET Core API that stores the data. By learning how to Dockerize an application, businesses can implement containers in their own development environments, saving time and money as they speed development cycles along. The code for the application in question can be found in our company GitHub account.

Project Layout

Here is a simple overview of the project directory structure; understanding it should help you follow the Dockerizing process.

/scripts/ – Docker Compose files and scripts
/src/ – application source code
  /client/ – front-end Angular application
    /src/ – Angular source code for the todos front end
    dockerfile – Angular front-end Dockerfile
    docker.nginx.default.conf – nginx configuration for Angular
  /server/ – .NET Core solution for the todos API
    /docker-compose/ – the Docker Compose project (.dcproj)
    /MyApp.Api/ – todos ASP.NET Core API
    /MyApp.Tests/ – todos API tests
    MyApp.sln – todos ASP.NET Core API solution file
/configs/ – shared configuration such as stylecop.json

How Do You Start Dockerizing an Application?

Separate containers can be built for the frontend and backend applications, but in the end they will run together smoothly.
To make this possible, a single docker-compose.yml for the application's services is needed; it lives in the /scripts/ folder. Then both applications can be built and run like this:

# in the /scripts/ folder:
docker-compose up build-api build-web
docker-compose up run-api run-web

Why Docker Compose is Key to Building Lightweight Solutions

Docker Compose, a tool typically installed with Docker, helps define and run multi-container Docker applications as a single entity. Creating a docker-compose.yml file allows developers to configure an application's services, making it easier to build, test, and launch an application with a single command. More information about running multi-container Docker applications with Docker Compose can be found here.

How to Dockerize a .NET Technology Application

Visual Studio and ASP.NET Core have built-in Docker support. Developers can build, run, and debug .NET technology in a Docker container inside Visual Studio using Visual Studio Tools for Docker. The official ASP.NET Core documentation includes an article on how to Dockerize an application. When you select a web application and choose "Add Docker Support", Visual Studio creates an extra docker-compose project (docker-compose.dcproj), adding the following files:

Dockerfile: describes the environment in which your application will run. The base container is microsoft/aspnetcore.

The two docker-compose files, docker-compose.yml and docker-compose.override.yml, which define the build and run Docker services.

When Dockerizing a .NET Core application, begin by moving the docker-compose files to the /scripts folder so all docker-compose services are in a single location. If the following configuration is left unchanged, however, the .dcproj will not work:

<ItemGroup>
  <None Include="../../scripts/docker-compose.yml" />
</ItemGroup>

The docker-compose project requires the docker-compose.yml to be in the same directory as the .dcproj file.
This is accomplished through a symbolic link to the file. Open a command prompt in the .dcproj folder and execute the link creation command:

mklink docker-compose.yml ../../../scripts/docker-compose.yml

With the symbolic link in place, you can run the docker-compose project and it will use the docker-compose.yml in the scripts directory. Once an MSSQL container is added to the compose file, it looks as follows:

# /scripts/docker-compose.yml
version: '3'
services:
  run-db:
    image: microsoft/mssql-server-linux:latest
    container_name: myapp.dev.db
    environment:
      - ACCEPT_EULA=Y
      - MSSQL_PID=Express
      - SA_PASSWORD=XXXX
    expose:
      - 1433
  run-api:
    image: myapp-api
    container_name: myapp.dev.api
    build:
      context: ../src/server/MyApp.Api
      dockerfile: Dockerfile
    environment:
      - ASPNETCORE_ENVIRONMENT=Docker
    ports:
      - "8080:80"
    depends_on:
      - run-db
    links:
      - run-db:db

This works when run with docker-compose up, but Visual Studio tries to resolve the build context relative to the .dcproj file. Instead, developers must use a docker-compose.override.yml in the .dcproj folder to override the context path.

What is the override compose file? When executing the docker-compose up command without specifying a file, docker-compose looks for docker-compose.yml and docker-compose.override.yml in the current folder and uses both. Our docker-compose.override.yml will look like this:

# /src/server/docker-compose/docker-compose.override.yml
services:
  run-api:
    build:
      context: ./../MyApp.Api

Now when we run the project from Visual Studio, it should start and run successfully.

Account for CI & CD Functionality in .NET Technology

We also need to build the application in environments other than Visual Studio. This is vital for integration with CI and CD servers and tools. For that reason, the Docker Tools create an extra yml file called docker-compose.ci.build.yml.
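For orientation, here is a sketch of what a generated docker-compose.ci.build.yml of this era typically contained; the exact image tag, paths, and commands vary by tooling version, so treat every detail below as an assumption rather than the file Visual Studio actually produces:

```yaml
# Approximate sketch of a Visual Studio-generated CI build file (details assumed)
version: '3'
services:
  ci-build:
    image: microsoft/aspnetcore-build:2.0
    volumes:
      - ../src/server:/src
    working_dir: /src
    command: /bin/bash -c "dotnet restore && dotnet publish -c Release -o ./obj/Docker/publish"
```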
With this file we can build the project outside of Visual Studio by executing something like docker-compose -f docker-compose.ci.build.yml up ci-build. But that file chains several dotnet commands in a single command pipeline, which is not a flexible approach. Instead, a build script that accepts different parameters for different tasks (build, run tests, analyze, publish) can be convenient for busy development teams. While there are many approaches to this, in this instance I will use a simple tool called dotnet-script. It is easily installed in a container and is a runner for C# scripts, so we can use C# to build C# (or anything else, for that matter). I have written several helper methods you can find in /scripts/builder/csx/common.csx. The main build commands are in /scripts/builder/build.csx and are very similar to the original docker-compose.ci.build.yml: they execute dotnet restore, dotnet test, and dotnet publish in sequence. See the code snippet in the next section.

Dockerizing an Angular Application

When Dockerizing the Angular application, first add the build scripts. This is a simpler process, requiring only yarn install and ng build. See how to build an Angular application on the framework's GitHub page. Our final build.csx file should look like this:

Execute(() =>
{
    Cmd($"dotnet restore {pathToSolution}/server");
    Cmd($"dotnet test {pathToApiTests}");
    Cmd($"dotnet publish {pathToApiApp} -o {pathToApiOut} -c Release");
}, "Build ASP.NET Core API", "build-api");

Execute(() =>
{
    Cmd($"yarn install --production=false", pathToWebApp);
    Cmd($"ng build --app my-app --prod", pathToWebApp);
}, "Build Angular 5 App", "build-web");

Now we can set up a common file that holds all the build services.
This can be docker-compose.ci.build.yml, but for convenience when using docker-compose, I will use a docker-compose.override.yml file in the /scripts folder.

Why Shouldn't You Use the docker-compose.yml?

The Docker Tools for Visual Studio cannot tolerate anything besides ASP.NET and MSSQL in the docker-compose file, so developers must "trick" them: put anything that is not a .NET technology-based application in the override file, or in another file you can refer to later. This includes:

the ASP.NET Core build service
the Angular build service
the Angular run service

The Angular Dockerfile

Just as Visual Studio created a Dockerfile in the API folder, we should create the Angular application's Dockerfile in the Angular folder. We can use nginx as the base container; see the nginx wiki for more about nginx configuration.

# Dockerfile
FROM nginx
RUN rm -rf /usr/share/nginx/html/*
COPY Docker.nginx.default.conf /etc/nginx/conf.d/
COPY dist /usr/share/nginx/html
EXPOSE 80

# Docker.nginx.default.conf
server {
    listen 80;
    sendfile on;
    default_type application/octet-stream;
    root /usr/share/nginx/html;
    location / {
        try_files $uri $uri/ /index.html =404;
    }
}

About the Base Build Container

When building an ASP.NET application, a developer can simply use the microsoft/aspnetcore-build base image. But with Angular and dotnet-script in play, some extra components must be installed for everything to function properly. How is this done? The developer can create a new build image on top of microsoft/aspnetcore-build and install yarn, dotnet-script, and the Angular CLI.
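A hedged sketch of such a builder Dockerfile follows; the Node setup script, package names, and dotnet-script installation method are assumptions that may need adjusting for your base image and tool versions:

```dockerfile
# Hypothetical builder image: aspnetcore-build plus Node tooling (versions assumed)
FROM microsoft/aspnetcore-build:2.0

# Install Node.js from the NodeSource setup script, then yarn and the Angular CLI
RUN curl -sL https://deb.nodesource.com/setup_8.x | bash - \
    && apt-get install -y nodejs \
    && npm install -g yarn @angular/cli

# Install dotnet-script so the C# build scripts in /scripts can run
# (the install method for dotnet-script varies by version; this is an assumption)
RUN npm install -g dotnet-script

WORKDIR /app
```

With this image built as myapp-builder, the compose services in the next section can mount the source and script folders into it and invoke the build commands.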
myapp-builder:
  image: myapp-builder
  container_name: myapp-builder
  build:
    context: ./builder
    dockerfile: Dockerfile
  volumes: ["../src:/app"]
build-api:
  image: myapp-builder
  container_name: myapp-build-api
  volumes: ["../src:/app", "./builder:/scripts"]
  command: build-api
  depends_on: ["myapp-builder"]
build-web:
  image: myapp-builder
  container_name: myapp-build-web
  volumes: ["../src:/app", "./builder:/scripts"]
  command: build-web
  depends_on: ["myapp-builder"]
run-web:
  image: myapp-web
  container_name: myapp.dev.web
  build:
    context: ../src/client
    dockerfile: Dockerfile
  ports:
    - "80:80"

Build and Publish With Docker to Drive Your Efficiency

At the end of a project, teams normally build and deploy the finished application to a server. All that is needed to deploy the API application to a server at 100.100.100.100 is this:

DOCKER_HOST=tcp://100.100.100.100:2376 docker-compose up build-api
docker-compose up --build -d run-api

Even the most efficient software builds are resource-intensive. When time and talent are dedicated to building large applications for equally large and complex organizations, development teams must equip themselves with the right tools to make their work as streamlined as possible.

Put Your Software's Best Version Forward in 2018

Dockerizing a project benefits your teams, reducing the time needed both for initial application setup and for long-term maintenance. The magic of containers and the approach described above is that, once set up, everything you may need (service startup, databases, Redis cache, RabbitMQ, Elasticsearch, identity providers, background jobs) becomes operable with a single command.
References
https://www.docker.com/what-container
https://aws.amazon.com/containers/
https://docs.docker.com/compose/extends/
https://angular.io/
https://docs.docker.com/compose/production

Image Source: Unsplash, Chris Barbalis