Further reading

This is our blog. It contains the latest news and announcements about our open-source projects, services, and products, as well as case studies, customer projects, and much more.

Municipal government permit management run on Orchard Core - GovTech company case study

Governments use Orchard Core too! Even on this blog, we've seen how Lombiq worked with the municipal government of Santa Monica and with the Smithsonian Folkways Recordings, which is an agency of the US federal government. But did you know that apart from flashy websites, Orchard can also provide important services for citizens behind the scenes? The multi-tenant case management platform of a GovTech company we worked with does exactly that: if you live in a US city, you may have dealt with your permit or other license via the platform! And as you may have guessed, we're working with the company as Orchard Core experts.

We've been helping the company since late 2022 with a variety of Orchard Core consulting, troubleshooting, and development tasks. This started with a general review of the GovTech app and how it's hosted in Azure, to find areas for improvement and potential issues. Since Lombiq has been running Orchard, and then Orchard Core, projects and hosting apps in Azure for a decade now, we can always pinpoint things we recommend changing.

They also asked us to deliver some specific development tasks that improve the UX of the permit management platform or help the development team. Here's a quick overview of some of these:

- Setting up automated QA tools. For these, we utilized our Orchard Core-optimized projects: Lombiq UI Testing Toolbox for automated UI testing, Lombiq .NET Analyzers for checking the code for any possible issues, and Lombiq GitHub Actions to provide full-featured CI builds and Azure deployments. These all help keep the platform working well and improve the development team's productivity.
- A WYSIWYG editor for the Orchard Core admin area, utilizing the user-friendly Froala editor. Users of the platform weren't fully satisfied with Orchard's built-in editor, so this was a welcome improvement.
- Chunked file uploads: Hosting environments commonly have restrictions on the size of an HTTP request. So, if you want to allow users to upload larger files, the app needs to upload them in multiple chunks (parts). This was important for them, since files related to permit management can routinely grow beyond the usual size limits. So, we've implemented chunked file uploads both in the platform and as a contribution to Orchard Core (a minimal sketch of the general technique follows at the end of this case study).

Since we at Lombiq are really focused on open source, it's always great to work with clients who understand how the open-source ecosystem works and that you also have to contribute back. This is what their CTO & Co-founder says about working with us:

"Lombiq excels in SaaS technology development, particularly in the context of Orchard Core. Their distinctive expertise and capabilities enabled us to expedite the expansion of our platform. They were consistently responsive, delivered high-quality code, smoothly transitioned each project to our development team, and assumed full responsibility for their tasks. I highly recommend collaborating with them for any SaaS-related project."

Do you also work with government clients and want to make sure your Orchard Core app runs smoothly? Get in touch with us and let the Orchard Core experts help you!
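To illustrate the chunked upload item above, here's a minimal sketch of server-side chunk assembly in ASP.NET Core. The route, parameter names, and temp-file handling are illustrative assumptions, not the platform's actual implementation or the Orchard Core contribution.

```csharp
using System.IO;
using System.Threading.Tasks;
using Microsoft.AspNetCore.Http;
using Microsoft.AspNetCore.Mvc;

// Sketch only: each request carries one chunk of a larger file, so every
// individual request stays below the hosting environment's size limit.
[ApiController]
[Route("api/uploads")]
public class ChunkedUploadController : ControllerBase
{
    private static readonly string UploadRoot = Path.Combine(Path.GetTempPath(), "uploads");

    [HttpPost("chunk")]
    public async Task<IActionResult> UploadChunk(
        IFormFile chunk, string fileName, int chunkIndex, int totalChunks)
    {
        Directory.CreateDirectory(UploadRoot);

        // Append the incoming chunk to a partial file identified by the file name.
        var partialPath = Path.Combine(UploadRoot, fileName + ".partial");
        await using (var stream = new FileStream(partialPath, FileMode.Append))
        {
            await chunk.CopyToAsync(stream);
        }

        // Once the last chunk has arrived, the assembled file is complete.
        if (chunkIndex == totalChunks - 1)
        {
            System.IO.File.Move(partialPath, Path.Combine(UploadRoot, fileName), overwrite: true);
        }

        return Ok();
    }
}
```

A real implementation would of course also validate and sanitize the file name, authorize the request, and handle retries of individual chunks.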

Collecting Orchard usage telemetry with Azure Application Insights - module released

You can't base decisions on assumptions. What you can't measure, you can't manage. Sound familiar? Of course, and that's exactly what you want to do with your software: measure how people use it. Azure Application Insights, an application telemetry service, is a tool for just that. Now we've created an Orchard module for it for easy integration! The Orchard Azure Application Insights module lets you send usage telemetry from Orchard easily: just install the module, configure the AI instrumentation key, and that's it. Server-side request telemetry (e.g. response times, log entries) and client-side telemetry (e.g. client-side processing time, JavaScript exceptions) will be sent to Azure, and you can explore it on charts in the Azure Portal. With all this integrated you can get valuable insights: not just raw data but also answers to questions like "What was the request when this exception happened?" You can also check out an overview of AI and a demonstration of the module in the recording of the Orchard Community Meeting. Azure Application Insights is a very useful tool when operating Orchard applications and allows you to respond to any issues quickly. Check out the module!
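The module collects request and client-side telemetry out of the box; if you also want to report custom events or exceptions from your own code, the standard Application Insights SDK can be used alongside it. A minimal, hedged sketch (the event name below is made up for illustration):

```csharp
using System;
using Microsoft.ApplicationInsights;

// Send a custom event and an exception to Application Insights on top of the
// telemetry the module already collects automatically.
var telemetry = new TelemetryClient();

telemetry.TrackEvent("NewsletterSignup");

try
{
    // Some operation that might fail.
    throw new InvalidOperationException("Example failure");
}
catch (Exception ex)
{
    // Tracked exceptions show up in the Azure Portal alongside request telemetry.
    telemetry.TrackException(ex);
}
```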

Choosing an Azure datacenter for your service

When building a web-based service that will be used by people all around the world, you also have to think about the network latency imposed by geographical distance. Your service itself may be very fast, but it won't matter if users on the other side of the globe experience it with a one-second delay. With Microsoft Azure, the cloud provider that we also use for all of our own services, you can choose from datacenters at different locations to run your service, thus optimizing network distance for your target audience. That is, of course, if you only want to run your service in just one datacenter and not in multiple ones to be able to reach more geographical regions on fast lines. We'll deal with the one-datacenter case in this blog post, since running a web-based service in multiple locations is neither technically simple nor cheap, but optimizing the user experience for the intended target audience is something even the smallest applications ought to do. So let's see how we can decide which datacenter to choose! The methodology described here is the same one we used to decide where to deploy our Orchard SaaS, DotNest.

Where are my users located?

First, you have to determine where your intended target audience is mostly located, since first and foremost you want that audience to have the best user experience. If your service is already running in some form, or you know that your target audience is the same as one of your other services', then you can consult your web analytics to get some exact answers. This is of course not available if your service is totally new and you don't yet have experience of this sort; then you'll have to make a sufficiently educated guess based on your business plan. For DotNest we decided that our primary audience is in Western Europe and North America. This is actually quite a lucky match: since there are a lot of high-speed network cables laid out under the Atlantic Ocean between the two continents (as you can see from e.g. this slightly out-of-date picture), it's actually possible to find a location that will be able to serve both sides of the big pond equally well.

Gathering the tools

OK, now we know where our users are located. But how are we going to determine which datacenter is best suited for them? We'll make some measurements! Firstly, we need to set up endpoints in all of the datacenters so we can use them to measure latency. For this we'll use Azure Web Sites: we won't deploy anything to the websites, as the default page that a blank website returns will be enough, since we want to measure network latency, not server performance, as much as possible. The websites will be just free ones: in our experience, the performance of a blank website on the free tier doesn't differ from, or vary significantly more than, one on a paid tier, so it doesn't affect latency measurements. At the time our measurements were made, fewer datacenters were available on Azure, so we've only made test websites for those DCs, namely the following:

- http://eastasiaspeedtest.azurewebsites.net/
- http://northcentralusspeedtest.azurewebsites.net/
- http://northeuropespeedtest.azurewebsites.net/
- http://westusspeedtest.azurewebsites.net/
- http://eastusspeedtest.azurewebsites.net/
- http://westeuropespeedtest.azurewebsites.net/

You can use these endpoints for your own tests too, of course. Since websites can go idle, especially the ones on the free tier, it's best to warm them up by opening them before making any measurements.
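If you want a quick check from your own machine before turning to a multi-location tool, something like the following rough sketch works; it only reflects your current location, but the warm-up request mirrors the advice above about idle free-tier sites.

```csharp
using System;
using System.Diagnostics;
using System.Net.Http;
using System.Threading.Tasks;

// Measure a rough response time for each test site from the current machine.
class SpeedTest
{
    static async Task Main()
    {
        string[] endpoints =
        {
            "http://eastasiaspeedtest.azurewebsites.net/",
            "http://northcentralusspeedtest.azurewebsites.net/",
            "http://northeuropespeedtest.azurewebsites.net/",
            "http://westusspeedtest.azurewebsites.net/",
            "http://eastusspeedtest.azurewebsites.net/",
            "http://westeuropespeedtest.azurewebsites.net/",
        };

        using var client = new HttpClient();

        foreach (var url in endpoints)
        {
            // Warm-up request so an idle site doesn't skew the measurement.
            await client.GetAsync(url);

            var stopwatch = Stopwatch.StartNew();
            await client.GetAsync(url);
            stopwatch.Stop();

            Console.WriteLine($"{url}: {stopwatch.ElapsedMilliseconds} ms");
        }
    }
}
```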
Secondly, we also need a tool to measure the latency of the various endpoints from different locations. For this we've used Alertra's Spot Check tool (see the text box at the top of the page), which can measure response times from a variety of locations. (Another interesting tool to check the latency from your current location to all Azure datacenters is Azure Speed Test.)

Evaluating the results

So we've set up all the test sites and have the right tool to measure response times. The next step is to get the numbers in order and see which location performs best. For this purpose we've created a simple spreadsheet that you can use to evaluate the results. As you can see, we've made measurements against all of the websites with Alertra's tool and put them into a table. Then we calculated the average response times for our intended target audience's locations, namely London, Chicago, Los Angeles, and Washington DC, in the P column. By simply changing the formula that calculates those values, you can see the numbers for your own target audience too (or script the same calculation; see the sketch at the end of this post).

As one can see from the spreadsheet, East US came out as the winner, with North Central US being a close second. East US was faster than North Central US in the majority of cases and also a bit faster on average. While the proportions of these numbers may be accurate, take the concrete values with a grain of salt. In our experience the actual response times are much better than the ones measured by Alertra's tool. For example, from Hungary, where we're located, we get a response from DotNest within 150 ms at worst for cached pages, which is much faster than even what Alertra's tool measured from London (about 500 ms).

For DotNest we've gone with the East US datacenter, and so far it seems like a good decision. We've received plenty of feedback from our users that the service is very fast. So where are you going to put your application?
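As a footnote, the averaging step the spreadsheet performs is simple enough to reproduce in a few lines if you'd rather script it. In this sketch, LoadMeasurements() is a hypothetical helper standing in for your own measured response times; no real numbers are hard-coded.

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

// For each datacenter, average only the response times measured from the
// target-audience locations, mirroring the spreadsheet's P column.
class DatacenterEvaluation
{
    static void Main()
    {
        string[] targetLocations = { "London", "Chicago", "Los Angeles", "Washington DC" };

        // measurements[datacenter][probeLocation] = response time in milliseconds.
        Dictionary<string, Dictionary<string, double>> measurements = LoadMeasurements();

        foreach (var (datacenter, byLocation) in measurements)
        {
            var average = targetLocations
                .Where(byLocation.ContainsKey)
                .Select(location => byLocation[location])
                .Average();

            Console.WriteLine($"{datacenter}: {average:F0} ms average for the target audience");
        }
    }

    // Hypothetical helper: fill this with the response times you measured yourself.
    static Dictionary<string, Dictionary<string, double>> LoadMeasurements() =>
        new Dictionary<string, Dictionary<string, double>>();
}
```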