How to get going with Modern Analytics Sandpit Environment using Azure in no time…(Part 2)

The previous post in this series of three covered the background and purpose of “how to get started with Modern Analytics”; in this post we will cover the actual approach adopted and the reasoning behind it.

As the name “Modern Analytics” suggests, the approach to Information Management has to be unconventional in order to bring all the different types of analytics together (Descriptive, Diagnostic, Predictive and Prescriptive), especially given the shift that has drastically challenged traditional analytics: “Big Data”. For that we have to draw some parallels between the old and new worlds (ETL vs ELT), as follows:

Information Management Approach

There are two approaches to information management for analytics and deriving actionable insights:

  1. Top-down (deductive approach)

This is where analytics starts with a clear understanding of corporate strategy, and theories and hypotheses are formed up front. The right data model is then designed and implemented prior to any data collection. Oftentimes the top-down approach is good for descriptive and diagnostic analytics: what happened in the past and why did it happen?

  2. Bottom-up (inductive approach)

This is the approach where data is collected up front, before any theories or hypotheses are formed. All data is kept so that patterns and conclusions can be derived from the data itself. This allows for more advanced analytics, such as predictive or prescriptive analytics: what will happen and/or how can we make it happen?

The following image visualises the differences between the two approaches and the types of analytics they cover.

[Image: Top-down vs bottom-up analytics approaches]

In its 2013 study, “Big Data Business Benefits Are Hampered by ‘Culture Clash’”, Gartner makes the argument that both approaches are needed for innovation to be successful. Oftentimes what emerges from the bottom-up approach becomes part of the top-down approach, and exactly this forms the basis of my approach to bringing together and simplifying the different types of analytics within “Modern Analytics”.

This leads to the question of how to start with the bottom-up approach, since we are already familiar with top-down in the form of the traditional BI / relational paradigm. The answer is a Data Lake, as it lets us land all the different data sources in one single place to derive actionable insights, irrespective of data volume, velocity, variety or veracity challenges.

Data Lake Framework

However, before jumping straight into the data acquisition and ingestion steps, some careful thinking is needed to stop the Data Lake from turning into a Data Swamp, especially around questions such as: “What should the data management structure look like, considering data democratisation within the organisation?”, “How do we apply data governance?”, “How do we cater for data quality?”, “How do we apply data security based on roles within the organisation?” and “How do we store data for real-time and batch processing?”. Without making this too onerous, having a basic framework around these fundamental questions helps deliver profound, relevant and quality analytics even in a sandpit environment and, above all, helps formulate certain aspects of an enterprise data strategy.

Additionally, considering these basic questions forms a solid basis of best practices and patterns for delivering Data Science & Advanced Analytics projects effectively.

We could go into a lot of detail, but some quality blogs have already been written by the likes of my colleague Tony Smith and especially Adatis; these are referenced below, and the Adatis approach is the one adopted here for answering the questions regarding data governance and data management mentioned above.

One core area worth highlighting, because it is fundamental to all the questions above, is the Big Data architecture used to manage real-time and batch processing within the Data Lake.

Lambda Architecture

Briefly, this architecture approach divides data processing into “speed” (near real time) and “batch” (Raw, Base, Curated) layers. This design is well established and is a relatively common implementation pattern on Azure.

[Image: Lambda Architecture diagram]

Framework

The remaining details of the Data Lake framework are articulated very nicely in the blogs below, especially around carving up the Data Lake for better data governance and management based on the Lambda Architecture above.

Here is a screenshot of the carved Azure Data Lake discussed in detail within the following blogs:

[Image: Carved Azure Data Lake structure]
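As a minimal sketch of how such a carved structure could be laid down (assuming the azure-datalake-store Python package for ADLS Gen1, plus an illustrative store name and service principal), the main zones might be created like this:

```python
# pip install azure-datalake-store
from azure.datalake.store import core, lib

# Assumed names - replace with your own store and service principal details
token = lib.auth(tenant_id="<tenant-guid>",
                 client_id="<app-id>",
                 client_secret="<app-secret>")
adl = core.AzureDLFileSystem(token, store_name="mysandpitadls")

# Zones based on the Raw / Base / Curated layering mentioned above,
# plus a Laboratory area for data-science experimentation
zones = [
    "/Raw/Meraki",       # as-landed, immutable source data
    "/Base/Meraki",      # cleansed / standardised data
    "/Curated/Meraki",   # modelled, consumption-ready data
    "/Laboratory",       # scratch area for exploratory work
]
for path in zones:
    if not adl.exists(path):
        adl.mkdir(path)

print(adl.ls("/"))
```

The exact zone and folder naming should follow the framework described in the blogs below rather than this illustrative layout.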

Must reads…

Shaping The Lake: Data Lake Framework

Azure Data Lake Store–Storage and Best Practices

Architectural Components or Azure Building blocks for the Environment

The main architectural components or building blocks referenced for this environment are as follows:

  1. Azure Data Lake Store
  2. Data Science Virtual Machines
  3. App Service
  4. Event Hubs
  5. Stream Analytics
  6. Azure Blob Storage
  7. Power BI

Modern Analytics Project Delivery Process (mainly Data Science and Advanced Analytics) & Key Personas

The project delivery process has been narrowed down specifically for Data Science and Advanced Analytics in the form of the Team Data Science Process (TDSP).

The Team Data Science Process (TDSP) provides a recommended lifecycle that you can use to structure your data-science projects. The lifecycle outlines the steps, from start to finish, that projects usually follow when they are executed.

The TDSP lifecycle is composed of five major stages that are executed iteratively. These stages include:

  1. Business understanding
  2. Data acquisition and understanding
  3. Modeling
  4. Deployment
  5. Customer acceptance

Here is a visual representation of the TDSP lifecycle:

[Image: TDSP lifecycle]

The key personas involved in delivering a Data Science & Advanced Analytics project may differ slightly depending on availability and organisational structure (size and maturity), but in the majority of cases the team would look like this:

  1. Solution Architect
  2. Data Scientist
  3. Data Engineer
  4. Project Lead / Manager
  5. Developer (Integrating the Insights into downstream Apps)

For further details on the standardised project structure, infrastructure resources, tools and utilities for delivering Data Science / Advanced Analytics projects effectively, click here.
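To make the standardised project structure concrete, here is a minimal sketch that scaffolds a TDSP-style project skeleton; the folder names are illustrative and only loosely based on the TDSP template, so check the official template for the exact layout.

```python
# Scaffold an illustrative TDSP-style project skeleton (folder names are approximate).
from pathlib import Path

def scaffold_project(root: str) -> None:
    folders = [
        "Code/DataPrep",            # ingestion and cleansing scripts
        "Code/Modeling",            # model training and evaluation
        "Code/Operationalization",  # deployment / scoring assets
        "Docs/Project",             # charter, data report, model report
        "Sample_Data",              # small samples only - full data stays in the lake
    ]
    for folder in folders:
        Path(root, folder).mkdir(parents=True, exist_ok=True)
    Path(root, "README.md").touch()

if __name__ == "__main__":
    scaffold_project("meraki-footfall")  # hypothetical project name
```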

This is by no means a definitive or exhaustive description of the DS & AA project delivery process; as already mentioned, it can vary and be tailored depending on numerous factors, but as the aim of this blog series is to “get started”, it should be quite helpful.

The final post in this series of three will demonstrate the entire approach above in the form of a practical example project (Cisco Meraki).

 

Some relevant blogs

Making Sense of the Swamp – Azure Data Catalog for your Data Lake

Granting Permissions In Azure Data Lake
Assigning Resource Management Permissions For Azure Data Lake Store (Part 2)
Assigning Data Permissions For Azure Data Lake Store (Part 3)

Buck Woody’s DevOps for Data Science Series


How to get going with Modern Analytics Sandpit Environment using Azure in no time…

Purpose

The reason for referring to this environment as a Modern Analytics environment is that it covers all types of analytical projects, whether Big Data, modern BI/DW, Data Science or Advanced Analytics.

This will be a three-post series, starting with the purpose, then the details of the approach, and lastly a working example / solution (using ARM templates) covering all the aspects discussed in this series, to get going on the Modern Analytics journey.

This approach is by no means recommended for production environments; it is merely intended to get organisations started with a sandpit environment, mainly for experimentation around Data Science and Advanced Analytics use cases.

I have been thinking of writing this blog for a while. The reason is that in all my recent customer interactions, irrespective of organisation size or sector, the single biggest challenge stopping them from getting going on their Data Science and Advanced Analytics journey has been: “How do we get started?”

The “How do we get started?” question can very quickly unfold into an extensive debate seeking to cover every eventuality, and in many cases this leads to inconclusive outcomes. To give a flavour, the discussion can take the form of queries like “How do we run different analytical projects within the same environment?”, “How do we ingest data?”, “How do we store data, keeping our organisation and the sources of data in mind?”, “How do we catalogue data?”, “How can a Data Scientist access the relevant data and do exploratory analysis with ease to find key insights?”, “How do we ensure secure access to the data and manage Azure data services spend?”, “How can the environment follow a DevOps process?”, and the list goes on. Precisely for this reason, and simply to “get going”, I have adopted a simple yet solid approach to building an effective Modern Analytics sandpit environment, which allows organisations to start transforming raw data into intelligent action and reinventing their business processes quickly and efficiently. Once the organisation’s maturity level starts improving, the environment and processes can be evolved into more coherent Modern Analytics project delivery and management practices, because the foundations of the environment will still be intact.

The sole purpose of this blog is to provide a Modern Analytics sandpit environment boilerplate that keeps industry best practices and basic enterprise-readiness requirements in perspective, i.e. data governance, security, scalability, high availability, monitoring, lower TCO and, most importantly, agility.

Approach

Here is the background to the simple yet solid approach for the proposed Modern Analytics sandpit environment:

  1. Information Management Approach
  2. Data Lake Framework
  3. Architectural Components or Azure Building blocks for the Environment
  4. Example Project (Cisco Meraki)
  5. Key Personas (mainly Data Science and Advanced Analytics)
  6. Modern Analytics Project Delivery Process (mainly Data Science and Advanced Analytics)

The environment code will be available via a GitHub repository later, so watch this space.

Solution

Now, before we go into the details of each core area of the approach above, here is a high-level view of the proposed sandpit environment using the Azure Data & Analytics offerings, to keep things in perspective for the next blog post:

[Image: High-level view of the proposed Modern Analytics sandpit environment]

Also, just to highlight: the high-level architecture diagram above represents a logical grouping of resources using Azure Resource Groups as an example implementation; the actual grouping can differ depending on organisation/team structure, security/compliance requirements, etc.

Azure Resource Groups are logical containers that allow you to group individual resources such as virtual machines, storage accounts, websites and databases so they can be managed together.

The key benefits of Azure Resource Groups come in the form of cost management, security, agility, repeatability, etc.

For the complete list of benefits, please click on the following link:

https://docs.microsoft.com/en-us/azure/azure-resource-manager/resource-group-overview#the-benefits-of-using-resource-manager

The majority of the services used in the proposed high-level design are either PaaS or managed services, providing lower TCO and greater agility in deriving key actionable insights.

All PaaS services include cloud features such as scalability, high availability, multi-tenant capability and resiliency.
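As a minimal sketch of how this grouping could be provisioned (assuming the azure-identity and azure-mgmt-resource Python packages, plus an illustrative subscription and region), the three development resource groups detailed below might be created like this:

```python
# pip install azure-identity azure-mgmt-resource
from azure.identity import DefaultAzureCredential
from azure.mgmt.resource import ResourceManagementClient

SUBSCRIPTION_ID = "<subscription-guid>"   # assumed
LOCATION = "westeurope"                   # assumed region

client = ResourceManagementClient(DefaultAzureCredential(), SUBSCRIPTION_ID)

# One resource group per concern: shared data lake, analytics team, example project
for rg_name in ("rgCommonDev", "rgAnalyticsTeamDev", "rgMerakiDev"):
    rg = client.resource_groups.create_or_update(
        rg_name,
        {"location": LOCATION, "tags": {"environment": "sandpit"}},
    )
    print(rg.name, rg.location)
```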

Resource Group 1 – rgCommonDev
Details: Shared resources to be used by all types of project, mainly from an Information Management perspective; the most important of the three.
Technologies to be used:
  • Azure Data Lake Store

Resource Group 2 – rgAnalyticsTeamDev
Details: Development resources required for the Big Data, Data Science and Analytics projects.
Technologies to be used:
  • Data Science Virtual Machines (scaled by the number of Data Scientists/Engineers)
  • Azure Blob Storage

Resource Group 3 – rgMerakiDev
Details: Resources required for delivering the Meraki project, which demonstrates an Advanced Analytics example project.
Technologies to be used:
  • Web App
  • Event Hub
  • Stream Analytics
  • Azure SQL
  • Power BI

1. rgCommonDev

In this resource group, the only resource will be Azure Data Lake Store, as that is the most important and critical part of the implementation. Any data ingested from the various data sources will be stored here in an organised manner based on best practices and patterns.

More details regarding the structure of the Data Lake will be added in the Data Lake Framework section of the next post in this series.

2. rgAnalyticsTeamDev

The rgAnalyticsTeamDev resource group contains one Data Science Virtual Machine, but this is a bare-minimum setup and can be altered based on the number of Data Scientists and Data Engineers using it. Blob Storage will be used for storing ad-hoc data files and artefacts related to the development environment.

Mainly, DSVMs will be used by the Data Scientists to perform exploratory data analysis by accessing data stored within Azure Data Lake Store, finding patterns and creating predictive models.
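As a minimal sketch of that workflow on a DSVM (assuming the azure-datalake-store and pandas packages, plus an illustrative store name and file path), exploratory analysis against the lake might start like this:

```python
# pip install azure-datalake-store pandas
import pandas as pd
from azure.datalake.store import core, lib

# Assumed service principal and store details
token = lib.auth(tenant_id="<tenant-guid>",
                 client_id="<app-id>",
                 client_secret="<app-secret>")
adl = core.AzureDLFileSystem(token, store_name="mysandpitadls")

# Pull a curated extract straight from the lake into a DataFrame (path is illustrative)
with adl.open("/Curated/Meraki/footfall_daily.csv", "rb") as f:
    footfall = pd.read_csv(f)

# Quick exploratory checks before any modelling
print(footfall.describe())
print(footfall.groupby("access_point")["visitors"].mean().sort_values(ascending=False))
```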

Talking specifically about the Azure Machine Learning & AI platform portfolio, the following image shows the landscape; any of these offerings can be employed depending on organisational needs, but to keep things simple I have gone with DSVMs (details in the next blog post).

[Image: Azure AI platform stack]

For further options regarding machine learning offerings, see here.

3. rgMerakiDev

The sample project to be deployed in this environment is Meraki. I have blogged about this project before, here. The project helps demonstrate:

a) How different analytical projects can be deployed side by side in the Modern Analytics environment, each with a completely separate governance framework.

b) A Lambda Architecture implementation, which helps showcase the underlying data architecture and framework whilst using Azure Data Lake Store.

The key components of the project are: Web App, Event Hub, Stream Analytics, Azure SQL DB and Power BI. All of these resources are already wrapped in an ARM template within a Visual Studio solution, to automate the deployment process and make it easily repeatable; this can be accessed separately via the GitHub repository.
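To show how the first two components hang together, here is a minimal sketch of a web endpoint that accepts Meraki-style location JSON and forwards it to the Event Hub; it assumes the Flask and azure-eventhub Python packages and an illustrative connection string, and is only an approximation of what the Web App in the actual solution does.

```python
# pip install flask azure-eventhub
import json
from flask import Flask, request
from azure.eventhub import EventHubProducerClient, EventData

EVENTHUB_CONN_STR = "<event-hub-namespace-connection-string>"  # assumed
EVENTHUB_NAME = "merakievents"                                 # assumed

app = Flask(__name__)
producer = EventHubProducerClient.from_connection_string(
    EVENTHUB_CONN_STR, eventhub_name=EVENTHUB_NAME)

@app.route("/meraki", methods=["POST"])
def receive_observations():
    # Forward the posted location observations to Event Hub for stream processing
    payload = request.get_json(force=True)
    batch = producer.create_batch()
    batch.add(EventData(json.dumps(payload)))
    producer.send_batch(batch)
    return "", 202

if __name__ == "__main__":
    app.run(port=5000)
```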

This project will also help develop a better understanding of the environment and the processes involved in managing it going forward.

Details of the Modern Analytics approach mentioned above will follow in the next blog post…

Just to give a quick glance at the Modern Analytics pipeline in Azure, with multiple offerings at each stage:

[Image: Azure Modern Analytics pipeline]

Some relevant blogposts:

Buck Woody’s DevOps for Data Science series

Making sense of the swamp

IoT in Action – In Store Location Analytics on Microsoft Cloud

In-store Internet of Things (IoT) analytics is a key area for retail organisations. The nature of retail stores and the importance of maximising the investment in staff and retail space make this one of the leading areas in the IoT space. However, getting started can seem daunting for many IT teams: “What equipment do we need?”, “How do we collate the IoT data streams?”, “What should we measure?”. This post introduces a Microsoft Cloud (Azure) based GitHub IoT project which acts as an end-to-end example of an in-store IoT analytics implementation, as well as demonstrating how Azure PaaS (Platform as a Service) services can be used to quickly implement enterprise-class IoT solutions.

GitHub project

If you want to get straight into the project, it’s available on GitHub as “Azure PaaS Implementation using Lambda Architecture of Cisco Meraki In-Store Location Analytics”. The project is fully documented and self-contained.

Overview of Solution

Almost any public place can become a “smart building”: retail stores, universities and hospitals can all benefit from implementing IoT devices and sensors. This particular solution solves a common retail IoT problem, which can be expressed as:

“Provide a chart of customer footfall in real time across selected areas in store” 

However, although that specific issue is addressed, this solution more broadly demonstrates the art of the possible in IoT analytics on Azure. The implementation can be extended to any building equipped with WAPs (Wireless Access Points) and to a number of analytics objectives, including:

  • Staff Optimisation
  • Store Layout Optimisation
  • Product Recommendation

As well as many other advanced use cases. The project delivers an end-to-end Azure-based solution for IoT analytics, from initial capture of events via the in-store WAPs, through real-time analysis of the event stream and archiving of events onto a persistent storage layer, to visualisation of the combined real-time and historical results. Throughout, Azure PaaS services (in conjunction with the Cisco Meraki cloud) are used to provide a scalable, robust, extensible IoT analytics platform. The outcome of the solution is shown below: a footfall chart operating in real time.

[Image: Real-time footfall chart]

Note: Azure PaaS & Cisco Meraki Cloud – Azure PaaS and the Cisco Meraki cloud are the two main technology stacks used in this solution. Cisco Meraki Location Analytics displays real-time location statistics to improve customer engagement and loyalty across sites, and is built into Cisco Meraki access points with no additional cost or complexity.

Cisco Meraki does offer some insights out of the box; however, this approach enables more in-depth analysis by providing the flexibility to correlate other data sources with the event data for richer, deeper actionable insights. The solution can also be extended to incorporate other vendors similar to Cisco Meraki in a similar fashion.

Azure PaaS services are part of the Microsoft Azure cloud platform. They offer robust and extensible capabilities which are cloud-first in design, meaning they are scalable, serverless (no patching or maintenance of VMs is required) and typically operate on a “pay as you go” model.

Architecture

The diagram below gives the high-level architecture for the approach, which follows the established Lambda Architecture. Briefly, this architecture divides data processing into “speed” (near real time) and “batch” (historical, cleansed, aggregated) layers. This design is well established and is a relatively common implementation pattern on Azure.

[Image: Lambda Architecture on Azure]

The general workflow is as follows:

Note: this solution comes prepackaged with sample Cisco Meraki event data, which is sufficient for end-to-end testing and evaluation. However, the solution can easily be integrated into a working Meraki installation.

Cisco Meraki tracks events as customers/visitors move around the building: each customer loses one wireless access point and acquires another, and this loss/acquisition translates into movement around the building. These events are passed in real time (1) to Azure and collated using an Azure Event Hub instance (2). In turn, the Event Hub forwards the events to Azure Stream Analytics, the Azure real-time event processing engine, which can operate on individual events or on events aggregated over a rolling time window.

[Image: Stream Analytics workflow]

Following the Lambda Architecture, the Stream Analytics engine now routes its output to the “speed” (5) and “batch” (4) processing streams. In the “batch” stream, events are captured for historical analysis on both Blob Storage and an Azure SQL Database instance; from here, events can be cleansed, merged with other data or aggregated for further analysis. Meanwhile, the output of the real-time analytic processes is also forwarded directly to a Power BI dashboard. Here it can be displayed as a real-time data stream, but it can also be combined with data from the historical store (6). As mentioned above, the outcome is a real-time chart showing footfall for the selected area of the building.
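In the deployed solution this split is performed by a Stream Analytics query, but purely to illustrate the speed/batch routing of the Lambda design, here is a hedged Python sketch of a consumer that archives every raw event to Blob Storage (batch) while maintaining a rolling in-memory footfall count (speed); the connection strings, hub and container names are assumptions.

```python
# pip install azure-eventhub azure-storage-blob
# Illustration only - Stream Analytics performs this routing in the actual solution.
import json, uuid
from collections import Counter
from azure.eventhub import EventHubConsumerClient
from azure.storage.blob import BlobServiceClient

EH_CONN_STR = "<event-hub-connection-string>"          # assumed
BLOB_CONN_STR = "<storage-account-connection-string>"  # assumed

blob_service = BlobServiceClient.from_connection_string(BLOB_CONN_STR)
container = blob_service.get_container_client("merakiraw")  # assumed, pre-created container
footfall = Counter()  # "speed" layer: rolling count per access point

def on_event(partition_context, event):
    record = json.loads(event.body_as_str())
    # Batch layer: archive the raw event for later cleansing / aggregation
    container.upload_blob(f"events/{uuid.uuid4()}.json", json.dumps(record))
    # Speed layer: update the near-real-time footfall count
    footfall[record.get("apMac", "unknown")] += 1
    print(dict(footfall))

consumer = EventHubConsumerClient.from_connection_string(
    EH_CONN_STR, consumer_group="$Default", eventhub_name="merakievents")
with consumer:
    consumer.receive(on_event=on_event, starting_position="-1")
```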

Deployment & Outcome

As mentioned, the GitHub solution is fully self-contained and documented. It includes all the necessary code and sample “real time” Meraki Cloud data, and provides step-by-step deployment guidance.

Azure PaaS Implementation using Lambda Architecture of Cisco Meraki In-Store Location Analytics

Example guidance

The upfront requirements (Azure subscription, Power BI, etc.) are listed in the documentation. At the end of the deployment (which should only take a few hours) you’ll have a full end-to-end, Lambda-compliant, Azure cloud-based IoT solution. You’ll also have worked with some of the key event and IoT processing engines within Azure, as well as the Power BI visualisation tool. Together this should provide a solid foundation for building richer and more complex IoT analytics, as outlined in the introduction.
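If you prefer to script the deployment rather than drive it from Visual Studio, a hedged sketch using the Azure Python SDK might look like the following; it assumes you have exported the solution’s ARM template to a local azuredeploy.json file (the file name is an assumption) and already created the target resource group.

```python
# pip install azure-identity azure-mgmt-resource
import json
from azure.identity import DefaultAzureCredential
from azure.mgmt.resource import ResourceManagementClient

SUBSCRIPTION_ID = "<subscription-guid>"   # assumed
RESOURCE_GROUP = "rgMerakiDev"            # target resource group from the earlier posts

client = ResourceManagementClient(DefaultAzureCredential(), SUBSCRIPTION_ID)

with open("azuredeploy.json") as f:       # assumed local export of the solution's ARM template
    template = json.load(f)

poller = client.deployments.begin_create_or_update(
    RESOURCE_GROUP,
    "meraki-lambda-deployment",
    {"properties": {"mode": "Incremental", "template": template}},
)
print(poller.result().properties.provisioning_state)
```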

ASP.NET Core 1.0

This is a very interesting announcement for maximising the use of Microsoft’s best web-based development technology.

I won’t spend too much time explaining the ins and outs of ASP.NET Core; instead, here are the key highlights for you to ponder:

  1. Cross-platform
  2. Open source
  3. Fast, modular, lightweight, flexible
  4. Posing a challenge to other open-source languages, i.e. Ruby, Python, PHP, Java, etc.
  5. Keeping pace with innovation
  6. Keeping existing .NET developers relevant
  7. Replacing .csproj with project.json
  8. Heavily integrating Node.js
  9. Extending the community-edition IDE under the name Visual Studio Code
  10. Ease of access / familiarity

For in-depth detail and arguments, please click on the following link. Regarding the release date, it is expected somewhere in the next few days, i.e. mid-February 2016.

http://dusted.codes/understanding-aspnet-core-10-aka-aspnet-5-and-why-it-will-replace-classic-aspnet

Enjoy!

Key to Setting up Azure (Point to Site VPN) without pain!!!

Here I am going to share the top tip for setting up an Azure point-to-site VPN without any pain.

There are numerous blogs already dedicated to step-by-step instructions for setting up a point-to-site VPN, so I am not going to bore you with the details; however, I will put some useful links at the end so that you can find everything in one place.

During the certificate-creation step, make sure that the certificate uploaded to the Azure point-to-site VPN virtual network is the root certificate used for secure connectivity to Azure, and that the client certificate is subsequently generated from that same root certificate and exported in the form of a .pfx file.
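Purely to illustrate that root/client relationship (as a hedged alternative to makecert, using Python’s cryptography package; the names, validity period and password are assumptions), the root and client certificates could be generated and exported like this:

```python
# pip install cryptography
import datetime
from cryptography import x509
from cryptography.x509.oid import NameOID
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric import rsa
from cryptography.hazmat.primitives.serialization import pkcs12

def make_key():
    return rsa.generate_private_key(public_exponent=65537, key_size=2048)

def builder(subject_cn, issuer_name, public_key):
    now = datetime.datetime.utcnow()
    return (x509.CertificateBuilder()
            .subject_name(x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, subject_cn)]))
            .issuer_name(issuer_name)
            .public_key(public_key)
            .serial_number(x509.random_serial_number())
            .not_valid_before(now)
            .not_valid_after(now + datetime.timedelta(days=365)))

# Self-signed root certificate - its public part is what gets uploaded to the virtual network
root_key = make_key()
root_name = x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, "P2SRootCert")])
root_cert = (builder("P2SRootCert", root_name, root_key.public_key())
             .add_extension(x509.BasicConstraints(ca=True, path_length=None), critical=True)
             .sign(root_key, hashes.SHA256()))

# Client certificate issued (signed) by that same root certificate
client_key = make_key()
client_cert = builder("P2SClientCert", root_name, client_key.public_key()).sign(root_key, hashes.SHA256())

# Export: root public cert (.cer) for Azure, client cert + key as a password-protected .pfx
with open("P2SRootCert.cer", "wb") as f:
    f.write(root_cert.public_bytes(serialization.Encoding.DER))
with open("P2SClientCert.pfx", "wb") as f:
    f.write(pkcs12.serialize_key_and_certificates(
        name=b"P2SClientCert", key=client_key, cert=client_cert, cas=[root_cert],
        encryption_algorithm=serialization.BestAvailableEncryption(b"ChangeMe123")))
```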

Right-click on the .pfx file to install it on any machine that needs to connect to that virtual network.

Once the client certificate is installed and the root certificate has been uploaded to the newly created virtual network, click the “Create Gateway” option in the Virtual Network section of Azure. This should then display a visual representation of the connected gateway.

Download the VPN client, as it contains all the necessary connectivity details for the virtual network you created. Once it is downloaded and installed, you should see an additional connection in the networks list on your taskbar.

Click on the Azure virtual network connection to connect; this should give you the option to select the client certificate installed earlier, and upon selection the point-to-site VPN should connect without any problem.

Some useful links:

https://msdn.microsoft.com/en-us/library/azure/dn133792.aspx

http://www.cloudcomputingadmin.com/articles-tutorials/windows-azure/configure-client-based-remote-access-vpn-windows-azure-virtual-networks.html

http://blogs.technet.com/b/cbernier/archive/2013/08/21/windows-azure-how-to-point-to-site-vpn-walk-through.aspx