SRK-TechBlog

Technologies Blog

Being an IT Architect

As per the TOGAF standard, it is the IT Architect role that connects IT with the business.

 

In order to perform this role, having technical knowledge of product "A", for example, is usually not sufficient for the role owner. One also needs to know the associated costs and the licensing model of the solution, plus at least some knowledge of the competing products. Without a grip on these, it is NOT possible to propose a viable solution in the context of costs and the overall market segment of the product.

The TOGAF process for the development of architecture is very straightforward.

Blazor and .NET Core

Why MS ASP.NET Core Blazor is going to replace JavaScript in the time to come

 

> .NET Core is already cross-platform (even available on Android), and Blazor brings C# to the browser: proper language support instead of sticking with the single-threaded JavaScript language

> Today, mixing frontend and backend languages is unavoidable unless you use Node.js; with Blazor, web development will become a lot simpler because you use the same language on the backend as well as the frontend

> You can already try out Blazor: Blazor | Build client web apps with C# | .NET (microsoft.com)

> Offline apps will be possible

> All current major browsers already support the required technology stack, which is possible because of WebAssembly

Computing of 2020: Virtualization based on Docker Containers and Kubernetes


 

In the past, with the advent of Windows, it was possible for one process to write to a memory location that was in fact being used by the OS kernel or by another process. A process could also halt execution of the processor entirely. Testing was therefore extremely difficult, because everything was tightly coupled, all the way from the host to the applications running on top of that OS and their states. There was effectively no isolation until Windows 95 came into the picture 25 years back: it brought virtual address spaces and a feature called pre-emptive multitasking (https://www.windowsdatarecoverysoftware.com/win95/). This let the OS expose each process's address space as a virtual one, with the actual addresses translated by the processor to somewhere else. A process with coding issues or bugs could no longer bring the whole operating system down, and the OS could terminate such a process by forcefully taking the CPU away from it. This was made possible with the help of a processor feature, essentially an early form of virtual machine: the virtual 8086 (V8086) mode introduced by the processor helped enable it (https://en.wikipedia.org/wiki/Virtual_8086_mode).

 

Over the decades, virtual 8086 mode eventually evolved into full virtualization technology, where multiple operating systems could share the processor, run in total isolation from one another, and no longer affect the whole server. VMware and Hyper-V became the major players. This helped production, through consolidation of workloads onto the same physical host server, as well as dev environments, because rolling back and spinning up environments became feasible in terms of both space and time.

 

Soon it was realized that repeating the OS kernel in each virtual machine running on the same host not only carries its own disk space and CPU utilization cost, it is simply not needed. Docker came with the concept of sharing the kernel in such a way that each process is totally isolated from every other process and sees the OS as if it were its own. Such a process is called a container. The cost of spinning up a new container is therefore much lower, with far less overhead on disk and CPU, so it became an easy success.

 

To support containers, support from the operating system was also essential, and its overall size became an important consideration. Linux in its then-current state could not be containerized either and needed some changes. Microsoft had to make major changes, because bulky GUI-based servers definitely could not be containerized, so they came up with stripped-down editions of the OS. The server editions therefore went through the changes below.

 

Docker vs Virtual Machine

Docker is process virtualization: a process sees the host as if it were isolated from all other processes on the same host. A virtual machine is operating-system virtualization: multiple operating systems run on the same hardware without knowing of each other's existence.
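As a tiny illustration of this point (a sketch only, assuming Docker is already installed on a Windows Server host and the image tag matches your host build), you can see that a container really is just an isolated process on the host:

```powershell
# Start a Windows container that just pings itself in a loop
# (image tag is an example; pick one matching your host build)
docker run -d --name pingdemo mcr.microsoft.com/windows/nanoserver:ltsc2019 ping -t localhost

# With process isolation (the default on Windows Server), the container's
# ping.exe shows up as an ordinary process in the host's process list
Get-Process -Name ping

# Clean up the demo container
docker rm -f pingdemo
```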

  

Disk Sizes

Microsoft Windows Server Full edition has roughly a 10 GB install size, while the Server Core edition takes less space, around 5-6 GB. The newest addition to the family is Nano Server, the minimal server edition from Microsoft: its total install size is around 250 MB on disk, while the source Docker image is only approximately 100 MB.
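As a quick check (assuming Docker is already installed; the exact tags and the sizes reported depend on the Windows release you pull), you can pull the Nano Server base image and compare it with Server Core:

```powershell
# Pull the minimal Nano Server base image (tag is an example; match your host build)
docker pull mcr.microsoft.com/windows/nanoserver:ltsc2019

# Pull the much larger Server Core base image for comparison
docker pull mcr.microsoft.com/windows/servercore:ltsc2019

# List local images with their on-disk sizes
docker images --format "table {{.Repository}}\t{{.Tag}}\t{{.Size}}"
```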

  

Licensing is much simpler as well,

source: docs.microsoft.com

 

Memory Requirements

Hyper-V Isolation vs Process Isolation


source: https://docs.microsoft.com/en-us/virtualization/windowscontainers/manage-containers/hyperv-container

Shared Kernel Consideration

Base images are special OS images. With process isolation you need to use the base image matching the host OS version, and you need to consider the patch level of the host OS as well; otherwise you need to use Hyper-V isolation.
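As a sketch of how the isolation mode is chosen per container (image tags are examples; Hyper-V isolation additionally assumes the Hyper-V role is installed on the host):

```powershell
# Process isolation: the container shares the host kernel, so the base image
# version must match the host OS version and patch level
docker run --rm --isolation=process mcr.microsoft.com/windows/nanoserver:ltsc2019 cmd /c ver

# Hyper-V isolation: the container gets its own lightweight utility VM and kernel,
# so an older or mismatched base image can still run
docker run --rm --isolation=hyperv mcr.microsoft.com/windows/nanoserver:1809 cmd /c ver
```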

Docker Engine and competing products

Key Takeaways

> Docker is the de facto container standard as well as the leading implementation as of now, with MS also integrating it with Windows to provide the necessary Docker infrastructure on Windows servers.

> A container is essentially process isolation. Be heedful when using containers on Windows Server that you are not running with Hyper-V isolation instead, which generally should not be the case.

> The benefit to DevOps is a much better CI/CD pipeline; the cost of creating new environments is far lower and testing is much more streamlined.

> The benefit to production is consolidation of workloads at a much higher scale and rapid provisioning of services: no VM boot time, no disk overhead, no RAM overhead.

> Portability of the solution increases many-fold, similar to a virtual machine, but since Docker images are much smaller, portability is even better than with a VM.

> The layered approach of images, built through a Dockerfile, focuses on your customizations on top of base images that are available on Docker Hub around the globe. Essentially you carry only the customization plus your binaries or web publish folder to ship your implementation. That really increases portability many-fold: you are assured your code will run successfully without having to carry big virtual machine hard disks, for example.

> Linux or Windows does not matter much any more: with .NET Core being cross-platform and MS putting all its effort into .NET Core while winding down .NET Framework, .NET Core will be the main thing. This applies if your service runs on .NET Core.

> New Windows servers are no longer bloated with a GUI; Windows Nano Server has roughly a 100 MB image size and a 250 MB footprint when running, so you get benefits comparable to Linux along with the standardization of the MS world.

> You really don't have to be a Dockerfile expert. Although it is good to know the parameters in order to fine-tune, in most cases you can get the Dockerfile auto-created by Visual Studio, carry it along into production, and change it if needed.

 

 

Referenced Code: Shahid-Roofi-Khan/BlazorDockerSampleForWin2019 (github.com)

How to install Docker on Windows Server: https://docs.microsoft.com/en-us/virtualization/windowscontainers/quick-start/set-up-environment?tabs=Windows-Server
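For reference, at the time of writing the installation on Windows Server boiled down to a few PowerShell commands, following the Microsoft quick-start linked above (run in an elevated session):

```powershell
# Install the Docker-Microsoft PackageManagement provider from the PowerShell Gallery
Install-Module -Name DockerMsftProvider -Repository PSGallery -Force

# Install the Docker engine and client package
Install-Package -Name docker -ProviderName DockerMsftProvider -Force

# A restart is required to finish enabling the containers feature
Restart-Computer -Force
```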

New cluster model in Windows Server 2019 allows a router's USB share to be used as the witness

Cluster arbitration needs a third computer hosting a shared folder to act as the judge (witness) and facilitate automatic failover between the two computers hosting a highly available service.

However, it was costly to have one: it had to be joined to the domain, must not be a domain controller, must stay up, and could not sit in a DMZ, since domain authentication also had to succeed.

That is quite demanding in terms of hardware and software. The good news is that MS Windows Server 2019 removes ALL of these requirements.

This means NO Kerberos, NO domain controller, NO certificates, and NO Cluster Name Object needed. While we are at it, NO account needed on the nodes. Oh my!

You can even use your router's USB share to arbitrate your internal clusters!

It also now makes a lot of sense to use a cloud machine to arbitrate your internal on-premises cluster setups!

The only requirement is that your share supports at least SMB 2.0 file sharing.
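A minimal sketch of the configuration (the cluster name, share path, and Azure account details below are placeholders; the cmdlets ship with the Failover Clustering feature):

```powershell
# Point the cluster quorum at a plain SMB file share witness,
# e.g. a share exposed by a router's USB port (path is illustrative)
Set-ClusterQuorum -Cluster "MyCluster" -FileShareWitness "\\192.168.1.1\witness"

# Or use an Azure storage account as a Cloud Witness
# (account name and access key are placeholders)
Set-ClusterQuorum -Cluster "MyCluster" -CloudWitness -AccountName "mystorageacct" -AccessKey "<storage-account-key>"
```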

#CloudWitness #WindowsServer2019 #NewFileShares #Security

Inventory your Servers using PowerShell

Inventorying your servers is a very time-consuming task, and most methods of automating inventory collection cannot fetch detailed BIOS information or MAC addresses.

With PowerShell and WMI, I've contributed a script that queries and tags that information as well. The output includes BIOS details, serial number, and platform.

Output sample:

Script available at my MS Gallery contribution: https://gallery.technet.microsoft.com/How-to-inventory-all-ce4e1f34?redir=0
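The gallery script does more, but as a minimal sketch of the WMI queries involved (the server list below is a placeholder):

```powershell
# Minimal inventory sketch: BIOS, serial number, platform and MAC per server
$servers = @("SERVER01", "SERVER02")   # placeholder names

foreach ($server in $servers) {
    $bios = Get-WmiObject -Class Win32_BIOS -ComputerName $server
    $sys  = Get-WmiObject -Class Win32_ComputerSystem -ComputerName $server
    $nic  = Get-WmiObject -Class Win32_NetworkAdapterConfiguration -ComputerName $server |
            Where-Object { $_.IPEnabled }

    [PSCustomObject]@{
        Server       = $server
        Manufacturer = $sys.Manufacturer
        Model        = $sys.Model
        BIOSVersion  = $bios.SMBIOSBIOSVersion
        SerialNumber = $bios.SerialNumber
        MACAddress   = ($nic.MACAddress -join ", ")
    }
}
```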

SQL Always On - Choosing between Async and Sync Replication Modes

Consultants often get stuck on the decision of which SQL Server Always On replication mode to choose.

SQL Server Always On offers SYNCHRONOUS and ASYNCHRONOUS modes of replication.

One should aim for synchronous replication because it gives zero-loss recovery, but at the same time, enabling synchronous replication might not be realistically possible in 99% of cases, given its cost. In synchronous replication, data is replicated first and only then is the write considered complete on the primary node, which ensures the second node is up to date. In asynchronous replication, the second node is always catching up to the latest state and will carry some backlog under load.

Note that Microsoft does not specify a 10 Gbps or 50 Gbps link requirement, storage RAID levels, or IOPS required for synchronous replication. Since this is very workload-specific, Microsoft instead points to SQL Server performance counters to check whether the hardware and setup are feasible for this replication type or not. And since a database is an ever-growing thing, this check needs to be run on a schedule rather than only once.

source: https://docs.microsoft.com/en-us/sql/database-engine/availability-groups/windows/availability-modes-always-on-availability-groups?view=sql-server-2017

As per the Microsoft documentation,

"In Always On availability groups, the availability mode is a replica property that determines whether a given availability replica can run in synchronous-commit mode. For each availability replica, the availability mode must be configured for either synchronous-commit mode, asynchronous-commit, or configuration only mode. If the primary replica is configured for asynchronous-commit mode, it does not wait for any secondary replica to write incoming transaction log records to disk (to harden the log). If a given secondary replica is configured for asynchronous-commit mode, the primary replica does not wait for that secondary replica to harden the log. If both the primary replica and a given secondary replica are both configured for synchronous-commit mode, the primary replica waits for the secondary replica to confirm that it has hardened the log (unless the secondary replica fails to ping the primary replica within the primary's session-timeout period)."

If the primary's session-timeout period is exceeded by a secondary replica, the primary replica temporarily shifts into asynchronous-commit mode for that secondary replica. When the secondary replica reconnects with the primary replica, they resume synchronous-commit mode.


Note that synchronous replication only holds while the partner is up. Because every transaction is also committed on the second node, the primary still has to keep functioning if the second node goes down permanently, so it waits for the timeout period and shifts to asynchronous mode temporarily if communication is disrupted. (Ping is the criterion!)

This also means synchronous Always On availability groups are not zero-data-loss in all cases. These are scenarios that need planning if the use case demands it.
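For completeness, a hedged sketch of how the mode is set per replica using the SqlServer PowerShell module (the instance, availability group, and replica names below are placeholders):

```powershell
# Requires the SqlServer module (Install-Module SqlServer)
Import-Module SqlServer

# Switch a replica to synchronous commit with automatic failover
Set-SqlAvailabilityReplica `
    -Path "SQLSERVER:\SQL\PRIMARYNODE\DEFAULT\AvailabilityGroups\MyAG\AvailabilityReplicas\SECONDARYNODE" `
    -AvailabilityMode "SynchronousCommit" `
    -FailoverMode "Automatic"

# Or switch it back to asynchronous commit (manual failover only)
Set-SqlAvailabilityReplica `
    -Path "SQLSERVER:\SQL\PRIMARYNODE\DEFAULT\AvailabilityGroups\MyAG\AvailabilityReplicas\SECONDARYNODE" `
    -AvailabilityMode "AsynchronousCommit" `
    -FailoverMode "Manual"
```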

I've posted the same on the TechNet Wiki and received great feedback: https://social.technet.microsoft.com/wiki/contents/articles/52671.sql-alwayson-choosing-between-the-right-replication-model.aspx

Further Reading: Microsoft Article

 

SharePoint Document Sets and why you should always use them!

There are many workflow solutions on the planet; what differentiates SharePoint and makes it stand out as a workflow solution is its support for document-centric workflows, and you can even use them with document sets. Normally, other solutions only give you the option of attachments. In SharePoint, by contrast, content is a first-class citizen. Consider a workflow where a set of documents moves along with the workflow, with document versioning still available and grouping of documents also possible. For example, we developed a loan management system for one of the leading banks of the region, with the loan documents being central to the workflow. That turned into a big success.

SharePoint is a content management system; managing content and documents is the core of this software. One concept it facilitates is grouping multiple documents into a set, enabled through a feature called Document Set.

Put another way, Document Set is a special SharePoint feature that groups multiple documents into a set treated as a single object, or, simply stated, a smarter "folder", e.g. as below.
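As a hedged sketch of creating one from script, using the community PnP PowerShell module (the site URL, library name, and document set name below are made-up examples, loosely echoing the loan scenario above):

```powershell
# Requires the community PnP.PowerShell module (Install-Module PnP.PowerShell)
# Site URL, library name and document set name are placeholders
Connect-PnPOnline -Url "https://contoso.sharepoint.com/sites/loans" -Interactive

# Create a new document set in a library, e.g. one set per loan application
Add-PnPDocumentSet -List "Loan Applications" -ContentType "Document Set" -Name "Loan-2021-0001"
```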

 

Why SSL's fate is doomed and TLS is the only option left

SSL, which stands for Secure Sockets Layer, is a protocol used to provide secure connections between a client and a server. A TCP connection can provide a reliable link between a server and a client but cannot provide services such as confidentiality, integrity, and endpoint authentication. So SSL was introduced by Netscape in the early 1990s to provide these services. The first version, SSL 1.0, was never released to the public as it had many security holes. However, in 1995, SSL 2.0 was introduced with better security than SSL 1.0, and in 1996, SSL 3.0 followed with further improvements. The next versions of the SSL protocol appeared under the name TLS.

SSL, which operates on top of the transport layer, can secure a protocol such as TCP by applying various security measures. It provides confidentiality by using encryption to prevent anyone from eavesdropping. It uses both asymmetric and symmetric encryption: first, using asymmetric encryption, a symmetric session key is established, which is then used for encrypting the traffic. Asymmetric cryptography is also used for the digital certificates that authenticate the server. A Message Authentication Code, built on various hashing techniques, then provides integrity (identifying any unauthorized modification of the real data). A protocol like SSL thus allows transmitting sensitive information such as bank transactions and credit card details over the internet. It is also used to provide confidentiality for services such as email, web browsing, messaging, and voice over IP.

SSL is now outdated and has many security issues, so its use is no longer recommended. SSL 3.0 was enabled by default in many browsers until recently, but they have been disabling it in newer versions due to severe security bugs such as the POODLE attack.
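As a small illustration of the "TLS only" point on Windows (a sketch only: the registry path follows the Schannel convention, requires elevation and a reboot, and should be tested before applying in production):

```powershell
# Force the current PowerShell/.NET session to use TLS 1.2 rather than SSL 3.0/TLS 1.0
[System.Net.ServicePointManager]::SecurityProtocol = [System.Net.SecurityProtocolType]::Tls12

# Disable SSL 3.0 on the server side via the Schannel registry keys
$path = "HKLM:\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Protocols\SSL 3.0\Server"
New-Item -Path $path -Force | Out-Null
New-ItemProperty -Path $path -Name "Enabled" -Value 0 -PropertyType DWord -Force | Out-Null
```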