Azure – what is it exactly?

January 8th, 2017 by Stephen Jones No comments »

You may have recently seen a television commercial for “The Microsoft Cloud,” which featured Healthcare, Cancer Research, and Cybercrime. So, what does this have to do with Microsoft Azure?

Microsoft Azure is the Microsoft product name for the Microsoft Cloud. The names are used synonymously in the technical industry.

Amid the digital transformation shift to the cloud, the question remains: "What is Azure, and for whom is it meant?"

Azure was announced in October 2008 and released in February 2010 as Windows Azure, and was then renamed Microsoft Azure in March 2014.

Azure is a cloud computing platform, plus the underlying infrastructure and management services, created by Microsoft to build, deploy, and manage applications and services through a global network of Microsoft-managed data centers.

What are Microsoft Azure Data Centers?

There are 34 interconnected Microsoft Data Regions around the world with more planned.

Microsoft describes Azure as a “growing collection of integrated cloud services, including analytics, computing, database, mobile, networking, storage, and web.” Azure’s integrated tools, pre-built templates and managed services simplify the task of building and managing enterprise applications (apps).

Microsoft Corp. CEO Satya Nadella calls Azure, “the industry’s most complete cloud — for every business, every industry and every geography.”

The Complete Cloud

For many businesses, their first foray into leveraging cloud software as a service (SaaS) is with Microsoft Office 365, Exchange Online for hosted email, or CRM Online for managing business and customer relationships. However, the Azure platform is much more than just an online business software delivery platform.

Here are just a few of the things that you can do with Azure:
• Build and deploy modern, cross-platform web and mobile applications.
• Store, back up and recover your data in the cloud with Azure-based disaster recovery as a service (DRaaS).
• Run your line-of-business applications on Azure.
• Run large-scale compute jobs and perform powerful predictive analytics.
• Encode, store and stream audio and video at scale.
• Build intelligent products and services leveraging Internet of Things services.

Use Azure, and your partner, to rapidly build, deploy, and host solutions across a worldwide network, and to create hybrid solutions that seamlessly integrate your existing on-premises IT with Azure.

Many leverage Azure to protect data and meet privacy standards like the new international cloud privacy standard, ISO 27018, or HIPAA.

Azure customers can quickly scale up infrastructure and, just as importantly, scale it down, while paying only for what they use.

Azure also supports a broad selection of operating systems, programming languages, frameworks, tools, databases and devices.

Contrary to the perception that Azure is for Windows only, nearly one in three Azure virtual machines run Linux.3

Widespread Adoption

More than 80 percent of Fortune 500 companies rely on Azure, which offers enterprise grade SLAs on services. In addition, Microsoft is the only vendor positioned as a Leader across Gartner’s Magic Quadrants for Cloud Infrastructure as a Service (IaaS), Application Platform as a Service (PaaS), and Cloud Storage Services for the second consecutive year.1

What is Microsoft Azure IoT?

Microsoft’s powerful Azure Internet of Things Hub and tool suite has also been widely adopted for use in commercial and scientific applications to securely connect and manage Internet of Things (IoT) assets. The service processes more than two trillion IoT messages weekly.4

From broadcasting the Olympics to building massively multiplayer online games, Azure customers are doing some amazing things, and in increasing numbers. Microsoft recently revealed that the rate of Azure customer growth has accelerated to more than 120k new Azure customer subscriptions per month.4 In line with the accelerated adoption, the company is projecting an annualized commercial cloud revenue run rate of $20 Billion in 2018.3

Cloud Leadership

With Azure, Microsoft has made a huge commitment to cloud computing. Since opening its first datacenter, Microsoft has invested more than $15 billion in building its global cloud infrastructure.5 In addition, the company recently announced it would build its first Azure data center in France this year as part of a $3 billion investment to build its cloud services in Europe.6

Microsoft is quickly closing the gap in market share with IaaS provider Amazon Web Services, (AWS). While 37.1% of IT professionals surveyed indicated that Amazon AWS is their primary IaaS platform, Microsoft Azure is a close second at 28.4%, followed by Google Cloud Platform at 16.5%.7

And hot off the press…
Microsoft isn’t building its own connected car — but it is launching a new Azure-based cloud platform for car manufacturers to use the cloud to power their own connected-car services.

The new Microsoft Connected Vehicle Platform will go live as a public preview later this year.
“This is not an in-car operating system or a ‘finished product’,” Microsoft’s EVP for business development Peggy Johnson writes in this week’s announcement. “It’s a living, agile platform that starts with the cloud as the foundation and aims to address five core scenarios that our partners have told us are key priorities: predictive maintenance, improved in-car productivity, advanced navigation, customer insights and help building autonomous driving capabilities.”

Microsoft also announced that it is partnering with the Renault-Nissan Alliance to bring the new connected-car services to Renault-Nissan’s next-gen connected vehicles. The two companies were already working together on other projects before this, so it’s maybe no surprise that Renault-Nissan is Microsoft’s first partner.
Microsoft is also working with BMW to develop that company’s BMW Connected platform on top of Azure. BMW and Nissan also showed in-car integrations with Microsoft’s Cortana digital assistant at CES this year, so your future car could potentially use Cortana to power its voice-enabled services. For the time being, though, it looks like these are still experiments.

Microsoft has talked about its aim to bring "intelligence" to as many of its services as possible. It has also recently opened up Cortana to third-party developers, so bringing it to its connected car platform is a logical next step (and we're also seeing Amazon do the same thing with Alexa).

Johnson also used today’s announcement to take a thinly veiled swipe at Google/Alphabet, which spun out its self-driving car unit a few weeks ago. “As you may have gathered, Microsoft is not building its own connected car,” she writes. “Instead, we want to help automakers create connected car solutions that fit seamlessly with their brands, address their customers’ unique needs, competitively differentiate their products and generate new and sustainable revenue streams.”

Find documents and data fast in Dynamics 365.

January 8th, 2017 by Stephen Jones No comments »

Learn 3 ways to find documents and data faster in Microsoft Dynamics 365.

The new Office Delve dashboard component learns what you're working on, and who you're working with, to automatically surface documents that are relevant to you.

When you’re working on a document that you’re not ready to share, store it and any other private documents in OneDrive for Business.

Learn how to quickly view and present data with Export to Excel and Excel templates.

Dynamics 365 personalisations

January 8th, 2017 by Stephen Jones No comments »

Dynamics AX has always had powerful user personalisation features. See how this is done in Dynamics 365 Operations. Watch this video to learn six easy ways to customize Microsoft Dynamics 365 without writing any code.
You’ll learn how to:
– add a logo or color theme to your forms;
– get rid of fields you don’t use;
– change the business process bar to match the way your organization works;
– create custom dashboards;
– take advantage of customization packages (solutions) from third-parties.

P.S. To find a partner for further customisations, there's no need to look at another website: just call us on 0097143365589

Discontinuation of Dynamics Online Payment Services

January 8th, 2017 by Stephen Jones No comments »

A notice was recently released on the discontinuation of Dynamics Online Payment Services.
CustomerSource
https://mbs.microsoft.com/customersource/northamerica/news-events/news-events/news/onlinepaymentservdisnotice

This was not available in the U.A.E. but may be of interest to those with a global deployment.

Discontinuation verbatim:
Effective January 1, 2018, Payments Services for Microsoft Dynamics ERP (Payment Services), available with any versions of Microsoft Dynamics AX, Microsoft Dynamics NAV, Microsoft Dynamics GP, Microsoft Dynamics RMS, Microsoft Dynamics POS 2009, and Microsoft Office Accounting, will be discontinued. Customers of the Payment Services will not be able to process credit or debit card transactions after December 31, 2017.

To mitigate the potential business impact of the Payment Services being discontinued, customers should consider payment solutions provided by Dynamics Independent Solution Providers (ISVs) by searching Microsoft’s AppSource or Solution Finder, or work with their Dynamics implementation partner to determine options.

Customers who have not cancelled their subscription to the Payment Services and whose subscription is due for renewal before January 1, 2018 will receive the annual, automatically generated, 12-month subscription renewal notice for this service. Subscriptions to the Payment Services will be renewed only for the period beginning on the date specified in the renewal notice and ending on January 1, 2018. A customer’s subscription to the Payment Services may not be renewed for 12 months depending on the date it expires. The notice supersedes any subscription renewal a customer receives. If you have any questions, please email dops@microsoft.com.

Common Reporting Standard – CRS is effective in UAE from 1 January 2017.

January 2nd, 2017 by Stephen Jones No comments »

The CRS is a global standard for the automatic exchange of financial information between jurisdictions that have agreed to adopt it. The Organization for Economic Co-operation and Development (OECD) introduced CRS as a means to combat tax evasion and to improve cross-border tax compliance.

The CRS came into effect in early adopter jurisdictions from 01 January 2016 and will take effect from 01 January 2017 in late adopter jurisdictions including the U.A.E.

Why is this important?

In countries where CRS requirements have been enacted into local law, compliance with CRS is mandatory. Banks must comply with the CRS requirements in accordance with country-specific legislation.

Under the CRS, the Bank must collect certain information to establish the country (or countries) of tax residence of each of its clients. The Bank may further be obliged to report certain financial information regarding the financial accounts held by its clients to the tax authority where the account is maintained. This local tax authority may exchange this information with the tax authority of another country, subject to the information exchange agreements that are in place.

In readiness for meeting the CRS requirements as they come into force, your bank may contact you to collect certain tax-related information and/or documents.

The financial information to be reported with respect to reportable accounts includes all types of investment income (including interest, dividends, income from certain insurance contracts and other similar types of income), as well as account balances and sales proceeds from financial assets.

– The financial institutions that are required to report under the CRS include not only banks and custodians but also other financial institutions such as brokers, certain collective investment vehicles and certain insurance companies.
– Reportable accounts include accounts held by individuals and entities (which includes trusts and foundations), and the standard includes a requirement to look through passive entities to report on the individuals that ultimately control these entities.

Offshore accounts will be disclosed to the taxman, and you may lose your privacy. This year, many account holders may opt to close overseas accounts to avoid being reported to tax authorities. This may result in the demise of many offshore financial institutions and banks.

Year end close for your Windows operating system

December 30th, 2016 by Stephen Jones No comments »

1: Upgrade applications

Updating applications should be part of a regular maintenance cycle, but it's a task that sometimes falls through the cracks. Ensure that applications are always current, so as to maximize compatibility with newer hardware and to support the overall security posture of a system. Don't head into 2017 with out-of-date software.

2: Back up data

This critical task should be performed on a regular basis to ensure that data is recoverable in the event of loss, theft, or catastrophe. If you don’t have a properly configured, automated backup scheme, then manually perform a full backup of all your data. All versions of Windows since Vista include a modern backup application built into the OS which allows backup to an external drive or to a shared folder on a network drive. Although not as robust a backup solution as some third-party offerings, it works as advertised and even allows for backups to run on a schedule.
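The built-in tool aside, the idea of a full backup is easy to sketch. Below is a minimal, illustrative Python sketch (not the Windows backup application, and the function name is invented) that archives a folder into a dated zip file:

```python
# Minimal backup sketch: archive a source folder into a dated .zip file.
# Illustrative only -- a real backup scheme should be scheduled and verified.
import shutil
from datetime import date
from pathlib import Path

def backup_folder(source: str, dest_dir: str) -> str:
    """Create a zip archive of `source` inside `dest_dir`, named by date."""
    dest = Path(dest_dir)
    dest.mkdir(parents=True, exist_ok=True)
    archive_base = dest / f"backup-{date.today().isoformat()}"
    # make_archive returns the full path of the created .zip file
    return shutil.make_archive(str(archive_base), "zip", root_dir=source)
```

Pointing `dest_dir` at an external drive or a network share mirrors what the built-in tool does.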

3: Update Windows

Windows XP introduced the ability to install system updates automatically. This simple feature has been streamlined in subsequent Windows versions to help keep machines patched against malware and security threats. Even so, millions of devices worldwide do not regularly receive system updates. I can't think of a better time than the new year to develop the habit of performing system updates to protect your devices and keep them stable.

4: Clean temporary files/cache folders

With large amounts of data going back and forth online and increased reliance on web-based applications, the temporary folders and cache folders, including the cookies, that store all of this data can grow to unbelievable sizes in a short amount of time. To free up storage space—and to prevent this type of data from being used to compromise your system and/or accounts—it’s important to delete these temporary files to clean your system.
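As an illustration of the underlying idea (not a substitute for a dedicated cleaner), here is a hedged Python sketch that deletes files older than a cutoff; the function name is invented, and it should only ever be pointed at a folder you know to be disposable, such as your own application's temp directory:

```python
# Hedged sketch: remove files older than `days` days from a cache directory.
# Point it only at disposable folders -- this is an illustration, not a
# system cleaner.
import time
from pathlib import Path

def purge_old_files(directory: str, days: int = 7) -> int:
    """Delete files older than `days` days; return the number removed."""
    cutoff = time.time() - days * 86400
    removed = 0
    for path in Path(directory).rglob("*"):
        if path.is_file() and path.stat().st_mtime < cutoff:
            path.unlink()
            removed += 1
    return removed
```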

Among the many applications available that offer system cleaning utilities, CCleaner stands out as powerful and easy to use. Even the freeware version has enough capabilities to clean out all temporary folders and caches, and it can make storage space available with its handy scripts. You can set it to run upon startup, so that your system is always clean and functioning properly.

5: Update anti-malware and run a full-system scan

The popularity of Windows makes it a magnet for security threats. An up-to-date malware detection system is often the only thing standing between keeping and losing your data.

Additional security protections, such as a firewall and web and email filtering, should also be used. Free apps such as Avira, Windows Defender, and Avast rate highly, offering excellent performance with only a slight impact on system resources. For business use, look at a tool like Kaspersky.

6: Use System File Checker (SFC)

Windows files get modified both when system updates occur, and when applications get installed and upgraded. Those files can also be corrupted by malicious software or incomplete updates.

When system files aren’t as they should be, weird things will occur to your Windows installation.

To prevent Windows from acting erratically or failing to load the system and/or applications correctly, regularly run SFC—the built-in Microsoft utility to check and fix system file issues. Here’s how:
1. Launch CMD with elevated privileges.
2. Type sfc /scannow to begin the verification process for all system files. As the scan progresses, any corrupt files will automatically be corrected from the cache stored locally in the Windows directory.

7: Uninstall unused applications

We all use a variety of apps to get work accomplished. Some are small, while others are large suites. Over time, some of these apps lose their viability and no longer serve their function, which presents several problems. Unnecessary apps can use up resources and present security issues. If the apps are no longer being used and are also no longer supported by the developer, then there could be an even greater security risk. Close out the year by ridding yourself of these unused apps before data loss occurs.

8: Transfer Windows data from one PC to another

If you’re upgrading to a new PC or swapping out your gear, then transfer your account profile, including files & folders and settings, from your old PC to the new one. Sadly, Microsoft’s Windows Easy Transfer does not support Windows 10. However, Microsoft has a partnership with LapLink to officially provide Windows 10 support for its PCMover Express software ($14.99-29.99) to migrate data to a new Windows 10-enabled PC. The application also includes regular and enterprise editions that may be used over corporate networks and provides zero-touch support.

9: Perform a PC reset if your PC is constantly playing up or not working

From Windows 8 on, Microsoft has included recovery options to fix non-working computers, as well as adding the option to factory-reset an installation. This essentially deletes all user data, including apps, and reloads the Windows OS back to its defaults. Depending on the speed of the computer, the process will typically take two hours or so to complete.

To accomplish this, follow the steps below:
1. Go to Settings | Update & Security | Reset This PC | Get Started.
2. Choose the Remove Everything option to fully clean the internal drive, settings, and all user data.

10: Reboot Windows to clear sleep/hibernation data

Most of us are guilty of this one: we use the PC for work and, when done, put it to sleep. We hardly ever reboot, and never shut down unless the system has become unstable or the battery runs out of power.

Each time the PC goes to sleep, it stores copies of the working environment in RAM and hibernation files so that when the user wakes the system, they can resume where they left off. The problem is that these files never get flushed properly until a reboot or shutdown. So they just sit there taking up space and potentially leaving a security vulnerability, since some system updates require a machine restart to complete properly.

11: Upgrade hardware

For those working on older Windows PCs, it may be a good time to upgrade by adding more RAM or swapping out a mechanical HDD for a solid-state drive. Or consider upgrading to a larger external drive or adding some accessories, like a docking station, to boost performance.

If you choose to go the total system upgrade path, performing the tasks listed above will prepare your current PC for its new owner, by ensuring that your data is completely backed up and ready to be transferred to its new home, and that the older equipment is in prime condition for the next user.

12. Clean up your desktop
Files on the desktop typically go into the cache and eat up your memory. Consider storing all shortcuts in a folder and then putting a single shortcut to that folder on the desktop.

Learn to use all the features

The taskbar calendar now integrates with Windows 10's core Calendar app. Click the date and time on the right-hand side of your taskbar, and the calendar that pops up includes a full look at your schedule for the day.

If you’d like to be able to just bark commands at your PC, open Cortana by clicking the search field in the taskbar and select the Notebook icon in the left-side options pane. Select Settings from the list, then simply enable the Let Cortana respond when you say “Hey Cortana” option. You’ll need an active microphone for this to work.
Shortcut keys

• Windows Logo Key + Ctrl + D: switch to a new virtual desktop. Why do you need this? Say you’ve launched so many applications at once that you lose track of everything. What could be better than switching to a clean and slick desktop?
• Windows Logo Key + L: lock your PC, from where you can switch between accounts.
• Windows Logo Key + C: wake Cortana in listening mode.
• Windows Logo Key + I: open the Settings app.
• Win+Tab: activate Task View (more on that later).
• Win+Q or Win+S: open Search/Cortana in typing mode, perfect for queries and searches.
• Win+Left or Win+Right: snap the current window to the left or right, taking up half of the screen space.
• Win+Up: maximize a window.
• Ctrl+Win+Left or Ctrl+Win+Right: switch to the next virtual desktop.
• Ctrl+Win+D: create a new virtual desktop.

Task View is a fancier, more visual way to view all your open windows. It presents windows in a grid-like arrangement rather than the horizontal strip of Alt+Tab, and you can group together windows for a certain topic or task. The advantage is that you can have, say, 10 windows open but only 4 or 5 really visible at a time; the rest are hidden away on another virtual desktop and won’t show up on your taskbar or when you Alt+Tab. Of course, you can change that default behavior, too. Virtual Desktops are a great way to compartmentalize your activities so that you don’t get overloaded with unnecessary windows and can just focus on the task at hand. Virtual Desktops don’t work in Tablet Mode. Task View, however, works as normal and is in fact the default way of switching windows (you can’t really Alt+Tab on a touchscreen).

Here’s a fun little easter egg. Create a new folder on the desktop and name it exactly as below (noting the period after “GodMode”):

GodMode.{ED7BA470-8E54-465E-825C-99712043E01C}

Once you hit Enter, the folder will change its icon and you will be presented with a folder that has a smorgasbord of settings all laid out in a single list. These are practically the entire contents of Control Panel and then some, but not the new Settings app. It might be handy to get an overview of everything there is to find as far as settings go, but you might still be better off using Search.

Offline Maps – you can now download certain maps for use even when you’re not connected to the Internet. Very handy for traveling. Just be sure to mind your storage space, as they can eat up quite a lot.

Customize the Start Menu: use the Ctrl and Arrow keys to change the size of the menu.

For many more tips, look here:
http://www.thewindowsclub.com/windows-10-settings
http://allbestposts.com/windows-10-tips-and-tricks/

Happy Christmas

December 25th, 2016 by Stephen Jones No comments »

Christmas morning at Meadow Farm.

As we approach the year end seasonal greetings to all.

Powershell – why?

December 19th, 2016 by Stephen Jones No comments »

What is the point of PowerShell?

It handles any task that requires scripting and gives power back to the user, developer, or administrator. PowerShell is a tool that works at a very high level of abstraction, and thus can quickly provide a means of getting a task done by creating a chain of software tools, without resorting to writing a compiled application.

PowerShell is an extensible, open-source, cross-platform, object-oriented scripting language that uses .NET.

It can use COM, WMI, WS-Management and CIM to communicate with, and interact with, any Windows-based process.
It can execute scripts on either a local workstation or remotely.

It is ideal for automating all sorts of processes: simple enough to manage your workstation, yet robust enough to manage SQL Azure.

It will evolve to become the built-in batch processing system in future versions of Windows.

It is important as a configuration management tool and task automation tool, and is versatile enough to be used as a general-purpose programming language.

How did PowerShell come about?

Unix inherited the use of batches and scripts from its mainframe ancestors.
Scripting allowed UNIX to develop a group of specialized applications that each did one job and did it well.
Data could be passed into an application through its standard input, and the results passed to the standard output, which meant that data could be streamed like items on a conveyor belt.

It was like building repeatable processes out of Lego like blocks of code.
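That conveyor-belt idea can be sketched outside the shell as well. Here is an illustrative Python version (the stage names and sample log lines are invented), with each stage a small single-purpose tool that consumes and yields a stream of lines, so stages chain like `grep | sort`:

```python
# Each stage is a small tool that takes a stream of lines and yields a
# transformed stream -- the Unix pipe idea expressed as chained generators.
def grep(lines, needle):
    """Keep only lines containing `needle` (like the grep utility)."""
    return (line for line in lines if needle in line)

def sort_lines(lines):
    """Sort the incoming stream (like the sort utility)."""
    return iter(sorted(lines))

log = ["error: disk full", "ok: started", "error: timeout"]
result = list(sort_lines(grep(log, "error")))
```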

Scripting did more than encourage piped streams of data. It also encouraged batches and command-line configuration of machines and services. This made it easy to use Unix for servers, because all administration tasks could be scripted.

Scaling up to large groups of servers was smooth since everything was in place to allow it to happen.

Scripting also made the management of servers more precise and error-free. After the script was developed, no further work was needed. The script would do the same thing in the same order with the same result. It made operations work a lot easier.

Think Korn shell, with ideas from the Bash shell.

Why didn’t Windows have a powerful scripting language like Korn?

Unlike contemporary UNIX workstations, the first PCs had no pretensions to host server processes. They were low-specification, affordable personal computers that initially conquered the market previously occupied by dedicated word processors, before becoming ubiquitous with the invention of the spreadsheet.
They had the ability to run batches, but this was intended merely to ease the task of installing software. Scripting just seemed old-fashioned, or best left to DOS.

Microsoft DOS could and did run batches from the command processor, and autoexec.bat is still there in Windows (called AUTOEXEC.NT and located in the %SystemRoot%\system32 directory).

After MSDOS borrowed from the UNIX clone Xenix, this command processor took on some of the features of UNIX shells such as the pipe, but with limited functionality when compared to the UNIX shells.

Microsoft Windows was originally booted from the command processor and, in later editions, took over the tasks of the operating system while incorporating the old MSDOS command-line interface tool (shell).

The features of the batch were sufficient to allow it to do a lot of configuration, installation and software maintenance tasks. The system wasn’t encouraged or enhanced after Xenix was abandoned, but remained a powerful tool. Xenix’s replacement, Windows NT or WNT (add a letter to DEC’s VMS to guess its parent), did not have anything new for the command processor, and inherited MSDOS’s enhanced version from MSDOS 3.3.

This batch language still exists in the latest versions of Windows, though it is due to be deprecated. It has had quite a few enhancements over the years, but essentially what came into MSDOS is still the basis of what is currently shipped. It has a major failing in a Windows environment: it cannot be used to automate all facets of GUI functionality, since that demands at least COM automation and some way of representing data other than text.

There have been attempts to replace the DOS batch file technology, including VBS and Windows Script Host (1998), but PowerShell is by far the most effective replacement.

Why did it take so long to get PowerShell?

Maybe because Microsoft under Bill Gates retained the vision that BASIC should remain the core language for Windows scripting and administration …

Basic scripting driving COM automation

Today the Office applications still have underlying BASIC scripting that can be controlled via COM automation. To keep batches consistent with this, the tasks done by batch scripting were to be done by Visual Basic for Applications (VBA), supplied with the operating system to drive all automation tasks. Wang’s Office system, similarly, was automated and scripted via COBOL!

Language-driven development and divergence

Over time, each Office application developed a slightly different, incompatible dialect, and the dialects could not be kept in sync. Visual Basic was inadequate for the task and evolved into VB.NET, a somewhat comical dialect of Java, which proved unpopular. VBA was never quite consistent with the Visual Basic used for building applications.

Windows Script Host

Windows Script Host was introduced as an automation and administration tool, primarily for Visual Basic and JavaScript. It supported several interpreted languages, such as BASIC, Perl, Ruby, Tcl, JavaScript, Delphi and Python. Initially it had security loopholes, finally solved with digital signing in Windows XP. It is still installed with Windows and still provides a number of useful COM interfaces that can be accessed from PowerShell and any other application that can interact with COM.

Windows Script Host was designed before .NET, so it is not able to use the .NET library directly. It does use WMI, WS-Management and CIM for administration and monitoring, but it manages the platform through very low-level abstractions such as complex object models, schemas, and APIs. Although it was useful for systems programming, it was almost unusable for the typical small, simple and incremental task that is at the heart of administration, which needs very high levels of abstraction.

Microsoft competes in the server market

Microsoft was focused on the desktop market for a long time, so it perhaps did not realize the scale of the problem of competing in the server market. In the GUI-centric Microsoft culture and ecosystem, the idea was that all configuration was a point-and-click affair: fine for one or two servers, but not so easy or error-free for a server room.

PowerShell

Due to the determination and persuasive powers of Jeffrey Snover, Microsoft belatedly woke up to the fact that it didn’t have a viable solution for administering the many servers of a medium-sized company.
The GUI didn’t scale, and the batch system of the command line, though useful, was stuck in a mid-eighties time warp.

Microsoft had to replace the command line, so the replacement needed all the things that it and other interactive shells had, such as aliases, wildcard matching, running groups of commands, conditional running of groups of commands, and editing previous commands.
It also had to replace VBA, and to integrate easily with Windows Management Objects.
It had to take over the role of VBA embedded in applications to make automation easier.

Microsoft needed something that looked both backwards and forwards, i.e. an industry-standard shell backward compatible with the command line.

PowerShell started with the POSIX standard shell of IEEE Specification 1003.2, the Korn Shell, which is also available in Windows. However, this dealt only with strings, so it had to be altered to also deal with objects so that it could access WMI, WS-Management, CIM and COM. Because it needed so much connectivity and data interchange, it had to be able to use the .NET library to process NET objects and datatypes.

So the new system also needed to understand .NET, to take advantage of the man-years of work that went into providing a common object model that can describe itself and be manipulated without converting to or from text. The new scripting system had to be resolutely object-oriented.

So the new PowerShell needed the ability to use any .NET object or value.

PowerShell was given an intuitive naming convention based on verb-noun pairs, with simple conventions such as ‘Get’ to fetch an object and a noun describing the object.

To replace the command line, PowerShell had to be better. The whole point of a command shell is that it must be convenient to type short commands into it, as in Python’s REPL. PowerShell also needed to work with existing terse DOS command-line commands, so that an expert can type very truncated commands.

PowerShell was also to be used in scripts stored on disk and repeatedly invoked, with just a change in parameters. This also meant that it had to be easy to read, with intuitive commands and obvious program flow.

It wasn’t an easy compromise, but it was achieved by means of aliases. Aliases also helped to ‘transition’ users to PowerShell from the other shells they were using (for CMD.EXE: dir, type, copy and so on; for UNIX: ls, cat, cp and so on). You can even define your own aliases in PowerShell!
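You can inspect the built-in aliases, or add one of your own, with the alias cmdlets:

```powershell
Get-Alias dir                # dir -> Get-ChildItem (the CMD.EXE habit still works)
Get-Alias cat                # cat -> Get-Content  (and so does the UNIX one)
Set-Alias ll Get-ChildItem   # define an alias of your own
```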

PowerShell took an idea from .NET: everything should be learnable by discovery, without needing documentation. All the objects and cmdlets in PowerShell are self-documenting, in that you can use PowerShell itself to find out what they do, what methods can be called, and what parameters they take.
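Three cmdlets do most of this discovery work:

```powershell
Get-Command -Verb Get -Noun Service   # find cmdlets by verb and noun
Get-Help Get-Service -Examples        # built-in documentation, with usage examples
Get-Service | Get-Member              # properties and methods of the objects returned
```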

Why is the PowerShell Pipeline Important?
The pipeline in PowerShell inherited the concept of a pipe from UNIX. The PowerShell team had to solve the problem of dealing with Windows Management Objects and Instrumentation by passing objects rather than text down the pipe.

Having done so, it found itself in possession of a radical and extraordinarily useful system. It had the means of processing objects as though they were on a conveyor belt, with the means of selecting and manipulating each one as it passed down the pipeline.

This made the code easier to understand and also helped with memory management. A long file could be passed down a pipeline line by line, for example when searching for text, instead of having to read the entire file into memory (you can do that too if you want, and if you have no fear of the large object heap; you have to if, for example, you want to order the lines). It also meant you needed only one cmdlet for selecting things, one for sorting, one for grouping, and one for listing things out in a table. PowerShell could do a lot in a single line of code, far more than C# could.

Suddenly, the task of tackling the huge range of data on the average server that one might need to know about was less frightening. It was already there, and was now easy to get at and filter.
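Both points can be seen in a couple of pipelines (the log file name here is hypothetical):

```powershell
# Stream a large log file line by line rather than reading it all into memory.
Get-Content .\server.log | Where-Object { $_ -match 'ERROR' } | Select-Object -First 10

# The same generic cmdlets select, sort and tabulate any kind of object.
Get-Process |
    Where-Object { $_.WorkingSet64 -gt 100MB } |
    Sort-Object WorkingSet64 -Descending |
    Format-Table Name, Id, WorkingSet64
```

Because whole objects travel down the pipe, `Where-Object` and `Sort-Object` can filter and order on any property, with no text parsing in between.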

Why is PowerShell useful?
Scripts don’t require special components.

PowerShell now has all the power of a compiled .NET language. Automating processes with the Windows command line required many command files to determine settings and to configure systems, which meant that the developer often had to write components in a compiled language; part of the time spent developing scripts went on making these small commands. This isn’t necessary in PowerShell, thanks to .NET.

PowerShell simplifies the management of hierarchical data stores. Through its provider model, PowerShell lets you manage data stores such as the registry, or a group of SQL Servers using the same techniques of specifying and navigating paths that you already use to manage files and folders.
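A short sketch of the provider model in action; the registry path shown is just an example:

```powershell
Get-PSProvider                          # list the available providers (FileSystem, Registry, ...)
Set-Location HKLM:\SOFTWARE\Microsoft   # treat the registry as a drive and 'cd' into it
Get-ChildItem                           # 'dir' the registry keys as if they were folders
Set-Location Cert:\LocalMachine\My      # the certificate store is a drive too
```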

This doesn’t turn PowerShell into a rival to C#, VB-Net, ActionScript or F#.

It is not for developing applications but for automating administrative tasks.

It is theoretically possible to write a webserver in PowerShell, or an interactive GUI using Windows Presentation Foundation but that is not its purpose.

What is PowerShell’s main use?

Traditionally, the command line was used for complex deployments. PowerShell can work remotely on any computer in the domain, and gives far more information about the computer. It quickly became the default means of deployment for Windows.

This is great for the developer, who can build a package with NuGet and use Chocolatey to deploy it. Linux allows you to install a package with just one simple command; Chocolatey does the same, but also lets you update and uninstall the package just as easily. A simple script can grab the latest source from Git, compile it, install it and any dependencies, and do any special configuration tasks. There are 4,511 packages you can install from the Chocolatey site. PowerShell now has its own package manager, but the current incarnation isn’t as versatile as Chocolatey.
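The install-update-uninstall cycle is three symmetrical commands (assuming Chocolatey is installed; git is just an example package):

```powershell
choco install git -y     # install the package and its dependencies
choco upgrade git -y     # update it just as easily
choco uninstall git -y   # and remove it again
```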

Server Administration.

The release of PowerShell was most welcomed by the server teams.
The Microsoft Exchange Server team were early adopters and used PowerShell to allow the administration of Exchange.
The Microsoft SQL Server and Active Directory teams also followed suit.
These teams provided specialized cmdlets that covered all aspects of the administration of their servers.

Windows Server can now use Hyper-V to provide a ‘private cloud’, which gives companies a degree of ‘self-service’ for server resources – all driven and maintained by PowerShell.

Provisioning.

Provisioning is one of the areas where PowerShell excels.

PowerShell’s Desired State Configuration (DSC) feature allows a PowerShell script to specify the configuration of the machine being provisioned, using a declarative model in a simple, standard way that is easy to maintain and to understand.

It can either ‘push’ the configuration to the machine being provisioned, or get the machine to ‘pull’ the configuration.
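A minimal sketch of the declarative model; the node name and output path are illustrative:

```powershell
# Declare what the machine should look like, not the steps to get there.
Configuration WebServerConfig {
    Node 'Server01' {
        WindowsFeature IIS {
            Name   = 'Web-Server'
            Ensure = 'Present'   # the desired state of the feature
        }
    }
}
WebServerConfig -OutputPath C:\DSC           # compile the configuration to a MOF document
Start-DscConfiguration -Path C:\DSC -Wait    # 'push' it to the target node
```

In pull mode the same compiled configuration sits on a pull server, and each node fetches and applies it on a schedule.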

Chocolatey, a PowerShell script, can not only install a large range of software, but also update it or remove it.

PowerShell has a built-in system called ‘PackageManagement’ that isn’t so versatile, but which allows you to install packages from a wider variety of sources.

Use PowerShell within an application or website
As well as providing a scripting environment, PowerShell can be embedded into an application by using System.Management.Automation, so that the user of the application can extend it via scripts. You can even do this in ASP.NET.

Parallelism and workflow
Although PowerShell is an interpreted dynamic language (using .NET’s DLR), its performance is enhanced by its ability to run processes in parallel and asynchronously. It is also designed to run securely on other machines, remotely, and to pass data between them. All this is possible without Workflow.

Scripted processes that have complex interdependencies need to be interruptible and robust; that is what PowerShell workflow supports.

Workflow can be complicated, and will always be a niche technique for scripting, but it can now run complex workflows within a domain, making it possible to script even the most difficult business processes: those that contain long-running tasks which require persistence and need to survive restarts and interruptions.

PowerShell uses the Windows Workflow Foundation (WF) engine. A PowerShell workflow involves the PowerShell runtime compiling the script into Extensible Application Markup Language (XAML) and submitting this XAML document to the local computer’s Workflow Foundation engine for processing.

PowerShell Workflow scripting is particularly useful in high-availability environments for processes such as ETL (data Extraction, Transformation and Loading) that potentially require throttling and connection pooling, and it is ideal where data must come from a number of sources and be loaded in a certain order.
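An ETL-shaped workflow might be sketched like this; the workflow and activity contents are illustrative placeholders:

```powershell
workflow Invoke-NightlyEtl {
    parallel {
        InlineScript { 'extract from source A' }   # the two extracts can run side by side
        InlineScript { 'extract from source B' }
    }
    Checkpoint-Workflow                            # persist state so the job survives a restart
    InlineScript { 'transform and load in the required order' }
}
```

The `parallel` block and `Checkpoint-Workflow` are what distinguish a workflow from an ordinary function: the runtime compiles this to XAML for the Workflow Foundation engine, which handles the persistence and resumption.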

Azure ExpressRoute for fast, secure connection to Dynamics 365 for Operations

November 24th, 2016 by Stephen Jones No comments »

Azure ExpressRoute is now supported for Dynamics 365 for Operations deployments on Microsoft Azure.

ExpressRoute lets customers connect their on-premises infrastructure to Azure data centers using a dedicated, private connection that is highly available, highly reliable and low latency, supported by a published 99.95% financially-backed SLA.


This new Azure ExpressRoute offering addresses many of our concerns about connecting business applications to cloud services over the public Internet with minimal latency while ensuring data privacy and security. It removes concerns about inconsistent connectivity, local infrastructure, or high latency, especially in emerging global markets.

Once configured, in addition to connecting to Dynamics 365 for Operations, customers can also connect to a variety of other applications: Office 365, supported Azure services including virtual machines and cloud services deployed in virtual networks, Azure Websites, Azure Active Directory, Key Vault, Visual Studio Team Services, the IoT Hub, and the Cortana Intelligence Suite.

Extend your on-premises networks into the Microsoft cloud over a dedicated private connection facilitated by a connectivity provider. Connectivity can be from an any-to-any (IP VPN) network, a point-to-point Ethernet network, or a virtual cross-connection through a connectivity provider at a co-location facility.

Key benefits include:
• Layer 3 connectivity between an on-premises network and the Microsoft Cloud through a connectivity provider.
• Connectivity from an any-to-any (IPVPN) network, a point-to-point Ethernet connection, or a virtual cross-connection via an Ethernet exchange.
• Connectivity to Microsoft cloud services across all regions in the geopolitical region.
• Global connectivity to Microsoft services across all regions with ExpressRoute premium add-on.
• Dynamic routing between your network and Microsoft over industry standard protocols (BGP).
• Built-in redundancy in every peering location for higher reliability.
• Connection uptime SLA.
• QoS and support for multiple classes of service for special applications, such as Skype for Business.

You can create a connection between your on-premises network and the Microsoft cloud in three different ways:
Co-located at a cloud exchange
When you are co-located in a facility with a cloud exchange, order virtual cross-connections to the Microsoft cloud through the co-location provider’s Ethernet exchange. Co-location providers offer either Layer 2 cross-connections, or managed Layer 3 cross-connections between your infrastructure in the co-location facility and the Microsoft cloud.

Point-to-point Ethernet connections
You can connect your on-premises datacenters/offices to the Microsoft cloud through point-to-point Ethernet links. Point-to-point Ethernet providers can offer Layer 2 connections, or managed Layer 3 connections between your site and the Microsoft cloud.

Any-to-any (IPVPN) networks
Integrate your WAN with the Microsoft cloud. IPVPN providers (typically MPLS VPN) offer any-to-any connectivity between your branch offices and datacenters. The Microsoft cloud can be interconnected to your WAN to make it look just like any other branch office. WAN providers typically offer managed Layer 3 connectivity. ExpressRoute capabilities and features are all identical across all of the above connectivity models.

ExpressRoute supports the following features and capabilities:
Layer 3 connectivity
Microsoft uses industry standard dynamic routing protocol (BGP) to exchange routes between your on-premises network, your instances in Azure, and Microsoft public addresses. Establish multiple BGP sessions with your network for different traffic profiles.

Redundancy
Each ExpressRoute circuit consists of two connections to two Microsoft Enterprise Edge routers (MSEEs) from the connectivity provider or your network edge. Microsoft requires a dual BGP connection from the connectivity provider or your side – one to each MSEE. You can choose not to deploy redundant devices or Ethernet circuits at your end; however, connectivity providers use redundant devices to ensure that your connections are handed off to Microsoft in a redundant manner. A redundant Layer 3 connectivity configuration is a requirement for the SLA to be valid.

Connectivity to Microsoft cloud services
ExpressRoute provides private network connectivity to Microsoft cloud services. Infrastructure and platform services running in Azure often benefit from it, because it addresses network architecture and performance considerations; we therefore recommend that enterprises use ExpressRoute for Azure.

Software as a Service offerings, like Office 365 and Dynamics 365, were created to be accessed securely and reliably via the Internet. Therefore, we only recommend ExpressRoute for these applications in specific scenarios.

Important
Using ExpressRoute to access Azure is recommended for all enterprises.

ExpressRoute connections enable access to the following services:
• Microsoft Azure services
• Microsoft Office 365 services
• Microsoft CRM Online services

Connectivity to all regions within a geopolitical region
Connect to Microsoft in a peering location and you have access to all regions within the geopolitical region.
For example, when connected to Microsoft in Amsterdam through ExpressRoute, you will have access to all Microsoft cloud services hosted in Northern and Western Europe.

Global connectivity with ExpressRoute premium add-on
Enable the ExpressRoute premium add-on feature to extend connectivity across geopolitical boundaries. For example, when you are connected to Microsoft in Amsterdam through ExpressRoute, you will have access to all Microsoft cloud services hosted in all regions across the world (national clouds are excluded). You can access services deployed in South America or Australia the same way you access the North and West Europe regions.

Connectivity to national clouds
Microsoft operates isolated cloud environments for special geopolitical regions and customer segments.

Supported bandwidth options
You can purchase ExpressRoute circuits for a wide range of bandwidths:
• 50 Mbps
• 100 Mbps
• 200 Mbps
• 500 Mbps
• 1 Gbps
• 2 Gbps
• 5 Gbps
• 10 Gbps

You can increase the ExpressRoute circuit bandwidth (on a best-effort basis) without having to tear down your connections.

Flexible billing models
You can pick the billing model that works best for you:
• Unlimited data. The ExpressRoute circuit is charged a monthly fee, and all inbound and outbound data transfer is included free of charge.
• Metered data. The ExpressRoute circuit is charged a monthly fee. All inbound data transfer is free of charge; outbound data transfer is charged per GB, with rates varying by region.
• ExpressRoute premium add-on. An add-on over the ExpressRoute circuit, providing: increased route limits for Azure public and Azure private peering, from 4,000 routes to 10,000 routes; global connectivity, so that a circuit created in any region (excluding national clouds) has access to resources in any other region in the world – for example, a virtual network created in West Europe can be accessed through an ExpressRoute circuit provisioned in Silicon Valley; and an increased number of VNet links per circuit, from 10 to a larger limit depending on the bandwidth of the circuit.

Dynamics 365 – Customer Service at its best

November 23rd, 2016 by Stephen Jones No comments »

Take a tour of the Service area in Dynamics 365 and step through a few of the most common activities, like creating a case, finding similar cases to draw solutions from, and adding notes so that everything said and done is tracked. All the details are stored in Dynamics 365, where everyone on your team can access the relevant information.