The original SQL Server Konferenz. February 19-21, 2019 in Darmstadt

SQL SERVER KONFERENZ 2019
FROM 19 TO 21 FEB 2019
KONGRESSCENTER
DARMSTADT, GERMANY


PreCon Workshops & MainCon Agenda

PreCon

All PreCon workshops are confirmed!
Please select your favourite workshop during the registration process. Your choice is binding.

MainCon

There they are: The best speakers with sessions on a huge variety of topics at different levels. Have a look. They are awesome!

  • PreCon Workshop english
    19.02. | 10.00 - approx. 17.00
    Level 300 Advanced Data Protection: Security and Privacy in SQL Server Thomas LaRock & Karen Lopez

    Modern database systems have introduced more support for security, privacy, and compliance over the last few years. We expect this to increase as compliance issues such as GDPR and other data compliance challenges arise.
    In this advanced workshop, we cover data security and privacy protection for SQL Server and Azure SQL Database. With demonstrations and several exercises, this workshop uses group labs to cover database and data protection techniques, including threat analysis and remediation. We'll look at the new features, why you should consider them, and where they work and where they don't.
    Discussion topics will include:

    • Data Categorization and Classification
    • Data Catalog and discovery
    • Encryption; Transparent Data Encryption, Always Encrypted
    • Data Masking
    • Row-level Security
    • Proactive monitoring
    • Data Governance
    • Threat Detection
    • Vulnerability Assessment
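
    As a small taste of the masking topic above, here is a toy Python sketch of the idea behind dynamic data masking: values are transformed on read while the stored data stays untouched. It is modeled loosely on SQL Server's built-in email() and partial() masking functions, not on their actual implementation.

```python
def mask_email(email: str) -> str:
    """Toy analogue of SQL Server's email() mask: expose the first
    character and replace the rest with a constant X pattern."""
    if not email:
        return email
    return email[0] + "XXX@XXXX.com"

def mask_partial(value: str, prefix: int, padding: str, suffix: int) -> str:
    """Toy analogue of the partial() mask: keep a prefix and a suffix,
    replace the middle with a constant padding string."""
    if len(value) <= prefix + suffix:
        return padding
    return value[:prefix] + padding + value[len(value) - suffix:]
```

The point of the sketch is only the read-side transformation; the real feature also handles permissions, so privileged users see the unmasked values.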

    Attendees will leave this session with an understanding of the following:

    • How to find and assess data assets, using modern tools and techniques
    • How to identify and tag sensitive data
    • How to perform cost, benefit, and risk analysis on threats and solutions
    • How to deploy features in the right location to best protect data
    The day will include a lecture-style format as well as interactive discussions and exercises.

    Attendee prerequisites: Hands-on experience with SQL Server (any version) and Azure SQL Database. Basic understanding of database design concepts. Familiarity with basic Azure and Data Platform features. Laptops are required to participate in the hands-on labs.

  • PreCon Workshop english
    19.02. | 10.00 - approx. 17.00
    Level 200 Loading and transforming data in Power BI Chris Webb

    Anyone who is serious about building BI and reporting solutions using Power BI needs to know how to load and transform data. This precon will provide an introduction to the functionality found in the Power Query Editor screen in Power BI and the M language that it uses behind the scenes; it will also look at dataflows in the Power BI Service. You will learn how to use it to connect to multiple data sources, how to shape your data appropriately for Power BI, how to write M code and lots more.

    This precon will not explicitly cover how the same functionality can be used in Excel’s Get&Transform feature, Analysis Services Tabular 2017 or Azure Analysis Services, although most of the content will be relevant to those platforms too.

    Some basic knowledge of Power BI is required for this session, but no previous experience of Power BI’s data loading functionality will be assumed. This precon is suitable for BI consultants, business analysts and Power BI report developers.

    TOPICS COVERED:

    • The basic concepts of loading data in the Power Query Editor, including queries and steps
    • A guided tour of the data sources that you can connect to
    • Transforming data using the Power Query Editor user interface
    • Working with multiple queries: duplicating, referencing, merging and appending
    • Introduction to the M language for writing expressions and queries
    • Creating and using parameters
    • Creating functions from parameterised queries and by writing M code, and using them to apply the same business logic to multiple data sources
    • Understanding data privacy levels and making sure queries can be refreshed after they have been published
    • Creating custom data connectors in M
    • Working with dataflows
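
    The "queries and steps" concept above can be pictured as a pipeline of transformations, where each step takes the previous step's table and returns a new one. The following is a rough Python analogy (the real thing is written in M inside the Power Query Editor); the sample data is invented for illustration.

```python
# A "query" as an ordered sequence of steps over a small sample table.
source = [
    {"Product": "Apples", "Sales": 100},
    {"Product": "Pears", "Sales": None},
    {"Product": "Apples", "Sales": 50},
]

# Step 1: remove rows with missing values (a "Removed Errors"-style step).
removed_nulls = [row for row in source if row["Sales"] is not None]

# Step 2: group and aggregate (a "Grouped Rows"-style step).
grouped = {}
for row in removed_nulls:
    grouped[row["Product"]] = grouped.get(row["Product"], 0) + row["Sales"]

# Step 3: sort (a "Sorted Rows"-style step). Each step's output feeds the next.
result = sorted(grouped.items())
```

In Power Query every such step is recorded by name and can be edited or reordered later, which is what makes the pipeline reproducible on refresh.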
  • PreCon Workshop german
    19.02. | 10.00 - approx. 17.00
    200-300 Azure Black Magic for Consultants & Sellers with Flavor: Data & SQL Patrick Heyde

    Azure Black Magic is a training series for technical and sales people that explores how designing architectures and operations processes changes the Azure way. The session starts with an overview of design elements in the area of data processing and SQL Server tasks. After that, you receive a task/lab: design something "….", drawn from common everyday SQL and data tasks, and we define multiple options for building a solution, both the traditional way and the Azure way. By comparing the solutions, we analyze each architecture from technical, business, and cost perspectives.

    Topics of the course:

    • Azure data-related services such as Virtual Machines, Azure SQL DB, Cosmos DB, Table Storage, etc.
    • Azure storage design & stamps
    • Azure performance planning & rules
    • Rethinking business operations processes such as backup strategies, …
    • Rethinking network, storage, and hosting options with Azure
    • Azure cost prediction for an architecture
    • Building an Azure operations concept

    Requirements:

    • Every attendee needs an Azure subscription
    • Required Azure resources for the labs: max. 10 €
  • PreCon Workshop english
    19.02. | 10.00 - approx. 17.00
    Level 300 Power BI Administration in a Day Deep Dive Kay Unkroth

    Administering a Power BI environment is a complex task that requires a deep understanding of the fundamental concepts and technologies of self-service BI and enterprise BI. The purpose of this deep dive is to help you make key decisions regarding the configuration of your Power BI tenant and provisioning of licenses and resources. It also covers automating repetitive tasks by using PowerShell cmdlets and highlights potential issues you may encounter in your role as a Power BI administrator. Best practices and suggestions are offered when possible.

    The target audience for this deep dive are technology professionals. Some knowledge of Power BI and general BI concepts is assumed.

  • PreCon Workshop german
    19.02. | 10.00 - approx. 17.00
    200-300 Cutting Edge: AI and Intelligent Data Processing with Azure IoT in the Cloud and on the Device Constantin "Kostja" Klein & Marcel Tilly

    The Internet of Things brings together artificial intelligence, cloud usage, and edge computing. In this context, data processing in locations with extreme environmental conditions, poor connectivity, or special security requirements is becoming the norm. As a result, solutions with real-time requirements, and the exclusive processing of data in the cloud, become more difficult, more expensive, or in part impossible.
    With Azure IoT Hub and Azure IoT Edge, Microsoft addresses these challenges and enables data processing at - or closer to - the point of origin.
    This pre-con session offers an introduction to the technologies and services Microsoft provides and thus serves as a jumpstart for anyone interested in how intelligent, distributed data processing can be implemented on the Microsoft Azure platform.

    Attendees will gain knowledge of the following topics in this pre-con:

    • Azure IoT Hub and Azure IoT Edge
    • IoT Edge on the Raspberry Pi 3 as an IoT Edge device (setup and connection to IoT Hub and the Azure Container Registry)
    • Introduction to Docker and Docker containers
    • From a simple data stream to a Cognitive Service (specifically Custom Vision) on the device
    • Deploying an (AI) module to an IoT Edge device
  • Welcome german
    20.02. | 09.15 - 09.30
    Track 1 Welcome to SQL Server Konferenz 2019
  • Keynote german
    20.02. | 09.30 - 10.00
    Track 1 Back to the SQL future – with Marty McKröhnert & Doc Buchta Jens Kröhnert alias "Marty McKröhnert" & Hilmar Buchta alias "Doc Buchta"

    30 years of SQL Server, 20 years of ORAYLIS, 15 years of PASS Deutschland: the perfect time for a retrospective!
    Marty McKröhnert and Doc Buchta take you on an exciting journey through the history of SQL Server and of data analysis with Microsoft BI. When did which technology first see the light of day? What has long since vanished from the scene again? And where is the journey headed in the future? The talk is spiced with plenty of anecdotes from the day-to-day projects of a leading full-service provider for business intelligence, data analytics, and artificial intelligence. Look forward to a colorful start to SQL Server Konferenz 2019!

  • Keynote german
    20.02. | 10.00 - 11.00
    Track 1 Monsters of Data Platform tba

  • BREAK
    20.02. | 11.00 - 11.15
      Short break // Changing Rooms
  • DEV-OPS english
    20.02. | 11.15 - 12.15
    Track 1
    Level 200
    DevOps for the DBA Grant Fritchey

    Far too many people responsible for production data management systems are reluctant to embrace DevOps. The concepts behind DevOps can appear to be contrary to many of the established best practices for securing, maintaining and operating a reliable database. However, there is nothing inherent to a well-designed DevOps process that would preclude ensuring that the information stored within your data management system is completely protected. This session will examine the various methods and approaches available to the data professional to both embrace a DevOps approach to building, deploying, maintaining and managing their databases and protect those databases just as well as they ever have been. We will explore practices and plans that can be pursued using a variety of tooling and processes to provide DevOps methodologies to the systems under your control. You can embrace DevOps and protect your data.

  • BI english
    20.02. | 11.15 - 12.15
    Track 2
    Level 200
    Checking in with Power Apps, Flow and Power BI Ásgeir Gunnarsson

    In this session we will create a Power App that allows users to check in with their location. We will then create a Flow that takes that location, writes it to a Power BI data source, and refreshes it. Finally, we will create a Power BI report that displays the data on a map.
    Power Apps is a great tool that allows you to create a desktop or mobile app with minimal coding. The app we create in this session uses the Bing location services to get the user's location when a button is pressed. Microsoft Flow is similarly a tool that allows you to create data flows and logic with minimal coding. The flow we create in this session takes the location and user information and writes it to an Excel file. We will then create a custom connector in Flow that allows us to refresh a Power BI dataset. This means the data will be visible in the Power BI report almost as soon as the user presses the check-in button. Power BI is a self-service reporting tool that allows you to connect to multiple data sources and mash up data into a beautiful report or dashboard. In the report we create in this session, we will connect to an Excel file containing the location information and display it in a report, including the location on a map.
    The audience will take away useful information about Power Apps, Flow, and Power BI, including all the code created during this demo.

  • DBA english
    20.02. | 11.15 - 12.15
    Track 3
    Level 200
    Beware of the Dark Side - A Guided Tour of Oracle for the SQL DBA David Postlethwaite

    Today, SQL Server DBAs are more than likely to come across Oracle and Oracle DBAs at some point in their careers. To the unwary this can be very daunting; at first glance, Oracle can look completely different, with few obvious similarities to SQL Server.
    This talk sets out to explain some of the terminology, the differences and the similarities between Oracle and SQL Server and hopefully make Oracle not look quite so intimidating.

    At the end of this session you will have a better understanding of Oracle and the differences between the Oracle RDBMS and SQL Server.
    Although you won’t be ready to be an Oracle DBA it will give you a foundation to build on.

  • BIG DATA & ANALYTICS german
    20.02. | 11.15 - 12.15
    Track 4
    Level 300
    Azure Data Explorer, Formerly Known as KUSTO! Markus Raatz

    Microsoft is practically famous for having very similar tasks solved multiple times, by different teams in different ways. A good example is the Azure service KUSTO, released at the end of 2018, which a marketing department gone slightly mad saddled with the name "Azure Data Explorer". This is a true Big Data technology that can manage structured and semi-structured data and search it within seconds! Data can arrive both in batches and in real time via streaming!
    In this demo-rich session we take a look at what Data Explorer does particularly well, and what it doesn't.

  • AZURE german
    20.02. | 11.15 - 12.15
    Track 5
    Level 300
    Keeping an Eye on Costs: The Azure Billing API Benjamin Kettner

    Cloud adoption continues to advance and raises new questions along the way. How expensive is my service? Exactly which costs does which user incur? Which service can be charged to which user, and how? A flat rate, or usage-based cost allocation to the business departments? The most important question is: which tools do I have to answer these questions? I will introduce the Billing API and use a practical example to show how this interface can help keep track of cost development.
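
    To illustrate the kind of analysis the session describes, here is a minimal Python sketch that aggregates per-resource-group costs from usage-detail records. The records are invented sample data, shaped only loosely like what you would get back after calling and parsing the Billing API; fetching and authentication are out of scope here.

```python
from collections import defaultdict

# Hypothetical usage-detail records, as if already fetched and parsed
# from the Azure Billing/Consumption APIs.
usage = [
    {"resourceGroup": "rg-bi",  "meter": "SQL DB",  "cost": 12.40},
    {"resourceGroup": "rg-bi",  "meter": "Storage", "cost": 1.10},
    {"resourceGroup": "rg-web", "meter": "App Svc", "cost": 7.25},
]

# Roll costs up per resource group, the typical chargeback dimension.
cost_per_group = defaultdict(float)
for record in usage:
    cost_per_group[record["resourceGroup"]] += record["cost"]
```

The same roll-up could just as well key on a user or cost-center tag, which is how usage-based allocation to business departments is usually done.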

  • AZURE german
    20.02. | 11.15 - 12.15
    Track 6
    Level 400
    Databricks meets Power BI Marcel Franke & Gerhard Brückl

    Databricks and Spark are growing ever more popular and are used today as a modern data platform for analyzing real-time or batch data. Databricks also offers great integration for machine learning developers.
    Power BI, on the other hand, is a great platform for simple graphical data analysis and offers great options for bringing together hundreds of different data sources, analyzing them jointly, and making them accessible on all kinds of devices.
    So let's simply bring both worlds together and see how well Databricks works with Power BI.

  • BREAK
    20.02. | 12.15 - 13.30
      Lunch // Partner Exhibition
  • AZURE german
    20.02. | 13.30 - 14.30
    Track 1
    Level 300
    Security in Focus: SQL Server and Azure SQL Database Andreas Wolter

    This session focuses on new features and capabilities that support compliance and security in SQL Server on-premises as well as Azure SQL Database. It presents the new static data masking, new authentication features, improvements to Vulnerability Assessment and Threat Detection, as well as the new technology behind Always Encrypted. If you want to stay up to date on the latest developments in SQL Server security, this is the right place.

  • BI english
    20.02. | 13.30 - 14.30
    Track 2
    Level 300
    Get into the Flow Sandra Geisler

    Microsoft Flow is an easy-to-use workflow engine which helps to create automated processes by connecting different services and applications with each other. In this talk I will provide an insight into the basics of Microsoft Flow and show the great possibilities of plugging business processes, such as notification or signature workflows, together. In particular, the combination of Microsoft Flow, Power BI, and Stream Analytics will be presented through a practical example involving the audience.

  • AZURE german
    20.02. | 13.30 - 14.30
    Track 3
    Level 200
    How to Get Started with Azure Managed Instance Björn Peters

    This session covers how to get started with Azure Managed Instances and what you need to know beforehand.
    I will show and explain how to steer around all the cliffs and make sure everything works smoothly.
    It is about the features, why Managed Instances might save you costs, and how they can make your admin's life much easier. I will show how to deploy, connect, and migrate your application data.
    After this session, you'll understand this great database service a bit better, know how to deploy it via the portal and PowerShell, and understand why it is a step ahead.

    • Highlights of Azure Managed Instances
    • Special requirements (network, data migration)
    • Deployment via the portal and PowerShell (order of required actions)
    • How to connect
    • How to migrate data into Managed Instances
  • DEV-OPS english
    20.02. | 13.30 - 14.30
    Track 4
    Level 200
    The Gremlins are attacking the Azure Cosmos DB Cédric Charlier

    Have you got some data structures that don't play well with relational concepts? Data as it appears in the real world is naturally connected. Traditional data modeling focuses on entities and does not perfectly render this highly connected world. For many applications, there's a need to model both entities and relationships naturally. Azure Cosmos DB, with its Gremlin Graph API, supports this property graph model. During this session, you'll learn the best use-cases and the great benefits of graph models, and also how Azure Cosmos DB supports them. You'll receive a good explanation of what TinkerPop is and why it is so important in the world of graph databases. The Gremlin language will be explained from the basics to some more advanced concerns, in the form of recipes for common use-cases of graph databases!
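
    The flavor of a Gremlin traversal such as g.V().has('name', 'alice').out('knows') can be sketched conceptually in plain Python. This is only an analogy of the property graph model with invented sample data, not the Gremlin language or the Cosmos DB API itself.

```python
# A tiny property graph: vertices with properties, labeled directed edges.
vertices = {
    "v1": {"label": "person", "name": "alice"},
    "v2": {"label": "person", "name": "bob"},
    "v3": {"label": "person", "name": "carol"},
}
edges = [
    ("v1", "knows", "v2"),
    ("v2", "knows", "v3"),
]

def out(vertex_id, edge_label):
    """Follow outgoing edges with a given label, like Gremlin's out() step."""
    return [dst for src, lbl, dst in edges if src == vertex_id and lbl == edge_label]

# Analogue of g.V().has("name", "alice") followed by out("knows") steps.
start = next(vid for vid, props in vertices.items() if props["name"] == "alice")
friends = [vertices[v]["name"] for v in out(start, "knows")]
friends_of_friends = [vertices[w]["name"] for v in out(start, "knows") for w in out(v, "knows")]
```

Chaining such hops is exactly where graph models shine: each extra "knows" hop is one more step in the traversal, not another self-join.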

  • IM english
    20.02. | 13.30 - 14.30
    Track 5
    Level 200
    Connecting SQL and Blockchain: The Azure Blockchain Workbench Christoph Seck

    Hype as it is, a valid business case is currently the most critical point for a Blockchain project. So better check it out with a PoC, ideally a fast one, before you start burning money. But getting even a simple PoC for a Blockchain project running is still hard. Connecting Blockchain data to classical databases and making it available to analytics or standard applications is worse (ever queried LevelDB?). Microsoft's new "Azure Blockchain Workbench" lets you do this in hours, not months. So (besides wondering why one needs this hype stuff at all) we will see how to

    • Deploy a Blockchain on Azure in a breeze
    • Generate templates for smart contracts
    • Connect them to IoT devices
    • Interact with the contracts via a simple UI
    • Get all the data into a "real" database automatically
  • BI german
    20.02. | 13.30 - 14.30
    Track 6
    Level 100
    101 BI and AI Alexander Klein

    How can AI make my BI application more intelligent? Everyone is talking about AI at the moment, but what can you actually do with it today? There are tools such as Cognitive Services, Azure ML, or the Azure Bot Framework that can help prepare or enrich data in the classic ETL process. Examples include analyzing large data streams from the IoT space, improving demand planning for a call center, or analyzing social media for trends.
    If your BI application is already in the cloud, Azure Data Factory, Logic Apps, or Azure Stream Analytics would be the right components for extending it with AI.

  • BREAK
    20.02. | 14.30 - 14.45
      Short break // Changing Rooms
  • BIG DATA & ANALYTICS german
    20.02. | 14.45 - 15.45
    Track 1
    Level 300
    Case Study: Find the Evil Bot! Fast Web Log File Analysis with HDInsight IA Jens Kröhnert

    The talk at a glance:

    • Basics of Hadoop and HDInsight
    • HDInsight IA and what's new
    • E-recruiting with GermanPersonnel
    • Demo: analyzing web log files at GermanPersonnel
    • Further usage scenarios, best practices, and tools
  • AZURE german
    20.02. | 14.45 - 15.45
    Track 2
    Level 300
    Data Flows in Azure Data Factory - Finally Transformations Here Too! Stefan Kirner

    Data Factory has long been available in the Microsoft Azure cloud for efficiently copying bulk data between many kinds of data stores. This is complemented by the ability to model dependencies between multiple processing steps and, since version 2, by a control flow for loops and conditions and for executing Integration Services packages. Recently, visually designed data flows can also be used within Data Factory for targeted enrichment, correction, and transformation of data!
    After a general overview of ADF, the talk presents this and other brand-new features of the platform-as-a-service offering, along with important tips from real-world practice.

  • DEV-OPS english
    20.02. | 14.45 - 15.45
    Track 3
    Level: 300
    Query Optimizer Torsten Strauss

    Although the Query Optimizer is one of the core engines of SQL Server, little is known about its internals.
    In this session, we will dive into the different trees and optimization phases of query optimization. We will also come to understand the limits of the query engine.
    This knowledge will take your query tuning to the next level.

  • BI english
    20.02. | 14.45 - 15.45
    Track 4
    Level 200
    Administering Power BI in the enterprise Kay Unkroth

    Microsoft Power BI administration is the management of a Power BI tenant, including the configuration of tenant settings, usage monitoring, and provisioning of licenses and other organizational resources. The job is to make business users productive and to ensure security and compliance with laws and regulations. This session covers the typical admin tasks and tools, such as the Power BI admin portal and the Office 365 admin center, and how to automate them by using administrative APIs and PowerShell cmdlets.
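
    As a hint of what such API automation looks like, the sketch below builds (but does not send) a request against the Power BI admin REST endpoint for listing workspaces. The URL follows the documented admin API pattern; the token is a placeholder, since acquiring an Azure AD access token is out of scope here.

```python
import urllib.request

# Placeholder: in practice you obtain this token from Azure AD first.
TOKEN = "<azure-ad-access-token>"

# Build the request for the admin workspaces listing; $top is required
# by this endpoint to bound the result size.
req = urllib.request.Request(
    "https://api.powerbi.com/v1.0/myorg/admin/groups?$top=100",
    headers={"Authorization": f"Bearer {TOKEN}"},
)
# urllib.request.urlopen(req) would perform the call; we only build it here.
```

The equivalent PowerShell route uses the MicrosoftPowerBIMgmt cmdlets, which wrap these same REST endpoints.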

  • IM german
    20.02. | 14.45 - 15.45
    Track 5
    Level 300
    Today I'm Moving to the Cloud... and Biml Is My Moving Van Ben Weissman

    You have an on-premises SSIS solution and want to move it to the cloud? Perhaps step by step rather than with a big bang? Then let's look together at how this can work with Biml!

  • DEV-OPS german
    20.02. | 14.45 - 15.45
    Track 6
    Level 300
    Azure Data Studio - the new Kid in Town Frank Geisler

    At PASS Summit 2017, Microsoft unveiled a new tool for managing and developing SQL Server: SQL Operations Studio.
    In my demo-packed session I will show how to use SQL Operations Studio and what is new compared to SQL Server Management Studio (SSMS) and SQL Server Data Tools (SSDT). I will also show what options the tool offers for developing SQL Server solutions and administering SQL Server databases. Another important point is that SQL Operations Studio can be extended with your own extensions, which I will cover as well.
    By the way: one of its biggest advantages is that SQL Operations Studio is available not only for Windows but also for Mac and Linux.

  • BREAK
    20.02. | 15.45 - 16.15
      Coffee break // Partner Exhibition
  • DBA english
    20.02. | 16.15 - 17.15
    Track 1
    Level 300
    Partition Magic Kalen Delaney

    There are many reasons for partitioning your data and indexes in SQL Server, and one of them is that moving data in and out of a partition can be more efficient than any other type of data movement. This is because the way SQL Server keeps track of the internal storage of partitioned data allows data to be moved as a metadata-only operation. In this session we'll look at the metadata for table, index, and partition storage to explore exactly what happens when a partition is moved. Looking at the internals of partition storage will also allow us to understand the reasons for some of the restrictions on how and when partitions can be moved, and to design our data movement processes much more reliably.
    In this session we’ll look at:

    • How partitioning works
    • How your partitioned data is organized
    • The metadata behind your partitions
    • What really happens when you switch, split, and merge partitions
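
    The "metadata-only" nature of a partition switch can be pictured with a toy Python model: only the reference to the allocation unit moves, never the rows themselves. This is a conceptual analogy with invented names, not SQL Server's actual metadata structures.

```python
# Each table maps partition names to the allocation units it "owns".
table_partitions = {"Sales2018": ["alloc_unit_17"], "Sales2019": ["alloc_unit_42"]}
staging = {"SalesArchive": []}

def switch_partition(source, src_name, target, tgt_name):
    """Move the *reference* to the allocation unit, not the data.

    However many rows live in the allocation unit, this is the same
    constant-time bookkeeping change -- which is why SWITCH is so fast.
    """
    target[tgt_name].extend(source[src_name])
    source[src_name] = []

switch_partition(table_partitions, "Sales2018", staging, "SalesArchive")
```

It also hints at why the restrictions exist: both sides must agree on structure (same filegroup, compatible schema and constraints) for a pure ownership change to be safe.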
  • BI english
    20.02. | 16.15 - 17.15
    Track 2
    Level 300
    Sharing methods in Power BI: how to fulfill your audience Andrea Martorana Tusa

    You like Power BI. You think it's a great suite of tools for data analytics, modeling, and reporting. Now you'd like to adopt it in your organization as the standard reporting tool. And here we are with the first questions. How many times have you been asked: "Can you share this report/dashboard with me?"; "Can we distribute our work to other users?"; "Shall we pay for it? Can we have licenses for free?".
    To make things worse, the licensing model is constantly evolving, bringing more confusion to end-users. When should you use sharing? What is an App workspace? And Power BI Embedded? How can you manage permissions to reports and dashboards? Is it possible to send reports via e-mail through a subscription?
    Come to this session, if you want to dispel any doubt about the sharing methods in Power BI. We’ll give a clear and complete overview of all the collaborative features in Power BI, helping you to choose the solution that best fits your needs.

  • AZURE english
    20.02. | 16.15 - 17.15
    Track 3
    Level 200
    SQL Agent In The Cloud Sam Cogan

    Azure's platform-as-a-service database offering, SQL Azure, is a compelling solution for many who don't want to manage their own highly available SQL implementation. SQL Azure, however, does not replicate all of the services of on-premises SQL Server, and one of those missing is the SQL Agent.
    This session looks at what alternatives exist for running and managing SQL jobs in Azure without SQL Agent. In particular, we focus on Elastic Database Jobs, Azure Automation and Azure Functions. The presentation includes a brief overview of these services and how they are applicable to SQL workloads, followed by demos creating and running SQL jobs.
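
    What all of these alternatives provide is, at heart, "run this job on a schedule". The following toy Python scheduler is merely a conceptual stand-in for that idea; a real alternative such as Elastic Database Jobs or an Azure Automation runbook would submit T-SQL batches to the database instead of appending to a list. Job names are invented for illustration.

```python
import sched
import time

executed = []

def run_job(name):
    # Stand-in for submitting a T-SQL batch to the target database.
    executed.append(name)

# Schedule two "jobs" with tiny delays; an agent-style service does the
# same thing on cron-like schedules, plus retries, logging, and alerting.
scheduler = sched.scheduler(time.time, time.sleep)
scheduler.enter(0.0, 1, run_job, argument=("nightly-index-maintenance",))
scheduler.enter(0.1, 1, run_job, argument=("purge-old-rows",))
scheduler.run()
```

The value of the managed services is everything around this loop: durable schedules, credentials, retry policies, and fan-out across many databases.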

  • AZURE german
    20.02. | 16.15 - 17.15
    Track 4
    Level 300
    Caution, Hard Hats Required! Data Processing on the Construction Site with Azure IoT Edge Constantin "Kostja" Klein & Marcel Tilly

    In the context of the Internet of Things, data processing in locations with harsh environmental conditions, poor connectivity, or special security requirements is becoming the norm. As a result, solutions with real-time requirements, and the exclusive processing of data in the cloud, become more difficult, more expensive, or in part impossible. With Azure IoT Edge, Microsoft addresses these challenges and enables data processing at - or closer to - the point of origin. This makes it possible to run AI models directly on the edge device, e.g. based on Microsoft Cognitive Services, or to apply data stream analysis with Azure Stream Analytics. In this session we take a closer look at the Azure IoT Edge platform and climb down into the excavation pit to demonstrate live how we take safety on our construction site to the next level. This is IoT with rough edges - IoT Edge!

  • DEV-OPS german
    20.02. | 16.15 - 17.15
    Track 5
    200-300
    SQL Server in a Container Frank Geisler

    In his one-hour talk, Data Platform MVP Frank Geisler explains the advantages of the Docker container technology for developing SQL Server-based applications and shows how to quickly and easily assemble SQL Server development environments as Docker images. Thanks to their smaller footprint compared to virtual machines, the central availability of prebuilt images, and their platform independence, Docker containers are ideal as a local development platform. Beyond that, they are also an excellent fit for distributing and deploying applications.
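
    For reference, starting a SQL Server developer container follows Microsoft's published Docker quickstart. The sketch below merely assembles that command as a Python argument list; the password is a placeholder you must replace, and nothing is executed here.

```python
import subprocess  # run with subprocess.run(docker_cmd) on a Docker host

# Image name and environment variables per Microsoft's Docker quickstart
# for SQL Server 2017; SA_PASSWORD below is a placeholder.
docker_cmd = [
    "docker", "run",
    "-e", "ACCEPT_EULA=Y",
    "-e", "SA_PASSWORD=<YourStrong!Passw0rd>",
    "-p", "1433:1433",
    "--name", "sql-dev",
    "-d", "mcr.microsoft.com/mssql/server:2017-latest",
]
```

Once the container is up, any regular SQL client can connect to localhost,1433 as sa, which is exactly what makes this attractive as a throwaway development environment.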

  • BI german
    20.02. | 16.15 - 17.15
    Track 6
    Level 200
    The BIERout Talk, or: Resource Scaling with SSIS Scale Out, Illustrated with Beer Nora Herentrey & Stefan Grigat, powered by ORAYLIS

    What do data and beer have in common? In this talk, quite a lot. Two recognized experts in both fields present usage scenarios and best practices for resource scaling with SSIS Scale Out in the context of nightly data loading, and draw a vivid comparison: how do I get through one (or more?) crates of beer within a single night to protect the precious commodity from spoiling? The audience is invited to lend a hand with resource scaling during beer processing.

  • BREAK
    20.02. | 17.15 - 17.30
      Short break // Changing Rooms
  • AZURE english
    20.02. | 17.30 - 18.30
    Track 1
    100 - 200
    5 Critical Considerations When Moving to the Cloud Kevin Kline

    Migrating an existing on-premises SQL Server application to the cloud can be a daunting task that consists of many complicated steps. In this session, we will focus on what are perhaps the MOST important steps of the process – those that lead up to the actual move to the cloud. We will dive deep into the most critical considerations for moving your data and databases into the cloud, whether you use Microsoft Azure, Amazon, or another cloud provider.
    We will cover:

    • The importance of cleansing your data before a major migration
    • Fully documenting your data sources and metadata
    • Choosing from many different tools and techniques to actually move data
    • How to test data for fidelity
    • How to maintain parallel systems during transitional phases

    You will see demos and overviews of native features within SQL Server that enable you to move data, as well as powerful tools from the SentryOne product set that can make data migrations painless. Take part in this session to learn the best practices of moving to the cloud from one of the industry's top experts.

  • BI german
    20.02. | 17.30 - 18.30
    Track 2
    Level 400
    Use advanced Power BI techniques for text mining Thomas Martens

    In an age where we are overloaded with information, it becomes essential to identify what to read first. For this reason, a Power BI app helps to analyze all your emails using methods from text mining and text analysis. Using just the Power BI tool stack, this solution makes email analysis versatile and easy to use.
    This session provides a short introduction to text mining and text analysis and demonstrates how certain concepts can be implemented using Power BI. Throughout this session, some advanced techniques from the whole Power BI stack are applied, ranging from Power Query to data modeling topics to advanced DAX usage; even Cognitive Services are incorporated into this analytical pipeline. Techniques are used to filter out stop words from the email body and to create a whitelist to easily discover emails that contain content of special interest.
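
    The two techniques mentioned, stop-word filtering and whitelist matching, can be sketched in a few lines of Python for clarity. The session itself builds them with the Power BI stack (Power Query and DAX); the word lists here are invented examples.

```python
# Tiny example word lists; a real solution would use much larger ones.
STOP_WORDS = {"the", "a", "an", "and", "or", "to", "of"}
WHITELIST = {"invoice", "deadline"}

def tokenize(body):
    """Split an email body into lowercase words, stripping punctuation."""
    return [w.strip(".,!?:;").lower() for w in body.split()]

def remove_stop_words(body):
    """Drop common filler words so only content-bearing words remain."""
    return [w for w in tokenize(body) if w and w not in STOP_WORDS]

def is_interesting(body):
    """Flag an email whose body contains any whitelisted word."""
    return any(w in WHITELIST for w in tokenize(body))
```

In the Power BI version, the whitelist match would typically become a calculated flag used to filter or highlight emails in the report.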

  • DBA german
    20.02. | 17.30 - 18.30
    Track 3
    Level 300
    SQL Server 201x in the Pull of Virtualization - It Can Perform Well, Too Bodo Michael Danitz

    These days everything is virtualized like there's no tomorrow - with VMware as the dominant player.
    Either, as originally intended, to make better use of existing hardware or, increasingly popular, because everything else is already virtualized anyway.
    SQL Server is not spared either. Virtualized SQL Servers serve everything from enterprise menus to ERP systems, and from the latter one certainly expects top performance.
    How do you achieve that? Depending on which whitepaper you happen to be reading, you will find the most diverse configurations, with occasionally unexpected effects.
    This session shares dos and don'ts from the field. Keywords: vStorage, vCPU, vMemory, vHyperthreading, vHigh Availability, ...

  • AZURE german
    20.02. | 17.30 - 18.30
    Track 4
    Level 200
    Azure - Disruptive Geschäftsmodelle & IT Patrick Heyde

    Azure, from the design, planning, and implementation of a disruptive business model. Where do you start with IT? What do you do differently, and where do you change your approach in order to build out further disruptive capabilities?

    Topics:

    • Building new customer business
    • What makes my business model disruptive?
    • Characteristics of disruptive business models
  • DBA german
    20.02. | 17.30 - 18.30
    Track 5
    Level 200 - 300
    Der moderne DBA in einer heterogenen Datenbankwelt: Performance- & Verfügbarkeitsüberwachung Markus Schröder, powered by Quest

    Monitoring database performance and availability has always been an essential part of daily administration. Alongside classic systems such as SQL Server, open-source solutions and NoSQL databases are establishing themselves on the market and gaining importance in enterprise infrastructures. The growing number of databases a DBA is responsible for, combined with their different architectures, makes centralized, unified monitoring of performance and availability metrics indispensable. We discuss the problems and possible solutions that can take some of the load off DBAs.

  • BIGDATA & ANALYTICS english
    20.02. | 17.30 - 18.30
    Track 6
    Level 200
    Robust & Scalable Big Data Pipelines with a Spark-based Framework on HDInsight/Databricks Hitesh Sahni & Prashanth Sayeenathan, powered by Adastra

    Big data integration pipelines are one of the critical components of the data infrastructure of modern enterprises. Building your data pipelines for big data processing on Apache Spark has become a viable choice for many organisations, as it not only helps dramatically reduce costs but also facilitates agile and iterative data discovery between legacy systems and big data sources.
    In this session, we present the feature-rich & flexible ADASTRA Framework for Big Data Integration that enables you to build robust, scalable and reliable Data Lakes either on the Cloud (HDInsight/Databricks) or on-prem. We will talk about the benefits of a Framework-based approach gained through valuable experience from successful customer projects.
    Come and learn how ADASTRA has successfully built one of the largest Data Lakes in Germany with this Framework.

  • Power Hour
    20.02. | 18.30 - 19.30
      Data Nerds Power Hour & smooth slide into party mode
  • BREAK
    20.02. | 19.00 - 1.00 am
      Get-Together // Anniversary Party
  • IM english
    21.02. | 09.30 - 10.30
    Track 1
    Level 200
    Data Integration through Data Virtualization: New SQL Server 2019 Features Cathrine Wilhelmsen

    Data virtualization is an alternative to Extract, Transform and Load (ETL) processes. It handles the complexity of integrating different data sources and formats without requiring you to replicate or move the data itself. Save time, minimize effort, and eliminate duplicate data by creating a virtual data layer using PolyBase in SQL Server.
    In this session, we will first go through fundamental PolyBase concepts such as external data sources and external tables. Then, we will look at the PolyBase improvements in SQL Server 2019. Finally, we will create a virtual data layer that accesses and integrates both structured and unstructured data from different sources. Along the way, we will cover lessons learned, best practices, and known limitations.
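    As a rough illustration of the fundamental concepts this session covers, a PolyBase virtual data layer boils down to an external data source plus external tables defined over it. The sketch below uses hypothetical names, servers, and locations (not taken from the session material) and assumes PolyBase is installed and a database-scoped credential named OracleCred already exists:

    ```sql
    -- External data source pointing at a (hypothetical) Oracle server.
    CREATE EXTERNAL DATA SOURCE OracleSales
    WITH (LOCATION = 'oracle://oracleserver:1521', CREDENTIAL = OracleCred);

    -- External table mapped onto a remote table; no data is copied.
    CREATE EXTERNAL TABLE dbo.RemoteOrders
    (
        OrderID INT,
        Amount  DECIMAL(10, 2)
    )
    WITH (LOCATION = '[ORCL].[SALES].[ORDERS]', DATA_SOURCE = OracleSales);

    -- The external table is then queried like a local one.
    SELECT TOP (10) * FROM dbo.RemoteOrders;
    ```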

  • BI english
    21.02. | 09.30 - 10.30
    Track 2
    Level 200
    Power BI Report Server and Office Online Server: modernize your on-premises BI approach Isabelle Van Campenhoudt

    In this session we will explore all the possibilities offered by Power BI Report Server:
    What are the differences from classic SSRS? What is the difference from the Power BI Service?
    What kinds of data sources can you use? How can you manage refreshes?
    What infrastructure do you need to make it work? How do you manage authentication from the data source to the report?
    In the second part we will extend the capabilities of PBIRS by making it work with Office Online Server, giving you the possibility to deploy server-based Excel reports and analytics.
    I will also explain the pitfalls we encountered during a one-year project around it.

  • DEV-OPS english
    21.02. | 09.30 - 10.30
    Track 3
    Level 200
    Improve your Database Performance in Seven Simple Steps Hugo Kornelis

    You wrote the code, you tested it, it works, and it’s fast. So you deploy. And then those pesky users insist on entering not hundreds, not thousands, but millions of rows – and suddenly, you have performance problems.
    What to do? Blaming SQL Server is a good start, but won’t solve the problems. You can of course hire a database consultant to make all your performance problems (and all your money) disappear - but why not first take a look yourself?
    This session will show you seven simple things that might alleviate most of your database-related performance problems. Use these tricks at your workplace, and you can be the hero of the department!

  • BIGDATA & ANALYTICS german
    21.02. | 09.30 - 10.30
    Track 5
    Level 100
    Einstieg in Machine Learning für Datenbankentwickler Sascha Dittmann

    As a database developer, have you ever wondered how you can extend your database projects with machine learning technologies?
    How can you reuse your existing knowledge, and what do you still need to learn?
    In this session, Sascha Dittmann presents several learning paths that let database developers dive into the world of data science. For his hands-on examples he uses a variety of tools, such as SQL Server ML Services, Azure Databricks, and Azure ML Services, combining familiar knowledge with the new.

  • BIG DATA & ANALYTICS german
    21.02. | 09.30 - 10.30
    Track 6
    Level 300
    Big Data as a Service – Cloud based Big Data Plattformen Guido Jacobs

    Whether you are running an Industry 4.0/IoT project or want to implement a data lake concept, the data explosion stops for no one. If you don't want to be overwhelmed by this flood of data, you should attend this talk. Guido Jacobs shows you how to build a big data platform as a managed service with Microsoft Azure. In a big data scenario especially, a cloud solution is the most effective option, since no up-front investments are required. You will be surprised how you can assemble a modular solution from the available Azure services and extend it with additional open-source projects.

  • BREAK
    21.02. | 10.30 - 11.00
      Coffee break // Partner Exhibition
  • DEV-OPS german
    21.02. | 11.00 - 12.00
    Track 1
    Level 300
    Intelligent Query Processing in SQL Server 2019 Milos Radivojevic

    SQL Server 2017 started with query processing improvements called Adaptive Query Processing. Now, in SQL Server 2019 CTP2, there are additional improvements, and all of them are packed in a feature with the most promising name - Intelligent Query Processing.
    The intention of these improvements is to fix poor performing queries due to wrong cardinality estimations and other sub-optimal plan decisions, and hereby enhance query performance with almost no code changes.
    This session will cover briefly all these features: Batch and Row Mode Memory Grant Feedback, Batch Mode Adaptive Join and Interleaved Execution, Table Variable Deferred Compilation, and Approximate Query Processing.
    Better cardinality estimations should bring better plans, so some questions inevitably arise: do you still need to tune your queries? Does an intelligent query processor really need your help? Does it also solve parameter sniffing issues?
    The session will address these questions and also suggest how much improvement you should expect in real workloads from this very promising set of features.
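    For orientation, one of the listed features, Approximate Query Processing, can be tried with a single function. A minimal sketch (the table and column names below are made up for illustration):

    ```sql
    -- APPROX_COUNT_DISTINCT trades a small, bounded error for far lower
    -- memory usage than an exact COUNT(DISTINCT ...) on large tables.
    SELECT APPROX_COUNT_DISTINCT(UserID) AS ApproxUniqueUsers
    FROM dbo.PageViews;
    ```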

  • IM english
    21.02. | 11.00 - 12.00
    Track 2
    Level 200
    Leveraging Data Value with Azure Data Catalog Karen Lopez

    Do you know all the data resources in your enterprise? Do you know what data they contain? How do end users find and use data? Do you know where all your personally identifiable data is located? How do you identify sensitive data?
    Microsoft Azure Data Catalog is a fully managed cloud service that serves as a system of registration and system of discovery for enterprise data sources. In other words, Azure Data Catalog is all about helping people discover, understand, and use data sources, and helping organizations to get more value from their existing data.
    Learn how to:

    • Register Data Sources
    • Discover Data Sources
    • Annotate and Define Data
    • See Sample Data
    • Profile Data
    • Set Up an Enterprise Data Glossary

    Plus take away 10 tips to make your enterprise data more valuable for everyone.

  • BIGDATA & ANALYTICS english
    21.02. | 11.00 - 12.00
    Track 3
    Level 400
    Comparing Predictive Mining Models from R, Python, SSAS, and Azure ML Dejan Sarka

    There is a lot of overlap in the Microsoft BI suite. For advanced analytics, like data mining, you can use SQL Server Analysis Services (SSAS), R, Python, or Azure ML. The question arises which tool to use. The answer is simple, through another question: why not all of them?
    In data mining, you typically create multiple predictive models for the same task and then evaluate them to select the best one. So why not use different tools for different models? You will learn how to evaluate predictive models. Then you will see how to bring all of the mining models together and compare them no matter which source they come from. You will see how you can use SQL Server Integration Services, Excel, and other tools for this task.

  • DBA english
    21.02. | 11.00 - 12.00
    Track 5
    Level 300
    Becoming a hipster DBA, a guide to Github and CI/CD for Admins André Kamman

    Securing your code in source control and testing it automatically with a continuous integration tool is not something developers need to be convinced of. But DBAs have traditionally not used any of these techniques, and it's also not that straightforward for them. Checking the outcome of a typical DBA script usually requires infrastructure to be available in a certain state, plus tests that check the changes made to that infrastructure. This session combines an introduction-level talk about Git(Hub) and continuous integration with the complexity of infrastructure testing, aimed at DBAs.

  • BI english
    21.02. | 11.00 - 12.00
    Track 6
    Level 200
    Let your data flow - Introducing Power BI dataflows Wolfgang Strasser

    Power BI serves as a self-service BI platform with a strong focus on data preparation and interactive analysis. With the introduction of Power BI dataflows, self-service data preparation is brought to a new level.

    The main concepts used are:

    • Usage of common and mature technologies: data is stored as entities following the Common Data Model in Azure Data Lake Storage Gen2
    • Integration: dataflows are created and managed in a Power BI app workspace
    • Self-service and low-code/no-code: Power Query is used as the data preparation engine
    • Connectivity: dataflows will support a variety of different data sources (including cloud-based and on-premises sources)

    Join this session if you would like to learn more about the basic concepts and especially see Power BI dataflows in action.

  • BREAK
    21.02. | 12.00 - 13.00
      Lunch // Partner Exhibition
  • DEV-OPS english
    21.02. | 13.00 - 14.00
    Track 1
    Level 300
    Persistence is Futile - Implementing Delayed Durability in SQL Server Mark Broadbent

    The concurrency model of most relational database systems is defined by the ACID properties, but as they aim for ever-increasing transactional throughput, those rules are bent, ignored, or even broken.
    In this session, we will investigate how SQL Server implements transactional durability in order to understand how Delayed Durability bends the rules to remove transactional bottlenecks and achieve improved throughput. We will take a look at how this can be used to complement In-Memory OLTP performance, and how it might impact or compromise other things.
    Attend this session and you will be assimilated!
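    As a pointer to what the feature looks like in practice, delayed durability is enabled at the database level and can then be requested per transaction. A minimal sketch with hypothetical database and table names:

    ```sql
    -- ALLOWED lets individual transactions opt in; FORCED applies it to all.
    ALTER DATABASE SalesDB SET DELAYED_DURABILITY = ALLOWED;

    BEGIN TRANSACTION;
    INSERT INTO dbo.EventLog (Message) VALUES (N'fast, lazily hardened write');
    -- The commit returns before the log records are flushed to disk.
    COMMIT TRANSACTION WITH (DELAYED_DURABILITY = ON);
    ```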

  • BI english
    21.02. | 13.00 - 14.00
    Track 2
    Level 400
    Working with web services in Power BI and M Chris Webb

    The need to use web services as data sources in Power BI is becoming more and more common. However, for all but the simplest scenarios, this involves writing M code and may even require writing a custom data connector. In this session you'll learn how to use the Web.Contents M function to call a web service and make GET and POST requests, pass XML and JSON values, handle errors, and deal with issues such as caching. You'll also find out about the limitations of the On-Premises Gateway and scheduled refresh when using web services as data sources. Finally, you'll learn how to handle authentication, and also handle OAuth2 authentication in Power BI custom data connectors.

  • BI english
    21.02. | 13.00 - 14.00
    Track 3
    Level 200
    Introduction to Power BI dataflows Matthew Roche

    This fall Microsoft introduced Power BI dataflows, a powerful new feature for self-service data preparation powered by Azure Data Lake Storage gen2. Dataflows bring the familiar Power Query data prep experience to the Power BI service, and enable analysts and business users to define reusable data entities that can be shared and reused across workspaces and apps.
    In this session, Power BI program manager Matthew Roche will introduce this exciting new capability, including an end-to-end demonstration of the Power BI dataflows experience. If you haven’t yet started to explore dataflows – or if you want to build on what you already know – this is the session for you.

  • BIGDATA & ANALYTICS english
    21.02. | 13.00 - 14.00
    Track 6
    Level 200
    Real-World Data Movement and Orchestration Patterns using Azure Data Factory V2 Jason Horner

    In this session, we will start with an overview of Azure Data Factory V2 concepts, then show you how you can use metadata to quickly build scalable serverless pipelines to move data from disparate data sources including On-Premises and Platform As A Service. Next, we will look at how to integrate the solution using continuous integration and deployment techniques. Finally, we will look at how to schedule, monitor and log our solution.
    Whether you are just getting started with Azure Data Factory or looking to make your current data factory robust and enterprise-ready, this session will take you to the next level.

  • BREAK
    21.02. | 14.00 - 14.15
      Short break // Changing Rooms
  • DBA english
    21.02. | 14.15 - 15.15
    Track 1
    Level 300
    SQL Server Audit and Threat Detection Thomas LaRock

    Beginning with SQL Server 2016 SP1, all editions of SQL Server support SQL Server Audit, a feature that allows administrators to capture and track activity at both the database and server instance level. Built upon Extended Events, SQL Server Audit logs the details of an event each time the audited action happens.
    Attend this interactive session for an overview of SQL Server Audit, learn how to create and configure audits, and discuss the best ways to consume, centralize, and analyze audit logs.
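    To give a flavor of what "create and configure audits" looks like, here is a minimal server audit sketch; the audit names and file path are hypothetical placeholders:

    ```sql
    -- Define where audit records are written.
    CREATE SERVER AUDIT LoginAudit
    TO FILE (FILEPATH = N'C:\AuditLogs\');

    -- Choose which actions to capture, here failed logins.
    CREATE SERVER AUDIT SPECIFICATION LoginAuditSpec
    FOR SERVER AUDIT LoginAudit
    ADD (FAILED_LOGIN_GROUP);

    ALTER SERVER AUDIT LoginAudit WITH (STATE = ON);
    ALTER SERVER AUDIT SPECIFICATION LoginAuditSpec WITH (STATE = ON);

    -- Consume the captured events from the audit files.
    SELECT event_time, action_id, server_principal_name
    FROM sys.fn_get_audit_file(N'C:\AuditLogs\*', DEFAULT, DEFAULT);
    ```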

  • IM english
    21.02. | 14.15 - 15.15
    Track 2
    Level 300
    Data Quality matters! Go for a quality roundtrip in the MS data platform - Update 2019 Tillman Eitelberg & Oliver Engels

    In the times of data explosion, a dazzling array of information streams and unbelievable possibilities of different techniques to manage data via the Microsoft Data Platform, this session asks the question: what about data curation?
    We dive into the requirements of modern curation and show in examples, what the Microsoft Data Platform has to offer: Can SQL Server Master Data Service (MDS), Data Quality Services (DQS) and Integration Service (SSIS) help?
    Or are they outdated? What about Azure Data Catalog, and how does the Microsoft Common Data Model come into play? We show, in a demo-driven presentation, what the Microsoft Data Platform has to offer and how to combine it with open-source projects, and give you hints about what is on the roadmap. At the end of this session you will have an overview and understanding of the possibilities the Microsoft Data Platform offers for addressing your data curation requirements.

  • BI deutsch
    21.02. | 14.15 - 15.15
    Track 3
    Level 200
    Nutoka, Nuspli oder Nutella? Nur eine Frage des Geschmacks? Volker Hinz

    Who hasn't faced the decision of choosing the right hazelnut spread? Many companies and users now feel the same way about Microsoft Power BI.
    Basic, Pro, or Premium? What is behind the individual editions, and when does which one make sense? What does the new Premium version offer, and is it really that expensive? We will answer these questions in this session and, with the help of demos, give an overview of the new Premium service with paginated reports (from SSRS) and other new functionality.

  • DBA english
    21.02. | 14.15 - 15.15
    Track 5
    Level 200
    Terraform for Beginners John Martin

    Whether you are a developer or a DBA, managing infrastructure as code is becoming more common, but what if you need to manage hybrid or multi-cloud deployments? Having one tool to do this can simplify management, and this is where Terraform comes in.
    Together we will look at what Terraform is and how we can use it to simplify managing our infrastructure needs alongside our application code. Join me as I talk through how we can go from defining a single VM to building production-ready infrastructure with the open-source tool Terraform.

  • BI german
    21.02. | 14.15 - 15.15
    Track 6
    Level 300
    Continuous Integration für Analysis Services Gabi Münster + Special guest: Benjamin Kettner

    In the BI field especially, continuous integration and continuous deployment are still badly neglected. Why is that?
    Besides the lack of support from software vendors, there are other factors, which we will work out together in this session.
    But what solutions are available today? And why could Azure play an important role here?

  • BREAK
    21.02. | 15.15 - 15.45
      Coffee break // Partner Exhibition
  • BIG DATA & ANALYTICS german
    21.02. | 15.45 - 16.45
    Track 1
    Level 200
    Wie Runtastic von der Integration von R in die Microsoft Data Platform profitiert Markus Ehrenmüller-Jensen

    Runtastic helps people around the world live longer, healthier lives, and our approach to doing so is data-driven. Collecting data is no problem these days, but gaining useful insights from it can be challenging. The integration of R, both as a language and as a service, into Microsoft's data platform therefore came at just the right time for us. In this talk, learn where this integration has already taken place and how we at Runtastic benefit from it.

  • BI german
    21.02. | 15.45 - 16.45
    Track 2
    Level 200
    JSON-Formate in Power BI meistern Imke Feldmann

    More and more data is made available in JSON format these days, and many web services in turn expect their parameters in JSON format. This session presents numerous tips and tricks for handling JSON formats simply and efficiently, including functions for automating these procedures.

  • AZURE english
    21.02. | 15.45 - 16.45
    Track 3
    Level 200 - 300
    Azure SQL Database - Lessons learned from the trenches Jose Manuel Jurado Diaz

    In this session you will learn best practices, tips, and tricks for successfully using Azure SQL Database in production environments. You will learn how to monitor and improve Azure SQL Database query performance. I will cover how Microsoft CSS has been using Query Store, Extended Events, and DMVs to help customers monitor and improve query response times when running their databases in the Microsoft Azure cloud. These learnings are the fruit of Microsoft CSS support cases and customer field engagements. This session includes several demos.

  • IM german
    21.02. | 15.45 - 16.45
    Track 5
    Level 300
    SQL Server 2019 und SharePoint 2019 - SharePoint das Fliegen lernen Lars Platzdasch

    A best-practice session on the interplay between SQL Server 2019 and SharePoint 2019.
    We start with a bit of history.

    SQL Server configuration in SharePoint,
    SQL Server basics and best practices,
    SharePoint topology, and the provisioning of SharePoint and SQL Server.
    Best practices, pitfalls, and HA/DR solutions are discussed.

  • DBA english
    21.02. | 15.45 - 16.45
    Track 6
    Level 200
    Managing Always On Availability Groups with PowerShell Marcos Freccia

    As a Database Administrator, once you have successfully configured Availability Groups, managing them becomes an important part of your daily job. One of the challenges is to guarantee that uncontained objects such as SQL Server Agent jobs, logins, credentials, Database Mail profiles, and Reporting Services subscription jobs are replicated across replicas. You will also need to automate the restore of databases that take part in an Availability Group. PowerShell comes to the rescue! It isn't going away, and it is worth learning to make your life as a SQL Server professional easier. So why not use what PowerShell has to offer and take advantage of this great tool to automate your work? In this session, you will see real-world examples of how I manage several Always On Availability Group installations with simple scripts that, with a single command, do all the work for you and guarantee that your environment stays compliant. Instead of focusing on syntax, the session will present scenarios that every DBA faces in their company.

    Prerequisites: The attendee should know how Always On Availability Groups work and how they are configured. Some knowledge of PowerShell is essential.

  • Farewell
    21.02. | 16.45 - 17.15
      Partner & PASS Raffle