
Table of Contents

Get started with Universal Windows Platform


What's a UWP app?
Guide to Universal Windows Platform apps
Get set up
Enable your device for development
Sign up
Your first app
Create a "Hello, world" app (C#)
Create a "Hello, world" app (JS)
Create a "Hello, world" app (C++)
A 2D UWP game in JavaScript
A 3D UWP game in JavaScript
A UWP game in MonoGame
Plan your app
What's next?
Get UWP app samples
Design & UI
Layout
Intro to app UI design
Navigation basics
Command basics
Content basics
Screen sizes and breakpoints
Define page layouts with XAML
Layout panels
Alignment, margins, and padding
Creating app layouts with Grid and StackPanel
Style
Color
Icons
Motion
Sound
Typography
Styling controls
Controls and patterns
Intro
Index of controls by function
App bar and command bar
Auto-suggest box
Buttons
Check box
Date and time
Dialogs and flyouts
Flip view
Hub
Hyperlinks
Images and image brushes
Inking controls
Lists
Master/details
Media playback
Menus and context menus
Nav pane
Progress
Radio button
Scrolling and panning controls
Search
Semantic zoom
Slider
Split view
Tabs and pivot
Text
Tiles, badges, and notifications
Toggle
Tooltip
Web view
Inputs and devices
Input primer
Device primer
Usability
Accessibility
App settings
Globalization and localization
Guidelines for app help
Microsoft Design Language
Downloads
Develop Windows apps
App-to-app communication
Share data
Receive data
Copy and paste
Drag and drop
Audio, video, and camera
Camera
Media playback
Detect faces in images or videos
Custom video effects
Custom audio effects
Media compositions and editing
Audio device information properties
Create, edit, and save bitmap images
Transcode media files
Process media files in the background
Audio graphs
MIDI
Import media from a device
Camera-independent Flashlight
Supported codecs
Contacts and calendar
Select contacts
Send email
Send an SMS message
Manage appointments
Connect your app to actions on a contact card
Data access
Entity framework Core with SQLite for C# apps
SQLite databases
Data binding
Data binding overview
Data binding in depth
Sample data on the design surface, and for prototyping
Bind hierarchical data and create a master/details view
Debugging, testing, and performance
Deploying and debugging UWP apps
Testing and debugging tools for PLM
Test with the Microsoft Emulator for Windows 10
Test Surface Hub apps using Visual Studio
Beta testing
Windows Device Portal
Windows App Certification Kit
Performance
Version adaptive code
Develop UWP education apps
Take a Test API
Devices, sensors, and power
Enable device capabilities
Enable usermode access
Enumerate devices
Pair devices
Point of service
Sensors
Bluetooth
Printing and scanning
3D printing
NFC
Get battery information
Enterprise
Windows Information Protection (WIP)
Enterprise shared storage
Files, folders, and libraries
Enumerate and query files and folders
Create, write, and read a file
Get file properties
Open files and folders with a picker
Save a file with a picker
Accessing HomeGroup content
Determining availability of Microsoft OneDrive files
Files and folders in the Music, Pictures, and Videos libraries
Track recently used files and folders
Access the SD card
File access permissions
Games and DirectX
Windows 10 game development guide
Planning
UWP programming
DirectX programming
Game porting guides
Game development videos
Direct3D Graphics Learning Guide
Coordinate systems and geometry
Vertex and index buffers
Devices
Lighting
Depth and stencil buffers
Textures
Graphics pipeline
Views
Compute pipeline
Resources
Streaming resources
Appendices
Graphics and animation
Draw shapes
Use brushes
Animations overview
Transforms overview
Visual layer
Hosted Web Apps
Launching, resuming, and background tasks
App lifecycle
Launch an app with a URI
Launch an app through file activation
Reserved file and URI scheme names
Auto-launching with AutoPlay
Use app services
Support your app with background tasks
Connected apps and devices (Project "Rome")
Splash screens
Maps and location
Request authentication key
Display maps
Map control
Display POI
Display routes and directions
Overlay tiled images
Perform geocoding
Get current location
Design guidance for location-aware apps
Set up a geofence
Design guidance for geofencing
Monetization, engagement, and Store services
In-app purchases and trials
Microsoft Store Services SDK
Run app experiments with A/B testing
Launch Feedback Hub from your app
Configure your app for targeted push notifications
Log custom events for Dev Center
Display ads in your app
Windows Store services
Create a Retail Demo Experience (RDX) app
Networking and web services
Networking basics
Which networking technology?
Network communications in the background
Sockets
WebSockets
HttpClient
RSS/Atom feeds
Background transfers
Packaging apps
Package a UWP app with Visual Studio
Manual app packaging
Install apps with the WinAppDeployCmd.exe tool
App capability declarations
Set up automated builds for your UWP app
Download and install package updates for your app
Porting apps to Windows 10
Move from Windows Phone Silverlight to UWP
Move from Windows Runtime 8.x to UWP
Desktop to UWP Bridge
Windows apps concept mapping for Android and iOS developers
Move from iOS to UWP
Hosted Web Apps
Security
Intro to secure Windows app development
Authentication and user identity
Cryptography
Threading and async programming
Asynchronous programming (UWP apps)
Asynchronous programming in C++ (UWP apps)
Best practices for using the thread pool
Call asynchronous APIs in C# or Visual Basic
Create a periodic work item
Submit a work item to the thread pool
Use a timer to submit a work item
UWP on Xbox One
Getting started
What's new
Xbox best practices
Known issues
FAQ
Xbox One Developer Mode activation
Tools
Development environment setup
System resource allocation
Introduction to multi-user applications
Samples
Bringing existing games to Xbox
Disabling developer mode on Xbox One
API reference
Windows Runtime components
Creating Windows Runtime components in C++
Walkthrough: Creating a basic Windows Runtime component in C++ and calling it
from JavaScript or C#
Creating Windows Runtime components in C# and Visual Basic
Walkthrough: Creating a Simple Windows Runtime component and calling it from
JavaScript
Raising events in Windows Runtime components
Brokered Windows Runtime components for side-loaded Windows Store apps
XAML platform
XAML overview
Dependency properties overview
Custom dependency properties
Attached properties overview
Custom attached properties
Events and routed events overview
API reference
App development for Windows as a service
Choose a UWP version
Services
What's new in Windows 10 for developers
Prerelease APIs in Windows 10 preview builds
What's new in Windows 10, version 1607
What's new in Windows 10, version 1607 Preview
What's new in Windows 10, version 1511
What's new in Windows 10, version 1507
Windows 8.x guides
Windows Phone Silverlight 8.x guides
Publish Windows apps
Using the Windows Dev Center dashboard
Account types, locations, and fees
Opening a developer account
Managing your account settings and profile info
How your app appears in the Store for Windows 10 customers
Trademark and copyright protection
Dev Center Insider Program
Manage account users
Set custom permissions for account users
Create your app by reserving a name
App submissions
Set app pricing and availability
Enter app properties
Age ratings
Upload app packages
Create app Store listings
Notes for certification
The app certification process
Package flights
Gradual package rollout
Beta testing and targeted distribution
Distribute LOB apps to enterprises
Add-on submissions
Set your add-on product ID and product type
Enter add-on properties
Set add-on pricing and availability
Create add-on Store listings
Manage add-ons in bulk
Monetize with ads
About affiliate ads
App management and services
Use map services
View app identity details
Manage app names
Generate preinstall packages for OEMs
Analytics
Acquisitions report
Add-on acquisitions report
Installs report
Usage report
Health report
Ratings report
Reviews report
Feedback report
Channels and conversions report
Ad mediation report
Advertising performance report
Affiliates performance report
Promote your app report
Download analytics reports
Create customer groups
Create customer segments
App promotion and customer engagement
Create an ad campaign for your app
Create a custom app promotion campaign
Send targeted push notifications to your app's customers
Generate promotional codes
Put apps and add-ons on sale
Respond to customer feedback
Respond to customer reviews
App marketing guidelines
Link to your app
Make your app easier to promote
Getting paid
Setting up your payout account and tax forms
Payout thresholds, methods, and timeframes
Payout summary
Tax details for paid apps
Understand IRS tax forms
Mobile operator billing
VAT info
Store Policies and Code of Conduct
Windows Store Policies
App quality
Developer Code of Conduct
What's a Universal Windows Platform (UWP) app?

The Universal Windows Platform (UWP) is the app platform for Windows 10. You can develop apps for UWP with
just one API set, one app package, and one store to reach all Windows 10 devices: PC, tablet, phone, Xbox,
HoloLens, Surface Hub, and more. It's easier to support a number of screen sizes and a variety of interaction
models, whether touch, mouse and keyboard, a game controller, or a pen. At the core of UWP apps is the idea
that users want their experiences to be mobile across all their devices, and they want to use whatever device is
most convenient or productive for the task at hand.
UWP is also flexible: you don't have to use C# and XAML if you don't want to. Do you like developing in Unity or
MonoGame? Prefer JavaScript? Not a problem; use them all you want. Have a C++ desktop app that you want to
extend with UWP features and sell in the store? That's okay, too.
The bottom line: you can spend your time working with familiar programming languages, frameworks, and APIs, all
in a single project, and have the very same code run on the huge range of Windows hardware that exists today. Once
you've written your UWP app, you can publish it to the Store for the world to see.

So, what exactly is a UWP app?


What makes a UWP app special? Here are some of the characteristics that make UWP apps on Windows 10
different.
Apps target device families, not an OS
A device family, like Xbox or PC, identifies the APIs, system characteristics, and behaviors that you can use in
your app. Users with that kind of device can buy your app from the store.
Apps are packaged using the .AppX packaging format and distributed from the Store.
All UWP apps are distributed as an AppX package. This provides a trustworthy installation mechanism and
ensures that your apps can be deployed and updated seamlessly.
There's one store for all devices.
After you register as an app developer, you can submit your app to the store and make it available on all
device families, or only those you choose. You submit and manage all your apps for Windows devices in
one place.
There's a common API surface across device families.
The Universal Windows Platform (UWP) core APIs are the same for all Windows device families. If your app
uses only the core APIs, it will run on any Windows 10 device.
Extension SDKs make your app light up on specialized devices.
Extension SDKs add specialized APIs for each device family. If your app is intended for a particular device
family, like HoloLens, you can add HoloLens features in addition to the normal UWP core APIs. Your app has
a single app package that runs on all devices, but you can check what device family your app is running on
before calling an extension API for HoloLens (a sketch of such a check follows this list).
Apps support adaptive controls and input
UI elements use effective pixels (see Responsive design 101 for UWP apps), so they can respond with a
layout that works based on the number of screen pixels available on the device. And they work well with
multiple types of input such as keyboard, mouse, touch, pen, and Xbox One controllers. If you need to
further tailor your UI to a specific screen size or device, new layout panels and tooling help you adapt your
UI to the devices your app may run on.
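
Here is the runtime device-family check mentioned in the extension SDKs item above. This is a minimal sketch using
the Windows.System.Profile.AnalyticsInfo API; the HoloLens branch is illustrative.

// A sketch: detect the device family at runtime before lighting up
// device-specific features. DeviceFamily returns strings such as
// "Windows.Desktop", "Windows.Mobile", or "Windows.Holographic".
string deviceFamily =
    Windows.System.Profile.AnalyticsInfo.VersionInfo.DeviceFamily;

if (deviceFamily == "Windows.Holographic")
{
    // Safe to use HoloLens-specific APIs here, provided the
    // corresponding extension SDK is referenced by the project.
}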

Use a language you already know


UWP apps use the Windows Runtime, a native API built into the operating system. This API is implemented in C++
and supported in C#, Visual Basic, C++, and JavaScript. Some options for writing apps in UWP include:
XAML UI and a C#, VB, or C++ backend
DirectX UI and a C++ backend
JavaScript and HTML
Microsoft Visual Studio 2015 provides a UWP app template for each language that lets you create a single project
for all devices. When your work is finished, you can produce an app package and submit it to the Windows Store
from within Visual Studio to get your app out to customers on any Windows 10 device.

UWP apps come to life on Windows


On Windows, your app can deliver relevant, real-time info to your users and keep them coming back for more. In
the modern app economy, your app has to be engaging to stay at the front of your users' lives. Windows provides
you with lots of resources to help keep your users returning to your app:
Live tiles and the lock screen show contextually relevant and timely info at a glance.
Push notifications bring real-time, breaking alerts to your users' attention when they're needed.
The Action Center is a place where you can organize and display notifications and content that users need to
take action on.
Background execution and triggers bring your app to life just when the user needs it.
Your app can use voice and Bluetooth LE devices to help users interact with the world around them.
Finally, you can use roaming data and the Windows Credential Locker to enable a consistent roaming experience
across all of the Windows screens where users run your app. Roaming data gives you an easy way to store a user's
preferences and settings in the cloud, without having to build your own sync infrastructure. And you can store user
credentials in the Credential Locker, where security and reliability are the top priority.
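
Both features take only a few lines of code. Here's an illustrative sketch; the key name, resource name, and
values below are hypothetical.

// A sketch: persist a preference that roams across the user's devices,
// and store a credential in the Credential Locker. The names and values
// below are illustrative.
var roamingSettings = Windows.Storage.ApplicationData.Current.RoamingSettings;
roamingSettings.Values["preferredTheme"] = "dark";

var vault = new Windows.Security.Credentials.PasswordVault();
vault.Add(new Windows.Security.Credentials.PasswordCredential(
    "MyApp", "user@example.com", "example-secret"));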

Monetize your app


On Windows, you can choose how you'll monetize your app across phones, tablets, PCs, and other devices. We
give you a number of ways to make money with your app and the services it delivers. All you need to do is choose
the one that works best for you:
A paid download is the simplest option. Just name your price.
Trials let users try your app before buying it, providing easier discoverability and conversion than the more
traditional "freemium" options.
Use sale prices for apps and add-ons.
In-app purchases and ads are also available.
Let's get started
For a more detailed look at the UWP, read the Guide to Universal Windows Platform apps. Then, check out Get set
up to download the tools you need to start creating apps.

More advanced topics


.NET Native - What it means for Universal Windows Platform (UWP) developers
Universal Windows apps in .NET
.NET for UWP apps
Intro to the Universal Windows Platform

In this guide, you'll learn about the Universal Windows Platform (UWP) and Windows 10:
What a device family is, and how to decide which one to target.
New UI controls and panels for adapting your UI to different screen sizes or rotations.
How to understand and control the API surface that is available to your app.
Windows 10 introduces the Universal Windows Platform (UWP), which provides a common app platform available
on every device that runs Windows 10. With this evolution, apps that target the UWP can call not only the WinRT
APIs that are common to all devices, but also APIs (including Win32 and .NET APIs) that are specific to the device
family the app is running on. The UWP provides a guaranteed core API layer across devices. This means you can
create a single app package that can be installed onto a wide range of devices. And, with that single app package,
the Windows Store provides a unified distribution channel to reach all the device types your app can run on.

Because your UWP app runs on a wide variety of devices with different form factors and types of input, you want it
to be tailored to each device and be able to unlock the unique capabilities of each device. Each device family has
unique APIs, in addition to the guaranteed core API layer. You can write code to access those unique APIs
conditionally so that your app lights up features specific to one type of device while presenting a different
experience on other devices. Adaptive UI controls and new layout panels help you to tailor your UI across a broad
range of screen resolutions.

Device families
Windows 8.1 and Windows Phone 8.1 apps target an operating system (OS): either Windows, or Windows Phone.
With Windows 10, you no longer target an operating system; instead, you target your app to one or more device
families. A device family identifies the APIs, system characteristics, and behaviors that you can expect across
devices within the device family. It also determines the set of devices on which your app can be installed from the
Store. Here is the device family hierarchy.
A device family is a set of APIs collected together and given a name and a version number. A device family is the
foundation of an OS. PCs run the desktop OS, which is based on the desktop device family. Phones and tablets, etc.,
run the mobile OS, which is based on the mobile device family. And so on.
The universal device family is special. It is not, directly, the foundation of any OS. Instead, the set of APIs in the
universal device family is inherited by child device families. The universal device family APIs are thus guaranteed to
be present in every OS and on every device.
Each child device family adds its own APIs to the ones it inherits. The resulting union of APIs in a child device family
is guaranteed to be present in the OS based on that device family, and on every device running that OS.
One benefit of device families is that your app can run on any, or even all, of a variety of devices, from phones and
tablets to desktop computers, Surface Hubs, Xbox consoles, and HoloLens. Your app can also use adaptive code to
dynamically detect and use features of a device that are outside of the universal device family.
The decision about which device family (or families) your app will target is yours to make. And that decision
impacts your app in these important ways. It determines:
The set of APIs that your app can assume to be present when it runs (and can therefore call freely).
The set of API calls that are safe only inside conditional statements.
The set of devices on which your app can be installed from the Store (and consequently the form factors that
you need to consider).
There are two main consequences of making a device family choice: the API surface that can be called
unconditionally by the app, and the number of devices the app can reach. These two factors involve tradeoffs and
are inversely related. For example, a UWP app is an app that specifically targets the universal device family, and
consequently is available to all devices. An app that targets the universal device family can assume the presence of
only the APIs in the universal device family (because that's what it targets). Other APIs must be called conditionally.
Also, such an app must have a highly adaptive UI and comprehensive input capabilities because it can run on a
wide variety of devices. A Windows mobile app is an app that specifically targets the mobile device family, and is
available to devices whose OS is based on the mobile device family (which includes phones, tablets, and similar
devices). A mobile device family app can assume the presence of all APIs in the mobile device family, and its UI has
to be moderately adaptive. An app that targets the IoT device family can be installed only on IoT devices and can
assume the presence of all APIs in the IoT device family. That app can be very specialized in its UI and input
capabilities because you know that it will run only on a specific type of device.
Here are some considerations to help you decide which device family to target:
Maximizing your app's reach
To reach the maximum range of devices with your app, and to have it run on as many kinds of devices as possible,
your app will target the universal device family. By doing so, the app automatically targets every device family
that's based on universal (in the diagram, all the children of universal). That means that the app runs on every OS
based on those device families, and on all the devices that run those operating systems. The only APIs that are
guaranteed to be available on all those devices are those in the set defined by the particular version of the universal device
family that you target. (With this release, that version is always 10.0.x.0.) To find out how an app can call APIs
outside of its target device family version, see Writing code later in this topic.
Limiting your app to one kind of device
You may not want your app to run on a wide range of devices; perhaps it's specialized for a desktop PC or for an
Xbox console. In that case you can choose to target your app at one of the child device families. For example, if you
target the desktop device family, the APIs guaranteed to be available to your app include the APIs inherited from
the universal device family plus the APIs that are particular to the desktop device family.
Limiting your app to a subset of all possible devices
Instead of targeting the universal device family, or targeting one of the child device families, you can instead target
two (or more) child device families. Targeting desktop and mobile might make sense for your app. Or desktop and
HoloLens. Or desktop, Xbox and Surface Hub, and so on.
Excluding support for a particular version of a device family
In rare cases you may want your app to run everywhere except on devices with a particular version of a particular
device family. For example, let's say your app targets version 10.0.x.0 of the universal device family. When the
operating system version changes in the future, say to 10.0.x.2, at that point you can specify that your app runs
everywhere except version 10.0.x.1 of Xbox by targeting your app to 10.0.x.0 of universal and 10.0.x.2 of Xbox. Your
app will then be unavailable on Xbox device family versions 10.0.x.1 and earlier.
By default, Microsoft Visual Studio specifies Windows.Universal as the target device family in the app package
manifest file. To specify the device family or device families that your app is offered to from within the Store,
manually configure the TargetDeviceFamily element in your Package.appxmanifest file.
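
For reference, here's a sketch of how that element might look in Package.appxmanifest; the version numbers shown
are illustrative.

<!-- Version numbers below are illustrative. -->
<Dependencies>
  <TargetDeviceFamily Name="Windows.Universal"
                      MinVersion="10.0.10240.0"
                      MaxVersionTested="10.0.14393.0" />
</Dependencies>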

UI and universal input


A UWP app can run on many different kinds of devices that have different forms of input, screen resolutions, DPI
density, and other unique characteristics. Windows 10 provides new universal controls, layout panels, and tooling
to help you adapt your UI to the devices your app may run on. For example, you can tailor the UI to take advantage
of the difference in screen resolution when your app is running on a desktop computer versus on a mobile device.
Some aspects of your app's UI will automatically adapt across devices. Controls such as buttons and sliders
automatically adapt across device families and input modes. Your app's user-experience design, however, may
need to adapt depending on the device the app is running on. For example, a photos app should adapt the UI when
running on a small, hand-held device to ensure that usage is ideal for single-hand use. When the photos app is
running on a desktop computer, the UI should adapt to take advantage of the additional screen space.
Windows helps you target your UI to multiple devices with the following features:
Universal controls and layout panels help you to optimize your UI for the screen resolution of the device
Common input handling allows you to receive input through touch, a pen, a mouse, or a keyboard, or a
controller such as a Microsoft Xbox controller
Tooling helps you to design UI that can adapt to different screen resolutions
Adaptive scaling adjusts to resolution and DPI differences across devices
Universal controls and layout panels
Windows 10 includes new controls such as the calendar and split view. The pivot control, which was previously
available only for Windows Phone, is also now available for the universal device family.
Controls have been updated to work well on larger screens, adapt themselves based on the number of screen
pixels available on the device, and work well with multiple types of input such as keyboard, mouse, touch, pen, and
controllers such as the Xbox controller.
You may find that you need to adapt your overall UI layout based on the screen resolution of the device your app
will be running on. For example, a communication app running on the desktop may include a picture-in-picture of
the caller and controls well suited to mouse input:

However, when the app runs on a phone, because there is less screen real-estate to work with, your app may
eliminate the picture-in-picture view and make the call button larger to facilitate one-handed operation:
To help you adapt your overall UI layout based on the amount of available screen space, Windows 10 introduces
adaptive panels and design states.
Design adaptive UI with adaptive panels
Layout panels give sizes and positions to their children, depending on available space. For example, StackPanel
orders its children sequentially (horizontally or vertically). Grid is like a CSS grid that places its children into cells.
The new RelativePanel implements a style of layout that is defined by the relationships between its child
elements. It's intended for use in creating app layouts that can adapt to changes in screen resolution. The
RelativePanel eases the process of rearranging elements by defining relationships between elements, which
allows you to build more dynamic UI without using nested layouts.
In the following example, blueButton will appear to the right of textBox1 regardless of changes in orientation or
layout, and orangeButton will appear immediately below, and aligned with, blueButton, even as the width of
textBox1 changes as text is typed into it. It would previously have required rows and columns in a Grid to achieve
this effect, but now it can be done using far less markup.

<RelativePanel>
    <TextBox x:Name="textBox1" Text="textbox" Margin="5"/>
    <Button x:Name="blueButton" Margin="5" Background="LightBlue" Content="ButtonRight"
            RelativePanel.RightOf="textBox1"/>
    <Button x:Name="orangeButton" Margin="5" Background="Orange" Content="ButtonBelow"
            RelativePanel.RightOf="textBox1" RelativePanel.Below="blueButton"/>
</RelativePanel>

Use visual state triggers to build UI that can adapt to available screen space
Your UI may need to adapt to changes in window size. Adaptive visual states allows you to change the visual state
in response to changes in the size of the window.
StateTriggers define a threshold at which a visual state is activated, which then sets layout properties as
appropriate for the window size that triggered the state change.
In the following example, when the window size is 720 pixels or more in width, the visual state named wideView
is triggered, which then arranges the Best-rated games panel to appear to the right of, and aligned with the top
of, the Top free games panel.
When the window is less than 720 pixels, the narrowView visual state is triggered because the wideView trigger
is no longer satisfied and so no longer in effect. The narrowView visual state positions the Best-rated games
panel below, and aligned with the left of, the Top paid games panel:

Here is the XAML for the visual state triggers described above. The definition of the panels, alluded to by " ... "
below, has been removed for brevity.

<Grid Background="{ThemeResource ApplicationPageBackgroundThemeBrush}">

    <VisualStateManager.VisualStateGroups>
        <VisualStateGroup>
            <VisualState x:Name="wideView">
                <VisualState.StateTriggers>
                    <AdaptiveTrigger MinWindowWidth="720" />
                </VisualState.StateTriggers>
                <VisualState.Setters>
                    <Setter Target="best.(RelativePanel.RightOf)" Value="free"/>
                    <Setter Target="best.(RelativePanel.AlignTopWith)" Value="free"/>
                </VisualState.Setters>
            </VisualState>
            <VisualState x:Name="narrowView">
                <VisualState.Setters>
                    <Setter Target="best.(RelativePanel.Below)" Value="paid"/>
                    <Setter Target="best.(RelativePanel.AlignLeftWithPanel)" Value="true"/>
                </VisualState.Setters>
                <VisualState.StateTriggers>
                    <AdaptiveTrigger MinWindowWidth="0" />
                </VisualState.StateTriggers>
            </VisualState>
        </VisualStateGroup>
    </VisualStateManager.VisualStateGroups>
    ...
</Grid>

Tooling
By default, you'll probably want to target the broadest possible device family. When you're ready to see how your
app looks and lays out on a particular device, use the device preview toolbar in Visual Studio to preview your UI on
a small or medium mobile device, on a PC, or on a large TV screen. That way you can tailor and test your adaptive
visual states:

You don't have to make a decision up front about every device type that you'll support. You can add an additional
device size to your project later.
Adaptive scaling
Windows 10 introduces an evolution of the existing scaling model. In addition to scaling vector content, there is a
unified set of scale factors that provides a consistent size for UI elements across a variety of screen sizes and
display resolutions. The scale factors are also compatible with the scale factors of other operating systems such as
iOS and Android. This makes it easier to share assets between these platforms.
The Store picks the assets to download based in part on the DPI of the device. Only the assets that best match the
device are downloaded.
Common input handling
You can build a Universal Windows app using universal controls that handle various inputs such as mouse,
keyboard, touch, pen, and controller (such as the Xbox controller). Traditionally, inking has been associated only
with pen input, but with Windows 10, you can ink with touch on some devices, and with any pointer input. Inking is
supported on many devices (including mobile devices) and can easily be incorporated with just a few lines of code.
The following APIs provide access to input:
CoreIndependentInputSource is a new API that allows you to consume raw input on the main thread or a
background thread.
PointerPoint unifies raw touch, mouse, and pen data into a single, consistent set of interfaces and events that
can be consumed on the main thread or background thread by using CoreInput.
PointerDevice is a device API that supports querying device capabilities so that you can determine what kinds
of input are available on the device.
The new InkCanvas XAML control and InkPresenter Windows Runtime APIs allow you to access ink stroke
data.
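
As a brief sketch of the unified model, a single handler receives PointerPoint data whether the input came from
touch, mouse, or pen. The element name page is an assumption for illustration.

// A sketch: one handler covers mouse, touch, and pen. Assumes a XAML
// element named "page"; GetCurrentPoint returns a unified PointerPoint.
page.PointerPressed += (sender, e) =>
{
    Windows.UI.Input.PointerPoint point = e.GetCurrentPoint(page);
    System.Diagnostics.Debug.WriteLine(
        $"{point.PointerDevice.PointerDeviceType} pressed at {point.Position}");
};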
Writing code
Your programming language options for your Windows 10 project in Visual Studio include Visual C++, C#, Visual
Basic, and JavaScript. For Visual C++, C#, and Visual Basic, you can use XAML for a full-fidelity, native UI
experience. For Visual C++ you can choose to draw with DirectX either instead of or as well as using XAML. For
JavaScript, your presentation layer will be HTML, and HTML is of course a cross-platform web standard. Much of
your code and UI will be universal and it will run the same way everywhere. But for code tailored to particular
device families, and for UI tailored to particular form factors, you'll have the option to use adaptive code and
adaptive UI. Let's look at these different cases.
Calling an API that's implemented by your target device family
Whenever you want to call an API, you'll need to know whether the API is implemented by the device family that
your app is targeting. If in doubt, you can look it up in the API reference documentation. If you open the relevant
topic and look at the Requirements section, you'll see what the implementing device family is. Let's say that your
app is targeting version 10.0.x.0 of the universal device family and you want to call members of the
Windows.UI.Core.SystemNavigationManager class. In this example, the device family is "Universal". It's a good
idea to further confirm that the class members that you want to call are also within your target, and in this case
they are. So in this example, you now know that the APIs are guaranteed to be present on every device that your
app can be installed on, and you can call the APIs in your code just like you normally would.

Windows.UI.Core.SystemNavigationManager.GetForCurrentView().BackRequested += TestView_BackRequested;

As another example, imagine that your app is targeting version 10.0.x.0 of the Xbox device family, and the
reference topic for an API that you want to call says that the API was introduced in version 10.0.x.0 of the Xbox
device family. In that case, again, the API is guaranteed to be present on every device that your app can be installed
on. So you would be able to call that API in your code in the normal way.
Note that Visual Studio's IntelliSense will not recognize APIs unless they are implemented by your app's target
device family or any extension SDKs that you have referenced. Consequently, if you haven't referenced any
extension SDKs, you can be sure that any APIs that appear in IntelliSense must therefore be in your target device
family and you can call them freely.
Calling an API that's NOT implemented by your target device family
There will be cases when you want to call an API, but your target device family is not listed in the documentation. In
that case you can opt to write adaptive code in order to call that API.
Writing adaptive code with the ApiInformation class
There are two steps to write adaptive code. The first step is to make the APIs that you want to access available to
your project. To do that, add a reference to the extension SDK that represents the device family that owns the APIs
that you want to conditionally call. See Extension SDKs.
The second step is to use the Windows.Foundation.Metadata.ApiInformation class in a condition in your code
to test for the presence of the API you want to call. This condition is evaluated wherever your app runs, but it
evaluates to true only on devices where the API is present and therefore available to call.
If you want to call just a small number of APIs, you could use the ApiInformation.IsTypePresent method like
this.
// Note: Cache the value instead of querying it more than once.
bool isHardwareButtonsAPIPresent =
    Windows.Foundation.Metadata.ApiInformation.IsTypePresent(
        "Windows.Phone.UI.Input.HardwareButtons");

if (isHardwareButtonsAPIPresent)
{
    Windows.Phone.UI.Input.HardwareButtons.CameraPressed +=
        HardwareButtons_CameraPressed;
}

In this case we can be confident that the presence of the HardwareButtons class implies the presence of the
CameraPressed event, because the class and the member have the same requirements info. But in time, new
members will be added to already-introduced classes, and those members will have later "introduced in" version
numbers. In such cases, instead of using IsTypePresent, you can test for the presence of individual members by
using IsEventPresent, IsMethodPresent, IsPropertyPresent, and similar methods. Here's an example.

bool isHardwareButtons_CameraPressedAPIPresent =
    Windows.Foundation.Metadata.ApiInformation.IsEventPresent(
        "Windows.Phone.UI.Input.HardwareButtons", "CameraPressed");

The set of APIs within a device family is further broken down into subdivisions known as API contracts. You can use
the ApiInformation.IsApiContractPresent method to test for the presence of an API contract. This is useful if
you want to test for the presence of a large number of APIs that all exist in the same version of an API contract.

bool isWindows_Devices_Scanners_ScannerDeviceContract_1_0Present =
    Windows.Foundation.Metadata.ApiInformation.IsApiContractPresent(
        "Windows.Devices.Scanners.ScannerDeviceContract", 1, 0);

Win32 APIs in the UWP


A UWP app or Windows Runtime Component written in C++/CX has access to the Win32 APIs that are part of the
UWP. These Win32 APIs are implemented by all Windows 10 device families. Link your app with Windowsapp.lib.
Windowsapp.lib is an "umbrella" lib that provides the exports for the UWP APIs. Linking to Windowsapp.lib adds
dependencies to your app on DLLs that are present on all Windows 10 device families.
For the full list of Win32 APIs available to UWP apps, see API Sets for UWP apps and Dlls for UWP apps.

User experience
A Universal Windows app allows you to take advantage of the unique capabilities of the device on which it is
running. Your app can make use of all of the power of a desktop device, the natural interaction of direct
manipulation on a tablet (including touch and pen input), the portability and convenience of mobile devices, the
collaborative power of Surface Hub, and other devices that support UWP apps.
Good design is the process of deciding how users will interact with your app, as well as how it will look and
function. User experience plays a huge part in determining how happy people will be with your app, so don't skimp
on this step. Design basics introduce you to designing a Universal Windows app. See the Introduction to Universal
Windows Platform (UWP) apps for designers for information on designing UWP apps that delight your users.
Before you start coding, see the device primer to help you think through the interaction experience of using your
app on all the different form factors you want to target.
In addition to interaction on different devices, plan your app to embrace the benefits of working across multiple
devices. For example:
Use cloud services to sync across devices. Learn how to connect to web services in support of your app
experience.
Consider how you can support users moving from one device to another, picking up where they left off.
Include notifications and in-app purchases in your planning. These features should work across devices.
Design your workflow using Navigation design basics for UWP apps to accommodate mobile, small-screen,
and large-screen devices. Lay out your user interface to respond to different screen sizes and resolutions.
Consider whether there are features of your app that don't make sense on a small mobile screen. There may
also be areas that don't make sense on a stationary desktop machine and require a mobile device to light
up. For example, most scenarios around location imply a mobile device.
Consider how you'll accommodate multiple kinds of input. See the Guidelines for interactions to learn how
users can interact with your app by using Cortana, Speech, Touch interactions, the Touch keyboard and
more.
See the Guidelines for text and text input for more traditional interaction experiences.

Submit a Universal Windows app through your Dashboard


The new unified Windows Dev Center dashboard lets you manage and submit all of your apps for Windows
devices in one place. New features simplify processes while giving you more control. You'll also find detailed
analytic reports, combined payout details, ways to promote your app and engage with your customers, and much
more.
See Using the unified Windows Dev Center dashboard to learn how to submit your apps for publication in the
Windows Store.

See Also
For more introductory material, see Windows 10 - An Introduction to Building Windows Apps for Windows 10
Devices
Get set up

It's easier than you think to get going. Follow these instructions and start creating Universal Windows Platform
(UWP) apps for Windows 10.

1. Get Windows 10
To develop UWP apps, you need the latest version of Windows.
Get Windows 10 online
Are you an MSDN subscriber? You can get ISO downloads here:
Get Windows 10 from MSDN Subscriber Downloads

2. Download or update Visual Studio


Microsoft Visual Studio 2017 helps you design, code, test, and debug your apps.
If you don't already have Visual Studio 2017, you can install the free Microsoft Visual Studio Community 2017.
This download includes device simulators for testing your apps:
Download Windows 10 developer tools
When you install Visual Studio, make sure to select the Universal Windows App Development Tools option, as
shown here:
Need some help with Visual Studio? See Get Started with Visual Studio.
If you have already started using Visual Studio, but discover you are missing some components, you can launch
the installer again from the New project dialog:

3. Enable your device for development


It's important to test your UWP apps on real PCs and phones. Before you can deploy apps to your PC or
Windows Phone, you have to enable it for development.
For detailed instructions, see Enable your device for development.

4. Register as an app developer


You can start developing apps now, but before you can submit them to the store, you need a developer account.
To get a developer account, go to the Sign up page.

What's next?
After you've installed the tools and gotten a developer license or a developer account, use our tutorials to create
your first app:
Create your first app tutorials

Want more tools and downloads?


For the complete list of tools and downloads, see Downloads.
Enable your device for development

Activate Developer Mode, sideload apps, and access other developer features

If you are using Visual Studio on a computer for the first time, you will need to enable Developer Mode on both the
development PC, and on any devices you'll use to test your code. Opening a UWP project when Developer Mode is not
enabled will either open the For developers settings page, or cause this dialog to appear in Visual Studio:

When you see this dialog, click settings for developers to open the For developers settings page.

NOTE
You can go to the For developers page at any time to enable or disable Developer Mode: simply enter "for developers" into the
Cortana search box in the taskbar.

Accessing settings for Developers


To enable Developer mode, or access other settings:
1. From the For developers settings dialog, choose the level of access that you need.
2. Read the disclaimer for the setting you chose, then click Yes to accept the change.
NOTE
If your device is owned by an organization, some options might be disabled by your organization.

Here's the settings page on the desktop device family:

Here's the settings page on the mobile device family:


Which setting should I choose: sideload apps or Developer Mode?
You can enable a device for development, or just for sideloading.
Windows Store apps is the default setting. If you aren't developing apps, or are using special internal apps issued by
your company, keep this setting active.
Sideloading is installing and then running or testing an app that has not been certified by the Windows Store. For
example, an app that is internal to your company only.
Developer mode lets you sideload apps, and also run apps from Visual Studio in debug mode.
By default, you can only install Universal Windows Platform (UWP) apps from the Windows Store. Changing these
settings to use developer features can change the level of security of your device. You should not install apps from
unverified sources.
Sideload apps
The Sideload apps setting is typically used by companies or schools that need to install custom apps on managed devices
without going through the Windows Store. In this case, it's common for the organization to enforce a policy that disables
the Windows Store apps setting, as shown previously in the image of the settings page. The organization also provides
the required certificate and install location to sideload apps. For more info, see the TechNet articles Sideload apps in
Windows 10 and Get started with app deployment in Microsoft Intune.
Device family specific info
On the desktop device family: You can install an app package (.appx) and any certificate that is needed to run the
app by running the Windows PowerShell script that is created with the package ("Add-AppDevPackage.ps1"). For
more info, see Packaging UWP apps.
On the mobile device family: If the required certificate is already installed, you can tap the file to install any .appx
sent to you via email or on an SD card.
Sideload apps is a more secure option than Developer Mode because you cannot install apps on the device without a
trusted certificate.
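
On the desktop device family, running that generated script typically looks like the following sketch (the folder
name is illustrative); the script installs the certificate if needed and then installs the app package.

PS C:\MyAppPackage> .\Add-AppDevPackage.ps1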

NOTE
If you sideload apps, you should still only install apps from trusted sources. When you install a sideloaded app that has not been
certified by the Windows Store, you are agreeing that you have obtained all rights necessary to sideload the app and that you are
solely responsible for any harm that results from installing and running the app. See the Windows > Windows Store section of this
privacy statement.

Developer Mode
Developer Mode replaces the Windows 8.1 requirements for a developer license. In addition to sideloading, the
Developer Mode setting enables debugging and additional deployment options, including an SSH service that allows
apps to be deployed to this device. To stop this service, you have to disable Developer Mode.
Device family specific info
On the desktop device family:
Enable Developer Mode to develop and debug apps in Visual Studio. As stated previously, you will be prompted in
Visual Studio if Developer Mode is not enabled.
Allows enabling of the Windows Subsystem for Linux. For more info, see About Bash on Ubuntu on Windows.
On the mobile device family:
Enable developer mode to deploy apps from Visual Studio and debug them on the device.
You can tap the file to install any .appx sent to you via email or on an SD card. Do not install apps from unverified
sources.
Additional Developer Mode features
For each device family, additional developer features might be available. These features are available only when
Developer Mode is enabled on the device, and might vary depending on your OS version.
When you enable Developer Mode, a package of options is installed that:
Installs Windows Device Portal. Device Portal is enabled, and firewall rules are configured for it, only when the
Enable Device Portal option is turned on.
Installs, enables, and configures firewall rules for the SSH services that allow remote installation of apps.
(Desktop only) Allows enabling of the Windows Subsystem for Linux. For more info, see About Bash on Ubuntu on
Windows.
This image shows developer features for the mobile device family on Windows 10:

Device Portal
To learn more about device discovery and Device Portal, see Windows Device Portal overview.
For device specific setup instructions, see:
Device Portal for Desktop
Device Portal for HoloLens
Device Portal for IoT
Device Portal for Mobile
Device Portal for Xbox
If you encounter problems enabling Developer Mode or Device Portal, see the Known Issues forum to find workarounds
for these issues.
SSH
SSH services are enabled when you enable Developer Mode on your device. This is used when your device is a
deployment target for UWP applications. The names of the services are 'SSH Server Broker' and 'SSH Server Proxy'.

NOTE
This is not Microsoft's OpenSSH implementation, which you can find on GitHub.

In order to take advantage of the SSH services, you can enable device discovery to allow pin pairing. If you intend to run
another SSH service, you can set this up on a different port or turn off the Developer Mode SSH services. To turn off the
SSH services, simply disable Developer Mode.
Device Discovery
When you enable device discovery, you are allowing your device to be visible to other devices on the network through
mDNS. This feature also allows you to get the SSH pin for pairing to this device.

You should enable device discovery only if you intend to make the device a deployment target. For example, if you use
Device Portal to deploy an app to a phone for testing, you need to enable device discovery on the phone, but not on your
development PC.
Error reporting (Mobile only)
Set this value to specify how many crash dumps are saved on your phone.
Collecting crash dumps on your phone gives you instant access to important crash information directly after the crash
occurs. Dumps are collected for developer-signed apps only. You can find the dumps in your phone's storage in the
Documents\Debug folder. For more info about dump files, see Using dump files.
Optimizations for Windows Explorer, Remote Desktop, and PowerShell (Desktop only)
On the desktop device family, the For developers settings page has shortcuts to settings that you can use to optimize
your PC for development tasks. For each setting, you can select the checkbox and click Apply, or click the Show settings
link to open the settings page for that option.
Tip
There are several tools you can use to deploy an app from a Windows 10 PC to a Windows 10 mobile device. Both
devices must be connected to the same subnet of the network by a wired or wireless connection, or they must be
connected by USB. Either of the ways listed installs only the app package (.appx); they do not install certificates.
Use the Windows 10 Application Deployment (WinAppDeployCmd) tool. Learn more about the WinAppDeployCmd
tool.
Starting in Windows 10, Version 1511, you can use Device Portal to deploy from your browser to a mobile device
running Windows 10, Version 1511 or later. Use the Apps page in Device Portal to upload an app package (.appx) and
install it on the device.
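
A typical WinAppDeployCmd invocation looks like the following sketch; the file name, IP address, and PIN shown
here are illustrative.

WinAppDeployCmd install -file "MyApp.appx" -ip 192.168.0.1 -pin A1B2C3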

Use group policies or registry keys to enable a device


In most cases, you'll want to use the Settings app to enable your device for debugging. In certain scenarios, such as
automated tests, you can use other ways to enable your Windows 10 desktop device for development.
You can use gpedit.msc to set the group policies to enable your device, unless you have Windows 10 Home. If you do
have Windows 10 Home, you need to use regedit or PowerShell commands to set the registry keys directly to enable
your device.
Use gpedit to enable your device
1. Run Gpedit.msc.
2. Go to Local Computer Policy > Computer Configuration > Administrative Templates > Windows Components > App
Package Deployment
3. To enable sideloading, edit the policies to enable:
Allow all trusted apps to install
OR -
To enable developer mode, edit the policies to enable both:
Allow all trusted apps to install
Allows development of Windows Store apps and installing them from an integrated development
environment (IDE)
4. Reboot your machine.
Use regedit to enable your device
1. Run regedit.
2. To enable sideloading, set the value of this DWORD to 1:
HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion\AppModelUnlock\AllowAllTrustedApps
OR -
To enable developer mode, set the value of this DWORD to 1:
HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion\AppModelUnlock\AllowDevelopmentWithoutDevLicense
Use PowerShell to enable your device
1. Run PowerShell with administrator privileges.
2. To enable sideloading, run this command:
PS C:\WINDOWS\system32> reg add "HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows\CurrentVersion\AppModelUnlock" /t REG_DWORD /f /v "AllowAllTrustedApps" /d "1"
OR -
To enable developer mode, run this command:
PS C:\WINDOWS\system32> reg add "HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows\CurrentVersion\AppModelUnlock" /t REG_DWORD /f /v "AllowDevelopmentWithoutDevLicense" /d "1"

Upgrade your device from Windows 8.1 to Windows 10


When you create or sideload apps on your Windows 8.1 device, you have to install a developer license. If you upgrade
your device from Windows 8.1 to Windows 10, this information remains. Run the following command to remove this
information from your upgraded Windows 10 device. This step is not required if you upgrade directly from Windows 8.1
to Windows 10, Version 1511 or later.
To unregister a developer license
1. Run PowerShell with administrator privileges.
2. Run this command: unregister-windowsdeveloperlicense.
After this, you need to enable your device for development as described in this topic so that you can continue to develop
on this device. If you don't do that, you might get an error when you debug your app or try to create a package for it.
Here is an example of this error:
Error : DEP0700 : Registration of the app failed.

See Also
What's a Universal Windows app?
Get set up
Sign up for Windows account
Ready to sign up?

Register now for a developer account so you can get your apps into the Windows Store and participate in other
Microsoft programs.
Sign up now!

Opening your developer account


We offer individual and company accounts in locations around the world. Check out our overview of the sign-up
process to see how it works.

Have a name for your app?


As soon as you open your developer account, you can create your app by reserving a name.
Create your first app

Write a UWP app using your favorite programming language

Welcome to the UWP platform! These tutorials will help you create your first UWP app in the language of your
choice. You'll learn how to:
Create a UWP app project in Microsoft Visual Studio.
Add UI elements and code to your project.
Use third party libraries to add new functionality.
Build and debug your app on your local machine.
To get started, choose your favorite language!

C# and XAML
Use your .NET, WPF, or Silverlight skills to build apps using XAML with C#.
Create a "Hello, world" app using XAML with C#
We assume you're already comfortable with XAML and either C# or Visual Basic. If you want to learn the basics, or just refresh
your memory, try these courses from the Microsoft Virtual Academy.
And as a language refresher:
C# Fundamentals for Absolute Beginners
VB Fundamentals for Absolute Beginners
A Developer's Guide to Windows 10
If you are ready to attempt something more fun than "Hello, World!", try this C# and MonoGame tutorial:
A simple 2D UWP game for the Windows Store, written in C# and MonoGame

JavaScript and HTML


Take advantage of your web skills to build apps using HTML5, Cascading Style Sheets, Level 3 (CSS3), and
JavaScript.
Create a "Hello, world" app using HTML and JavaScript
A simple 2D UWP game for the Windows Store, written in JavaScript and CreateJS
A 3D UWP game for the Windows Store, written in JavaScript and threeJS
We assume you're already comfortable with HTML5, CSS3, and JavaScript. If you want to learn the basics, or just
refresh your memory, try these courses from the Microsoft Virtual Academy.
JavaScript Fundamentals for Absolute Beginners
HTML5 & CSS3 Fundamentals for Absolute Beginners

Visual C++ component extensions (C++/CX) and XAML


Take advantage of your C++ programming expertise to build apps using Visual C++ component extensions
(C++/CX) with XAML.
Create a "Hello, world" app using XAML with C++/CX
We assume you're already comfortable with XAML and C++. If you want to learn the basics, or just refresh your
memory, try these courses from the Microsoft Virtual Academy.
C++: A General Purpose Language and Library Jump Start

Objective-C
Are you more of an iOS developer?
Use the Windows Bridge for iOS to convert your existing code to a UWP app, and keep developing in Objective-C.

Cross-platform and mobile development


Need to target Android and iOS? Check out Xamarin.

Related topics
Publishing your Windows Store app.
How-to articles on developing UWP apps
Code Samples for UWP developers
What's a Universal Windows app?
Get set up
Sign up for Windows account
Create a "Hello, world" app (XAML)

This tutorial teaches you how to use XAML and C# to create a simple "Hello, world" app for the Universal Windows
Platform (UWP) on Windows 10. With a single project in Microsoft Visual Studio, you can build an app that runs on
any Windows 10 device.
Here you'll learn how to:
Create a new Visual Studio 2017 project that targets Windows 10 and the UWP.
Write XAML to change the UI on your start page.
Run the project on the local desktop in Visual Studio.
Use a SpeechSynthesizer to make the app talk when you press a button.
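
As a preview of that last step, here's a minimal sketch of the kind of code the tutorial builds up to (not
necessarily its exact listing). It assumes a Button whose Click event is wired to Button_Click and a MediaElement
named mediaElement in MainPage.xaml.

// A sketch: synthesize speech from text and play it through a
// MediaElement assumed to be declared in the page's XAML.
private async void Button_Click(object sender, RoutedEventArgs e)
{
    using (var synthesizer = new Windows.Media.SpeechSynthesis.SpeechSynthesizer())
    {
        var stream = await synthesizer.SynthesizeTextToStreamAsync("Hello, World!");
        mediaElement.SetSource(stream, stream.ContentType);
        mediaElement.Play();
    }
}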

Before you start...


What's a Universal Windows app?
To complete this tutorial, you need Windows 10 and Visual Studio 2017. Get set up.
We also assume you're using the default window layout in Visual Studio. If you change the default layout, you
can reset it in the Window menu by using the Reset Window Layout command.

NOTE
This tutorial is using Visual Studio Community 2017. If you are using a different version of Visual Studio, it may look a little
different for you.

Step 1: Create a new project in Visual Studio.


1. Launch Visual Studio 2017.
2. From the File menu, select New > Project... to open the New Project dialog.
3. From the list of templates on the left, open Installed > Templates > Visual C# > Windows, and then
choose Universal to see the list of UWP project templates.
(If you don't see any Universal templates, you might be missing the components for creating UWP apps. You
can repeat the installation process and add UWP support by clicking Open Visual Studio installer on the
New Project dialog. See Get set up.)
4. Choose the Blank App (Universal Windows) template, and enter "HelloWorld" as the Name. Select OK.

NOTE
If this is the first time you have used Visual Studio, you might see a Settings dialog asking you to enable Developer mode.
Developer mode is a special setting that enables certain features, such as permission to run apps directly, rather than only
from the Store. For more information, please read Enable your device for development. To continue with this guide, select
Developer mode, click Yes, and close the dialog.
5. The target version/minimum version dialog appears. The default settings are fine for this tutorial, so select
OK to create the project.

6. When your new project opens, its files are displayed in the Solution Explorer pane on the right. You may
need to choose the Solution Explorer tab instead of the Properties tab to see your files.
Although the Blank App (Universal Windows) is a minimal template, it still contains a lot of files. These files are
essential to all UWP apps using C#. Every project that you create in Visual Studio contains them.
What's in the files?
To view and edit a file in your project, double-click the file in the Solution Explorer. Expand a XAML file just like a
folder to see its associated code file. XAML files open in a split view that shows both the design surface and the
XAML editor.

NOTE
What is XAML? Extensible Application Markup Language (XAML) is the language used to define your app's user interface. It
can be entered manually, or created using the Visual Studio design tools. A .xaml file has a .xaml.cs code-behind file which
contains the logic. Together, the XAML and code-behind make a complete class. For more information, see XAML overview.

App.xaml and App.xaml.cs


App.xaml is where you declare resources that are used across the app.
App.xaml.cs is the code-behind file for App.xaml. Like all code-behind pages, it contains a constructor that calls
the InitializeComponent method. You don't write the InitializeComponent method. It's generated by Visual Studio, and
its main purpose is to initialize the elements declared in the XAML file.
App.xaml.cs is the entry point for your app.
App.xaml.cs also contains methods to handle activation and suspension of the app.
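For example, the suspension handler that the template wires up looks roughly like this. This is a sketch of the generated code; the exact contents vary between template versions.

// Sketch of the suspension handler the Blank App template generates in App.xaml.cs.
// The deferral keeps the app alive until state saving is finished.
private void OnSuspending(object sender, Windows.ApplicationModel.SuspendingEventArgs e)
{
    var deferral = e.SuspendingOperation.GetDeferral();
    // TODO: Save application state and stop any background activity.
    deferral.Complete();
}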
MainPage.xaml
MainPage.xaml is where you define the UI for your app. You can add elements directly using XAML markup, or
you can use the design tools provided by Visual Studio.
MainPage.xaml.cs is the code-behind page for MainPage.xaml. It's where you add your app logic and event
handlers.
Together these two files define a new class called MainPage , which inherits from Page, in the HelloWorld
namespace.
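If you open MainPage.xaml.cs now, the class declaration looks something like this (a sketch of the generated code-behind):

// MainPage.xaml.cs (sketch). The 'partial' keyword is what lets the compiler
// merge this class with the code generated from MainPage.xaml.
public sealed partial class MainPage : Page
{
    public MainPage()
    {
        this.InitializeComponent();
    }
}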
Package.appxmanifest
A manifest file that describes your app: its name, description, tile, start page, etc.
Includes a list of the files that your app contains.
A set of logo images
Assets/Square150x150Logo.scale-200.png represents your app in the start menu.
Assets/StoreLogo.png represents your app in the Windows Store.
Assets/SplashScreen.scale-200.png is the splash screen that appears when your app starts.

Step 2: Adding a button


Using the designer view
Let's add a button to our page. In this tutorial, you work with just a few of the files listed previously: App.xaml,
MainPage.xaml, and MainPage.xaml.cs.
1. Double-click on MainPage.xaml to open it in the Design view.
You'll notice there is a graphical view on the top part of the screen, and the XAML code view underneath.
You can make changes to either, but for now we'll use the graphical view.

2. Click on the vertical Toolbox tab on the left to open the list of UI controls. (You can click the pin icon in its
title bar to keep it visible.)
3. Expand Common XAML Controls, and drag the Button out to the middle of the design canvas.

If you look at the XAML code window, you'll see that the Button has been added there too:
<Button x:Name="button" Content="Button" HorizontalAlignment="Left" Margin="152,293,0,0" VerticalAlignment="Top"/>

4. Change the button's text.


Click in the XAML code view, and change the Content from "Button" to "Hello, world!".

<Button x:Name="button" Content="Hello, world!" HorizontalAlignment="Left" Margin="152,293,0,0" VerticalAlignment="Top"/>

Notice how the button displayed in the design canvas updates to display the new text.

Step 3: Start the app


At this point, you've created a very simple app. This is a good time to build, deploy, and launch your app and see
what it looks like. You can debug your app on the local machine, in a simulator or emulator, or on a remote device.
Here's the target device menu in Visual Studio.

Start the app on a Desktop device


By default, the app runs on the local machine. The target device menu provides several options for debugging your
app on devices from the desktop device family.
Simulator
Local Machine
Remote Machine
To start debugging on the local machine

1. In the target device menu on the Standard toolbar, make sure that Local Machine is
selected. (It's the default selection.)
2. Click the Start Debugging button on the toolbar.
or
From the Debug menu, click Start Debugging.
or
Press F5.
The app opens in a window, and a default splash screen appears first. The splash screen is defined by an image
(SplashScreen.png) and a background color (specified in your app's manifest file).
The splash screen disappears, and then your app appears. It looks like this.

Press the Windows key to open the Start menu, then show all apps. Notice that deploying the app locally adds its
tile to the Start menu. To run the app again later (not in debugging mode), tap or click its tile in the Start menu.
It doesn't do much yet, but congratulations, you've built your first UWP app!
To stop debugging

Click the Stop Debugging button in the toolbar.


or
From the Debug menu, click Stop debugging.
or
Close the app window.
Step 4: Event handlers
An "event handler" sounds complicated, but it's just another name for the code that is called when an event
happens (such as the user clicking on your button).
1. Stop the app from running, if you haven't already.
2. Double-click on the button control on the design canvas to make Visual Studio create an event handler for
your button.
You can, of course, create all the code manually too. Or you can click on the button to select it, and look in
the Properties pane on the lower right. If you switch to Events (the little lightning bolt) you can add the
name of your event handler.
3. Edit the event handler code in MainPage.xaml.cs, the code-behind page. This is where things get interesting.
The default event handler looks like this:

private void Button_Click(object sender, RoutedEventArgs e)
{

}
Let's change it, so it looks like this:

private async void Button_Click(object sender, RoutedEventArgs e)


{
MediaElement mediaElement = new MediaElement();
var synth = new Windows.Media.SpeechSynthesis.SpeechSynthesizer();
Windows.Media.SpeechSynthesis.SpeechSynthesisStream stream = await synth.SynthesizeTextToStreamAsync("Hello, World!");
mediaElement.SetSource(stream, stream.ContentType);
mediaElement.Play();
}

Make sure you include the async keyword as well, or you'll get an error when you try to run the app.
What did we just do?
This code uses some Windows APIs to create a speech synthesis object, and then gives it some text to say. (For
more information on using SpeechSynthesis, see the SpeechSynthesis namespace docs.)
When you run the app and click on the button, your computer (or phone) will literally say "Hello, World!".
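If you want to experiment further, the SpeechSynthesizer also lets you choose among the voices installed on the device. Here's a minimal sketch, assuming the device has more than one voice installed:

// AllVoices lists every text-to-speech voice installed on the device.
var voices = Windows.Media.SpeechSynthesis.SpeechSynthesizer.AllVoices;
var synth = new Windows.Media.SpeechSynthesis.SpeechSynthesizer();
if (voices.Count > 1)
{
    synth.Voice = voices[voices.Count - 1]; // pick a different voice to hear the change
}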

Summary
Congratulations, you've created your first app for Windows 10 and the UWP!
To learn how to use XAML for laying out the controls your app will use, try the grid tutorial, or jump straight to the
next steps.
Create a "Hello, world" app (JS)

This tutorial teaches you how to use JavaScript and HTML to create a simple "Hello, world" app that targets the
Universal Windows Platform (UWP) on Windows 10. With a single project in Microsoft Visual Studio, you can build
an app that runs on any Windows 10 device.

NOTE
This tutorial uses Visual Studio Community 2017. If you are using a different version of Visual Studio, it may look a little
different for you.

Here you'll learn how to:


Create a new Visual Studio 2017 project that targets Windows 10 and the UWP.
Add HTML and JavaScript content
Run the project on the local desktop in Visual Studio

Before you start...


What's a Universal Windows app?
To complete this tutorial, you need Windows 10 and Visual Studio 2017. Get set up.
We also assume you're using the default window layout in Visual Studio. If you change the default layout, you
can reset it in the Window menu by using the Reset Window Layout command.

Step 1: Create a new project in Visual Studio.


1. Launch Visual Studio 2017.
2. From the File menu, select New > Project... to open the New Project dialog.
3. From the list of templates on the left, open Installed > Templates > JavaScript, and then choose
Windows Universal to see the list of UWP project templates.
(If you don't see any Universal templates, you might be missing the components for creating UWP apps. You
can repeat the installation process and add UWP support by clicking Open Visual Studio installer on the
New Project dialog. See Get set up.)
4. Choose the Blank App (Universal Windows) template, and enter "HelloWorld" as the Name. Select OK.
NOTE
If this is the first time you have used Visual Studio, you might see a Settings dialog asking you to enable Developer mode.
Developer mode is a special setting that enables certain features, such as permission to run apps directly, rather than only
from the Store. For more information, please read Enable your device for development. To continue with this guide, select
Developer mode, click Yes, and close the dialog.
5. The target version/minimum version dialog appears. The default settings are fine for this tutorial, so select
OK to create the project.

6. When your new project opens, its files are displayed in the Solution Explorer pane on the right. You may
need to choose the Solution Explorer tab instead of the Properties tab to see your files.

Although the Blank App (Universal Windows) is a minimal template, it still contains a lot of files. These files are
essential to all UWP apps using JavaScript. Every project that you create in Visual Studio contains them.
What's in the files?
To view and edit a file in your project, double-click the file in the Solution Explorer.
default.css
The default stylesheet used by the app.
main.js
The default JavaScript file. It's referenced in the index.html file.
index.html
The app's web page, loaded and displayed when the app is launched.
A set of logo images
Assets/Square150x150Logo.scale-200.png represents your app in the start menu.
Assets/StoreLogo.png represents your app in the Windows Store.
Assets/SplashScreen.scale-200.png is the splash screen that appears when your app starts.
Step 2: Adding a button
Click on index.html to select it in the editor, and change the HTML it contains to read:

<!DOCTYPE html>
<html>

<head>
<meta charset="utf-8" />
<title>Hello World</title>
<script src="js/main.js"></script>
<link href="css/default.css" rel="stylesheet" />
</head>

<body>
<p>Click the button.</p>
<button id="button">Hello world!</button>
</body>

</html>

It should look like this:

This HTML references the main.js that will contain our JavaScript, and then adds a single line of text and a single
button to the body of the web page. The button is given an ID so the JavaScript will be able to reference it.

Step 3: Adding some JavaScript


Now we'll add the JavaScript. Click on main.js to select it, and add the following:
// Your code here!

window.onload = function () {
document.getElementById("button").onclick = function (evt) {
sayHello()
}
}

function sayHello() {
var messageDialog = new Windows.UI.Popups.MessageDialog("Hello, world!", "Alert");
messageDialog.showAsync();
}

It should look like this:

This JavaScript declares two functions. The window.onload function is called automatically when index.html is
displayed. It finds the button (using the ID we declared) and adds an onclick handler: the method that will be called
when the button is clicked.
The second function, sayHello(), creates and displays a dialog. This is very similar to the alert() function you may
know from previous JavaScript development.

Step 4: Run the app!


Now you can run the app by pressing F5. The app will load and the web page will be displayed. Click on the button, and
the message dialog will pop up.
Summary
Congratulations, you've created a JavaScript app for Windows 10 and the UWP! This is a ridiculously simple
example, but you can now start adding your favorite JavaScript libraries and frameworks to create your own
app. And as it's a UWP app, you can publish it to the Store. For examples of how third-party frameworks can be
added, see these projects:
A simple 2D UWP game for the Windows Store, written in JavaScript and CreateJS
A 3D UWP game for the Windows Store, written in JavaScript and threeJS
Create a "hello world" app in C++

With Microsoft Visual Studio 2017, you can use C++ to develop an app that runs on Windows 10 with a UI that's
defined in Extensible Application Markup Language (XAML).

NOTE
This tutorial uses Visual Studio Community 2017. If you are using a different version of Visual Studio, it may look a little
different for you.

Before you start...


To complete this tutorial, you must use Visual Studio Community 2017, or one of the non-Community versions
of Visual Studio 2017, on a computer that's running Windows 10. To download, see Get the tools.
We assume you have a basic understanding of standard C++, XAML, and the concepts in the XAML overview.
We assume you're using the default window layout in Visual Studio. To reset to the default layout, on the menu
bar, choose Window > Reset Window Layout.

Comparing C++ desktop apps to Windows apps


If you're coming from a background in Windows desktop programming in C++, you'll probably find that some
aspects of writing apps for the UWP are familiar, but other aspects require some learning.
What's the same?
You can use the STL, the CRT (with some exceptions), and any other C++ library as long as the code does
not attempt to call Windows functions that are not accessible from the Windows Runtime environment.
If you're accustomed to visual designers, you can still use the designer built into Microsoft Visual Studio, or
you can use the more full-featured Blend for Visual Studio. If you're accustomed to coding UI by hand, you
can hand-code your XAML.
You're still creating apps that use Windows operating system types and your own custom types.
You're still using the Visual Studio debugger, profiler, and other development tools.
You're still creating apps that are compiled to native machine code by the Visual C++ compiler. Windows
Store apps in C++ don't execute in a managed runtime environment.
What's new?
The design principles for Windows Store apps and Universal Windows apps are very different from those
for desktop apps. Window borders, labels, dialog boxes, and so on, are de-emphasized. Content is foremost.
Great Universal Windows apps incorporate these principles from the very beginning of the planning stage.
You're using XAML to define the entire UI. The separation between UI and core program logic is much
clearer in a Windows Universal app than in an MFC or Win32 app. Other people can work on the
appearance of the UI in the XAML file while you're working on the behavior in the code file.
You're primarily programming against a new, easy-to-navigate, object-oriented API, the Windows Runtime,
although on Windows devices Win32 is still available for some functionality.
You use C++/CX to consume and create Windows Runtime objects. C++/CX enables C++ exception
handling, delegates, events, and automatic reference counting of dynamically created objects. When you use
C++/CX, the details of the underlying COM and Windows architecture are hidden from your app code. For
more information, see C++/CX Language Reference.
Your app is compiled into a package that also contains metadata about the types that your app contains, the
resources that it uses, and the capabilities that it requires (file access, internet access, camera access, and so
forth).
In the Windows Store and Windows Phone Store your app is verified as safe by a certification process and
made discoverable to millions of potential customers.

Hello World Store app in C++


Our first app is a "Hello World" that demonstrates some basic features of interactivity, layout, and styles. We'll
create an app from the Windows Universal app project template. If you've developed apps for Windows 8.1 and
Windows Phone 8.1 before, you might remember that you had to have three projects in Visual Studio, one for the
Windows app, one for the phone app, and another with shared code. The Windows 10 Universal Windows Platform
(UWP) makes it possible to have just one project, which runs on all devices, including desktop and laptop
computers running Windows 10, devices such as tablets, mobile phones, VR devices and so on.
We'll start with the basics:
How to create a Universal Windows project in Visual Studio 2017.
How to understand the projects and files that are created.
How to understand the extensions in Visual C++ component extensions (C++/CX), and when to use them.
First, create a solution in Visual Studio
1. In Visual Studio, on the menu bar, choose File > New > Project.
2. In the New Project dialog box, in the left pane, expand Installed > Visual C++ > Windows Universal.

NOTE
You may be prompted to install the Windows Universal tools for C++ development.

3. In the center pane, select Blank App (Universal Windows).


(If you don't see these options, make sure you have the Universal Windows App Development Tools
installed. See Get set up for more info.)
4. Enter a name for the project. We'll name it HelloWorld.
5. Choose the OK button.

NOTE
If this is the first time you have used Visual Studio, you might see a Settings dialog asking you to enable Developer mode.
Developer mode is a special setting that enables certain features, such as permission to run apps directly, rather than only
from the Store. For more information, please read Enable your device for development. To continue with this guide, select
Developer mode, click Yes, and close the dialog.

Your project files are created.


Before we go on, let's look at what's in the solution.
About the project files
Every .xaml file in a project folder has a corresponding .xaml.h file and .xaml.cpp file in the same folder and a .g file
and a .g.hpp file in the Generated Files folder, which is on disk but not part of the project. You modify the XAML
files to create UI elements and connect them to data sources (DataBinding). You modify the .h and .cpp files to add
custom logic for event handlers. The auto-generated files represent the transformation of the XAML markup into
C++. Don't modify these files, but you can study them to better understand how the code-behind works. Basically,
the generated file contains a partial class definition for a XAML root element; this class is the same class that you
modify in the *.xaml.h and .cpp files. The generated files declare the XAML UI child elements as class members so
that you can reference them in the code you write. At build time, the generated code and your code are merged
into a complete class definition and then compiled.
Let's look first at the project files.
App.xaml, App.xaml.h, App.xaml.cpp: Represent the Application object, which is an app's entry point.
App.xaml contains no page-specific UI markup, but you can add UI styles and other elements that you want to
be accessible from any page. The code-behind files contain handlers for the OnLaunched and OnSuspending
events. Typically, you add custom code here to initialize your app when it starts and perform cleanup when it's
suspended or terminated.
MainPage.xaml, MainPage.xaml.h, MainPage.xaml.cpp: Contain the XAML markup and code-behind for
the default "start" page in an app. It has no navigation support or built-in controls.
pch.h, pch.cpp: A precompiled header file and the file that includes it in your project. In pch.h, you can include
any headers that do not change often and are included in other files in the solution.
Package.appxmanifest: An XML file that describes the device capabilities that your app requires, and the app
version info and other metadata. To open this file in the Manifest Designer, just double-click it.
HelloWorld_TemporaryKey.pfx: A key that enables deployment of the app on this machine, from Visual
Studio.

A first look at the code


If you examine the code in App.xaml.h, App.xaml.cpp in the shared project, you'll notice that it's mostly C++ code
that looks familiar. However, some syntax elements might not be as familiar if you are new to Windows Runtime
apps, or you've worked with C++/CLI. Here are the most common non-standard syntax elements you'll see in
C++/CX:
Ref classes
Almost all Windows Runtime classes (this includes all the types in the Windows API: XAML controls, the pages in
your app, the App class itself, all device and network objects, and all container types) are declared as a ref class. (A few
Windows types are value class or value struct). A ref class is consumable from any language. In C++, the lifetime
of these types is governed by automatic reference counting (not garbage collection) so that you never explicitly
delete these objects. You can create your own ref classes as well.

namespace HelloWorld
{
/// <summary>
/// An empty page that can be used on its own or navigated to within a Frame.
/// </summary>
public ref class MainPage sealed
{
public:
MainPage();
};
}

All Windows Runtime types must be declared within a namespace, and unlike in ISO C++, the types themselves
have an accessibility modifier. The public modifier makes the class visible to Windows Runtime components
outside the namespace. The sealed keyword means the class cannot serve as a base class. Almost all ref classes are
sealed; class inheritance is not broadly used because JavaScript does not understand it.
ref new and ^ (hats)
You declare a variable of a ref class by using the ^ (hat) operator, and you instantiate the object with the ref new
keyword. Thereafter you access the object's instance methods with the -> operator just like a C++ pointer. Static
methods are accessed with the :: operator just as in ISO C++.
In the following code, we use the fully qualified name to instantiate an object, and use the -> operator to call an
instance method.

Windows::UI::Xaml::Media::Imaging::BitmapImage^ bitmapImage =
ref new Windows::UI::Xaml::Media::Imaging::BitmapImage();

bitmapImage->SetSource(fileStream);

Typically, in a .cpp file we would add a using namespace Windows::UI::Xaml::Media::Imaging directive and use the auto keyword,
so that the same code would look like this:

auto bitmapImage = ref new BitmapImage();


bitmapImage->SetSource(fileStream);

Properties
A ref class can have properties, which, just as in managed languages, are special member functions that appear as
fields to consuming code.

public ref class SaveStateEventArgs sealed


{
public:
// Declare the property
property Windows::Foundation::Collections::IMap<Platform::String^, Platform::Object^>^ PageState
{
Windows::Foundation::Collections::IMap<Platform::String^, Platform::Object^>^ get();
}
...
};

...
// consume the property like a public field
void PhotoPage::SaveState(Object^ sender, Common::SaveStateEventArgs^ e)
{
if (mruToken != nullptr && !mruToken->IsEmpty())
{
e->PageState->Insert("mruToken", mruToken);
}
}

Delegates
Just as in managed languages, a delegate is a reference type that encapsulates a function with a specific signature.
They are most often used with events and event handlers.

// Delegate declaration (within namespace scope)


public delegate void LoadStateEventHandler(Platform::Object^ sender, LoadStateEventArgs^ e);

// Event declaration (class scope)


public ref class NavigationHelper sealed
{
public:
event LoadStateEventHandler^ LoadState;
};

// Create the event handler in consuming class


MainPage::MainPage()
{
auto navigationHelper = ref new Common::NavigationHelper(this);
navigationHelper->LoadState += ref new Common::LoadStateEventHandler(this, &MainPage::LoadState);
}

Adding content to the app


Let's add some content to the app.
Step 1: Modify your start page
1. In Solution Explorer, open MainPage.xaml.
2. Create controls for the UI by adding the following XAML to the root Grid, immediately before its closing tag.
It contains a StackPanel that has a TextBlock that asks the user's name, a TextBox element that accepts
the user's name, a Button, and another TextBlock element.

<StackPanel x:Name="contentPanel" Margin="120,30,0,0">


<TextBlock HorizontalAlignment="Left" Text="Hello World" FontSize="36"/>
<TextBlock Text="What's your name?"/>
<StackPanel x:Name="inputPanel" Orientation="Horizontal" Margin="0,20,0,20">
<TextBox x:Name="nameInput" Width="300" HorizontalAlignment="Left"/>
<Button x:Name="inputButton" Content="Say \"Hello\""/>
</StackPanel>
<TextBlock x:Name="greetingOutput"/>
</StackPanel>

3. At this point, you have created a very basic Universal Windows app. To see what the UWP app looks like,
press F5 to build, deploy, and run the app in debugging mode.
The default splash screen appears first. It has an image (Assets\SplashScreen.scale-100.png) and a background
color that are specified in the app's manifest file. To learn how to customize the splash screen, see Adding a splash
screen.
When the splash screen disappears, your app appears. It displays the main page of the App.

It doesn't do much yet, but congratulations, you've built your first Universal Windows Platform app!
To stop debugging and close the app, return to Visual Studio and press Shift+F5.
For more information, see Run a Store app from Visual Studio.
In the app, you can type in the TextBox, but clicking the Button doesn't do anything. In later steps, you create an
event handler for the button's Click event, which displays a personalized greeting.

Step 2: Create an event handler


1. In MainPage.xaml, in either XAML or design view, select the "Say Hello" Button in the StackPanel you added
earlier.
2. Open the Properties Window by pressing Alt+Enter, and then choose the Events button (the lightning bolt).
3. Find the Click event. In its text box, type the name of the function that handles the Click event. For this
example, type "Button_Click".
4. Press Enter. The event handler method is created in MainPage.xaml.cpp and opened so that you can add the
code that's executed when the event occurs.
At the same time, in MainPage.xaml, the XAML for the Button is updated to declare the Click event handler,
like this:

<Button Content="Say &quot;Hello&quot;" Click="Button_Click"/>

You could also have simply added this to the XAML code manually, which can be helpful if the designer
doesn't load. If you enter this manually, type "Click" and then let IntelliSense pop up the option to add a new
event handler. That way, Visual Studio creates the necessary method declaration and stub.
The designer fails to load if an unhandled exception occurs during rendering. Rendering in the designer
involves running a design-time version of the page. It can be helpful to disable running user code. You can
do this by changing the setting in the Tools, Options dialog box. Under XAML Designer, uncheck Run
project code in XAML designer (if supported).
5. In MainPage.xaml.cpp, add the following code to the Button_Click event handler that you just created. This
code retrieves the user's name from the nameInput TextBox control and uses it to create a greeting. The
greetingOutput TextBlock displays the result.

void HelloWorld::MainPage::Button_Click(Platform::Object^ sender, Windows::UI::Xaml::RoutedEventArgs^ e)


{
greetingOutput->Text = "Hello, " + nameInput->Text + "!";
}

6. Set the project as the startup, and then press F5 to build and run the app. When you type a name in the text
box and click the button, the app displays a personalized greeting.
Step 3: Style the start page
Choosing a theme
It's easy to customize the look and feel of your app. By default, your app uses resources that have a light style. The
system resources also include a dark theme. Let's try it out and see what it looks like.
To switch to the dark theme
1. Open App.xaml.
2. In the opening Application tag, edit the RequestedTheme property and set its value to Dark:

RequestedTheme="Light"

Here's the full Application tag with the dark theme:

<Application
x:Class="HelloWorld.App"
xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
xmlns:local="using:HelloWorld"
RequestedTheme="Dark">

3. Press F5 to build and run it. Notice that it uses the dark theme.
Which theme should you use? Whichever one you want. Here's our take: for apps that mostly display images or
video, we recommend the dark theme; for apps that contain a lot of text, we recommend the light theme. If you're
using a custom color scheme, use the theme that goes best with your app's look and feel. In the rest of this tutorial,
we use the Light theme in screenshots.
Note The theme is applied when the app is started and can't be changed while the app is running.
Using system styles
Right now, in the Windows app the text is very small and difficult to read. Let's fix that by applying a system style.
To change the style of an element
1. In the Windows project, open MainPage.xaml.
2. In either XAML or design view, select the "What's your name?" TextBlock that you added earlier.
3. In the Properties window (F4), choose the Properties button in the upper right.
4. Expand the Text group and set the font size to 18 px.
5. Expand the Miscellaneous group and find the Style property.
6. Click the property marker (the green box to the right of the Style property), and then, on the menu, choose
System Resource > BaseTextBlockStyle.
BaseTextBlockStyle is a resource that's defined in the ResourceDictionary in \Program Files\Windows
Kits\10\Include\winrt\xaml\design\generic.xaml.
On the XAML design surface, the appearance of the text changes. In the XAML editor, the XAML for the
TextBlock is updated:

<TextBlock Text="What's your name?" Style="{StaticResource BasicTextStyle}"/><

7. Repeat the process to set the font size and assign the BaseTextBlockStyle to the greetingOutput TextBlock
element.
Tip Although there's no text in this TextBlock, when you move the pointer over the XAML design surface, a
blue outline shows where it is so that you can select it.
Your XAML now looks like this:

<StackPanel x:Name="contentPanel" Margin="120,30,0,0">


<TextBlock Style="{ThemeResource BaseTextBlockStyle}" FontSize="16" Text="What's your name?"/>
<StackPanel x:Name="inputPanel" Orientation="Horizontal" Margin="0,20,0,20">
<TextBox x:Name="nameInput" Width="300" HorizontalAlignment="Left"/>
<Button x:Name="inputButton" Content="Say \"Hello\"" Click="Button_Click"/>
</StackPanel>
<TextBlock Style="{ThemeResource BaseTextBlockStyle}" FontSize="16" x:Name="greetingOutput"/>
</StackPanel>

8. Press F5 to build and run the app. It now looks like this:
Step 4: Adapt the UI to different window sizes
Now we'll make the UI adapt to different screen sizes so it looks good on mobile devices. To do this, you add a
VisualStateManager and set properties that are applied for different visual states.
To adjust the UI layout
1. In the XAML editor, add this block of XAML after the opening tag of the root Grid element.

<VisualStateManager.VisualStateGroups>
<VisualStateGroup>
<VisualState x:Name="wideState">
<VisualState.StateTriggers>
<AdaptiveTrigger MinWindowWidth="641" />
</VisualState.StateTriggers>
</VisualState>
<VisualState x:Name="narrowState">
<VisualState.StateTriggers>
<AdaptiveTrigger MinWindowWidth="0" />
</VisualState.StateTriggers>
<VisualState.Setters>
<Setter Target="contentPanel.Margin" Value="20,30,0,0"/>
<Setter Target="inputPanel.Orientation" Value="Vertical"/>
<Setter Target="inputButton.Margin" Value="0,4,0,0"/>
</VisualState.Setters>
</VisualState>
</VisualStateGroup>
</VisualStateManager.VisualStateGroups>

2. Debug the app on the local machine. Notice that the UI looks the same as before unless the window gets
narrower than 641 device-independent pixels (DIPs).
3. Debug the app on the mobile device emulator. Notice that the UI uses the properties you defined in the
narrowState and appears correctly on the small screen.
If you've used a VisualStateManager in previous versions of XAML, you might notice that the XAML here uses a
simplified syntax.
The VisualState named wideState has an AdaptiveTrigger with its MinWindowWidth property set to 641. This
means that the state is to be applied only when the window width is not less than the minimum of 641 DIPs. You
don't define any Setter objects for this state, so it uses the layout properties you defined in the XAML for the page
content.
The second VisualState, narrowState , has an AdaptiveTrigger with its MinWindowWidth property set to 0. This
state is applied when the window width is greater than 0, but less than 641 DIPs. (At 641 DIPs, the wideState is
applied.) In this state, you do define some Setter objects to change the layout properties of controls in the UI:
You reduce the left margin of the contentPanel element from 120 to 20.
You change the Orientation of the inputPanel element from Horizontal to Vertical.
You add a top margin of 4 DIPs to the inputButton element.
Summary
Congratulations, you've completed the first tutorial! It taught you how to add content to Windows Universal apps, how
to add interactivity to them, and how to change their appearance.

Next steps
If you have a Windows Universal app project that targets Windows 8.1 and/or Windows Phone 8.1, you can port it
to Windows 10. There is no automatic process for this, but you can do it manually. Start with a new Windows
Universal project to get the latest project system structure and manifest files, copy your code files into the project's
directory structure, add the items to your project, and rewrite your XAML using the VisualStateManager
according to the guidance in this topic. For more information, see Porting a Windows Runtime 8 project to a
Universal Windows Platform (UWP) project and Porting to the Universal Windows Platform (C++).
If you have existing C++ code that you want to integrate with a UWP app, such as to create a new UWP UI for an
existing application, see How to: Use existing C++ code in a Universal Windows project.
Get Started Tutorial: A UWP game in JavaScript

A simple 2D UWP game for the Windows Store, written in JavaScript


and CreateJS

Introduction
Publishing an app to the Windows Store means you can share it (or sell it!) with millions of people, on many
different devices.
In order to publish your app to the Windows Store it must be written as a UWP (Universal Windows Platform) app.
However, the UWP is extremely flexible, and supports a wide variety of languages and frameworks. To prove the
point, this sample is a simple game written in JavaScript, making use of several CreateJS libraries, and
demonstrates how to draw sprites, create a game loop, support the keyboard and mouse, and adapt to different
screen sizes.
This project is built with JavaScript using Visual Studio. With some minor changes, it can also be hosted on a website
or adapted to other platforms.
Note: This is not a complete (or good!) game; it is designed to demonstrate using JavaScript and a third-party
library to make an app ready to publish to the Windows Store.

Requirements
To play with this project, you'll need the following:
A Windows computer (or a virtual machine) running the current version of Windows 10.
A copy of Visual Studio. The free Visual Studio Community Edition can be downloaded from the Visual Studio
homepage.
This project makes use of the CreateJS JavaScript framework. CreateJS is a free set of tools, released under an MIT
license, designed to make it easy to create sprite-based games. The CreateJS libraries are already present in the
project (look for js/easeljs-0.8.2.min.js, and js/preloadjs-0.6.2.min.js in the Solution Explorer view). More
information about CreateJS can be found at the CreateJS home page.

Getting started
The complete source code for the app is stored on GitHub.
The simplest way to get started is to visit GitHub, click on the green Clone or download button, and select Open
in Visual Studio.

You can also download the project as a zip file, or use any other standard ways to work with GitHub projects.
Once the solution has been loaded into Visual Studio, you'll see several files, including:
Images/ - a folder containing the various icons required by UWP apps, as well as the game's SpriteSheet and
some other bitmaps.
js/ - a folder containing the JavaScript files. The main.js file is our game, the other files are EaselJS and
PreloadJS.
index.html - the webpage which contains the canvas object which hosts the game's graphics.
Now you can run the game!
Press F5 to start the app running. You should see a window open, and our familiar dinosaur standing in an idyllic (if
sparse) landscape. We will now examine the app, explain some important parts, and unlock the rest of the features
as we go.

Note: Did something go wrong? Be sure you have installed Visual Studio with web support. You can check by creating
a new project - if there is no support for JavaScript, you will need to re-install Visual Studio and check the Microsoft
Web Developer Tools box.
Walkthrough
If you started the game with F5, you're probably wondering what is going on. And the answer is "not a lot", as a lot
of the code is currently commented-out. So far, all you'll see is the dinosaur, and an ineffectual plea to press Space.
1. Setting the Stage
If you open and examine index.html, you'll see it's almost empty. This file is the default web page that contains
our app, and it does only two important things. First, it includes the JavaScript source code for the EaselJS and
PreloadJS CreateJS libraries, and also main.js (our own source code file). Second, it defines a <canvas> tag,
which is where all our graphics are going to appear. A <canvas> is a standard HTML5 document component. We
give it a name (gameCanvas) so our code in main.js can reference it. By the way, if you are going to write your
own JavaScript game from scratch, you too will need to copy the EaselJS and PreloadJS files into your solution,
and then create a canvas object.
EaselJS provides us with a new object called a stage. The stage is linked to the canvas, and is used for displaying
images and text. Any object we want to be displayed on the stage must first be added as a child of the stage, like
this:

stage.addChild(myObject);

You will see that line of code appear several times in main.js.
Speaking of which, now is a good time to open main.js.
2. Loading the bitmaps
EaselJS provides us with several different types of graphical objects. We can create simple shapes (such as the blue
rectangle used for the sky), or bitmaps (such as the clouds we're about to add), text objects, and sprites. Sprites use
a SpriteSheet (http://createjs.com/docs/easeljs/classes/SpriteSheet.html): a single bitmap containing multiple
images. For example, we use this SpriteSheet to store the different frames of dinosaur animation:

We make the dinosaur walk by defining the different frames and how fast they should be animated in this code:
// Define the animated dino walk using a spritesheet of images,
// and also a standing-still state, and a knocked-over state.
var data = {
images: [loader.getResult("dino")],
frames: { width: 373, height: 256 },
animations: {
stand: 0,
lying: {
frames: [0, 1],
speed: 0.1
},
walk: {
frames: [0, 1, 2, 3, 2, 1],
speed: 0.4
}
}
}

var spriteSheet = new createjs.SpriteSheet(data);


dino_walk = new createjs.Sprite(spriteSheet, "walk");
dino_stand = new createjs.Sprite(spriteSheet, "stand");
dino_lying = new createjs.Sprite(spriteSheet, "lying");

Right now, we're going to add some little fluffy clouds to the stage. Once the game is running, they'll drift across
the screen. The image for the cloud is already in the solution, in the images folder.
Look through main.js until you find the init() function. This is called when the game starts, and it's where we
begin to set up all our graphic objects.
Find the following code, and remove the comment characters (//) from the line that references the cloud image.

manifest = [
{ src: "walkingDino-SpriteSheet.png", id: "dino" },
{ src: "barrel.png", id: "barrel" },
{ src: "fluffy-cloud-small.png", id: "cloud" },
];

JavaScript needs a little help when it comes to loading resources such as images, and so we're using a feature of
the CreateJS library that can preload images, called a LoadQueue. We can't be sure how long it will take the
images to load, so we use the LoadQueue to take care of it. Once the images are available, the queue will tell us
they are ready. In order to do that, we first create a new object that lists all our images, and then we create a
LoadQueue object. You'll see in the code below how it is set up to call a function called loadingComplete() when
everything is ready.

// Now we create a special queue, and finally a handler that is


// called when they are loaded. The queue object is provided by preloadjs.

loader = new createjs.LoadQueue(false);


loader.addEventListener("complete", loadingComplete);
loader.loadManifest(manifest, true, "../images/");

When the function loadingComplete() is called, the images are loaded and ready to use. You'll see a
commented-out section that creates the clouds, now that their bitmap is available. Remove the comments, so it looks
like this:
// Create some clouds to drift by..
for (var i = 0; i < 3; i++) {
cloud[i] = new createjs.Bitmap(loader.getResult("cloud"));
cloud[i].x = Math.random()*1024; // Random start location
cloud[i].y = 64 + i * 48;
stage.addChild(cloud[i]);
}

This code creates three cloud objects each using our pre-loaded image, defines their location, and then adds them
to the stage.
Run the app again (press F5) and you'll see our clouds have appeared.
3. Moving the clouds
Now we're going to make the clouds move. The secret to moving clouds - and moving anything, in fact - is to set
up a ticker function that is repeatedly called multiple times a second. Every time this function is called, it redraws
the graphics in a slightly different place.
The code to do that is already in the main.js file, provided by the CreateJS library, EaselJS. It looks like this:

// Set up the game loop and keyboard handler.


// The keyword 'tick' is required to automatically animate the sprite.
createjs.Ticker.timingMode = createjs.Ticker.RAF;
createjs.Ticker.addEventListener("tick", gameLoop);

This code will call a function called gameLoop() between 30 and 60 frames a second. The exact speed depends on
the speed of your computer.
Look for the gameLoop() function, and down towards the end you'll see a function called animateClouds(). Edit
it so that it is not commented out.

// Move clouds
animateClouds();

If you look at the definition of this function, you'll see how it takes each cloud in turn, and changes its x coordinate.
If the x coordinate is off the side of the screen, it is reset to the far right. Each cloud also moves at a slightly different
speed.

function animateClouds()
{
// Move the cloud sprites across the sky. If they get to the left edge,
// move them over to the right.

for (var i = 0; i < 3; i++) {


cloud[i].x = cloud[i].x - (i+1);
if (cloud[i].x < -128)
cloud[i].x = width + 128;
}
}

If you run the app now, you'll see that the clouds have started drifting. Finally we have motion!
4. Adding keyboard and mouse input
A game that you can't interact with isn't a game. So let's allow the player to use the keyboard or the mouse to do
something. Back in the loadingComplete() function, you'll see the following. Remove the comments.
// This code will call the method 'keyboardPressed' if the user presses a key.
this.document.onkeydown = keyboardPressed;

// Add support for mouse clicks


stage.on("stagemousedown", mouseClicked);

We now have two functions being called whenever the player hits a key or clicks the mouse. Both events will call
userDidSomething(), a function which looks at the gamestate variable to decide what the game is currently
doing, and what needs to happen next as a result.
Gamestate is a common design pattern used in games. Everything that happens, happens in the gameLoop()
function called by the ticker timer. The gameLoop() keeps track of whether the game is playing, or in a "game over
state", or a "ready-to-play state", or any other states defined by the author, using a variable. This state variable is
tested in a switch statement, and that defines what other functions are called. So if the state is set to "playing", the
functions to make the dinosaur jump and make the barrels move will be called. If the dinosaur is killed by
something, the gamestate variable will be set to "game over state", and the "Game over!" message will be
displayed instead. If you are interested in game design patterns, the book Game Programming Patterns is very
helpful.
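The pattern itself is language-agnostic. As a point of comparison with the MonoGame tutorial later in this guide, here is the same idea sketched in C#; the enum and method names are hypothetical, not the names used in main.js:

enum GameState { ReadyToPlay, Playing, GameOver }

GameState gameState = GameState.ReadyToPlay;

// Called once per tick; the switch decides which logic runs this frame.
void GameLoop()
{
    switch (gameState)
    {
        case GameState.ReadyToPlay:
            // Wait for the player to press space or click.
            break;
        case GameState.Playing:
            // Animate the dinosaur and the barrels, and check for collisions.
            break;
        case GameState.GameOver:
            // Show the "Game over!" message and wait for a restart.
            break;
    }
}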
Try running the app again, and finally you'll be able to start playing. Press space (or click the mouse, or tap the
screen) to start things happening.
You'll see a barrel come rolling towards you: press space or click again at just the right time, and the dinosaur will
leap. Time it wrong, and your game is over.
The barrel is animated in the same way as the clouds (although it gets faster each time), and we check the position
of the dinosaur and the barrel to make sure they haven't collided:

// Very simple check for collision between dino and barrel


if ((barrel.x > 220 && barrel.x < 380)
&&
(!jumping))
{
barrel.x = 380;
GameState = GameStateEnum.GameOver;
}

If the dinosaur isn't jumping and the barrel is nearby, the code changes the state variable to the state we've called
GameOver. As you can imagine, GameOver stops the game.
And so the main mechanics of our game are complete.
5. Resizing support
We're almost done here! But before we stop, there is one annoying problem to take care of first. When the game is
running, try resizing the window. You'll see that the game quickly becomes very messed-up, as objects are no
longer where they should be. We can take care of that by creating a handler for the window resizing event
generated when the player resizes the window, or when the device is rotated from landscape to portrait.
The code to do this is already present (in fact, we call it when the game first starts, to make sure the default window
size works, because when a UWP app is launched, you can't be certain what size the window will be).
Just uncomment this line to call the function when the screen size event is fired:

// This code makes the app call the method 'resizeGameWindow' if the user resizes the current window.
window.addEventListener('resize', resizeGameWindow);
If you run the app again, you should now be able to resize the window and get better results.

Publishing to the Windows Store


Now that you have a UWP app, you can publish it to the Windows Store (assuming you have improved it first!).
There are a few steps to the process.
1. You must be registered as a Windows Developer.
2. You must use the app submission checklist.
3. The app must be submitted for certification.
For more details, see Publishing your Windows Store app.

Suggestions for other features


What next? Here are a few suggestions for features to add to your (soon to be) award-winning app.
1. Sound effects. The CreateJS library includes support for sound, with a library called SoundJS.
2. Gamepad support. The Windows.Gaming.Input API provides access to gamepads; see the sketch after this list.
3. Make it a much, much better game! That part is up to you, but there are lots of resources available online.
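To give an idea of what gamepad support involves, here is a minimal sketch of polling a controller with Windows.Gaming.Input (shown in C# for comparison with the MonoGame tutorial; the same WinRT API is also reachable from JavaScript):

// Requires: using Windows.Gaming.Input;
// Poll the first connected gamepad, if any. A real game would also
// subscribe to Gamepad.GamepadAdded and Gamepad.GamepadRemoved.
var gamepads = Gamepad.Gamepads;
if (gamepads.Count > 0)
{
    GamepadReading reading = gamepads[0].GetCurrentReading();
    bool aPressed = (reading.Buttons & GamepadButtons.A) == GamepadButtons.A;
}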

Other links
Make a simple Windows game with JavaScript
Picking an HTML/JS game engine
Using CreateJS in your JS based game
Game development courses on LinkedIn Learning
Get Started Tutorial: A UWP game in MonoGame 2D

A simple 2D UWP game for the Windows Store, written in C# and


MonoGame

Introduction
MonoGame is a lightweight game development framework. This tutorial will teach you the basics of game
development in MonoGame, including how to load content, draw sprites, animate them, and handle user input.
Some more advanced concepts like collision detection and scaling up for high-DPI screens are also discussed. This
tutorial takes 30-60 minutes.

Prerequisites
Windows 10 and Microsoft Visual Studio 2015. Click here to learn how to get set up with Visual Studio.
Basic knowledge of C# or a similar object-oriented programming language. Click here to learn how to get
started with C#.
Familiarity with basic computer science concepts like classes, methods, and variables is a plus.

Why MonoGame?
There's no shortage of options when it comes to game development environments. From full-featured engines like
Unity to comprehensive and complex multimedia APIs like DirectX, it can be hard to know where to start.
MonoGame is a set of tools, with a level of complexity falling somewhere between a game engine and a grittier API
like DirectX. It provides an easy-to-use content pipeline, and all the functionality required to create lightweight
games that run on a wide variety of platforms. Best of all, MonoGame apps are written in pure C#, and you can
distribute them quickly via the Windows Store or other similar distribution platforms.
Get the code
If you don't feel like working through the tutorial step-by-step and just want to see MonoGame in action, click here
to get the finished app.
Open the project in Visual Studio 2015, and press F5 to run the sample. The first time you do this may take a while,
as Visual Studio needs to fetch any NuGet packages that are missing from your installation.
If you've done this, skip the next section about setting up MonoGame to see a step-by-step walkthrough of the
code.
Note: The game created in this sample is not meant to be complete (or any fun at all). Its only purpose is to
demonstrate all the core concepts of 2D development in MonoGame. Feel free to use this code and make
something much better, or just start from scratch after you've mastered the basics!

Set up MonoGame project


1. Install MonoGame 3.6 for Visual Studio from MonoGame.net
2. Start Visual Studio 2015.
3. Go to File -> New -> Project
4. Under the Visual C# project templates, select MonoGame and MonoGame Windows 10 Universal
Project
5. Name your project "MonoGame2D" and select OK. With the project created, it will probably look like it is full
of errors; these should go away after you run the project for the first time, and any missing NuGet
packages are installed.
6. Make sure x86 and Local Machine are set as the target platform, and press F5 to build and run the empty
project. If you followed the steps above, you should see an empty blue window after the project finishes
building.

Method overview
Now that you've created the project, open the Game1.cs file from the Solution Explorer. This is where the bulk of the
game logic is going to go. Many crucial methods are automatically generated here when you create a new
MonoGame project. Let's quickly review them (a skeleton showing how they fit together follows this list):
public Game1() The constructor. We aren't going to change this method at all for this tutorial.
protected override void Initialize() Here we initialize any class variables that are used. This method is called
once at the start of the game.
protected override void LoadContent() This method loads content (e.g. textures, audio, fonts) into memory
before the game starts. Like Initialize, it's called once when the app starts.
protected override void UnloadContent() This method is used to unload non content-manager content. We
don't use this one at all.
protected override void Update(GameTime gameTime) This method is called once for every cycle of the
game loop. Here we update the states of any object or variable used in the game. This includes things like an
object's position, speed, or color. This is also where user input is handled. In short, this method handles every part of
the game logic except drawing objects on screen.
protected override void Draw(GameTime gameTime) This is where objects are drawn on the screen, using the
positions given by the Update method.
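Here is the skeleton: a sketch of how these generated methods fit together (the file Visual Studio actually generates contains more boilerplate than this):

using Microsoft.Xna.Framework;
using Microsoft.Xna.Framework.Graphics;

public class Game1 : Game
{
    GraphicsDeviceManager graphics;
    SpriteBatch spriteBatch;

    public Game1()
    {
        graphics = new GraphicsDeviceManager(this);
        Content.RootDirectory = "Content";
    }

    protected override void LoadContent()
    {
        // Runs once at startup: load textures, fonts, and sounds here.
        spriteBatch = new SpriteBatch(GraphicsDevice);
    }

    protected override void Update(GameTime gameTime)
    {
        // Runs every frame: read input and update positions, speeds, and colors.
        base.Update(gameTime);
    }

    protected override void Draw(GameTime gameTime)
    {
        // Runs every frame, after Update: draw everything at its new position.
        GraphicsDevice.Clear(Color.CornflowerBlue);
        base.Draw(gameTime);
    }
}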

Draw a sprite
So you've run your fresh MonoGame project and found a nice blue sky. Let's add some ground. In MonoGame, 2D
art is added to the app in the form of sprites. A sprite is just a computer graphic that is manipulated as a single
entity. Sprites can be moved, scaled, shaped, animated, and combined to create anything you can imagine in the 2D
space.
1. Download a texture
For our purposes, this first sprite is going to be extremely boring. Click here to download this featureless green
rectangle.
2. Add the texture to the Content folder
Open the Solution Explorer
Right-click Content.mgcb in the Content folder and select Open With. From the popup menu select
MonoGame Pipeline, and select OK.
In the new window, right-click the Content item and select Add -> Existing Item.
Locate and select the green rectangle in the file browser
Name the item grass.png and select Add.
3. Add class variables
To load this image as a sprite texture, open Game1.cs and add the following class variables.

const float SKYRATIO = 2f/3f;


float screenWidth;
float screenHeight;
Texture2D grass;

The SKYRATIO variable tells us how much of the scene we want to be sky versus grass; in this case, two-thirds.
screenWidth and screenHeight will keep track of the app window size, while grass is where we'll store our green
rectangle.
4. Initialize class variables and set window size
The screenWidth and screenHeight variables still need to be initialized, so add this code to the Initialize
method:

ApplicationView.PreferredLaunchWindowingMode = ApplicationViewWindowingMode.FullScreen;

screenHeight = (float)ApplicationView.GetForCurrentView().VisibleBounds.Height;
screenWidth = (float)ApplicationView.GetForCurrentView().VisibleBounds.Width;

this.IsMouseVisible = false;

Along with getting the screen's height and width, we also set the app's windowing mode to Fullscreen, and make
the mouse invisible.
5. Load the texture
To load the texture into the grass variable, add the following to the LoadContent method:

grass = Content.Load<Texture2D>("grass.png");

6. Draw the sprite


To draw the rectangle, add the following lines to the Draw method:
GraphicsDevice.Clear(Color.CornflowerBlue);
spriteBatch.Begin();
spriteBatch.Draw(grass, new Rectangle(0, (int)(screenHeight * SKYRATIO),
(int)screenWidth, (int)screenHeight), Color.White);
spriteBatch.End();

Here we use the spriteBatch.Draw method to place the given texture within the borders of a Rectangle object. A
Rectangle is defined by the x and y coordinates of its top-left corner, plus a width and a height. Using the screenWidth,
screenHeight, and SKYRATIO variables we defined earlier, we draw the green rectangle texture across the bottom
one-third of the screen. If you run the program now you should see the blue background from before, partially
covered by the green rectangle.

Scale to high DPI screens


If you're running Visual Studio on a high pixel-density monitor, like those found on a Surface Pro or Surface Studio,
you may find that the green rectangle from the steps above doesn't quite cover the bottom third of the screen. It's
probably floating above the bottom-left corner of the screen. To fix this and unify the experience of our game
across all devices, we will need to create a method that scales certain values relative to the screen's pixel density:

public float ScaleToHighDPI(float f)


{
DisplayInformation d = DisplayInformation.GetForCurrentView();
f *= (float)d.RawPixelsPerViewPixel;
return f;
}

Next replace the initializations of screenHeight and screenWidth in the Initialize method with this:

screenHeight = ScaleToHighDPI((float)ApplicationView.GetForCurrentView().VisibleBounds.Height);
screenWidth = ScaleToHighDPI((float)ApplicationView.GetForCurrentView().VisibleBounds.Width);

If you're using a high DPI screen and try to run the app now, you should see the green rectangle covering the
bottom third of the screen as intended.

Build the SpriteClass


Before we start animating sprites, we're going to make a new class called SpriteClass, which will let us reduce the
surface-level complexity of sprite manipulation.
1. Create a new class
In the Solution Explorer, right-click MonoGame2D (Universal Windows) and select Add -> Class. Name the
class SpriteClass.cs, then select Add.
2. Add class variables
Add this code to the class you just created:

public Texture2D texture
{
    get;
}

public float x
{
    get;
    set;
}

public float y
{
    get;
    set;
}

public float angle
{
    get;
    set;
}

public float dX
{
    get;
    set;
}

public float dY
{
    get;
    set;
}

public float dA
{
    get;
    set;
}

public float scale
{
    get;
    set;
}

Here we set up the class variables we need to draw and animate a sprite. The x and y variables represent the
sprite's current position on the plane, while the angle variable is the sprite's current angle of rotation. Note that
spriteBatch.Draw, which we'll call below, expects this angle in radians, with 0 being upright and positive values
rotating the sprite clockwise. It's important to note that, for this class, x and y represent the coordinates of the
center of the sprite (the default origin is the top-left corner). This makes rotating sprites easier, as they will rotate
around whatever origin they are given, and rotating around the center gives us a uniform spinning motion.
After this, we have dX, dY, and dA, which are the per-second rates of change for the x, y, and angle variables
respectively.
3. Create a constructor
When creating an instance of SpriteClass, we provide the constructor with the graphics device from Game1.cs, the
path to the texture relative to the project folder, and the desired scale of the texture relative to its original size. We'll
set the rest of the class variables after we start the game, in the Update method.

public SpriteClass (GraphicsDevice graphicsDevice, string texturePath, float scale)
{
    this.scale = scale;
    if (texture == null)
    {
        using (var stream = TitleContainer.OpenStream(texturePath))
        {
            texture = Texture2D.FromStream(graphicsDevice, stream);
        }
    }
}

4. Update and Draw


There are still a couple of methods we need to add to the SpriteClass declaration:

public void Update (float elapsedTime)
{
    this.x += this.dX * elapsedTime;
    this.y += this.dY * elapsedTime;
    this.angle += this.dA * elapsedTime;
}

public void Draw (SpriteBatch spriteBatch)
{
    Vector2 spritePosition = new Vector2(this.x, this.y);
    spriteBatch.Draw(texture, spritePosition, null, Color.White, this.angle,
        new Vector2(texture.Width / 2, texture.Height / 2), new Vector2(scale, scale),
        SpriteEffects.None, 0f);
}

The Update method of SpriteClass is called from the Update method of Game1.cs, and is used to update the
sprite's x, y, and angle values based on their respective rates of change.
The Draw method is called from the Draw method of Game1.cs, and is used to draw the sprite in the game window.

User input and animation


Now that we have SpriteClass built, we'll use it to create two new game objects. The first is an avatar that the player
can control with the arrow keys and the space bar. The second is an object that the player must avoid.
1. Get the textures
For the player's avatar we're going to use Microsoft's very own ninja cat, riding on his trusty T-rex. Click here to
download the image.
Now for the obstacle that the player needs to avoid. What do ninja cats and carnivorous dinosaurs both hate more
than anything? Eating their veggies! Click here to download the image.
Just as before with the green rectangle, add these images to Content.mgcb via the MonoGame Pipeline, naming
them ninja-cat-dino.png and broccoli.png respectively.
2. Add class variables
Add the following code to the list of class variables in Game1.cs:
SpriteClass dino;
SpriteClass broccoli;

bool spaceDown;
bool gameStarted;

float broccoliSpeedMultiplier;
float gravitySpeed;
float dinoSpeedX;
float dinoJumpY;
float score;

Random random;

dino and broccoli are our SpriteClass variables. dino will hold the player avatar, while broccoli holds the broccoli
obstacle.
spaceDown keeps track of whether the spacebar is being held down as opposed to pressed and released.
gameStarted tells us whether the user has started the game for the first time.
broccoliSpeedMultiplier determines how fast the broccoli obstacle moves across the screen.
gravitySpeed determines how fast the player avatar accelerates downward after a jump.
dinoSpeedX and dinoJumpY determine how fast the player avatar moves and jumps. score tracks how many
obstacles the player has successfully dodged.
Finally, random will be used to add some randomness to the behavior of the broccoli obstacle.
3. Initialize variables
Next we need to initialize these variables. Add the following code to the Initialize method:

broccoliSpeedMultiplier = 0.5f;
spaceDown = false;
gameStarted = false;
score = 0;
random = new Random();
dinoSpeedX = ScaleToHighDPI(1000f);
dinoJumpY = ScaleToHighDPI(-1200f);
gravitySpeed = ScaleToHighDPI(30f);

Note that the last three variables need to be scaled for high DPI devices, because they specify a rate of change in
pixels.
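As a worked example, on a device where RawPixelsPerViewPixel is 2.0 (a value typical of high-DPI screens, used
here only for illustration):

// ScaleToHighDPI(1000f) returns 2000f, so dinoSpeedX covers the same apparent
// distance per second on a high-DPI screen as 1000 view pixels do on a standard one.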
4. Construct SpriteClasses
We will construct SpriteClass objects in the LoadContent method. Add this code to what you already have there:

dino = new SpriteClass(GraphicsDevice, "Content/ninja-cat-dino.png", ScaleToHighDPI(1f));
broccoli = new SpriteClass(GraphicsDevice, "Content/broccoli.png", ScaleToHighDPI(0.2f));

The broccoli image is quite a lot larger than we want it to appear in the game, so we'll scale it down to 0.2 times its
original size.
5. Program obstacle behavior
We want the broccoli to spawn somewhere offscreen and head in the direction of the player's avatar, so the player
needs to dodge it. To accomplish this, add this method to the Game1.cs class:
public void SpawnBroccoli()
{
int direction = random.Next(1, 5);
switch (direction)
{
case 1:
broccoli.x = -100;
broccoli.y = random.Next(0, (int)screenHeight);
break;
case 2:
broccoli.y = -100;
broccoli.x = random.Next(0, (int)screenWidth);
break;
case 3:
broccoli.x = screenWidth + 100;
broccoli.y = random.Next(0, (int)screenHeight);
break;
case 4:
broccoli.y = screenHeight + 100;
broccoli.x = random.Next(0, (int)screenWidth);
break;
}

if (score % 5 == 0) broccoliSpeedMultiplier += 0.2f;

broccoli.dX = (dino.x - broccoli.x) * broccoliSpeedMultiplier;
broccoli.dY = (dino.y - broccoli.y) * broccoliSpeedMultiplier;
broccoli.dA = 7f;
}

The first part of the method determines what offscreen point the broccoli object will spawn from, using two
random numbers.
The second part determines how fast the broccoli will travel, which depends on the current score: it gets faster
for every five broccoli the player successfully dodges.
The third part sets the direction of the broccoli sprite's motion: it heads toward the player avatar (dino) at the
moment it spawns. We also give it a dA value of 7f, which will cause the broccoli to spin through the air as it
chases the player.
6. Program game starting state
Before we can move on to handling keyboard input, we need a method that sets the initial game state of the two
objects we've created. Rather than the game starting as soon as the app runs, we want the user to start it manually,
by pressing the spacebar. Add the following code, which sets the initial state of the animated objects and resets the
score:

public void StartGame()
{
    dino.x = screenWidth / 2;
    dino.y = screenHeight * SKYRATIO;
    broccoliSpeedMultiplier = 0.5f;
    SpawnBroccoli();
    score = 0;
}

7. Handle keyboard input
Next we need a new method to handle user input via the keyboard. Add this method to Game1.cs:
void KeyboardHandler()
{
    KeyboardState state = Keyboard.GetState();

    // Quit the game if Escape is pressed.
    if (state.IsKeyDown(Keys.Escape))
    {
        Exit();
    }

    // Start the game if Space is pressed.
    if (!gameStarted)
    {
        if (state.IsKeyDown(Keys.Space))
        {
            StartGame();
            gameStarted = true;
            spaceDown = true;
            gameOver = false;
        }
        return;
    }

    // Jump if Space is pressed
    if (state.IsKeyDown(Keys.Space) || state.IsKeyDown(Keys.Up))
    {
        // Jump if the Space is pressed but not held and the dino is on the floor
        if (!spaceDown && dino.y >= screenHeight * SKYRATIO - 1) dino.dY = dinoJumpY;

        spaceDown = true;
    }
    else spaceDown = false;

    // Handle left and right
    if (state.IsKeyDown(Keys.Left)) dino.dX = dinoSpeedX * -1;
    else if (state.IsKeyDown(Keys.Right)) dino.dX = dinoSpeedX;
    else dino.dX = 0;
}

Above we have a series of four if statements:
The first quits the game if the Escape key is pressed.
The second starts the game if the Space key is pressed and the game is not already started. (The gameOver flag it
resets is declared in the collision detection section later in this tutorial.)
The third makes the dino avatar jump when Space is pressed, by changing its dY property. Note that the player
cannot jump unless they are on the ground (dino.y >= screenHeight * SKYRATIO - 1), and will also not jump if the
space key is being held down rather than pressed once. This stops the dino from jumping as soon as the game is
started, piggybacking on the same keypress that starts the game.
Finally, the last if/else clause checks whether the left or right arrow keys are being pressed, and if so changes the
dino's dX property accordingly.
Challenge: can you make the keyboard handling method above work with the WASD input scheme as well as the
arrow keys?
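If you want a hint, here is one possible approach, sketched under the assumption that you reuse the existing
KeyboardState object; Keys.W, Keys.A, and Keys.D come from the same MonoGame Keys enum used above:

// Read WASD as aliases for the arrow keys inside KeyboardHandler.
bool jumpKey = state.IsKeyDown(Keys.Space) || state.IsKeyDown(Keys.Up) || state.IsKeyDown(Keys.W);
bool leftKey = state.IsKeyDown(Keys.Left) || state.IsKeyDown(Keys.A);
bool rightKey = state.IsKeyDown(Keys.Right) || state.IsKeyDown(Keys.D);

// Then use jumpKey, leftKey, and rightKey in place of the individual
// IsKeyDown checks in the jump and movement logic above, for example:
if (leftKey) dino.dX = dinoSpeedX * -1;
else if (rightKey) dino.dX = dinoSpeedX;
else dino.dX = 0;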
8. Add logic to the Update method
Next we need to add logic for all of these parts to the Update method in Game1.cs:
float elapsedTime = (float)gameTime.ElapsedGameTime.TotalSeconds;
KeyboardHandler(); // Handle keyboard input

// Update animated SpriteClass objects based on their current rates of change
dino.Update(elapsedTime);
broccoli.Update(elapsedTime);

// Accelerate the dino downward each frame to simulate gravity.
dino.dY += gravitySpeed;

// Set a game floor so the player does not fall through it
if (dino.y > screenHeight * SKYRATIO)
{
    dino.dY = 0;
    dino.y = screenHeight * SKYRATIO;
}

// Set game edges to prevent the player from moving offscreen
if (dino.x > screenWidth - dino.texture.Width / 2)
{
    dino.x = screenWidth - dino.texture.Width / 2;
    dino.dX = 0;
}
if (dino.x < 0 + dino.texture.Width / 2)
{
    dino.x = 0 + dino.texture.Width / 2;
    dino.dX = 0;
}

// If the broccoli goes offscreen, spawn a new one and increment the score
if (broccoli.y > screenHeight + 100 || broccoli.y < -100 || broccoli.x > screenWidth + 100 || broccoli.x < -100)
{
    SpawnBroccoli();
    score++;
}

9. Draw SpriteClass objects


Finally, add the following code to the Draw method of Game1.cs, just after the last call of spriteBatch.Draw:

broccoli.Draw(spriteBatch);
dino.Draw(spriteBatch);

In MonoGame, new calls to spriteBatch.Draw draw over any prior calls. This means that both the broccoli and the
dino sprites will be drawn over the existing grass sprite, so they can never be hidden behind it regardless of their
position.
Try running the game now, moving the dino around with the arrow keys and jumping with the spacebar. If you
followed the steps above, you should be able to make your avatar move within the game window, and the broccoli
should chase your avatar at an ever-increasing speed.
Render text with SpriteFont
Using the code above, we keep track of the player's score behind the scenes, but we don't actually tell the player
what it is. We also have a fairly unintuitive introduction when the app starts up: the player sees a blue and green
window, but has no way of knowing they need to press Space to get things rolling.
To fix both of these problems, we're going to use a new kind of MonoGame object called a SpriteFont.
1. Create SpriteFont description files
In the Solution Explorer, find the Content folder. In this folder, right-click the Content.mgcb file and select
Open With. From the popup menu, select MonoGame Pipeline, then press OK. In the new window, right-click
the Content item and select Add -> New Item. Select SpriteFont Description, name it Score, and press OK.
Then, add another SpriteFont description named GameState using the same procedure.
2. Edit descriptions
Right-click the Content folder in the MonoGame Pipeline and select Open File Location. You should see a
folder with the SpriteFont description files that you just created, as well as any images you've added to the Content
folder so far. You can now close and save the MonoGame Pipeline window. From the File Explorer, open both
description files in a text editor (Visual Studio, Notepad++, Atom, etc.).
Each description contains a number of values that describe the SpriteFont. We're going to make a few changes:
In Score.spritefont, change the Size value from 12 to 36.
In GameState.spritefont, change the Size value from 12 to 72, and the FontName value from Arial to Agency.
Agency is another font that comes standard with Windows 10 machines, and will add some flair to our intro screen.
3. Load SpriteFonts
Back in Visual Studio, we're first going to add a new texture for the intro splash screen. Click here to download the
image.
As before, add the texture to the project by right-clicking the Content item and selecting Add -> Existing Item.
Name the new item start-splash.png.
Next, add the following class variables to Game1.cs:

Texture2D startGameSplash;
SpriteFont scoreFont;
SpriteFont stateFont;

Then add these lines to the LoadContent method:


startGameSplash = Content.Load<Texture2D>("start-splash");
scoreFont = Content.Load<SpriteFont>("Score");
stateFont = Content.Load<SpriteFont>("GameState");

4. Draw the score


Go to the Draw method of Game1.cs and add the following code just before spriteBatch.End();

spriteBatch.DrawString(scoreFont, score.ToString(),
new Vector2(screenWidth - 100, 50), Color.Black);

The code above uses the sprite font description we created (Arial, size 36) to draw the player's current score near
the top-right corner of the screen.
5. Draw horizontally centered text
When making a game, you will often want to draw text that is centered, either horizontally or vertically. To
horizontally center the introductory text, add this code to the Draw method just before spriteBatch.End();

if (!gameStarted)
{
    // Draw the splash texture over the whole screen before the game starts
    spriteBatch.Draw(startGameSplash, new Rectangle(0, 0,
        (int)screenWidth, (int)screenHeight), Color.White);

    String title = "VEGGIE JUMP";
    String pressSpace = "Press Space to start";

    // Measure the size of text in the given font
    Vector2 titleSize = stateFont.MeasureString(title);
    Vector2 pressSpaceSize = stateFont.MeasureString(pressSpace);

    // Draw the text horizontally centered
    spriteBatch.DrawString(stateFont, title,
        new Vector2(screenWidth / 2 - titleSize.X / 2, screenHeight / 3),
        Color.ForestGreen);
    spriteBatch.DrawString(stateFont, pressSpace,
        new Vector2(screenWidth / 2 - pressSpaceSize.X / 2,
        screenHeight / 2), Color.White);
}

First we create two Strings, one for each line of text we want to draw. Next, we measure the width and height of
each line when printed, using the SpriteFont.MeasureString(String) method. This gives us the size as a Vector2
object, with the X property containing its width, and Y its height.
Finally, we draw each line. To center the text horizontally, we make the X value of its position vector equal to
screenWidth / 2 - textSize.X / 2
Challenge: how would you change the procedure above to center the text vertically as well as horizontally?
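If you want a hint, here is a sketch of one approach: MeasureString already gives you the text's height in the Y
component of the returned Vector2, so you can subtract half of it from the vertical midpoint, just as we did with
the width:

// Center the title both horizontally and vertically.
spriteBatch.DrawString(stateFont, title,
    new Vector2(screenWidth / 2 - titleSize.X / 2,
        screenHeight / 2 - titleSize.Y / 2),
    Color.ForestGreen);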
Try running the game. Do you see the intro splash screen? Does the score count up each time the broccoli
respawns?
Collision detection
So we have a broccoli that follows you around, and we have a score that ticks up each time a new one spawns, but
as it stands there is no way to actually lose this game. We need a way to know if the dino and broccoli sprites
collide and, when they do, to declare the game over.
1. Rectangular collision
When detecting collisions in a game, objects are often simplified to reduce the complexity of the math involved. We
are going to treat both the player avatar and broccoli obstacle as rectangles for the purpose of detecting collision
between them.
Open SpriteClass.cs and add a new class variable:

const float HITBOXSCALE = .5f;

This value will represent how forgiving the collision detection is for the player. With a value of .5f, the edges of
the rectangle in which the dino can collide with the broccoli (often called the hitbox) will be half of the full size of
the texture. This will result in fewer instances where the corners of the two textures collide without any parts of the
images actually appearing to touch. Feel free to tweak this value to your personal taste.
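To make that concrete, here is a worked example, assuming a hypothetical 200 x 100 pixel texture drawn at a
scale of 1f:

// Hitbox half-width  = texture.Width  * scale * HITBOXSCALE / 2 = 200 * 1 * .5 / 2 = 50 px
// Hitbox half-height = texture.Height * scale * HITBOXSCALE / 2 = 100 * 1 * .5 / 2 = 25 px
// So a collision registers only within 50 px horizontally and 25 px vertically of the sprite's center.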
Next, add a new method to SpriteClass.cs:

public bool RectangleCollision(SpriteClass otherSprite)
{
    if (this.x + this.texture.Width * this.scale * HITBOXSCALE / 2 < otherSprite.x - otherSprite.texture.Width * otherSprite.scale / 2) return false;
    if (this.y + this.texture.Height * this.scale * HITBOXSCALE / 2 < otherSprite.y - otherSprite.texture.Height * otherSprite.scale / 2) return false;
    if (this.x - this.texture.Width * this.scale * HITBOXSCALE / 2 > otherSprite.x + otherSprite.texture.Width * otherSprite.scale / 2) return false;
    if (this.y - this.texture.Height * this.scale * HITBOXSCALE / 2 > otherSprite.y + otherSprite.texture.Height * otherSprite.scale / 2) return false;
    return true;
}

This method detects whether two rectangular objects have collided. The algorithm works by testing to see if there
is a gap between any of the sides of the rectangles. If there is any gap, there is no collision; if no gap exists, there
must be a collision.
2. Load new textures
As before, add an image for the game over screen to the Content folder via the MonoGame Pipeline, naming it
game-over.png. Then, open Game1.cs and add two new class variables, one to store the game over sprite texture,
and a Boolean to track the game's state:
Texture2D gameOverTexture;
bool gameOver;

Then, initialize gameOver in the Initialize method:

gameOver = false;

Finally, load the texture into gameOverTexture in the LoadContent method:

gameOverTexture = Content.Load<Texture2D>("game-over");

3. Implement game over logic


Add this code to the Update method, just after the KeyboardHandler method is called:

if (gameOver)
{
dino.dX = 0;
dino.dY = 0;
broccoli.dX = 0;
broccoli.dY = 0;
broccoli.dA = 0;
}

This will cause all motion to stop when the game has ended, freezing the dino and broccoli sprites in their current
positions.
Next, at the end of the Update method, just before base.Update(gameTime), add this line:

if (dino.RectangleCollision(broccoli)) gameOver = true;

This calls the RectangleCollision method we created in SpriteClass, and flags the game as over if it returns true.
4. Add user input for resetting the game
Add this code to the KeyboardHandler method to allow the user to reset the game if they press Enter:

if (gameOver && state.IsKeyDown(Keys.Enter))
{
    StartGame();
    gameOver = false;
}

5. Draw game over splash and text


Finally, add this code to the Draw method, just after the first call of spriteBatch.Draw (this should be the call that
draws the grass texture):
if (gameOver)
{
    // Draw game over texture
    spriteBatch.Draw(gameOverTexture, new Vector2(screenWidth / 2 - gameOverTexture.Width / 2,
        screenHeight / 4 - gameOverTexture.Height / 2), Color.White);

    String pressEnter = "Press Enter to restart!";

    // Measure the size of text in the given font
    Vector2 pressEnterSize = stateFont.MeasureString(pressEnter);

    // Draw the text horizontally centered
    spriteBatch.DrawString(stateFont, pressEnter,
        new Vector2(screenWidth / 2 - pressEnterSize.X / 2, screenHeight - 200), Color.White);
}

Here we use the same technique as before to draw horizontally centered text (reusing the font we used for the
intro splash), and center gameOverTexture in the top half of the window.
And we're done! Try running the game again. If you followed the steps above, the game should now end when the
dino collides with the broccoli, and the player should be prompted to restart the game by pressing the Enter key.

Publish to the Windows Store


Because we built this game as a UWP app, it is possible to publish this project to the Windows Store. There are a
few steps to the process.
You must be registered as a Windows Developer.
You must use the app submission checklist.
The app must be submitted for certification.
For more details, see Publishing your Windows Store app.
Plan your Universal Windows Platform (UWP) app

On Microsoft design teams, our process for creating apps consists of five distinct stages: concept, structure,
dynamics, visual, and prototype. We encourage you to adopt a similar process and have fun making new
experiences for the world to enjoy.

Concept
Focus your app
When planning your Universal Windows Platform (UWP) app, you should determine not only what your app will do
and who it's for, but also what your app will be great at. At the core of every great app is a strong concept that
provides a solid foundation.
Say you want to create a photo app. Thinking about the reasons users work with, save, and share their photos,
you'll realize that they want to relive memories, connect with others through the photos, and keep the photos safe.
These, then, are the things that you want the app to be great at, and you use these experience goals to guide you
through the rest of the design process.
What's your app about? Start with a broad concept and list all of the things that you want to help users do with
your app.
For example, suppose you want to build an app that helps people plan their trips. Here are some ideas you might
sketch out on the back of a napkin:
Get maps of all the places on an itinerary, and take them with you on the trip.
Find out about special events happening while you're in a city.
Let travel buddies create separate but shareable lists of must-do activities and must-see attractions.
Let travel buddies compile all of their photos to share with friends and family.
Get recommended destinations based on flight prices.
Find a consolidated list of deals for restaurants, shops, and activities around your destination.

What's your app great at? Take a step back and look at your list of ideas to see if a particular scenario really
jumps out at you. Challenge yourself to trim the list to just a single scenario that you want to focus on. In the
process, you might cross off many good ideas, but saying "no" to them is crucial to making a single scenario great.
After you choose a single scenario, decide how you would explain to an average person what your app is great at
by writing it down in one sentence. For example:
My travel app is great at helping friends create itineraries collaboratively for group trips.
My workout app is great at letting friends track their workout progress and share their achievements with each
other.
My grocery app is great at helping families coordinate their weekly grocery shopping so they never miss or
duplicate a purchase.

This is your app's "great at" statement, and it can guide many design decisions and tradeoffs that you make as you
build your app. Focus on the scenarios you want users to experience in your app, and be careful not to turn this into
a feature list. It should be about what your users will be able to do, as opposed to what your app will be able to do.
The design funnel
It's very tempting, having thought of an idea you like, to go ahead and develop it, perhaps even taking it quite a
way into production. But let's say you do that and then another interesting idea comes along. It's natural that you'll
be tempted to stick with the idea you've already invested in, regardless of the relative merits of the two ideas. If only
you'd thought of that other idea earlier in the process! Well, the design funnel is a technique to help uncover your
best ideas as early as possible.
The term "funnel" comes from its shape. At the wide end of the funnel, many ideas go in and each one is realized as
a very low-fidelity design artifact (a sketch, perhaps, or a paragraph of text). As this collection of ideas travels
through toward the narrow end of the funnel, the number of ideas is trimmed down while the fidelity of the
artifacts representing the ideas increases. Each artifact should capture only the information necessary to judge one
idea against another, or to answer a particular question such as "is this usable, or intuitive?". Put no more time and
effort into each than that. Some ideas will fall by the wayside as you test them, and you'll be okay with that because
you won't be invested in them any more than was necessary to judge the idea. Ideas that survive to move further
into the funnel will receive successively higher-fidelity treatments. In the end, you'll have a single design artifact that
represents the winning idea. This is the idea that won because of its merits, not merely because it came along first.
You will have designed the best app you could.

Structure
Organization makes everything easier
When you're happy with your concept, you're ready for the next stage: creating your app's blueprint. Information
architecture (IA) gives your content the structural integrity it needs. It helps define your app's navigational model
and, ultimately, your app's identity. By planning how your content will be organized, and how your users will
discover that content, you can get a better idea of how users will experience your app.
Good IA not only facilitates user scenarios, but it helps you envision the key screens to start with. The Audible app,
for example, launches directly into a hub that provides access to the user's library, store, news, and stats. The
experience is focused, so users can get and enjoy audiobooks quickly. Deeper levels of the app focus on more
specific tasks.
For related guidelines, see Navigation design basics.

Dynamics
Execute your concept
If the concept stage is about defining your app's purpose, the dynamics stage is all about executing that purpose.
This can be accomplished in many ways, such as using wireframes to sketch out your page flows (how users get
from one place to the next within the app to achieve their goals), and thinking about the voice and the words used
throughout your app's UI. Wireframes are a quick, low-fidelity tool to help you make critical decisions about your
app's user flow.
Your app flow should be tightly tied to your "great at" statement, and should help users achieve that single scenario
that you want to light up. Great apps have flows that are easy to learn and require minimal effort. Start thinking on
a screen-to-screen level: see your app as if you're using it for the first time. When you pinpoint user scenarios for the
pages you create, you'll give people exactly what they want without lots of unnecessary screen touches. Dynamics
are also about motion. The right motion capabilities will determine fluidity and ease of use from one page to the
next.
Common techniques to help with this step:
Outline the flow: What comes first, what comes next?
Storyboard the flow: How should users move through your UI to complete the flow?
Prototype: Try out the flow with a quick prototype.
What should users be able to do? For example, the travel app is "great at helping friends collaboratively create
itineraries for group trips." Let's list the flows that we want to enable:
Create a trip with general information.
Invite friends to join a trip.
Join a friend's trip.
See itineraries recommended by other travelers.
Add destinations and activities to trips.
Edit and comment on destinations and activities that friends added.
Share itineraries for friends and families to follow.

Visual
Speak without words

Once you've established the dynamics of your app, you can make your app shine with the right visual polish. Great
visuals define not only how your app looks, but how it feels and comes alive through animation and motion. Your
choice of color palette, icon, and artwork are just a few examples of this visual language.
All apps have their own unique identity, so explore the visual directions you can take with your app. Let the content
guide the look and feel; don't let the look dictate your content.

Prototype
Refine your masterpiece
Prototyping is a stage in the design funnel (a technique we talked about earlier) at which the artifact representing
your idea develops into something more than a sketch, but less complicated than a complete app. A prototype
might be a flow of hand-drawn screens shown to a user. The person running the test might respond to cues from
the user by placing different screens down, or sticking or unsticking smaller pieces of UI on the pages, to simulate a
running app. Or, a prototype might be a very simple app that simulates some workflows, provided the operator
sticks to a script and pushes the right buttons. At this stage, your ideas begin to really come alive and your hard
work is tested in earnest. When prototyping areas of your app, take the time to sculpt and refine the components
that need it the most.
To new developers, we can't stress enough: Making great apps is an iterative process. We recommend that you
prototype early and often. Like any creative endeavor, the best apps are the product of intensive trial and error.

Decide what features to include


When you know what your users want and how you can help them get there, you can look at the specific tools in
your toolbox. Explore the Universal Windows Platform (UWP) and associate features with your app's needs. Be sure
to follow the user experience (UX) guidelines for each feature.
Common techniques:
Platform research: Find out what features the platform offers and how you can use them.
Association diagrams: Connect your flows with features.
Prototype: Exercise the features to ensure that they do what you need.
App contracts Your app can participate in app contracts that enable broad, cross-app, cross-feature user flows.
Share Let your users share content from your app with other people through other apps, and receive shareable
content from other people and apps, too.
Play To Let your users enjoy audio, video, or images streamed from your app to other devices in their home
network.
File picker and file picker extensions Let your users load and save their files from the local file system,
connected storage devices, HomeGroup, or even other apps. You can also provide a file picker extension so
other apps can load your app's content.
For more info, see App contracts and extensions.
Different views, form factors, and hardware configurations Windows puts users in charge and your app in the
forefront. You want your app UI to shine on any device, using any input mode, in any orientation, in any hardware
configuration, and in whatever circumstance the user decides to use it.
Touch first Windows provides a unique and distinctive touch experience that does more than simply emulate
mouse functionality.
For example, semantic zoom is a touch-optimized way to navigate through a large set of content. Users can pan or
scroll through categories of content, and then zoom in on those categories to view more and more detailed
information. You can use this to present your content in a more tactile, visual, and informative way than with
traditional navigation and layout patterns like tabs.
Of course, you can take advantage of a number of touch interactions, like rotate, pan, swipe, and others. Learn more
about Touch and other user interactions.
Engaging and fresh Be sure your app feels fresh and engages users with these standard experiences:
Animations Use our library of animations to make your app fast and fluid for your users. Help users
understand context changes and tie experiences together with visual transitions. Learn more about animating
your UI.
Toast notifications Let your users know about time-sensitive or personally relevant content through toast
notifications, and invite them back to your app even when your app is closed. Learn more about tiles, badges,
and toast notifications.
App tiles Provide fresh and relevant updates to entice users back into your app. There's more info about this in
the next section. Learn more about app tiles.
Personalization
Settings Let your users create the experience they want by saving app settings. Consolidate all of your settings
on one screen, and then users can configure your app through a common mechanism that they are already
familiar with. Learn more about Adding app settings.
Roaming Create a continuous experience across devices by roaming data that lets users pick up a task right
where they left off and preserves the UX they care most about, regardless of the device they're using. Make it
easy to use your app anywhere (their kitchen family PC, their work PC, their personal tablet, and other form
factors) by maintaining settings and states with roaming. Learn more about Managing application data and see
Guidelines for roaming application data. A brief sketch of roaming a setting follows this list.
User tiles Make your app more personal to your users by loading their user tile image, or let the users set
content from your app as their personal tile throughout Windows.
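Picking up the Roaming idea above, here is a minimal sketch; ApplicationData.Current.RoamingSettings is a real
WinRT API, while the key and value are hypothetical examples for the travel app:

using Windows.Storage;

// Store a small setting in the roaming data store; Windows syncs it across
// the user's devices so the app can resume where they left off.
var roaming = ApplicationData.Current.RoamingSettings;
roaming.Values["lastTripId"] = "hypothetical-trip-42";

// On another device, read it back (false/absent if it has not roamed yet).
roaming.Values.TryGetValue("lastTripId", out object lastTrip);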
Device capabilities Be sure your app takes full advantage of the capabilities of today's devices.
Proximity gestures Let your users connect devices with other users who are physically in close proximity, by
physically tapping the devices together (multiplayer games). Learn more about proximity and tapping.
Cameras and external storage devices Connect your users to their built-in or plugged-in cameras for
chatting and conferencing, recording vlogs, taking profile pics, documenting the world around them, or
whatever activity your app is great at. Learn more about Accessing content on removable storage.
Accelerometers and other sensors Devices come with a number of sensors nowadays. Your app can dim or
brighten the display based on ambient light, reflow the UI if the user rotates the display, or react to any physical
movement. Learn more about sensors.
Geolocation Use geolocation information from standard web data or from geolocation sensors to help your
users get around, find their position on a map, or get notices about nearby people, activities, and destinations.
Learn more about geolocation.
Let's consider the travel app example again. To be great at helping friends collaboratively create itineraries for
group trips, you could use some of these features, just to name a few:
Share: Users share upcoming trips and their itineraries to multiple social networks to share the pre-trip
excitement with their friends and families.
Search: Users search for and find activities or destinations from others' shared or public itineraries that they can
include in their own trips.
Notifications: Users are notified when travel companions update their itineraries.
Settings: Users configure the app to their preference, like which trip should bring up notifications or which social
groups are allowed to search the users' itineraries.
Semantic zoom: Users navigate through the timeline of their itinerary and zoom in to see greater details of the
long list of activities they've planned.
User tiles: Users choose the picture they want to appear when they share their trip with friends.

Decide how to monetize your app


You have a lot of options for earning money from your app. If you decide to use in-app ads or sales, you'll want to
design your UI to support that. For more information, see Plan for monetization.

Design the UX for your app


This is about getting the basics right. Now that you know what your app is great at, and you've figured out the
flows that you want to support, you can start to think about the fundamentals of user experience (UX) design.
How should you organize UI content? Most app content can be organized into some form of groupings or
hierarchies. What you choose as the top-level grouping of your content should match the focus of your "great at"
statement.
To use the travel app as an example, there are multiple ways to group itineraries. If the focus of the app is
discovering interesting destinations, you might group them based on interest, like adventure, fun in the sun, or
romantic getaways. However, because the focus of the app is planning trips with friends, it makes more sense to
organize itineraries based on social circles, like family, friends, or work.
Choosing how you want to group your content helps you decide what pages or views you need in your app. See UI
basics for more info.
How should you present UI content? After you've decided how to organize your UI, you can define UX goals that
specify how your UI gets built and presented to your user. In any scenario, you want to make sure that your user
can continue using and enjoying your app as quickly as possible. To do this, decide what parts of your UI need to be
presented first, and make sure that those parts are complete before you spend time building the noncritical parts.
In the travel app, probably the first thing the user will want to do in the app is find a specific trip itinerary. To
present this info as fast as possible, you should show the list of trips first, using a ListView control.
After showing the trips list, you could start loading other features, like a news feed of their friends' trips.
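As a rough sketch of that first screen, assuming a ListView named TripList defined in the page's XAML and a
hypothetical Trip class, populating the list is a single assignment:

using System.Collections.Generic;

// Hypothetical sample data; in a real app this would come from storage or a service.
var trips = new List<Trip>
{
    new Trip { Name = "Napa with friends" },
    new Trip { Name = "Family reunion road trip" }
};

// ListView.ItemsSource accepts any IEnumerable, so the control renders one
// item per trip as soon as this line runs.
TripList.ItemsSource = trips;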
What UI surfaces and commands do you need? Review the flows that you identified earlier. For each flow,
create a rough outline of the steps users take.
Let's look at the "Share itineraries for friends and families to follow" flow. We'll assume that the user has already
created a trip. Sharing a trip itinerary might require these steps:
1. The user opens the app and sees a list of trips she created.
2. The user taps on the trip she wants to share.
3. The details of the trip appear on screen.
4. The user accesses some UI to initiate sharing.
5. The user selects or enters the email address or name of the friend she wants to share the trip with.
6. The user accesses some UI to finalize sharing.
7. Your app updates the trip details with the list of people she has shared her trip with.
During this process, you begin to see what UI you need to create and the additional details you need to figure out
(like drafting a standard email boilerplate for friends who aren't using your app yet). You also can start eliminating
unnecessary steps. Perhaps the user doesn't actually need to see the details of the trip before sharing, for example.
The cleaner the flow, the easier to use.
For more details on how to use different surfaces, take a look at Command design basics.
What should the flow feel like? When you have defined the steps your user will take, you can turn that flow into
performance goals. For more info, see Plan for performance.
How should you organize commands? Use your outline of the flow steps to identify potential commands that
you need to design for. Then think about where to use those commands in your app.
Always try to use the content. Whenever possible, let users directly manipulate the content on the app's
canvas, rather than adding commands that act on the content. For example, in the travel app, let users rearrange
their itinerary by dragging and dropping activities in a list on the canvas, rather than by selecting the activity and
using Up or Down command buttons.
If you can't use the content. Place commands on one of these UI surfaces if you are not able to use the
content:
In the command bar: You should put most commands on the command bar, which is usually hidden until
the user taps to make it visible.
On the app's canvas: If the user is on a page or view that has a single purpose, you can provide
commands for that purpose directly on the canvas. There should be very few of these commands.
In a context menu: You can use context menus for clipboard actions (such as cut, copy, and paste), or for
commands that apply to content that cannot be selected (like adding a push pin to a location on a map).
Decide how to lay out your app in each view. Windows supports landscape and portrait orientations and
supports resizing apps to any width, from full screen to a minimum width. You want your app to look and work
great at any size, on any screen, in either orientation. This means you need to plan the layout of your UI elements
for different sizes and views. When you do this, your app UI changes fluidly to meet your user's needs and
preferences.

For more info on designing for different screen sizes, see Screen sizes and breakpoints.

Make a good first impression


Think about what you want users to think, feel, or do when they first launch your app. Refer back to your "great at"
statement. Even though you won't get a chance to personally tell your users what your app is great at, you can
convey the message to them when you make your first impression. Take advantage of these:
Tile and notifications The tile is the face of your app. Among the many other apps on a user's Start screen, what
will make the user want to launch your app? Be sure your tile highlights your app's brand and shows what the app
is great at. Use tile notifications so your app will always feel fresh and relevant, bringing the user back to your app
again and again.
Splash screen The splash screen should load as fast as possible, and remain on the screen only as long as you
need to initialize your app state. What you show on the splash screen should express your app's personality.
First launch Before users sign up for your service, log in to their account, or add their own content, what will they
see? Try to demonstrate the value of your app before asking users for information. Consider showing sample
content so people can look around and understand what your app does before you ask them to commit.
Home page The home page is where you bring users each time they launch your app. The content here should
have a clear focus, and immediately showcase what your app is tailored to do. Make this page great at one thing
and trust that people will explore the rest of your app. Focus on eliminating distractions on the landing page, and
not on discoverability.

Validate your design


Before you get too far into developing your app, you should validate your design or prototype against guidelines,
user impressions, and requirements to avoid having to rework it later. Each feature has a set of UX guidelines to
help you polish your app, and a set of Store requirements that you must meet to publish your app in the Windows
Store. You can use the Windows App Certification Kit to test for technical compliance with Store requirements. You
can also use the performance tools in Microsoft Visual Studio to make sure that you're giving your users a great
experience in every scenario.
Use the detailed UX guidelines for UWP apps to stay focused on important features. Use the Visual Studio
performance tools to analyze the performance of each of your app's scenarios.

What's next?
So you want to write an app and publish it to the Windows Store: where do you start? If you're completely new to
the UWP platform, try some of the Channel 9 videos and Microsoft Virtual Academy and LinkedIn Learning
courses. If you are already familiar with Windows development, you can start reading through the topics below, or
go straight to downloading some samples.
There are many tools and frameworks available to help you write apps, and many support cross-platform
development. For example, if you want to write 2D games, you might want to look at MonoGame or some of the
many JavaScript/HTML frameworks. For 3D games, there's Unity, and don't forget Xamarin if your focus is mobile
devices.
If you want to get started writing something that isn't a game, our recommendation is that you look through the
UWP topics to get a feel for the platform, and then investigate creating your user interface by using, and then
customizing, XAML controls. You'll use XAML to design your app (here's a tutorial that will walk you through it), but
XAML's main strength is the use of data binding, which couples the controls to the information your app wants to
display: if you are new to the Windows platform, this will be an important concept to understand.

UWP and the UWP app Lifecycle


How does an app start, and what happens when you start another one? Here's the story.
Guide to Universal Windows Platform (UWP) apps
UWP app lifecycle
What's cool in Windows 10

UX and UI
What controls do you have at your disposal, and how can they be used? These topics explain how controls and
code work together, and how you can customize them to suit the look of your app.
Design and UI
Define page layouts with XAML
Controls by function
Intro to controls and patterns
Styling controls
Screen sizes and break points for responsive design
Use the UWP Community Toolkit for a selection of prebuilt controls and patterns
Data and Services
Learn about data binding, which lets your code automatically populate lists and grids. Discover how to link to
external resources to get data into your apps.
Data binding
ListViews, GridViews and data binding
Data access

Publishing
Share your work with the world and make money. We'll walk you through the process of getting your app onto the
Store.
Publish Windows apps
Packaging apps

Other resources
Samples, tutorials, videos, other tools and SDKs. Take it to the next level.
How-to articles
Code samples
C# reference
API Reference
Writing apps for Xbox One
Developing for HoloLens
Porting apps to Windows 10
Writing apps for the Enterprise
The UWP Community Toolkit

Windows Developer Blog


The Windows Developer Blog includes regular postings on the latest in coding techniques, project ideas, and tools.
Here are some you might find useful as you explore Windows development.
Animations with the Visual layer
Interop between XAML and the Visual layer
Creating beautiful effects for UWP
Beautiful apps made possible and easy with Windows.UI
Polishing your app with animation and audio cues
Adding color to your design

Finding help in the Dev Center


The docs.microsoft.com site contains a multitude of documentation for many different tools, frameworks, and
platforms. When you are browsing for topics and samples, make sure you are reading UWP-specific content. You'll
find the UWP reference starts at the Windows Dev Center, and the API reference you need is at Develop UWP apps.
When reading content that is specifically for UWP, the URL path will contain uwp, and so will the path displayed at
the top of the page.
When using a search engine, appending "Windows app development" to your search string will more often than
not lead you to UWP content.

Important Dev Center topics


Here is a list of the key sections of content in the Dev Center.

Design Design guidelines for UWP apps.

Develop Detailed info and coding examples for the many of the features available to your app.

Language reference The programming languages available for UWP development.

Games Developing games with DirectX.

Internet of Things Building your own connected devices.

Porting Leverage your Android and iOS skills to quickly make UWP apps.

Windows Bridges Tools for updating older apps and iOS apps to UWP.

Xamarin Use C# to write apps for iOS, Android and Windows 10.

Task snippets Ready-to-use code that accomplishes small but useful tasks.

How-to topics Sample code covering specific UWP features.

Hardware Hardware for developers from the Microsoft Store.


Get the Universal Windows Platform (UWP) samples
from GitHub

The UWP app samples are available through repositories on GitHub. If this is your first time working with UWP,
you'll want to start with the Microsoft/Windows-universal-samples repository, which contains samples that
demonstrate all of the UWP features and their API usage patterns.

Additional samples can be found using the Samples section of the Dev Center.

Download the code


To download the samples, go to the repository and select Clone or download, then Download ZIP. Or, just click
here.
The zip file will always have the latest samples. You don't need a GitHub account to download it. When an SDK
update is released, or if you want to pick up any recent changes and additions, just check back for the latest zip file.
Note: The UWP samples require Visual Studio 2015 and the Windows SDK to open, build, and run. If you don't
have Visual Studio already installed, you can get a free copy of Visual Studio 2015 Community Edition with
support for building UWP apps here.
Also, be sure to unzip the entire archive, and not just individual samples. The samples all depend on the
SharedContent folder in the archive. The UWP feature samples use Linked files in Visual Studio to reduce
duplication of common files, including sample template files and image assets. These common files are stored
in the SharedContent folder at the root of the repository, and are referred to in the project files using links.

After you download the zip file, open the samples in Visual Studio:
1. Before you unzip the archive, right-click it and select Properties > Unblock > Apply. Then, unzip the archive to
a local folder on your machine.
2. Within the samples folder, you'll see a number of folders, each of which contains a UWP feature sample.
3. Select a sample, such as Altimeter, and you'll see multiple folders indicating the languages supported.
4. Select the language you'd like to use, such as CS for C#, and you'll see a Visual Studio solution file, which
you can open in Visual Studio.
Give feedback, ask questions, and report issues
If you have problems or questions, just use the Issues tab on the repository to create a new issue and well do what
we can to help.
Layout for UWP apps

App structure, page layout, and navigation are the foundation of your app's user experience. The articles in this
section help you create an app that is easy to navigate and looks great on a variety of devices and screen sizes.

Intro
Intro to app UI design
When you design a UWP app, you create a user interface that suits a variety of devices with different display
sizes. This article provides an overview of UI-related features and benefits of UWP apps and some tips & tricks
for designing a responsive UI.

App layout and structure


Check out these recommendations for structuring your app and using the three types of UI elements: navigation,
command, and content.

Navigation basics
Navigation in UWP apps is based on a flexible model of navigation structures, navigation elements, and system-
level features. This article introduces you to these components and shows you how to use them together to
create a good navigation experience.

Content basics
The main purpose of any app is to provide access to content: in a photo-editing app, the photo is the content; in
a travel app, maps and info about travel destinations is the content; and so on. This article provides content
design recommendations for the three content scenarios: consumption, creation, and interaction.

Command basics
Command elements are the interactive UI elements that enable the user to perform actions, such as sending an
email, deleting an item, or submitting a form. This article describes the command elements, such as buttons and
check boxes, the interactions they support, and the command surfaces (such as command bars and context
menus) for hosting them.

Page layout
These articles help you create a flexible UI that looks great on different screen sizes, window sizes, resolutions, and
orientations.
Screen sizes and breakpoints
The number of device targets and screen sizes across the Windows 10 ecosystem is too great to worry about
optimizing your UI for each one. Instead, we recommend designing for a few key widths (also called
"breakpoints"): 360, 640, 1024, and 1366 epx.

Define layouts with XAML


How to use XAML properties and layout panels to make your app responsive and adaptive.

Layout panels
Learn about each type of layout panel and how to use them to lay out XAML UI elements.

Alignment, margins, and padding


In addition to dimension properties (width, height, and constraints), elements can also have alignment, margin,
and padding properties that influence the layout behavior when an element goes through a layout pass and is
rendered in a UI.
Introduction to UWP app design

A Universal Windows Platform (UWP) app can run on any Windows-based device, from your phone to your tablet
or PC.

Designing an app that looks good on such a wide variety of devices can be a big challenge. So how do you go
about designing an app that provides a great UX on devices with dramatically different screen sizes and input
methods? Fortunately, the Universal Windows Platform (UWP) provides a set of built-in features and universal
building blocks that help you do just that.

This article describes the UI features and benefits of UWP apps and provides some high-level design guidance for
creating your first UWP app. Let's start by taking a look at some of the features that you get when you create a
UWP app.

UWP app features


Effective pixels and scaling
UWP apps automatically adjust the size of controls, fonts, and other UI elements so that they are legible on all
devices.
When your app runs on a device, the system uses an algorithm to normalize the way UI elements display on the
screen. This scaling algorithm takes into account viewing distance and screen density (pixels per inch) to optimize
for perceived size (rather than physical size). The scaling algorithm ensures that a 24 px font on a Surface Hub 10
feet away is just as legible to the user as a 24 px font on a 5-inch phone that's a few inches away.

Because of how the scaling system works, when you design your UWP app, you're designing in effective pixels, not
actual physical pixels. So, how does that impact the way you design your app?
You can ignore the pixel density and the actual screen resolution when designing. Instead, design for the
effective resolution (the resolution in effective pixels) for a size class (for details, see the Screen sizes and
breakpoints article).
When the system scales your UI, it does so by multiples of 4. To ensure a crisp appearance, snap your
designs to the 4x4 pixel grid: make margins, sizes, and positions of UI elements, and the position (but not the
size; text can be any size) of text, a multiple of 4 effective pixels.
This illustration shows design elements that map to the 4x4 pixel grid. The design element will always have crisp,
sharp edges.

The next illustration shows design elements that don't map to the 4x4 grid. These design elements will have blurry,
soft edges on some devices.
TIP
When creating screen mockups in image editing programs, set the DPI to 72 and set the image dimensions to the effective
resolution for the size class you're targeting. (For a list of size classes and effective resolutions, see the Recommendations for
specific size classes section of this article.)

Universal input and smart interactions


Another built-in capability of the UWP is universal input enabled via smart interactions. Although you can design
your apps for specific input modes and devices, you aren't required to. That's because Universal Windows apps by
default rely on smart interactions. That means you can design around a click interaction without having to know or
define whether the click comes from an actual mouse click or the tap of a finger.
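For example, a single event handler covers every pointer type. A minimal sketch, assuming a Button named
playButton in your XAML and a hypothetical StartGame method:

// The Click event is raised for mouse, touch, pen, and keyboard activation alike,
// so the app never needs to ask which input device produced it.
playButton.Click += (sender, e) =>
{
    StartGame(); // hypothetical: respond identically regardless of input mode
};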
Universal controls and styles
The UWP also provides some useful building blocks that make it easier to design apps for multiple device families.
Universal controls
The UWP provides a set of universal controls that are guaranteed to work well on all Windows-powered
devices. This set of universal controls includes everything from common form controls like radio button and
text box to sophisticated controls like grid view and list view that can generate lists of items from a stream
of data and a template. These controls are input-aware and deploy with the proper set of input affordances,
event states, and overall functionality for each device family.
For a complete list of these controls and the patterns you can make from them, see the Controls and
patterns section.
Universal styles
Your UWP app automatically gets a default set of styles that gives you these features:
A set of styles that automatically gives your app a light or dark theme (your choice) and can
incorporate the user's accent color preference.
A Segoe-based type ramp that ensures that app text looks crisp on all devices.
Default animations for interactions.
Automatic support for high-contrast modes. Our styles were designed with high-contrast in mind, so
when your app runs on a device in high-contrast mode, it will display properly.
Automatic support for other languages. Our default styles automatically select the correct font for every
language that Windows supports. You can even use multiple languages in the same app and they'll be
displayed properly.
Built-in support for RTL reading order.
You can customize these default styles to give your app a personal touch, or you can completely replace
them with your own to create a unique visual experience. For example, here's a design for a weather app
with a unique visual style:

Now that we've described the building blocks of UWP apps, let's take a look at how to put them together to create
a UI.

The anatomy of a typical UWP app


A modern user interface is a complex thing, made up of text, shapes, colors, and animations, which are ultimately
made up of individual pixels on the screen of the device you're using. When you start designing a user
interface, the sheer number of choices can be overwhelming.
To make things simpler, let's define the anatomy of an app from a design perspective. Let's say that an app is made
up of screens and pages. Each page has a user interface, made up of three types of UI elements: navigation,
commanding, and content elements.

Navigation elements
Navigation elements help users choose the content they
want to display. Examples of navigation elements include
tabs and pivots, hyperlinks, and nav panes.
Navigation elements are covered in detail in the
Navigation design basics article.
Command elements
Command elements initiate actions, such as
manipulating, saving, or sharing content. Examples of
command elements include button and the command
bar. Command elements can also include keyboard
shortcuts that aren't actually visible on the screen.
Command elements are covered in detail in the
Command design basics article.
Content elements
Content elements display the app's content. For a
painting app, the content might be a drawing; for a news
app, the content might be a news article.
Content elements are covered in detail in the Content
design basics article.

At a minimum, an app has a splash screen and a home page that defines the user interface. A typical app will have
multiple pages and screens, and navigation, command, and content elements might change from page to page.
When deciding on the right UI elements for your app, you might also consider the devices and the screen sizes
your app will run on.

Tailoring your app for specific devices and screen sizes


UWP apps use effective pixels to guarantee that your design elements will be legible and usable on all Windows-
powered devices. So, why would you ever want to customize your app's UI for a specific device family?
Note
Before we go any further: Windows doesn't provide a way for your app to detect the specific device your app is
running on. It can tell you the device family (mobile, desktop, and so on) the app is running on, the effective
resolution, and the amount of screen space available to the app (the size of the app's window).
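For instance, here's a short C# sketch of reading that information. AnalyticsInfo reports the device family as a string such as "Windows.Desktop" or "Windows.Mobile", and Window.Current.Bounds gives the app window's size in effective pixels:

using Windows.System.Profile;
using Windows.UI.Xaml;

// The device family, for example "Windows.Desktop" or "Windows.Mobile".
string deviceFamily = AnalyticsInfo.VersionInfo.DeviceFamily;

// The screen space available to the app, in effective pixels.
var bounds = Window.Current.Bounds;
double effectiveWidth = bounds.Width;
double effectiveHeight = bounds.Height;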
To make the most effective use of space and reduce the need to navigate
If you design an app to look good on a device that has a small screen, such as a phone, the app will be
usable on a PC with a much bigger display, but there will probably be some wasted space. You can
customize the app to display more content when the screen is above a certain size. For example, a shopping
app might display one merchandise category at a time on a phone, but show multiple categories and
products simultaneously on a PC or laptop.
By putting more content on the screen, you reduce the amount of navigation that the user needs to perform.
To take advantage of devices' capabilities
Certain devices are more likely to have certain device capabilities. For example, phones are likely to have a
location sensor and a camera, while a PC might not have either. Your app can detect which capabilities are
available and enable features that use them.
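For example, here's a small sketch (to run inside an async method) that checks for a camera before enabling a capture feature:

using Windows.Devices.Enumeration;

// Look for video capture devices; enable camera features only if one exists.
var cameras = await DeviceInformation.FindAllAsync(DeviceClass.VideoCapture);
bool hasCamera = cameras.Count > 0;
// hasCamera now gates any camera-dependent UI or functionality.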
To optimize for input
The universal control library works with all input types (touch, pen, keyboard, mouse), but you can still
optimize for certain input types by re-arranging your UI elements. For example, if you place navigation
elements at the bottom of the screen, they'll be easier for phone users to access, but most PC users expect
to see navigation elements toward the top of the screen.

Responsive design techniques


When you optimize your app's UI for specific screen widths, we say that you're creating a responsive design. Here
are six responsive design techniques you can use to customize your app's UI.
Reposition
You can alter the location and position of app UI elements to get the most out of each device. In this example, the
portrait view on phone or phablet necessitates a scrolling UI because only one full frame is visible at a time. When
the app translates to a device that allows two full on-screen frames, whether in portrait or landscape orientation,
frame B can occupy a dedicated space. If you're using a grid for positioning, you can stick to the same grid when UI
elements are repositioned.

This example design for a photo app repositions its content on larger screens.
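One way to implement repositioning, sketched below with illustrative element names, is a RelativePanel whose attached properties are swapped by AdaptiveTriggers as the window width changes:

<Grid>
    <VisualStateManager.VisualStateGroups>
        <VisualStateGroup>
            <!-- Narrow windows: stack frame B below frame A. -->
            <VisualState>
                <VisualState.StateTriggers>
                    <AdaptiveTrigger MinWindowWidth="0"/>
                </VisualState.StateTriggers>
                <VisualState.Setters>
                    <Setter Target="FrameB.(RelativePanel.Below)" Value="FrameA"/>
                </VisualState.Setters>
            </VisualState>
            <!-- Wider windows: give frame B dedicated space beside frame A. -->
            <VisualState>
                <VisualState.StateTriggers>
                    <AdaptiveTrigger MinWindowWidth="720"/>
                </VisualState.StateTriggers>
                <VisualState.Setters>
                    <Setter Target="FrameB.(RelativePanel.RightOf)" Value="FrameA"/>
                </VisualState.Setters>
            </VisualState>
        </VisualStateGroup>
    </VisualStateManager.VisualStateGroups>
    <RelativePanel>
        <Border x:Name="FrameA" Width="320" Height="240"/>
        <Border x:Name="FrameB" Width="320" Height="240"/>
    </RelativePanel>
</Grid>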
Resize
You can optimize the frame size by adjusting the margins and size of UI elements. This could allow you, as the
example here shows, to augment the reading experience on a larger screen by simply growing the content frame.

Reflow
By changing the flow of UI elements based on device and orientation, your app can offer an optimal display of
content. For instance, when going to a larger screen, it might make sense to switch to larger containers, add columns,
and generate list items in a different way.
This example shows how a single column of vertically scrolling content on phone or phablet can be reflowed on a
larger screen to display two columns of text.
Reveal
You can reveal UI based on screen real estate, or when the device supports additional functionality, specific
situations, or preferred screen orientations.
In this example with tabs, the middle tab with the camera icon might be specific to the app on phone or phablet
and not be applicable on larger devices, which is why it's revealed in the device on the right. Another common
example of revealing or hiding UI applies to media player controls, where the button set is reduced on smaller
devices and expanded on larger devices. The media player on PC, for instance, can handle far more on-screen
functionality than it can on a phone.

Part of the reveal-or-hide technique includes choosing when to display more metadata. When real estate is at a
premium, such as with a phone or phablet, it's best to show a minimal amount of metadata. With a laptop or
desktop PC, a significant amount of metadata can be surfaced. Some examples of how to handle showing or hiding
metadata include:
In an email app, you can display the user's avatar.
In a music app, you can display more info about an album or artist.
In a video app, you can display more info about a film or a show, such as showing cast and crew details.
In any app, you can break apart columns and reveal more details.
In any app, you can take something that's vertically stacked and lay it out horizontally. When going from phone
or phablet to larger devices, stacked list items can change to reveal rows of list items and columns of metadata.
Replace
This technique lets you switch the user interface for a specific device size-class or orientation. In this example, the
nav pane and its compact, transient UI works well for a smaller device, but on a larger device tabs might be a better
choice.

Re-architect
You can collapse or fork the architecture of your app to better target specific devices. In this example, going from
the left device to the right device demonstrates the joining of pages.
Here's an example of this technique applied to the design for a smart home app.
Related articles
What's a UWP app?
Navigation design basics for UWP apps

Navigation in Universal Windows Platform (UWP) apps is based on a flexible model of navigation structures,
navigation elements, and system-level features. Together, they enable a variety of intuitive user experiences for
moving between apps, pages, and content.
In some cases, you might be able to fit all of your app's content and functionality onto a single page without
requiring the user to do anything more than pan to navigate through that content. However, the majority of apps
typically have multiple pages of content and functionality with which to explore, engage, and interact. When an
app has more than one page, you need to provide the right navigation experience.
To be successful and make sense to users, multi-page navigation experiences in UWP apps include (described in
detail later):
The right navigation structure
Building a navigation structure that makes sense to the user is crucial to creating an intuitive navigation
experience.
Compatible navigation elements that support the chosen structure.
Navigation elements can help the user get to the content they want and can also let users know where
they are within the app. However, they also take up space that could be used for content or commanding
elements, so it's important to use the navigation elements that are right for your app's structure.
Appropriate responses to system-level navigation features (such as Back)
To provide a consistent experience that feels intuitive, respond to system-level navigation features in
predictable ways.

Build the right navigation structure


Let's look at an app as a collection of groups of pages, with each page containing a unique set of content or
functionality. For example, a photo app might have a page for taking photos, a page for image editing, and
another page for managing your image library. The way you arrange these pages into groups defines the app's
navigation structure. There are two common ways to arrange a group of pages:

In a hierarchy: Pages are organized into a tree-like structure. Each child page has only one parent, but a parent can have one or more child pages. To reach a child page, you travel through the parent.
As peers: Pages exist side-by-side. You can go from one page to another in any order.
A typical app will use both arrangements, with some portions being arranged as peers and some portions being
arranged into hierarchies.

So, when should you arrange pages into hierarchies and when should you arrange them as peers? To answer
that question we must consider the number of pages in the group, whether the pages should be traversed in a
particular order, and the relationship between the pages. In general, flatter structures are easier to understand
and faster to navigate, but sometimes it's appropriate to have a deep hierarchy.
We recommend using a hierarchical relationship when
You expect the user to traverse the pages in a specific order. Arrange the hierarchy to enforce that order.
There is a clear parent-child relationship between one of the pages and the other pages in the group.
There are more than 7 pages in the group.
When there are more than 7 pages in the group, it might be difficult for users to understand how the
pages are unique or to understand their current location within the group. If you don't think that's an
issue for your app, go ahead and make the pages peers. Otherwise, consider using a hierarchical
structure to break the pages into two or more smaller groups. (A hub control can help you group
pages into categories.)
We recommend using a peer relationship when
The pages can be viewed in any order.
The pages are clearly distinct from each other and don't have an obvious parent/child relationship.
There are fewer than 8 pages in the group.


Use the right navigation elements


Navigation elements can provide two services: they help the user get to the content they want, and some
elements also let users know where they are within the app. However, they also take up space that the app could
use for content or commanding elements, so it's important to use the navigation elements that are just right for
your app's structure.
Peer-to-peer navigation elements
Peer-to-peer navigation elements enable navigation between pages in the same level of the same subtree.

For peer-to-peer navigation, we recommend using tabs or a navigation pane.


Tabs and pivot: Displays a persistent list of links to pages at the same level. Use tabs/pivots when:
There are 2-5 pages. (You can use tabs/pivots when there are more than 5 pages, but it might be difficult to fit all the tabs/pivots on the screen.)
You expect users to switch between pages frequently.
This design for a restaurant-finding app uses tabs/pivots:

Nav pane: Displays a list of links to top-level pages. Use a navigation pane when:
You don't expect users to switch between pages frequently.
You want to conserve space at the expense of slowing down navigation operations.
The pages exist at the top level.
This design for a smart home app features a nav pane:

If your navigation structure has multiple levels, we recommend that peer-to-peer navigation elements only link
to the peers within their current subtree. Consider the following illustration, which shows a navigation structure
that has three levels:

For level 1, the peer-to-peer navigation element should provide access to pages A, B, C, and D.
At level 2, the peer-to-peer navigation elements for the A2 pages should only link to the other A2 pages. They
should not link to level 2 pages in the C subtree.
Hierarchical navigation elements
Hierarchical navigation elements provide navigation between a parent page and its child pages.

Hub: A hub is a special type of navigation control that provides previews/summaries of its child pages. Unlike the navigation pane or tabs, it provides navigation to these child pages through links and section headers embedded in the page itself. Use a hub when:
You expect that users would want to view some of the content of the child pages without having to navigate to each one.
Hubs promote discovery and exploration, which makes them well suited for media, news-reader, and shopping apps.

Master/details: Displays a list (master view) of item summaries. Selecting an item displays its corresponding page in the details section. Use the master/details element when:
You expect users to switch between child items frequently.
You want to enable the user to perform high-level operations, such as deleting or sorting, on individual items or groups of items, and also want to enable the user to view or update the details for each item.
Master/details elements are well suited for email inboxes, contact lists, and data entry.
This design for a stock-tracking app makes use of a master/details pattern:

Historical navigation elements

Back: Lets the user traverse the navigation history within an app and, depending on the device, from app to app. For more info, see the Navigation history and backwards navigation article.

Content-level navigation elements

Hyperlinks and buttons: Content-embedded navigation elements appear in a page's content. Unlike other navigation elements, which should be consistent across the page's group or subtree, content-embedded navigation elements are unique from page to page.

Combining navigation elements


You can combine navigation elements to create a navigation experience that's right for your app. For example,
your app might use a nav pane to provide access to top-level pages and tabs to provide access to second-level
pages.
Navigation history and backwards navigation for
UWP apps

On the Web, individual web sites provide their own navigation systems, such as tables of contents, buttons, menus,
simple lists of links, and so on. The navigation experience can vary wildly from website to website. However, there
is one consistent navigation experience: back. Most browsers provide a back button that behaves the same way
regardless of the website.
For similar reasons, the Universal Windows Platform (UWP) provides a consistent back navigation system for
traversing the user's navigation history within an app and, depending on the device, from app to app.
The UI for the system back button is optimized for each form factor and input device type, but the navigation
experience is global and consistent across devices and UWP apps.
Here are the primary form factors with the back button UI:

Phone: Always present. A software or hardware button at the bottom of the device. Provides global back navigation within the app and between apps.

Tablet: Always present in Tablet mode; not available in Desktop mode, where the title bar back button can be enabled instead (see PC, Laptop, Tablet). Users can switch between Tablet mode and Desktop mode by going to Settings > System > Tablet mode and setting Make Windows more touch-friendly when using your device as a tablet. A software button in the navigation bar at the bottom of the device. Provides global back navigation within the app and between apps.

PC, Laptop, Tablet: Optional in Desktop mode; not available in Tablet mode (see Tablet). Disabled by default; you must opt in to enable it. Users can switch between Tablet mode and Desktop mode by going to Settings > System > Tablet mode and setting Make Windows more touch-friendly when using your device as a tablet. A software button in the title bar of the app. Provides back navigation within the app only; does not support app-to-app navigation.

Surface Hub: Optional. Disabled by default; you must opt in to enable it. A software button in the title bar of the app. Provides back navigation within the app only; does not support app-to-app navigation.
Here are some alternative input types that don't rely on a back button UI, but still provide the exact same functionality.

Input devices

Keyboard: Windows key + Backspace
Cortana: Say, "Hey Cortana, go back"
When your app runs on a phone, tablet, or on a PC or laptop that has system back enabled, the system notifies
your app when the back button is pressed. The user expects the back button to navigate to the previous location in
the app's navigation history. It's up to you to decide which navigation actions to add to the navigation history and
how to respond to the back button press.

How to enable system back navigation support


Apps must enable back navigation for all hardware and software system back buttons. Do this by registering a
listener for the BackRequested event and defining a corresponding handler.
Here we register a global listener for the BackRequested event in the App.xaml code-behind file. You can register
for this event in each page if you want to exclude specific pages from back navigation, or you want to execute
page-level code before displaying the page.
Windows.UI.Core.SystemNavigationManager.GetForCurrentView().BackRequested +=
App_BackRequested;

Windows::UI::Core::SystemNavigationManager::GetForCurrentView()->
    BackRequested += ref new Windows::Foundation::EventHandler<
        Windows::UI::Core::BackRequestedEventArgs^>(
            this, &App::App_BackRequested);

Here's the corresponding BackRequested event handler that calls GoBack on the root frame of the app.
This handler is invoked on a global back event. If the in-app back stack is empty, the system might navigate to the
previous app in the app stack or to the Start screen. There is no app back stack in Desktop mode and the user stays
in the app even when the in-app back stack is depleted.

private void App_BackRequested(object sender,
    Windows.UI.Core.BackRequestedEventArgs e)
{
    Frame rootFrame = Window.Current.Content as Frame;
    if (rootFrame == null)
        return;

    // Navigate back if possible, and if the event has not
    // already been handled.
    if (rootFrame.CanGoBack && e.Handled == false)
    {
        e.Handled = true;
        rootFrame.GoBack();
    }
}

void App::App_BackRequested(
    Platform::Object^ sender,
    Windows::UI::Core::BackRequestedEventArgs^ e)
{
    Frame^ rootFrame = dynamic_cast<Frame^>(Window::Current->Content);
    if (rootFrame == nullptr)
        return;

    // Navigate back if possible, and if the event has not
    // already been handled.
    if (rootFrame->CanGoBack && e->Handled == false)
    {
        e->Handled = true;
        rootFrame->GoBack();
    }
}

How to enable the title bar back button


Devices that support Desktop mode (typically PCs and laptops, but also some tablets) don't provide a global
navigation bar with the system back button when they're running in Desktop mode (controlled through Settings >
System > Tablet mode).
In Desktop mode, every app runs in a window with a title bar. You can provide an alternative back button for your
app that is displayed in this title bar.
The title bar back button is only available in apps running on devices in Desktop mode, and it only supports in-app
navigation history; it does not support app-to-app navigation history.
Important The title bar back button is not displayed by default. You must opt in.

Desktop mode, no back navigation. Desktop mode, back navigation enabled.

Override the OnNavigatedTo event and set AppViewBackButtonVisibility to Visible in the code-behind file
for each page that you want to enable the title bar back button.
For this example, we list each page in the back stack and enable the back button if the CanGoBack property of the
frame has a value of true.

protected override void OnNavigatedTo(NavigationEventArgs e)
{
    Frame rootFrame = Window.Current.Content as Frame;

    string myPages = "";

    foreach (PageStackEntry page in rootFrame.BackStack)
    {
        myPages += page.SourcePageType.ToString() + "\n";
    }
    stackCount.Text = myPages;

    if (rootFrame.CanGoBack)
    {
        // Show UI in title bar if opted-in and in-app back stack is not empty.
        SystemNavigationManager.GetForCurrentView().AppViewBackButtonVisibility =
            AppViewBackButtonVisibility.Visible;
    }
    else
    {
        // Remove the UI from the title bar if in-app back stack is empty.
        SystemNavigationManager.GetForCurrentView().AppViewBackButtonVisibility =
            AppViewBackButtonVisibility.Collapsed;
    }
}
void StartPage::OnNavigatedTo(NavigationEventArgs^ e)
{
    auto rootFrame = dynamic_cast<Windows::UI::Xaml::Controls::Frame^>(Window::Current->Content);

    Platform::String^ myPages = "";

    if (rootFrame == nullptr)
        return;

    for each (PageStackEntry^ page in rootFrame->BackStack)
    {
        myPages += page->SourcePageType.ToString() + "\n";
    }
    stackCount->Text = myPages;

    if (rootFrame->CanGoBack)
    {
        // If we have pages in our in-app back stack and have opted in to showing back, do so
        Windows::UI::Core::SystemNavigationManager::GetForCurrentView()->AppViewBackButtonVisibility =
            Windows::UI::Core::AppViewBackButtonVisibility::Visible;
    }
    else
    {
        // Remove the UI from the title bar if there are no pages in our in-app back stack
        Windows::UI::Core::SystemNavigationManager::GetForCurrentView()->AppViewBackButtonVisibility =
            Windows::UI::Core::AppViewBackButtonVisibility::Collapsed;
    }
}

Guidelines for custom back navigation behavior


If you choose to provide your own back stack navigation, the experience should be consistent with other apps. We
recommend the following patterns for navigation actions:

Page to page, different peer groups: Yes.
In this illustration, the user navigates from level 1 of the app to level 2, crossing peer groups, so the navigation is added to the navigation history.
In the next illustration, the user navigates between two peer groups at the same level, again crossing peer groups, so the navigation is added to the navigation history.

Page to page, same peer group, no on-screen navigation element: Yes.
The user navigates from one page to another within the same peer group. There is no navigation element that is always present (such as tabs/pivots or a docked navigation pane) that provides direct navigation to both pages. In the following illustration, the user navigates between two pages in the same peer group without such an element, so the navigation is added to the navigation history.

Page to page, same peer group, with an on-screen navigation element: No.
The user navigates from one page to another in the same peer group, and both pages are shown in the same navigation element. For example, both pages use the same tabs/pivots element, or both pages appear in a docked navigation pane. When the user presses back, go back to the last page before the user navigated to the current peer group.

Show a transient UI: No.
The app displays a pop-up or child window, such as a dialog, splash screen, or on-screen keyboard, or the app enters a special mode, such as multiple selection mode. When the user presses the back button, dismiss the transient UI (hide the on-screen keyboard, cancel the dialog, and so on) and return to the page that spawned the transient UI.

Enumerate items: No.
The app displays content for an on-screen item, such as the details for the selected item in a master/details list. Enumerating items is similar to navigating within a peer group. When the user presses back, navigate to the page that preceded the current page that has the item enumeration.

Resuming
When the user switches to another app and returns to your app, we recommend returning to the last page in the
navigation history.
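If your app is terminated while suspended, one way to honor this (a sketch; the "NavState" settings key is arbitrary) is to persist the frame's navigation history in the Suspending handler and restore it on launch:

// On suspending: save the frame's navigation history.
Frame rootFrame = Window.Current.Content as Frame;
if (rootFrame != null)
{
    ApplicationData.Current.LocalSettings.Values["NavState"] =
        rootFrame.GetNavigationState();
}

// On launch: restore it so the user returns to the last page.
object navState;
if (ApplicationData.Current.LocalSettings.Values.TryGetValue("NavState", out navState))
{
    rootFrame.SetNavigationState((string)navState);
}

Note that GetNavigationState requires the navigation parameters of the pages in the history to be serializable basic types.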

Get the samples


Back button sample
Shows how to set up an event handler for the back button event and how to enable the title bar back button for
when the app is in windowed Desktop mode.

Related articles
Navigation basics
Peer-to-peer navigation between two pages

Learn how to navigate in a basic two page peer-to-peer Universal Windows Platform (UWP) app.

Important APIs
Windows.UI.Xaml.Controls.Frame
Windows.UI.Xaml.Controls.Page
Windows.UI.Xaml.Navigation

Create the blank app


1. On the Microsoft Visual Studio menu, choose File > New Project.
2. In the left pane of the New Project dialog box, choose the Visual C# -> Windows -> Universal or the Visual
C++ -> Windows -> Universal node.
3. In the center pane, choose Blank App.
4. In the Name box, enter NavApp1, and then choose the OK button.
The solution is created and the project files appear in Solution Explorer.
5. To run the program, choose Debug > Start Debugging from the menu, or press F5.
A blank page is displayed.
6. Press Shift+F5 to stop debugging and return to Visual Studio.

Add basic pages


Next, add two content pages to the project.
Do the following steps two times to add two pages to navigate between.
1. In Solution Explorer, right-click the NavApp1 project node to open the shortcut menu.
2. Choose Add > New Item from the shortcut menu.
3. In the Add New Item dialog box, choose Blank Page in the middle pane.
4. In the Name box, enter Page1 (or Page2) and press the Add button.
These files should now be listed as part of your NavApp1 project.

C#:
Page1.xaml
Page1.xaml.cs
Page2.xaml
Page2.xaml.cs

C++:
Page1.xaml
Page1.xaml.cpp
Page1.xaml.h
Page2.xaml
Page2.xaml.cpp
Page2.xaml.h
Add the following content to the UI of Page1.xaml.
Add a TextBlock element named pageTitle as a child element of the root Grid. Change the Text property to
Page 1 .

<TextBlock x:Name="pageTitle" Text="Page 1" />

Add the following HyperlinkButton element as a child element of the root Grid and after the pageTitle
TextBlock element.

<HyperlinkButton Content="Click to go to page 2"
    Click="HyperlinkButton_Click"
    HorizontalAlignment="Center"/>

Add the following code to the Page1 class in the Page1.xaml code-behind file to handle the Click event of the
HyperlinkButton you added previously. Here, we navigate to Page2.xaml.

private void HyperlinkButton_Click(object sender, RoutedEventArgs e)
{
    this.Frame.Navigate(typeof(Page2));
}

void Page1::HyperlinkButton_Click(Platform::Object^ sender, RoutedEventArgs^ e)
{
    this->Frame->Navigate(Windows::UI::Xaml::Interop::TypeName(Page2::typeid));
}

Make the following changes to the UI of Page2.xaml.


Add a TextBlock element named pageTitle as a child element of the root Grid. Change the value of the Text
property to Page 2 .

<TextBlock x:Name="pageTitle" Text="Page 2" />

Add the following HyperlinkButton element as a child element of the root Grid and after the pageTitle
TextBlock element.

<HyperlinkButton Content="Click to go to page 1"
    Click="HyperlinkButton_Click"
    HorizontalAlignment="Center"/>

Add the following code to the Page2 class in the Page2.xaml code-behind file to handle the Click event of the
HyperlinkButton you added previously. Here, we navigate to Page1.xaml.

NOTE
For C++ projects, you must add a #include directive in the header file of each page that references another page. For the
inter-page navigation example presented here, the Page1.xaml.h file contains #include "Page2.xaml.h"; in turn, Page2.xaml.h
contains #include "Page1.xaml.h".
private void HyperlinkButton_Click(object sender, RoutedEventArgs e)
{
this.Frame.Navigate(typeof(Page1));
}

void Page2::HyperlinkButton_Click(Platform::Object^ sender, RoutedEventArgs^ e)
{
    this->Frame->Navigate(Windows::UI::Xaml::Interop::TypeName(Page1::typeid));
}

Now that we've prepared the content pages, we need to make Page1.xaml display when the app starts.
Open the app.xaml code-behind file and change the OnLaunched handler.
Here, we specify Page1 in the call to Frame.Navigate instead of MainPage .

protected override void OnLaunched(LaunchActivatedEventArgs e)
{
    Frame rootFrame = Window.Current.Content as Frame;

    // Do not repeat app initialization when the Window already has content,
    // just ensure that the window is active
    if (rootFrame == null)
    {
        // Create a Frame to act as the navigation context and navigate to the first page
        rootFrame = new Frame();

        rootFrame.NavigationFailed += OnNavigationFailed;

        if (e.PreviousExecutionState == ApplicationExecutionState.Terminated)
        {
            //TODO: Load state from previously suspended application
        }

        // Place the frame in the current Window
        Window.Current.Content = rootFrame;
    }

    if (rootFrame.Content == null)
    {
        // When the navigation stack isn't restored navigate to the first page,
        // configuring the new page by passing required information as a navigation
        // parameter
        rootFrame.Navigate(typeof(Page1), e.Arguments);
    }

    // Ensure the current window is active
    Window.Current.Activate();
}
void App::OnLaunched(Windows::ApplicationModel::Activation::LaunchActivatedEventArgs^ e)
{
    auto rootFrame = dynamic_cast<Frame^>(Window::Current->Content);

    // Do not repeat app initialization when the Window already has content,
    // just ensure that the window is active
    if (rootFrame == nullptr)
    {
        // Create a Frame to act as the navigation context and associate it with
        // a SuspensionManager key
        rootFrame = ref new Frame();

        rootFrame->NavigationFailed +=
            ref new Windows::UI::Xaml::Navigation::NavigationFailedEventHandler(
                this, &App::OnNavigationFailed);

        if (e->PreviousExecutionState == ApplicationExecutionState::Terminated)
        {
            // TODO: Load state from previously suspended application
        }

        // Place the frame in the current Window
        Window::Current->Content = rootFrame;
    }

    if (rootFrame->Content == nullptr)
    {
        // When the navigation stack isn't restored navigate to the first page,
        // configuring the new page by passing required information as a navigation
        // parameter
        rootFrame->Navigate(Windows::UI::Xaml::Interop::TypeName(Page1::typeid), e->Arguments);
    }

    // Ensure the current window is active
    Window::Current->Activate();
}

Note The code here uses the return value of Navigate to throw an app exception if the navigation to the app's
initial window frame fails. When Navigate returns true, the navigation happens.
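The check the note refers to looks like this sketch (the Visual Studio project template wraps the Navigate call this way; the OnLaunched code above omits it for brevity):

// Throw if navigation to the initial page fails.
if (!rootFrame.Navigate(typeof(Page1), e.Arguments))
{
    throw new Exception("Failed to create initial page");
}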
Now, build and run the app. Click the link that says "Click to go to page 2". The second page that says "Page 2" at
the top should be loaded and displayed in the frame.

Frame and Page classes


Before we add more functionality to our app, let's look at how the pages we added provide navigation support for
the app.
First, a Frame ( rootFrame ) is created for the app in the App.OnLaunched method of the App.xaml code-behind file.
The Navigate method is used to display content in this Frame.
Note
The Frame class supports various navigation methods such as Navigate, GoBack, and GoForward, and properties
such as BackStack, ForwardStack, and BackStackDepth.
In our example, Page1 is passed to the Navigate method. This method sets the content of the app's current
window to the Frame and loads the content of the page you specify into the Frame (Page1.xaml in our example, or
MainPage.xaml, by default).
Page1 is a subclass of the Page class. The Page class has a read-only Frame property that gets the Frame
containing the Page. When the Click event handler of the HyperlinkButton calls Frame.Navigate(typeof(Page2)) , the
Frame in the app's window displays the content of Page2.xaml.
Whenever a page is loaded into the frame, that page is added as a PageStackEntry to the BackStack or
ForwardStack of the Frame.

Pass information between pages


Our app navigates between two pages, but it really doesn't do anything interesting yet. Often, when an app has
multiple pages, the pages need to share information. Let's pass some information from the first page to the second
page.
In Page1.xaml, replace the HyperlinkButton you added earlier with the following StackPanel.
Here, we add a TextBlock label and a TextBox ( name ) for entering a text string.

<StackPanel>
<TextBlock HorizontalAlignment="Center" Text="Enter your name"/>
<TextBox HorizontalAlignment="Center" Width="200" Name="name"/>
<HyperlinkButton Content="Click to go to page 2"
Click="HyperlinkButton_Click"
HorizontalAlignment="Center"/>
</StackPanel>

In the HyperlinkButton_Click event handler of the Page1.xaml code-behind file, add a parameter referencing the Text
property of the name TextBox to the Navigate method.

private void HyperlinkButton_Click(object sender, RoutedEventArgs e)
{
    this.Frame.Navigate(typeof(Page2), name.Text);
}

void Page1::HyperlinkButton_Click(Platform::Object^ sender, RoutedEventArgs^ e)
{
    this->Frame->Navigate(Windows::UI::Xaml::Interop::TypeName(Page2::typeid), name->Text);
}

In the Page2.xaml code-behind file, override the OnNavigatedTo method with the following:

protected override void OnNavigatedTo(NavigationEventArgs e)
{
    if (e.Parameter is string)
    {
        greeting.Text = "Hi, " + e.Parameter.ToString();
    }
    else
    {
        greeting.Text = "Hi!";
    }
    base.OnNavigatedTo(e);
}
void Page2::OnNavigatedTo(NavigationEventArgs^ e)
{
if (dynamic_cast<Platform::String^>(e->Parameter) != nullptr)
{
greeting->Text = "Hi, " + e->Parameter->ToString();
}
else
{
greeting->Text = "Hi!";
}
::Windows::UI::Xaml::Controls::Page::OnNavigatedTo(e);
}

Run the app, type your name in the text box, and then click the link that says Click to go to page 2. When you
called this.Frame.Navigate(typeof(Page2), name.Text) in the Click event of the HyperlinkButton, the name.Text property was
passed to Page2, and the value from the event data is used for the message displayed on the page.

Cache a page
Page content and state are not cached by default; you must enable caching in each page of your app.
In our basic peer-to-peer example, there is no back button (we demonstrate back navigation in Back button
navigation), but if you clicked a back button on Page2, the TextBox (and any other field) on Page1 would be set to
its default state. One way to work around this is to use the NavigationCacheMode property to specify that a page
be added to the frame's page cache.
In the constructor of Page1 , set NavigationCacheMode to Enabled. This retains all content and state values for
the page until the page cache for the frame is exceeded.
Set NavigationCacheMode to Required if you want to ignore cache size limits for the frame. However, cache size
limits might be crucial, depending on the memory limits of a device.

NOTE
The CacheSize property specifies the number of pages in the navigation history that can be cached for the frame.

public Page1()
{
this.InitializeComponent();
this.NavigationCacheMode = Windows.UI.Xaml.Navigation.NavigationCacheMode.Enabled;
}

Page1::Page1()
{
this->InitializeComponent();
this->NavigationCacheMode = Windows::UI::Xaml::Navigation::NavigationCacheMode::Enabled;
}
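If you want to tune how many pages the frame keeps, you can set CacheSize where the root frame is created (for example, in App.OnLaunched); the value below is illustrative:

rootFrame = new Frame();
// Keep up to four pages of navigation history cached.
rootFrame.CacheSize = 4;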

Related articles
Navigation design basics for UWP apps
Guidelines for tabs and pivots
Guidelines for navigation panes
Show multiple views for an app

You can help your users be more productive by letting them view multiple independent parts of your app in
separate windows. A typical example is an e-mail app where the main UI shows the list of emails and a preview of
the selected e-mail. But users can also open messages in separate windows and view them side-by-side.

Important APIs
ApplicationViewSwitcher
CreateNewView

When you create multiple windows for an app, each window behaves independently. The taskbar shows each
window separately. Users can move, resize, show, and hide app windows independently and can switch between
app windows as if they were separate apps. Each window operates in its own thread.

What is a view?
An app view is the 1:1 pairing of a thread and a window that the app uses to display content. It's represented by a
Windows.ApplicationModel.Core.CoreApplicationView object.
Views are managed by the CoreApplication object. You call CoreApplication.CreateNewView to create
a CoreApplicationView object. The CoreApplicationView brings together a CoreWindow and a
CoreDispatcher (stored in the CoreWindow and Dispatcher properties). You can think of the
CoreApplicationView as the object that the Windows Runtime uses to interact with the core Windows system.
You typically don't work directly with the CoreApplicationView. Instead, the Windows Runtime provides the
ApplicationView class in the Windows.UI.ViewManagement namespace. This class provides properties,
methods, and events that you use when your app interacts with the windowing system. To work with an
ApplicationView, call the static ApplicationView.GetForCurrentView method, which gets an ApplicationView
instance tied to the current CoreApplicationView's thread.
Likewise, the XAML framework wraps the CoreWindow object in a Windows.UI.Xaml.Window object. In a XAML
app, you typically interact with the Window object rather than working directly with the CoreWindow.

Show a new view


Before we go further, let's look at the steps to create a new view. Here, the new view is launched in response to a
button click.
private async void Button_Click(object sender, RoutedEventArgs e)
{
CoreApplicationView newView = CoreApplication.CreateNewView();
int newViewId = 0;
await newView.Dispatcher.RunAsync(CoreDispatcherPriority.Normal, () =>
{
Frame frame = new Frame();
frame.Navigate(typeof(SecondaryPage), null);
Window.Current.Content = frame;
// You have to activate the window in order to show it later.
Window.Current.Activate();

newViewId = ApplicationView.GetForCurrentView().Id;
});
bool viewShown = await ApplicationViewSwitcher.TryShowAsStandaloneAsync(newViewId);
}

To show a new view


1. Call CoreApplication.CreateNewView to create a new window and thread for the view content.

CoreApplicationView newView = CoreApplication.CreateNewView();

2. Track the Id of the new view. You use this to show the view later.
You might want to consider building some infrastructure into your app to help with tracking the views you
create. See the ViewLifetimeControl class in the MultipleViews sample for an example.

int newViewId = 0;

3. On the new thread, populate the window.


You use the CoreDispatcher.RunAsync method to schedule work on the UI thread for the new view. You
use a lambda expression to pass a function as an argument to the RunAsync method. The work you do in
the lambda function happens on the new view's thread.
In XAML, you typically add a Frame to the Window's Content property, then navigate the Frame to a XAML
Page where you've defined your app content. For more info, see Peer-to-peer navigation between two
pages.
After the new Window is populated, you must call the Window's Activate method in order to show the
Window later. This work happens on the new view's thread, so the new Window is activated.
Finally, get the new view's Id that you use to show the view later. Again, this work is on the new view's
thread, so ApplicationView.GetForCurrentView gets the Id of the new view.

await newView.Dispatcher.RunAsync(CoreDispatcherPriority.Normal, () =>
{
    Frame frame = new Frame();
    frame.Navigate(typeof(SecondaryPage), null);
    Window.Current.Content = frame;
    // You have to activate the window in order to show it later.
    Window.Current.Activate();

    newViewId = ApplicationView.GetForCurrentView().Id;
});

4. Show the new view by calling ApplicationViewSwitcher.TryShowAsStandaloneAsync.


After you create a new view, you can show it in a new window by calling the
ApplicationViewSwitcher.TryShowAsStandaloneAsync method. The viewId parameter for this method
is an integer that uniquely identifies each of the views in your app. You retrieve the view Id by using either
the ApplicationView.Id property or the ApplicationView.GetApplicationViewIdForWindow method.

bool viewShown = await ApplicationViewSwitcher.TryShowAsStandaloneAsync(newViewId);

The main view


The first view that's created when your app starts is called the main view. This view is stored in the
CoreApplication.MainView property, and its IsMain property is true. You don't create this view; it's created by
the app. The main view's thread serves as the manager for the app, and all app activation events are delivered on
this thread.
If secondary views are open, the main view's window can be hidden (for example, by clicking the close (x) button
in the window title bar), but its thread remains active. Calling Close on the main view's Window causes an
InvalidOperationException to occur. (Use Application.Exit to close your app.) If the main view's thread is
terminated, the app closes.

Secondary views
Other views, including all views that you create by calling CreateNewView in your app code, are secondary views.
Both the main view and secondary views are stored in the CoreApplication.Views collection. Typically, you create
secondary views in response to a user action. In some instances, the system creates secondary views for your app.

NOTE
You can use the Windows assigned access feature to run an app in kiosk mode. When you do this, the system creates a
secondary view to present your app UI above the lock screen. App-created secondary views are not allowed, so if you try to
show your own secondary view in kiosk mode, an exception is thrown.

Switch from one view to another


You must provide a way for the user to navigate from a secondary window back to the main window. To do this,
use the ApplicationViewSwitcher.SwitchAsync method. You call this method from the thread of the window
you're switching from and pass the view ID of the window you're switching to.

await ApplicationViewSwitcher.SwitchAsync(viewIdToShow);

When you use SwitchAsync, you can choose if you want to close the initial window and remove it from the taskbar
by specifying the value of ApplicationViewSwitchingOptions.
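For example, here's a sketch that switches to another view and consolidates (closes) the current one; viewIdToShow is the target view's Id obtained as shown earlier:

// Switch to the target view and remove the current one from the taskbar.
await ApplicationViewSwitcher.SwitchAsync(
    viewIdToShow,
    ApplicationView.GetForCurrentView().Id,
    ApplicationViewSwitchingOptions.ConsolidateViews);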
Command design basics for UWP apps

In a Universal Windows Platform (UWP) app, command elements are the interactive UI elements that enable the
user to perform actions, such as sending an email, deleting an item, or submitting a form. This article describes the
command elements, such as buttons and check boxes, the interactions they support, and the command surfaces
(such as command bars and context menus) for hosting them.

Provide the right type of interactions


When designing a command interface, the most important decision is choosing what users should be able to do.
For example, if you're creating a photo app, the user will need tools to edit their photos. However, if you're
creating a social media app that happens to display photos, image editing might not be a priority and so editing
tools can be omitted to save space. Decide what you want users to accomplish and provide the tools to help them
do it.
For recommendations about how to plan the right interactions for your app, see Plan your app.

Use the right command element for the interaction


Using the right elements for the right interactions can mean the difference between an app that feels intuitive to
use and one that seems difficult or confusing. The Universal Windows Platform (UWP) provides a large set of
command elements, in the form of controls, that you can use in your app. Here's a list of some of the most
common controls and a summary of the interactions they enable.

Buttons (Button): Triggers an immediate action, such as sending an email, confirming an action in a dialog, or submitting form data.

Date and time pickers (calendar date picker, calendar view, date picker, time picker): Enables the user to view and modify date and time info, such as when entering a credit card expiration date or setting an alarm.

Lists (drop-down list, list box, list view, and grid view): Presents items in an interactive list or a grid. Use these elements to let users select a movie from a list of new releases or manage an inventory.

Predictive text entry (Auto-suggest box): Saves users time when entering data or performing queries by providing suggestions as they type.

Selection controls (check box, radio button, toggle switch): Lets the user choose between different options, such as when completing a survey or configuring app settings.

For a complete list, see Controls and UI elements.

Place commands on the right surface


You can place command elements on a number of surfaces in your app, including the app canvas (the content area
of your app) or special command elements that can act as command containers, such as command bars, menus,
dialogs, and flyouts. Here are some general recommendations for placing commands:
Whenever possible, let users directly manipulate the content on the app's canvas, rather than adding
commands that act on the content. For example, in the travel app, let users rearrange their itinerary by
dragging and dropping activities in a list on the canvas, rather than by selecting the activity and using Up or
Down command buttons.
Otherwise, place commands on one of these UI surfaces if users can't manipulate content directly:
In the command bar: You should put most commands on the command bar, which helps to organize
commands and makes them easy to access.
On the app's canvas: If the user is on a page or view that has a single purpose, you can provide
commands for that purpose directly on the canvas. There should be very few of these commands.
In a context menu: You can use context menus for clipboard actions (such as cut, copy, and paste), or for
commands that apply to content that cannot be selected (like adding a push pin to a location on a map).
Here's a list of the command surfaces that Windows provides and recommendations for when to use them.

App canvas (content area): If a command is critical and is constantly needed for the user to complete the core scenarios, put it on the canvas (the content area of your app). Because you can put commands near (or on) the objects they affect, putting commands on the canvas makes them easy and obvious to use. However, choose the commands you put on the canvas carefully. Too many commands on the app canvas take up valuable screen space and can overwhelm the user. If the command won't be frequently used, consider putting it in another command surface, such as a menu or the command bar's "More" area.

Command bar: Command bars provide users with easy access to actions. You can use a command bar to show commands or options that are specific to the user's context, such as a photo selection or drawing mode. Command bars can be placed at the top of the screen, at the bottom of the screen, or at both the top and bottom of the screen. This design of a photo editing app shows the content area and the command bar:
For more information about command bars, see the Guidelines for command bar article.

Menus and context menus: Sometimes it is more efficient to group multiple commands into a command menu. Menus let you present more options with less space, and they can include interactive controls. Context menus can provide shortcuts to commonly used actions and provide access to secondary commands that are only relevant in certain contexts. Context menus are for the following types of commands and command scenarios:
Contextual actions on text selections, such as Copy, Cut, Paste, Check Spelling, and so on.
Commands for an object that needs to be acted upon but that can't be selected or otherwise indicated.
Showing clipboard commands.
Custom commands.
This example shows the design for a subway app that uses a context menu to modify the route, bookmark a route, or select another train.
For more info about context menus, see the Guidelines for context menu article.

Dialog controls: Dialogs are modal UI overlays that provide contextual app information. In most cases, dialogs block interactions with the app window until being explicitly dismissed, and often request some kind of action from the user. Dialogs can be disruptive and should only be used in certain situations. For more info, see the When to confirm or undo actions section.

Flyout: A lightweight contextual popup that displays UI related to what the user is doing. Use a flyout to:
Show a menu.
Show more detail about an item.
Ask the user to confirm an action without blocking interaction with the app.
Flyouts can be dismissed by tapping or clicking somewhere outside the flyout. For more info about flyout controls, see the Dialogs and flyouts article.

When to confirm or undo actions


No matter how well-designed the user interface is and no matter how careful the user is, at some point, all users
will perform an action they wish they hadn't. Your app can help in these situations by requiring the user to confirm
an action, or by providing a way of undoing recent actions.
For actions that can't be undone and have major consequences, we recommend using a confirmation dialog.
Examples of such actions include:
Overwriting a file
Not saving a file before closing
Confirming permanent deletion of a file or data
Making a purchase (unless the user opts out of requiring a confirmation)
Submitting a form, such as signing up for something
For actions that can be undone, offering a simple undo command is usually enough. Examples of such actions
include:
Deleting a file
Deleting an email (not permanently)
Modifying content or editing text
Renaming a file

TIP
Be careful of how much your app uses confirmation dialogs; they can be very helpful when the user makes a mistake, but
they are a hindrance whenever the user is trying to perform an action intentionally.
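For example, here's a minimal ContentDialog sketch for confirming a permanent deletion (to run inside an async method; the strings are illustrative):

var dialog = new ContentDialog
{
    Title = "Delete file permanently?",
    Content = "This action can't be undone.",
    PrimaryButtonText = "Delete",
    SecondaryButtonText = "Cancel"
};

// ShowAsync returns which button the user chose.
ContentDialogResult result = await dialog.ShowAsync();
if (result == ContentDialogResult.Primary)
{
    // The user confirmed; perform the deletion here.
}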

Optimize for specific input types


See the Interaction primer for more detail on optimizing user experiences around a specific input type or device.
Active canvas layout pattern

An active canvas is a pattern with a content area and a command area. It's for single-view apps or modal
experiences, such as photo viewers/editors, document viewers, maps, painting, or other apps that make use of a
free-scrolling view. For taking actions, an active canvas can be paired with a command bar or just buttons,
depending on the number and types of actions you need.

Examples
This design of a photo editing app features an active canvas pattern, with a mobile example on the left, and a
desktop example on the right. The image editing surface is a canvas, and the command bar at the bottom contains
all of the contextual actions for the app.

This design of a subway map app makes use of an active canvas with a simple UI strip at the top that has only two
actions and a search box. Contextual actions are shown in the context menu, as seen on the right image.

Implementing this pattern


The active canvas pattern consists of a content area and a command area.
Content area. The content area is usually a free-scrolling canvas. Multiple content areas can exist within an app.
Command area. If you're placing a lot of commands, then a command bar, which responds based on screen size,
could be the way to go. If you're not placing that many commands and aren't as concerned with a responsive UI,
space-saving buttons work well.

Related articles
App bar and command bar
Content design basics for UWP apps

The main purpose of any app is to provide access to content: in a photo-editing app, the photo is the content; in a
travel app, maps and info about travel destinations is the content; and so on. Navigation elements provide access
to content; command elements enable the user to interact with content; content elements display the actual
content.
This article provides content design recommendations for the three content scenarios.

Design for the right content scenario


There are three main content scenarios:
Consumption: A primarily one-way experience where content is consumed. It includes tasks like reading,
listening to music, watching videos, and photo and image viewing.
Creation: A primarily one-way experience where the focus is creating new content. It can be broken down into
making things from scratch, like shooting a photo or video, creating a new image in a painting app, or opening
a fresh document.
Interactive: A two-way content experience that includes consuming, creating, and revising content.

Consumption-focused apps
Content elements receive the highest priority in a consumption-focused app, followed by the navigation elements
needed to help users find the content they want. Examples of consumption-focused apps include movie players,
reading apps, music apps, and photo viewers.

General recommendations for consumption-focused apps:


Consider creating dedicated navigation pages and content-viewing pages, so that when users find the content
they are looking for, they can view it on a dedicated page free of distractions.
Consider creating a full-screen view option that expands the content to fill the entire screen and hides all other
UI elements.

Creation-focused apps
Content and command elements are the most important UI elements in a creation-focused app: command elements
enable the user to create new content. Examples include painting apps, photo editing apps, video editing apps, and
word processing apps.
As an example, here's a design for a photo app that uses command bars to provide access to tools and photo
manipulation options. Because all the commands are in the command bar, the app can devote most of its screen
space to its content, the photo being edited.

General recommendations for creation-focused apps:


Minimize the use of navigation elements.
Command elements are especially important in creation-focused apps. Since users will be executing a lot of
commands, we recommend providing a command history/undo functionality.

Apps with interactive content


In an app with interactive content, users create, view, and edit content; many apps fit into this category. Examples of
these types of apps include line-of-business apps, inventory management apps, and cooking apps that enable the
user to create or modify recipes.

These sorts of apps need to balance all three UI elements:


Navigation elements help users find and view content. If viewing and finding content is the most important
scenario, prioritize navigation elements, filtering and sorting, and search.
Command elements let the user create, edit, and manipulate content.
General recommendations for apps with interactive content:
It can be difficult to balance navigation, content, and command elements when all three are important. If
possible, consider creating separate screens for browsing, creating, and editing content, or providing mode
switches.

Commonly used content elements


Here are some UI elements commonly used to display content. (For a complete list of UI elements, see Controls
and UI elements.)

Audio and video (media playback and transport controls): Plays audio and video.

Image viewers (flip view, image): Displays images. The flip view displays images in a collection, such as photos in an album or items in a product details page, one image at a time.

Lists (drop-down list, list box, list view, and grid view): Presents items in an interactive list or a grid. Use these elements to let users select a movie from a list of new releases or manage an inventory.

Text and text input (text block, text box, rich edit box): Displays text. Some elements enable the user to edit text. For more info, see Text controls.
Screen sizes and breakpoints for responsive design

The number of device targets and screen sizes across the Windows 10 ecosystem is too great to worry about
optimizing your UI for each one. Instead, we recommend designing for a few key widths (also called
"breakpoints"): 360, 640, 1024, and 1366 epx.

TIP
When designing for specific breakpoints, design for the amount of screen space available to your app (the app's window).
When the app is running full-screen, the app window is the same size as the screen, but in other cases, it's smaller.

This table describes the different size classes and provides general recommendations for tailoring for those size
classes.

SIZE CLASS: SMALL
Typical screen size (diagonal): 4" to 6"
Typical devices: Phones
Common window sizes in effective pixels: 320x569, 360x640, 480x854
Window width breakpoints in effective pixels: 640px or less
General recommendations:
Center tab elements.
Set left and right window margins to 12px to create a visual separation between the left and right edges of the app window.
Dock app bars to the bottom of the window for improved reachability.
Use one column/region at a time.
Use an icon to represent search (don't show a search box).
Put the navigation pane in overlay mode to conserve screen space.
If you're using the master details pattern, use the stacked presentation mode to save screen space.

SIZE CLASS: MEDIUM
Typical screen size (diagonal): 7" to 12", or TVs
Typical devices: Phablets, tablets, TVs
Common window sizes in effective pixels: 960x540, 1024x640
Window width breakpoints in effective pixels: 641px to 1007px
General recommendations:
Make tab elements left-aligned.
Set left and right window margins to 24px to create a visual separation between the left and right edges of the app window.
Put command elements like app bars at the top of the app window.
Up to two columns/regions.
Show the search box.
Put the navigation pane into sliver mode so a narrow strip of icons always shows.
Consider further tailoring for TV experiences.

SIZE CLASS: LARGE
Typical screen size (diagonal): 13" and larger
Typical devices: PCs, laptops, Surface Hubs
Common window sizes in effective pixels: 1366x768, 1920x1080
Window width breakpoints in effective pixels: 1008px or greater
General recommendations:
Make tab elements left-aligned.
Set left and right window margins to 24px to create a visual separation between the left and right edges of the app window.
Put command elements like app bars at the top of the app window.
Up to three columns/regions.
Show the search box.
Put the navigation pane into docked mode so that it always shows.

With Continuum for Phones, a new experience for compatible Windows 10 mobile devices, users can connect
their phones to a monitor, mouse and keyboard to make their phones work like laptops. Keep this new capability
in mind when designing for specific breakpoints - a mobile phone will not always stay in the small size class.
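To act on these breakpoints in a XAML app, you can map the size classes to visual states with an AdaptiveTrigger, a technique covered in "Define page layouts with XAML" later in this document. A minimal sketch, assuming the state names are your own:

<VisualStateManager.VisualStateGroups>
    <VisualStateGroup>
        <!-- Small: window width is 640 epx or less. -->
        <VisualState x:Name="Small">
            <VisualState.StateTriggers>
                <AdaptiveTrigger MinWindowWidth="0"/>
            </VisualState.StateTriggers>
        </VisualState>
        <!-- Medium: 641 epx to 1007 epx. -->
        <VisualState x:Name="Medium">
            <VisualState.StateTriggers>
                <AdaptiveTrigger MinWindowWidth="641"/>
            </VisualState.StateTriggers>
        </VisualState>
        <!-- Large: 1008 epx or greater. The state with the largest satisfied MinWindowWidth wins. -->
        <VisualState x:Name="Large">
            <VisualState.StateTriggers>
                <AdaptiveTrigger MinWindowWidth="1008"/>
            </VisualState.StateTriggers>
        </VisualState>
    </VisualStateGroup>
</VisualStateManager.VisualStateGroups>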
Define page layouts with XAML

XAML gives you a flexible layout system that lets you use automatic sizing, layout panels, visual states, and even
separate UI definitions to create a responsive UI. With a flexible design, you can make your app look great on
screens with different app window sizes, resolutions, pixel densities, and orientations.
Here, we discuss how to use XAML properties and layout panels to make your app responsive and adaptive. We
build on important info about responsive UI design and techniques found in Introduction to UWP app design. You
should understand what effective pixels are and understand each of the responsive design techniques: Reposition,
Resize, Reflow, Reveal, Replace, and Re-architect.

NOTE
Your app layout begins with the navigation model you choose, like whether to use a Pivot with the tabs and pivot model or
SplitView with the nav pane model. For more info about that, see Navigation design basics for UWP apps. Here, we talk
about techniques to make the layout of a single page or group of elements responsive. This info is applicable regardless of
which navigation model you choose for your app.

The XAML framework provides several levels of optimization you can use to create a responsive UI.
Fluid layout Use layout properties and panels to make your default UI fluid.
The foundation of a responsive layout is the appropriate use of layout properties and panels to reposition,
resize, and reflow content. You can set a fixed size on an element, or use automatic sizing to let the parent
layout panel size it. The various Panel classes, such as Canvas, Grid, RelativePanel and StackPanel,
provide different ways to size and position their children.
Adaptive layout Use visual states to make significant alterations to your UI based on window size or other
changes.
When your app window grows or shrinks beyond a certain amount, you might want to alter layout
properties to reposition, resize, reflow, reveal, or replace sections of your UI. You can define different visual
states for your UI, and apply them when the window width or window height crosses a specified threshold.
An AdaptiveTrigger provides an easy way to set the threshold (also called 'breakpoint') where a state is
applied.
Tailored layout A tailored layout is optimized for a specific device family or range of screen sizes. Within
the device family, the layout should still respond and adapt to changes within the range of supported
window sizes.

Note With Continuum for Phones, users can connect their phones to a monitor, mouse, and keyboard.
This capability blurs the lines between phone and desktop device families.

Approaches to tailoring include:


Create a custom trigger
You can create a custom device family trigger and modify its setters, similar to how you use an adaptive trigger.
Use separate XAML files to define distinct views for each device family.
You can use separate XAML files with the same code file to define per-device family views of the UI.
Use separate XAML and code to provide different implementations for each device family.
You can provide different implementations of a page (XAML and code), then navigate to a particular
implementation based on the device family, screen size, or other factors.

Layout properties and panels


Layout is the process of sizing and positioning objects in your UI. To position visual objects, you must put them in a
panel or other container object. The XAML framework provides various panel classes, such as Canvas, Grid,
RelativePanel and StackPanel, which serve as containers and enable you to position and arrange the UI elements
within them.
The XAML layout system supports both static and fluid layouts. In a static layout, you give controls explicit pixel
sizes and positions. When the user changes the resolution or orientation of their device, the UI doesn't change.
Static layouts can become clipped across different form factors and display sizes.
Fluid layouts shrink, grow, and reflow to respond to the visual space available on a device. To create a fluid layout,
use automatic or proportional sizing for elements, alignment, margins, and padding, and let layout panels position
their children as needed. You arrange child elements by specifying how they should be arranged in relationship to
each other, and how they should be sized relative to their content and/or their parent.
In practice, you use a combination of static and fluid elements to create your UI. You still use static elements and
values in some places, but make sure that the overall UI is responsive and adapts to different resolutions, layouts,
and views.
Layout properties
To control the size and position of an element, you set its layout properties. Here are some common layout
properties and their effect.
Height and Width
Set the Height and Width properties to specify the size of an element. You can use fixed values measured in
effective pixels, or you can use auto or proportional sizing. To get the size of an element at runtime, use the
ActualHeight and ActualWidth properties instead of Height and Width.
You use auto sizing to let UI elements resize to fit their content or parent container. You can also use auto sizing
with the rows and columns of a grid. To use auto sizing, set the Height and/or Width of UI elements to Auto.

NOTE
Whether an element resizes to its content or its container depends on the value of its HorizontalAlignment and
VerticalAlignment properties, and how the parent container handles sizing of its children. For more info, see the
Alignment section and the Layout panels section later in this article.

You use proportional sizing, also called star sizing, to distribute available space among the rows and columns of a
grid by weighted proportions. In XAML, star values are expressed as * (or n* for weighted star sizing). For example,
to specify that one column is 5 times wider than the second column in a 2-column layout, use "5*" and "*" for the
Width properties in the ColumnDefinition elements.
This example combines fixed, auto, and proportional sizing in a Grid with 4 columns.

COLUMN | WIDTH | DESCRIPTION

Column_1 | Auto | The column will size to fit its content.

Column_2 | * | After the Auto columns are calculated, the column gets part of the remaining width. Column_2 will be one-half as wide as Column_4.

Column_3 | 44 | The column will be 44 pixels wide.

Column_4 | 2* | After the Auto columns are calculated, the column gets part of the remaining width. Column_4 will be twice as wide as Column_2.

The default column width is "*", so you don't need to explicitly set this value for the second column.

<Grid>
<Grid.ColumnDefinitions>
<ColumnDefinition Width="Auto"/>
<ColumnDefinition/>
<ColumnDefinition Width="44"/>
<ColumnDefinition Width="2*"/>
</Grid.ColumnDefinitions>
<TextBlock Text="Column 1 sizes to its conent." FontSize="24"/>
</Grid>

In the Visual Studio XAML designer, the result looks like this.

Size constraints
When you use auto sizing in your UI, you might still need to place constraints on the size of an element. You can set
the MinWidth/MaxWidth and MinHeight/MaxHeight properties to specify values that constrain the size of an
element while allowing fluid resizing.
In a Grid, MinWidth/MaxWidth can also be used with column definitions, and MinHeight/MaxHeight can be used
with row definitions.
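For example, this sketch constrains a fluid column and an image; the specific values and the Assets/photo.jpg path are arbitrary examples, not from a particular sample:

<Grid>
    <Grid.ColumnDefinitions>
        <!-- This column resizes fluidly, but never narrower than 100 epx or wider than 300 epx. -->
        <ColumnDefinition Width="*" MinWidth="100" MaxWidth="300"/>
        <ColumnDefinition Width="*"/>
    </Grid.ColumnDefinitions>
    <!-- The image shrinks with the window, but won't grow past 240 epx wide. -->
    <Image Grid.Column="1" Source="Assets/photo.jpg" MaxWidth="240"/>
</Grid>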
Alignment
Use the HorizontalAlignment and VerticalAlignment properties to specify how an element should be
positioned within its parent container.
The values for HorizontalAlignment are Left, Center, Right, and Stretch.
The values for VerticalAlignment are Top, Center, Bottom, and Stretch.
With the Stretch alignment, elements fill all the space they're provided in the parent container. Stretch is the
default for both alignment properties. However, some controls, like Button, override this value in their default
style. Any element that can have child elements can treat the Stretch value for HorizontalAlignment and
VerticalAlignment properties uniquely. For example, an element using the default Stretch values placed in a Grid
stretches to fill the cell that contains it. The same element placed in a Canvas sizes to its content. For more info
about how each panel handles the Stretch value, see the Layout panels article.
For more info, see the Alignment, margin, and padding article, and the HorizontalAlignment and
VerticalAlignment reference pages.
Controls also have HorizontalContentAlignment and VerticalContentAlignment properties that you use to
specify how they position their content. Not all controls make use of these properties. They only affect layout
behavior for a control when its template uses the properties as the source of a
HorizontalAlignment/VerticalAlignment value for presenters or content areas within it.
For TextBlock, TextBox, and RichTextBlock, use the TextAlignment property to control the alignment of text in the
control.
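For example, in this sketch the first rectangle has no explicit size, so the default Stretch alignment makes it fill its Grid cell, while the second is explicitly sized and aligned to the lower-right corner:

<Grid Width="200" Height="200">
    <!-- No explicit size: the default Stretch alignment fills the Grid cell. -->
    <Rectangle Fill="LightGray"/>
    <!-- An explicit size plus alignments position this rectangle in the lower-right corner. -->
    <Rectangle Fill="Red" Width="44" Height="44"
               HorizontalAlignment="Right" VerticalAlignment="Bottom"/>
</Grid>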
Margins and padding
Set the Margin property to control the amount of empty space around an element. Margin does not add pixels to
the ActualHeight and ActualWidth, and is also not considered part of the element for purposes of hit testing and
sourcing input events.
Set the Padding property to control the amount of space between the inner border of an element and its content.
A positive Padding value decreases the content area of the element.
This diagram shows how Margin and Padding are applied to an element.

The left, right, top, and bottom values for Margin and Padding do not need to be symmetrical, and they can be set
to negative values. For more info, see Alignment, margin, and padding, and the Margin or Padding reference
pages.
Let's look at the effects of Margin and Padding on real controls. Here's a TextBox inside of a Grid with the default
Margin and Padding values of 0.

Here's the same TextBox and Grid with Margin and Padding values on the TextBox as shown in this XAML.

<Grid BorderBrush="Blue" BorderThickness="4" Width="200">


<TextBox Text="This is text in a TextBox." Margin="20" Padding="24,16"/>
</Grid>

Visibility
You can reveal or hide an element by setting its Visibility property to one of the Visibility enumeration values:
Visible or Collapsed. When an element is Collapsed, it doesn't take up any space in the UI layout.
You can change an element's Visibility property in code or in a visual state. When the Visibility of an element is
changed, all of its child elements are also changed. You can replace sections of your UI by revealing one panel
while collapsing another.
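For example, here's a sketch of visual state Setters (a syntax described later in this article) that swap two panels; CompactPanel and ExpandedPanel are hypothetical element names:

<VisualState.Setters>
    <!-- Reveal one panel and collapse the other; the collapsed panel takes up no layout space. -->
    <Setter Target="CompactPanel.Visibility" Value="Collapsed"/>
    <Setter Target="ExpandedPanel.Visibility" Value="Visible"/>
</VisualState.Setters>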

Tip When you have elements in your UI that are Collapsed by default, the objects are still created at startup,
even though they aren't visible. You can defer loading these elements until they are shown by setting the
x:DeferLoadStrategy attribute to "Lazy". This can improve startup performance. For more info, see
x:DeferLoadStrategy attribute.
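Here's a sketch of that attribute in use; DetailsPanel is a hypothetical name. The attribute requires an x:Name, and the element is realized later, for example when code calls FindName or a visual state targets it:

<Grid x:Name="DetailsPanel" x:DeferLoadStrategy="Lazy" Visibility="Collapsed">
    <!-- This subtree isn't created at startup because of x:DeferLoadStrategy="Lazy". -->
    <TextBlock Text="Details"/>
</Grid>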

Style resources
You don't have to set each property value individually on a control. It's typically more efficient to group property
values into a Style resource and apply the Style to a control. This is especially true when you need to apply the
same property values to many controls. For more info about using styles, see Styling controls.
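For example, here's a sketch of a keyed Style resource applied to two controls; the key name and property values are arbitrary:

<Page.Resources>
    <!-- Group shared property values into one Style resource. -->
    <Style x:Key="CaptionTextStyle" TargetType="TextBlock">
        <Setter Property="FontSize" Value="12"/>
        <Setter Property="Foreground" Value="Gray"/>
    </Style>
</Page.Resources>

<!-- Both TextBlocks pick up the grouped property values from the one Style. -->
<TextBlock Text="Photo 1" Style="{StaticResource CaptionTextStyle}"/>
<TextBlock Text="Photo 2" Style="{StaticResource CaptionTextStyle}"/>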
Layout panels
Most app content can be organized into some form of groupings or hierarchies. You use layout panels to group
and arrange UI elements in your app. The main thing to consider when choosing a layout panel is how the panel
positions and sizes its child elements. You might also need to consider how overlapping child elements are layered
on top of each other.
Here's a comparison of the main features of the panel controls provided in the XAML framework.

PANEL CONTROL DESCRIPTION

Canvas Canvas doesn't support fluid UI; you control all aspects of
positioning and sizing child elements. You typically use it for
special cases like creating graphics or to define small static
areas of a larger adaptive UI. You can use code or visual states
to reposition elements at runtime.
Elements are positioned absolutely using Canvas.Top and
Canvas.Left attached properties.
Layering can be explicitly specified using the Canvas.ZIndex
attached property.
Stretch values for HorizontalAlignment/VerticalAlignment
are ignored. If an element's size is not set explicitly, it sizes to
its content.
Child content is not visually clipped if larger than the panel.
Child content is not constrained by the bounds of the
panel.

Grid Grid supports fluid resizing of child elements. You can use
code or visual states to reposition and reflow elements.
Elements are arranged in rows and columns using Grid.Row
and Grid.Column attached properties.
Elements can span multiple rows and columns using
Grid.RowSpan and Grid.ColumnSpan attached properties.
Stretch values for HorizontalAlignment/VerticalAlignment
are respected. If an element's size is not set explicitly, it
stretches to fill the available space in the grid cell.
Child content is visually clipped if larger than the panel.
Content size is constrained by the bounds of the panel, so
scrollable content shows scroll bars if needed.
RelativePanel Elements are arranged in relation to the edge or center of
the panel, and in relation to each other.
Elements are positioned using a variety of attached
properties that control panel alignment, sibling alignment, and
sibling position.
Stretch values for HorizontalAlignment/VerticalAlignment
are ignored unless RelativePanel attached properties for
alignment cause stretching (for example, an element is aligned
to both the right and left edges of the panel). If an element's
size is not set explicitly and it's not stretched, it sizes to its
content.
Child content is visually clipped if larger than the panel.
Content size is constrained by the bounds of the panel, so
scrollable content shows scroll bars if needed.

StackPanel Elements are stacked in a single line either vertically or
horizontally.
Stretch values for HorizontalAlignment/VerticalAlignment
are respected in the direction opposite the Orientation
property. If an element's size is not set explicitly, it stretches to
fill the available width (or height if the Orientation is
Horizontal). In the direction specified by the Orientation
property, an element sizes to its content.
Child content is visually clipped if larger than the panel.
Content size is not constrained by the bounds of the panel
in the direction specified by the Orientation property, so
scrollable content stretches beyond the panel bounds and
doesn't show scrollbars. You must explicitly constrain the
height (or width) of the child content to make its scrollbars
show.

VariableSizedWrapGrid Elements are arranged in rows or columns that
automatically wrap to a new row or column when the
MaximumRowsOrColumns value is reached.
Whether elements are arranged in rows or columns is
specified by the Orientation property.
Elements can span multiple rows and columns using
VariableSizedWrapGrid.RowSpan and
VariableSizedWrapGrid.ColumnSpan attached properties.
Stretch values for HorizontalAlignment/VerticalAlignment
are ignored. Elements are sized as specified by the ItemHeight
and ItemWidth properties. If these properties are not set, the
item in the first cell sizes to its content, and all other cells
inherit this size.
Child content is visually clipped if larger than the panel.
Content size is constrained by the bounds of the panel, so
scrollable content shows scroll bars if needed.

For detailed information and examples of these panels, see Layout panels. Also, see the Responsive techniques
sample.
Layout panels let you organize your UI into logical groups of controls. When you use them with appropriate
property settings, you get some support for automatic resizing, repositioning, and reflowing of UI elements.
However, most UI layouts need further modification when there are significant changes to the window size. For
this, you can use visual states.

Visual states and state triggers


Use visual states to reposition, resize, reflow, reveal, or replace sections of your UI based on screen size or other
factors. A VisualState defines property values that are applied to an element when it's in a particular state. You
group visual states in a VisualStateManager that applies the appropriate VisualState when the specified
conditions are met.
Set visual states in code
To apply a visual state from code, you call the VisualStateManager.GoToState method. For example, to apply a
state when the app window is a particular size, handle the SizeChanged event and call GoToState to apply the
appropriate state.
Here, a VisualStateGroup contains 2 VisualState definitions. The first, DefaultState , is empty. When it's applied, the
values defined in the XAML page are applied. The second, WideState , changes the DisplayMode property of the
SplitView to Inline and opens the pane. This state is applied in the SizeChanged event handler if the window
width is 720 effective pixels or greater.

<Page ...>
<Grid>
<VisualStateManager.VisualStateGroups>
<VisualStateGroup>
<VisualState x:Name="DefaultState">
<Storyboard>
</Storyboard>
</VisualState>

<VisualState x:Name="WideState">
<Storyboard>
<ObjectAnimationUsingKeyFrames
Storyboard.TargetProperty="SplitView.DisplayMode"
Storyboard.TargetName="mySplitView">
<DiscreteObjectKeyFrame KeyTime="0">
<DiscreteObjectKeyFrame.Value>
<SplitViewDisplayMode>Inline</SplitViewDisplayMode>
</DiscreteObjectKeyFrame.Value>
</DiscreteObjectKeyFrame>
</ObjectAnimationUsingKeyFrames>
<ObjectAnimationUsingKeyFrames
Storyboard.TargetProperty="SplitView.IsPaneOpen"
Storyboard.TargetName="mySplitView">
<DiscreteObjectKeyFrame KeyTime="0" Value="True"/>
</ObjectAnimationUsingKeyFrames>
</Storyboard>
</VisualState>
</VisualStateGroup>
</VisualStateManager.VisualStateGroups>

<SplitView x:Name="mySplitView" DisplayMode="CompactInline"


IsPaneOpen="False" CompactPaneLength="20">
<!-- SplitView content -->

<SplitView.Pane>
<!-- Pane content -->
</SplitView.Pane>
</SplitView>
</Grid>
</Page>

private void CurrentWindow_SizeChanged(object sender, Windows.UI.Core.WindowSizeChangedEventArgs e)
{
    if (e.Size.Width >= 720)
        VisualStateManager.GoToState(this, "WideState", false);
    else
        VisualStateManager.GoToState(this, "DefaultState", false);
}
Set visual states in XAML markup
Prior to Windows 10, VisualState definitions required Storyboard objects for property changes, and you had to
call GoToState in code to apply the state. This is shown in the previous example. You will still see many examples
that use this syntax, or you might have existing code that uses it.
Starting in Windows 10, you can use the simplified Setter syntax shown here, and you can use a StateTrigger in
your XAML markup to apply the state. You use state triggers to create simple rules that automatically trigger visual
state changes in response to an app event.
This example does the same thing as the previous example, but uses the simplified Setter syntax instead of a
Storyboard to define property changes. And instead of calling GoToState, it uses the built-in AdaptiveTrigger state
trigger to apply the state. When you use state triggers, you don't need to define an empty DefaultState . The default
settings are reapplied automatically when the conditions of the state trigger are no longer met.

<Page ...>
<Grid>
<VisualStateManager.VisualStateGroups>
<VisualStateGroup>
<VisualState>
<VisualState.StateTriggers>
<!-- VisualState to be triggered when the
window width is >=720 effective pixels. -->
<AdaptiveTrigger MinWindowWidth="720" />
</VisualState.StateTriggers>

<VisualState.Setters>
<Setter Target="mySplitView.DisplayMode" Value="Inline"/>
<Setter Target="mySplitView.IsPaneOpen" Value="True"/>
</VisualState.Setters>
</VisualState>
</VisualStateGroup>
</VisualStateManager.VisualStateGroups>

<SplitView x:Name="mySplitView" DisplayMode="CompactInline"


IsPaneOpen="False" CompactPaneLength="20">
<!-- SplitView content -->

<SplitView.Pane>
<!-- Pane content -->
</SplitView.Pane>
</SplitView>
</Grid>
</Page>

Important In the previous example, the VisualStateManager.VisualStateGroups attached property is set on the
Grid element. When you use StateTriggers, always ensure that VisualStateGroups is attached to the first child
of the root in order for the triggers to take effect automatically. (Here, Grid is the first child of the root Page
element.)

Attached property syntax


In a VisualState, you typically set a value for a control property, or for one of the attached properties of the panel
that contains the control. When you set an attached property, use parentheses around the attached property name.
This example shows how to set the RelativePanel.AlignHorizontalCenterWithPanel attached property on a
TextBox named myTextBox . The first XAML uses ObjectAnimationUsingKeyFrames syntax and the second uses
Setter syntax.
<!-- Set an attached property using ObjectAnimationUsingKeyFrames. -->
<ObjectAnimationUsingKeyFrames
Storyboard.TargetProperty="(RelativePanel.AlignHorizontalCenterWithPanel)"
Storyboard.TargetName="myTextBox">
<DiscreteObjectKeyFrame KeyTime="0" Value="True"/>
</ObjectAnimationUsingKeyFrames>

<!-- Set an attached property using Setter. -->
<Setter Target="myTextBox.(RelativePanel.AlignHorizontalCenterWithPanel)" Value="True"/>

Custom state triggers


You can extend the StateTrigger class to create custom triggers for a wide range of scenarios. For example, you
can create a StateTrigger to trigger different states based on input type, then increase the margins around a control
when the input type is touch. Or create a StateTrigger to apply different states based on the device family the app is
run on. For examples of how to build custom triggers and use them to create optimized UI experiences from within
a single XAML view, see the State triggers sample.
Visual states and styles
You can use Style resources in visual states to apply a set of property changes to multiple controls. For more info
about using styles, see Styling controls.
In this simplified XAML from the State triggers sample, a Style resource is applied to a Button to adjust the size and
margins for mouse or touch input. For the complete code and the definition of the custom state trigger, see the
State triggers sample.
<Page ... >
<Page.Resources>
<!-- Styles to be used for mouse vs. touch/pen hit targets -->
<Style x:Key="MouseStyle" TargetType="Rectangle">
<Setter Property="Margin" Value="5" />
<Setter Property="Height" Value="20" />
<Setter Property="Width" Value="20" />
</Style>
<Style x:Key="TouchPenStyle" TargetType="Rectangle">
<Setter Property="Margin" Value="15" />
<Setter Property="Height" Value="40" />
<Setter Property="Width" Value="40" />
</Style>
</Page.Resources>

<RelativePanel>
<!-- ... -->
<Button Content="Color Palette Button" x:Name="MenuButton">
<Button.Flyout>
<Flyout Placement="Bottom">
<RelativePanel>
<Rectangle Name="BlueRect" Fill="Blue"/>
<Rectangle Name="GreenRect" Fill="Green" RelativePanel.RightOf="BlueRect" />
<!-- ... -->
</RelativePanel>
</Flyout>
</Button.Flyout>
</Button>
<!-- ... -->
</RelativePanel>
<VisualStateManager.VisualStateGroups>
<VisualStateGroup x:Name="InputTypeStates">
<!-- Second set of VisualStates for building responsive UI optimized for input type.
Take a look at InputTypeTrigger.cs class in CustomTriggers folder to see how this is implemented. -->
<VisualState>
<VisualState.StateTriggers>
<!-- This trigger indicates that this VisualState is to be applied when MenuButton is invoked using a mouse. -->
<triggers:InputTypeTrigger TargetElement="{x:Bind MenuButton}" PointerType="Mouse" />
</VisualState.StateTriggers>
<VisualState.Setters>
<Setter Target="BlueRect.Style" Value="{StaticResource MouseStyle}" />
<Setter Target="GreenRect.Style" Value="{StaticResource MouseStyle}" />
<!-- ... -->
</VisualState.Setters>
</VisualState>
<VisualState>
<VisualState.StateTriggers>
<!-- Multiple trigger statements can be declared in the following way to imply OR usage.
For example, the following statements indicate that this VisualState is to be applied when MenuButton is invoked using Touch OR
Pen.-->
<triggers:InputTypeTrigger TargetElement="{x:Bind MenuButton}" PointerType="Touch" />
<triggers:InputTypeTrigger TargetElement="{x:Bind MenuButton}" PointerType="Pen" />
</VisualState.StateTriggers>
<VisualState.Setters>
<Setter Target="BlueRect.Style" Value="{StaticResource TouchPenStyle}" />
<Setter Target="GreenRect.Style" Value="{StaticResource TouchPenStyle}" />
<!-- ... -->
</VisualState.Setters>
</VisualState>
</VisualStateGroup>
</VisualStateManager.VisualStateGroups>
</Page>

Tailored layouts
When you make significant changes to your UI layout on different devices, you might find it more convenient to
define a separate UI file with a layout tailored to the device, rather than adapting a single UI. If the functionality is
the same across devices, you can define separate XAML views that share the same code file. If both the view and
the functionality differ significantly across devices, you can define separate Pages, and choose which Page to
navigate to when the app is loaded.
Separate XAML views per device family
Use XAML views to create different UI definitions that share the same code-behind. You can provide a unique UI
definition for each device family. Follow these steps to add a XAML view to your app.
To add a XAML view to an app
1. Select Project > Add New Item. The Add New Item dialog box opens. (Tip: Make sure a folder or the project,
   and not the solution, is selected in Solution Explorer.)
2. Under Visual C# or Visual Basic in the left pane, pick the XAML template type.
3. In the center pane, pick XAML View.
4. Enter the name for the view. The view must be named correctly. For more info on naming, see the remainder of
this section.
5. Click Add. The file is added to the project.
The previous steps create only a XAML file, but not an associated code-behind file. Instead, the XAML view is
associated with an existing code-behind file using a "DeviceName" qualifier that's part of the file or folder name.
This qualifier name can be mapped to a string value that represents the device family of the device that your app is
currently running on, such as "Desktop", "Mobile", and the names of the other device families (see
ResourceContext.QualifierValues).
You can add the qualifier to the file name, or add the file to a folder that has the qualifier name.
Use file name
To use the qualifier name with the file, use this format: [pageName].DeviceFamily-[qualifierString].xaml.
Let's look at an example for a file named MainPage.xaml. To create a view for mobile devices, name the XAML view
MainPage.DeviceFamily-Mobile.xaml. To create a view for PC devices, name the view MainPage.DeviceFamily-
Desktop.xaml. Here's what the solution looks like in Microsoft Visual Studio.

Use folder name


To organize the views in your Visual Studio project using folders, you can use the qualifier name with the folder. To
do so, name your folder like this: DeviceFamily-[qualifierString]. In this case, each XAML view file has the same
name. Don't include the qualifier in the file name.
Here's an example, again for a file named MainPage.xaml. To create a view for mobile devices, create a folder
named "DeviceFamily-Mobile", and place a XAML view named MainPage.xaml into it. To create a view for PC
devices, create a folder named "DeviceFamily-Desktop", and place another XAML view named MainPage.xaml into
it. Here's what the solution looks like in Visual Studio.
In both cases, a unique view is used for mobile and PC devices. The default MainPage.xaml file is used if the device
it's running on doesn't match any of the device family specific views.
Separate XAML pages per device family
To provide unique views and functionality, you can create separate Page files (XAML and code), and then navigate
to the appropriate page when the page is needed.
To add a XAML page to an app
1. Select Project > Add New Item. The Add New Item dialog box opens. (Tip: Make sure the project, and not the
   solution, is selected in Solution Explorer.)
2. Under Visual C# or Visual Basic in the left pane, pick the XAML template type.
3. In the center pane, pick Blank page.
4. Enter the name for the page. For example, "MainPage_Mobile". Both a MainPage_Mobile.xaml and
MainPage_Mobile.xaml.cs/vb/cpp code file are created.
5. Click Add. The file is added to the project.
At runtime, check the device family that the app is running on, and navigate to the correct page like this.

if (Windows.System.Profile.AnalyticsInfo.VersionInfo.DeviceFamily == "Windows.Mobile")
{
rootFrame.Navigate(typeof(MainPage_Mobile), e.Arguments);
}
else
{
rootFrame.Navigate(typeof(MainPage), e.Arguments);
}

You can also use different criteria to determine which page to navigate to. For more examples, see the Tailored
multiple views sample, which uses the GetIntegratedDisplaySize function to check the physical size of an
integrated display.

Sample code
XAML UI basics sample
See all of the XAML controls in an interactive format.
Layout panels

You use layout panels to arrange and group UI elements in your app. The built-in XAML layout panels include
RelativePanel, StackPanel, Grid, VariableSizedWrapGrid, and Canvas. Here, we describe each panel and show
how to use it to lay out XAML UI elements.
There are several things to consider when choosing a layout panel:
How the panel positions its child elements.
How the panel sizes its child elements.
How overlapping child elements are layered on top of each other (z-order).
The number and complexity of nested panel elements needed to create your desired layout.
Panel attached properties
Most XAML layout panels use attached properties to let their child elements inform the parent panel about how
they should be positioned in the UI. Attached properties use the syntax AttachedPropertyProvider.PropertyName.
If you have panels that are nested inside other panels, attached properties on UI elements that specify layout
characteristics to a parent are interpreted by the most immediate parent panel only.
Here is an example of how you can set the Canvas.Left attached property on a Button control in XAML. This
informs the parent Canvas that the Button should be positioned 50 effective pixels from the left edge of the
Canvas.

<Canvas>
<Button Canvas.Left="50">Hello</Button>
</Canvas>

For more info about attached properties, see Attached properties overview.

Note An attached property is a XAML concept that requires special syntax to get or set from code. To use
attached properties in code, see the Attached properties in code section of the Attached properties overview
article.

Panel borders
The RelativePanel, StackPanel, and Grid panels define border properties that let you draw a border around the
panel without wrapping it in an additional Border element. The border properties are BorderBrush,
BorderThickness, CornerRadius, and Padding.
Here's an example of how to set border properties on a Grid.

<Grid BorderBrush="Blue" BorderThickness="12" CornerRadius="12" Padding="12">


<TextBlock Text="Hello World!"/>
</Grid>
Using the built-in border properties reduces the XAML element count, which can improve the UI performance of
your app. For more info about layout panels and UI performance, see Optimize your XAML layout.

RelativePanel
RelativePanel lets you lay out UI elements by specifying where they go in relation to other elements and in
relation to the panel. By default, an element is positioned in the upper left corner of the panel. You can use
RelativePanel with a VisualStateManager and AdaptiveTriggers to rearrange your UI for different window
sizes.
This table shows the attached properties you can use to align an element with the edge or center of the panel, and
align and position it in relation to other elements.

PANEL ALIGNMENT SIBLING ALIGNMENT SIBLING POSITION

AlignTopWithPanel AlignTopWith Above

AlignBottomWithPanel AlignBottomWith Below

AlignLeftWithPanel AlignLeftWith LeftOf

AlignRightWithPanel AlignRightWith RightOf

AlignHorizontalCenterWithPanel AlignHorizontalCenterWith

AlignVerticalCenterWithPanel AlignVerticalCenterWith

This XAML shows how to arrange elements in a RelativePanel.

<RelativePanel BorderBrush="Gray" BorderThickness="1">


<Rectangle x:Name="RedRect" Fill="Red" Height="44" Width="44"/>
<Rectangle x:Name="BlueRect" Fill="Blue"
Height="44" Width="88"
RelativePanel.RightOf="RedRect" />

<Rectangle x:Name="GreenRect" Fill="Green"


Height="44"
RelativePanel.Below="RedRect"
RelativePanel.AlignLeftWith="RedRect"
RelativePanel.AlignRightWith="BlueRect"/>
<Rectangle Fill="Yellow"
RelativePanel.Below="GreenRect"
RelativePanel.AlignLeftWith="BlueRect"
RelativePanel.AlignRightWithPanel="True"
RelativePanel.AlignBottomWithPanel="True"/>
</RelativePanel>

The result looks like this.


Here are a few things to note about the sizing of the rectangles.
The red rectangle is given an explicit size of 44x44. It's placed in the upper left corner of the panel, which is the
default position.
The green rectangle is given an explicit height of 44. Its left side is aligned with the red rectangle, and its right
side is aligned with the blue rectangle, which determines its width.
The yellow rectangle isn't given an explicit size. Its left side is aligned with the blue rectangle. Its right and
bottom edges are aligned with the edge of the panel. Its size is determined by these alignments and it will
resize as the panel resizes.

StackPanel
StackPanel is a simple layout panel that arranges its child elements into a single line that can be oriented
horizontally or vertically. StackPanel controls are typically used in scenarios where you want to arrange a small
subsection of the UI on your page.
You can use the Orientation property to specify the direction of the child elements. The default orientation is
Vertical.
The following XAML shows how to create a vertical StackPanel of items.

<StackPanel>
<Rectangle Fill="Red" Height="44"/>
<Rectangle Fill="Blue" Height="44"/>
<Rectangle Fill="Green" Height="44"/>
<Rectangle Fill="Yellow" Height="44"/>
</StackPanel>

The result looks like this.

In a StackPanel, if a child element's size is not set explicitly, it stretches to fill the available width (or height if the
Orientation is Horizontal). In this example, the width of the rectangles is not set. The rectangles expand to fill the
entire width of the StackPanel.
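As noted in the panel comparison earlier, a StackPanel doesn't constrain its content in the direction of its Orientation, so to get scroll bars you constrain the size of the scrolling host instead. Here's a sketch, with an arbitrary height:

<!-- The ScrollViewer's fixed height constrains the vertically stacked content, so a scroll bar can show. -->
<ScrollViewer Height="100">
    <StackPanel>
        <Rectangle Fill="Red" Height="44"/>
        <Rectangle Fill="Blue" Height="44"/>
        <Rectangle Fill="Green" Height="44"/>
        <Rectangle Fill="Yellow" Height="44"/>
    </StackPanel>
</ScrollViewer>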

Grid
The Grid panel supports arranging controls in multi-row and multi-column layouts. You can specify a Grid panel's
rows and columns by using the RowDefinitions and ColumnDefinitions properties. In XAML, use property
element syntax to declare the rows and columns within the Grid element. You can distribute space within a column
or a row by using Auto or star sizing.
You position objects in specific cells of the Grid by using the Grid.Column and Grid.Row attached properties.
You can make content span across multiple rows and columns by using the Grid.RowSpan and
Grid.ColumnSpan attached properties.
This XAML example shows how to create a Grid with two rows and two columns.

<Grid>
<Grid.RowDefinitions>
<RowDefinition/>
<RowDefinition Height="44"/>
</Grid.RowDefinitions>
<Grid.ColumnDefinitions>
<ColumnDefinition Width="Auto"/>
<ColumnDefinition/>
</Grid.ColumnDefinitions>
<Rectangle Fill="Red" Width="44"/>
<Rectangle Fill="Blue" Grid.Row="1"/>
<Rectangle Fill="Green" Grid.Column="1"/>
<Rectangle Fill="Yellow" Grid.Row="1" Grid.Column="1"/>
</Grid>

The result looks like this.

In this example, the sizing works like this:


The second row has an explicit height of 44 effective pixels. By default, the height of the first row fills whatever
space is left over.
The width of the first column is set to Auto, so it's as wide as needed for its children. In this case, it's 44
effective pixels wide to accommodate the width of the red rectangle.
There are no other size constraints on the rectangles, so each one stretches to fill the grid cell it's in.

VariableSizedWrapGrid
VariableSizedWrapGrid provides a grid-style layout panel where elements are arranged in rows or columns that
automatically wrap to a new row or column when the MaximumRowsOrColumns value is reached.
The Orientation property specifies whether the grid adds its items in rows or columns before wrapping. The
default orientation is Vertical, which means the grid adds items from top to bottom until a column is full, then
wraps to a new column. When the value is Horizontal, the grid adds items from left to right, then wraps to a new
row.
Cell dimensions are specified by the ItemHeight and ItemWidth properties. Each cell is the same size. If ItemHeight or
ItemWidth is not specified, then the first cell sizes to fit its content, and every other cell is the size of the first cell.
You can use the VariableSizedWrapGrid.ColumnSpan and VariableSizedWrapGrid.RowSpan attached
properties to specify how many adjacent cells a child element should fill.
Here's how to use a VariableSizedWrapGrid in XAML.

<VariableSizedWrapGrid MaximumRowsOrColumns="3" ItemHeight="44" ItemWidth="44">


<Rectangle Fill="Red"/>
<Rectangle Fill="Blue"
VariableSizedWrapGrid.RowSpan="2"/>
<Rectangle Fill="Green"
VariableSizedWrapGrid.ColumnSpan="2"/>
<Rectangle Fill="Yellow"
VariableSizedWrapGrid.RowSpan="2"
VariableSizedWrapGrid.ColumnSpan="2"/>
</VariableSizedWrapGrid>

The result looks like this.

In this example, the maximum number of rows in each column is 3. The first column contains only 2 items (the red
and blue rectangles) because the blue rectangle spans 2 rows. The green rectangle then wraps to the top of the
next column.

Canvas
The Canvas panel positions its child elements using fixed coordinate points. You specify the points on individual
child elements by setting the Canvas.Left and Canvas.Top attached properties on each element. During layout,
the parent Canvas reads these attached property values from its children and uses these values during the Arrange
pass of layout.
Objects in a Canvas can overlap, where one object is drawn on top of another object. By default, the Canvas
renders child objects in the order in which they're declared, so the last child is rendered on top (each element has a
default z-index of 0). This is the same as other built-in panels. However, Canvas also supports the Canvas.ZIndex
attached property that you can set on each of the child elements. You can set this property in code to change the
draw order of elements during run time. The element with the highest Canvas.ZIndex value draws last and
therefore draws over any other elements that share the same space or overlap in any way. Note that alpha value
(transparency) is respected, so even if elements overlap, the contents shown in overlap areas might be blended if
the top one has a non-maximum alpha value.
The Canvas does not do any sizing of its children. Each element must specify its size.
Here's an example of a Canvas in XAML.

<Canvas Width="120" Height="120">


<Rectangle Fill="Red" Height="44" Width="44"/>
<Rectangle Fill="Blue" Height="44" Width="44" Canvas.Left="20" Canvas.Top="20"/>
<Rectangle Fill="Green" Height="44" Width="44" Canvas.Left="40" Canvas.Top="40"/>
<Rectangle Fill="Yellow" Height="44" Width="44" Canvas.Left="60" Canvas.Top="60"/>
</Canvas>

The result looks like this.
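To change the draw order without reordering the markup, set Canvas.ZIndex. In this sketch, the red rectangle draws on top of the others even though it's declared first:

<Canvas Width="120" Height="120">
    <!-- The highest Canvas.ZIndex draws last, so red covers the overlapping corners of the others. -->
    <Rectangle Fill="Red" Height="44" Width="44" Canvas.ZIndex="1"/>
    <Rectangle Fill="Blue" Height="44" Width="44" Canvas.Left="20" Canvas.Top="20"/>
    <Rectangle Fill="Green" Height="44" Width="44" Canvas.Left="40" Canvas.Top="40"/>
</Canvas>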


Use the Canvas panel with discretion. While it's convenient to be able to precisely control positions of elements in
UI for some scenarios, a fixed positioned layout panel causes that area of your UI to be less adaptive to overall app
window size changes. App window resize might come from device orientation changes, split app windows,
changing monitors, and a number of other user scenarios.

Panels for ItemsControl


There are several special-purpose panels that can be used only as an ItemsPanel to display items in an
ItemsControl. These are ItemsStackPanel, ItemsWrapGrid, VirtualizingStackPanel, and WrapGrid. You can't
use these panels for general UI layout.
XAML custom panels overview

A panel is an object that provides a layout behavior for child elements it contains, when the Extensible Application
Markup Language (XAML) layout system runs and your app UI is rendered.

Important APIs
Panel
ArrangeOverride
MeasureOverride

You can define custom panels for XAML layout by deriving a custom class from the Panel class. You provide
behavior for your panel by overriding the MeasureOverride and ArrangeOverride methods, supplying logic that measures
and arranges the child elements.

The Panel base class


To define a custom panel class, you can either derive from the Panel class directly, or derive from one of the
practical panel classes that aren't sealed, such as Grid or StackPanel. It's easier to derive from Panel, because it
can be difficult to work around the existing layout logic of a panel that already has layout behavior. Also, a panel
with behavior might have existing properties that aren't relevant for your panel's layout features.
From Panel, your custom panel inherits these APIs:
The Children property.
The Background, ChildrenTransitions and IsItemsHost properties, and the dependency property identifiers.
None of these properties are virtual, so you don't typically override or replace them. You don't typically need
these properties for custom panel scenarios, not even for reading values.
The layout override methods MeasureOverride and ArrangeOverride. These were originally defined by
FrameworkElement. The base Panel class doesn't override these, but practical panels like Grid do have
override implementations that are implemented as native code and are run by the system. Providing new (or
additive) implementations for ArrangeOverride and MeasureOverride is the bulk of the effort you need to
define a custom panel.
All the other APIs of FrameworkElement, UIElement and DependencyObject, such as Height, Visibility and
so on. You sometimes reference values of these properties in your layout overrides, but they aren't virtual so
you don't typically override or replace them.
The focus here is to describe XAML layout concepts, so you can consider all the possibilities for how a custom
panel can and should behave in layout. If you'd rather jump right in and see an example custom panel
implementation, see BoxPanel, an example custom panel.

The Children property


The Children property is relevant to a custom panel because all classes derived from Panel use the Children
property as the place to store their contained child elements in a collection. Children is designated as the XAML
content property for the Panel class, and all classes derived from Panel can inherit the XAML content property
behavior. If a property is designated the XAML content property, that means that XAML markup can omit a property
element when specifying that property in markup, and the values are set as immediate markup children (the
"content"). For example, if you derive a class named CustomPanel from Panel that defines no new behavior, you
can still use this markup:

<local:CustomPanel>
<Button Name="button1"/>
<Button Name="button2"/>
</local:CustomPanel>

When a XAML parser reads this markup, Children is known to be the XAML content property for all Panel derived
types, so the parser will add the two Button elements to the UIElementCollection value of the Children
property. The XAML content property facilitates a streamlined parent-child relationship in the XAML markup for a
UI definition. For more info about XAML content properties, and how collection properties are populated when
XAML is parsed, see the XAML syntax guide.
The collection type that's maintaining the value of the Children property is the UIElementCollection class.
UIElementCollection is a strongly typed collection that uses UIElement as its enforced item type. UIElement is a
base type that's inherited by hundreds of practical UI element types, so the type enforcement here is deliberately
loose. But it does ensure that you can't have a Brush as a direct child of a Panel, and it generally means that
only elements that are expected to be visible in UI and participate in layout will be found as child elements in a
Panel.
Typically, a custom panel accepts any UIElement child element by a XAML definition, by simply using the
characteristics of the Children property as-is. As an advanced scenario, you could support further type checking of
child elements, when you iterate over the collection in your layout overrides.
Besides looping through the Children collection in the overrides, your panel logic might also be influenced by
Children.Count. You might have logic that allocates space at least partly based on the number of items, rather
than desired sizes and the other characteristics of individual items.

Overriding the layout methods


The basic model for the layout override methods (MeasureOverride and ArrangeOverride) is that they should
iterate through all the children and call each child element's specific layout method. The first layout cycle starts
when the XAML layout system sets the visual for the root window. Because each parent invokes layout on its
children, this propagates a call to layout methods to every possible UI element that is supposed to be part of a
layout. In XAML layout, there are two stages: measure, then arrange.
You don't get any built-in layout method behavior for MeasureOverride and ArrangeOverride from the base
Panel class. Items in Children won't automatically render as part of the XAML visual tree. It is up to you to make
the items known to the layout process, by invoking layout methods on each of the items you find in Children
through a layout pass within your MeasureOverride and ArrangeOverride implementations.
There's no reason to call base implementations in layout overrides unless you have your own inheritance. The
native methods for layout behavior (if they exist) run regardless, and not calling base implementation from
overrides won't prevent the native behavior from happening.
During the measure pass, your layout logic queries each child element for its desired size, by calling the Measure
method on that child element. Calling the Measure method establishes the value for the DesiredSize property.
The MeasureOverride return value is the desired size for the panel itself.
During the arrange pass, the positions and sizes of child elements are determined in x-y space and the layout
composition is prepared for rendering. Your code must call Arrange on each child element in Children so that the
layout system detects that the element belongs in the layout. The Arrange call is a precursor to composition and
rendering; it informs the layout system where that element goes, when the composition is submitted for rendering.
Many properties and values contribute to how the layout logic will work at runtime. A way to think about the layout
process is that the elements with no children (generally the most deeply nested element in the UI) are the ones that
can finalize measurements first. They don't have any dependencies on child elements that influence their desired
size. They might have their own desired sizes, and these are size suggestions until the layout actually takes place.
Then, the measure pass continues walking up the visual tree until the root element has its measurements and all
the measurements can be finalized.
The candidate layout must fit within the current app window or else parts of the UI will be clipped. Panels often are
the place where the clipping logic is determined. Panel logic can determine what size is available from within the
MeasureOverride implementation, and may have to push the size restrictions onto the children and divide space
amongst children so that everything fits as best it can. The result of layout is ideally something that uses various
properties of all parts of the layout but still fits within the app window. That requires both a good implementation
for layout logic of the panels, and also a judicious UI design on the part of any app code that builds a UI using that
panel. No panel design is going to look good if the overall UI design includes more child elements than can
possibly fit in the app.
A large part of what makes the layout system work is that any element that's based on FrameworkElement
already has some of its own inherent behavior when acting as a child in a container. For example, there are several
APIs of FrameworkElement that either inform layout behavior or are needed to make layout work at all. These
include:
DesiredSize (actually a UIElement property)
ActualHeight and ActualWidth
Height and Width
Margin
LayoutUpdated event
HorizontalAlignment and VerticalAlignment
ArrangeOverride and MeasureOverride methods
Arrange and Measure methods: these have native implementations defined at the FrameworkElement level,
which handle the element-level layout action

MeasureOverride
The MeasureOverride method has a return value that's used by the layout system as the starting DesiredSize for
the panel itself, when the Measure method is called on the panel by its parent in layout. The logic choices within
the method are just as important as what it returns, and the logic often influences what value is returned.
All MeasureOverride implementations should loop through Children, and call the Measure method on each child
element. Calling the Measure method establishes the value for the DesiredSize property. This might inform how
much space the panel itself needs, as well as how that space is divided among elements or sized for a particular
child element.
Here's a very basic skeleton of a MeasureOverride method, with the TODOs filled in using one simple strategy: offer each child all of the available space, and request enough space to fit the largest child.

protected override Size MeasureOverride(Size availableSize)
{
    // The size this panel will request from its parent.
    Size returnSize = new Size(0, 0);

    // Loop through each child, and call Measure on each.
    foreach (UIElement child in Children)
    {
        // Pass the space the panel allots for this child. Here, the panel
        // offers each child all of the space available to the panel itself.
        child.Measure(availableSize);

        // Measure sets each child's DesiredSize. Use it to determine the
        // size the panel needs; here, enough to fit the largest child.
        Size childDesiredSize = child.DesiredSize;
        returnSize.Width = Math.Max(returnSize.Width, childDesiredSize.Width);
        returnSize.Height = Math.Max(returnSize.Height, childDesiredSize.Height);
    }
    return returnSize;
}
Elements often have a natural size by the time they're ready for layout. After the measure pass, the DesiredSize
might indicate that natural size, if the availableSize you passed for Measure was smaller. If the natural size is larger
than availableSize you passed for Measure, the DesiredSize is constrained to availableSize. That's how
Measure's internal implementation behaves, and your layout overrides should take that behavior into account.
Some elements don't have a natural size because they have Auto values for Height and Width. These elements
use the full availableSize, because that's what an Auto value represents: size the element to the maximum available
size, which the immediate layout parent communicates by calling Measure with availableSize. In practice, there's
always some measurement that a UI is sized to (even if that's the top level window.) Eventually, the measure pass
resolves all the Auto values to parent constraints and all Auto value elements get real measurements (which you
can get by checking ActualWidth and ActualHeight, after layout completes).
It's legal to pass a size to Measure that has at least one infinite dimension, to indicate that the panel can attempt to
size itself to fit measurements of its content. Each child element being measured sets its DesiredSize value using
its natural size. Then, during the arrange pass, the panel typically arranges using that size.
Text elements such as TextBlock have a calculated ActualWidth and ActualHeight based on their text string and
text properties even if no Height or Width value is set, and these dimensions should be respected by your panel
logic. Clipping text is a particularly bad UI experience.
Even if your implementation doesn't use the desired size measurements, it's best to call the Measure method on
each child element, because there are internal and native behaviors that are triggered by Measure being called. For
an element to participate in layout, each child element must have Measure called on it during the measure pass
and the Arrange method called on it during the arrange pass. Calling these methods sets internal flags on the
object and populates values (such as the DesiredSize property) that the system's layout logic needs when it builds
the visual tree and renders the UI.
The MeasureOverride return value is based on the panel's logic interpreting the DesiredSize or other size
considerations for each of the child elements in Children when Measure is called on them. What to do with
DesiredSize values from children and how the MeasureOverride return value should use them is up to your own
logic's interpretation. You don't typically add up the values without modification, because the input of
MeasureOverride is often a fixed available size that's being suggested by the panel's parent. If you exceed that
size, the panel itself might get clipped. You'd typically compare the total size of children to the panel's available size
and make adjustments if necessary.
Tips and guidance
Ideally, a custom panel should be suitable for being the first true visual in a UI composition, perhaps at a level
immediately under Page, UserControl or another element that is the XAML page root. In MeasureOverride
implementations, don't routinely return the input Size without examining the values. If the return Size has an
Infinity value in it, this can throw exceptions in runtime layout logic. An Infinity value can come from the main
app window, which is scrollable and therefore doesn't have a maximum height. Other scrollable content might
have the same behavior.
Another common mistake in MeasureOverride implementations is to return a new default Size (values for
height and width are 0). You might start with that value, and it might even be the correct value if your panel
determines that none of the children should be rendered. But, a default Size results in your panel not being
sized correctly by its host. It requests no space in the UI, and therefore gets no space and doesn't render. All your
panel code otherwise might be functioning fine, but you still won't see your panel or contents thereof if it's
being composed with zero height, zero width.
Within the overrides, avoid the temptation to cast child elements to FrameworkElement and use properties
that are calculated as a result of layout, particularly ActualWidth and ActualHeight. For most common
scenarios, you can base the logic on the child's DesiredSize value and you won't need any of the Height or
Width related properties of a child element. For specialized cases, where you know the type of element and
have additional information, for example the natural size of an image file, you can use your element's
specialized information because it's not a value that is actively being altered by layout systems. Including layout-
calculated properties as part of layout logic substantially increases the risk of defining an unintentional layout
loop. These loops cause a condition where a valid layout can't be created and the system can throw a
LayoutCycleException if the loop is not recoverable.
Panels typically divide their available space between multiple child elements, although exactly how space is
divided varies. For example, Grid implements layout logic that uses its RowDefinition and ColumnDefinition
values to divide the space into the Grid cells, supporting both star-sizing and pixel values. If they're pixel values,
the size available for each child is already known, so that's what is passed as input size for a grid-style Measure.
Panels themselves can introduce reserved space for padding between items. If you do this, make sure to expose
the measurements as a property that's distinct from Margin or any Padding property.
Elements might have values for their ActualWidth and ActualHeight properties based on a previous layout
pass. If values change, app UI code can put handlers for LayoutUpdated on elements if there's special logic to
run, but panel logic typically doesn't need to check for changes with event handling. The layout system is
already making the determinations of when to re-run layout because a layout-relevant property changed value,
and a panel's MeasureOverride or ArrangeOverride are called automatically in the appropriate
circumstances.

ArrangeOverride
The ArrangeOverride method has a Size return value that's used by the layout system when rendering the panel
itself, when the Arrange method is called on the panel by its parent in layout. It's typical that the input finalSize and
the ArrangeOverride returned Size are the same. If they aren't, that means the panel is attempting to make itself a
different size than what the other participants in layout claim is available. The final size was based on having
previously run the measure pass of layout through your panel code, so that's why returning a different size isn't
typical: it means you are deliberately ignoring measure logic.
Don't return a Size with an Infinity component. Trying to use such a Size throws an exception from internal
layout.
All ArrangeOverride implementations should loop through Children, and call the Arrange method on each child
element. Like Measure, Arrange doesn't have a return value. Unlike Measure, no calculated property gets set as a
result (however, the element in question typically fires a LayoutUpdated event).
Here's a very basic skeleton of an ArrangeOverride method:

protected override Size ArrangeOverride(Size finalSize)
{
    // Loop through each child and call Arrange on it.
    foreach (UIElement child in Children)
    {
        // TODO: more logic for top left corner placement in your panel
        // for this child, based on finalSize or other internal state of your panel.
        Point anchorPoint = new Point();
        child.Arrange(new Rect(anchorPoint, child.DesiredSize)); // OR, pass a different Size
    }
    return finalSize; // OR, return a different Size, but that's rare
}

The arrange pass of layout might happen without being preceded by a measure pass. However, this only happens
when the layout system has determined no properties have changed that would have affected the previous
measurements. For example, if an alignment changes, there's no need to re-measure that particular element
because its DesiredSize would not change when its alignment choice changes. On the other hand, if ActualHeight
changes on any element in a layout, a new measure pass is needed. The layout system automatically detects true
measure changes and invokes the measure pass again, and then runs another arrange pass.
The input for Arrange takes a Rect value. The most common way to construct this Rect is to use the constructor
that has a Point input and a Size input. The Point is the point where the top left corner of the bounding box for the
element should be placed. The Size is the dimensions used to render that particular element. You often use the
DesiredSize for that element as this Size value, because establishing the DesiredSize for all elements involved in
layout was the purpose of the measure pass of layout. (The measure pass determines all-up sizing of the elements
in an iterative way so that the layout system can optimize how elements are placed once it gets to the arrange
pass.)
What typically varies between ArrangeOverride implementations is the logic by which the panel determines the
Point component of how it arranges each child. An absolute positioning panel such as Canvas uses the explicit
placement info that it gets from each element through Canvas.Left and Canvas.Top values. A space-dividing
panel such as Grid would have mathematical operations that divided the available space into cells and each cell
would have an x-y value for where its content should be placed and arranged. An adaptive panel such as
StackPanel might be expanding itself to fit content in its orientation dimension.
There are still additional positioning influences on elements in layout, beyond what you directly control and pass to
Arrange. These come from the internal native implementation of Arrange that's common to all
FrameworkElement derived types and augmented by some other types such as text elements. For example,
elements can have margin and alignment, and some can have padding. These properties often interact. For more
info, see Alignment, margin, and padding.

Panels and controls


Avoid putting functionality into a custom panel that should instead be built as a custom control. The role of a panel
is to present any child element content that exists within it, as a function of layout that happens automatically. The
panel might add decorations to content (similar to how a Border adds the border around the element it presents),
or perform other layout-related adjustments like padding. But that's about as far as you should go when extending
the visual tree output beyond reporting and using information from the children.
If there's any interaction that's accessible to the user, you should write a custom control, not a panel. For example, a
panel shouldn't add scrolling viewports to content it presents, even if the goal is to prevent clipping, because the
scrollbars, thumbs and so on are interactive control parts. (Content might have scrollbars after all, but you should
leave that up to the child's logic. Don't force it by adding scrolling as a layout operation.) You might create a control
and also write a custom panel that plays an important role in that control's visual tree, when it comes to presenting
content in that control. But the control and the panel should be distinct code objects.
One reason the distinction between control and panel is important is because of Microsoft UI Automation and
accessibility. Panels provide a visual layout behavior, not a logical behavior. How a UI element appears visually is
not an aspect of UI that is typically important to accessibility scenarios. Accessibility is about exposing the parts of
an app that are logically important to understanding a UI. When interaction is required, controls should expose the
interaction possibilities to the UI Automation infrastructure. For more info, see Custom automation peers.

Other layout API


There are some other APIs that are part of the layout system, but aren't declared by Panel. You might use these in a
panel implementation or in a custom control that uses panels.
UpdateLayout, InvalidateMeasure, and InvalidateArrange are methods that initiate a layout pass.
InvalidateArrange might not trigger a measure pass, but the other two do. Never call these methods from
within a layout method override, because they're almost sure to cause a layout loop. Control code doesn't
typically need to call them either. Most aspects of layout are triggered automatically by detecting changes to the
framework-defined layout properties such as Width and so on.
LayoutUpdated is an event that fires when some aspect of layout of the element has changed. This isn't
specific to panels; the event is defined by FrameworkElement.
SizeChanged is an event that fires only after layout passes are finalized, and indicates that ActualHeight or
ActualWidth have changed as a result. This is another FrameworkElement event. There are cases where
LayoutUpdated fires, but SizeChanged does not. For example the internal contents might be rearranged, but
the element's size didn't change.
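
For example, a custom panel that defines its own layout-relevant property typically requests a new layout pass from the property-changed callback. A minimal sketch (the panel type MyPanel and its Spacing property are hypothetical):

public class MyPanel : Panel
{
    public static readonly DependencyProperty SpacingProperty =
        DependencyProperty.Register("Spacing", typeof(double), typeof(MyPanel),
            new PropertyMetadata(0.0, OnSpacingChanged));

    public double Spacing
    {
        get { return (double)GetValue(SpacingProperty); }
        set { SetValue(SpacingProperty, value); }
    }

    private static void OnSpacingChanged(DependencyObject d, DependencyPropertyChangedEventArgs e)
    {
        // Schedule a new measure pass (and the arrange pass that follows).
        // Never call this from inside MeasureOverride or ArrangeOverride;
        // doing so is a common cause of layout loops.
        ((MyPanel)d).InvalidateMeasure();
    }
}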

Related topics
Reference
FrameworkElement.ArrangeOverride
FrameworkElement.MeasureOverride
Panel
Concepts
Alignment, margin, and padding
BoxPanel, an example custom panel

Learn to write code for a custom Panel class, implementing ArrangeOverride and MeasureOverride methods,
and using the Children property.

Important APIs
Panel
ArrangeOverride
MeasureOverride

The example code shows a custom panel implementation, but we don't devote a lot of time explaining the layout
concepts that influence how you can customize a panel for different layout scenarios. If you want more info about
these layout concepts and how they might apply to your particular layout scenario, see XAML custom panels
overview.
A panel is an object that provides a layout behavior for child elements it contains, when the XAML layout system
runs and your app UI is rendered. You can define custom panels for XAML layout by deriving a custom class from
the Panel class. You provide behavior for your panel by overriding the ArrangeOverride and MeasureOverride
methods, supplying logic that measures and arranges the child elements. This example derives from Panel. When
you start from Panel, ArrangeOverride and MeasureOverride methods don't have a starting behavior. Your
code is providing the gateway by which child elements become known to the XAML layout system and get
rendered in the UI. So, it's really important that your code accounts for all child elements and follows the patterns
the layout system expects.

Your layout scenario


When you define a custom panel, you're defining a layout scenario.
A layout scenario is expressed through:
What the panel will do when it has child elements
What the panel does when it has constraints on its own space
How the logic of the panel determines all the measurements, placement, positions, and sizings that eventually
result in a rendered UI layout of children
With that in mind, the BoxPanel shown here is for a particular scenario. In the interest of keeping the code foremost
in this example, we won't explain the scenario in detail yet, and instead concentrate on the steps needed and the
coding patterns. If you want to know more about the scenario first, skip ahead to "The scenario for BoxPanel", and
then come back to the code.

Start by deriving from Panel


Start by deriving a custom class from Panel. Probably the easiest way to do this is to define a separate code file for
this class, using the Add | New Item | Class context menu options for a project from the Solution Explorer in
Microsoft Visual Studio. Name the class (and file) BoxPanel.
The template file for a class doesn't start with many using statements, because it's not specifically for Universal
Windows Platform (UWP) apps, so first add the using statements you need. The few using statements the template
does start with probably aren't needed and can be deleted. Here's a suggested list of using statements that can
resolve types you'll need for typical custom panel code:

using System;
using System.Collections.Generic; // if you need to cast IEnumerable for iteration, or define your own collection properties
using Windows.Foundation; // Point, Size, and Rect
using Windows.UI.Xaml; // DependencyObject, UIElement, and FrameworkElement
using Windows.UI.Xaml.Controls; // Panel
using Windows.UI.Xaml.Media; // if you need Brushes or other utilities

Now that you can resolve Panel, make it the base class of BoxPanel . Also, make BoxPanel public:

public class BoxPanel : Panel


{
}

At the class level, define some int and double values that will be shared by several of your logic functions, but
which won't need to be exposed as public API. In the example, these are named: maxrc, rowcount, colcount,
cellwidth, cellheight, maxcellheight, aspectratio.
After you've done this, the complete code file looks like this (removing comments on using, now that you know
why we have them):

using System;
using System.Collections.Generic;
using Windows.Foundation;
using Windows.UI.Xaml;
using Windows.UI.Xaml.Controls;
using Windows.UI.Xaml.Media;

public class BoxPanel : Panel
{
    int maxrc, rowcount, colcount;
    double cellwidth, cellheight, maxcellheight, aspectratio;
}

From here on out, we'll be showing you one member definition at a time, be that a method override or something
supporting such as a dependency property. You can add these to the skeleton above in any order, and we won't be
showing the using statements or the definition of the class scope again in the snippets until we show the final
code.

MeasureOverride
protected override Size MeasureOverride(Size availableSize)
{
    // Determine the square that can contain this number of items.
    maxrc = (int)Math.Ceiling(Math.Sqrt(Children.Count));
    // Get an aspect ratio from availableSize; it decides whether to trim a row or a column.
    aspectratio = availableSize.Width / availableSize.Height;

    // Now trim this square down to a rect; many times an entire row or column can be omitted.
    if (aspectratio > 1)
    {
        rowcount = maxrc;
        colcount = (maxrc > 2 && Children.Count < maxrc * (maxrc - 1)) ? maxrc - 1 : maxrc;
    }
    else
    {
        rowcount = (maxrc > 2 && Children.Count < maxrc * (maxrc - 1)) ? maxrc - 1 : maxrc;
        colcount = maxrc;
    }

    // Now that we have a column count, divide the available horizontal space; that's our cell width.
    cellwidth = (int)Math.Floor(availableSize.Width / colcount);
    // Next get a cell height, same logic of dividing the available vertical space by rowcount.
    cellheight = Double.IsInfinity(availableSize.Height) ? Double.PositiveInfinity : availableSize.Height / rowcount;

    foreach (UIElement child in Children)
    {
        child.Measure(new Size(cellwidth, cellheight));
        maxcellheight = (child.DesiredSize.Height > maxcellheight) ? child.DesiredSize.Height : maxcellheight;
    }
    return LimitUnboundedSize(availableSize);
}

The necessary pattern of a MeasureOverride implementation is the loop through each element in
Panel.Children. Always call the Measure method on each of these elements. Measure has a parameter of type
Size. What you're passing here is the size that your panel is committing to have available for that particular child
element. So, before you can do the loop and start calling Measure, you need to know how much space each cell
can devote. From the MeasureOverride method itself, you have the availableSize value. That is the size that the
panel's parent used when it called Measure, which was the trigger for this MeasureOverride being called in the
first place. So a typical logic is to devise a scheme whereby each child element divides the space of the panel's
overall availableSize. You then pass each division of size to Measure of each child element.
How BoxPanel divides size is fairly simple: it divides its space into a number of boxes that's largely controlled by the
number of items. Boxes are sized based on row and column count and the available size. Sometimes one row or
column from a square isn't needed, so it's dropped and the panel becomes a rectangle rather than a square in terms
of its row-to-column ratio. For more info about how this logic was arrived at, skip ahead to "The scenario for
BoxPanel".
So what does the measure pass do? It sets a value for the read-only DesiredSize property on each element where
Measure was called. Having a DesiredSize value is possibly important once you get to the arrange pass, because
the DesiredSize communicates what the size can or should be when arranging and in the final rendering. Even if
you don't use DesiredSize in your own logic, the system still needs it.
It's possible for this panel to be used when the height component of availableSize is unbounded. If that's true, the
panel doesn't have a known height to divide. In this case, the logic for the measure pass informs each child that it
doesn't have a bounded height, yet. It does so by passing a Size to the Measure call for children where
Size.Height is infinite. That's legal. When Measure is called, the logic is that the DesiredSize is set as the
minimum of these: what was passed to Measure, or that element's natural size from factors such as explicitly-set
Height and Width.
NOTE
The internal logic of StackPanel also has this behavior: StackPanel passes an infinite dimension value to Measure on
children, indicating that there is no constraint on children in the orientation dimension. StackPanel typically sizes itself
dynamically, to accommodate all children in a stack that grows in that dimension.

However, the panel itself can't return a Size with an infinite value from MeasureOverride; that throws an
exception during layout. So, part of the logic is to find out the maximum height that any child requests, and use that
height as the cell height in case that isn't coming from the panel's own size constraints already. Here's the helper
function LimitUnboundedSize that was referenced in previous code, which then takes that maximum cell height and
uses it to give the panel a finite height to return, as well as assuring that cellheight is a finite number before the
arrange pass is initiated:

// This method is called only if one of the availableSize dimensions of measure is infinite.
// That can happen to height if the panel is close to the root of main app window.
// In this case, base the height of a cell on the max height from desired size
// and base the height of the panel on that number times the #rows.

Size LimitUnboundedSize(Size input)
{
    if (Double.IsInfinity(input.Height))
    {
        input.Height = maxcellheight * rowcount;
        cellheight = maxcellheight;
    }
    return input;
}

ArrangeOverride
protected override Size ArrangeOverride(Size finalSize)
{
    int count = 1;
    double x, y;
    foreach (UIElement child in Children)
    {
        x = (count - 1) % colcount * cellwidth;
        y = ((count - 1) / colcount) * cellheight;
        Point anchorPoint = new Point(x, y);
        child.Arrange(new Rect(anchorPoint, child.DesiredSize));
        count++;
    }
    return finalSize;
}

The necessary pattern of an ArrangeOverride implementation is the loop through each element in
Panel.Children. Always call the Arrange method on each of these elements.
Note how there aren't as many calculations as in MeasureOverride; that's typical. The size of children is already
known from the panel's own MeasureOverride logic, or from the DesiredSize value of each child set during the
measure pass. However, we still need to decide the location within the panel where each child will appear. In a
typical panel, each child should render at a different position. A panel that creates overlapping elements isn't
desirable for typical scenarios (although it's not out of the question to create panels that have purposeful overlaps,
if that's really your intended scenario).
This panel arranges by the concept of rows and columns. The number of rows and columns was already calculated
(it was necessary for measurement). So now the shape of the rows and columns plus the known sizes of each cell
contribute to the logic of defining a rendering position (the anchorPoint ) for each element that this panel contains.
That Point, along with the Size already known from measure, are used as the two components that construct a
Rect. Rect is the input type for Arrange.
Panels sometimes need to clip their content. If they do, the clipped size is the size that's present in DesiredSize,
because the Measure logic sets it as the minimum of what was passed to Measure, or other natural size factors. So
you don't typically need to specifically check for clipping during Arrange; the clipping just happens based on
passing the DesiredSize through to each Arrange call.
You don't always need a count while going through the loop if all the info you need for defining the rendering
position is known by other means. For example, in Canvas layout logic, the position in the Children collection
doesn't matter. All the info needed to position each element in a Canvas is known by reading Canvas.Left and
Canvas.Top values of children as part of the arrange logic. The BoxPanel logic happens to need a count to compare
to the colcount so it's known when to begin a new row and offset the y value.
It's typical that the input finalSize and the Size you return from an ArrangeOverride implementation are the same.
For more info about why, see "ArrangeOverride" section of XAML custom panels overview.

A refinement: controlling the row vs. column count


You could compile and use this panel just as it is now. However, we'll add one more refinement. In the code just
shown, the logic puts the extra row or column on the side that's longest in aspect ratio. But for greater control over
the shapes of cells, it might be desirable to choose a 4x3 set of cells instead of 3x4 even if the panel's own aspect
ratio is "portrait." So we'll add an optional dependency property that the panel consumer can set to control that
behavior. Here's the dependency property definition, which is very basic:

public static readonly DependencyProperty UseOppositeRCRatioProperty =
    DependencyProperty.Register("UseOppositeRCRatio", typeof(bool), typeof(BoxPanel), null);

public bool UseOppositeRCRatio
{
    get { return (bool)GetValue(UseOppositeRCRatioProperty); }
    set { SetValue(UseOppositeRCRatioProperty, value); }
}

And here's how using UseOppositeRCRatio impacts the measure logic. Really all it's doing is changing how rowcount
and colcount are derived from maxrc and the true aspect ratio, and there are corresponding size differences for
each cell because of that. When UseOppositeRCRatio is true, it inverts the value of the true aspect ratio before using it
for row and column counts.

if (UseOppositeRCRatio) { aspectratio = 1 / aspectratio; }

The scenario for BoxPanel


The particular scenario for BoxPanel is that it's a panel where one of the main determinants of how to divide space
is by knowing the number of child items, and dividing the known available space for the panel. Panels are innately
rectangle shapes. Many panels operate by dividing that rectangle space into further rectangles; that's what Grid
does for its cells. In Grid's case, the size of the cells is set by ColumnDefinition and RowDefinition values, and
elements declare the exact cell they go into with Grid.Row and Grid.Column attached properties. Getting good
layout from a Grid usually requires knowing the number of child elements beforehand, so that there are enough
cells and each child element sets its attached properties to fit into its own cell.
But what if the number of children is dynamic? That's certainly possible; your app code can add items to collections,
in response to any dynamic run-time condition you consider to be important enough to be worth updating your UI.
If you're using data binding to backing collections/business objects, getting such updates and updating the UI is
handled automatically, so that's often the preferred technique (see Data binding in depth).
But not all app scenarios lend themselves to data binding. Sometimes, you need to create new UI elements at
runtime and make them visible. BoxPanel is for this scenario. A changing number of child items is no problem for
BoxPanel because it's using the child count in calculations, and adjusts both the existing and new child elements
into a new layout so they all fit.
An advanced scenario for extending BoxPanel further (not shown here) could both accommodate dynamic children
and use a child's DesiredSize as a stronger factor for the sizing of individual cells. This scenario might use varying
row or column sizes or non-grid shapes so that there's less "wasted" space. This requires a strategy for how
multiple rectangles of various sizes and aspect ratios can all fit into a containing rectangle both for aesthetics and
smallest size. BoxPanel doesn't do that; it's using a simpler technique for dividing space. BoxPanel 's technique is to
determine the least square number that's greater than the child count. For example, 9 items would fit in a 3x3
square. 10 items require a 4x4 square. However, you can often fit items while still removing one row or column of
the starting square, to save space. In the count=10 example, that fits in a 4x3 or 3x4 rectangle.
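
To make that arithmetic concrete, here's the same math worked for 10 items (matching the MeasureOverride logic shown earlier, and assuming a landscape aspect ratio so that a column, not a row, is the candidate for trimming):

int count = 10;
int maxrc = (int)Math.Ceiling(Math.Sqrt(count)); // 4, because a 3x3 square only holds 9
// 10 < 4 * (4 - 1) = 12, so one column can be dropped.
int colcount = (maxrc > 2 && count < maxrc * (maxrc - 1)) ? maxrc - 1 : maxrc; // 3
// Result: 4 rows x 3 columns, 12 cells for 10 items.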
You might wonder why the panel wouldn't instead choose 5x2 for 10 items, because that fits the item number
neatly. However, in practice, panels are sized as rectangles that seldom have a strongly oriented aspect ratio. The
least-squares technique is a way to bias the sizing logic to work well with typical layout shapes and not encourage
sizing where the cell shapes get odd aspect ratios.

NOTE
This article is for Windows 10 developers writing Universal Windows Platform (UWP) apps. If you're developing for Windows
8.x or Windows Phone 8.x, see the archived documentation.

Related topics
Reference
FrameworkElement.ArrangeOverride
FrameworkElement.MeasureOverride
Panel
Concepts
Alignment, margin, and padding
Alignment, margin, and padding

In addition to dimension properties (width, height, and constraints), elements can also have alignment, margin,
and padding properties that influence the layout behavior when an element goes through a layout pass and is
rendered in a UI. There are relationships between alignment, margin, padding and dimension properties that have
a typical logic flow when a FrameworkElement object is positioned, such that values are sometimes used and
sometimes ignored depending on the circumstances.

Alignment properties
The HorizontalAlignment and VerticalAlignment properties describe how a child element should be
positioned within a parent element's allocated layout space. By using these properties together, layout logic for a
container can position child elements within the container (either a panel or a control). Alignment properties are
intended to hint the desired layout to an adaptive layout container, so basically they're set on
FrameworkElement children and interpreted by another FrameworkElement container parent. Alignment
values can specify whether elements align to one of the two edges of an orientation, or to the center. However,
the default value for both alignment properties is Stretch. With Stretch alignment, elements fill the space they're
provided in layout. Stretch is the default so that it's easier to use adaptive layout techniques in the cases where
there is no explicit measurement or no DesiredSize value that came from the measure pass of layout. With this
default, there's no risk of an explicit height/width not fitting within the container and being clipped until you size
each container.

NOTE
As a general layout principle, it's best to only apply measurements to certain key elements and use the adaptive layout
behavior for the other elements. This provides flexible layout behavior for when the user sizes the top app window, which
typically is possible to do at any time.

If there are explicit Height or Width values, or clipping within an adaptive container, even if Stretch is set as an
alignment value, the layout is controlled by the behavior of its container. In panels, a Stretch value that's been
obviated by Height and Width acts as if the value is Center.
If there are no natural or calculated height and width values, these dimension values are mathematically NaN
(Not A Number). The elements are waiting for their layout container to give them dimensions. After layout is run,
there will be values for ActualHeight and ActualWidth properties for elements where a Stretch alignment was
used. The NaN values remain in Height and Width for the child elements so that the adaptive behavior can run
again, for example, if layout-related changes such as app window sizing causes another layout cycle.
Text elements such as TextBlock don't usually have an explicitly declared width, but they do have a calculated
width that you can query with ActualWidth, and that width also cancels out a Stretch alignment. (The FontSize
property and other text properties, as well as the text itself, are already hinting the intended layout size. You don't
typically want your text to be stretched.) Text used as content within a control has the same effect; the presence of
text that needs presenting causes an ActualWidth to be calculated, and this also communicates a desired width and
size to the containing control. Text elements also have an ActualHeight based on font size per line, line breaks,
and other text properties.
A panel such as Grid already has other logic for layout (row and column definitions, and attached properties such
as Grid.Row set on elements to indicate which cell to be drawn in). In that case, the alignment properties
influence how the content is aligned within the area of that cell, but the cell structure and sizing is controlled by
settings on the Grid.
Item controls sometimes display items where the base types of the items are data. This involves an
ItemsPresenter. Although the data itself is not a FrameworkElement derived type, ItemsPresenter is, so you
can set HorizontalAlignment and VerticalAlignment for the presenter and that alignment applies to the data
items when presented in the items control.
Alignment properties are only relevant for cases when there's extra space available in a dimension of the parent
layout container. If a layout container is already clipping content, alignment can affect the area of the element
where the clipping will apply. For example, if you set HorizontalAlignment="Left", the right side of the element gets
clipped.
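
As an illustration, here's a minimal code-behind sketch (the element names are hypothetical, and the usual Windows.UI.Xaml usings are assumed); the alignment values are set on the children and interpreted by the containing Grid:

var grid = new Grid();
// Stretch is the default: this TextBlock fills the space the Grid gives it.
var stretched = new TextBlock { Text = "Fills the cell" };
// Explicit alignment values pin the element to an edge or corner instead.
var pinned = new Button
{
    Content = "Bottom right",
    HorizontalAlignment = HorizontalAlignment.Right,
    VerticalAlignment = VerticalAlignment.Bottom
};
grid.Children.Add(stretched);
grid.Children.Add(pinned);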

Margin
The Margin property describes the distance between an element and its peers in a layout situation, and also the
distance between an element and the content area of a container that contains the element. If you think of
elements as bounding boxes or rectangles where the dimensions are the ActualHeight and ActualWidth, the
Margin layout applies to the outside of that rectangle and does not add pixels to the ActualHeight and
ActualWidth. The margin is also not considered part of the element for purposes of hit testing and sourcing
input events.
In general layout behavior, components of a Margin value are constrained last, and are constrained only after
Height and Width are already constrained all the way to 0. So, be careful with margins when the container is
already clipping or constraining the element; otherwise, your margin could be the cause of an element not
appearing to render (because one of its dimensions has been constrained to 0 after the margin was applied).
Margin values can be uniform, by using syntax like Margin="20". With this syntax, a uniform margin of 20 pixels
would be applied to the element, with a 20-pixel margin on the left, top, right, and bottom sides. Margin values
can also take the form of four distinct values, each value describing a distinct margin to apply to the left, top, right,
and bottom (in that order). For example, Margin="0,10,5,25". The underlying type for the Margin property is a
Thickness structure, which has properties that hold the Left, Top, Right, and Bottom values as separate Double
values.
Margins are additive. For example, if two elements each specify a uniform margin of 10 pixels and they are
adjacent peers in any orientation, the distance between the elements is 20 pixels.
Negative margins are permitted. However, using a negative margin can often cause clipping, or overdraws of
peers, so it's not a common technique to use negative margins.
Proper use of the Margin property enables very fine control of an element's rendering position and the rendering
position of its neighbor elements and children. When you use element dragging to position elements within the
XAML designer in Visual Studio, you'll see that the modified XAML typically has values for Margin of that element
that were used to serialize your positioning changes back into the XAML.
The Block class, which is a base class for Paragraph, also has a Margin property. It has an analogous effect on
how that Paragraph is positioned within its parent container, which is typically a RichTextBlock or RichEditBox
object, and also how more than one paragraph is positioned relative to other Block peers from the
RichTextBlock.Blocks collection.
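
In code, Margin values map to the Thickness structure. A quick sketch (hypothetical elements; the usual Windows.UI.Xaml usings are assumed):

// Margin="20": a uniform 20-pixel margin on all four sides.
var first = new Button { Content = "First", Margin = new Thickness(20) };
// Margin="0,10,5,25": left, top, right, bottom, in that order.
var second = new Button { Content = "Second", Margin = new Thickness(0, 10, 5, 25) };
// Because margins are additive, stacking two elements that each have a
// 10-pixel margin leaves 20 pixels between them.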

Padding
A Padding property describes the distance between an element and any child elements or content that it
contains. Content is treated as a single bounding box that encloses all the content, if it's an element that permits
more than one child. For example, if there's an ItemsControl that contains two items, the Padding is applied
around the bounding box that contains the items. Padding subtracts from the available size when it comes to the
measure and arrange pass calculations for that container, and is part of the desired size values when the
container itself goes through the layout pass for whatever contains it. Unlike Margin, Padding is not a property
of FrameworkElement, and in fact there are several classes which each define their own Padding property:
Control.Padding: inherits to all Control derived classes. Not all controls have content, so for some controls
(for example AppBarSeparator) setting the property does nothing. If the control has a border (see
Control.BorderThickness), the padding applies inside that border.
Border.Padding: defines space between the rectangle line created by BorderThickness/BorderBrush and
the Child element.
ItemsPresenter.Padding: contributes to appearance of the generated visuals for items in item controls,
placing the specified padding around each item.
TextBlock.Padding and RichTextBlock.Padding: expands the bounding box around the text of the text
element. These text elements don't have a Background, so it can be visually difficult to see what's the text's
padding versus other layout behavior applied by the text element's container. For that reason, text element
padding is seldom used and it's more typical to use Margin settings on contained Block containers (for the
RichTextBlock case).
In each of these cases, the same element also has a Margin property. If both margin and padding are applied,
they are additive in the sense that the apparent distance between an outer container and any inner content will be
margin plus padding. If there are different background values applied to content, element or container, the point
at which margin ends and padding begins is potentially visible in the rendering.
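
For example (a sketch only, assuming usings for Windows.UI, Windows.UI.Xaml, and Windows.UI.Xaml.Media), when both properties are set on a Border, the gap outside the border line is the Margin and the gap inside it is the Padding:

var border = new Border
{
    BorderBrush = new SolidColorBrush(Colors.Gray),
    BorderThickness = new Thickness(1),
    Margin = new Thickness(10),  // space outside the border rectangle
    Padding = new Thickness(8),  // space between the border and the Child
    Child = new TextBlock { Text = "Content" }
};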

Dimensions (Height, Width)


The Height and Width properties of a FrameworkElement often influence how the alignment, margin, and
padding properties behave when a layout pass happens. In particular, a real-number Height or Width value
cancels Stretch alignments, and is also promoted as a possible component of the DesiredSize value that's
established during the measure pass of the layout. Height and Width have constraint properties: the Height
value can be constrained with MinHeight and MaxHeight, the Width value can be constrained with MinWidth
and MaxWidth. Also, ActualWidth and ActualHeight are calculated, read-only properties that only contain
valid values after a layout pass has completed. For more info about how the dimensions and constraints or
calculated properties interrelate, see Remarks in FrameworkElement.Height and FrameworkElement.Width.
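
A short sketch of how these interact (the element here is hypothetical; remember that ActualWidth is only valid after a layout pass completes):

var box = new Border { MinWidth = 100, MaxWidth = 400 };
box.SizeChanged += (s, e) =>
{
    // SizeChanged fires after layout is finalized, so ActualWidth is
    // valid here and will fall within the Min/Max constraints.
    double w = box.ActualWidth;
};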

Related topics
Reference
FrameworkElement.Height
FrameworkElement.Width
FrameworkElement.HorizontalAlignment
FrameworkElement.VerticalAlignment
FrameworkElement.Margin
Control.Padding
Create a simple weather app by using Grid and
StackPanel

Use XAML to create the layout for a simple weather app using the Grid and StackPanel elements. With these tools
you can make great looking apps that work on any device running Windows 10. This tutorial takes 10-20 minutes.

Prerequisites
Windows 10 and Microsoft Visual Studio 2015. Click here to learn how to get set up with Visual Studio.
Knowledge of how to create a basic "Hello World" app by using XAML and C#. If you don't have that yet, click
here to learn how to create a "Hello World" app.

Step 1: Create a blank app


1. In the Visual Studio menu, select File > New Project.
2. In the left pane of the New Project dialog box, select Visual C# > Windows > Universal or Visual C++ >
Windows > Universal.
3. In the center pane, select Blank App.
4. In the Name box, enter WeatherPanel, and select OK.
5. To run the program, select Debug > Start Debugging from the menu, or select F5.

Step 2: Define a Grid


In XAML a Grid is made up of a series of rows and columns. By specifying the row and column of an element
within a Grid, you can place and space other elements within a user interface. Rows and columns are defined with
the RowDefinition and ColumnDefinition elements.
To start creating a layout, open MainPage.xaml by using the Solution Explorer, and replace the automatically
generated Grid element with this code.

<Grid>
    <Grid.ColumnDefinitions>
        <ColumnDefinition Width="3*"/>
        <ColumnDefinition Width="5*"/>
    </Grid.ColumnDefinitions>
    <Grid.RowDefinitions>
        <RowDefinition Height="2*"/>
        <RowDefinition Height="*"/>
    </Grid.RowDefinitions>
</Grid>

The new Grid creates a set of two rows and columns, which defines the layout of the app interface. The first
column has a Width of "3*", while the second has "5*", dividing the horizontal space between the two columns at a
ratio of 3:5. In the same way, the two rows have a Height of "2*" and "*" respectively, so the Grid allocates twice
as much space for the first row as for the second ("*" is the same as "1*"). These ratios are maintained even if
the window is resized or the device is changed.
To learn about other methods of sizing rows and columns, see Define layouts with XAML.
If you run the application now, you won't see anything except a blank page, because none of the Grid areas have
any content. To show the Grid, let's give it some color.

Step 3: Color the Grid


To color the Grid we add three Border elements, each with a different background color. Each is also assigned to a
row and column in the parent Grid by using the Grid.Row and Grid.Column attributes. The values of these
attributes default to 0, so you don't need to assign them to the first Border. Add the following code to the Grid
element after the row and column definitions.

<Border Background="#2f5cb6"/>
<Border Grid.Column="1" Background="#1f3d7a"/>
<Border Grid.Row="1" Grid.ColumnSpan="2" Background="#152951"/>

Notice that for the third Border we use an extra attribute, Grid.ColumnSpan, which causes this Border to span
both columns in the lower row. You can use Grid.RowSpan in the same way, and together these attributes let you
span an element over any number of rows and columns. The upper-left corner of such a span is always the
Grid.Column and Grid.Row specified in the element attributes.
If you run the app, the result looks something like this.

Step 4: Organize content by using StackPanel elements


StackPanel is the second UI element we'll use to create our weather app. The StackPanel is a fundamental part of
many basic app layouts, allowing you to stack elements vertically or horizontally.
In the following code, we create two StackPanel elements and fill each with three TextBlocks. Add these
StackPanel elements to the Grid below the Border elements from Step 3. This causes the TextBlock elements to
render on top of the colored Grid we created earlier.

<StackPanel Grid.Column="1" Margin="40,0,0,0" VerticalAlignment="Center">
    <TextBlock Foreground="White" FontSize="25" Text="Today - 64 F"/>
    <TextBlock Foreground="White" FontSize="25" Text="Partially Cloudy"/>
    <TextBlock Foreground="White" FontSize="25" Text="Precipitation: 25%"/>
</StackPanel>
<StackPanel Grid.Row="1" Grid.ColumnSpan="2" Orientation="Horizontal"
            HorizontalAlignment="Center" VerticalAlignment="Center">
    <TextBlock Foreground="White" FontSize="25" Text="High: 66" Margin="0,0,20,0"/>
    <TextBlock Foreground="White" FontSize="25" Text="Low: 43" Margin="0,0,20,0"/>
    <TextBlock Foreground="White" FontSize="25" Text="Feels like: 63"/>
</StackPanel>
In the first StackPanel, each TextBlock stacks vertically below the previous one. This is the default behavior of a
StackPanel, so we don't need to set the Orientation attribute. In the second StackPanel, we want the child
elements to stack horizontally from left to right, so we set the Orientation attribute to "Horizontal". We must also
set the Grid.ColumnSpan attribute to "2", so that the text is centered over the lower Border.
If you run the app now, you'll see something like this.

Step 5: Add an image icon


Finally, let's fill the empty section in our Grid with an image that represents today's weather: something that says
"partially cloudy."
Download the image below and save it as a PNG named "partially-cloudy".

In the Solution Explorer, right click the Assets folder, and select Add -> Existing Item... Find partially-
cloudy.png in the browser that pops up, select it, and click Add.
Next, in MainPage.xaml, add the following Image element below the StackPanels from Step 4.

<Image Margin="20" Source="Assets/partially-cloudy.png"/>

Because we want the Image in the first row and column, we don't need to set its Grid.Row or Grid.Column
attributes, allowing them to default to "0".
And that's it! You've successfully created the layout for a simple weather application. If you run the application by
pressing F5, you should see something like this:
If you like, try experimenting with the layout above, and explore different ways you might represent weather data.

Related articles
For an introduction to designing UWP app layouts, see Introduction to UWP app design
To learn about creating responsive layouts that adapt to different screen sizes, see Define Page Layouts with XAML
UWP style guide

Design guidance and code examples that teach you how to define your UWP app's personality through color,
typography, and motion.

Color
Color provides intuitive wayfinding through an app's various levels of information and serves as a crucial tool
for reinforcing the interaction model.

Icons
Good icons harmonize with typography and with the rest of the design language. They don't mix metaphors,
and they communicate only what's needed, as speedily and simply as possible.

Motion
Purposeful, well-designed animations bring apps to life and make the experience feel crafted and polished. Help
users understand context changes, and tie experiences together with visual transitions.

Sound
Sound helps complete an application's user experience, and gives users that extra audio edge they need to
match the feel of Windows across all platforms.

Typography
As the visual representation of language, typography's main task is to be clear. Its style should never get in the
way of that goal. But typography also has an important role as a layout component, with a powerful effect on
the density and complexity of the design, and on the user's experience of that design.
Fonts
Segoe MDL2 icons

Styling controls
You can customize the appearance of your apps in many ways by using the XAML framework. Styles let you set
control properties and reuse those settings for a consistent appearance across multiple controls.
Color

Color provides intuitive wayfinding through an app's various levels of information and serves as a crucial tool
for reinforcing the interaction model.
In Windows, color is also personal. Users can choose a color and a light or dark theme to be reflected throughout
their experience.

Accent color
The user can pick a single color, called the accent, from Settings > Personalization > Colors. They can choose
from a curated set of 48 color swatches, except on Xbox, which has a palette of 21 TV-safe colors.
Default accent colors

FFB900 E74856 0078D7 0099BC 7A7574 767676

FF8C00 E81123 0063B1 2D7D9A 5D5A58 4C4A48

F7630C EA005E 8E8CD8 00B7C3 68768A 69797E

CA5010 C30052 6B69D6 038387 515C6B 4A5459

DA3B01 E3008C 8764B8 00B294 567C73 647C64

EF6950 BF0077 744DA9 018574 486860 525E54

D13438 C239B3 B146C2 00CC6A 498205 847545

FF4343 9A0089 881798 10893E 107C10 7E735F

Xbox accent colors

EB8C10 ED5588 1073D6 148282 107C10 4C4A4B


EB4910 BF1077 193E91 54A81B 737373 7E715C

E31123 B144C0 1081CA 547A72 677488 724F2F

A21025 744DA9 108272

When users choose an accent color, it appears as part of their system theme. The areas affected are Start, Taskbar,
window chrome, selected interaction states, and hyperlinks within common controls. Each app can further
incorporate the accent color through its typography, backgrounds, and interactions, or override it to preserve
its specific branding.

Color palette building blocks


Once an accent color is selected, light and dark shades of the accent color are created based on HSB values of color
luminosity. Apps can use shade variations to create visual hierarchy and to provide an indication of interaction.
By default, hyperlinks will use the user's accent color. If the page background is a similar color, you can choose to
assign a lighter (or darker) shade of accent to the hyperlinks for better contrast.

The various light/dark shades of the default accent color.


An example of how color logic gets applied to a design spec.

NOTE
In XAML, the primary accent color is exposed as a theme resource named SystemAccentColor. The shades are available as
SystemAccentColorLight3, SystemAccentColorLight2, SystemAccentColorLight1, SystemAccentColorDark1,
SystemAccentColorDark2, and SystemAccentColorDark3. Also available programmatically via UISettings.GetColorValue
and the UIColorType enum.
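
For example, a sketch of reading those values in code (UISettings and UIColorType are in the Windows.UI.ViewManagement namespace):

using Windows.UI;
using Windows.UI.ViewManagement;

var uiSettings = new UISettings();
Color accent = uiSettings.GetColorValue(UIColorType.Accent);
// The shades correspond to the SystemAccentColorLight/Dark theme resources.
Color light1 = uiSettings.GetColorValue(UIColorType.AccentLight1);
Color dark1 = uiSettings.GetColorValue(UIColorType.AccentDark1);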

Color theming
The user may also choose between a light or dark theme for the system. Some apps choose to change their theme
based on the user's preference, while others opt out.
Light theme suits productivity scenarios, such as the suite of apps available with Microsoft Office. Light theme
affords ease of reading long stretches of text over prolonged periods of time-at-task.
Dark theme provides more visible contrast for media-centric apps, and for scenarios where users are presented
with an abundance of videos or imagery. In these scenarios, reading is not necessarily the primary task (though a
movie-watching experience might be), and the app is likely to be used under low-light ambient conditions.
If your app doesn't quite fit either of these descriptions, consider following the system theme to let the user decide
what's right for them.
To make designing for themes easier, Windows provides an additional color palette that automatically adapts to
the theme.
The palette includes Base, Alt, List, and Chrome color families, each available in light-theme and dark-theme
variants.
Changing the theme
You can change themes easily by changing the RequestedTheme property in your App.xaml:

<Application
x:Class="App9.App"
xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
xmlns:local="using:App9"
RequestedTheme="Dark">

</Application>

Removing the RequestedTheme means that your application will honor the users app mode settings, and they
will be able to choose to view your app in either the dark or light theme.
Make sure that you take the theme into consideration when creating your app, as the theme has a big impact on
the look of your app.
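
The theme can also be overridden per element at run time through FrameworkElement.RequestedTheme (a sketch; rootGrid is a hypothetical named element in your page):

// Overrides the app theme for this element and its children only.
rootGrid.RequestedTheme = ElementTheme.Dark;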

Accessibility
Our palette is optimized for screen usage. We recommend maintaining a contrast ratio for text of 4.5:1 against the
background for optimal readability. There are many free tools available to test whether or not your colors pass, like
Contrast Ratio.
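
If you'd rather check contrast programmatically, here's a small helper based on the standard WCAG relative-luminance formula (a sketch, not a platform API):

static double Linearize(double channel) // sRGB channel scaled to 0..1
{
    return channel <= 0.03928 ? channel / 12.92 : Math.Pow((channel + 0.055) / 1.055, 2.4);
}

static double Luminance(Windows.UI.Color c)
{
    return 0.2126 * Linearize(c.R / 255.0)
         + 0.7152 * Linearize(c.G / 255.0)
         + 0.0722 * Linearize(c.B / 255.0);
}

static double ContrastRatio(Windows.UI.Color a, Windows.UI.Color b)
{
    double la = Luminance(a), lb = Luminance(b);
    // Aim for at least 4.5:1 for body text against its background.
    return (Math.Max(la, lb) + 0.05) / (Math.Min(la, lb) + 0.05);
}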

Related articles
XAML Styles
XAML Theme Resources
Icons for UWP apps

Good icons harmonize with typography and with the rest of the design language. They don't mix metaphors, and
they communicate only what's needed, as speedily and simply as possible.

Linear scaling size ramps


16px × 16px, 24px × 24px, 32px × 32px, 48px × 48px

Common shapes
Icons should generally maximize their given space with little padding. These shapes provide starting points for
sizing basic shapes.
Use the shape that corresponds to the icon's orientation and compose around these basic parameters. Icons don't
necessarily need to fill or fit completely inside the shape and may be adjusted as needed to ensure optimal balance.

Circle Square Triangle

Horizontal rectangle Vertical rectangle


Angles
In addition to using the same grid and line weight, icons are constructed with common elements.
Using only these angles in building shapes creates consistency across all our icons, and ensures the icons render
correctly.
These lines can be combined, joined, rotated, and reflected in creating icons.

Ratio 1:1 = 45°
Ratio 1:2 = 26.57° (vertical), 63.43° (horizontal)
Ratio 1:3 = 18.43° (vertical), 71.57° (horizontal)
Ratio 1:4 = 14.04° (vertical), 75.96° (horizontal)

Here are some examples:


Curves
Curved lines are constructed from sections of a whole circle and should not be skewed unless needed to snap to
the pixel grid.

1/4 circle 1/8 circle

Geometric construction
We recommend using only pure geometric shapes when constructing icons.
Filled shapes
Icons can contain filled shapes when needed, but they should not be more than 4px at 32px × 32px. Filled circles
should not be larger than 6px × 6px.
Badges
A "badge" is a generic term used to describe an element added to an icon that's not meant to be integrated with
the base icon element. These usually convey other pieces of information about the icon like status or action. Other
commons terms include: overlay, annotation, or modifier.
Status badges utilize a filled, colored object that is on top of the icon, whereas action badges are integrated into the
icon in the same monochrome style and line weight.

Common status badges Common action badges

Badge color
Color badging should only be used to convey the state of an icon. The colors used in status badging convey specific
emotional messages to the user.

Green - #128B44 Blue - #2C71B9 Yellow - #FDC214

Positive: done, completed Neutral: help, notification Cautionary: alert, warning

Badge position
The default position for any status or action is the bottom right. Only use the other positions when the design will
not allow it.
Badge sizing
Badges should be sized to 10–18 px on a 32 px × 32 px grid.

Related articles
Guidelines for tile and icon assets
Motion for UWP apps

Purposeful, well-designed animations bring apps to life and make the experience feel crafted and polished. Help
users understand context changes, and tie experiences together with visual transitions.

Benefits of animation
Animation is more than making things move. Animation is a tool for creating a physical ecosystem for the user to
live inside and manipulate through touch. The quality of the experience depends on how well the app responds to
the user, and what kind of personality the UI communicates.
Make sure animation serves a purpose in your app. The best Universal Windows Platform (UWP) apps use
animation to bring the UI to life. Animation should:
Give feedback based on the user's behavior.
Teach the user how to interact with the UI.
Indicate how to navigate to previous or succeeding views.
As a user spends more time inside your app, or as tasks in your app become more sophisticated, high-quality
animation becomes increasingly important: it can be used to change how the user perceives their cognitive load
and your app's ease of use. Animation has many other direct benefits:
Animation adds hints towards interaction.
Animation is directional: it moves forward and backward, in and out of content, leaving minimal
"breadcrumb" clues as to how the user arrived at the present view.
Animation can give the impression of enhanced performance.
When network speeds lag or the system pauses to work, animations can make the user's wait feel shorter.
Animation adds personality.
The well-considered Windows Phone UI uses motion to create the impression that an app is concerned with
the here and now, and helps counteract the sensation that the user is burrowing into nested hierarchies.
Animation adds consistency.
Transitions can help users learn how to operate new applications by drawing analogies to tasks that the user
is already familiar with.
Animation adds elegance.
Animations can be used to let the user know that the phone is processing, not frozen, and it can passively
surface new information that the user may be interested in.

In this section
Add and delete: List animations let you insert or remove single or multiple items from a collection, such as a
photo album or a list of search results.
Drag and drop: Use drag-and-drop animations when users move objects, such as moving an item within a list, or
dropping an item on top of another.
Edge: Edge-based animations show or hide UI that originates from the edge of the screen. The show and hide
actions can be initiated either by the user or by the app. The UI can either overlay the app or be part of the main
app surface. If the UI is part of the app surface, the rest of the app might need to be resized to accommodate it.
Fade: Use fade animations to bring items into a view or to take items out of a view. The two common fade
animations are fade-in and fade-out.
Pointer: Pointer animations provide users with visual feedback when the user taps on an item. The pointer down
animation slightly shrinks and tilts the pressed item, and plays when an item is first tapped. The pointer up
animation, which restores the item to its original position, is played when the user releases the pointer.
Pop-up animations: Use pop-up animations to show and hide pop-up UI for flyouts or custom pop-up UI
elements. Pop-up elements are containers that appear over the app's content and are dismissed if the user taps
or clicks outside of the pop-up element.
Reposition: Move elements into a new position.


Sound

There are many ways to use sound to enhance your app. You can use sound to supplement other UI elements,
enabling users to recognize events audibly. Sound can be an effective user interface element for people with visual
disabilities. You can use sound to create an atmosphere that immerses the user; for example, you might play a
whimsical soundtrack in the background of a puzzle game, or use ominous sound effects for a horror/survival game.

Sound Global API


UWP provides an easily accessible sound system that allows you to simply "flip a switch" and get an immersive
audio experience across your entire app.
The ElementSoundPlayer is an integrated sound system within XAML, and when turned on all default controls
will play sounds automatically.

ElementSoundPlayer.State = ElementSoundPlayerState.On;

The ElementSoundPlayer has three different states: On, Off, and Auto.
If set to Off, no matter where your app is run, sound will never play. If set to On, sounds for your app will play on
every platform.
Sound for TV and Xbox
Sound is a key part of the 10-foot experience, and by default, the ElementSoundPlayer's state is Auto, meaning
that you will only get sound when your app is running on Xbox. To understand more about designing for Xbox and
TV, please see Designing for Xbox and TV.

Sound Volume Override


All sounds within the app can be dimmed with the Volume control. However, sounds within the app cannot get
louder than the system volume.
To set the app volume level, call:

ElementSoundPlayer.Volume = 0.5f;

Where maximum volume (relative to system volume) is 1.0, and minimum is 0.0 (essentially silent).

Control Level State


If a control's default sound is not desired, it can be disabled. This is done through the ElementSoundMode on the
control.
The ElementSoundMode has two states: Off and Default. When not set, it is Default. If set to Off, every sound
that the control plays will be muted, except for the Focus sound.

<Button Name="ButtonName" Content="More Info" ElementSoundMode="Off"/>


ButtonName.ElementSoundMode = ElementSoundMode.Off;

Is This The Right Sound?


When creating a custom control, or changing an existing control's sound, it is important to understand the usages
of all the sounds the system provides.
Each sound relates to a certain basic user interaction, and although sounds can be customized to play on any
interaction, this section serves to illustrate the scenarios where sounds should be used to maintain a consistent
experience across all UWP apps.
Invoking an Element
The most common control-triggered sound in our system today is the Invoke sound. This sound plays when a
user invokes a control through a tap/click/enter/space or press of the 'A' button on a gamepad.
Typically, this sound is only played when a user explicitly targets a simple control or control part through an input
device.
To play this sound from any control event, simply call the Play method from ElementSoundPlayer and pass in
ElementSoundKind.Invoke:

ElementSoundPlayer.Play(ElementSoundKind.Invoke);

Showing & Hiding Content


There are many flyouts, dialogs and dismissible UIs in XAML, and any action that triggers one of these overlays
should call a Show or Hide sound.
When an overlay content window is brought into view, the Show sound should be called:

ElementSoundPlayer.Play(ElementSoundKind.Show);

Conversely, when an overlay content window is closed (or is light-dismissed), the Hide sound should be called:

ElementSoundPlayer.Play(ElementSoundKind.Hide);

Navigation Within a Page


When navigating between panels or views within an app's page (see Hub or Tabs and Pivots), there is typically
bidirectional movement, meaning you can move to the next view/panel or the previous one without leaving the
current app page you're on.
The audio experience around this navigation concept is encompassed by the MovePrevious and MoveNext
sounds.
When moving to a view/panel that is considered the next item in a list, call:

ElementSoundPlayer.Play(ElementSoundKind.MoveNext);

And when moving to a view/panel that is considered the previous item in a list, call:

ElementSoundPlayer.Play(ElementSoundKind.MovePrevious);
Back Navigation
When navigating from the current page to the previous page within an app, the GoBack sound should be called:

ElementSoundPlayer.Play(ElementSoundKind.GoBack);

Focusing on an Element
The Focus sound is the only implicit sound in our system, meaning the user isn't directly interacting with anything
but still hears a sound.
Focusing happens when a user navigates through an app, whether with the gamepad, keyboard, remote, or
Kinect. Typically the Focus sound does not play on PointerEntered or mouse hover events.
To set up a control to play the Focus sound when your control receives focus, call:

ElementSoundPlayer.Play(ElementSoundKind.Focus);

Cycling Focus Sounds


As an added feature to calling ElementSoundKind.Focus, the sound system will, by default, cycle through 4 different
sounds on each navigation trigger, meaning that no two identical focus sounds will play right after each other.
The purpose behind this cycling feature is to keep the focus sounds from becoming monotonous and from being
noticeable by the user; focus sounds will be played most often and therefore should be the most subtle.

Related articles
Designing for Xbox and TV
Typography

As the visual representation of language, typography's main task is to be clear. Its style should never get in the way
of that goal. But typography also has an important role as a layout component, with a powerful effect on the
density and complexity of the design, and on the user's experience of that design.

Typeface
We've selected Segoe UI for use on all Microsoft digital designs. Segoe UI provides a wide range of characters and
is designed to maintain optimal legibility across sizes and pixel densities. It offers a clean, light, and open aesthetic
that complements the content of the system.

Weights
We approach typography with an eye to simplicity and efficiency. We choose to use one typeface, a minimum of
weights and sizes, and a clear hierarchy. Positioning and alignment follow the default style for the given language.
In English the sequence runs left to right, top to bottom. Relationships between text and images are clear and
straightforward.
Line spacing

Line spacing should be calculated at 125% of the font size, rounding to the closest multiple of four when necessary.
For example, with 15px Segoe UI, 125% of 15px is 18.75px. We recommend rounding up and setting line height to
20px to stay on the 4px grid. This ensures a good reading experience and adequate space for diacritical marks. See
the Type ramp section below for specific examples.
When stacking larger type on top of smaller type, the distance from the last baseline of the larger type to the first
baseline of the smaller type should be equal to the larger type's line height.

In XAML, this is accomplished by stacking two TextBlocks and setting the appropriate margin.

<StackPanel Width="200">
<!-- Setting a bottom margin of 3px on the header
puts the baseline of the body text exactly 24px
below the baseline of the header. 24px is the
recommended line height for a 20px font size,
which is what's set in SubtitleTextBlockStyle.
The bottom margin will be different for
different font size pairings. -->
<TextBlock
Style="{StaticResource SubtitleTextBlockStyle}"
Margin="0,0,0,3"
Text="Header text" />
<TextBlock
Style="{StaticResource BodyTextBlockStyle}"
TextWrapping="Wrap"
Text="This line of text should be positioned where the above header would have wrapped." />
</StackPanel>

Kerning and tracking


Segoe is a humanist typeface with a soft, friendly appearance; it has organic, open forms based on handwritten
text. To ensure optimum legibility and maintain its humanist integrity, the kerning and tracking settings must have
specific values. Kerning should be set to metrics and tracking should be set to 0.

Word and letter spacing


Similar to kerning and tracking, word spacing and letter spacing use specific settings to ensure optimum legibility
and humanist integrity. Word spacing by default is always 100% and letter spacing should be set to 0.
NOTE
In a XAML text control, use Typography.Kerning to control kerning and FontStretch to control tracking. By default,
Typography.Kerning is set to true and FontStretch is set to Normal, which are the recommended values.
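A minimal C# sketch of those same settings, assuming a TextBlock named textBlock is defined elsewhere in XAML:

// A minimal sketch: apply the recommended kerning and tracking values
// from code-behind. Assumes "textBlock" is a TextBlock defined in XAML.
using Windows.UI.Text;              // FontStretch
using Windows.UI.Xaml.Controls;     // TextBlock
using Windows.UI.Xaml.Documents;    // Typography attached properties

void ApplyRecommendedTypography(TextBlock textBlock)
{
    Typography.SetKerning(textBlock, true);      // kerning set to metrics
    textBlock.FontStretch = FontStretch.Normal;  // tracking set to 0
}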

Alignment
Generally, we recommend that visual elements and columns of type be left-aligned. In most instances, this
flush-left and ragged-right approach provides consistent anchoring of the content and a uniform layout.

Line endings
When typography is not positioned as flush left and ragged right, try to ensure even line endings and avoid
hyphenation.

Paragraphs
To provide aligned column edges, paragraphs should be indicated by skipping a line without indentation.
Character count
If a line is too short, the eye will have to travel left and right too often, breaking the reader's rhythm. If possible,
50-60 letters per line is best for ease of reading.
Segoe provides a wide range of characters and is designed to maintain optimal legibility in both small and large
sizes as well as low and high pixel densities. Using the optimal number of letters in a text column line ensures good
legibility in an application.
Lines that are too long will strain the eye and may disorient the user. Lines that are too short force the reader's eye
to travel too much and can cause fatigue.

Hanging text alignment


The horizontal alignment of icons with text can be handled in a number of ways depending on the size of the icon
and the amount of text. When the text, either single or multiple lines, fits within the height of the icon, the text
should be vertically centered.
Once the height of the text extends beyond the height of the icon, the first line of text should align vertically with
the icon, and the additional text should flow naturally below. When using characters with larger cap, ascender, and
descender heights, care should be taken to observe the same alignment guidance.

NOTE
XAML's TextBlock.TextLineBounds property provides access to the cap height and baseline font metrics. It can be used to
visually vertically center or top-align type.
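For example, a minimal sketch, assuming a hypothetical TextBlock named headerText, that trims the line box to the cap height so the type can be top-aligned precisely:

// A minimal sketch: trim the line box so alignment is measured from the
// cap height rather than the full font box. "headerText" is a
// hypothetical TextBlock defined elsewhere in XAML.
using Windows.UI.Xaml;
using Windows.UI.Xaml.Controls;

void AlignToCapHeight(TextBlock headerText)
{
    headerText.TextLineBounds = TextLineBounds.TrimToCapHeight;
}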

Clipping and ellipses


Clip by default; assume that text will wrap unless the redline specifies otherwise. When using non-wrapping text,
we recommend clipping rather than using ellipses. Clipping can occur at the edge of the container, at the edge of
the device, at the edge of a scrollbar, and so on.
Exceptions: for containers that are not well defined (for example, no differentiating background color),
non-wrapping text can be redlined to use the ellipsis (...).
Type ramp
The type ramp establishes a crucial design relationship from headlines to body text and ensures a clear and
understandable hierarchy between the different levels. This hierarchy builds a structure which enables users to
easily navigate through written communication.

All sizes are in effective pixels. For more details, see Intro to UWP app design.
NOTE
Most levels of the ramp are available as XAML static resources that follow the *TextBlockStyle naming convention
(for example, HeaderTextBlockStyle).
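For example, a minimal sketch that applies one of these built-in styles from code-behind; in XAML you would set Style="{StaticResource HeaderTextBlockStyle}" instead:

// A minimal sketch: apply a type-ramp style from the built-in XAML
// resources to a TextBlock at run time.
using Windows.UI.Xaml;
using Windows.UI.Xaml.Controls;

void ApplyHeaderStyle(TextBlock textBlock)
{
    textBlock.Style = (Style)Application.Current.Resources["HeaderTextBlockStyle"];
}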

Primary and secondary text


To create additional hierarchy beyond the type ramp, set secondary text to 60% opacity. In the theming color
palette, you would use BaseMedium. Primary text should always be at 100% opacity, or BaseHigh.
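A minimal sketch of this treatment, assuming hypothetical primaryText and secondaryText TextBlocks defined in XAML:

// A minimal sketch: primary text at full opacity, secondary text dimmed
// to 60%. "primaryText" and "secondaryText" are hypothetical TextBlocks
// defined elsewhere in XAML.
using Windows.UI.Xaml.Controls;

void ApplyTextHierarchy(TextBlock primaryText, TextBlock secondaryText)
{
    primaryText.Opacity = 1.0;   // BaseHigh
    secondaryText.Opacity = 0.6; // BaseMedium
}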

All caps titles


Certain page titles should be in ALL CAPS to add yet another dimension of hierarchy. These titles should use
BaseAlt with the character spacing set to 75 thousandths of an em. This treatment may also be used to help with
app navigation.
However, proper names change their meaning when capitalized in certain languages, so any page titles based on
names or user input should not be converted to all caps.
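A minimal sketch, assuming a hypothetical TextBlock named pageTitle whose text is a fixed label rather than a proper name or user input:

// A minimal sketch: an all-caps page title with character spacing of 75
// thousandths of an em (CharacterSpacing is measured in 1/1000 em).
// "pageTitle" is a hypothetical TextBlock defined elsewhere in XAML.
using Windows.UI.Xaml.Controls;

void StyleAllCapsTitle(TextBlock pageTitle)
{
    pageTitle.Text = pageTitle.Text.ToUpperInvariant();
    pageTitle.CharacterSpacing = 75;
}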

Do's and don'ts


Use Body for most text
Use Base for titles when space is constrained
Incorporate SubtitleAlt to create contrast and hierarchy by emphasizing top level content
Don't use Caption for long strings or any primary action
Don't use Header or Subheader if text needs to wrap
Don't combine Subtitle and SubtitleAlt on the same page

Related articles
Text controls
Fonts
Segoe MDL2 icons
Fonts for UWP apps

This article lists the recommended fonts for UWP apps. These fonts are guaranteed to be available in all Windows
10 editions that support UWP apps.

Important APIs
FontFamily property

The UWP typography guide recommends that apps use the Segoe UI font, and although Segoe UI is a great choice
for most apps, you don't have to use it for everything. You might use other fonts for certain scenarios, such as
reading, or when displaying text in certain non-English languages.

Sans-serif fonts
Sans-serif fonts are a great choice for headings and UI elements.

FONT-FAMILY | STYLES | NOTES

Arial | Regular, Italic, Bold, Bold Italic, Black | Supports European and Middle Eastern scripts (Latin, Greek, Cyrillic, Arabic, Armenian, and Hebrew). Black weight supports European scripts only.

Calibri | Regular, Italic, Bold, Bold Italic, Light, Light Italic | Supports European and Middle Eastern scripts (Latin, Greek, Cyrillic, Arabic, and Hebrew). Arabic available in the uprights only.

Consolas | Regular, Italic, Bold, Bold Italic | Fixed-width font that supports European scripts (Latin, Greek, and Cyrillic).

Segoe UI | Regular, Italic, Light Italic, Black Italic, Bold, Bold Italic, Light, Semilight, Semibold, Black | User-interface font for European and Middle Eastern scripts (Arabic, Armenian, Cyrillic, Georgian, Greek, Hebrew, Latin), and also Lisu script.

Segoe UI Historic | Regular | Fallback font for historic scripts.

Selawik | Regular, Semilight, Light, Bold, Semibold | An open-source font that's metrically compatible with Segoe UI, intended for apps on other platforms that don't want to bundle Segoe UI. Get Selawik on GitHub.

Verdana | Regular, Italic, Bold, Bold Italic | Supports European scripts (Latin, Greek, Cyrillic, and Armenian).

Serif fonts
Serif fonts are good for presenting large amounts of text.

FONT-FAMILY | STYLES | NOTES

Cambria | Regular | Serif font that supports European scripts (Latin, Greek, Cyrillic).

Courier New | Regular, Italic, Bold, Bold Italic | Serif fixed-width font that supports European and Middle Eastern scripts (Latin, Greek, Cyrillic, Arabic, Armenian, and Hebrew).

Georgia | Regular, Italic, Bold, Bold Italic | Supports European scripts (Latin, Greek, and Cyrillic).

Times New Roman | Regular, Italic, Bold, Bold Italic | Legacy font that supports European scripts (Latin, Greek, Cyrillic, Arabic, Armenian, Hebrew).

Symbols and icons


FONT-FAMILY | STYLES | NOTES

Segoe MDL2 Assets | Regular | User-interface font for app icons. For more info, see the Segoe MDL2 assets article.

Segoe UI Emoji | Regular |

Segoe UI Symbol | Regular | Fallback font for symbols.

Fonts for non-Latin languages


Although many of these fonts provide Latin characters, they are designed and recommended for use with particular non-Latin scripts.

FONT-FAMILY | STYLES | NOTES

Ebrima | Regular, Bold | User-interface font for African scripts (Ethiopic, N'Ko, Osmanya, Tifinagh, Vai).

Gadugi | Regular, Bold | User-interface font for North American scripts (Canadian Syllabics, Cherokee).

Javanese Text | Regular | Fallback font for Javanese script.

Leelawadee UI | Regular, Semilight, Bold | User-interface font for Southeast Asian scripts (Buginese, Lao, Khmer, Thai).

Malgun Gothic | Regular | User-interface font for Korean.

Microsoft Himalaya | Regular | Fallback font for Tibetan script.

Microsoft JhengHei UI | Regular, Bold, Light | User-interface font for Traditional Chinese.

Microsoft New Tai Lue | Regular | Fallback font for New Tai Lue script.

Microsoft PhagsPa | Regular | Fallback font for Phags-pa script.

Microsoft Tai Le | Regular | Fallback font for Tai Le script.

Microsoft YaHei UI | Regular, Bold, Light | User-interface font for Simplified Chinese.

Microsoft Yi Baiti | Regular | Fallback font for Yi script.

Mongolian Baiti | Regular | Fallback font for Mongolian script.

MV Boli | Regular | Fallback font for Thaana script.

Myanmar Text | Regular | Fallback font for Myanmar script.

Nirmala UI | Regular, Semilight, Bold | User-interface font for South Asian scripts (Bangla, Devanagari, Gujarati, Gurmukhi, Kannada, Malayalam, Odia, Ol Chiki, Sinhala, Sora Sompeng, Tamil, Telugu).

SimSun | Regular | A legacy Chinese UI font.

Yu Gothic | Medium |

Yu Gothic UI | Regular | User-interface font for Japanese.

Globalizing/localizing fonts
Use the LanguageFont font-mapping APIs for programmatic access to the recommended font family, size, weight,
and style for a particular language. The LanguageFont object provides access to the correct font info for various
categories of content including UI headers, notifications, body text, and user-editable document body fonts. For
more info, see Adjusting layout and fonts to support globalization.
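For example, a minimal sketch that looks up the recommended UI font for Japanese and applies it to a TextBlock (assumed to be defined elsewhere in XAML):

// A minimal sketch: query the recommended UI font for a BCP-47 language
// tag and apply it to a TextBlock defined elsewhere in XAML.
using Windows.Globalization.Fonts;
using Windows.UI.Xaml.Controls;
using Windows.UI.Xaml.Media;

void ApplyJapaneseUIFont(TextBlock textBlock)
{
    var fonts = new LanguageFontGroup("ja-JP");
    LanguageFont uiFont = fonts.UITextFont;
    textBlock.FontFamily = new FontFamily(uiFont.FontFamily);
    textBlock.FontWeight = uiFont.FontWeight;
}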

Get the samples


Downloadable fonts sample
UI basics sample
Line spacing with DirectWrite sample

Related articles
Adjusting layout and fonts to support globalization
Segoe MDL2
Text controls
XAML theme resources
Segoe MDL2 icons

This article lists the icons provided by the Segoe MDL2 Assets font.

Important APIs
Symbol enumeration (XAML)

About Segoe MDL2 Assets


With the release of Windows 10, the Segoe MDL2 Assets font replaced the Windows 8/8.1 Segoe UI Symbol icon
font. (Segoe UI Symbol will still be available as a "legacy" resource, but we recommend updating your app to use
the new Segoe MDL2 Assets.)
Most of the icons and UI controls included in the Segoe MDL2 Assets font are mapped to the Private Use Area of
Unicode (PUA). The PUA allows font developers to assign private Unicode values to glyphs that don't map to
existing code points. This is useful when creating a symbol font, but it creates an interoperability problem. If the
font is not available, the glyphs won't show up. Use these glyphs only when you can explicitly specify the Segoe
MDL2 Assets font. If you are working with tiles, you can't use these glyphs because you can't specify the tile font
and PUA glyphs are not available via font fallback.
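For example, a minimal sketch that renders a PUA glyph while naming the font explicitly; U+E700 is GlobalNavButton in the icon list below:

// A minimal sketch: render a PUA glyph with the font named explicitly
// so it cannot fall back to another font.
using Windows.UI.Xaml.Controls;
using Windows.UI.Xaml.Media;

FontIcon CreateMenuIcon()
{
    return new FontIcon
    {
        FontFamily = new FontFamily("Segoe MDL2 Assets"),
        Glyph = "\uE700" // GlobalNavButton, from the icon list below
    };
}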
Unlike with Segoe UI Symbol, the icons in the Segoe MDL2 Assets font are not intended for use in-line with text.
This means that some older "tricks" like the progressive disclosure arrows no longer apply. Likewise, since all of
the new icons are sized and positioned the same, they do not have to be made with zero width; we have just made
sure they work as a set. Ideally, you can overlay two icons that were designed as a set and they will fall into place.
We may do this to allow colorization in the code. For example, U+EA3A and U+EA3B were created for the Start tile
Badge status. Because these are already centered, the circle fill can be colored for different states.

Layering and mirroring


All glyphs in Segoe MDL2 Assets have the same fixed width with a consistent height and left origin point, so
layering and colorization effects can be achieved by drawing glyphs directly on top of each other. This example
shows a black outline drawn on top of the zero-width red heart.

Many of the icons also have mirrored forms available for use in languages that use right-to-left text directionality
such as Arabic, Farsi, and Hebrew.

Symbol enumeration
If you are developing an app in C#/VB/C++ and XAML, you can use the Symbol enumeration to use icons from
the Segoe MDL2 Assets font.
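For example, a minimal sketch that uses the Symbol enumeration with a SymbolIcon (an AppBarButton's Icon property accepts the same value):

// A minimal sketch: SymbolIcon maps a Symbol enumeration value to the
// corresponding Segoe MDL2 Assets glyph; no font name is required.
using Windows.UI.Xaml.Controls;

SymbolIcon CreatePlayIcon()
{
    return new SymbolIcon(Symbol.Play);
}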

How do I get this font?


To obtain Segoe MDL2 Assets, you must install Windows 10.

Icon list
UNICODE POINT DESCRIPTION

E001 CheckMarkLegacy

E002 CheckboxFillLegacy

E003 CheckboxLegacy

E004 CheckboxIndeterminateLegacy

E005 CheckboxCompositeReversedLegacy

E006 HeartLegacy

E007 HeartBrokenLegacy

E008 CheckMarkZeroWidthLegacy

E009 CheckboxFillZeroWidthLegacy

E00A RatingStarFillZeroWidthLegacy

E00B HeartFillZeroWidthLegacy

E00C HeartBrokenZeroWidthLegacy

E00E ScrollChevronLeftLegacy

E00F ScrollChevronRightLegacy

E010 ScrollChevronUpLegacy

E011 ScrollChevronDownLegacy

E012 ChevronLeft3Legacy
E013 ChevronRight3Legacy

E014 ChevronUp3Legacy

E015 ChevronDown3Legacy

E016 ScrollChevronLeftBoldLegacy

E017 ScrollChevronRightBoldLegacy

E018 ScrollChevronUpBoldLegacy

E019 ScrollChevronDownBoldLegacy

E052 RevealPasswordLegacy

E07F EaseOfAccessLegacy

E081 CheckmarkListviewLegacy

E082 RatingStarFillReducedPaddingHTMLLegacy

E087 KeyboardStandardLegacy

E08F KeyboardSplitLegacy

E094 SearchboxLegacy

E096 ChevronLeft1Legacy

E097 ChevronRight1Legacy

E098 ChevronUp1Legacy

E099 ChevronDown1Legacy

E09A ChevronLeft2Legacy
E09B ChevronRight2Legacy

E09C ChevronUp2Legacy

E09D ChevronDown2Legacy

E09E ChevronLeft4Legacy

E09F ChevronRight4Legacy

E0A0 ChevronUp4Legacy

E0A1 ChevronDown4Legacy

E0A2 CheckboxCompositeLegacy

E0A5 HeartFillLegacy

E0A6 BackBttnArrow42Legacy

E0AB BackBttnMirroredArrow42Legacy

E0AD BackBttnMirroredArrow20Legacy

E0AE ArrowHTMLLegacyMirrored

E0B4 RatingStarFillLegacy

E0B5 RatingStarFillSmallLegacy

E0B8 SemanticZoomLegacy

E0C4 BackBttnArrow20Legacy

E0D5 ArrowHTMLLegacy

E0E2 ChevronFlipLeftLegacy
E0E3 ChevronFlipRightLegacy

E0E4 ChevronFlipUpLegacy

E0E5 ChevronFlipDownLegacy

E0E7 CheckmarkMenuLegacy

E100 PreviousLegacy

E101 NextLegacy

E102 PlayLegacy

E103 PauseLegacy

E104 EditLegacy

E105 SaveLegacy

E106 ClearLegacy

E107 DeleteLegacy

E108 RemoveLegacy

E109 AddLegacy

E10A CancelLegacy

E10B AcceptLegacy

E10C MoreLegacy

E10D RedoLegacy

E10E UndoLegacy
E10F HomeLegacy

E110 UpLegacy

E111 ForwardLegacy

E112 BackLegacy

E113 FavoriteLegacy

E114 CameraLegacy

E115 SettingsLegacy

E116 VideoLegacy

E117 SyncLegacy

E118 DownloadLegacy

E119 MailLegacy

E11A FindLegacy

E11B HelpLegacy

E11C UploadLegacy

E11D EmojiLegacy

E11E TwoPageLegacy

E11F LeaveChatLegacy

E120 MailForwardLegacy

E121 ClockLegacy
E122 SendLegacy

E123 CropLegacy

E124 RotateCameraLegacy

E125 PeopleLegacy

E126 ClosePaneLegacy

E127 OpenPaneLegacy

E128 WorldLegacy

E129 FlagLegacy

E12A PreviewLinkLegacy

E12B GlobeLegacy

E12C TrimLegacy

E12D AttachCameraLegacy

E12E ZoomInLegacy

E12F BookmarksLegacy

E130 DocumentLegacy

E131 ProtectedDocumentLegacy

E132 PageFillLegacy

E133 MultiSelectLegacy

E134 CommentLegacy
E135 MailFillLegacy

E136 ContactInfoLegacy

E137 HangUpLegacy

E138 ViewAllLegacy

E139 MapPinLegacy

E13A PhoneLegacy

E13B VideoChatLegacy

E13C SwitchLegacy

E13D ContactLegacy

E13E RenameLegacy

E13F ExpandTileLegacy

E140 ReduceTileLegacy

E141 PinLegacy

E142 MusicInfoLegacy

E143 GoLegacy

E144 KeyBoardLegacy

E145 DockLeftLegacy

E146 DockRightLegacy

E147 DockBottomLegacy
E148 RemoteLegacy

E149 RefreshLegacy

E14A RotateLegacy

E14B ShuffleLegacy

E14C ListLegacy

E14D ShopLegacy

E14E SelectAllLegacy

E14F OrientationLegacy

E150 ImportLegacy

E151 ImportAllLegacy

E152 ShowAllFiles3Legacy

E153 ShowAllFiles1Legacy

E154 ShowAllFilesLegacy

E155 BrowsePhotosLegacy

E156 WebcamLegacy

E158 PictureLegacy

E159 SaveLocalLegacy

E15A CaptionLegacy

E15B StopLegacy
E15C ShowResultsLegacy

E15D VolumeLegacy

E15E RepairLegacy

E15F MessageLegacy

E160 PageLegacy

E161 CalendarDayLegacy

E162 CalendarWeekLegacy

E163 CalendarLegacy

E164 CharactersLegacy

E165 MailReplyAllLegacy

E166 ReadLegacy

E167 LinkLegacy

E168 AccountsLegacy

E169 ShowBccLegacy

E16A HideBccLegacy

E16B CutLegacy

E16C AttachLegacy

E16D PasteLegacy

E16E FilterLegacy
E16F CopyLegacy

E170 Emoji2Legacy

E171 ImportantLegacy

E172 MailReplyLegacy

E173 SlideshowLegacy

E174 SortLegacy

E175 ListLegacyMirrored

E176 ExpandTileLegacyMirrored

E177 ReduceTileLegacyMirrored

E178 ManageLegacy

E179 AllAppsLegacy

E17A DisconnectDriveLegacy

E17B MapDriveLegacy

E17C NewWindowLegacy

E17D OpenWithLegacy

E181 ContactPresenceLegacy

E182 PriorityLegacy

E183 UploadSkyDriveLegacy

E184 GotoTodayLegacy
E185 FontLegacy

E186 FontColorLegacy

E187 Contact2Legacy

E188 FolderLegacy

E189 AudioLegacy

E18A PlaceFolderLegacy

E18B ViewLegacy

E18C SetlockScreenLegacy

E18D SetTileLegacy

E18E CCJapanLegacy

E18F CCEuroLegacy

E190 CCLegacy

E191 StopSlideshowLegacy

E192 PermissionsLegacy

E193 HighlightLegacy

E194 DisableUpdatesLegacy

E195 UnfavoriteLegacy

E196 UnpinLegacy

E197 OpenLocalLegacy
E198 MuteLegacy

E199 ItalicLegacy

E19A UnderlineLegacy

E19B BoldLegacy

E19C MoveToFolderLegacy

E19D LikeDislikeLegacy

E19E DislikeLegacy

E19F LikeLegacy

E1A0 AlignRightLegacy

E1A1 AlignCenterLegacy

E1A2 AlignLeftLegacy

E1A3 ZoomLegacy

E1A4 ZoomOutLegacy

E1A5 OpenFileLegacy

E1A6 OtherUserLegacy

E1A7 AdminLegacy

E1A8 MailForwardLegacyMirrored

E1AA GoLegacyMirrored

E1AB DockLeftLegacyMirrored
E1AC DockRightLegacyMirrored

E1AD ImportLegacyMirrored

E1AE ImportAllLegacyMirrored

E1AF MailReplyLegacyMirrored

E1B0 ItalicCLegacy

E1B1 BoldGLegacy

E1B2 UnderlineSLegacy

E1B3 BoldFLegacy

E1B4 ItalicKLegacy

E1B5 UnderlineULegacy

E1B6 ItalicILegacy

E1B7 BoldNLegacy

E1B8 UnderlineRussianLegacy

E1B9 BoldRussionLegacy

E1BA FontStyleKoreanLegacy

E1BB UnderlineLKoreanLegacy

E1BC ItalicKoreanLegacy

E1BD BoldKoreanLegacy

E1BE FontColorKoreanLegacy
E1BF ClosePaneLegacyMirrored

E1C0 OpenPaneLegacyMirrored

E1C2 EditLegacyMirrored

E1C3 StreetLegacy

E1C4 MapLegacy

E1C5 ClearSelectionLegacy

E1C6 FontDecreaseLegacy

E1C7 FontIncreaseLegacy

E1C8 FontSizeLegacy

E1C9 CellPhoneLegacy

E1CA ReshareLegacy

E1CB TagLegacy

E1CC RepeatOneLegacy

E1CD RepeatAllLegacy

E1CE OutlineStarLegacy

E1CF SolidStarLegacy

E1D0 CalculatorLegacy

E1D1 DirectionsLegacy

E1D2 LocationLegacy
E1D3 LibraryLegacy

E1D4 PhoneBookLegacy

E1D5 MemoLegacy

E1D6 MicrophoneLegacy

E1D7 PostUpdateLegacy

E1D8 BackToWindowLegacy

E1D9 FullScreenLegacy

E1DA NewFolderLegacy

E1DB CalendarReplyLegacy

E1DC CalendarLegacyMirrored

E1DD UnsyncFolderLegacy

E1DE ReportHackedLegacy

E1DF SyncFolderLegacy

E1E0 BlockContactLegacy

E1E1 SwitchAppsLegacy

E1E2 AddFriendLegacy

E1E3 TouchPointerLegacy

E1E4 GoToStartLegacy

E1E5 ZeroBarsLegacy
E1E6 OneBarLegacy

E1E7 TwoBarsLegacy

E1E8 ThreeBarsLegacy

E1E9 FourBarsLegacy

E1EA ItalicRussianLegacy

E1EC AllAppsLegacyMirrored

E1ED OpenWithLegacyMirrored

E1EE BookmarksLegacyMirrored

E1EF MultiSelectLegacyMirrored

E1F1 ShowResultsLegacyMirrored

E1F2 MailReplyAllLegacyMirrored

E1F3 HelpLegacyMirrored

E1F4 ClearSelectionLegacyMirrored

E1F5 RecordLegacy

E1F6 LockLegacy

E1F7 UnlockLegacy

E1FD DownLegacy

E206 CommentInlineLegacy

E208 FavoriteInlineLegacy
E209 LikeInlineLegacy

E20A VideoInlineLegacy

E20B MailMessageLegacy

E211 PC1Legacy

E212 DevicesLegacy

E224 RatingStarLegacy

E228 ChevronDownSmLegacy

E248 ReplyLegacy

E249 Favorite2Legacy

E24A Unfavorite2Legacy

E25A MobileContactLegacy

E25B BlockedLegacy

E25C TypingIndicatorLegacy

E25D PresenceChickletVideoLegacy

E25E PresenceChickletLegacy

E26B ChevronRightSmLegacy

E26C ChevronLeftSmLegacy

E28F SaveAsLegacy

E290 DecreaseIndentLegacy
E291 IncreaseIndentLegacy

E292 BulletedListLegacy

E294 ScanLegacy

E295 PreviewLegacy

E297 DecreaseIndentLegacyMirrored

E298 IncreaseIndentLegacyMirrored

E299 BulletedListLegacyMirrored

E29B PlayOnLegacy

E2AC ResolutionLegacy

E2AD LengthLegacy

E2AE LayoutLegacy

E2AF Contact3Legacy

E2B0 TypeLegacy

E2B1 ColorLegacy

E2B2 SizeLegacy

E2B3 ReturnToWindowLegacy

E2B4 OpenInNewWindowLegacy

E2F6 PrintLegacy

E2F7 Printer3DLegacy
E700 GlobalNavButton

E701 Wifi

E702 Bluetooth

E703 Connect

E704 InternetSharing

E705 VPN

E706 Brightness

E707 MapPin

E708 QuietHours

E709 Airplane

E70A Tablet

E70B QuickNote

E70C RememberedDevice

E70D ChevronDown

E70E ChevronUp

E70F Edit

E710 Add

E711 Cancel

E712 More
E713 Settings

E714 Video

E715 Mail

E716 People

E717 Phone

E718 Pin

E719 Shop

E71A Stop

E71B Link

E71C Filter

E71D AllApps

E71E Zoom

E71F ZoomOut

E720 Microphone

E721 Search

E722 Camera

E723 Attach

E724 Send

E725 SendFill
E726 WalkSolid

E727 InPrivate

E728 FavoriteList

E729 PageSolid

E72A Forward

E72B Back

E72C Refresh

E72D Share

E72E Lock

E730 ReportHacked

E734 FavoriteStar

E735 FavoriteStarFill

E738 Remove

E739 Checkbox

E73A CheckboxComposite

E73B CheckboxFill

E73C CheckboxIndeterminate

E73D CheckboxCompositeReversed

E73E CheckMark
E73F BackToWindow

E740 FullScreen

E741 ResizeTouchLarger

E742 ResizeTouchSmaller

E743 ResizeMouseSmall

E744 ResizeMouseMedium

E745 ResizeMouseWide

E746 ResizeMouseTall

E747 ResizeMouseLarge

E748 SwitchUser

E749 Print

E74A Up

E74B Down

E74C OEM

E74D Delete

E74E Save

E74F Mute

E750 BackSpaceQWERTY

E751 ReturnKey
E752 UpArrowShiftKey

E753 Cloud

E754 Flashlight

E755 RotationLock

E756 CommandPrompt

E759 SIPMove

E75A SIPUndock

E75B SIPRedock

E75C EraseTool

E75D UnderscoreSpace

E75E GripperTool

E75F Dialpad

E760 PageLeft

E761 PageRight

E762 MultiSelect

E763 KeyboardLeftHanded

E764 KeyboardRightHanded

E765 KeyboardClassic

E766 KeyboardSplit
E767 Volume

E768 Play

E769 Pause

E76B ChevronLeft

E76C ChevronRight

E76D InkingTool

E76E Emoji2

E76F GripperBarHorizontal

E770 System

E771 Personalize

E772 Devices

E773 SearchAndApps

E774 Globe

E775 TimeLanguage

E776 EaseOfAccess

E777 UpdateRestore

E778 HangUp

E779 ContactInfo

E77A Unpin
E77B Contact

E77C Memo

E77F Paste

E780 PhoneBook

E781 LEDLight

E783 Error

E784 GripperBarVertical

E785 Unlock

E786 Slideshow

E787 Calendar

E788 GripperResize

E789 Megaphone

E78A Trim

E78B NewWindow

E78C SaveLocal

E790 Color

E791 DataSense

E792 SaveAs

E793 Light
E799 AspectRatio

E7A5 DataSenseBar

E7A6 Redo

E7A7 Undo

E7A8 Crop

E7AC OpenWith

E7AD Rotate

E7B5 SetlockScreen

E7B7 MapPin2

E7B8 Package

E7BA Warning

E7BC ReadingList

E7BE Education

E7BF ShoppingCart

E7C0 Train

E7C1 Flag

E7C3 Page

E7C4 Multitask

E7C5 BrowsePhotos
E7C6 HalfStarLeft

E7C7 HalfStarRight

E7C8 Record

E7C9 TouchPointer

E7DE LangJPN

E7E3 Ferry

E7E6 Highlight

E7E7 ActionCenterNotification

E7E8 PowerButton

E7EA ResizeTouchNarrower

E7EB ResizeTouchShorter

E7EC DrivingMode

E7ED RingerSilent

E7EE OtherUser

E7EF Admin

E7F0 CC

E7F1 SDCard

E7F2 CallForwarding

E7F3 SettingsDisplaySound
E7F4 TVMonitor

E7F5 Speakers

E7F6 Headphone

E7F7 DeviceLaptopPic

E7F8 DeviceLaptopNoPic

E7F9 DeviceMonitorRightPic

E7FA DeviceMonitorLeftPic

E7FB DeviceMonitorNoPic

E7FC Game

E7FD HorizontalTabKey

E802 StreetsideSplitMinimize

E803 StreetsideSplitExpand

E804 Car

E805 Walk

E806 Bus

E809 TiltUp

E80A TiltDown

E80C RotateMapRight

E80D RotateMapLeft
E80F Home

E811 ParkingLocation

E812 MapCompassTop

E813 MapCompassBottom

E814 IncidentTriangle

E815 Touch

E816 MapDirections

E819 StartPoint

E81A StopPoint

E81B EndPoint

E81C History

E81D Location

E81E MapLayers

E81F Accident

E821 Work

E822 Construction

E823 Recent

E825 Bank

E826 DownloadMap
E829 InkingToolFill2

E82A HighlightFill2

E82B EraseToolFill

E82C EraseToolFill2

E82D Dictionary

E82E DictionaryAdd

E82F ToolTip

E830 ChromeBack

E835 ProvisioningPackage

E836 AddRemoteDevice

E839 Ethernet

E83A ShareBroadband

E83B DirectAccess

E83C DialUp

E83D DefenderApp

E83E BatteryCharging9

E83F Battery10

E840 Pinned

E841 PinFill
E842 PinnedFill

E843 PeriodKey

E844 PuncKey

E845 RevToggleKey

E846 RightArrowKeyTime1

E847 RightArrowKeyTime2

E848 LeftQuote

E849 RightQuote

E84A DownShiftKey

E84B UpShiftKey

E84C PuncKey0

E84D PuncKeyLeftBottom

E84E RightArrowKeyTime3

E84F RightArrowKeyTime4

E850 Battery0

E851 Battery1

E852 Battery2

E853 Battery3

E854 Battery4
E855 Battery5

E856 Battery6

E857 Battery7

E858 Battery8

E859 Battery9

E85A BatteryCharging0

E85B BatteryCharging1

E85C BatteryCharging2

E85D BatteryCharging3

E85E BatteryCharging4

E85F BatteryCharging5

E860 BatteryCharging6

E861 BatteryCharging7

E862 BatteryCharging8

E863 BatterySaver0

E864 BatterySaver1

E865 BatterySaver2

E866 BatterySaver3

E867 BatterySaver4
E868 BatterySaver5

E869 BatterySaver6

E86A BatterySaver7

E86B BatterySaver8

E86C SignalBars1

E86D SignalBars2

E86E SignalBars3

E86F SignalBars4

E870 SignalBars5

E871 SignalNotConnected

E872 Wifi1

E873 Wifi2

E874 Wifi3

E875 SIMLock

E876 SIMMissing

E877 Vibrate

E878 RoamingInternational

E879 RoamingDomestic

E87A CallForwardInternational
E87B CallForwardRoaming

E87C JpnRomanji

E87D JpnRomanjiLock

E87E JpnRomanjiShift

E87F JpnRomanjiShiftLock

E880 StatusDataTransfer

E881 StatusDataTransferVPN

E882 StatusDualSIM2

E883 StatusDualSIM2VPN

E884 StatusDualSIM1

E885 StatusDualSIM1VPN

E886 StatusSGLTE

E887 StatusSGLTECell

E888 StatusSGLTEDataVPN

E889 StatusVPN

E88A WifiHotspot

E88B LanguageKor

E88C LanguageCht

E88D LanguageChs
E88E USB

E88F InkingToolFill

E890 View

E891 HighlightFill

E892 Previous

E893 Next

E894 Clear

E895 Sync

E896 Download

E897 Help

E898 Upload

E899 Emoji

E89A TwoPage

E89B LeaveChat

E89C MailForward

E89E RotateCamera

E89F ClosePane

E8A0 OpenPane

E8A1 PreviewLink
E8A2 AttachCamera

E8A3 ZoomIn

E8A4 Bookmarks

E8A5 Document

E8A6 ProtectedDocument

E8A7 OpenInNewWindow

E8A8 MailFill

E8A9 ViewAll

E8AA VideoChat

E8AB Switch

E8AC Rename

E8AD Go

E8AE SurfaceHub

E8AF Remote

E8B0 Click

E8B1 Shuffle

E8B2 Movies

E8B3 SelectAll

E8B4 Orientation
E8B5 Import

E8B6 ImportAll

E8B7 Folder

E8B8 Webcam

E8B9 Picture

E8BA Caption

E8BB ChromeClose

E8BC ShowResults

E8BD Message

E8BE Leaf

E8BF CalendarDay

E8C0 CalendarWeek

E8C1 Characters

E8C2 MailReplyAll

E8C3 Read

E8C4 ShowBcc

E8C5 HideBcc

E8C6 Cut

E8C8 Copy
E8C9 Important

E8CA MailReply

E8CB Sort

E8CC MobileTablet

E8CD DisconnectDrive

E8CE MapDrive

E8CF ContactPresence

E8D0 Priority

E8D1 GotoToday

E8D2 Font

E8D3 FontColor

E8D4 Contact2

E8D5 FolderFill

E8D6 Audio

E8D7 Permissions

E8D8 DisableUpdates

E8D9 Unfavorite

E8DA OpenLocal

E8DB Italic
E8DC Underline

E8DD Bold

E8DE MoveToFolder

E8DF LikeDislike

E8E0 Dislike

E8E1 Like

E8E2 AlignRight

E8E3 AlignCenter

E8E4 AlignLeft

E8E5 OpenFile

E8E6 ClearSelection

E8E7 FontDecrease

E8E8 FontIncrease

E8E9 FontSize

E8EA CellPhone

E8EB Reshare

E8EC Tag

E8ED RepeatOne

E8EE RepeatAll
E8EF Calculator

E8F0 Directions

E8F1 Library

E8F2 ChatBubbles

E8F3 PostUpdate

E8F4 NewFolder

E8F5 CalendarReply

E8F6 UnsyncFolder

E8F7 SyncFolder

E8F8 BlockContact

E8F9 SwitchApps

E8FA AddFriend

E8FB Accept

E8FC GoToStart

E8FD BulletedList

E8FE Scan

E8FF Preview

E904 ZeroBars

E905 OneBar
E906 TwoBars

E907 ThreeBars

E908 FourBars

E909 World

E90A Comment

E90B MusicInfo

E90C DockLeft

E90D DockRight

E90E DockBottom

E90F Repair

E910 Accounts

E911 DullSound

E912 Manage

E913 Street

E914 Printer3D

E915 RadioBullet

E916 Stopwatch

E91B Photo

E91C ActionCenter
E91F FullCircleMask

E921 ChromeMinimize

E922 ChromeMaximize

E923 ChromeRestore

E924 Annotation

E925 BackSpaceQWERTYSm

E926 BackSpaceQWERTYMd

E927 Swipe

E928 Fingerprint

E929 Handwriting

E92C ChromeBackToWindow

E92D ChromeFullScreen

E92E KeyboardStandard

E92F KeyboardDismiss

E930 Completed

E931 ChromeAnnotate

E932 Label

E933 IBeam

E934 IBeamOutline
E935 FlickDown

E936 FlickUp

E937 FlickLeft

E938 FlickRight

E939 FeedbackApp

E93C MusicAlbum

E93E Streaming

E943 Code

E944 ReturnToWindow

E945 LightningBolt

E946 Info

E947 CalculatorMultiply

E948 CalculatorAddition

E949 CalculatorSubtract

E94A CalculatorDivide

E94B CalculatorSquareroot

E94C CalculatorPercentage

E94D CalculatorNegate

E94E CalculatorEqualTo
E94F CalculatorBackspace

E950 Component

E951 DMC

E952 Dock

E953 MultimediaDMS

E954 MultimediaDVR

E955 MultimediaPMP

E956 PrintfaxPrinterFile

E957 Sensor

E958 StorageOptical

E95A Communications

E95B Headset

E95D Projector

E95E Health

E960 Webcam2

E961 Input

E962 Mouse

E963 Smartcard

E964 SmartcardVirtual
E965 MediaStorageTower

E966 ReturnKeySm

E967 GameConsole

E968 Network

E969 StorageNetworkWireless

E96A StorageTape

E96D ChevronUpSmall

E96E ChevronDownSmall

E96F ChevronLeftSmall

E970 ChevronRightSmall

E971 ChevronUpMed

E972 ChevronDownMed

E973 ChevronLeftMed

E974 ChevronRightMed

E975 Devices2

E976 ExpandTile

E977 PC1

E978 PresenceChicklet

E979 PresenceChickletVideo
E97A Reply

E97B SetTile

E97C Type

E97D Korean

E97E HalfAlpha

E97F FullAlpha

E980 Key12On

E981 ChineseChangjie

E982 QWERTYOn

E983 QWERTYOff

E984 ChineseQuick

E985 Japanese

E986 FullHiragana

E987 FullKatakana

E988 HalfKatakana

E989 ChineseBoPoMoFo

E98A ChinesePinyin

E98F ConstructionCone

E990 XboxOneConsole
E992 Volume0

E993 Volume1

E994 Volume2

E995 Volume3

E996 BatteryUnknown

E998 WifiAttentionOverlay

E99A Robot

E9A1 TapAndSend

E9A8 PasswordKeyShow

E9A9 PasswordKeyHide

E9AA BidiLtr

E9AB BidiRtl

E9AC ForwardSm

E9AD CommaKey

E9AE DashKey

E9AF DullSoundKey

E9B0 HalfDullSound

E9B1 RightDoubleQuote

E9B2 LeftDoubleQuote
E9B3 PuncKeyRightBottom

E9B4 PuncKey1

E9B5 PuncKey2

E9B6 PuncKey3

E9B7 PuncKey4

E9B8 PuncKey5

E9B9 PuncKey6

E9BA PuncKey9

E9BB PuncKey7

E9BC PuncKey8

E9CA Frigid

E9D9 Diagnostic

E9F3 Process

EA14 DisconnectDisplay

EA1F Info2

EA21 ActionCenterAsterisk

EA24 Beta

EA35 SaveCopy

EA37 List
EA38 Asterisk

EA39 ErrorBadge

EA3A CircleRing

EA3B CircleFill

EA40 AllAppsMirrored

EA41 BookmarksMirrored

EA42 BulletedListMirrored

EA43 CallForwardInternationalMirrored

EA44 CallForwardRoamingMirrored

EA47 ChromeBackMirrored

EA48 ClearSelectionMirrored

EA49 ClosePaneMirrored

EA4A ContactInfoMirrored

EA4B DockRightMirrored

EA4C DockLeftMirrored

EA4E ExpandTileMirrored

EA4F GoMirrored

EA50 GripperResizeMirrored

EA51 HelpMirrored
EA52 ImportMirrored

EA53 ImportAllMirrored

EA54 LeaveChatMirrored

EA55 ListMirrored

EA56 MailForwardMirrored

EA57 MailReplyMirrored

EA58 MailReplyAllMirrored

EA5B OpenPaneMirrored

EA5C OpenWithMirrored

EA5E ParkingLocationMirrored

EA5F ResizeMouseMediumMirrored

EA60 ResizeMouseSmallMirrored

EA61 ResizeMouseTallMirrored

EA62 ResizeTouchNarrowerMirrored

EA63 SendMirrored

EA64 SendFillMirrored

EA65 ShowResultsMirrored

EA69 Media

EA6A SyncError
EA6C Devices3

EA80 Lightbulb

EA81 StatusCircle

EA82 StatusTriangle

EA83 StatusError

EA84 StatusWarning

EA86 Puzzle

EA89 CalendarSolid

EA8A HomeSolid

EA8B ParkingLocationSolid

EA8C ContactSolid

EA8D ConstructionSolid

EA8E AccidentSolid

EA8F Ringer

EA91 ThoughtBubble

EA92 HeartBroken

EA93 BatteryCharging10

EA94 BatterySaver9

EA95 BatterySaver10
EA97 CallForwardingMirrored

EA98 MultiSelectMirrored

EA99 Broom

EADF Trackers

EB05 PieSingle

EB0F StockDown

EB11 StockUp

EB42 Drop

EB47 BusSolid

EB48 FerrySolid

EB49 StartPointSolid

EB4A StopPointSolid

EB4B EndPointSolid

EB4C AirplaneSolid

EB4D TrainSolid

EB4E WorkSolid

EB4F ReminderFill

EB50 Reminder

EB51 Heart
EB52 HeartFill

EB55 EthernetError

EB56 EthernetWarning

EB57 StatusConnecting1

EB58 StatusConnecting2

EB59 StatusUnsecure

EB5A WifiError0

EB5B WifiError1

EB5C WifiError2

EB5D WifiError3

EB5E WifiError4

EB5F WifiWarning0

EB60 WifiWarning1

EB61 WifiWarning2

EB62 WifiWarning3

EB63 WifiWarning4

EB66 Devices4

EB67 NUIIris

EB68 NUIFace
EB7E EditMirrored

EB82 NUIFPStartSlideHand

EB83 NUIFPStartSlideAction

EB84 NUIFPContinueSlideHand

EB85 NUIFPContinueSlideAction

EB86 NUIFPRollRightHand

EB87 NUIFPRollRightHandAction

EB88 NUIFPRollLeftHand

EB89 NUIFPRollLeftAction

EB8A NUIFPPressHand

EB8B NUIFPPressAction

EB8C NUIFPPressRepeatHand

EB8D NUIFPPressRepeatAction

EB90 StatusErrorFull

EB91 MultitaskExpanded

EB95 Certificate

EB96 BackSpaceQWERTYLg

EB97 ReturnKeyLg

EB9D FastForward
EB9E Rewind

EB9F Photo2

EBA0 MobBattery0

EBA1 MobBattery1

EBA2 MobBattery2

EBA3 MobBattery3

EBA4 MobBattery4

EBA5 MobBattery5

EBA6 MobBattery6

EBA7 MobBattery7

EBA8 MobBattery8

EBA9 MobBattery9

EBAA MobBattery10

EBAB MobBatteryCharging0

EBAC MobBatteryCharging1

EBAD MobBatteryCharging2

EBAE MobBatteryCharging3

EBAF MobBatteryCharging4

EBB0 MobBatteryCharging5
EBB1 MobBatteryCharging6

EBB2 MobBatteryCharging7

EBB3 MobBatteryCharging8

EBB4 MobBatteryCharging9

EBB5 MobBatteryCharging10

EBB6 MobBatterySaver0

EBB7 MobBatterySaver1

EBB8 MobBatterySaver2

EBB9 MobBatterySaver3

EBBA MobBatterySaver4

EBBB MobBatterySaver5

EBBC MobBatterySaver6

EBBD MobBatterySaver7

EBBE MobBatterySaver8

EBBF MobBatterySaver9

EBC0 MobBatterySaver10

EBC3 DictionaryCloud

EBC4 ResetDrive

EBC5 VolumeBars
EBC6 Project

EBD2 AdjustHologram

EBD4 WifiCallBars

EBD5 WifiCall0

EBD6 WifiCall1

EBD7 WifiCall2

EBD8 WifiCall3

EBD9 WifiCall4

EBDE DeviceDiscovery

EBE6 WindDirection

EBE7 RightArrowKeyTime0

EBFC TabletMode

EBFD StatusCircleLeft

EBFE StatusTriangleLeft

EBFF StatusErrorLeft

EC00 StatusWarningLeft

EC02 MobBatteryUnknown

EC05 NetworkTower

EC06 CityNext
EC07 CityNext2

EC08 Courthouse

EC09 Groceries

EC0A Sustainable

EC0B BuildingEnergy

EC11 ToggleFilled

EC12 ToggleBorder

EC13 SliderThumb

EC14 ToggleThumb

EC15 MiracastLogoSmall

EC16 MiracastLogoLarge

EC19 PLAP

EC1B Badge

EC1E SignalRoaming

EC20 MobileLocked

EC24 InsiderHubApp

EC25 PersonalFolder

EC26 HomeGroup

EC27 MyNetwork
EC31 KeyboardFull

EC37 MobSignal1

EC38 MobSignal2

EC39 MobSignal3

EC3A MobSignal4

EC3B MobSignal5

EC3C MobWifi1

EC3D MobWifi2

EC3E MobWifi3

EC3F MobWifi4

EC40 MobAirplane

EC41 MobBluetooth

EC42 MobActionCenter

EC43 MobLocation

EC44 MobWifiHotspot

EC45 LanguageJpn

EC46 MobQuietHours

EC47 MobDrivingMode

EC48 SpeedOff
EC49 SpeedMedium

EC4A SpeedHigh

EC4E ThisPC

EC4F MusicNote

EC50 FileExplorer

EC51 FileExplorerApp

EC52 LeftArrowKeyTime0

EC54 MicOff

EC55 MicSleep

EC56 MicError

EC57 PlaybackRate1x

EC58 PlaybackRateOther

EC59 CashDrawer

EC5A BarcodeScanner

EC5B ReceiptPrinter

EC5C MagStripeReader

EC61 CompletedSolid

EC64 CompanionApp

EC6D SwipeRevealArt
EC71 MicOn

EC72 MicClipping

EC74 TabletSelected

EC75 MobileSelected

EC76 LaptopSelected

EC77 TVMonitorSelected

EC7A DeveloperTools

EC7E MobCallForwarding

EC7F MobCallForwardingMirrored

EC80 BodyCam

EC81 PoliceCar

EC87 Draw

EC88 DrawSolid

EC8A LowerBrightness

EC8F ScrollUpDown

EC92 DateTime

ECA5 Tiles

ECA7 PartyLeader

ECAA AppIconDefault
ECC4 AddSurfaceHub

ECC5 DevUpdate

ECC6 Unit

ECC8 AddTo

ECC9 RemoveFrom

ECCA RadioBtnOff

ECCB RadioBtnOn

ECCC RadioBullet2

ECCD ExploreContent

ECE7 ScrollMode

ECE8 ZoomMode

ECE9 PanMode

ECF0 WiredUSB

ECF1 WirelessUSB

ECF3 USBSafeConnect

ED0C ActionCenterNotificationMirrored

ED0D ActionCenterMirrored

ED10 ResetDevice

ED15 Feedback
ED1E Subtitles

ED1F SubtitlesAudio

ED28 CalendarMirrored

ED2A eSIM

ED2B eSIMNoProfile

ED2C eSIMLocked

ED2D eSIMBusy

ED2E SignalError

ED2F StreamingEnterprise

ED30 Headphone0

ED31 Headphone1

ED32 Headphone2

ED33 Headphone3

ED39 KeyboardBrightness

ED3A KeyboardLowerBrightness

ED3C SkipBack10

ED3D SkipForward30

ED41 TreeFolderFolder

ED42 TreeFolderFolderFill
ED43 TreeFolderFolderOpen

ED44 TreeFolderFolderOpenFill

ED47 MultimediaDMP

ED4C KeyboardOneHanded

ED4D Narrator

ED53 EmojiTabPeople

ED54 EmojiTabSmilesAnimals

ED55 EmojiTabCelebrationObjects

ED56 EmojiTabFoodPlants

ED57 EmojiTabTransitPlaces

ED58 EmojiTabSymbols

ED59 EmojiTabTextSmiles

ED5A EmojiTabFavorites

ED5B EmojiSwatch

ED5C ConnectApp

ED5D CompanionDeviceFramework

ED5E Ruler

ED5F FingerInking

ED60 StrokeErase
ED61 PointErase

ED62 ClearAllInk

ED63 Pencil

ED64 Marker

ED65 InkingCaret

ED66 InkingColorOutline

ED67 InkingColorFill

EDA2 HardDrive

EDA3 NetworkAdapter

EDA4 Touchscreen

EDA5 NetworkPrinter

EDA6 CloudPrinter

EDA7 KeyboardShortcut

EDA8 BrushSize

EDA9 NarratorForward

EDAA NarratorForwardMirrored

EDAB SyncBadge12

EDAC RingerBadge12

EDAD AsteriskBadge12
EDAE ErrorBadge12

EDAF CircleRingBadge12

EDB0 CircleFillBadge12

EDB1 ImportantBadge12

EDB3 MailBadge12

EDB4 PauseBadge12

EDB5 PlayBadge12

EDC6 PenWorkspace

EDE1 Export

EDE2 ExportMirrored

EDFB CaligraphyPen

EE35 ReplyMirrored

EE3F LockscreenDesktop

EE40 Multitask16

EE4A Play36

EE56 PenPalette

EE57 GuestUser

EE63 SettingsBattery

EE64 TaskbarPhone
EE65 LockScreenGlance

EE71 ImageExport

EE77 WifiEthernet

EE79 ActionCenterQuiet

EE7A ActionCenterQuietNotification

EE92 TrackersMirrored

EE93 DateTimeMirrored

EE94 Wheel

EF15 PenWorkspaceMirrored

EF16 PenPaletteMirrored

EF17 StrokeEraseMirrored

EF18 PointEraseMirrored

EF19 ClearAllInkMirrored

EF1F BackgroundToggle

EF20 Marquee

Related articles
Guidelines for fonts
Symbol enumeration
Styling controls

You can customize the appearance of your apps in many ways by using the XAML framework. Styles let you set
control properties and reuse those settings for a consistent appearance across multiple controls.

Style basics
Use styles to extract visual property settings into reusable resources. Here's an example that shows 3 buttons with
a style that sets the BorderBrush, BorderThickness and Foreground properties. By applying a style, you can
make the controls appear the same without having to set these properties on each control separately.

You can define a style inline in the XAML for a control, or as a reusable resource. Define resources in an individual
page's XAML file, in the App.xaml file, or in a separate resource dictionary XAML file. A resource dictionary XAML
file can be shared across apps, and more than one resource dictionary can be merged in a single app. Where the
resource is defined determines the scope in which it can be used. Page-level resources are available only in the
page where they are defined. If resources with the same key are defined in both App.xaml and in a page, the
resource in the page overrides the resource in App.xaml. If a resource is defined in a separate resource dictionary
file, its scope is determined by where the resource dictionary is referenced.
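For example, a minimal sketch that merges a hypothetical Styles/Styles.xaml resource dictionary at app scope from code; the same merge is more commonly declared in App.xaml:

// A minimal sketch: merge a separate resource dictionary at app scope.
// Assumes the project contains a hypothetical file named Styles/Styles.xaml.
using System;
using Windows.UI.Xaml;

void MergeSharedStyles()
{
    var dictionary = new ResourceDictionary
    {
        Source = new Uri("ms-appx:///Styles/Styles.xaml")
    };
    Application.Current.Resources.MergedDictionaries.Add(dictionary);
}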
In the Style definition, you need a TargetType attribute and a collection of one or more Setter elements. The
TargetType attribute is a string that specifies a FrameworkElement type to apply the style to. The TargetType
value must specify a FrameworkElement-derived type that's defined by the Windows Runtime or a custom type
that's available in a referenced assembly. If you try to apply a style to a control and the control's type doesn't
match the TargetType attribute of the style you're trying to apply, an exception occurs.
Each Setter element requires a Property and a Value. These property settings indicate what control property the
setting applies to, and the value to set for that property. You can set the Setter.Value with either attribute or
property element syntax. The XAML here shows the style applied to the buttons shown previously. In this XAML,
the first two Setter elements use attribute syntax, but the last Setter, for the BorderBrush property, uses
property element syntax. The example doesn't use the x:Key attribute, so the style is implicitly applied to
the buttons. Applying styles implicitly or explicitly is explained in the next section.
<Page.Resources>
<Style TargetType="Button">
<Setter Property="BorderThickness" Value="5" />
<Setter Property="Foreground" Value="Blue" />
<Setter Property="BorderBrush" >
<Setter.Value>
<LinearGradientBrush StartPoint="0.5,0" EndPoint="0.5,1">
<GradientStop Color="Yellow" Offset="0.0" />
<GradientStop Color="Red" Offset="0.25" />
<GradientStop Color="Blue" Offset="0.75" />
<GradientStop Color="LimeGreen" Offset="1.0" />
</LinearGradientBrush>
</Setter.Value>
</Setter>
</Style>
</Page.Resources>

<StackPanel Orientation="Horizontal">
<Button Content="Button"/>
<Button Content="Button"/>
<Button Content="Button"/>
</StackPanel>

Apply an implicit or explicit style


If you define a style as a resource, there are two ways to apply it to your controls:
Implicitly, by specifying only a TargetType for the Style.
Explicitly, by specifying a TargetType and an x:Key attribute for the Style and then by setting the
target control's Style property with a {StaticResource} markup extension reference that uses the explicit key.
If a style contains the x:Key attribute, you can only apply it to a control by setting the Style property of the control
to the keyed style. In contrast, a style without an x:Key attribute is automatically applied to every control of its
target type that doesn't otherwise have an explicit style setting.
Here are two buttons that demonstrate implicit and explicit styles.

In this example, the first style has an x:Key attribute and its target type is Button. The first button's Style property
is set to this key, so this style is applied explicitly. The second style is applied implicitly to the second button
because its target type is Button and the style doesn't have an x:Key attribute.
<Page.Resources>
<Style x:Key="PurpleStyle" TargetType="Button">
<Setter Property="FontFamily" Value="Lucida Sans Unicode"/>
<Setter Property="FontStyle" Value="Italic"/>
<Setter Property="FontSize" Value="14"/>
<Setter Property="Foreground" Value="MediumOrchid"/>
</Style>

<Style TargetType="Button">
<Setter Property="FontFamily" Value="Lucida Sans Unicode"/>
<Setter Property="FontStyle" Value="Italic"/>
<Setter Property="FontSize" Value="14"/>
<Setter Property="RenderTransform">
<Setter.Value>
<RotateTransform Angle="25"/>
</Setter.Value>
</Setter>
<Setter Property="BorderBrush" Value="Orange"/>
<Setter Property="BorderThickness" Value="2"/>
<Setter Property="Foreground" Value="Orange"/>
</Style>
</Page.Resources>

<Grid x:Name="LayoutRoot">
<Button Content="Button" Style="{StaticResource PurpleStyle}"/>
<Button Content="Button" />
</Grid>

Use based-on styles


To make styles easier to maintain and to optimize style reuse, you can create styles that inherit from other styles.
You use the BasedOn property to create inherited styles. Styles that inherit from other styles must target the
same type of control or a control that derives from the type targeted by the base style. For example, if a base style
targets ContentControl, styles that are based on this style can target ContentControl or types that derive from
ContentControl such as Button and ScrollViewer. If a value is not set in the based-on style, it's inherited from
the base style. To change a value from the base style, the based-on style overrides that value. The next example
shows a Button and a CheckBox with styles that inherit from the same base style.

The base style targets ContentControl, and sets the Height and Width properties. The styles based on this style
target CheckBox and Button, which derive from ContentControl. The based-on styles set different colors for
the BorderBrush and Foreground properties. (You don't typically put a border around a CheckBox. We do it
here to show the effects of the style.)
<Page.Resources>
<Style x:Key="BasicStyle" TargetType="ContentControl">
<Setter Property="Width" Value="130" />
<Setter Property="Height" Value="30" />
</Style>

<Style x:Key="ButtonStyle" TargetType="Button"


BasedOn="{StaticResource BasicStyle}">
<Setter Property="BorderBrush" Value="Orange" />
<Setter Property="BorderThickness" Value="2" />
<Setter Property="Foreground" Value="Red" />
</Style>

<Style x:Key="CheckBoxStyle" TargetType="CheckBox"


BasedOn="{StaticResource BasicStyle}">
<Setter Property="BorderBrush" Value="Blue" />
<Setter Property="BorderThickness" Value="2" />
<Setter Property="Foreground" Value="Green" />
</Style>
</Page.Resources>

<StackPanel>
<Button Content="Button" Style="{StaticResource ButtonStyle}" Margin="0,10"/>
<CheckBox Content="CheckBox" Style="{StaticResource CheckBoxStyle}"/>
</StackPanel>

Use tools to work with styles easily


A fast way to apply styles to your controls is to right-click on a control on the Microsoft Visual Studio XAML
design surface and select Edit Style or Edit Template (depending on the control you are right-clicking on). You
can then apply an existing style by selecting Apply Resource or define a new style by selecting Create Empty. If
you create an empty style, you are given the option to define it in the page, in the App.xaml file, or in a separate
resource dictionary.

Modify the default system styles


You should use the styles that come from the Windows Runtime default XAML resources when you can. When
you have to define your own styles, try to base them on the default ones when possible, either by using based-on
styles as explained earlier or by editing a copy of the original default style.

The Template property


A style setter can be used for the Template property of a Control, and in fact this makes up the majority of a
typical XAML style and an app's XAML resources. This is discussed in more detail in the topic Control templates.
Control templates

You can customize a control's visual structure and visual behavior by creating a control template in the XAML
framework. Controls have many properties, such as Background, Foreground, and FontFamily, that you can set
to specify different aspects of the control's appearance. But the changes that you can make by setting these
properties are limited. You can specify additional customizations by creating a template using the
ControlTemplate class. Here, we show you how to create a ControlTemplate to customize the appearance of a
CheckBox control.

Important APIs
ControlTemplate class
Control.Template property

Custom control template example


By default, a CheckBox control puts its content (the string or object next to the CheckBox) to the right of the
selection box, and a check mark indicates that a user selected the CheckBox. These characteristics represent the
visual structure and visual behavior of the CheckBox.
Here's a CheckBox using the default ControlTemplate shown in the Unchecked, Checked, and Indeterminate states.

You can change these characteristics by creating a ControlTemplate for the CheckBox. For example, suppose you
want the content of the check box to be below the selection box, and you want to use an X to indicate that a user
selected the check box. You specify these characteristics in the ControlTemplate of the CheckBox.
To use a custom template with a control, assign the ControlTemplate to the Template property of the control.
Here's a CheckBox using a ControlTemplate called CheckBoxTemplate1. We show the Extensible Application
Markup Language (XAML) for the ControlTemplate in the next section.

<CheckBox Content="CheckBox" Template="{StaticResource CheckBoxTemplate1}" IsThreeState="True" Margin="20"/>

Here's how this CheckBox looks in the Unchecked, Checked, and Indeterminate states after we apply our template.

Specify the visual structure of a control


When you create a ControlTemplate, you combine FrameworkElement objects to build a single control. A
ControlTemplate must have only one FrameworkElement as its root element. The root element usually contains
other FrameworkElement objects. The combination of objects makes up the control's visual structure.
This XAML creates a ControlTemplate for a CheckBox that specifies that the content of the control is below the
selection box. The root element is a Border. The example specifies a Path to create an X that indicates that a user
selected the CheckBox, and an Ellipse that indicates an indeterminate state. Note that the Opacity is set to 0 on
the Path and the Ellipse so that by default, neither appear.

<ControlTemplate x:Key="CheckBoxTemplate1" TargetType="CheckBox">


<Border BorderBrush="{TemplateBinding BorderBrush}"
BorderThickness="{TemplateBinding BorderThickness}"
Background="{TemplateBinding Background}">
<Grid>
<Grid.RowDefinitions>
<RowDefinition Height="*"/>
<RowDefinition Height="25"/>
</Grid.RowDefinitions>
<Rectangle x:Name="NormalRectangle" Fill="Transparent" Height="20" Width="20"
Stroke="{ThemeResource SystemControlForegroundBaseMediumHighBrush}"
StrokeThickness="{ThemeResource CheckBoxBorderThemeThickness}"
UseLayoutRounding="False"/>
<!-- Create an X to indicate that the CheckBox is selected. -->
<Path x:Name="CheckGlyph"
Data="M103,240 L111,240 119,248 127,240 135,240 123,252 135,264 127,264 119,257 111,264 103,264 114,252 z"
Fill="{ThemeResource CheckBoxForegroundThemeBrush}"
FlowDirection="LeftToRight"
Height="14" Width="16" Opacity="0" Stretch="Fill"/>
<Ellipse x:Name="IndeterminateGlyph"
Fill="{ThemeResource CheckBoxForegroundThemeBrush}"
Height="8" Width="8" Opacity="0" UseLayoutRounding="False" />
<ContentPresenter x:Name="ContentPresenter"
ContentTemplate="{TemplateBinding ContentTemplate}"
Content="{TemplateBinding Content}"
Margin="{TemplateBinding Padding}" Grid.Row="1"
HorizontalAlignment="Center"
VerticalAlignment="{TemplateBinding VerticalContentAlignment}"/>
</Grid>
</Border>
</ControlTemplate>

Specify the visual behavior of a control


A visual behavior specifies the appearance of a control when it is in a certain state. The CheckBox control has 3
check states: Checked, Unchecked, and Indeterminate. The value of the IsChecked property determines the state of
the CheckBox, and its state determines what appears in the box.
This table lists the possible values of IsChecked, the corresponding states of the CheckBox, and the appearance of
the CheckBox.

IsChecked value | CheckBox state | CheckBox appearance

true | Checked | Contains an "X".

false | Unchecked | Empty.

null | Indeterminate | Contains a circle.
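Because IsChecked is a nullable Boolean, code that reacts to these states must handle all three values; a minimal sketch:

// A minimal sketch: IsChecked is a nullable bool (bool?), so null is a
// distinct third value that represents the Indeterminate state.
using Windows.UI.Xaml.Controls;

string DescribeState(CheckBox checkBox)
{
    bool? isChecked = checkBox.IsChecked;
    if (isChecked == true) return "Checked";
    if (isChecked == false) return "Unchecked";
    return "Indeterminate"; // isChecked == null
}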

You specify the appearance of a control when it is in a certain state by using VisualState objects. A VisualState
contains a Setter or Storyboard that changes the appearance of the elements in the ControlTemplate. When the
control enters the state that the VisualState.Name property specifies, the property changes in the Setter or
Storyboard are applied. When the control exits the state, the changes are removed. You add VisualState objects
to VisualStateGroup objects. You add VisualStateGroup objects to the
VisualStateManager.VisualStateGroups attached property, which you set on the root FrameworkElement of
the ControlTemplate.
This XAML shows the VisualState objects for the Checked, Unchecked, and Indeterminate states. The example sets
the VisualStateManager.VisualStateGroups attached property on the Border, which is the root element of the
ControlTemplate. The Checked VisualState specifies that the Opacity of the Path named CheckGlyph (which we
show in the previous example) is 1. The Indeterminate VisualState specifies that the Opacity of the Ellipse named
IndeterminateGlyph is 1. The Unchecked VisualState has no Setter or Storyboard, so the CheckBox returns to its
default appearance.
<ControlTemplate x:Key="CheckBoxTemplate1" TargetType="CheckBox">
<Border BorderBrush="{TemplateBinding BorderBrush}"
BorderThickness="{TemplateBinding BorderThickness}"
Background="{TemplateBinding Background}">

<VisualStateManager.VisualStateGroups>
<VisualStateGroup x:Name="CheckStates">
<VisualState x:Name="Checked">
<VisualState.Setters>
<Setter Target="CheckGlyph.Opacity" Value="1"/>
</VisualState.Setters>
<!-- This Storyboard is equivalent to the Setter. -->
<!--<Storyboard>
<DoubleAnimation Duration="0" To="1"
Storyboard.TargetName="CheckGlyph" Storyboard.TargetProperty="Opacity"/>
</Storyboard>-->
</VisualState>
<VisualState x:Name="Unchecked"/>
<VisualState x:Name="Indeterminate">
<VisualState.Setters>
<Setter Target="IndeterminateGlyph.Opacity" Value="1"/>
</VisualState.Setters>
<!-- This Storyboard is equivalent to the Setter. -->
<!--<Storyboard>
<DoubleAnimation Duration="0" To="1"
Storyboard.TargetName="IndeterminateGlyph" Storyboard.TargetProperty="Opacity"/>
</Storyboard>-->
</VisualState>
</VisualStateGroup>
</VisualStateManager.VisualStateGroups>

<Grid>
<Grid.RowDefinitions>
<RowDefinition Height="*"/>
<RowDefinition Height="25"/>
</Grid.RowDefinitions>
<Rectangle x:Name="NormalRectangle" Fill="Transparent" Height="20" Width="20"
Stroke="{ThemeResource SystemControlForegroundBaseMediumHighBrush}"
StrokeThickness="{ThemeResource CheckBoxBorderThemeThickness}"
UseLayoutRounding="False"/>
<!-- Create an X to indicate that the CheckBox is selected. -->
<Path x:Name="CheckGlyph"
Data="M103,240 L111,240 119,248 127,240 135,240 123,252 135,264 127,264 119,257 111,264 103,264 114,252 z"
Fill="{ThemeResource CheckBoxForegroundThemeBrush}"
FlowDirection="LeftToRight"
Height="14" Width="16" Opacity="0" Stretch="Fill"/>
<Ellipse x:Name="IndeterminateGlyph"
Fill="{ThemeResource CheckBoxForegroundThemeBrush}"
Height="8" Width="8" Opacity="0" UseLayoutRounding="False" />
<ContentPresenter x:Name="ContentPresenter"
ContentTemplate="{TemplateBinding ContentTemplate}"
Content="{TemplateBinding Content}"
Margin="{TemplateBinding Padding}" Grid.Row="1"
HorizontalAlignment="Center"
VerticalAlignment="{TemplateBinding VerticalContentAlignment}"/>
</Grid>
</Border>
</ControlTemplate>

To better understand how VisualState objects work, consider what happens when the CheckBox goes from the
Unchecked state to the Checked state, then to the Indeterminate state, and then back to the Unchecked state. Here are
the transitions.
From Unchecked to Checked: The Setter value of the Checked VisualState is applied, so the Opacity of CheckGlyph
is 1. When the transition completes, an X is displayed.

From Checked to Indeterminate: The Setter value of the Indeterminate VisualState is applied, so the Opacity of
IndeterminateGlyph is 1. The Setter value of the Checked VisualState is removed, so the Opacity of CheckGlyph is
0. When the transition completes, a circle is displayed.

From Indeterminate to Unchecked: The Setter value of the Indeterminate VisualState is removed, so the Opacity of
IndeterminateGlyph is 0. When the transition completes, nothing is displayed.

For more info about how to create visual states for controls, and in particular how to use the Storyboard class and
the animation types, see Storyboarded animations for visual states.
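
For context, a control moves between named visual states by calling VisualStateManager.GoToState from its control
logic. The built-in CheckBox does this for you; the following sketch only illustrates how a custom control might
drive the same three states.

// Sketch: drive the check states from a custom control's logic.
// Passing useTransitions: true lets any VisualTransition animations play.
void UpdateCheckStates(Control control, bool? isChecked)
{
    string stateName =
        isChecked == true ? "Checked" :
        isChecked == false ? "Unchecked" : "Indeterminate";
    VisualStateManager.GoToState(control, stateName, useTransitions: true);
}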

Use tools to work with themes easily


A fast way to apply themes to your controls is to right-click a control in the Microsoft Visual Studio Document
Outline and select Edit Theme or Edit Style (depending on the control you right-click). You can then
apply an existing theme by selecting Apply Resource, or define a new one by selecting Create Empty.

Controls and accessibility


When you create a new template for a control, in addition to possibly changing the control's behavior and visual
appearance, you might also be changing how the control represents itself to accessibility frameworks. The
Universal Windows Platform (UWP) supports the Microsoft UI Automation framework for accessibility. All of the
default controls and their templates have support for common UI Automation control types and patterns that are
appropriate for the control's purpose and function. These control types and patterns are interpreted by UI
Automation clients such as assistive technologies, and this enables a control to be accessible as a part of a larger
accessible app UI.
To separate the basic control logic and also to satisfy some of the architectural requirements of UI Automation,
control classes include their accessibility support in a separate class, an automation peer. The automation peers
sometimes have interactions with the control templates because the peers expect certain named parts to exist in
the templates, so that functionality such as enabling assistive technologies to invoke actions of buttons is possible.
When you create a completely new custom control, you will sometimes also want to create a new automation peer
to go along with it. For more info, see Custom automation peers.

Learn more about a control's default template


The topics that document the styles and templates for XAML controls show you excerpts of the same starting
XAML you'd see if you used the Edit Theme or Edit Style techniques explained previously. Each topic lists the
names of the visual states, the theme resources used, and the full XAML for the style that contains the template.
The topics can be useful guidance if you've already started modifying a template and want to see what the original
template looked like, or to verify that your new template has all of the required named visual states.
Theme resources in control templates
For some of the attributes in the XAML examples, you may have noticed resource references that use the
{ThemeResource} markup extension. This is a technique that enables a single control template to use resources
that can be different values depending on which theme is currently active. This is particularly important for brushes
and colors, because the main purpose of the themes is to enable users to choose whether they want a dark, light,
or high contrast theme applied to the system overall. Apps that use the XAML resource system can use a resource
set that's appropriate for that theme, so that the theme choices in an app's UI are reflective of the user's
systemwide theme choice.

Get the sample code


XAML UI basics sample
Custom text edit control sample
ResourceDictionary and XAML resource references

You can define the UI or resources for your app using XAML. Resources are typically definitions of some object that
you expect to use more than once. To refer to a XAML resource later, you specify a key for a resource that acts like
its name. You can reference a resource throughout an app or from any XAML page within it. You can define your
resources using a ResourceDictionary element from the Windows Runtime XAML. Then, you can reference your
resources by using a StaticResource markup extension or ThemeResource markup extension.
The XAML elements you might want to declare most often as XAML resources include Style, ControlTemplate,
animation components, and Brush subclasses. Here, we explain how to define a ResourceDictionary and keyed
resources, and how XAML resources relate to other resources that you define as part of your app or app package.
We also explain resource dictionary advanced features such as MergedDictionaries and ThemeDictionaries.
Prerequisites
We assume that you understand XAML markup and have read the XAML overview.

Define and use XAML resources


XAML resources are objects that are referenced from markup more than once. Resources are defined in a
ResourceDictionary, typically in a separate file or at the top of the markup page, like this.

<Page
x:Class="MSDNSample.MainPage"
xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml">

<Page.Resources>
<x:String x:Key="greeting">Hello world</x:String>
<x:String x:Key="goodbye">Goodbye world</x:String>
</Page.Resources>

<TextBlock Text="{StaticResource greeting}" Foreground="Gray" VerticalAlignment="Center"/>


</Page>

In this example:
<Page.Resources></Page.Resources> - Defines the resource dictionary.
<x:String> - Defines the resource with the key "greeting".
{StaticResource greeting} - Looks up the resource with the key "greeting", which is assigned to the Text property of
the TextBlock.

Note Don't confuse the concepts related to ResourceDictionary with the Resource build action, resource
(.resw) files, or other "resources" that are discussed in the context of structuring the code project that produces
your app package.

Resources don't have to be strings; they can be any shareable object, such as styles, templates, brushes, and colors.
However, controls, shapes, and other FrameworkElements are not shareable, so they can't be declared as reusable
resources. For more info about sharing, see the XAML resources must be shareable section later in this topic.
Here, both a brush and a string are declared as resources and used by controls in a page.
<Page
x:Class="SpiderMSDN.MainPage"
xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml">

<Page.Resources>
<SolidColorBrush x:Key="myFavoriteColor" Color="green"/>
<x:String x:Key="greeting">Hello world</x:String>
</Page.Resources>

<TextBlock Foreground="{StaticResource myFavoriteColor}" Text="{StaticResource greeting}" VerticalAlignment="Top"/>


<Button Foreground="{StaticResource myFavoriteColor}" Content="{StaticResource greeting}" VerticalAlignment="Center"/>
</Page>

All resources need to have a key. Usually that key is a string defined with x:Key="myString". However, there are a few
other ways to specify a key:
Style and ControlTemplate require a TargetType, and will use the TargetType as the key if x:Key is not
specified. In this case, the key is the actual Type object, not a string. (See the examples below.)
DataTemplate resources that have a TargetType will use the TargetType as the key if x:Key is not specified. In
this case, the key is the actual Type object, not a string.
x:Name can be used instead of x:Key. However, x:Name also generates a code-behind field for the resource. As a
result, x:Name is less efficient than x:Key because that field needs to be initialized when the page is loaded.
The StaticResource markup extension can retrieve resources only with a string name (x:Key or x:Name). However,
the XAML framework also looks for implicit style resources (those which use TargetType rather than x:Key or
x:Name) when it decides which style and template to use for a control that doesn't explicitly set the Style,
ContentTemplate, or ItemTemplate properties.
Here, the Style has an implicit key of typeof(Button), and since the Button at the bottom of the page doesn't
specify a Style property, it looks for a style with key of typeof(Button):

<Page
x:Class="MSDNSample.MainPage"
xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml">

<Page.Resources>
<Style TargetType="Button">
<Setter Property="Background" Value="red"/>
</Style>
</Page.Resources>
<!-- This button will have a red background. -->
<Button Content="Button" Height="100" VerticalAlignment="Center" Width="100"/>
</Page>

For more info about implicit styles and how they work, see Styling controls and Control templates.

Look up resources in code


You access members of the resource dictionary like any other dictionary.

Caution When you perform a resource lookup in code, only the resources in the Page.Resources dictionary are
examined. Unlike the StaticResource markup extension, the code doesn't fall back to the Application.Resources
dictionary if the resources aren't found in the first dictionary.

This example shows how to retrieve the redButtonStyle resource from a page's resource dictionary:
<Page
x:Class="MSDNSample.MainPage"
xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml">

<Page.Resources>
<Style TargetType="Button" x:Key="redButtonStyle">
<Setter Property="Background" Value="red"/>
</Style>
</Page.Resources>
</Page>

public sealed partial class MainPage : Page


{
public MainPage()
{
this.InitializeComponent();
Style redButtonStyle = (Style)this.Resources["redButtonStyle"];
}
}

To look up app-wide resources from code, use Application.Current.Resources to get the app's resource
dictionary, as shown here.

<Application
x:Class="MSDNSample.App"
xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
xmlns:local="using:SpiderMSDN">
<Application.Resources>
<Style TargetType="Button" x:Key="appButtonStyle">
<Setter Property="Background" Value="red"/>
</Style>
</Application.Resources>

</Application>

public sealed partial class MainPage : Page


{
public MainPage()
{
this.InitializeComponent();
Style appButtonStyle = (Style)Application.Current.Resources["appButtonStyle"];
}
}

You can also add an application resource in code.


There are two things to keep in mind when doing this.
First, you need to add the resources before any page tries to use them.
Second, you can't add resources in the App's constructor.
You can avoid both problems by adding the resource in the Application.OnLaunched method, like this.
// App.xaml.cs

sealed partial class App : Application


{
protected override void OnLaunched(LaunchActivatedEventArgs e)
{
Frame rootFrame = Window.Current.Content as Frame;
if (rootFrame == null)
{
SolidColorBrush brush = new SolidColorBrush(Windows.UI.Color.FromArgb(255, 0, 255, 0)); // green
this.Resources["brush"] = brush;
// Other code that VS generates for you
}
}
}

Every FrameworkElement can have a ResourceDictionary


FrameworkElement is a base class that controls inherit from, and it has a Resources property. So, you can add a
local resource dictionary to any FrameworkElement.
Here, a resource dictionary is added to a page element.

<Page
x:Class="MSDNSample.MainPage"
xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml">

<Page.Resources>
<x:String x:Key="greeting">Hello world</x:String>
</Page.Resources>

<Border>
<Border.Resources>
<x:String x:Key="greeting">Hola mundo</x:String>
</Border.Resources>
<TextBlock Text="{StaticResource greeting}" Foreground="Gray" VerticalAlignment="Center"/>
</Border>
</Page>

Here, both the Page and the Border have resource dictionaries, and they both have a resource called "greeting".
The TextBlock is inside the Border, so its resource lookup looks first to the Border's resources, then the Page's
resources, and then the Application resources. The TextBlock will read "Hola mundo".
To access that element's resources from code, use that element's Resources property. Accessing a
FrameworkElement's resources in code, rather than XAML, looks only in that dictionary, not in parent
elements' dictionaries.
<Page
x:Class="MSDNSample.MainPage"
xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml">

<Page.Resources>
<x:String x:Key="greeting">Hello world</x:String>
</Page.Resources>

<Border x:Name="border">
<Border.Resources>
<x:String x:Key="greeting">Hola mundo</x:String>
</Border.Resources>
</Border>
</Page>

public sealed partial class MainPage : Page


{
public MainPage()
{
this.InitializeComponent();
string str = (string)border.Resources["greeting"];
}
}

Merged resource dictionaries


A merged resource dictionary combines one resource dictionary into another; the merged dictionary is usually defined in a separate file.

Tip You can create a resource dictionary file in Microsoft Visual Studio by using the Add > New Item >
Resource Dictionary option from the Project menu.

Here, you define a resource dictionary in a separate XAML file called Dictionary1.xaml.

<!-- Dictionary1.xaml -->


<ResourceDictionary
xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
xmlns:local="using:MSDNSample">

<SolidColorBrush x:Key="brush" Color="Red"/>

</ResourceDictionary>

To use that dictionary, you merge it with your page's dictionary:


<Page
x:Class="MSDNSample.MainPage"
xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml">
<Page.Resources>
<ResourceDictionary>
<ResourceDictionary.MergedDictionaries>
<ResourceDictionary Source="Dictionary1.xaml"/>
</ResourceDictionary.MergedDictionaries>

<x:String x:Key="greeting">Hello world</x:String>

</ResourceDictionary>
</Page.Resources>

<TextBlock Foreground="{StaticResource brush}" Text="{StaticResource greeting}" VerticalAlignment="Center"/>


</Page>

Here's what happens in this example. In <Page.Resources>, you declare <ResourceDictionary>. The XAML framework
implicitly creates a resource dictionary for you when you add resources to <Page.Resources>; however, in this case,
you don't want just any resource dictionary, you want one that contains merged dictionaries.
So you declare <ResourceDictionary>, then add things to its <ResourceDictionary.MergedDictionaries> collection. Each of
those entries takes the form <ResourceDictionary Source="Dictionary1.xaml"/>. To add more than one dictionary, just add a
<ResourceDictionary Source="Dictionary2.xaml"/> entry after the first entry.

After the <ResourceDictionary.MergedDictionaries> element, you can optionally put additional
resources in your main dictionary. You use resources from a merged dictionary just as you use resources from a
regular dictionary. In the example above, {StaticResource brush} finds the resource in the child (merged) dictionary
(Dictionary1.xaml), while {StaticResource greeting} finds its resource in the main page dictionary.

In the resource-lookup sequence, a MergedDictionaries dictionary is checked only after a check of all the other
keyed resources of that ResourceDictionary. After searching that level, the lookup reaches the merged
dictionaries, and each item in MergedDictionaries is checked. If multiple merged dictionaries exist, these
dictionaries are checked in the inverse of the order in which they are declared in the MergedDictionaries
property. In the following example, if both Dictionary2.xaml and Dictionary1.xaml declared the same key, the key
from Dictionary2.xaml is used first because it's last in the MergedDictionaries set.

<Page
x:Class="MSDNSample.MainPage"
xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml">
<Page.Resources>
<ResourceDictionary>
<ResourceDictionary.MergedDictionaries>
<ResourceDictionary Source="Dictionary1.xaml"/>
<ResourceDictionary Source="Dictionary2.xaml"/>
</ResourceDictionary.MergedDictionaries>
</ResourceDictionary>
</Page.Resources>

<TextBlock Foreground="{StaticResource brush}" Text="greetings!" VerticalAlignment="Center"/>


</Page>

Within the scope of any one ResourceDictionary, the dictionary is checked for key uniqueness. However, that
scope does not extend across different items in different MergedDictionaries files.
You can use the combination of the lookup sequence and lack of unique key enforcement across merged-dictionary
scopes to create a fallback value sequence of ResourceDictionary resources. For example, you might store user
preferences for a particular brush color in the last merged resource dictionary in the sequence, using a resource
dictionary that synchronizes to your app's state and user preference data. However, if no user preferences exist yet,
you can define that same key string for a ResourceDictionary resource in the initial MergedDictionaries file, and
it can serve as the fallback value. Remember that any value you provide in a primary resource dictionary is always
checked before the merged dictionaries are checked, so if you want to use the fallback technique, don't define that
resource in a primary resource dictionary.
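
This fallback technique can also be set up from code. Here's a minimal sketch, assuming a hypothetical
UserPreferences.xaml file in the app package; because it's appended last to MergedDictionaries, its keys win over
defaults declared in earlier merged dictionaries. Run code like this early (for example, in Application.OnLaunched)
so the dictionary is in place before any page references the resources.

// Sketch: append a user-preference dictionary last so its keys take
// precedence over defaults in earlier merged dictionaries.
// "UserPreferences.xaml" is a hypothetical file name.
var userPrefs = new ResourceDictionary
{
    Source = new Uri("ms-appx:///UserPreferences.xaml")
};
Application.Current.Resources.MergedDictionaries.Add(userPrefs);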

Theme resources and theme dictionaries


A ThemeResource is similar to a StaticResource, but the resource lookup is reevaluated when the theme changes.
In this example, you set the foreground of a TextBlock to a value from the current theme.

<TextBlock Text="hello world" Foreground="{ThemeResource FocusVisualWhiteStrokeThemeBrush}" VerticalAlignment="Center"/>

A theme dictionary is a special type of merged dictionary that holds the resources that vary with the theme a user is
currently using on their device. For example, the "light" theme might use a white color brush whereas the
"dark" theme might use a dark color brush. The brush changes the resource that it resolves to, but otherwise the
composition of a control that uses the brush as a resource could be the same. To reproduce the theme-switching
behavior in your own templates and styles, instead of using MergedDictionaries as the property to merge items
into the main dictionaries, use the ThemeDictionaries property.
Each ResourceDictionary element within ThemeDictionaries must have an x:Key value. The value is a string that
names the relevant theme, for example, "Default", "Dark", "Light", or "HighContrast". Typically, Dictionary1 and
Dictionary2 will define resources that have the same names but different values.

Here, you use red text for the light theme and blue text for the dark theme.

<!-- Dictionary1.xaml -->


<ResourceDictionary
xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
xmlns:local="using:MSDNSample">

<SolidColorBrush x:Key="brush" Color="Red"/>

</ResourceDictionary>

<!-- Dictionary2.xaml -->
<ResourceDictionary
xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
xmlns:local="using:MSDNSample">

<SolidColorBrush x:Key="brush" Color="blue"/>

</ResourceDictionary>

In this example, the foreground of a TextBlock is set to a brush that each theme dictionary defines differently.
<Page
x:Class="MSDNSample.MainPage"
xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml">
<Page.Resources>
<ResourceDictionary>
<ResourceDictionary.ThemeDictionaries>
<ResourceDictionary Source="Dictionary1.xaml" x:Key="Light"/>
<ResourceDictionary Source="Dictionary2.xaml" x:Key="Dark"/>
</ResourceDictionary.ThemeDictionaries>
</ResourceDictionary>
</Page.Resources>
<TextBlock Foreground="{StaticResource brush}" Text="hello world" VerticalAlignment="Center"/>
</Page>

For theme dictionaries, the active dictionary to be used for resource lookup changes dynamically, whenever
ThemeResource markup extension is used to make the reference and the system detects a theme change. The
lookup behavior that is done by the system is based on mapping the active theme to the x:Key of a specific theme
dictionary.
It can be useful to examine the way that the theme dictionaries are structured in the default XAML design resources,
which parallel the templates that the Windows Runtime uses by default for its controls. Open the XAML files in the
(Program Files)\Windows Kits\10\DesignTime\CommonConfiguration\Neutral\UAP\<SDK version>\Generic folder using
a text editor or your IDE. Note how the theme dictionaries are defined first in generic.xaml, and how each theme
dictionary defines the same keys. Each such key is then referenced by elements of composition in the various keyed
elements that are outside the theme dictionaries and defined later in the XAML. There's also a separate
themeresources.xaml file for design that contains only the theme resources and extra templates, not the default
control templates. The theme areas are duplicates of what you'd see in generic.xaml.
When you use XAML design tools to edit copies of styles and templates, the design tools extract sections from the
XAML design resource dictionaries and place them as local copies of XAML dictionary elements that are part of your
app and project.
For more info and for a list of the theme-specific and system resources that are available to your app, see XAML
theme resources.

Lookup behavior for XAML resource references


Lookup behavior is the term that describes how the XAML resources system tries to find a XAML resource. The
lookup occurs when a key is referenced as a XAML resource reference from somewhere in the app's XAML. First,
the resources system has predictable behavior for where it will check for the existence of a resource based on
scope. If a resource isn't found in the initial scope, the scope expands. The lookup behavior continues on throughout
the locations and scopes that a XAML resource could possibly be defined by an app or by the system. If all possible
resource lookup attempts fail, an error often results. It's usually possible to eliminate these errors during the
development process.
The lookup behavior for XAML resource references starts with the object where the actual usage is applied and its
own Resources property. If a ResourceDictionary exists there, that ResourceDictionary is checked for an item
that has the requested key. This first level of lookup is rarely relevant because you usually do not define and then
reference a resource on the same object. In fact, a Resources property often doesn't exist here. You can make XAML
resource references from nearly anywhere in XAML; you aren't limited to properties of FrameworkElement
subclasses.
The lookup sequence then checks the next parent object in the runtime object tree of the app. If a
FrameworkElement.Resources exists and holds a ResourceDictionary, the dictionary item with the specified key
string is requested. If the resource is found, the lookup sequence stops and the object is provided to the location
where the reference was made. Otherwise, the lookup behavior advances to the next parent level towards the object
tree root. The search continues recursively upwards until the root element of the XAML is reached, exhausting the
search of all possible immediate resource locations.

Note It is a common practice to define all the immediate resources at the root level of a page, both to take
advantage of this resource-lookup behavior and also as a convention of XAML markup style.

If the requested resource is not found in the immediate resources, the next lookup step is to check the
Application.Resources property. Application.Resources is the best place to put any app-specific resources that
are referenced by multiple pages in your app's navigation structure.
Control templates have another possible location in the reference lookup: theme dictionaries. A theme dictionary is
a single XAML file that has a ResourceDictionary element as its root. The theme dictionary might be a merged
dictionary from Application.Resources. The theme dictionary might also be the control-specific theme dictionary
for a templated custom control.
Finally, there is a resource lookup against platform resources. Platform resources include the control templates that
are defined for each of the system UI themes, and which define the default appearance of all the controls that you
use for UI in a Windows Runtime app. Platform resources also include a set of named resources that relate to
system-wide appearance and themes. These resources are technically a MergedDictionaries item, and thus are
available for lookup from XAML or code once the app has loaded. For example, the system theme resources include
a resource named "SystemColorWindowTextColor" that provides a Color definition to match app text color to a
system window's text color that comes from the operating system and user preferences. Other XAML styles for
your app can refer to this style, or your code can get a resource lookup value (and cast it to Color in the example
case).
For more info and for a list of the theme-specific and system resources that are available to a Windows Store app
that uses XAML, see XAML theme resources.
If the requested key is still not found in any of these locations, a XAML parsing error/exception occurs. In certain
circumstances, the XAML parse exception may be a run-time exception that is not detected either by a XAML
markup compile action, or by a XAML design environment.
Because of the tiered lookup behavior for resource dictionaries, you can deliberately define multiple resource items
that each have the same string value as the key, as long as each resource is defined at a different level. In other
words, although keys must be unique within any given ResourceDictionary, the uniqueness requirement does not
extend to the lookup behavior sequence as a whole. During lookup, only the first such object that's successfully
retrieved is used for the XAML resource reference, and then the lookup stops. You could use this behavior to
request the same XAML resource by key at various positions within your app's XAML but get different resources
back, depending on the scope from which the XAML resource reference was made and how that particular lookup
behaves.
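
To make the sequence concrete, here's a rough sketch of the walk described above, written as a hypothetical helper
(the real lookup is performed by the XAML parser, and this sketch ignores theme dictionaries and platform resources
for brevity): it checks the element's own resources, then each parent's, and finally the app's.

// Hypothetical helper approximating the parse-time lookup walk.
object FindXamlResource(FrameworkElement start, object key)
{
    for (DependencyObject current = start; current != null;
         current = VisualTreeHelper.GetParent(current))
    {
        // Check this level's immediate resources, if it has any.
        if (current is FrameworkElement fe && fe.Resources.ContainsKey(key))
        {
            return fe.Resources[key];
        }
    }
    // Fall back to app-wide resources, as the parser does.
    return Application.Current.Resources.ContainsKey(key)
        ? Application.Current.Resources[key]
        : null;
}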

Forward references within a ResourceDictionary


XAML resource references within a particular resource dictionary must reference a resource that has already been
defined with a key, and that resource must appear lexically before the resource reference. Forward references
cannot be resolved by a XAML resource reference. For this reason, if you use XAML resource references from within
another resource, you must design your resource dictionary structure so that the resources that are used by other
resources are defined first in a resource dictionary.
Resources defined at the app level cannot make references to immediate resources. This is equivalent to attempting
a forward reference, because the app resources are actually processed first (when the app first starts, and before
any navigation-page content is loaded). However, any immediate resource can make a reference to an app
resource, and this can be a useful technique for avoiding forward-reference situations.
XAML resources must be shareable
For an object to exist in a ResourceDictionary, that object must be shareable.
Being shareable is required because, when the object tree of an app is constructed and used at run time, objects
cannot exist at multiple locations in the tree. Internally, the resource system creates copies of resource values to use
in the object graph of your app when each XAML resource is requested.
A ResourceDictionary and Windows Runtime XAML in general supports these objects for shareable usage:
Styles and templates (Style and classes derived from FrameworkTemplate)
Brushes and colors (classes derived from Brush, and Color values)
Animation types including Storyboard
Transforms (classes derived from GeneralTransform)
Matrix and Matrix3D
Point values
Certain other UI-related structures such as Thickness and CornerRadius
XAML intrinsic data types
You can also use custom types as a shareable resource if you follow the necessary implementation patterns. You
define such classes in your backing code (or in runtime components that you include) and then instantiate those
classes in XAML as a resource. Examples are object data sources and IValueConverter implementations for data
binding.
Custom types must have a default constructor, because that's what a XAML parser uses to instantiate a class.
Custom types used as resources can't have the UIElement class in their inheritance, because a UIElement can
never be shareable (it's always intended to represent exactly one UI element that exists at one position in the object
graph of your runtime app).
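
As an illustration, here's a minimal custom type that follows those patterns (a default constructor, no UIElement in
its inheritance) and so can be declared as a resource, for example as <local:UpperCaseConverter x:Key="upper"/>. The
class name is illustrative, not a framework type.

// A shareable custom type usable as a XAML resource.
public class UpperCaseConverter : Windows.UI.Xaml.Data.IValueConverter
{
    // UWP's IValueConverter passes a BCP-47 language string, not a CultureInfo.
    public object Convert(object value, Type targetType, object parameter, string language)
        => (value as string)?.ToUpperInvariant();

    public object ConvertBack(object value, Type targetType, object parameter, string language)
        => throw new NotImplementedException();
}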

UserControl usage scope


A UserControl element has a special situation for resource-lookup behavior because it has the inherent concepts
of a definition scope and a usage scope. A UserControl that makes a XAML resource reference from its definition
scope must be able to support the lookup of that resource within its own definition-scope lookup sequence; that
is, it cannot access app resources. From a UserControl usage scope, a resource reference is treated as being within
the lookup sequence towards its usage page root (just like any other resource reference made from an object in a
loaded object tree) and can access app resources.

ResourceDictionary and XamlReader.Load


You can use a ResourceDictionary as either the root or a part of the XAML input for the XamlReader.Load
method. You can also include XAML resource references in that XAML if all such references are completely self-
contained in the XAML submitted for loading. XamlReader.Load parses the XAML in a context that is not aware of
any other ResourceDictionary objects, not even Application.Resources. Also, don't use {ThemeResource} from
within XAML submitted to XamlReader.Load.
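
Here's a small sketch of a self-contained dictionary passed to XamlReader.Load. The StaticResource reference inside
the string resolves because the color it points to is defined earlier in the same XAML; nothing outside the string is
consulted.

// Sketch: load a self-contained ResourceDictionary from a XAML string.
string xaml =
    "<ResourceDictionary " +
    "xmlns='http://schemas.microsoft.com/winfx/2006/xaml/presentation' " +
    "xmlns:x='http://schemas.microsoft.com/winfx/2006/xaml'>" +
    "<Color x:Key='accent'>Red</Color>" +
    "<SolidColorBrush x:Key='brush' Color='{StaticResource accent}'/>" +
    "</ResourceDictionary>";

var dictionary = (ResourceDictionary)XamlReader.Load(xaml);
var brush = (SolidColorBrush)dictionary["brush"];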

Using a ResourceDictionary from code


Most of the scenarios for a ResourceDictionary are handled exclusively in XAML. You declare the
ResourceDictionary container and the resources within as a XAML file or set of XAML nodes in a UI definition file.
And then you use XAML resource references to request those resources from other parts of XAML. Still, there are
certain scenarios where your app might want to adjust the contents of a ResourceDictionary using code that
executes while the app is running, or at least to query the contents of a ResourceDictionary to see if a resource is
already defined. These code calls are made on a ResourceDictionary instance, so you must first retrieve one:
either an immediate ResourceDictionary somewhere in the object tree, by getting
FrameworkElement.Resources, or Application.Current.Resources.
In C# or Microsoft Visual Basic code, you can reference a resource in a given ResourceDictionary by using the
indexer (Item). A ResourceDictionary is a string-keyed dictionary, so the indexer uses the string key instead of an
integer index. In Visual C++ component extensions (C++/CX) code, use Lookup.
When using code to examine or change a ResourceDictionary, the behavior for APIs like Lookup or Item does
not traverse from immediate resources to app resources; that's a XAML parser behavior that only happens as XAML
pages are loaded. At run time, scope for keys is self-contained to the ResourceDictionary instance that you are
using at the time. However, that scope does extend into MergedDictionaries.
Also, if you request a key that does not exist in the ResourceDictionary, there may not be an error; the return
value may simply be provided as null. You may still get an error, though, if you try to use the returned null as a
value. The error would come from the property's setter, not your ResourceDictionary call. The only way you'd
avoid an error is if the property accepted null as a valid value. Note how this behavior contrasts with XAML lookup
behavior at XAML parse time; a failure to resolve the provided key from XAML at parse time results in a XAML parse
error, even in cases where the property could have accepted null.
Merged resource dictionaries are included into the index scope of the primary resource dictionary that references
the merged dictionary at run time. In other words, you can use Item or Lookup of the primary dictionary to find
any objects that were actually defined in the merged dictionary. In this case, the lookup behavior does resemble the
parse-time XAML lookup behavior: if there are multiple objects in merged dictionaries that each have the same key,
the object from the last-added dictionary is returned.
You are permitted to add items to an existing ResourceDictionary by calling Add (C# or Visual Basic) or Insert
(C++/CX). You could add the items to either immediate resources or app resources. Either of these API calls
requires a key, which satisfies the requirement that each item in a ResourceDictionary must have a key. However,
items that you add to a ResourceDictionary at run time are not relevant to XAML resource references. The
necessary lookup for XAML resource references happens when that XAML is first parsed as the app is loaded (or a
theme change is detected). Resources added to collections at run time weren't available then, and altering the
ResourceDictionary doesn't invalidate an already retrieved resource from it even if you change the value of that
resource.
You also can remove items from a ResourceDictionary at run time, make copies of some or all items, or other
operations. The members listing for ResourceDictionary indicates which APIs are available. Note that because
ResourceDictionary has a projected API to support its underlying collection interfaces, your API options differ
depending on whether you are using C# or Visual Basic versus C++/CX.
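
The following sketch pulls these behaviors together in C#: query with ContainsKey and the indexer, and add a value
at run time (which, as noted above, won't retroactively affect XAML references that were already resolved at parse
time).

var resources = Application.Current.Resources;

// Guard the lookup; a missing key can come back as null rather than
// throwing, and the error would only surface when the null is used.
object value = resources.ContainsKey("myBrush") ? resources["myBrush"] : null;

if (value == null)
{
    // Add requires a key. The new entry is visible to Item/Lookup calls
    // (including through MergedDictionaries scope), but not to XAML
    // references that were already resolved.
    resources.Add("myBrush", new SolidColorBrush(Windows.UI.Colors.Red));
}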

ResourceDictionary and localization


A XAML ResourceDictionary might initially contain strings that are to be localized. If so, store these strings as
project resources instead of in a ResourceDictionary. Take the strings out of the XAML, and instead give the
owning element an x:Uid directive value. Then, define a resource in a resources file. Provide a resource name in the
form XUIDValue.PropertyName and a resource value of the string that should be localized.

Custom resource lookup


For advanced scenarios, you can implement a class that can have different behavior than the XAML resource
reference lookup behavior described in this topic. To do this, you implement the class
CustomXamlResourceLoader, and then you can access that behavior by using the CustomResource markup
extension for resource references rather than using StaticResource or ThemeResource. Most apps won't have
scenarios that require this. For more info, see CustomXamlResourceLoader.
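
As a rough sketch, a custom loader derives from CustomXamlResourceLoader and overrides GetResource; you assign an
instance to the static CustomXamlResourceLoader.Current before the XAML that uses {CustomResource} is loaded. The
resource ID and return value here are illustrative.

public class MyResourceLoader : Windows.UI.Xaml.Resources.CustomXamlResourceLoader
{
    // Called for each {CustomResource} reference encountered by the parser.
    protected override object GetResource(
        string resourceId, string objectType, string propertyName, string propertyType)
    {
        if (resourceId == "AppDisplayName") // illustrative resource ID
        {
            return "Contoso Sample";
        }
        return null;
    }
}

// At startup, before loading XAML that uses {CustomResource AppDisplayName}:
// CustomXamlResourceLoader.Current = new MyResourceLoader();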

Related topics
ResourceDictionary
XAML overview
StaticResource markup extension
ThemeResource markup extension
XAML theme resources
Styling controls
x:Key attribute
XAML theme resources

Theme resources in XAML are a set of resources that apply different values depending on which system theme is
active. There are three themes that the XAML framework supports: "Light", "Dark", and "HighContrast".
Prerequisites
This topic assumes that you have read ResourceDictionary and XAML resource references.

How theme resources differ from static resources


There are two XAML markup extensions that can reference a XAML resource from an existing XAML resource
dictionary: {StaticResource} markup extension and {ThemeResource} markup extension.
Evaluation of a {ThemeResource} markup extension occurs when the app loads and subsequently each time the
theme changes at runtime. This is typically the result of the user changing their device settings or from a
programmatic change within the app that alters its current theme.
In contrast, a {StaticResource} markup extension is evaluated only when the XAML is first loaded by the app. It does
not update. It's similar to doing a find-and-replace in your XAML with the actual runtime value at app launch.
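
One way to trigger that re-evaluation from code is to change FrameworkElement.RequestedTheme on the root of a
subtree, as in this sketch of a page's code-behind: {ThemeResource} references in the subtree update, while
{StaticResource} references keep their load-time values.

// Sketch: toggle this page between the light and dark themes.
private void ToggleTheme(object sender, RoutedEventArgs e)
{
    this.RequestedTheme = this.RequestedTheme == ElementTheme.Dark
        ? ElementTheme.Light
        : ElementTheme.Dark;
}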

Theme resources and where they fit in the resource dictionary structure


Each theme resource is part of the XAML file themeresources.xaml. For design purposes, themeresources.xaml is
available in the (Program Files)\Windows Kits\10\DesignTime\CommonConfiguration\Neutral\UAP\<SDK
version>\Generic folder from a Windows Software Development Kit (SDK) installation. The resource dictionaries in
themeresources.xaml are also reproduced in generic.xaml in the same directory.

Note The Windows Runtime doesn't use these physical files for runtime lookup. That's why they are
specifically in a DesignTime folder, and they aren't copied into apps by default. Instead, these resource
dictionaries exist in memory as part of the Windows Runtime itself, and your app's XAML resource references
to theme resources (or system resources) resolve there at runtime.

Guidelines for using theme resources


Follow these guidelines when you define and consume your own custom theme resources.
DO:
Specify theme dictionaries for both "Light" and "Dark" in addition to your "HighContrast" dictionary. Although
you can create a ResourceDictionary with "Default" as the key, it's preferred to be explicit and instead use
"Light", "Dark", and "HighContrast".
Use the {ThemeResource} markup extension in: Styles, Setters, Control templates, Property setters, and
Animations.
DO NOT:
Use the {ThemeResource} markup extension in your resource definitions inside your ThemeDictionaries.
Use {StaticResource} markup extension instead.
EXCEPTION: it's acceptable to use the {ThemeResource} markup extension to reference resources that are
agnostic to the app theme in your ThemeDictionaries. Examples of these resources are accent color
resources like SystemAccentColor, or system color resources, which are typically prefixed with "SystemColor",
like SystemColorButtonFaceColor.
Caution If you don't follow these guidelines, you might see unexpected behavior related to themes in your app.
For more info, see the Troubleshooting theme resources section.

The XAML color ramp and theme-dependent brushes


The combined set of colors for "Light", "Dark", and "HighContrast" themes make up the Windows color ramp in
XAML. Whether you want to modify the system themes, or apply a system theme to your own XAML elements, its
important to understand how the color resources are structured.
Light and Dark theme colors
The XAML framework provides a set of named Color resources with values that are tailored for the "Light" and
"Dark" themes. The keys you use to reference these follow the naming format: System[Simple Light/Dark Name]Color .
This table lists the key, simple name, and string representation of the color (using the #aarrggbb format) for the
"Light" and "Dark" resources provided by the XAML framework. The key is used to reference the resource in an
app. The "Simple light/dark name" is used as part of the brush naming convention that we explain later.

KEY                               SIMPLE LIGHT/DARK NAME   LIGHT       DARK

SystemAltHighColor                AltHigh                  #FFFFFFFF   #FF000000
SystemAltLowColor                 AltLow                   #33FFFFFF   #33000000
SystemAltMediumColor              AltMedium                #99FFFFFF   #99000000
SystemAltMediumHighColor          AltMediumHigh            #CCFFFFFF   #CC000000
SystemAltMediumLowColor           AltMediumLow             #66FFFFFF   #66000000
SystemBaseHighColor               BaseHigh                 #FF000000   #FFFFFFFF
SystemBaseLowColor                BaseLow                  #33000000   #33FFFFFF
SystemBaseMediumColor             BaseMedium               #99000000   #99FFFFFF
SystemBaseMediumHighColor         BaseMediumHigh           #CC000000   #CCFFFFFF
SystemBaseMediumLowColor          BaseMediumLow            #66000000   #66FFFFFF
SystemChromeAltLowColor           ChromeAltLow             #FF171717   #FFF2F2F2
SystemChromeBlackHighColor        ChromeBlackHigh          #FF000000   #FF000000
SystemChromeBlackLowColor         ChromeBlackLow           #33000000   #33000000
SystemChromeBlackMediumLowColor   ChromeBlackMediumLow     #66000000   #66000000
SystemChromeBlackMediumColor      ChromeBlackMedium        #CC000000   #CC000000
SystemChromeDisabledHighColor     ChromeDisabledHigh       #FFCCCCCC   #FF333333
SystemChromeDisabledLowColor      ChromeDisabledLow        #FF7A7A7A   #FF858585
SystemChromeHighColor             ChromeHigh               #FFCCCCCC   #FF767676
SystemChromeLowColor              ChromeLow                #FFF2F2F2   #FF171717
SystemChromeMediumColor           ChromeMedium             #FFE6E6E6   #FF1F1F1F
SystemChromeMediumLowColor        ChromeMediumLow          #FFF2F2F2   #FF2B2B2B
SystemChromeWhiteColor            ChromeWhite              #FFFFFFFF   #FFFFFFFF
SystemListLowColor                ListLow                  #19000000   #19FFFFFF
SystemListMediumColor             ListMedium               #33000000   #33FFFFFF

Windows system high-contrast colors


In addition to the set of resources provided by the XAML framework, there's a set of color values derived from the
Windows system palette. These colors are not specific to the Windows Runtime or Universal Windows Platform
(UWP) apps. However, many of the XAML Brush resources consume these colors when the system is operating
(and the app is running) using the "HighContrast" theme. The XAML framework provides these system-wide colors
as keyed resources. The keys follow the naming format: SystemColor[name]Color .
This table lists the system-wide colors that XAML provides as resource objects derived from the Windows system
palette. The "Ease of Access name" column shows how color is labeled in the Windows settings UI. The "Simple
HighContrast name" column is a one word description of how the color is applied across the XAML common
controls. It's used as part of the brush naming convention that we explain later. The "Initial default" column shows
the values you'd get if the system is not running in high contrast at all.

KEY                             EASE OF ACCESS NAME          SIMPLE HIGHCONTRAST NAME   INITIAL DEFAULT

SystemColorButtonFaceColor      Button Text (background)     Background                 #FFF0F0F0
SystemColorButtonTextColor      Button Text (foreground)     Foreground                 #FF000000
SystemColorGrayTextColor        Disabled Text                Disabled                   #FF6D6D6D
SystemColorHighlightColor       Selected Text (background)   Highlight                  #FF3399FF
SystemColorHighlightTextColor   Selected Text (foreground)   HighlightAlt               #FFFFFFFF
SystemColorHotlightColor        Hyperlinks                   Hyperlink                  #FF0066CC
SystemColorWindowColor          Background                   PageBackground             #FFFFFFFF
SystemColorWindowTextColor      Text                         PageText                   #FF000000

Windows provides different high-contrast themes, and enables the user to set the specific colors for their high-
contrast settings through the Ease of Access Center. Therefore, it's not possible to provide a
definitive list of high-contrast color values.

For more info about supporting high-contrast themes, see High-contrast themes.
System accent color
In addition to the system high-contrast theme colors, the system accent color is provided as a special color
resource using the key SystemAccentColor . At runtime, this resource gets the color that the user has specified as the
accent color in the Windows personalization settings.

Note It's possible to override the system color resources for high-contrast color and accent color by creating
resources with the same names, but it's a best practice to respect the user's color choices, especially for high-
contrast settings.
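
For example, this sketch reads the user's current accent color from the platform resources at run time and wraps it
in a brush; the platform theme resources are available for code lookup once the app has loaded.

var accent = (Windows.UI.Color)Application.Current.Resources["SystemAccentColor"];
var accentBrush = new SolidColorBrush(accent);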

Theme-dependent brushes
The color resources shown in the preceding sections are used to set the Color property of SolidColorBrush
resources in the system theme resource dictionaries. You use the brush resources to apply the color to XAML
elements. The keys for the brush resources follow the naming format:
SystemControl[Simple HighContrast name][Simple light/dark name]Brush. For example, SystemControlBackgroundAltHighBrush.
Let's look at how the color value for this brush is determined at run time. In the "Light" and "Dark" resource
dictionaries, this brush is defined like this:
<SolidColorBrush x:Key="SystemControlBackgroundAltHighBrush" Color="{StaticResource SystemAltHighColor}"/>

In the "HighContrast" resource dictionary, this brush is defined like this:


<SolidColorBrush x:Key="SystemControlBackgroundAltHighBrush" Color="{ThemeResource SystemColorButtonFaceColor}"/>

When this brush is applied to a XAML element, its color is determined at run-time by the current theme, as shown
in this table.

THEME          COLOR SIMPLE NAME   COLOR RESOURCE               RUNTIME VALUE

Light          AltHigh             SystemAltHighColor           #FFFFFFFF
Dark           AltHigh             SystemAltHighColor           #FF000000
HighContrast   Background          SystemColorButtonFaceColor   The color specified in settings for the button background.
You can use the SystemControl[Simple HighContrast name][Simple light/dark name]Brush naming scheme to determine which
brush to apply to your own XAML elements.

Note Not every combination of [Simple HighContrast name][Simple light/dark name] is provided as a brush
resource.

The XAML type ramp


The themeresources.xaml file defines several resources that define a Style that you can apply to text containers in
your UI, specifically for either TextBlock or RichTextBlock. These are not the default implicit styles. They are
provided to make it easier for you to create XAML UI definitions that match the Windows type ramp documented
in Guidelines for fonts.
These styles are for text attributes that you want applied to the whole text container. If you want styles applied just
to sections of the text, set attributes on the text elements within the container, such as on a Run in
TextBlock.Inlines or on a Paragraph in RichTextBlock.Blocks.
The styles look like this when applied to a TextBlock:
BaseTextBlockStyle
TargetType: TextBlock
Supplies the common properties for all the other TextBlock container styles.

<!-- Usage -->


<TextBlock Text="Base" Style="{StaticResource BaseTextBlockStyle}"/>

<!-- Style definition -->


<Style x:Key="BaseTextBlockStyle" TargetType="TextBlock">
<Setter Property="FontFamily" Value="Segoe UI"/>
<Setter Property="FontWeight" Value="SemiBold"/>
<Setter Property="FontSize" Value="15"/>
<Setter Property="TextTrimming" Value="None"/>
<Setter Property="TextWrapping" Value="Wrap"/>
<Setter Property="LineStackingStrategy" Value="MaxHeight"/>
<Setter Property="TextLineBounds" Value="Full"/>
</Style>

HeaderTextBlockStyle

<!-- Usage -->


<TextBlock Text="Header" Style="{StaticResource HeaderTextBlockStyle}"/>

<!-- Style definition -->


<Style x:Key="HeaderTextBlockStyle" TargetType="TextBlock"
BasedOn="{StaticResource BaseTextBlockStyle}">
<Setter Property="FontSize" Value="46"/>
<Setter Property="FontWeight" Value="Light"/>
<Setter Property="OpticalMarginAlignment" Value="TrimSideBearings"/>
</Style>

SubheaderTextBlockStyle

<!-- Usage -->


<TextBlock Text="SubHeader" Style="{StaticResource SubheaderTextBlockStyle}"/>

<!-- Style definition -->


<Style x:Key="SubheaderTextBlockStyle" TargetType="TextBlock"
BasedOn="{StaticResource BaseTextBlockStyle}">
<Setter Property="FontSize" Value="34"/>
<Setter Property="FontWeight" Value="Light"/>
<Setter Property="OpticalMarginAlignment" Value="TrimSideBearings"/>
</Style>

TitleTextBlockStyle

<!-- Usage -->


<TextBlock Text="Title" Style="{StaticResource TitleTextBlockStyle}"/>

<!-- Style definition -->


<Style x:Key="TitleTextBlockStyle" TargetType="TextBlock"
BasedOn="{StaticResource BaseTextBlockStyle}">
<Setter Property="FontWeight" Value="SemiLight"/>
<Setter Property="FontSize" Value="24"/>
<Setter Property="OpticalMarginAlignment" Value="TrimSideBearings"/>
</Style>

SubtitleTextBlockStyle
<!-- Usage -->
<TextBlock Text="SubTitle" Style="{StaticResource SubtitleTextBlockStyle}"/>

<!-- Style definition -->


<Style x:Key="SubtitleTextBlockStyle" TargetType="TextBlock"
BasedOn="{StaticResource BaseTextBlockStyle}">
<Setter Property="FontWeight" Value="Normal"/>
<Setter Property="FontSize" Value="20"/>
<Setter Property="OpticalMarginAlignment" Value="TrimSideBearings"/>
</Style>

BodyTextBlockStyle

<!-- Usage -->


<TextBlock Text="Body" Style="{StaticResource BodyTextBlockStyle}"/>

<!-- Style definition -->


<Style x:Key="BodyTextBlockStyle" TargetType="TextBlock"
BasedOn="{StaticResource BaseTextBlockStyle}">
<Setter Property="FontWeight" Value="Normal"/>
<Setter Property="FontSize" Value="15"/>
</Style>

CaptionTextBlockStyle

<!-- Usage -->


<TextBlock Text="Caption" Style="{StaticResource CaptionTextBlockStyle}"/>

<!-- Style definition -->


<Style x:Key="CaptionTextBlockStyle" TargetType="TextBlock"
BasedOn="{StaticResource BaseTextBlockStyle}">
<Setter Property="FontSize" Value="12"/>
<Setter Property="FontWeight" Value="Normal"/>
</Style>

BaseRichTextBlockStyle
TargetType: RichTextBlock
Supplies the common properties for all the other RichTextBlock container styles.

<!-- Usage -->


<RichTextBlock Style="{StaticResource BaseRichTextBlockStyle}">
<Paragraph>Rich text.</Paragraph>
</RichTextBlock>

<!-- Style definition -->


<Style x:Key="BaseRichTextBlockStyle" TargetType="RichTextBlock">
<Setter Property="FontFamily" Value="Segoe UI"/>
<Setter Property="FontWeight" Value="SemiBold"/>
<Setter Property="FontSize" Value="15"/>
<Setter Property="TextTrimming" Value="None"/>
<Setter Property="TextWrapping" Value="Wrap"/>
<Setter Property="LineStackingStrategy" Value="MaxHeight"/>
<Setter Property="TextLineBounds" Value="Full"/>
<Setter Property="OpticalMarginAlignment" Value="TrimSideBearings"/>
</Style>

BodyRichTextBlockStyle
<!-- Usage -->
<RichTextBlock Style="{StaticResource BodyRichTextBlockStyle}">
<Paragraph>Rich text.</Paragraph>
</RichTextBlock>

<!-- Style definition -->


<Style x:Key="BodyRichTextBlockStyle" TargetType="RichTextBlock" BasedOn="{StaticResource BaseRichTextBlockStyle}">
<Setter Property="FontWeight" Value="Normal"/>
</Style>

Note The RichTextBlock styles don't have all the text ramp styles that TextBlock does, mainly because the
block-based document object model for RichTextBlock makes it easier to set attributes on the individual text
elements. Also, setting TextBlock.Text using the XAML content property introduces a situation where there is
no text element to style and thus you'd have to style the container. That isn't an issue for RichTextBlock
because its text content always has to be in specific text elements like Paragraph, which is where you might
apply XAML styles for page header, page subheader and similar text ramp definitions.

Miscellaneous Named styles


There's an additional set of keyed Style definitions you can apply to style a Button differently than its default
implicit style.
TextBlockButtonStyle
TargetType: ButtonBase
Apply this style to a Button when you need to show text that a user can click to take action. The text is styled using
the current accent color to distinguish it as interactive and has focus rectangles that work well for text. Unlike the
implicit style of a HyperlinkButton, the TextBlockButtonStyle does not underline the text.
The template also styles the presented text to use SystemControlHyperlinkBaseMediumBrush (for
"PointerOver" state), SystemControlHighlightBaseMediumLowBrush (for "Pressed" state) and
SystemControlDisabledBaseLowBrush (for "Disabled" state).
Here's a Button with the TextBlockButtonStyle resource applied to it.

<Button Content="Clickable text" Style="{StaticResource TextBlockButtonStyle}"


Click="Button_Click"/>


NavigationBackButtonNormalStyle
TargetType: Button
This Style provides a complete template for a Button that can be the navigation back button for a navigation app.
It includes theme resource references that make this button use the Segoe MDL2 Assets symbol font, so you
should use a Symbol value as the content rather than text. The default dimensions are 40 x 40 pixels. To tailor the
styling you can either explicitly set the Height, Width, FontSize, and other properties on your Button or create a
derived style using BasedOn.
Here's a Button with the NavigationBackButtonNormalStyle resource applied to it.
<Button Content="&amp;#xE830;" Style="{StaticResource NavigationBackButtonNormalStyle}"
Click="Button_Click"/>


NavigationBackButtonSmallStyle
TargetType: Button
This Style provides a complete template for a Button that can be the navigation back button for a navigation app.
It's similar to NavigationBackButtonNormalStyle, but its dimensions are 30 x 30 pixels.
Here's a Button with the NavigationBackButtonSmallStyle resource applied to it.

<Button Content="&amp;#xE830;" Style="{StaticResource NavigationBackButtonSmallStyle}"


Click="Button_Click"/>

Troubleshooting theme resources


If you don't follow the guidelines for using theme resources, you might see unexpected behavior related to themes
in your app.
For example, when you open a light-themed flyout, parts of your dark-themed app also change as if they were in
the light theme. Or if you navigate to a light-themed page and then navigate back, the original dark-themed page
(or parts of it) now looks as though it is in the light theme.
Typically, these types of issues occur when you provide a "Default" theme and a "HighContrast" theme to support
high-contrast scenarios, and then use both "Light" and "Dark" themes in different parts of your app.
For example, consider this theme dictionary definition:

<!-- DO NOT USE. THIS XAML DEMONSTRATES AN ERROR. -->


<ResourceDictionary>
<ResourceDictionary.ThemeDictionaries>
<ResourceDictionary x:Key="Default">
<SolidColorBrush x:Key="myBrush" Color="{ThemeResource SystemBaseHighColor}"/>
</ResourceDictionary>
<ResourceDictionary x:Key="HighContrast">
<SolidColorBrush x:Key="myBrush" Color="{ThemeResource SystemColorButtonFaceColor}"/>
</ResourceDictionary>
</ResourceDictionary.ThemeDictionaries>
</ResourceDictionary>

Intuitively, this looks correct. You want to change the color pointed to by myBrush when in high-contrast, but when
not in high-contrast, you rely on the {ThemeResource} markup extension to make sure that myBrush points to the
right color for your theme. If your app never has FrameworkElement.RequestedTheme set on elements within
its visual tree, this will typically work as expected. However, you run into problems in your app as soon as you start
to re-theme different parts of your visual tree.
The problem occurs because brushes are shared resources, unlike most other XAML types. If you have two elements
in XAML sub-trees with different themes that reference the same brush resource, then as the framework walks
each sub-tree to update its {ThemeResource} markup extension expressions, changes to the shared brush resource
are reflected in the other sub-tree, which is not your intended result.
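
To make the failure mode concrete, here's a minimal sketch of the re-theming that triggers it; the element names are hypothetical panels that both reference myBrush in their XAML:

// Re-theme two sibling sub-trees differently at runtime. As each sub-tree's
// {ThemeResource} expressions are re-evaluated, the single shared brush
// instance is mutated, so the last theme applied "wins" in both sub-trees.
settingsPane.RequestedTheme = ElementTheme.Light;
contentPane.RequestedTheme = ElementTheme.Dark;
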
To fix this, replace the "Default" dictionary with separate theme dictionaries for both "Light" and "Dark" themes in
addition to "HighContrast":

<!-- DO NOT USE. THIS XAML DEMONSTRATES AN ERROR. -->
<ResourceDictionary>
<ResourceDictionary.ThemeDictionaries>
<ResourceDictionary x:Key="Light">
<SolidColorBrush x:Key="myBrush" Color="{ThemeResource SystemBaseHighColor}"/>
</ResourceDictionary>
<ResourceDictionary x:Key="Dark">
<SolidColorBrush x:Key="myBrush" Color="{ThemeResource SystemBaseHighColor}"/>
</ResourceDictionary>
<ResourceDictionary x:Key="HighContrast">
<SolidColorBrush x:Key="myBrush" Color="{ThemeResource SystemColorButtonFaceColor}"/>
</ResourceDictionary>
</ResourceDictionary.ThemeDictionaries>
</ResourceDictionary>

However, problems still occur if any of these resources are referenced in inherited properties like Foreground.
Your custom control template might specify the foreground color of an element using the {ThemeResource}
markup extension, but when the framework propagates the inherited value to child elements, it provides a direct
reference to the resource that was resolved by the {ThemeResource} markup extension expression. This causes
problems when the framework processes theme changes as it walks your control's visual tree. It re-evaluates the
{ThemeResource} markup extension expression to get a new brush resource but doesn't yet propagate this
reference down to the children of your control; this happens later, such as during the next measure pass.
As a result, after walking the control visual tree in response to a theme change, the framework walks the children
and updates any {ThemeResource} markup extension expressions set on them, or on objects set on their
properties. This is where the problem occurs: when the framework walks the brush resource, the {ThemeResource}
markup extension expression that specifies the brush's color is re-evaluated.
At this point, the framework appears to have polluted your theme dictionary because it now has a resource from
one dictionary that has its color set from another dictionary.
To fix this problem, use the {StaticResource} markup extension instead of {ThemeResource} markup extension.
With the guidelines applied, the theme dictionaries look like this:

<ResourceDictionary>
<ResourceDictionary.ThemeDictionaries>
<ResourceDictionary x:Key="Light">
<SolidColorBrush x:Key="myBrush" Color="{StaticResource SystemBaseHighColor}"/>
</ResourceDictionary>
<ResourceDictionary x:Key="Dark">
<SolidColorBrush x:Key="myBrush" Color="{StaticResource SystemBaseHighColor}"/>
</ResourceDictionary>
<ResourceDictionary x:Key="HighContrast">
<SolidColorBrush x:Key="myBrush" Color="{ThemeResource SystemColorButtonFaceColor}"/>
</ResourceDictionary>
</ResourceDictionary.ThemeDictionaries>
</ResourceDictionary>

Notice that the {ThemeResource} markup extension is still used in the "HighContrast" dictionary instead of
{StaticResource} markup extension. This situation falls under the exception given earlier in the guidelines. Most of
the brush values used for the "HighContrast" theme use color choices that are globally controlled by the
system, but exposed to XAML as specially-named resources (those prefixed with SystemColor).
The system enables the user to set the specific colors that should be used for their high contrast settings through
the Ease of Access Center. Those color choices are applied to the specially-named resources. The XAML framework
uses the same theme changed event to also update these brushes when it detects they've changed at the system
level. This is why the {ThemeResource} markup extension is used here.
Controls and patterns for UWP apps

In UWP app development, a control is a UI element that displays content or enables interaction. Controls are the
building blocks of the user interface. We provide 45+ controls for you to use, ranging from simple buttons to
powerful data controls like the grid view. A pattern is a recipe for combining several controls to make something
new.
The articles in this section provide design guidance and coding instructions for adding controls & patterns to your
UWP app.

Intro
General instructions and code examples for adding and styling controls in XAML and C#.

Add controls and handle events


There are 3 key steps to adding controls to your app: Add a control to your app UI, set properties on the control,
and add code to the control's event handlers so that it does something.

Styling controls
You can customize the appearance of your apps in many ways by using the XAML framework. Styles let you set
control properties and reuse those settings for a consistent appearance across multiple controls.

Alphabetical index
Detailed information about specific controls and patterns.
(For a list sorted by function, see Index of controls by function.)
Auto-suggest box
Bars
Buttons
Checkbox
Date and time controls
Calendar date picker
Calendar view
Date picker
Time picker
Dialogs and flyouts
Flip view
Hub
Hyperlinks
Images and image brushes
Lists
Map control
Master/details
Media playback
Custom transport controls
Menus and context menus
Nav pane
Progress controls
Radio button
Scrolling and panning controls
Search
Semantic zoom
Slider
Split view
Tabs and pivots
Text controls
Labels
Password boxes
Rich-edit boxes
Rich-text blocks
Spell-checking and prediction
Text block
Text box
Tiles, badges, and notifications
Tiles
Adaptive tiles
Adaptive tiles schema
Asset guidelines
Special tile templates
Adaptive and interactive toast notifications
Badge notifications
Notifications visualizer
Notification delivery methods
Local tile notifications
Periodic notifications
WNS
Raw notifications
Toggle
Tooltips
Web view

Additional controls options


Additional controls for UWP development are available from companies such as Telerik, SyncFusion, DevExpress,
Infragistics, ComponentOne, and ActiPro. These controls provide additional support for enterprise and .NET
developers by augmenting the standard system controls with custom controls and services.
If you're interested in learning more about these controls, check out the Customer orders database sample on
GitHub. This sample makes use of the data grid control and data entry validation from Telerik, which is part of their
UI for UWP suite. The UI for UWP suite is a collection of over 20 controls that is available as an open source project
through the .NET Foundation.
Intro to controls and patterns

In UWP app development, a control is a UI element that displays content or enables interaction. You create the UI
for your app by using controls such as buttons, text boxes, and combo boxes to display data and get user input.
A pattern is a recipe for modifying a control or combining several controls to make something new. For example,
the Nav pane pattern is a way that you can use a SplitView control for app navigation. Similarly, you can customize
the template of a Pivot control to implement the tab pattern.
In many cases, you can use a control as-is. But XAML controls separate function from structure and appearance, so
you can make various levels of modification to make them fit your needs. In the Style section, you can learn how to
use XAML styles and control templates to modify a control.
In this section, we provide guidance for each of the XAML controls you can use to build your app UI. To start, this
article shows you how to add controls to your app. There are 3 key steps to using controls in your app:
Add a control to your app UI.
Set properties on the control, such as width, height, or foreground color.
Add code to the control's event handlers so that it does something.

Add a control
You can add a control to an app in several ways:
Use a design tool like Blend for Visual Studio or the Microsoft Visual Studio Extensible Application Markup
Language (XAML) designer.
Add the control to the XAML markup in the Visual Studio XAML editor.
Add the control in code. Controls that you add in code are visible when the app runs, but are not visible in the
Visual Studio XAML designer.
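
For example, here's a minimal sketch of the code approach; rootPanel is a hypothetical StackPanel declared in the page's XAML:

// Create a TextBox in code and add it to an existing panel.
var textBox = new TextBox
{
    Text = "TextBox",
    HorizontalAlignment = HorizontalAlignment.Left,
    VerticalAlignment = VerticalAlignment.Top
};
rootPanel.Children.Add(textBox);
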
In Visual Studio, when you add and manipulate controls in your app, you can use many of the program's features,
including the Toolbox, XAML designer, XAML editor, and the Properties window.
The Visual Studio Toolbox displays many of the controls that you can use in your app. To add a control to your app,
double-click it in the Toolbox. For example, when you double-click the TextBox control, this XAML is added to the
XAML view.

<TextBox HorizontalAlignment="Left" Text="TextBox" VerticalAlignment="Top"/>

You can also drag the control from the Toolbox to the XAML designer.

Set the name of a control


To work with a control in code, you set its x:Name attribute and reference it by name in your code. You can set the
name in the Visual Studio Properties window or in XAML. Here's how to set the name of the currently selected
control by using the Name text box at the top of the Properties window.
To name a control
1. Select the element to name.
2. In the Properties panel, type a name into the Name text box.
3. Press Enter to commit the name.

Here's how to set the name of a control in the XAML editor by adding the x:Name attribute.

<Button x:Name="Button1" Content="Button"/>

Set the control properties


You use properties to specify the appearance, content, and other attributes of controls. When you add a control
using a design tool, some properties that control size, position, and content might be set for you by Visual Studio.
You can change some properties, such as Width, Height or Margin, by selecting and manipulating the control in the
Design view. This illustration shows some of the resizing tools available in Design view.

You might want to let the control be sized and positioned automatically. In this case, you can reset the size and
position properties that Visual Studio set for you.
To reset a property
1. In the Properties panel, click the property marker next to the property value. The property menu opens.
2. In the property menu, click Reset.

You can set control properties in the Properties window, in XAML, or in code. For example, to change the
foreground color for a Button, you set the control's Foreground property. This illustration shows how to set the
Foreground property by using the color picker in the Properties window.
Here's how to set the Foreground property in the XAML editor. Notice the Visual Studio IntelliSense window that
opens to help you with the syntax.

Here's the resulting XAML after you set the Foreground property.

<Button x:Name="Button1" Content="Button"


HorizontalAlignment="Left" VerticalAlignment="Top"
Foreground="Beige"/>

Here's how to set the Foreground property in code.


Button1.Foreground = new SolidColorBrush(Windows.UI.Colors.Beige);

Create an event handler


Each control has events that enable you to respond to actions from your user or other changes in your app. For
example, a Button control has a Click event that is raised when a user clicks the Button. You create a method, called
an event handler, to handle the event. You can associate a control's event with an event handler method in the
Properties window, in XAML, or in code. For more info about events, see Events and routed events overview.
To create an event handler, select the control and then click the Events tab at the top of the Properties window. The
Properties window lists all of the events available for that control. Here are some of the events for a Button.

To create an event handler with the default name, double-click the text box next to the event name in the Properties
window. To create an event handler with a custom name, type the name of your choice into the text box and press
Enter. The event handler is created and the code-behind file is opened in the code editor. The event handler method
has 2 parameters. The first is sender, which is a reference to the object where the handler is attached. The sender
parameter is an Object type. If you expect to check or change state on the sender object itself, you typically cast
sender to a more precise type; your app design determines which type is safe to cast to, based on where the
handler is attached. The second value is event data, which generally appears in signatures as the e or args
parameter.
Here's code that handles the Click event of a Button named Button1. When you click the button, the Foreground
property of the Button you clicked is set to blue.

private void Button_Click(object sender, RoutedEventArgs e)
{
Button b = (Button)sender;
b.Foreground = new SolidColorBrush(Windows.UI.Colors.Blue);
}

You can also associate an event handler in XAML. In the XAML editor, type in the event name that you want to
handle. Visual Studio shows an IntelliSense window when you begin typing. After you specify the event, you can
double-click <New Event Handler> in the IntelliSense window to create a new event handler with the default name, or
select an existing event handler from the list.
Here's the IntelliSense window that appears. It helps you create a new event handler or select an existing event
handler.

This example shows how to associate a Click event with an event handler named Button_Click in XAML.

<Button Name="Button1" Content="Button" Click="Button_Click"/>

You can also associate an event with its event handler in the code-behind. Here's how to associate an event handler
in code.

Button1.Click += new RoutedEventHandler(Button_Click);

Related topics
Index of controls by function
Windows.UI.Xaml.Controls namespace
Layout
Style
Usability
Controls by function

The XAML UI framework for Windows provides an extensive library of controls that support UI development. Some
of these controls have a visual representation; others function as the containers for other controls or content, such
as images and media.
You can see many of the Windows UI controls in action by downloading the XAML UI Basics sample.
Here's a list by function of the common XAML controls you can use in your app.

Appbars and commands


App bar
A toolbar for displaying application-specific commands. See Command bar.
Reference: AppBar
App bar button
A button for showing commands using app bar styling.

Reference: AppBarButton, SymbolIcon, BitmapIcon, FontIcon, PathIcon


Design and how-to: App bar and command bar control guide
Sample code: XAML Commanding sample
App bar separator
Visually separates groups of commands in a command bar.
Reference: AppBarSeparator
Sample code: XAML Commanding sample
App bar toggle button
A button for toggling commands in a command bar.
Reference: AppBarToggleButton
Sample code: XAML Commanding sample
Command bar
A specialized app bar that handles the resizing of app bar button elements.
<CommandBar>
<AppBarButton Icon="Back" Label="Back" Click="AppBarButton_Click"/>
<AppBarButton Icon="Stop" Label="Stop" Click="AppBarButton_Click"/>
<AppBarButton Icon="Play" Label="Play" Click="AppBarButton_Click"/>
</CommandBar>

Reference: CommandBar
Design and how-to: App bar and command bar control guide
Sample code: XAML Commanding sample

Buttons
Button
A control that responds to user input and raises a Click event.

<Button x:Name="button1" Content="Button"


Click="Button_Click" />

Reference: Button
Design and how-to: Buttons control guide
Hyperlink
See Hyperlink button.
Hyperlink button
A button that appears as marked up text and opens the specified URI in a browser.

<HyperlinkButton Content="www.microsoft.com"
NavigateUri="http://www.microsoft.com"/>

Reference: HyperlinkButton
Design and how-to: Hyperlinks control guide
Repeat button
A button that raises its Click event repeatedly from the time it's pressed until it's released.

<RepeatButton x:Name="repeatButton1" Content="Repeat Button"
    Click="RepeatButton_Click" />

Reference: RepeatButton
Design and how-to: Buttons control guide

Collection/data controls
Flip view
A control that presents a collection of items that the user can flip through, one item at a time.

<FlipView x:Name="flipView1" SelectionChanged="FlipView_SelectionChanged">
    <Image Source="Assets/Logo.png" />
    <Image Source="Assets/SplashScreen.png" />
    <Image Source="Assets/SmallLogo.png" />
</FlipView>

Reference: FlipView
Design and how-to: Flip view control guide
Grid view
A control that presents a collection of items in rows and columns that can scroll horizontally.

<GridView x:Name="gridView1" SelectionChanged="GridView_SelectionChanged">
    <x:String>Item 1</x:String>
    <x:String>Item 2</x:String>
</GridView>

Reference: GridView
Design and how-to: Lists
Sample code: ListView sample
Items control
A control that presents a collection of items in a UI specified by a data template.

<ItemsControl/>

Reference: ItemsControl
List view
A control that presents a collection of items in a list that can scroll vertically.

<ListView x:Name="listView1" SelectionChanged="ListView_SelectionChanged">
    <x:String>Item 1</x:String>
    <x:String>Item 2</x:String>
</ListView>

Reference: ListView
Design and how-to: Lists
Sample code: ListView sample

Date and time controls


Calendar date picker
A control that lets a user select a date using a drop-down calendar display.

<CalendarDatePicker/>

Reference: CalendarDatePicker
Design and how-to: Calendar, date, and time controls
Calendar view
A configurable calendar display that lets a user select single or multiple dates.

<CalendarView/>

Reference: CalendarView
Design and how-to: Calendar, date, and time controls
Date picker
A control that lets a user select a date.

<DatePicker Header="Arrival Date"/>

Reference: DatePicker
Design and how-to: Calendar, date, and time controls
Time picker
A control that lets a user set a time value.
<TimePicker Header="Arrival Time"/>

Reference: TimePicker
Design and how-to: Calendar, date, and time controls

Flyouts
Context menu
See Menu flyout and Popup menu.
Flyout
Displays a message that requires user interaction. (Unlike a dialog, a flyout does not create a separate window, and
does not block other user interaction.)

<Flyout>
<StackPanel>
<TextBlock Text="All items will be removed. Do you want to continue?"/>
<Button Click="DeleteConfirmation_Click" Content="Yes, empty my cart"/>
</StackPanel>
</Flyout>

Reference: Flyout
Design and how-to: Context menus and dialogs
Menu flyout
Temporarily displays a list of commands or options related to what the user is currently doing.

<MenuFlyout>
<MenuFlyoutItem Text="Reset" Click="Reset_Click"/>
<MenuFlyoutSeparator/>
<ToggleMenuFlyoutItem Text="Shuffle"
IsChecked="{Binding IsShuffleEnabled, Mode=TwoWay}"/>
<ToggleMenuFlyoutItem Text="Repeat"
IsChecked="{Binding IsRepeatEnabled, Mode=TwoWay}"/>
</MenuFlyout>

Reference: MenuFlyout, MenuFlyoutItem, MenuFlyoutSeparator, ToggleMenuFlyoutItem


Design and how-to: Context menus and dialogs
Sample code: XAML Context Menu sample
Popup menu
A custom menu that presents commands that you specify.
Reference: PopupMenu
Design and how-to: Context menus and dialogs
Tooltip
A pop-up window that displays information for an element.

<Button Content="Button" Click="Button_Click"


ToolTipService.ToolTip="Click to perform action" />

Reference: ToolTip, ToolTipService


Design and how-to: Guidelines for tooltips

Images
Image
A control that presents an image.

<Image Source="Assets/Logo.png" />

Reference: Image
Design and how-to: Image and ImageBrush
Sample code: XAML images sample

Graphics and ink


InkCanvas
A control that receives and displays ink strokes.

<InkCanvas/>

Reference: InkCanvas
Shapes
Various retained-mode graphical objects that can be presented, such as ellipses, rectangles, lines, and Bezier paths.
<Ellipse/>
<Path/>
<Rectangle/>

Reference: Shapes
How to: Drawing shapes
Sample code: XAML vector-based drawing sample

Layout controls
Border
A container control that draws a border, background, or both, around another object.

<Border BorderBrush="Blue" BorderThickness="4"
    Height="108" Width="64"
    Padding="8" CornerRadius="4">
    <Canvas>
        <Rectangle Fill="Yellow"/>
        <Rectangle Fill="Green" Margin="0,44"/>
    </Canvas>
</Border>

Reference: Border
Canvas
A layout panel that supports the absolute positioning of child elements relative to the top left corner of the canvas.

<Canvas Width="120" Height="120">


<Rectangle Fill="Red"/>
<Rectangle Fill="Blue" Canvas.Left="20" Canvas.Top="20"/>
<Rectangle Fill="Green" Canvas.Left="40" Canvas.Top="40"/>
<Rectangle Fill="Yellow" Canvas.Left="60" Canvas.Top="60"/>
</Canvas>

Reference: Canvas
Grid
A layout panel that supports the arranging of child elements in rows and columns.
<Grid>
<Grid.RowDefinitions>
<RowDefinition Height="50"/>
<RowDefinition Height="50"/>
</Grid.RowDefinitions>
<Grid.ColumnDefinitions>
<ColumnDefinition Width="50"/>
<ColumnDefinition Width="50"/>
</Grid.ColumnDefinitions>
<Rectangle Fill="Red"/>
<Rectangle Fill="Blue" Grid.Row="1"/>
<Rectangle Fill="Green" Grid.Column="1"/>
<Rectangle Fill="Yellow" Grid.Row="1" Grid.Column="1"/>
</Grid>

Reference: Grid
Panning scroll viewer
See Scroll viewer.
RelativePanel
A panel that lets you position and align child objects in relation to each other or the parent panel.

<RelativePanel>
<TextBox x:Name="textBox1" RelativePanel.AlignLeftWithPanel="True"/>
<Button Content="Submit" RelativePanel.Below="textBox1"/>
</RelativePanel>

Reference: RelativePanel
Scroll bar
See scroll viewer. (ScrollBar is an element of ScrollViewer. You don't typically use it as a stand-alone control.)
Reference: ScrollBar
Scroll viewer
A container control that lets the user pan and zoom its content.
<ScrollViewer ZoomMode="Enabled" MaxZoomFactor="10"
HorizontalScrollMode="Enabled"
HorizontalScrollBarVisibility="Visible"
Height="200" Width="200">
<Image Source="Assets/Logo.png" Height="400" Width="400"/>
</ScrollViewer>

Reference: ScrollViewer
Design and how-to: Scroll and panning controls guide
Sample code: XAML scrolling, panning and zooming sample
Stack panel
A layout panel that arranges child elements into a single line that can be oriented horizontally or vertically.

<StackPanel>
<Rectangle Fill="Red"/>
<Rectangle Fill="Blue"/>
<Rectangle Fill="Green"/>
<Rectangle Fill="Yellow"/>
</StackPanel>

Reference: StackPanel
VariableSizedWrapGrid
A layout panel that supports the arranging of child elements in rows and columns. Each child element can span
multiple rows and columns.

<VariableSizedWrapGrid MaximumRowsOrColumns="3" ItemHeight="44" ItemWidth="44">
    <Rectangle Fill="Red"/>
    <Rectangle Fill="Blue" Height="80"
        VariableSizedWrapGrid.RowSpan="2"/>
    <Rectangle Fill="Green" Width="80"
        VariableSizedWrapGrid.ColumnSpan="2"/>
    <Rectangle Fill="Yellow" Height="80" Width="80"
        VariableSizedWrapGrid.RowSpan="2"
        VariableSizedWrapGrid.ColumnSpan="2"/>
</VariableSizedWrapGrid>

Reference: VariableSizedWrapGrid
Viewbox
A container control that scales its content to a specified size.

<Viewbox MaxWidth="25" MaxHeight="25">


<Image Source="Assets/Logo.png"/>
</Viewbox>
<Viewbox MaxWidth="75" MaxHeight="75">
<Image Source="Assets/Logo.png"/>
</Viewbox>
<Viewbox MaxWidth="150" MaxHeight="150">
<Image Source="Assets/Logo.png"/>
</Viewbox>

Reference: Viewbox
Zooming scroll viewer
See Scroll viewer.

Media controls
Audio
See Media element.
Media element
A control that plays audio and video content.

<MediaElement x:Name="myMediaElement"/>

Reference: MediaElement
Design and how-to: Media element control guide
MediaTransportControls
A control that provides playback controls for a MediaElement.

<MediaTransportControls MediaElement="myMediaElement"/>

Reference: MediaTransportControls
Design and how-to: Media element control guide
Sample code: Media Transport Controls sample
Video
See Media element.

Navigation
Hub
A container control that lets the user view and navigate to different sections of content.

<Hub>
<HubSection>
<!--- hub section content -->
</HubSection>
<HubSection>
<!--- hub section content -->
</HubSection>
</Hub>

Reference: Hub
Design and how-to: Hub control guide
Sample code: XAML Hub control sample
Pivot
A full-screen container and navigation model that also provides a quick way to move between different pivots
(views or filters), typically in the same set of data.
The Pivot control can be styled to have a "tab" layout.
Reference: Pivot
Design and how-to: Tabs and pivot control guide
Sample code: Pivot sample
Semantic zoom
A container control that lets the user zoom between two views of a collection of items.

<SemanticZoom>
<ZoomedInView>
<GridView></GridView>
</ZoomedInView>
<ZoomedOutView>
<GridView></GridView>
</ZoomedOutView>
</SemanticZoom>

Reference: SemanticZoom
Design and how-to: Semantic zoom control guide
Sample code: XAML GridView grouping and SemanticZoom sample
SplitView
A container control with two views: one for the main content and another that is typically used for a
navigation menu.

<SplitView>
<SplitView.Pane>
<!-- Menu content -->
</SplitView.Pane>
<SplitView.Content>
<!-- Main content -->
</SplitView.Content>
</SplitView>

Reference: SplitView
Design and how-to: Split view control guide
Web view
A container control that hosts web content.

<WebView x:Name="webView1" Source="http://dev.windows.com"
    Height="400" Width="800"/>

Reference: WebView
Design and how-to: Guidelines for Web views
Sample code: XAML WebView control sample

Progress controls
Progress bar
A control that indicates progress by displaying a bar.

A progress bar that shows a specific value.

<ProgressBar x:Name="progressBar1" Value="50" Width="100"/>


A progress bar that shows indeterminate progress.

<ProgressBar x:Name="indeterminateProgressBar1" IsIndeterminate="True" Width="100"/>

Reference: ProgressBar
Design and how-to: Progress controls guide
Progress ring
A control that indicates indeterminate progress by displaying a ring.

<ProgressRing x:Name="progressRing1" IsActive="True"/>

Reference: ProgressRing
Design and how-to: Progress controls guide

Text controls
Auto suggest box
A text input box that provides suggested text as the user types.

Reference: AutoSuggestBox
Design and how-to: Text controls, Auto suggest box control guide
Sample code: AutoSuggestBox migration sample
Multi-line text box
See Text box.
Password box
A control for entering passwords.

<PasswordBox x:Name="passwordBox1"
PasswordChanged="PasswordBox_PasswordChanged" />

Reference: PasswordBox
Design and how-to: Text controls, Password box control guide
Sample code: XAML text display sample, XAML text editing sample
Rich edit box
A control that lets a user edit rich text documents with content like formatted text, hyperlinks, and images.

<RichEditBox />

Reference: RichEditBox
Design and how-to: Text controls, Rich edit box control guide
Sample code: XAML text sample
Search box
See Auto suggest box.
Single-line text box
See Text box.
Static text/paragraph
See Text block.
Text block
A control that displays text.

<TextBlock x:Name="textBlock1" Text="I am a TextBlock"/>

Reference: TextBlock, RichTextBlock


Design and how-to: Text controls, Text block control guide, Rich text block control guide
Sample code: XAML text sample
Text box
A single-line or multi-line plain text field.

<TextBox x:Name="textBox1" Text="I am a TextBox"


TextChanged="TextBox_TextChanged"/>

Reference: TextBox
Design and how-to: Text controls, Text box control guide
Sample code: XAML text sample

Selection controls
Check box
A control that a user can select or clear.
<CheckBox x:Name="checkbox1" Content="CheckBox"
Checked="CheckBox_Checked"/>

Reference: CheckBox
Design and how-to: Check box control guide
Combo box
A drop-down list of items a user can select from.

<ComboBox x:Name="comboBox1" Width="100"
    SelectionChanged="ComboBox_SelectionChanged">
    <x:String>Item 1</x:String>
    <x:String>Item 2</x:String>
    <x:String>Item 3</x:String>
</ComboBox>

Reference: ComboBox
Design and how-to: Lists
List box
A control that presents an inline list of items that the user can select from.

<ListBox x:Name="listBox1" Width="100"
    SelectionChanged="ListBox_SelectionChanged">
    <x:String>Item 1</x:String>
    <x:String>Item 2</x:String>
    <x:String>Item 3</x:String>
</ListBox>

Reference: ListBox
Design and how-to: Lists
Radio button
A control that allows a user to select a single option from a group of options. When radio buttons are grouped
together, they are mutually exclusive.

<RadioButton x:Name="radioButton1" Content="RadioButton 1" GroupName="Group1"


Checked="RadioButton_Checked"/>
<RadioButton x:Name="radioButton2" Content="RadioButton 2" GroupName="Group1"
Checked="RadioButton_Checked" IsChecked="True"/>
<RadioButton x:Name="radioButton3" Content="RadioButton 3" GroupName="Group1"
Checked="RadioButton_Checked"/>

Reference: RadioButton
Design and how-to: Radio button control guide
Slider
A control that lets the user select from a range of values by moving a Thumb control along a track.

<Slider x:Name="slider1" Width="100" ValueChanged="Slider_ValueChanged" />

Reference: Slider
Design and how-to: Slider control guide
Toggle button
A button that can be toggled between 2 states.

<ToggleButton x:Name="toggleButton1" Content="Button"
    Checked="ToggleButton_Checked"/>

Reference: ToggleButton
Design and how-to: Toggle control guide
Toggle switch
A switch that can be toggled between 2 states.
<ToggleSwitch x:Name="toggleSwitch1" Header="ToggleSwitch"
OnContent="On" OffContent="Off"
Toggled="ToggleSwitch_Toggled"/>

Reference: ToggleSwitch
Design and how-to: Toggle control guide
App bar and command bar

Command bars (also called "app bars") provide users with easy access to your app's most common tasks, and
can be used to show commands or options that are specific to the user's context, such as a photo selection or
drawing mode. They can also be used for navigation among app pages or between app sections. Command bars
can be used with any navigation pattern.

Important APIs
CommandBar
AppBarButton
AppBarToggleButton
AppBarSeparator

Is this the right control?


The CommandBar control is a general-purpose, flexible, light-weight control that can display both complex
content, such as images or text blocks, and simple commands such as AppBarButton, AppBarToggleButton,
and AppBarSeparator controls.
XAML provides both the AppBar control and the CommandBar control. You should use the AppBar only when
you are upgrading a Universal Windows 8 app that uses the AppBar, and need to minimize changes. For new
apps in Windows 10, we recommend using the CommandBar control instead. This document assumes you are
using the CommandBar control.

Examples
An expanded command bar in the Microsoft Photos app.

A command bar in the Outlook Calendar on Windows Phone.


Anatomy
By default, the command bar shows a row of icon buttons and an optional "see more" button, which is
represented by an ellipsis [•••]. Here's the command bar created by the example code shown later. It's shown in
its closed compact state.

The command bar can also be shown in a closed minimal state that looks like this. See the Open and closed
states section for more info.

Here's the same command bar in its open state. The labels identify the main parts of the control.

The command bar is divided into 4 main areas:

The "see more" [•••] button is shown on the right of the bar. Pressing the "see more" [•••] button has 2
effects: it reveals the labels on the primary command buttons, and it opens the overflow menu if any
secondary commands are present. In the newest SDK, the button is not visible when there are no
secondary commands and no hidden labels. The OverflowButtonVisibility property allows apps to change
this default auto-hide behavior.
The content area is aligned to the left side of the bar. It is shown if the Content property is populated.
The primary command area is aligned to the right side of the bar, next to the "see more" [•••] button. It is
shown if the PrimaryCommands property is populated.
The overflow menu is shown only when the command bar is open and the SecondaryCommands property
is populated. The new dynamic overflow behavior automatically moves primary commands into the
SecondaryCommands area when space is limited.

The layout is reversed when the FlowDirection is RightToLeft.

Create a command bar


This example creates the command bar shown previously.

<CommandBar>
<AppBarToggleButton Icon="Shuffle" Label="Shuffle" Click="AppBarButton_Click" />
<AppBarToggleButton Icon="RepeatAll" Label="Repeat" Click="AppBarButton_Click"/>
<AppBarSeparator/>
<AppBarButton Icon="Back" Label="Back" Click="AppBarButton_Click"/>
<AppBarButton Icon="Stop" Label="Stop" Click="AppBarButton_Click"/>
<AppBarButton Icon="Play" Label="Play" Click="AppBarButton_Click"/>
<AppBarButton Icon="Forward" Label="Forward" Click="AppBarButton_Click"/>

<CommandBar.SecondaryCommands>
<AppBarButton Icon="Like" Label="Like" Click="AppBarButton_Click"/>
<AppBarButton Icon="Dislike" Label="Dislike" Click="AppBarButton_Click"/>
</CommandBar.SecondaryCommands>

<CommandBar.Content>
<TextBlock Text="Now playing..." Margin="12,14"/>
</CommandBar.Content>
</CommandBar>

Commands and content


The CommandBar control has 3 properties you can use to add commands and content: PrimaryCommands,
SecondaryCommands, and Content.
Primary actions and overflow
By default, items you add to the command bar are added to the PrimaryCommands collection. These
commands are shown to the left of the "see more" [•••] button, in what we call the action space. Place the most
important commands, the ones that you want to remain visible in the bar, in the action space. On the smallest
screens (320 epx width), a maximum of 4 items will fit in the command bar's action space.
You can add commands to the SecondaryCommands collection, and these items are shown in the overflow
area. Place less important commands within the overflow area.
The default overflow area is styled to be distinct from the bar. You can adjust the styling by setting the
CommandBarOverflowPresenterStyle property to a Style that targets the
CommandBarOverflowPresenter.
You can programmatically move commands between the PrimaryCommands and SecondaryCommands as
needed.
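
As a sketch, demoting the right-most primary command into the overflow area might look like this; commandBar is a hypothetical CommandBar reference:

// Move the last primary command into the overflow (secondary) area.
if (commandBar.PrimaryCommands.Count > 0)
{
    ICommandBarElement command =
        commandBar.PrimaryCommands[commandBar.PrimaryCommands.Count - 1];
    commandBar.PrimaryCommands.Remove(command);
    commandBar.SecondaryCommands.Add(command);
}
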
App bar buttons
Both the PrimaryCommands and SecondaryCommands can be populated only with AppBarButton,
AppBarToggleButton, and AppBarSeparator command elements. These controls are optimized for use in a
command bar, and their appearance changes depending on whether the control is used in the action space or
overflow area.
The app bar button controls are characterized by an icon and associated label. They have two sizes: normal and
compact. By default, the text label is shown. When the IsCompact property is set to true, the text label is hidden.
When used in a CommandBar control, the command bar overwrites the button's IsCompact property
automatically as the command bar is opened and closed.
To position app bar button labels to the right of their icons, apps can use CommandBar's new
DefaultLabelPosition property.

<CommandBar DefaultLabelPosition="Right">
<AppBarToggleButton Icon="Shuffle" Label="Shuffle"/>
<AppBarToggleButton Icon="RepeatAll" Label="Repeat"/>
</CommandBar>

Here is what the code snippet above looks like when drawn by an app.

Individual app bar buttons cannot move their label position; this must be done on the command bar as a whole.
App bar buttons can specify that their labels never show by setting the new LabelPosition property to
Collapsed. We recommend limiting the use of this setting to universally recognizable iconography such as '+'.
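
In code, that setting is a single property; addButton is a hypothetical AppBarButton:

// Hide the label for a universally recognizable icon such as '+'.
addButton.LabelPosition = CommandBarLabelPosition.Collapsed;
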
When you place an app bar button in the overflow menu (SecondaryCommands), it's shown as text only. The
LabelPosition of app bar buttons in the overflow will be ignored. Here's the same app bar toggle button shown
in the action space as a primary command (top), and in the overflow area as a secondary command (bottom).

If there is a command that would appear consistently across pages, it's best to keep that command in a
consistent location.
We recommend placing Accept, Yes, and OK commands to the left of Reject, No, and Cancel. Consistency
gives users the confidence to move around the system and helps them transfer their knowledge of app
navigation from app to app.
Button labels
We recommend keeping app bar button labels short, preferably a single word. Longer labels positioned below
an app bar button's icon will wrap to multiple lines, increasing the overall height of the opened command
bar. You can include a soft-hyphen character (0x00AD) in the text for a label to hint at the character boundary
where a word break should occur. In XAML, you express this using an escape sequence, like this:

<AppBarButton Icon="Back" Label="Areally&#x00AD;longlabel"/>

When the label wraps at the hinted location, it looks like this.
Other content
You can add any XAML elements to the content area by setting the Content property. If you want to add more
than one element, you need to place them in a panel container and make the panel the single child of the
Content property.
When there are both primary commands and content, the primary commands take precedence and may cause
the content to be clipped.
When the ClosedDisplayMode is Compact, the content can be clipped if it is larger than the compact size of
the command bar. You should handle the Opening and Closed events to show or hide parts of the UI in the
content area so that they aren't clipped. See the Open and closed states section for more info.

Open and closed states


The command bar can be open or closed. A user can switch between these states by pressing the "see more"
[•••] button. You can switch between them programmatically by setting the IsOpen property. When open, the
primary command buttons are shown with text labels and the overflow menu is open if secondary commands
are present, as shown previously.
You can use the Opening, Opened, Closing, and Closed events to respond to the command bar being opened
or closed.
The Opening and Closing events occur before the transition animation begins.
The Opened and Closed events occur after the transition completes.
In this example, the Opening and Closing events are used to change the opacity of the command bar. When the
command bar is closed, it's semi-transparent so the app background shows through. When the command bar is
opened, the command bar is made opaque so the user can focus on the commands.

<CommandBar Opening="CommandBar_Opening"
Closing="CommandBar_Closing">
<AppBarButton Icon="Accept" Label="Accept"/>
<AppBarButton Icon="Edit" Label="Edit"/>
<AppBarButton Icon="Save" Label="Save"/>
<AppBarButton Icon="Cancel" Label="Cancel"/>
</CommandBar>

private void CommandBar_Opening(object sender, object e)
{
    CommandBar cb = sender as CommandBar;
    if (cb != null) cb.Background.Opacity = 1.0;
}

private void CommandBar_Closing(object sender, object e)
{
    CommandBar cb = sender as CommandBar;
    if (cb != null) cb.Background.Opacity = 0.5;
}

ClosedDisplayMode
You can control how the command bar is shown in its closed state by setting the ClosedDisplayMode
property. There are 3 closed display modes to choose from:
Compact: The default mode. Shows content, primary command icons without labels, and the "see more"
[•••] button.
Minimal: Shows only a thin bar that acts as the "see more" [•••] button. The user can press anywhere on the
bar to open it.
Hidden: The command bar is not shown when it's closed. This can be useful for showing contextual
commands with an inline command bar. In this case, you must open the command bar programmatically by
setting the IsOpen property or changing the ClosedDisplayMode to Minimal or Compact.
Here, a command bar is used to hold simple formatting commands for a RichEditBox. When the edit box doesn't
have focus, the formatting commands can be distracting, so they're hidden. When the edit box is being used, the
command bar's ClosedDisplayMode is changed to Compact so the formatting commands are visible.

<StackPanel Width="300"
GotFocus="EditStackPanel_GotFocus"
LostFocus="EditStackPanel_LostFocus">
<CommandBar x:Name="FormattingCommandBar" ClosedDisplayMode="Hidden">
<AppBarButton Icon="Bold" Label="Bold" ToolTipService.ToolTip="Bold"/>
<AppBarButton Icon="Italic" Label="Italic" ToolTipService.ToolTip="Italic"/>
<AppBarButton Icon="Underline" Label="Underline" ToolTipService.ToolTip="Underline"/>
</CommandBar>
<RichEditBox Height="200"/>
</StackPanel>

private void EditStackPanel_GotFocus(object sender, RoutedEventArgs e)
{
    FormattingCommandBar.ClosedDisplayMode = AppBarClosedDisplayMode.Compact;
}

private void EditStackPanel_LostFocus(object sender, RoutedEventArgs e)
{
    FormattingCommandBar.ClosedDisplayMode = AppBarClosedDisplayMode.Hidden;
}

Note: The implementation of the editing commands is beyond the scope of this example. For more info, see
the RichEditBox article.

Although the Minimal and Hidden modes are useful in some situations, keep in mind that hiding all actions
could confuse users.
Changing the ClosedDisplayMode to provide more or less of a hint to the user affects the layout of surrounding
elements. In contrast, when the CommandBar transitions between closed and open, it does not affect the layout
of other elements.
IsSticky
After opening the command bar, if the user interacts with the app anywhere outside of the control then by
default the overflow menu is dismissed and the labels are hidden. Closing it in this way is called light dismiss.
You can control how the bar is dismissed by setting the IsSticky property. When the bar is sticky (IsSticky="true"),
it's not closed by a light dismiss gesture. The bar remains open until the user presses the "see more" [•••]
button or selects an item from the overflow menu. We recommend avoiding sticky command bars because they
don't conform to users' expectations around light dismiss.
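
If you do need a sticky bar, the setting is a single property; commandBar is a hypothetical reference:

// Opt out of light dismiss; the bar stays open until explicitly closed.
commandBar.IsSticky = true;
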

Do's and don'ts


Placement
Command bars can be placed at the top of the app window, at the bottom of the app window, and inline.
For small handheld devices, we recommend positioning command bars at the bottom of the screen for easy
reachability.
For devices with larger screens, if you're placing just one command bar, we recommend placing it near the
top of the window. Use the DiagonalSizeInInches API to determine physical screen size.
Command bars can be placed in the following screen regions on single-view screens (left example) and on
multi-view screens (right example). Inline command bars can be placed anywhere in the action space.

Touch devices: If the command bar must remain visible to a user when the touch keyboard, or Soft Input
Panel (SIP), appears then you can assign the command bar to the BottomAppBar property of a Page and it
will move to remain visible when the SIP is present. Otherwise, you should place the command bar inline
and positioned relative to your app content.
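
As a sketch, the assignment inside a Page subclass might look like this; myCommandBar is a hypothetical CommandBar field:

// Attach the command bar to the page so it remains visible above the SIP.
this.BottomAppBar = myCommandBar;
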

Actions
Prioritize the actions that go in the command bar based on their visibility.
Place the most important commands, the ones that you want to remain visible in the bar, in the first few slots
of the action space. On the smallest screens (320 epx width), between 2 and 4 items will fit in the command bar's
action space, depending on other on-screen UI.
Place less-important commands later in the bar's action space or within the first few slots of the overflow
area. These commands will be visible when the bar has enough screen real estate, but will fall into the
overflow area's drop-down menu when there isn't enough room.
Place the least-important commands within the overflow area. These commands will always appear in the
drop-down menu.
If there is a command that would appear consistently across pages, it's best to keep that command in a
consistent location. We recommend placing Accept, Yes, and OK commands to the left of Reject, No, and
Cancel. Consistency gives users the confidence to move around the system and helps them transfer their
knowledge of app navigation from app to app.
Although you can place all actions within the overflow area so that only the "see more" [•••] button is visible on
the command bar, keep in mind that hiding all actions could confuse users.
Command bar flyouts
Consider logical groupings for the commands, such as placing Reply, Reply All, and Forward in a Respond
menu. While typically an app bar button activates a single command, an app bar button can be used to show a
MenuFlyout or Flyout with custom content.
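
For instance, a Respond group might be sketched like this; the names and grouping are illustrative:

// An app bar button that opens a menu flyout of related commands.
var respondButton = new AppBarButton
{
    Icon = new SymbolIcon(Symbol.MailReply),
    Label = "Respond"
};
var respondMenu = new MenuFlyout();
respondMenu.Items.Add(new MenuFlyoutItem { Text = "Reply" });
respondMenu.Items.Add(new MenuFlyoutItem { Text = "Reply all" });
respondMenu.Items.Add(new MenuFlyoutItem { Text = "Forward" });
respondButton.Flyout = respondMenu;
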

Overflow menu

The overflow menu is represented by the "see more" [•••] button, the visible entry point for the menu. It's on
the far right of the toolbar, adjacent to primary actions.
The overflow area is allocated for actions that are less frequently used.
Actions can come and go between the primary action space and the overflow menu at breakpoints. You can
also designate actions to always remain in the primary action space regardless of screen or app window size.
Infrequently used actions can remain in the overflow menu even when the app bar is expanded on larger
screens.

Adaptability
The same number of actions in the app bar should be visible in both portrait and landscape orientation,
which reduces the user's cognitive load. The number of actions available should be determined by the
device's width in portrait orientation.
On small screens that are likely to be used one-handed, app bars should be positioned near the bottom of
the screen.
On larger screens, placing app bars closer to the top of the window makes them more noticeable and
discoverable.
By targeting breakpoints, you can move actions in and out of the menu as the window size changes.
By targeting screen diagonal, you can modify app bar position based on device screen size.
Consider moving labels to the right of app bar button icons to improve legibility. Labels on the bottom
require users to open the command bar to reveal labels, while labels on the right are visible even when the
command bar is closed. This optimization works well on larger windows.

Get the sample code


Commanding sample
XAML UI basics sample

Related articles
Command design basics for UWP apps
CommandBar class
Auto-suggest box

Use an AutoSuggestBox to provide a list of suggestions for a user to select from as they type.

Important APIs
AutoSuggestBox class
TextChanged event
SuggestionChosen event
QuerySubmitted event

Is this the right control?


If you'd like a simple, customizable control that allows text search with a list of suggestions, then choose an auto-
suggest box.
For more info about choosing the right text control, see the Text controls article.

Examples
An auto suggest box in the Groove Music app.
Anatomy
The entry point for the auto-suggest box consists of an optional header and a text box with optional hint text:

The auto-suggest results list populates automatically once the user starts to enter text. The results list can appear
above or below the text entry box. A "clear all" button appears:

Create an auto-suggest box


To use an AutoSuggestBox, you need to respond to 3 user actions.
Text changed - When the user enters text, update the suggestion list.
Suggestion chosen - When the user chooses a suggestion in the suggestion list, update the text box.
Query submitted - When the user submits a query, show the query results.
Text changed
The TextChanged event occurs whenever the content of the text box is updated. Use the event args Reason
property to determine whether the change was due to user input. If the change reason is UserInput, filter your
data based on the input. Then, set the filtered data as the ItemsSource of the AutoSuggestBox to update the
suggestion list.
To control how items are displayed in the suggestion list, you can use DisplayMemberPath or ItemTemplate.
To display the text of a single property of your data item, set the DisplayMemberPath property to choose which
property from your object to display in the suggestion list.
To define a custom look for each item in the list, use the ItemTemplate property.
Suggestion chosen
When a user navigates through the suggestion list using the keyboard, you need to update the text in the text box
to match.
You can set the TextMemberPath property to choose which property from your data object to display in the text
box. If you specify a TextMemberPath, the text box is updated automatically. You should typically specify the same
value for DisplayMemberPath and TextMemberPath so the text is the same in the suggestion list and the text box.
If you need to show more than a simple property, handle the SuggestionChosen event to populate the text box
with custom text based on the selected item.
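
Here's a minimal sketch of those properties working together. It assumes a hypothetical Contact class with a Name property, an allContacts collection defined elsewhere, and a using System.Linq directive:

// Show Contact.Name in both the suggestion list and the text box.
suggestBox.DisplayMemberPath = "Name";
suggestBox.TextMemberPath = "Name";
suggestBox.TextChanged += (sender, args) =>
{
    if (args.Reason == AutoSuggestionBoxTextChangeReason.UserInput)
    {
        // Filter the backing collection on the typed text.
        sender.ItemsSource = allContacts
            .Where(c => c.Name.StartsWith(sender.Text, StringComparison.OrdinalIgnoreCase))
            .ToList();
    }
};
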
Query submitted
Handle the QuerySubmitted event to perform a query action appropriate to your app and show the result to the
user.
The QuerySubmitted event occurs when a user commits a query string. The user can commit a query in one of
these ways:
While the focus is in the text box, press Enter or click the query icon. The event args ChosenSuggestion
property is null.
While the focus is in the suggestion list, press Enter, click, or tap an item. The event args ChosenSuggestion
property contains the item that was selected from the list.
In all cases, the event args QueryText property contains the text from the text box.

Use AutoSuggestBox for search


Use an AutoSuggestBox to provide a list of suggestions for a user to select from as they type.
By default, the text entry box doesn't have a query button shown. You can set the QueryIcon property to add a
button with the specified icon on the right side of the text box. For example, to make the AutoSuggestBox look like
a typical search box, add a find icon, like this.

<AutoSuggestBox QueryIcon="Find"/>

Here's an AutoSuggestBox with a 'find' icon.

Get the sample code


To see complete working examples of AutoSuggestBox, see the AutoSuggestBox sample and XAML UI Basics
sample.
Here is a simple AutoSuggestBox with the required event handlers.

<AutoSuggestBox PlaceholderText="Search" QueryIcon="Find" Width="200"
    TextChanged="AutoSuggestBox_TextChanged"
    QuerySubmitted="AutoSuggestBox_QuerySubmitted"
    SuggestionChosen="AutoSuggestBox_SuggestionChosen"/>

private void AutoSuggestBox_TextChanged(AutoSuggestBox sender, AutoSuggestBoxTextChangedEventArgs args)
{
// Only get results when it was a user typing,
// otherwise assume the value got filled in by TextMemberPath
// or the handler for SuggestionChosen.
if (args.Reason == AutoSuggestionBoxTextChangeReason.UserInput)
{
//Set the ItemsSource to be your filtered dataset
//sender.ItemsSource = dataset;
}
}

private void AutoSuggestBox_SuggestionChosen(AutoSuggestBox sender, AutoSuggestBoxSuggestionChosenEventArgs args)
{
// Set sender.Text. You can use args.SelectedItem to build your text string.
}

private void AutoSuggestBox_QuerySubmitted(AutoSuggestBox sender, AutoSuggestBoxQuerySubmittedEventArgs args)
{
if (args.ChosenSuggestion != null)
{
// User selected an item from the suggestion list, take an action on it here.
}
else
{
// Use args.QueryText to determine what to do.
}
}

Do's and don'ts


When using the auto-suggest box to perform searches and no search results exist for the entered text,
display a single-line "No results" message as the result so that users know their search request executed:

Related articles
Text controls
Spell checking
Search
TextBox class
Windows.UI.Xaml.Controls PasswordBox class
String.Length property
Buttons

A button gives the user a way to trigger an immediate action.

Important APIs
Button class
RepeatButton class
Click event

Is this the right control?


A button lets the user initiate an immediate action, such as submitting a form.
Don't use a button when the action is to navigate to another page; use a link instead. See Hyperlinks for more info.

Exception: For wizard navigation, use buttons labeled "Back" and "Next". For other types of backwards
navigation or navigation to an upper level, use a back button.

Example
This example uses two buttons, Close all and Cancel, in a dialog in the Microsoft Edge browser.

Create a button
This example shows a button that responds to a click.
Create the button in XAML.

<Button Content="Submit" Click="SubmitButton_Click"/>

Or create the button in code.


Button submitButton = new Button();
submitButton.Content = "Submit";
submitButton.Click += SubmitButton_Click;

// Add the button to a parent container in the visual tree.
stackPanel1.Children.Add(submitButton);

Handle the Click event.

private async void SubmitButton_Click(object sender, RoutedEventArgs e)
{
// Call app specific code to submit form. For example:
// form.Submit();
Windows.UI.Popups.MessageDialog messageDialog =
new Windows.UI.Popups.MessageDialog("Thank you for your submission.");
await messageDialog.ShowAsync();
}

Button interaction
When you tap a Button with a finger or stylus, or press a left mouse button while the pointer is over it, the button
raises the Click event. If a button has keyboard focus, pressing the Enter key or the Spacebar key also raises the
Click event.
You generally can't handle low-level PointerPressed events on a Button because it has the Click behavior instead.
For more info, see Events and routed events overview.
You can change how a button raises the Click event by changing the ClickMode property. The default ClickMode
value is Release. If ClickMode is Hover, the Click event can't be raised with the keyboard or touch.
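
For example, a sketch that reuses the submitButton created earlier:

// Raise Click as soon as the button is pressed, instead of on release.
submitButton.ClickMode = ClickMode.Press;
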
Button content
Button is a ContentControl. Its XAML content property is Content, which enables a syntax like this for XAML:
<Button>A button's content</Button>. You can set any object as the button's content. If the content is a UIElement, it is
rendered in the button. If the content is another type of object, its string representation is shown in the button.
Here, a StackPanel that contains an image of an orange and text is set as the content of a button.

<Button Click="Button_Click"
Background="#FF0D6AA3"
Height="100" Width="80">
<StackPanel>
<Image Source="Assets/Slices.png" Height="62"/>
<TextBlock Text="Orange" Foreground="White"
HorizontalAlignment="Center"/>
</StackPanel>
</Button>

The button looks like this.

Create a repeat button


A RepeatButton is a button that raises Click events repeatedly from the time it's pressed until it's released. Set
the Delay property to specify the time that the RepeatButton waits after it is pressed before it starts repeating the
click action. Set the Interval property to specify the time between repetitions of the click action. Times for both
properties are specified in milliseconds.
The following example shows two RepeatButton controls whose respective Click events are used to increase and
decrease the value shown in a text block.

<StackPanel>
<RepeatButton Width="100" Delay="500" Interval="100" Click="Increase_Click">Increase</RepeatButton>
<RepeatButton Width="100" Delay="500" Interval="100" Click="Decrease_Click">Decrease</RepeatButton>
<TextBlock x:Name="clickTextBlock" Text="Number of Clicks:" />
</StackPanel>

private static int _clicks = 0;

private void Increase_Click(object sender, RoutedEventArgs e)
{
_clicks += 1;
clickTextBlock.Text = "Number of Clicks: " + _clicks;
}

private void Decrease_Click(object sender, RoutedEventArgs e)
{
if(_clicks > 0)
{
_clicks -= 1;
clickTextBlock.Text = "Number of Clicks: " + _clicks;
}
}

Recommendations
Make sure the purpose and state of a button are clear to the user.
Use a concise, specific, self-explanatory text that clearly describes the action that the button performs. Usually
button text content is a single word, a verb.
When there are multiple buttons for the same decision (such as in a confirmation dialog), present the
commit buttons in this order:
OK/[Do it]/Yes
[Don't do it]/No
Cancel
(where [Do it] and [Don't do it] are specific responses to the main instruction.)
If the button's text content is dynamic, for example, it is localized, consider how the button will resize and
what will happen to controls around it.
For command buttons with text content, use a minimum button width.
Avoid narrow, short, or tall command buttons with text content.
Use the default font unless your brand guidelines tell you to use something different.
For an action that needs to be available across multiple pages within your app, instead of duplicating a button
on multiple pages, consider using a bottom app bar.
Expose only one or two buttons to the user at a time, for example, Accept and Cancel. If you need to expose
more actions to the user, consider using checkboxes or radio buttons from which the user can select actions,
with a single command button to trigger those actions.
Use the default command button to indicate the most common or recommended action.
Consider customizing your buttons. A button's shape is rectangular by default, but you can customize the
visuals that make up the button's appearance. A button's content is usually text (for example, Accept or Cancel),
but you could replace the text with an icon, or use an icon plus text.
Make sure that as the user interacts with a button, the button changes state and appearance to provide
feedback to the user. Normal, pressed, and disabled are examples of button states.
Trigger the button's action when the user taps or presses the button. Usually the action is triggered when the
user releases the button, but you also can set a button's action to trigger when a finger first presses it.
Don't use a command button to set state.
Don't change button text while the app is running; for example, don't change the text of a button that says
"Next" to "Continue".
Don't swap the default submit, reset, and button styles.
Don't put too much content inside a button. Make the content concise and easy to understand (nothing more
than a picture and some text).

Back buttons
The back button is a system-provided UI element that enables backward navigation through either the back stack
or navigation history of the user. You don't have to create your own back button, but you might have to do some
work to enable a good backwards navigation experience. For more info, see History and backwards navigation.
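As a rough sketch of the work involved (the complete pattern is covered in that article), you can show the system back button in the title bar and handle its BackRequested event:

// A sketch only: show the title-bar back button and navigate the root
// frame backward when the user presses it.
SystemNavigationManager.GetForCurrentView().AppViewBackButtonVisibility =
    AppViewBackButtonVisibility.Visible;
SystemNavigationManager.GetForCurrentView().BackRequested += (s, e) =>
{
    Frame rootFrame = Window.Current.Content as Frame;
    if (rootFrame != null && rootFrame.CanGoBack)
    {
        rootFrame.GoBack();
        e.Handled = true;
    }
};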

Get the sample code


XAML UI basics sample
See all of the XAML controls in an interactive format.

Related articles
Radio buttons
Toggle switches
Check boxes
Button class
Check boxes

A check box is used to select or deselect action items. It can be used for a single item or for a list of multiple items
that a user can choose from. The control has three selection states: unselected, selected, and indeterminate. Use
the indeterminate state when a collection of sub-choices has both unselected and selected states.

Important APIs
CheckBox class
Checked event
IsChecked property

Is this the right control?


Use a single check box for a binary yes/no choice, such as with a "Remember me?" login scenario or with a
terms of service agreement.

For a binary choice, the main difference between a check box and a toggle switch is that the check box is for
status and the toggle switch is for action. You can delay committing a check box interaction (as part of a form
submit, for example), while you should immediately commit a toggle switch interaction. Also, only check boxes
allow for multi-selection.
Use multiple check boxes for multi-select scenarios in which a user chooses one or more items from a group of
choices that are not mutually exclusive.
Create a group of check boxes when users can select any combination of options.

When options can be grouped, you can use an indeterminate check box to represent the whole group. Use the
check box's indeterminate state when a user selects some, but not all, sub-items in the group.
Both check box and radio button controls let the user select from a list of options. Check boxes let the user
select a combination of options. In contrast, radio buttons let the user make a single choice from mutually
exclusive options. When there is more than one option but only one can be selected, use a radio button instead.

Examples
A check box in a dialog in the Microsoft Edge browser.

Check boxes in the Alarms & Clock app in Windows.


Create a checkbox
To assign a label to the checkbox, set the Content property. The label displays next to the checkbox.
This XAML creates a single check box that is used to agree to terms of service before a form can be submitted.

<CheckBox x:Name="termsOfServiceCheckBox"
Content="I agree to the terms of service."/>

Here's the same check box created in code.

CheckBox checkBox1 = new CheckBox();


checkBox1.Content = "I agree to the terms of service.";

Bind to IsChecked
Use the IsChecked property to determine whether the check box is checked or cleared. You can bind the value of
the IsChecked property to another binary value. However, because IsChecked is a nullable boolean value, you
must use a value converter to bind it to a boolean value.
In this example, the IsChecked property of the terms of service check box is bound to the IsEnabled
property of a Submit button. The Submit button is enabled only if the terms of service are agreed to.

Note We only show the relevant code here. For more info about data binding and value converters, see Data
binding overview.
...
<Page.Resources>
<local:NullableBooleanToBooleanConverter x:Key="NullableBooleanToBooleanConverter"/>
</Page.Resources>

...

<StackPanel Grid.Column="2" Margin="40">


<CheckBox x:Name="termsOfServiceCheckBox" Content="I agree to the terms of service."/>
<Button Content="Submit"
IsEnabled="{x:Bind termsOfServiceCheckBox.IsChecked,
Converter={StaticResource NullableBooleanToBooleanConverter}, Mode=OneWay}"/>
</StackPanel>

public class NullableBooleanToBooleanConverter : IValueConverter


{
public object Convert(object value, Type targetType, object parameter, string language)
{
if (value is bool?)
{
return (bool)value;
}
return false;
}

public object ConvertBack(object value, Type targetType, object parameter, string language)
{
if (value is bool)
return (bool)value;
return false;
}
}

Handle Click and Checked events


To perform an action when the check box state changes, you can handle either the Click event, or the Checked
and Unchecked events.
The Click event occurs whenever the checked state changes. If you handle the Click event, use the IsChecked
property to determine the state of the check box.
The Checked and Unchecked events occur independently. If you handle these events, you should handle both of
them to respond to state changes in the check box.
In the following examples, we show handling the Click event, and the Checked and Unchecked events.
Multiple checkboxes can share the same event handler. This example creates four checkboxes for selecting pizza
toppings. The four checkboxes share the same Click event handler to update the list of selected toppings.
<StackPanel Margin="40">
<TextBlock Text="Pizza Toppings"/>
<CheckBox Content="Pepperoni" x:Name="pepperoniCheckbox"
Click="toppingsCheckbox_Click"/>
<CheckBox Content="Beef" x:Name="beefCheckbox"
Click="toppingsCheckbox_Click"/>
<CheckBox Content="Mushrooms" x:Name="mushroomsCheckbox"
Click="toppingsCheckbox_Click"/>
<CheckBox Content="Onions" x:Name="onionsCheckbox"
Click="toppingsCheckbox_Click"/>

<!-- Display the selected toppings. -->


<TextBlock Text="Toppings selected:"/>
<TextBlock x:Name="toppingsList"/>
</StackPanel>

Here's the event handler for the Click event. Every time a checkbox is clicked, it examines the checkboxes to see
which ones are checked and updates the list of selected toppings.

private void toppingsCheckbox_Click(object sender, RoutedEventArgs e)


{
string selectedToppingsText = string.Empty;
CheckBox[] checkboxes = new CheckBox[] { pepperoniCheckbox, beefCheckbox,
mushroomsCheckbox, onionsCheckbox };
foreach (CheckBox c in checkboxes)
{
if (c.IsChecked == true)
{
if (selectedToppingsText.Length > 0)
{
selectedToppingsText += ", ";
}
selectedToppingsText += c.Content;
}
}
toppingsList.Text = selectedToppingsText;
}

Use the indeterminate state


The CheckBox control inherits from ToggleButton and can have three states:

STATE PROPERTY VALUE

checked IsChecked true

unchecked IsChecked false

indeterminate IsChecked null

For the check box to report the indeterminate state, you must set the IsThreeState property to true.
When options can be grouped, you can use an indeterminate check box to represent the whole group. Use the
check box's indeterminate state when a user selects some, but not all, sub-items in the group.
In the following example, the "Select all" checkbox has its IsThreeState property set to true. The "Select all"
checkbox is checked if all child elements are checked, unchecked if all child elements are unchecked, and
indeterminate otherwise.
<StackPanel>
<CheckBox x:Name="OptionsAllCheckBox" Content="Select all" IsThreeState="True"
Checked="SelectAll_Checked" Unchecked="SelectAll_Unchecked"
Indeterminate="SelectAll_Indeterminate"/>
<CheckBox x:Name="Option1CheckBox" Content="Option 1" Margin="24,0,0,0"
Checked="Option_Checked" Unchecked="Option_Unchecked" />
<CheckBox x:Name="Option2CheckBox" Content="Option 2" Margin="24,0,0,0"
Checked="Option_Checked" Unchecked="Option_Unchecked" IsChecked="True"/>
<CheckBox x:Name="Option3CheckBox" Content="Option 3" Margin="24,0,0,0"
Checked="Option_Checked" Unchecked="Option_Unchecked" />
</StackPanel>
private void Option_Checked(object sender, RoutedEventArgs e)
{
SetCheckedState();
}

private void Option_Unchecked(object sender, RoutedEventArgs e)


{
SetCheckedState();
}

private void SelectAll_Checked(object sender, RoutedEventArgs e)


{
Option1CheckBox.IsChecked = Option2CheckBox.IsChecked = Option3CheckBox.IsChecked = true;
}

private void SelectAll_Unchecked(object sender, RoutedEventArgs e)


{
Option1CheckBox.IsChecked = Option2CheckBox.IsChecked = Option3CheckBox.IsChecked = false;
}

private void SelectAll_Indeterminate(object sender, RoutedEventArgs e)


{
// If the SelectAll box is checked (all options are selected),
// clicking the box will change it to its indeterminate state.
// Instead, we want to uncheck all the boxes,
// so we do this programmatically. The indeterminate state should
// only be set programmatically, not by the user.

if (Option1CheckBox.IsChecked == true &&


Option2CheckBox.IsChecked == true &&
Option3CheckBox.IsChecked == true)
{
// This will cause SelectAll_Unchecked to be executed, so
// we don't need to uncheck the other boxes here.
OptionsAllCheckBox.IsChecked = false;
}
}

private void SetCheckedState()


{
// Controls are null the first time this is called, so we just
// need to perform a null check on any one of the controls.
if (Option1CheckBox != null)
{
if (Option1CheckBox.IsChecked == true &&
Option2CheckBox.IsChecked == true &&
Option3CheckBox.IsChecked == true)
{
OptionsAllCheckBox.IsChecked = true;
}
else if (Option1CheckBox.IsChecked == false &&
Option2CheckBox.IsChecked == false &&
Option3CheckBox.IsChecked == false)
{
OptionsAllCheckBox.IsChecked = false;
}
else
{
// Set third state (indeterminate) by setting IsChecked to null.
OptionsAllCheckBox.IsChecked = null;
}
}
}

Do's and don'ts


Verify that the purpose and current state of the check box is clear.
Limit check box text content to no more than two lines.
Word the checkbox label as a statement that the check mark makes true and the absence of a check mark
makes false.
Use the default font unless your brand guidelines tell you to use another.
If the text content is dynamic, consider how the control will resize and what will happen to visuals around it.
If there are two or more mutually exclusive options from which to choose, consider using radio buttons.
Don't put two check box groups next to each other. Use group labels to separate the groups.
Don't use a check box as an on/off control or to perform a command; instead, use a toggle switch.
Don't use a check box to display other controls, such as a dialog box.
Use the indeterminate state to indicate that an option is set for some, but not all, sub-choices.
When using the indeterminate state, use subordinate check boxes to show which options are selected and which
are not. Design the UI so that the user can see the sub-choices.
Don't use the indeterminate state to represent a third state. The indeterminate state is used to indicate that
an option is set for some, but not all, sub-choices. So, don't allow users to set an indeterminate state
directly. For an example of what not to do, this check box uses the indeterminate state to indicate medium
spiciness:

Instead, use a radio button group that has three options: Not spicy, Spicy, and Extra spicy.

Related articles
CheckBox class
Radio buttons
Toggle switch
Calendar, date, and time controls

Date and time controls give you standard, localized ways to let a user view and set date and time values in your
app. This article provides design guidelines and helps you pick the right control.

Important APIs
CalendarView class
CalendarDatePicker class
DatePicker class
TimePicker class

Which date or time control should you use?


There are four date and time controls to choose from; the control you use depends on your scenario. Use this
info to pick the right control to use in your app.

Calendar view: Use to pick a single date or a range of dates from an always visible calendar.

Calendar date picker: Use to pick a single date from a contextual calendar.

Date picker: Use to pick a single known date when contextual info isn't important.

Time picker: Use to pick a single time value.

Calendar view
CalendarView lets a user view and interact with a calendar that they can navigate by month, year, or decade. A
user can select a single date or a range of dates. It doesn't have a picker surface and the calendar is always
visible.
The calendar view is made up of 3 separate views: the month view, year view, and decade view. By default, it
starts with the month view open, but you can specify any view as the startup view.

If you need to let a user select multiple dates, you must use a CalendarView.
If you need to let a user pick only a single date and don't need a calendar to be always visible, consider using
a CalendarDatePicker or DatePicker control.
Calendar date picker
CalendarDatePicker is a drop-down control that's optimized for picking a single date from a calendar view
where contextual information like the day of the week or fullness of the calendar is important. You can modify
the calendar to provide additional context or to limit available dates.
The entry point displays placeholder text if a date has not been set; otherwise, it displays the chosen date. When
the user selects the entry point, a calendar view expands for the user to make a date selection. The calendar view
overlays other UI; it doesn't push other UI out of the way.

Use a calendar date picker for things like choosing an appointment or departure date.
Date picker
The DatePicker control provides a standardized way to choose a specific date.
The entry point displays the chosen date, and when the user selects the entry point, a picker surface expands
vertically from the middle for the user to make a selection. The date picker overlays other UI; it doesn't push
other UI out of the way.
Use a date picker to let a user pick a known date, such as a date of birth, where the context of the calendar is
not important.
Time picker
The TimePicker is used to select a single time value for things like appointments or a departure time. It's a static
display that is set by the user or in code, but it doesn't update to display the current time.
The entry point displays the chosen time, and when the user selects the entry point, a picker surface expands
vertically from the middle for the user to make a selection. The time picker overlays other UI; it doesn't push
other UI out of the way.

Use a time picker to let a user pick a single time value.

Create a date or time control


See these articles for info and examples specific to each date and time control.
Calendar view
Calendar date picker
Date picker
Time Picker
Globalization
The XAML date controls support each of the calendar systems supported by Windows. These calendars are
specified in the Windows.Globalization.CalendarIdentifiers class. Each control uses the correct calendar for
your app's default language, or you can set the CalendarIdentifier property to use a specific calendar system.
The time picker control supports each of the clock systems specified in the
Windows.Globalization.ClockIdentifiers class. You can set the ClockIdentifier property to use either a 12-
hour clock or 24-hour clock. The type of the property is String, but you must use values that correspond to the
static string properties of the ClockIdentifiers class. These are: TwelveHour (the string "12HourClock") and
TwentyFourHour (the string "24HourClock"). "12HourClock" is the default value.
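For example, here's a minimal sketch of applying a specific calendar system and clock system in code (calendarView1 and timePicker1 are assumed names):

// Use the Japanese calendar system for the calendar view and a
// 24-hour clock for the time picker.
calendarView1.CalendarIdentifier = Windows.Globalization.CalendarIdentifiers.Japanese;
timePicker1.ClockIdentifier = Windows.Globalization.ClockIdentifiers.TwentyFourHour;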
DateTime and Calendar values
The date objects used in the XAML date and time controls have a different representation depending on your
programming language.
C# and Visual Basic use the System.DateTimeOffset structure that is part of .NET.
C++/CX uses the Windows::Foundation::DateTime structure.
A related concept is the Calendar class, which influences how dates are interpreted in context. All Windows
Runtime apps can use the Windows.Globalization.Calendar class. C# and Visual Basic apps can alternatively
use the System.Globalization.Calendar class, which has very similar functionality. (Windows Runtime apps
can use the base .NET Calendar class but not the specific implementations; for example, GregorianCalendar.)
.NET also supports a type named DateTime, which is implicitly convertible to a DateTimeOffset. So you might
see a "DateTime" type being used in .NET code that's used to set values that are really DateTimeOffset. For more
info on the difference between DateTime and DateTimeOffset, see Remarks in the DateTimeOffset class.

Note Properties that take date objects can't be set as a XAML attribute string, because the Windows
Runtime XAML parser doesn't have a conversion logic for converting strings to dates as
DateTime/DateTimeOffset objects. You typically set these values in code. Another possible technique is to
define a date that's available as a data object or in the data context, then set the property as a XAML attribute
that references a {Binding} markup extension expression that can access the date as data.
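To illustrate setting a date value in code, here's a minimal sketch (datePicker1 is an assumed name):

// DatePicker.Date is a DateTimeOffset. A .NET DateTime converts
// implicitly, so these two assignments are equivalent.
datePicker1.Date = new DateTimeOffset(new DateTime(2017, 6, 1));
datePicker1.Date = new DateTime(2017, 6, 1);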

Get the sample code


XAML UI basics sample

Related topics
For developers (XAML)
CalendarView class
CalendarDatePicker class
DatePicker class
TimePicker class
Calendar date picker

The calendar date picker is a drop-down control that's optimized for picking a single date from a calendar view
where contextual information like the day of the week or fullness of the calendar is important. You can modify the
calendar to provide additional context or to limit available dates.

Important APIs
CalendarDatePicker class
Date property
DateChanged event

Is this the right control?


Use a calendar date picker to let a user pick a single date from a contextual calendar view. Use it for things like
choosing an appointment or departure date.
To let a user pick a known date, such as a date of birth, where the context of the calendar is not important, consider
using a date picker.
For more info about choosing the right control, see the Date and time controls article.

Examples
The entry point displays placeholder text if a date has not been set; otherwise, it displays the chosen date. When
the user selects the entry point, a calendar view expands for the user to make a date selection. The calendar view
overlays other UI; it doesn't push other UI out of the way.

Create a date picker


<CalendarDatePicker x:Name="arrivalCalendarDatePicker" Header="Arrival date"/>

CalendarDatePicker arrivalCalendarDatePicker = new CalendarDatePicker();
arrivalCalendarDatePicker.Header = "Arrival date";

The resulting calendar date picker looks like this:

The calendar date picker has an internal CalendarView for picking a date. A subset of CalendarView properties,
like IsTodayHighlighted and FirstDayOfWeek, exist on CalendarDatePicker and are forwarded to the internal
CalendarView to let you modify it.
However, you can't change the SelectionMode of the internal CalendarView to allow multiple selection. If you
need to let a user pick multiple dates or need a calendar to be always visible, consider using a calendar view
instead of a calendar date picker. See the Calendar view article for more info on how you can modify the calendar
display.
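For example, here's a sketch of modifying the internal calendar through the forwarded properties, using the picker from the example above:

// These properties are forwarded to the internal CalendarView.
arrivalCalendarDatePicker.FirstDayOfWeek = Windows.Globalization.DayOfWeek.Monday;
arrivalCalendarDatePicker.IsTodayHighlighted = false;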
Selecting dates
Use the Date property to get or set the selected date. By default, the Date property is null. When a user selects a
date in the calendar view, this property is updated. A user can clear the date by clicking the selected date in the
calendar view to deselect it.
You can set the date in your code like this.

myCalendarDatePicker.Date = new DateTime(1977, 1, 5);

When you set the Date in code, the value is constrained by the MinDate and MaxDate properties.
If Date is smaller than MinDate, the value is set to MinDate.
If Date is greater than MaxDate, the value is set to MaxDate.
You can handle the DateChanged event to be notified when the Date value has changed.
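Here's a sketch of a DateChanged handler; the body is illustrative only.

private void arrivalCalendarDatePicker_DateChanged(CalendarDatePicker sender,
    CalendarDatePickerDateChangedEventArgs args)
{
    // args.NewDate is null when the user deselects the date.
    if (args.NewDate != null)
    {
        // Respond to the newly selected date here.
    }
}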

NOTE
For important info about date values, see DateTime and Calendar values in the Date and time controls article.

Setting a header and placeholder text


You can add a Header (or label) and PlaceholderText (or watermark) to the calendar date picker to give the user
an indication of what it's used for. To customize the look of the header, you can set the HeaderTemplate property
instead of Header.
The default placeholder text is "select a date". You can remove this by setting the PlaceholderText property to an
empty string, or you can provide custom text as shown here.

<CalendarDatePicker x:Name="arrivalCalendarDatePicker" Header="Arrival date"
    PlaceholderText="Choose your arrival date"/>

Get the sample code


XAML UI basics sample
Related articles
Date and time controls
Calendar view
Date picker
Time picker
Calendar view

A calendar view lets a user view and interact with a calendar that they can navigate by month, year, or decade. A
user can select a single date or a range of dates. It doesn't have a picker surface and the calendar is always visible.

Important APIs
CalendarView class
SelectedDatesChanged event

Is this the right control?


Use a calendar view to let a user pick a single date or a range of dates from an always visible calendar.
If you need to let a user select multiple dates at one time, you must use a calendar view. If you need to let a user
pick only a single date and don't need a calendar to be always visible, consider using a calendar date picker or date
picker control.
For more info about choosing the right control, see the Date and time controls article.

Examples
The calendar view is made up of 3 separate views: the month view, year view, and decade view. By default, it starts
with the month view open. You can specify a startup view by setting the DisplayMode property.

Users click the header in the month view to open the year view, and click the header in the year view to open the
decade view. Users pick a year in the decade view to return to the year view, and pick a month in the year view to
return to the month view. The two arrows to the side of the header navigate forward or backward by month, by
year, or by decade.

Create a calendar view


This example shows how to create a simple calendar view.

<CalendarView/>

The resulting calendar view looks like this:


Selecting dates
By default, the SelectionMode property is set to Single. This lets a user pick a single date in the calendar. Set
SelectionMode to None to disable date selection.
Set SelectionMode to Multiple to let a user select multiple dates. You can select multiple dates programmatically
by adding DateTime/DateTimeOffset objects to the SelectedDates collection, as shown here:

calendarView1.SelectedDates.Add(DateTimeOffset.Now);
calendarView1.SelectedDates.Add(new DateTime(1977, 1, 5));

A user can deselect a selected date by clicking or tapping it in the calendar grid.
You can handle the SelectedDatesChanged event to be notified when the SelectedDates collection has
changed.
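Here's a minimal handler sketch; the body is illustrative only.

private void CalendarView_SelectedDatesChanged(CalendarView sender,
    CalendarViewSelectedDatesChangedEventArgs args)
{
    // args.AddedDates and args.RemovedDates describe how the selection changed.
    foreach (DateTimeOffset date in args.AddedDates)
    {
        // Respond to each newly selected date here.
    }
}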

NOTE
For important info about date values, see DateTime and Calendar values in the Date and time controls article.

Customizing the calendar view's appearance


The calendar view is composed of both XAML elements defined in the ControlTemplate and visual elements
rendered directly by the control.
The XAML elements defined in the control template include the border that encloses the control, the header,
previous and next buttons, and DayOfWeek elements. You can style and re-template these elements like any
XAML control.
The calendar grid is composed of CalendarViewDayItem objects. You cant style or re-template these
elements, but various properties are provided to let you to customize their appearance.
This diagram shows the elements that make up the month view of the calendar. For more info, see the Remarks on
the CalendarViewDayItem class.
This table lists the properties you can change to modify the appearance of calendar elements.

ELEMENT: PROPERTIES

DayOfWeek: DayOfWeekFormat

CalendarItem: CalendarItemBackground, CalendarItemBorderBrush, CalendarItemBorderThickness, CalendarItemForeground

DayItem: DayItemFontFamily, DayItemFontSize, DayItemFontStyle, DayItemFontWeight, HorizontalDayItemAlignment, VerticalDayItemAlignment, CalendarViewDayItemStyle

MonthYearItem (in the year and decade views, equivalent to DayItem): MonthYearItemFontFamily, MonthYearItemFontSize, MonthYearItemFontStyle, MonthYearItemFontWeight

FirstOfMonthLabel: FirstOfMonthLabelFontFamily, FirstOfMonthLabelFontSize, FirstOfMonthLabelFontStyle, FirstOfMonthLabelFontWeight, HorizontalFirstOfMonthLabelAlignment, VerticalFirstOfMonthLabelAlignment, IsGroupLabelVisible

FirstOfYearDecadeLabel (in the year and decade views, equivalent to FirstOfMonthLabel): FirstOfYearDecadeLabelFontFamily, FirstOfYearDecadeLabelFontSize, FirstOfYearDecadeLabelFontStyle, FirstOfYearDecadeLabelFontWeight

Visual State Borders: FocusBorderBrush, HoverBorderBrush, PressedBorderBrush, SelectedBorderBrush, SelectedForeground, SelectedHoverBorderBrush, SelectedPressedBorderBrush

OutOfScope: IsOutOfScopeEnabled, OutOfScopeBackground, OutOfScopeForeground

Today: IsTodayHighlighted, TodayFontWeight, TodayForeground

By default, the month view shows 6 weeks at a time. You can change the number of weeks shown by setting the
NumberOfWeeksInView property. The minimum number of weeks to show is 2; the maximum is 8.
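For example (calendarView1 is an assumed name):

// Show 4 weeks at a time instead of the default 6.
calendarView1.NumberOfWeeksInView = 4;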
By default, the year and decade views show in a 4x4 grid. To change the number of rows or columns, call
SetYearDecadeDisplayDimensions with your desired number of rows and columns. This will change the
grid for both the year and decade views.
Here, the year and decade views are set to show in a 3x4 grid.

calendarView1.SetYearDecadeDisplayDimensions(3, 4);

By default, the minimum date shown in the calendar view is 100 years prior to the current date, and the maximum
date shown is 100 years past the current date. You can change the minimum and maximum dates that the
calendar shows by setting the MinDate and MaxDate properties.

calendarView1.MinDate = new DateTime(2000, 1, 1);


calendarView1.MaxDate = new DateTime(2099, 12, 31);

Updating calendar day items


Each day in the calendar is represented by a CalendarViewDayItem object. To access an individual day item and
use its properties and methods, handle the CalendarViewDayItemChanging event and use the Item property of
the event args to access the CalendarViewDayItem.
You can make a day not selectable in the calendar view by setting its CalendarViewDayItem.IsBlackout
property to true.
You can show contextual information about the density of events in a day by calling the
CalendarViewDayItem.SetDensityColors method. You can show from 0 to 10 density bars for each day, and
set the color of each bar.
Here are some day items in a calendar. Days 1 and 2 are blacked out. Days 2, 3, and 4 have various density bars
set.

Phased rendering
A calendar view can contain a large number of CalendarViewDayItem objects. To keep the UI responsive and
enable smooth navigation through the calendar, calendar view supports phased rendering. This lets you break up
processing of a day item into phases. If a day is moved out of view before all the phases are complete, no more
time is used trying to process and render that item.
This example shows phased rendering of a calendar view for scheduling appointments.
In phase 0, the default day item is rendered.
In phase 1, you blackout dates that can't be booked. This includes past dates, Sundays, and dates that are
already fully booked.
In phase 2, you check each appointment that's booked for the day. You show a green density bar for each
confirmed appointment and a blue density bar for each tentative appointment.
The Bookings class in this example is from a fictitious appointment booking app, and is not shown.

<CalendarView CalendarViewDayItemChanging="CalendarView_CalendarViewDayItemChanging"/>
private void CalendarView_CalendarViewDayItemChanging(CalendarView sender,
CalendarViewDayItemChangingEventArgs args)
{
// Render basic day items.
if (args.Phase == 0)
{
// Register callback for next phase.
args.RegisterUpdateCallback(CalendarView_CalendarViewDayItemChanging);
}
// Set blackout dates.
else if (args.Phase == 1)
{
// Blackout dates in the past, Sundays, and dates that are fully booked.
if (args.Item.Date < DateTimeOffset.Now ||
args.Item.Date.DayOfWeek == DayOfWeek.Sunday ||
Bookings.HasOpenings(args.Item.Date) == false)
{
args.Item.IsBlackout = true;
}
// Register callback for next phase.
args.RegisterUpdateCallback(CalendarView_CalendarViewDayItemChanging);
}
// Set density bars.
else if (args.Phase == 2)
{
// Avoid unnecessary processing.
// You don't need to set bars on past dates or Sundays.
if (args.Item.Date > DateTimeOffset.Now &&
args.Item.Date.DayOfWeek != DayOfWeek.Sunday)
{
// Get bookings for the date being rendered.
var currentBookings = Bookings.GetBookings(args.Item.Date);

List<Color> densityColors = new List<Color>();


// Set a density bar color for each of the day's bookings.
// It's assumed that there can't be more than 10 bookings in a day. Otherwise,
// further processing is needed to fit within the max of 10 density bars.
foreach (var booking in currentBookings)
{
if (booking.IsConfirmed == true)
{
densityColors.Add(Colors.Green);
}
else
{
densityColors.Add(Colors.Blue);
}
}
args.Item.SetDensityColors(densityColors);
}
}
}

Related articles
Date and time controls
Calendar date picker
Date picker
Time picker
Date picker

The date picker gives you a standardized way to let users pick a localized date value using touch, mouse, or
keyboard input.

Important APIs
DatePicker class
Date property

Is this the right control?


Use a date picker to let a user pick a known date, such as a date of birth, where the context of the calendar is not
important.
For more info about choosing the right date control, see the Date and time controls article.

Examples
The entry point displays the chosen date, and when the user selects the entry point, a picker surface expands
vertically from the middle for the user to make a selection. The date picker overlays other UI; it doesn't push other
UI out of the way.

Create a date picker


This example shows how to create a simple date picker with a header.

<DatePicker x:Name="birthDatePicker" Header="Date of birth"/>

DatePicker birthDatePicker = new DatePicker();


birthDatePicker.Header = "Date of birth";

The resulting date picker looks like this:


Note For important info about date values, see DateTime and Calendar values in the Date and time controls
article.

Related articles
Date and time controls
Calendar date picker
Calendar view
Time picker
Time picker

The time picker gives you a standardized way to let users pick a time value using touch, mouse, or keyboard input.

Important APIs
TimePicker class
Time property

Is this the right control?


Use a time picker to let a user pick a single time value.
For more info about choosing the right control, see the Date and time controls article.

Examples
The entry point displays the chosen time, and when the user selects the entry point, a picker surface expands
vertically from the middle for the user to make a selection. The time picker overlays other UI; it doesn't push other
UI out of the way.

Create a time picker


This example shows how to create a simple time picker with a header.

<TimePicker x:Name="arrivalTimePicker" Header="Arrival time"/>

TimePicker arrivalTimePicker = new TimePicker();


arrivalTimePicker.Header = "Arrival time";

The resulting time picker looks like this:


NOTE
For important info about date and time values, see DateTime and Calendar values in the Date and time controls article.

Related topics
Date and time controls
Calendar date picker
Calendar view
Date picker
Dialogs and flyouts

Dialogs and flyouts are transient UI elements that appear when something happens that requires notification,
approval, or additional information from the user.

Important APIs
ContentDialog class
Flyout class

Dialogs

Dialogs are modal UI overlays that provide contextual app information. Dialogs block interactions with the app
window until being explicitly dismissed. They often request some kind of action from the user.

Flyouts

A flyout is a lightweight contextual popup that displays UI related to what the user is doing. It includes
placement and sizing logic, and can be used to reveal a hidden control, show more detail about an item, or ask
the user to confirm an action.

Unlike a dialog, a flyout can be quickly dismissed by tapping or clicking somewhere outside the flyout,
pressing the Escape key or Back button, resizing the app window, or changing the device's orientation.

Is this the right control?


Use dialogs and flyouts to notify users of important information or to request confirmation or additional info
before an action can be completed.
Don't use a flyout instead of a tooltip or context menu. Use a tooltip to show a short description that hides after a
specified time. Use a context menu for contextual actions related to a UI element, such as copy and paste.
Dialogs and flyouts make sure that users are aware of important information, but they also disrupt the user
experience. Because dialogs are modal (blocking), they interrupt users, preventing them from doing anything else
until they interact with the dialog. Flyouts provide a less jarring experience, but displaying too many flyouts can be
distracting.
Consider the importance of the information you want to share: is it important enough to interrupt the user? Also
consider how frequently the information needs to be shown; if you're showing a dialog or notification every few
minutes, you might want to allocate space for this info in the primary UI instead. For example, in a chat client,
rather than showing a flyout every time a friend logs in, you might display a list of friends who are online at the
moment and highlight friends as they log on.
Flyouts and dialogs are frequently used to confirm an action (such as deleting a file) before executing it. If you
expect the user to perform a particular action frequently, consider providing a way for the user to undo the action
if it was a mistake, rather than forcing users to confirm the action every time.

Dialogs vs. flyouts


Once you've determined that you want to use a dialog or flyout, you need to choose which one to use.
Given that dialogs block interactions and flyouts do not, dialogs should be reserved for situations where you want
the user to drop everything to focus on a specific bit of information or answer a question. Flyouts, on the other
hand, can be used when you want to call attention to something, but it's ok if the user wants to ignore it.

Use a dialog for...


Expressing important information that the user must read and acknowledge before proceeding. Examples
include:
When the user's security might be compromised
When the user is about to permanently alter a valuable asset
When the user is about to delete a valuable asset
To confirm an in-app purchase
Error messages that apply to the overall app context, such as a connectivity error.
Questions, when the app needs to ask the user a blocking question, such as when the app can't choose on
the user's behalf. A blocking question can't be ignored or postponed, and should offer the user well-defined
choices.

Use a flyout for...


Collecting additional information needed before an action can be completed.
Displaying info that's only relevant some of the time. For example, in a photo gallery app, when the user
clicks an image thumbnail, you might use a flyout to display a large version of the image.
Warnings and confirmations, including ones related to potentially destructive actions.
Displaying more information, such as details or longer descriptions of an item on the page.

Dialogs
General guidelines
Clearly identify the issue or the user's objective in the first line of the dialog's text.
The dialog title is the main instruction and is optional.
Use a short title to explain what people need to do with the dialog. Long titles do not wrap and are
truncated.
If you're using the dialog to deliver a simple message, error or question, you can optionally omit the
title. Rely on the content text to deliver that core information.
Make sure that the title relates directly to the button choices.
The dialog content contains the descriptive text and is required.
Present the message, error, or blocking question as simply as possible.
If a dialog title is used, use the content area to provide more detail or define terminology. Don't repeat
the title with slightly different wording.
At least one dialog button must appear.
Buttons are the only mechanism for users to dismiss the dialog.
Use buttons with text that identifies specific responses to the main instruction or content. An example is,
"Do you want to allow AppName to access your location?", followed by "Allow" and "Block" buttons.
Specific responses can be understood more quickly, resulting in efficient decision making.
Present the commit buttons in this order:
OK/[Do it]/Yes
[Don't do it]/No
Cancel
(where [Do it] and [Don't do it] are specific responses to the main instruction.)
Error dialogs display the error message in the dialog box, along with any pertinent information. The only
button used in an error dialog should be Close or a similar action.
Don't use dialogs for errors that are contextual to a specific place on the page, such as validation errors (in
password fields, for example), use the app's canvas itself to show inline errors.
Confirmation dialogs (OK/Cancel)
A confirmation dialog gives users the chance to confirm that they want to perform an action. They can affirm the
action, or choose to cancel.
A typical confirmation dialog has two buttons: an affirmation ("OK") button and a cancel button.
In general, the affirmation button should be on the left (the primary button) and the cancel button (the
secondary button) should be on the right.

As noted in the general recommendations section, use buttons with text that identifies specific responses to the
main instruction or content.

Some platforms put the affirmation button on the right instead of the left. So why do we recommend putting
it on the left? If you assume that the majority of users are right-handed and they hold their phone with that
hand, it's actually more comfortable to press the affirmation button when it's on the left, because the button is
more likely to be within the user's thumb-arc. Buttons on the right side of the screen require the user to pull
their thumb inward into a less comfortable position.
Create a dialog
To create a dialog, you use the ContentDialog class. You can create a dialog in code or markup. Although it's
usually easier to define UI elements in XAML, in the case of a simple dialog, it's actually easier to just use code.
This example creates a dialog to notify the user that there's no WiFi connection, and then uses the ShowAsync
method to display it.

private async void displayNoWifiDialog()


{
ContentDialog noWifiDialog = new ContentDialog()
{
Title = "No wifi connection",
Content = "Check connection and try again",
PrimaryButtonText = "Ok"
};

ContentDialogResult result = await noWifiDialog.ShowAsync();


}

When the user clicks a dialog button, the ShowAsync method returns a ContentDialogResult to let you know
which button the user clicks.
The dialog in this example asks a question and uses the returned ContentDialogResult to determine the user's
response.

private async void displayDeleteFileDialog()


{
ContentDialog deleteFileDialog = new ContentDialog()
{
Title = "Delete file permanently?",
Content = "If you delete this file, you won't be able to recover it. Do you want to delete it?",
PrimaryButtonText = "Delete",
SecondaryButtonText = "Cancel"
};

ContentDialogResult result = await deleteFileDialog.ShowAsync();

// Delete the file if the user clicked the primary button.
// Otherwise, do nothing.
if (result == ContentDialogResult.Primary)
{
// Delete the file.
}
}

Flyouts
Create a flyout
A flyout is an open-ended container that can show arbitrary UI as its content.
Flyouts are attached to specific controls. Some controls, such as Button, provide a Flyout property that you can
use to associate a flyout. You can use the Placement property to specify where a flyout appears relative to the
invoking object: Top, Left, Bottom, or Right. Flyout also has a Full placement mode, which stretches the flyout
and centers it inside the app window.
This example creates a simple flyout that displays some text when the button is pressed.
<Button Content="Click me">
<Button.Flyout>
<Flyout>
<TextBlock Text="This is a flyout!"/>
</Flyout>
</Button.Flyout>
</Button>
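You can also set the placement in code. Here's a minimal sketch, assuming the button in the example above is given x:Name="clickMeButton":

// Prefer showing the flyout above the button. FlyoutPlacementMode.Full
// stretches and centers the flyout in the app window instead.
clickMeButton.Flyout.Placement = FlyoutPlacementMode.Top;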

If the control doesn't have a flyout property, you can use the FlyoutBase.AttachedFlyout attached property instead.
When you do this, you also need to call the FlyoutBase.ShowAttachedFlyout method to show the flyout.
This example adds a simple flyout to an image. When the user taps the image, the app shows the flyout.

<Image Source="Assets/cliff.jpg" Width="50" Height="50"
    Margin="10" Tapped="Image_Tapped">
<FlyoutBase.AttachedFlyout>
<Flyout>
<TextBlock TextWrapping="Wrap" Text="This is some text in a flyout." />
</Flyout>
</FlyoutBase.AttachedFlyout>
</Image>

private void Image_Tapped(object sender, TappedRoutedEventArgs e)


{
FlyoutBase.ShowAttachedFlyout((FrameworkElement)sender);
}

The previous examples defined their flyouts inline. You can also define a flyout as a static resource and then use it
with multiple elements. This example creates a more complicated flyout that displays a larger version of an image
when its thumbnail is tapped.

<!-- Declare the shared flyout as a resource. -->


<Page.Resources>
<Flyout x:Key="ImagePreviewFlyout" Placement="Right">
<!-- The flyout's DataContext must be the Image Source
of the image the flyout is attached to. -->
<Image Source="{Binding Path=Source}"
MaxHeight="400" MaxWidth="400" Stretch="Uniform"/>
<Flyout.FlyoutPresenterStyle>
<Style TargetType="FlyoutPresenter">
<Setter Property="ScrollViewer.ZoomMode" Value="Enabled"/>
<Setter Property="Background" Value="Black"/>
<Setter Property="BorderBrush" Value="Gray"/>
<Setter Property="BorderThickness" Value="5"/>
<Setter Property="MinHeight" Value="300"/>
<Setter Property="MinWidth" Value="300"/>
</Style>
</Flyout.FlyoutPresenterStyle>
</Flyout>
</Page.Resources>
<!-- Assign the flyout to each element that shares it. -->
<StackPanel>
<Image Source="Assets/cliff.jpg" Width="50" Height="50"
Margin="10" Tapped="Image_Tapped"
FlyoutBase.AttachedFlyout="{StaticResource ImagePreviewFlyout}"
DataContext="{Binding RelativeSource={RelativeSource Mode=Self}}"/>
<Image Source="Assets/grapes.jpg" Width="50" Height="50"
Margin="10" Tapped="Image_Tapped"
FlyoutBase.AttachedFlyout="{StaticResource ImagePreviewFlyout}"
DataContext="{Binding RelativeSource={RelativeSource Mode=Self}}"/>
<Image Source="Assets/rainier.jpg" Width="50" Height="50"
Margin="10" Tapped="Image_Tapped"
FlyoutBase.AttachedFlyout="{StaticResource ImagePreviewFlyout}"
DataContext="{Binding RelativeSource={RelativeSource Mode=Self}}"/>
</StackPanel>

private void Image_Tapped(object sender, TappedRoutedEventArgs e)


{
FlyoutBase.ShowAttachedFlyout((FrameworkElement)sender);
}

Style a flyout
To style a Flyout, modify its FlyoutPresenterStyle. This example shows a paragraph of wrapping text and makes
the text block accessible to a screen reader.

<Flyout>
<Flyout.FlyoutPresenterStyle>
<Style TargetType="FlyoutPresenter">
<Setter Property="ScrollViewer.HorizontalScrollMode"
Value="Disabled"/>
<Setter Property="ScrollViewer.HorizontalScrollBarVisibility" Value="Disabled"/>
<Setter Property="IsTabStop" Value="True"/>
<Setter Property="TabNavigation" Value="Cycle"/>
</Style>
</Flyout.FlyoutPresenterStyle>
<TextBlock Style="{StaticResource BodyTextBlockStyle}" Text="Lorem ipsum dolor sit amet, consectetur adipiscing elit, sed do eiusmod
tempor incididunt ut labore et dolore magna aliqua. Ut enim ad minim veniam, quis nostrud exercitation ullamco laboris nisi ut aliquip ex ea
commodo consequat."/>
</Flyout>

Get the samples


XAML UI basics
See all of the XAML controls in an interactive format.

Related articles
Tooltips
Menus and context menu
Flyout class
ContentDialog class
Flip view

Use a flip view for browsing images or other items in a collection, such as photos in an album or items in a
product details page, one item at a time. For touch devices, swiping across an item moves through the collection.
For a mouse, navigation buttons appear on mouse hover. For a keyboard, arrow keys move through the collection.

Important APIs
FlipView class
ItemsSource property
ItemTemplate property

Is this the right control?


Flip view is best for perusing images in small to medium collections (up to 25 or so items). Examples of such
collections include items in a product details page or photos in a photo album. Although we don't recommend flip
view for most large collections, the control is common for viewing individual images in a photo album.

Examples
Horizontal browsing, starting at the left-most item and flipping right, is the typical layout for a flip view. This layout
works well in either portrait or landscape orientation on all devices:

A flip view can also be browsed vertically:

Create a flip view


FlipView is an ItemsControl, so it can contain a collection of items of any type. To populate the view, add items to
the Items collection, or set the ItemsSource property to a data source.
By default, a data item is displayed in the flip view as the string representation of the data object it's bound to. To
specify exactly how items in the flip view are displayed, you create a DataTemplate to define the layout of
controls used to display an individual item. The controls in the layout can be bound to properties of a data object,
or have content defined inline. You assign the DataTemplate to the ItemTemplate property of the FlipView.
Add items to the Items collection
You can add items to the Items collection using XAML or code. You typically add items this way if you have a small
number of items that don't change and are easily defined in XAML, or if you generate the items in code at run
time. Here's a flip view with items defined inline.

<FlipView x:Name="flipView1">
<Image Source="Assets/Logo.png" />
<Image Source="Assets/SplashScreen.png" />
<Image Source="Assets/SmallLogo.png" />
</FlipView>

// Create a new flip view, add content,


// and add a SelectionChanged event handler.
FlipView flipView1 = new FlipView();
flipView1.Items.Add("Item 1");
flipView1.Items.Add("Item 2");

// Add the flip view to a parent container in the visual tree.


stackPanel1.Children.Add(flipView1);

When you add items to a flip view, they are automatically placed in a FlipViewItem container. To change how an
item is displayed, you can apply a style to the item container by setting the ItemContainerStyle property.
When you define the items in XAML, they are automatically added to the Items collection.
Set the items source
You typically use a flip view to display data from a source such as a database or the Internet. To populate a flip
view from a data source, you set its ItemsSource property to a collection of data items.
Here, the flip view's ItemsSource is set in code directly to an instance of a collection.

// Data source.
List<String> itemsList = new List<string>();
itemsList.Add("Item 1");
itemsList.Add("Item 2");

// Create a new flip view, add content,


// and add a SelectionChanged event handler.
FlipView flipView1 = new FlipView();
flipView1.ItemsSource = itemsList;
flipView1.SelectionChanged += FlipView_SelectionChanged;

// Add the flip view to a parent container in the visual tree.


stackPanel1.Children.Add(flipView1);
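The code above attaches a FlipView_SelectionChanged handler that isn't defined in this article; a minimal sketch might look like this:

private void FlipView_SelectionChanged(object sender, SelectionChangedEventArgs e)
{
    // Respond to the item the user flipped to.
    FlipView flipView = (FlipView)sender;
    object selectedItem = flipView.SelectedItem;
}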

You can also bind the ItemsSource property to a collection in XAML. For more info, see Data binding with XAML.
Here, the ItemsSource is bound to a CollectionViewSource named itemsViewSource .
<Page.Resources>
<!-- Collection of items displayed by this page -->
<CollectionViewSource x:Name="itemsViewSource" Source="{Binding Items}"/>
</Page.Resources>

...

<FlipView x:Name="itemFlipView"
ItemsSource="{Binding Source={StaticResource itemsViewSource}}"/>

Note You can populate a flip view either by adding items to its Items collection, or by setting its ItemsSource
property, but you can't use both ways at the same time. If you set the ItemsSource property and you add an
item in XAML, the added item is ignored. If you set the ItemsSource property and you add an item to the Items
collection in code, an exception is thrown.

Specify the look of the items


By default, a data item is displayed in the flip view as the string representation of the data object it's bound to. You
typically want to show a richer presentation of your data. To specify exactly how items in the flip view are
displayed, you create a DataTemplate. The XAML in the DataTemplate defines the layout and appearance of
controls used to display an individual item. The controls in the layout can be bound to properties of a data object,
or have content defined inline. The DataTemplate is assigned to the ItemTemplate property of the FlipView
control.
In this example, the ItemTemplate of a FlipView is defined inline. An overlay is added to the image to display the
image name.

<FlipView x:Name="flipView1" Width="480" Height="270"
    BorderBrush="Black" BorderThickness="1">
<FlipView.ItemTemplate>
<DataTemplate>
<Grid>
<Image Width="480" Height="270" Stretch="UniformToFill"
Source="{Binding Image}"/>
<Border Background="#A5000000" Height="80" VerticalAlignment="Bottom">
<TextBlock Text="{Binding Name}"
FontFamily="Segoe UI" FontSize="26.667"
Foreground="#CCFFFFFF" Padding="15,20"/>
</Border>
</Grid>
</DataTemplate>
</FlipView.ItemTemplate>
</FlipView>

Here's what the layout defined by the data template looks like.
Flip view data template.
Set the orientation of the flip view
By default, the flip view flips horizontally. To make it flip vertically, use a stack panel with a vertical orientation
as the flip view's ItemsPanel.
This example shows how to use a stack panel with a vertical orientation as the ItemsPanel of a FlipView.
<FlipView x:Name="flipViewVertical" Width="480" Height="270"
BorderBrush="Black" BorderThickness="1">

<!-- Use a vertical stack panel for vertical flipping. -->


<FlipView.ItemsPanel>
<ItemsPanelTemplate>
<VirtualizingStackPanel Orientation="Vertical"/>
</ItemsPanelTemplate>
</FlipView.ItemsPanel>

<FlipView.ItemTemplate>
<DataTemplate>
<Grid>
<Image Width="480" Height="270" Stretch="UniformToFill"
Source="{Binding Image}"/>
<Border Background="#A5000000" Height="80" VerticalAlignment="Bottom">
<TextBlock Text="{Binding Name}"
FontFamily="Segoe UI" FontSize="26.667"
Foreground="#CCFFFFFF" Padding="15,20"/>
</Border>
</Grid>
</DataTemplate>
</FlipView.ItemTemplate>
</FlipView>

Here's what the flip view looks like with a vertical orientation.

Adding a context indicator


A context indicator in a flip view provides a useful point of reference. The dots in a standard context indicator
aren't interactive. As seen in this example, the best placement is usually centered and below the gallery:

For larger collections (10-25 items), consider using an indicator that provides more context, such as a film strip of
thumbnails. Unlike a context indicator that uses simple dots, each thumbnail in the film strip shows a small version
of the corresponding image and should be selectable:
For example code that shows how to add a context indicator to a FlipView, see XAML FlipView sample.

Do's and don'ts


Flip views work best for collections of up to 25 or so items.
Avoid using a flip view control for larger collections, as the repetitive motion of flipping through each item can
be tedious. An exception would be for photo albums, which often have hundreds or thousands of images.
Photo albums almost always switch to a flip view once a photo has been selected in the grid view layout. For
other large collections, consider a List view or grid view.
For context indicators:
The order of dots (or whichever visual marker you choose) works best when centered and below a
horizontally-panning gallery.
If you want a context indicator in a vertically-panning gallery, it works best centered and to the right of
the images.
The highlighted dot indicates the current item. Usually the highlighted dot is white and the other dots
are gray.
The number of dots can vary, but don't have so many that the user might struggle to find his or her
place; 10 dots is usually the maximum number to show.

Globalization and localization checklist


Bi-directional considerations: Use standard mirroring for RTL languages. The back and forward controls should
be based on the language's direction, so for RTL languages, the right button should navigate backward and the
left button should navigate forward.

Get the sample code


XAML UI basics sample

Related articles
Guidelines for lists
FlipView class
Hub control/pattern

A hub control lets you organize app content into distinct, yet related, sections or categories. Sections in a hub are
meant to be traversed in a preferred order, and can serve as the starting point for more detailed experiences.

Content in a hub can be displayed in a panoramic view that allows users to get a glimpse of what's new, what's
available, and what's relevant. Hubs typically have a page header, and content sections each get a section header.

Important APIs
Hub class
HubSection class

Is this the right control?


The hub control works well for displaying large amounts of content that is arranged in a hierarchy. Hubs prioritize
the browsing and discovery of new content, making them useful for displaying items in a store or a media
collection.
The hub control has several features that make it work well for building a content navigation pattern.
Visual navigation
A hub allows content to be displayed in a diverse, brief, easy-to-scan array.
Categorization
Each hub section allows for its content to be arranged in a logical order.
Mixed content types
With mixed content types, variable asset sizes and ratios are common. A hub allows each content type to be
uniquely and neatly laid out in each hub section.
Variable page and content widths
Being a panoramic model, the hub allows for variability in its section widths. This is great for content of
different depths or quantities.
Flexible architecture
If you'd prefer to keep your app architecture shallow, you can fit all channel content into a hub section
summary.
A hub is just one of several navigation elements you can use; to learn more about navigation patterns and the
other navigation elements, see the Navigation design basics for Universal Windows Platform (UWP) apps.

Hub architecture
The hub control has a hierarchical navigation pattern that supports apps with a relational information architecture.
A hub consists of different categories of content, each of which maps to the app's section pages. Section pages can
be displayed in any form that best represents the scenario and content that the section contains.
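As a minimal sketch (the headers here are invented for illustration), a hub with a few sections can be created in code like this:

// A Hub with two sections. In a real app, each section's content
// is typically defined with a DataTemplate.
Hub hub = new Hub();
hub.Header = "News";
hub.Sections.Add(new HubSection { Header = "Top stories" });
hub.Sections.Add(new HubSection { Header = "For you" });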

Layouts and panning/scrolling


There are a number of ways to lay out and navigate content in a hub; just be sure that content lists in a hub always
pan in a direction perpendicular to the direction in which the hub scrolls.
Horizontal panning

Vertical panning
Horizontal panning with vertically scrolling list/grid

Vertical panning with horizontally scrolling list/grid

Examples
The hub provides a great deal of design flexibility. This lets you design apps that have a wide variety of compelling
and visually rich experiences. You can use a hero image or content section for the first group; a large image for the
hero can be cropped both vertically and horizontally without losing the center of interest. Here is an example of a
single hero image and how that image may be cropped for landscape, portrait, and narrow width.
On mobile devices, one hub section is visible at a time.

Recommendations
To let users know that there's more content in a hub section, we recommend clipping the content so that a
certain amount of it peeks.
Based on the needs of your app, you can add several hub sections to the hub control, with each one offering its
own functional purpose. For example, one section could contain a series of links and controls, while another
could be a repository for thumbnails. A user can pan between these sections using the gesture support built
into the hub control.
Having content dynamically reflow is the best way to accommodate different window sizes.
If you have many hub sections, consider adding semantic zoom. This also makes it easier to find sections when
the app is resized to a narrow width.
We recommend not having an item in a hub section lead to another hub; instead, you can use interactive
headers to navigate to another hub section or page.
The hub is a starting point and is meant to be customized to fit the needs of your app. You can change the
following aspects of a hub:
Number of sections
Type of content in each section
Placement and order of sections
Size of sections
Spacing between sections
Spacing between a section and the top or bottom of the hub
Text style and size in headers and content
Color of the background, sections, section headers, and section content

Get the sample code


XAML UI basics sample

Related articles
Hub class
Navigation basics
Using a hub
XAML Hub control sample
Hyperlinks

Hyperlinks navigate the user to another part of the app or to another app, or they launch a specific uniform
resource identifier (URI) in a separate browser app. There are two ways that you can add a hyperlink to a XAML
app: the Hyperlink text element and the HyperlinkButton control.

Important APIs
Hyperlink text element
HyperlinkButton control

Is this the right control?


Use a hyperlink when you need text that responds when selected and navigates the user to more information
about the text that was selected.
Choose the right type of hyperlink based on your needs:
Use an inline Hyperlink text element inside of a text control. A Hyperlink element flows with other text
elements and you can use it in any InlineCollection. Use a text hyperlink if you want automatic text wrapping
and don't necessarily need a large hit target. Hyperlink text can be small and difficult to target, especially for
touch.
Use a HyperlinkButton for stand-alone hyperlinks. A HyperlinkButton is a specialized Button control that you
can use anywhere that you would use a Button.
Use a HyperlinkButton with an Image as its content to make a clickable image.

Examples
Hyperlinks in the Calculator app.
Create a Hyperlink text element
This example shows how to use a Hyperlink text element inside of a TextBlock.

<StackPanel Width="200">
<TextBlock Text="Privacy" Style="{StaticResource SubheaderTextBlockStyle}"/>
<TextBlock TextWrapping="WrapWholeWords">
<Span xml:space="preserve"><Run>Lorem ipsum dolor sit amet, consectetur adipiscing elit. Read the </Run><Hyperlink
NavigateUri="http://www.contoso.com">Contoso Privacy Statement</Hyperlink><Run> in your browser.</Run> Donec pharetra, enim sit amet
mattis tincidunt, felis nisi semper lectus, vel porta diam nisi in augue.</Span>
</TextBlock>
</StackPanel>

The hyperlink appears inline and flows with the surrounding text:

Tip When you use a Hyperlink in a text control with other text elements in XAML, place the content in a Span
container and apply the xml:space="preserve" attribute to the Span to keep the white space between the
Hyperlink and other elements.
Create a HyperlinkButton
Here's how to use a HyperlinkButton, both with text and with an image.

<StackPanel>
    <TextBlock Text="About" Style="{StaticResource TitleTextBlockStyle}"/>
    <HyperlinkButton NavigateUri="http://www.contoso.com">
        <Image Source="Assets/ContosoLogo.png"/>
    </HyperlinkButton>
    <TextBlock Text="Version: 1.0.0001" Style="{StaticResource CaptionTextBlockStyle}"/>
    <HyperlinkButton Content="Contoso.com" NavigateUri="http://www.contoso.com"/>
    <HyperlinkButton Content="Acknowledgments" NavigateUri="http://www.contoso.com"/>
    <HyperlinkButton Content="Help" NavigateUri="http://www.contoso.com"/>
</StackPanel>

The hyperlink buttons with text content appear as marked-up text. The Contoso logo image is also a clickable
hyperlink:

Handle navigation
For both kinds of hyperlinks, you handle navigation the same way; you can set the NavigateUri property, or
handle the Click event.
Navigate to a URI
To use the hyperlink to navigate to a URI, set the NavigateUri property. When a user clicks or taps the hyperlink,
the specified URI opens in the default browser. The default browser runs in a separate process from your app.

NOTE
You don't have to use http: or https: schemes. You can use schemes such as ms-appx:, ms-appdata:, or ms-resource:, if
there's resource content at these locations that's appropriate to load in a browser. However, the file: scheme is specifically
blocked. For more info, see URI schemes.
When a user clicks the hyperlink, the value of the NavigateUri property is passed to a system handler for URI types and
schemes. The system then launches the app that is registered for the scheme of the URI provided for NavigateUri.

If you don't want the hyperlink to load content in a default Web browser (and don't want a browser to appear),
then don't set a value for NavigateUri. Instead, handle the Click event, and write code that does what you want.
Handle the Click event
Use the Click event for actions other than launching a URI in a browser, such as navigation within the app. For
example, if you want to load a new app page rather than opening a browser, call a Frame.Navigate method within
your Click event handler to navigate to the new app page. If you want an external, absolute URI to load within a
WebView control that also exists in your app, call WebView.Navigate as part of your Click handler logic.
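For example, here's a minimal sketch of a Click handler for a Hyperlink text element that navigates within the app; "DetailsPage" is a hypothetical page type in your app. Because no NavigateUri is set on the Hyperlink, no browser is launched.

private void Hyperlink_Click(Windows.UI.Xaml.Documents.Hyperlink sender,
    Windows.UI.Xaml.Documents.HyperlinkClickEventArgs args)
{
    // Load a page in the app instead of opening a browser.
    this.Frame.Navigate(typeof(DetailsPage));
}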
You don't typically handle the Click event as well as specifying a NavigateUri value, as these represent two
different ways of using the hyperlink element. If your intent is to open the URI in the default browser, and you
have specified a value for NavigateUri, don't handle the Click event. Conversely, if you handle the Click event,
don't specify a NavigateUri.
There's nothing you can do within the Click event handler to prevent the default browser from loading any valid
target specified for NavigateUri; that action takes place automatically (asynchronously) when the hyperlink is
activated and can't be canceled from within the Click event handler.

Hyperlink underlines
By default, hyperlinks are underlined. This underline is important because it helps meet accessibility requirements.
Color-blind users use the underline to distinguish between hyperlinks and other text. If you disable underlines,
you should consider adding some other type of formatting difference to distinguish hyperlinks from other text,
such as FontWeight or FontStyle.
Hyperlink text elements
You can set the UnderlineStyle property to disable the underline. If you do, consider using FontWeight or
FontStyle to differentiate your link text.
HyperlinkButton
By default, the HyperlinkButton appears as underlined text when you set a string as the value for the Content
property.
The text does not appear underlined in the following cases:
You set a TextBlock as the value for the Content property, and set the Text property on the TextBlock.
You re-template the HyperlinkButton and change the name of the ContentPresenter template part.
If you need a button that appears as non-underlined text, consider using a standard Button control and applying
the built-in TextBlockButtonStyle system resource to its Style property.
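For example, here's a minimal sketch of creating such a button in code; looking up the style through Application.Current.Resources is one way to reach the system resource.

var helpButton = new Button
{
    Content = "Help",
    // Apply the system style so the button renders as plain text.
    Style = (Style)Application.Current.Resources["TextBlockButtonStyle"]
};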

Notes for Hyperlink text element


This section applies only to the Hyperlink text element, not to the HyperlinkButton control.
Input events
Because a Hyperlink is not a UIElement, it does not have the set of UI element input events such as Tapped,
PointerPressed, and so on. Instead, a Hyperlink has its own Click event, plus the implicit behavior of the system
loading any URI specified as the NavigateUri. The system handles all input actions that should invoke the
Hyperlink actions and raises the Click event in response.
Content
Hyperlink has restrictions on the content that can exist in its Inlines collection. Specifically, a Hyperlink only
permits Run and other Span types that aren't another Hyperlink. InlineUIContainer can't be in the Inlines
collection of a Hyperlink. Attempting to add restricted content throws an invalid argument exception or XAML
parse exception.
Hyperlink and theme/style behavior
Hyperlink doesn't inherit from Control, so it doesn't have a Style property or a Template. You can edit the
properties that are inherited from TextElement, such as Foreground or FontFamily, to change the appearance of a
Hyperlink, but you can't use a common style or template to apply changes. Instead of using a template, consider
using common resources for values of Hyperlink properties to provide consistency. Some properties of Hyperlink
use defaults from a {ThemeResource} markup extension value provided by the system. This enables the Hyperlink
appearance to switch in appropriate ways when the user changes the system theme at run-time.
The default color of the hyperlink is the accent color of the system. You can set the Foreground property to
override this.

Recommendations
Only use hyperlinks for navigation; don't use them for other actions.
Use the Body style from the type ramp for text-based hyperlinks. Read about fonts and the Windows 10
type ramp.
Keep discrete hyperlinks far enough apart so that the user can differentiate between them and has an easy
time selecting each one.
Add tooltips to hyperlinks that indicate to where the user will be directed. If the user will be directed to an
external site, include the top-level domain name inside the tooltip, and style the text with a secondary font
color.

Related articles
Text controls
Guidelines for tooltips
For developers (XAML)
Windows.UI.Xaml.Documents.Hyperlink class
Windows.UI.Xaml.Controls.HyperlinkButton class
Images and image brushes

To display an image, you can use either the Image object or the ImageBrush object. An Image object renders an
image, and an ImageBrush object paints another object with an image.

Important APIs
Image class
Source property
ImageBrush class
ImageSource property

Are these the right elements?


Use an Image element to display a stand-alone image in your app.
Use an ImageBrush to apply an image to another object. Uses for an ImageBrush include decorative effects for
text, or backgrounds for controls or layout containers.

Create an image
Image
This example shows how to create an image by using the Image object.

<Image Width="200" Source="licorice.jpg" />

Here's the rendered Image object.

In this example, the Source property specifies the location of the image that you want to display. You can set the
Source by specifying an absolute URL (for example, http://contoso.com/myPicture.jpg) or by specifying a URL that
is relative to your app packaging structure. For our example, we put the "licorice.jpg" image file in the root folder
of our project and declare project settings that include the image file as content.
ImageBrush
With the ImageBrush object, you can use an image to paint an area that takes a Brush object. For example, you
can use an ImageBrush for the value of the Fill property of an Ellipse or the Background property of a Canvas.
The next example shows how to use an ImageBrush to paint an Ellipse.
<Ellipse Height="200" Width="300">
<Ellipse.Fill>
<ImageBrush ImageSource="licorice.jpg" />
</Ellipse.Fill>
</Ellipse>

Here's the Ellipse painted by the ImageBrush.

Stretch an image
If you don't set the Width or Height values of an Image, it is displayed with the dimensions of the image
specified by the Source. Setting the Width and Height creates a containing rectangular area in which the image
is displayed. You can specify how the image fills this containing area by using the Stretch property. The Stretch
property accepts these values, which the Stretch enumeration defines:
None: The image doesn't stretch to fill the output dimensions. Be careful with this Stretch setting: if the source
image is larger than the containing area, your image will be clipped, and this usually isn't desirable because you
don't have any control over the viewport like you do with a deliberate Clip.
Uniform: The image is scaled to fit the output dimensions while the aspect ratio of the content is preserved. This
is the default value.
UniformToFill: The image is scaled so that it completely fills the output area while preserving its original aspect
ratio. If the aspect ratios differ, the image is clipped to fit.
Fill: The image is scaled to fit the output dimensions. Because the content's height and width are scaled
independently, the original aspect ratio of the image might not be preserved. That is, the image might be
distorted to completely fill the output area.
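For example, here's a minimal sketch that constrains an image and picks one of these Stretch values; the size and image path are illustrative.

var image = new Image
{
    Width = 200,
    Height = 100,
    // Fill the 200x100 area, preserving aspect ratio and clipping any excess.
    Stretch = Stretch.UniformToFill,
    Source = new BitmapImage(new Uri("ms-appx:///Assets/licorice.jpg"))
};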

Crop an image
You can use the Clip property to clip an area from the image output. You set the Clip property to a Geometry.
Currently, non-rectangular clipping is not supported.
The next example shows how to use a RectangleGeometry as the clip region for an image. In this example, we
define an Image object with a height of 200. A RectangleGeometry defines a rectangle for the area of the image
that will be displayed. The Rect property is set to "25,25,100,150", which defines a rectangle starting at position
"25,25" with a width of 100 and a height of 150. Only the part of the image that is within the area of the rectangle
is displayed.
<Image Source="licorice.jpg" Height="200">
<Image.Clip>
<RectangleGeometry Rect="25,25,100,150" />
</Image.Clip>
</Image>

Here's the clipped image on a black background.

Apply an opacity
You can apply an Opacity to an image so that the image is rendered semi-translucent. The opacity values are from
0.0 to 1.0 where 1.0 is fully opaque and 0.0 is fully transparent. This example shows how to apply an opacity of 0.5
to an Image.

<Image Height="200" Source="licorice.jpg" Opacity="0.5" />

Here's the rendered image with an opacity of 0.5 and a black background showing through the partial opacity.

Image file formats


Image and ImageBrush can display these image file formats:
Joint Photographic Experts Group (JPEG)
Portable Network Graphics (PNG)
bitmap (BMP)
Graphics Interchange Format (GIF)
Tagged Image File Format (TIFF)
JPEG XR
icons (ICO)
The APIs for Image, BitmapImage and BitmapSource don't include any dedicated methods for encoding and
decoding of media formats. All of the encode and decode operations are built-in, and at most will surface aspects
of encode or decode as part of event data for load events. If you want to do any special work with image encode or
decode, which you might use if your app is doing image conversions or manipulation, you should use the APIs that
are available in the Windows.Graphics.Imaging namespace. These APIs are also supported by the Windows
Imaging Component (WIC) in Windows.
Starting in Windows 10, version 1607, the Image element supports animated GIF images. When you use a
BitmapImage as the image Source, you can access BitmapImage APIs to control playback of the animated GIF
image. For more info, see the Remarks on the BitmapImage class page.

Note Animated GIF support is available when your app is compiled for Windows 10, version 1607 and
running on version 1607 (or later). When your app is compiled for or runs on previous versions, the first frame
of the GIF is shown, but it is not animated.
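For example, here's a minimal sketch of toggling playback, assuming an Image element named "gifImage" whose Source is an animated GIF loaded into a BitmapImage.

var bitmap = gifImage.Source as BitmapImage;
if (bitmap != null && bitmap.IsAnimatedBitmap)
{
    // Play and Stop control the animation (Windows 10, version 1607 and later).
    if (bitmap.IsPlaying) bitmap.Stop();
    else bitmap.Play();
}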

For more info about app resources and how to package image sources in an app, see Defining app resources.
WriteableBitmap
A WriteableBitmap provides a BitmapSource that can be modified and that doesn't use the basic file-based
decoding from the WIC. You can alter images dynamically and re-render the updated image. To define the buffer
content of a WriteableBitmap, use the PixelBuffer property to access the buffer and use a stream or language-
specific buffer type to fill it. For example code, see WriteableBitmap.
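Here's a minimal sketch of that pattern, assuming an Image element named "imageControl" exists on the page; the solid-red fill is illustrative.

using System.Runtime.InteropServices.WindowsRuntime; // for PixelBuffer.AsStream

var wb = new WriteableBitmap(100, 100);
using (var stream = wb.PixelBuffer.AsStream())
{
    byte[] red = { 0, 0, 255, 255 }; // BGRA: opaque red
    for (int i = 0; i < wb.PixelWidth * wb.PixelHeight; i++)
    {
        stream.Write(red, 0, red.Length);
    }
}
wb.Invalidate(); // request a redraw with the updated buffer
imageControl.Source = wb;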
RenderTargetBitmap
The RenderTargetBitmap class can capture the XAML UI tree from a running app and represent it as a bitmap
image source. After capture, that image source can be applied to other parts of the app, saved as a resource or app
data by the user, or used for other scenarios. One particularly useful scenario is creating a runtime thumbnail of a
XAML page for a navigation scheme, such as providing an image link from a Hub control. RenderTargetBitmap
does have some limitations on the content that will appear in the captured image. For more info, see the API
reference topic for RenderTargetBitmap.
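Here's a minimal sketch of a capture, assuming elements named "hubSection1" and "thumbnailImage" exist on the page; RenderAsync must be awaited from an async method.

var rtb = new RenderTargetBitmap();
await rtb.RenderAsync(hubSection1); // capture the element and its children
thumbnailImage.Source = rtb;       // use the capture as an image source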
Image sources and scaling
You should create your image sources at several recommended sizes, to ensure that your app looks great when
Windows scales it. When specifying a Source for an Image, you can use a naming convention that will
automatically reference the correct resource for the current scaling. For specifics of the naming convention and
more info, see Quickstart: Using file or image resources.
For more info about how to design for scaling, see UX guidelines for layout and scaling.
Image and ImageBrush in code
It's typical to specify Image and ImageBrush elements using XAML rather than code. This is because these
elements are often the output of design tools as part of a XAML UI definition.
If you define an Image or ImageBrush using code, use the default constructors, then set the relevant source
property (Image.Source or ImageBrush.ImageSource). The source properties require a BitmapImage (not a
URI) when you set them using code. If your source is a stream, use the SetSourceAsync method to initialize the
value. If your source is a URI, which includes content in your app that uses the ms-appx or ms-resource schemes,
use the BitmapImage constructor that takes a URI. You might also consider handling the ImageOpened event if
there are any timing issues with retrieving or decoding the image source, where you might need alternate content
to display until the image source is available. For example code, see XAML images sample.
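Here's a minimal sketch of that code path, assuming an Image element named "myImage" exists in the page's XAML and the image file is part of the app package.

var bitmap = new BitmapImage(new Uri("ms-appx:///Assets/licorice.jpg"));
myImage.Source = bitmap;

// The same BitmapImage can also back an ImageBrush.
var brush = new ImageBrush { ImageSource = bitmap };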

NOTE
If you establish images using code, you can use automatic handling for accessing unqualified resources with current scale
and culture qualifiers, or you can use ResourceManager and ResourceMap with qualifiers for culture and scale to obtain
the resources directly. For more info see Resource management system.

Related articles
Audio, video, and camera
Image class
ImageBrush class
Inking controls

There are two different controls that facilitate inking in Universal Windows Platform (UWP) apps: InkCanvas and
InkToolbar.
The InkCanvas control renders pen input as either an ink stroke (using default settings for color and thickness) or
an erase stroke. This control is a transparent overlay that doesn't include any built-in UI for changing the default
ink stroke properties.

NOTE
InkCanvas can be configured to support similar functionality for both mouse and touch input.

As the InkCanvas control does not include support for changing the default ink stroke settings, it can be paired with
an InkToolbar control. The InkToolbar contains a customizable and extensible collection of buttons that activate ink-
related features in an associated InkCanvas.
By default, the InkToolbar includes buttons for drawing, erasing, highlighting, and displaying a ruler. Depending on
the feature, other settings and commands, such as ink color, stroke thickness, and erase all ink, are provided in a flyout.

NOTE
InkToolbar supports pen and mouse input and can be configured to recognize touch input.

Important APIs
InkCanvas class
InkToolbar class
InkPresenter class
Windows.UI.Input.Inking
Is this the right control?
Use the InkCanvas when you need to enable basic inking features in your app without providing any ink settings to
the user.
By default, strokes are rendered as ink when using the pen tip (a black ballpoint pen with a thickness of 2 pixels)
and as an eraser when using the eraser tip. If an eraser tip is not present, the InkCanvas can be configured to
process input from the pen tip as an erase stroke.
Pair the InkCanvas with an InkToolbar to provide a UI for activating ink features and setting basic ink properties
such as stroke size, color, and shape of the pen tip.

NOTE
For more extensive customization of ink stroke rendering on an InkCanvas, use the underlying InkPresenter object.
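For example, here's a minimal sketch that uses the InkPresenter to accept mouse and touch input in addition to pen, assuming an InkCanvas named "myInkCanvas":

myInkCanvas.InkPresenter.InputDeviceTypes =
    Windows.UI.Core.CoreInputDeviceTypes.Pen |
    Windows.UI.Core.CoreInputDeviceTypes.Mouse |
    Windows.UI.Core.CoreInputDeviceTypes.Touch;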

Examples
Microsoft Edge
The Edge browser uses the InkCanvas and InkToolbar for Web Notes.

Windows Ink Workspace


The InkCanvas and InkToolbar are also used for both Sketchpad and Screen sketch in the Windows Ink
Workspace.

Create an InkCanvas and InkToolbar


Adding an InkCanvas to your app requires just one line of markup:

<InkCanvas x:Name="myInkCanvas" />
NOTE
For detailed InkCanvas customization using InkPresenter, see the "Pen and stylus interactions in UWP apps" article.

The InkToolbar control must be used in conjunction with an InkCanvas. Incorporating an InkToolbar (with all built-
in tools) into your app requires one additional line of markup:

<InkToolbar TargetInkCanvas="{x:Bind myInkCanvas}" />

This displays the following InkToolbar:


Built-in buttons
The InkToolbar includes the following built-in buttons:
Pens
Ballpoint pen - draws a solid, opaque stroke with a circle pen tip. The stroke size is dependent on the pen
pressure detected.
Pencil - draws a soft-edged, textured, and semi-transparent stroke (useful for layered shading effects) with a
circle pen tip. The stroke color (darkness) is dependent on the pen pressure detected.
Highlighter - draws a semi-transparent stroke with a rectangle pen tip.
You can customize both the color palette and size attributes (min, max, default) in the flyout for each pen.
Tool
Eraser - deletes any ink stroke touched. Note that the entire ink stroke is deleted, not just the portion under the
eraser stroke.
Toggle
Ruler - shows or hides the ruler. Drawing near the ruler edge causes the ink stroke to snap to the ruler.

Although this is the default configuration, you have complete control over which built-in buttons are included in
the InkToolbar for your app.
Custom buttons
The InkToolbar consists of two distinct groups of button types:
1. A group of "tool" buttons containing the built-in drawing, erasing, and highlighting buttons. Custom pens
and tools are added here.

NOTE
Feature selection is mutually exclusive.

2. A group of "toggle" buttons containing the built-in ruler button. Custom toggles are added here.
NOTE
Features are not mutually exclusive and can be used concurrently with other active tools.

Depending on your application and the inking functionality required, you can add any of the following buttons
(bound to your custom ink features) to the InkToolbar:
Custom pen - a pen for which the ink color palette and pen tip properties, such as shape, rotation, and size, are
defined by the host app.
Custom tool - a non-pen tool, defined by the host app.
Custom toggle - sets the state of an app-defined feature to on or off. When turned on, the feature works in
conjunction with the active tool.

NOTE
You cannot change the display order of the built-in buttons. The default display order is: Ballpoint pen, pencil, highlighter,
eraser, and ruler. Custom pens are appended after the last default pen, custom tool buttons are added between the last pen
button and the eraser button, and custom toggle buttons are added after the ruler button. (Custom buttons are added in the
order they are specified.)

Although the InkToolbar can be a top-level item, it is typically exposed through an Inking button or command. We
recommend using the EE56 glyph from the Segoe MDL2 Assets font as a top-level icon.

InkToolbar Interaction
All built-in pen and tool buttons include a flyout menu where ink properties and pen tip shape and size can be set.

An "extension glyph" is displayed on the button to indicate the existence of the flyout.
The flyout is shown when the button of an active tool is selected again. When the color or size is changed, the
flyout is automatically dismissed and inking can be resumed. Custom pens and tools can use the default flyout or
specify a custom flyout.
The eraser also has a flyout that provides the Erase All Ink command.

For information on customization and extensibility, check out the SimpleInk sample.

Do's and don'ts


The InkCanvas, and inking in general, is best experienced through an active pen. However, we recommend
supporting inking with mouse and touch (including passive pen) input if required by your app.
Use an InkToolbar control with the InkCanvas to provide basic inking features and settings. Both the InkCanvas
and InkToolbar can be programmatically customized.
The InkToolbar, and inking in general, is best experienced through an active pen. However, inking with mouse
and touch can be supported if required by your app.
If supporting inking with touch input, we recommend using the ED5F icon from the Segoe MDL2 Assets font for
the toggle button, with a "Touch writing" tooltip.
If providing stroke selection, we recommend using the EF20 icon from the Segoe MDL2 Assets font for the tool
button, with a "Selection tool" tooltip.
If using more than one InkCanvas, we recommend using a single InkToolbar to control inking across canvases.
For best performance, we recommend altering the default flyout rather than creating a custom one for both
default and custom tools.

Get the sample code


SimpleInk sample demonstrates 8 scenarios around the customization and extensibility capabilities of the
InkCanvas and InkToolbar controls. Each scenario provides basic guidance on common inking situations and
control implementations.
For a more advanced inking sample, see ComplexInk sample.

Related articles
Pen and stylus interactions in UWP apps
Recognize ink strokes
Store and retrieve ink strokes
Lists

Lists display and enable interactions with collection-based content. The four list patterns covered in this article
include:
List views, which are primarily used to display text-heavy content collections
Grid views, which are primarily used to display image-heavy content collections
Drop-down lists, which let users choose one item from an expanding list
List boxes, which let users choose one item or multiple items from a box that can be scrolled
Design guidelines, features, and examples are given for each list pattern. At the end of the article are links to
related topics and APIs.

Important APIs
ListView class
GridView class
ComboBox class

List views
List views let you categorize items and assign group headers, drag and drop items, curate content, and
reorder items.
Is this the right control?
Use a list view to:
Display a content collection that primarily consists of text.
Navigate a single or categorized collection of content.
Create the master pane in the master/details pattern. A master/details pattern is often used in email apps,
in which one pane (the master) has a list of selectable items while the other pane (details) has a detailed
view of the selected item.
Examples
Here's a simple list view showing grouped data on a phone.
Recommendations
Items within a list should have the same behavior.
If your list is divided into groups, you can use semantic zoom to make it easier for users to navigate
through grouped content.
List view articles

List view and grid view: Learn the essentials of using a list view or grid view in your app.

List view item templates: The items you display in a list or grid can play a major role in the overall look of your
app. Modify control templates and data templates to define the look of the items and make your app look great.

Inverted lists: Inverted lists have new items added at the bottom, like in a chat app. Follow this guidance to use an
inverted list in your app.

Pull-to-refresh: The pull-to-refresh pattern lets a user pull down on a list of data using touch in order to retrieve
more data. Use this guidance to implement pull-to-refresh in your list view.

Nested UI: Nested UI is a user interface (UI) that exposes actionable controls enclosed inside a container that a
user can also take action on. For example, you might have a list view item that contains a button, and the user can
select the list item or press the button nested within it. Follow these best practices to provide the best nested UI
experience for your users.

Grid views
Grid views are suited for arranging and browsing image-based content collections. A grid view layout scrolls
vertically and pans horizontally. Items are laid out in a left-to-right, then top-to-bottom reading order.
Is this the right control?
Use a grid view to:
Display a content collection that primarily consists of images.
Display content libraries.
Format the two content views associated with semantic zoom.
Examples
This example shows a typical grid view layout, in this case for browsing apps. Metadata for grid view items is
usually restricted to a few lines of text and an item rating.

A grid view is an ideal solution for a content library, which is often used to present media such as pictures and
videos. In a content library, users expect to be able to tap an item to invoke an action.
Recommendations
Items within a list should have the same behavior.
If your list is divided into groups, you can use semantic zoom to make it easier for users to navigate
through grouped content.
Grid view articles

List view and grid view: Learn the essentials of using a list view or grid view in your app.

List view item templates: The items you display in a list or grid can play a major role in the overall look of your
app. Modify control templates and data templates to define the look of the items and make your app look great.

Nested UI: Nested UI is a user interface (UI) that exposes actionable controls enclosed inside a container that a
user can also take action on. For example, you might have a list view item that contains a button, and the user can
select the list item or press the button nested within it. Follow these best practices to provide the best nested UI
experience for your users.

Drop-down lists
Drop-down lists, also known as combo boxes, start in a compact state and expand to show a list of selectable
items. The selected item is always visible, and non-visible items can be brought into view when the user taps
the combo box to expand it.
Is this the right control?
Use a drop-down list to let users select a single value from a set of items that can be adequately
represented with single lines of text.
Use a list or grid view instead of a combo box to display items that contain multiple lines of text or images.
When there are fewer than five items, consider using radio buttons (if only one item can be selected) or
check boxes (if multiple items can be selected).
Use a combo box when the selection items are of secondary importance in the flow of your app. If the
default option is recommended for most users in most situations, showing all the items by using a list
view might draw more attention to the options than necessary. You can save space and minimize
distraction by using a combo box.
Examples
A combo box in its compact state can show a header.

Although combo boxes expand to support longer string lengths, avoid excessively long strings that are
difficult to read.

If the collection in a combo box is long enough, a scroll bar will appear to accommodate it. Group items
logically in the list.

Recommendations
Limit the text content of combo box items to a single line.
Sort items in a combo box in the most logical order. Group together related options and place the most
common options at the top. Sort names in alphabetical order, numbers in numerical order, and dates in
chronological order.
Text Search
Combo boxes automatically support search within their collections. As users type characters on a physical
keyboard while focused on an open or closed combo box, candidates matching the user's string are brought
into view. This functionality is especially helpful when navigating a long list. For example, when interacting
with a drop-down containing a list of states, users can press the w key to bring Washington into view for
quick selection.
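Here's a minimal sketch of a drop-down list built in code; the header, placeholder text, and items are illustrative.

var stateCombo = new ComboBox
{
    Header = "State",
    PlaceholderText = "Pick a state"
};
foreach (string state in new[] { "Oregon", "Texas", "Washington" })
{
    stateCombo.Items.Add(state);
}
stateCombo.SelectionChanged += (s, e) =>
{
    // stateCombo.SelectedItem holds the chosen value.
};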

List boxes
A list box allows the user to choose either a single item or multiple items from a collection. List boxes are
similar to drop-down lists, except that list boxes are always open; there is no compact (non-expanded) state
for a list box. Items in the list can be scrolled if there isn't space to show everything.
Is this the right control?
A list box can be useful when items in the list are important enough to prominently display, and when
there's enough screen real estate to show the full list.
A list box should draw the user's attention to the full set of alternatives in an important choice. By contrast,
a drop-down list initially draws the user's attention to the selected item.
Avoid using a list box if:
There is a very small number of items for the list. A single-select list box that always has the same 2
options might be better presented as radio buttons. Also consider using radio buttons when there
are 3 or 4 static items in the list.
The list box is single-select and it always has the same 2 options where one can be implied as not
the other, such as "on" and "off." Use a single check box or a toggle switch.
There is a very large number of items. Better choices for long lists are grid view and list view. For
very long lists of grouped data, semantic zoom is preferred.
The items are contiguous numerical values. If that's the case, consider using a slider.
The selection items are of secondary importance in the flow of your app or the default option is
recommended for most users in most situations. Use a drop-down list instead.
Recommendations
The ideal range of items in a list box is 3 to 9.
A list box works well when its items can dynamically vary.
If possible, set the size of a list box so that its list of items doesn't need to be panned or scrolled.
Verify that the purpose of the list box, and which items are currently selected, is clear.
Reserve visual effects and animations for touch feedback, and for the selected state of items.
Limit the list box item's text content to a single line. If the items are visuals, you can customize the size. If
an item contains multiple lines of text or images, instead use a grid view or list view.
Use the default font unless your brand guidelines indicate to use another.
Don't use a list box to perform commands or to dynamically show or hide other controls.

Selection mode
Selection mode lets users select and take action on a single item or on multiple items. It can be invoked
through a context menu, by using CTRL+click or SHIFT+click on an item, or by rolling over a target on an
item in a gallery view. When selection mode is active, check boxes appear next to each list item, and actions
can appear at the top or the bottom of the screen.
There are three selection modes:
Single: The user can select only one item at a time.
Multiple: The user can select multiple items without using a modifier.
Extended: The user can select multiple items with a modifier, such as holding down the SHIFT key.
Tapping anywhere on an item selects it. Tapping on the command bar action affects all selected items. If no
item is selected, command bar actions should be inactive, except for "Select All".
Selection mode doesn't have a light dismiss model; tapping outside of the frame in which selection mode is
active won't cancel the mode. This is to prevent accidental deactivation of the mode. Clicking the back button
dismisses the multi-select mode.
Show a visual confirmation when an action is selected. Consider displaying a confirmation dialog for certain
actions, especially destructive actions such as delete.
Selection mode is confined to the page in which it is active, and can't affect any items outside of that page.
The entry point to selection mode should be juxtaposed against the content it affects.
For command bar recommendations, see guidelines for command bars.

Globalization and localization checklist


WRAPPING: Allow two lines for the list label.

HORIZONTAL EXPANSION: Make sure fields can accommodate text expansion and are scrollable.

VERTICAL SPACING: Use non-Latin characters for vertical spacing to ensure non-Latin scripts will display
properly.

Related articles
Hub
Master/details
Nav pane
Semantic zoom
Drag and drop
For developers
ListView class
GridView class
ComboBox class
ListBox class
ListView and GridView

Most applications manipulate and display sets of data, such as a gallery of images or a set of email messages. The
XAML UI framework provides ListView and GridView controls that make it easy to display and manipulate data in
your app.
ListView and GridView both derive from the ListViewBase class, so they have the same functionality, but display
data differently. In this article, when we talk about ListView, the info applies to both the ListView and GridView
controls unless otherwise specified. We may refer to classes like ListView or ListViewItem, but the List prefix can
be replaced with Grid for the corresponding grid equivalent (GridView or GridViewItem).

Important APIs
ListView class
GridView class
ItemsSource property
Items property

Is this the right control?


The ListView displays data stacked vertically in a single column. It's often used to show an ordered list of items,
such as a list of emails or search results.
The GridView presents a collection of items in rows and columns that can scroll vertically. Data is stacked
horizontally until it fills the columns, then continues with the next row. It's often used when you need to show a
rich visualization of each item that takes more space, such as a photo gallery.

For a more detailed comparison and guidance on which control to use, see Lists.

Create a list view


List view is an ItemsControl, so it can contain a collection of items of any type. It must have items in its Items
collection before it can show anything on the screen. To populate the view, you can add items directly to the Items
collection, or set the ItemsSource property to a data source.
Important You can use either Items or ItemsSource to populate the list, but you can't use both at the same time.
If you set the ItemsSource property and you add an item in XAML, the added item is ignored. If you set the
ItemsSource property and you add an item to the Items collection in code, an exception is thrown.

Note Many of the examples in this article populate the Items collection directly for the sake of simplicity.
However, it's more common for the items in a list to come from a dynamic source, like a list of books from an
online database. You use the ItemsSource property for this purpose.

Add items to the Items collection


You can add items to the Items collection using XAML or code. You typically add items this way if you have a
small number of items that don't change and are easily defined in XAML, or if you generate the items in code at
run time.
Here's a list view with items defined inline in XAML. When you define the items in XAML, they are automatically
added to the Items collection.
XAML

<ListView x:Name="listView1">
<x:String>Item 1</x:String>
<x:String>Item 2</x:String>
<x:String>Item 3</x:String>
<x:String>Item 4</x:String>
<x:String>Item 5</x:String>
</ListView>

Here's the list view created in code. The resulting list is the same as the one created previously in XAML.
C#

// Create a new ListView and add content.
ListView listView1 = new ListView();
listView1.Items.Add("Item 1");
listView1.Items.Add("Item 2");
listView1.Items.Add("Item 3");
listView1.Items.Add("Item 4");
listView1.Items.Add("Item 5");

// Add the ListView to a parent container in the visual tree.
stackPanel1.Children.Add(listView1);

The ListView looks like this.

Set the items source


You typically use a list view to display data from a source such as a database or the Internet. To populate a list
view from a data source, you set its ItemsSource property to a collection of data items.
Here, the list view's ItemsSource is set in code directly to an instance of a collection.
C#

// Instead of hard coded items, the data could be pulled
// asynchronously from a database or the internet.
ObservableCollection<string> listItems = new ObservableCollection<string>();
listItems.Add("Item 1");
listItems.Add("Item 2");
listItems.Add("Item 3");
listItems.Add("Item 4");
listItems.Add("Item 5");

// Create a new list view and add content.
ListView itemListView = new ListView();
itemListView.ItemsSource = listItems;

// Add the list view to a parent container in the visual tree.
stackPanel1.Children.Add(itemListView);

You can also bind the ItemsSource property to a collection in XAML. For more info about data binding, see Data
binding overview.
Here, the ItemsSource is bound to a public property named Items that exposes the Page's private data collection.
XAML

<ListView x:Name="itemListView" ItemsSource="{x:Bind Items}"/>

C#

private ObservableCollection<string> _items = new ObservableCollection<string>();

public ObservableCollection<string> Items
{
    get { return this._items; }
}

protected override void OnNavigatedTo(NavigationEventArgs e)
{
    base.OnNavigatedTo(e);

    // Instead of hard coded items, the data could be pulled
    // asynchronously from a database or the internet.
    Items.Add("Item 1");
    Items.Add("Item 2");
    Items.Add("Item 3");
    Items.Add("Item 4");
    Items.Add("Item 5");
}

If you need to show grouped data in your list view, you must bind to a CollectionViewSource. The
CollectionViewSource acts as a proxy for the collection class in XAML and enables grouping support. For more
info, see CollectionViewSource.
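Here's a minimal sketch of that approach in code; the sample data and the grouping key (first letter) are illustrative.

C#

var items = new[] { "Apple", "Avocado", "Banana", "Blueberry", "Cherry" };
var grouped = items.GroupBy(s => s[0])   // requires System.Linq
                   .OrderBy(g => g.Key)
                   .ToList();

var cvs = new Windows.UI.Xaml.Data.CollectionViewSource
{
    IsSourceGrouped = true,
    Source = grouped
};
listView1.ItemsSource = cvs.View;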

Data template
An items data template defines how the data is visualized. By default, a data item is displayed in the list view as
the string representation of the data object it's bound to. You can show the string representation of a particular
property of the data item by setting the DisplayMemberPath to that property.
However, you typically want to show a richer presentation of your data. To specify exactly how items in the list
view are displayed, you create a DataTemplate. The XAML in the DataTemplate defines the layout and
appearance of controls used to display an individual item. The controls in the layout can be bound to properties of
a data object, or have static content defined inline. You assign the DataTemplate to the ItemTemplate property of
the list control.
In this example, the data item is a simple string. You use a DataTemplate to add an image to the left of the string,
and show the string in blue.

Note When you use the x:Bind markup extension in a DataTemplate, you have to specify the DataType
(x:DataType) on the DataTemplate.

XAML

<ListView x:Name="listView1">
<ListView.ItemTemplate>
<DataTemplate x:DataType="x:String">
<Grid>
<Grid.ColumnDefinitions>
<ColumnDefinition Width="54"/>
<ColumnDefinition/>
</Grid.ColumnDefinitions>
<Image Source="Assets/placeholder.png" Width="44" Height="44"
HorizontalAlignment="Left"/>
<TextBlock Text="{x:Bind}" Foreground="Blue"
FontSize="36" Grid.Column="1"/>
</Grid>
</DataTemplate>
</ListView.ItemTemplate>
<x:String>Item 1</x:String>
<x:String>Item 2</x:String>
<x:String>Item 3</x:String>
<x:String>Item 4</x:String>
<x:String>Item 5</x:String>
</ListView>

Here's what the data items look like when displayed with this data template.

Data templates are the primary way you define the look of your list view. They can also have a significant impact
on performance if your list displays a large number of items. In this article, we use simple string data for most of
the examples, and don't specify a data template. For more info and examples of how to use data templates and
item containers to define the look of items in your list or grid, see List view item templates.

Change the layout of items


When you add items to a list view or grid view, the control automatically wraps each item in an item container and
then lays out all of the item containers. How these item containers are laid out depends on the ItemsPanel of the
control.
By default, ListView uses an ItemsStackPanel, which produces a vertical list, like this.

GridView uses an ItemsWrapGrid, which adds items horizontally, and wraps and scrolls vertically, like this.

You can modify the layout of items by adjusting properties on the items panel, or you can replace the default
panel with another panel.

Note Be careful to not disable virtualization if you change the ItemsPanel. Both ItemsStackPanel and
ItemsWrapGrid support virtualization, so these are safe to use. If you use any other panel, you might disable
virtualization and slow the performance of the list view. For more info, see the list view articles under
Performance.

This example shows how to make a ListView lay out its item containers in a horizontal list by changing the
Orientation property of the ItemsStackPanel. Because the list view scrolls vertically by default, you also need to
adjust some properties on the list view's internal ScrollViewer to make it scroll horizontally:
ScrollViewer.HorizontalScrollMode to Enabled or Auto
ScrollViewer.HorizontalScrollBarVisibility to Auto
ScrollViewer.VerticalScrollMode to Disabled
ScrollViewer.VerticalScrollBarVisibility to Hidden

Note These examples are shown with the list view width unconstrained, so the horizontal scrollbars are not
shown. If you run this code, you can set Width="180" on the ListView to make the scrollbars show.

XAML
<ListView Height="60"
ScrollViewer.HorizontalScrollMode="Enabled"
ScrollViewer.HorizontalScrollBarVisibility="Auto"
ScrollViewer.VerticalScrollMode="Disabled"
ScrollViewer.VerticalScrollBarVisibility="Hidden">
<ListView.ItemsPanel>
<ItemsPanelTemplate>
<ItemsStackPanel Orientation="Horizontal"/>
</ItemsPanelTemplate>
</ListView.ItemsPanel>
<x:String>Item 1</x:String>
<x:String>Item 2</x:String>
<x:String>Item 3</x:String>
<x:String>Item 4</x:String>
<x:String>Item 5</x:String>
</ListView>

The resulting list looks like this.

In the next example, the ListView lays out items in a vertical wrapping list by using an ItemsWrapGrid instead of
an ItemsStackPanel.

Note The height of the list view must be constrained to force the control to wrap the containers.

XAML

<ListView Height="100"
ScrollViewer.HorizontalScrollMode="Enabled"
ScrollViewer.HorizontalScrollBarVisibility="Auto"
ScrollViewer.VerticalScrollMode="Disabled"
ScrollViewer.VerticalScrollBarVisibility="Hidden">
<ListView.ItemsPanel>
<ItemsPanelTemplate>
<ItemsWrapGrid/>
</ItemsPanelTemplate>
</ListView.ItemsPanel>
<x:String>Item 1</x:String>
<x:String>Item 2</x:String>
<x:String>Item 3</x:String>
<x:String>Item 4</x:String>
<x:String>Item 5</x:String>
</ListView>

The resulting list looks like this.

If you show grouped data in your list view, the ItemsPanel determines how the item groups are laid out, not how
the individual items are laid out. For example, if the horizontal ItemsStackPanel shown previously is used to
show grouped data, the groups are arranged horizontally, but the items in each group are still stacked vertically,
as shown here.
Item selection and interaction
You can choose from various ways to let a user interact with a list view. By default, a user can select a single item.
You can change the SelectionMode property to enable multi-selection or to disable selection. You can set the
IsItemClickEnabled property so that a user clicks an item to invoke an action (like a button) instead of selecting
the item.

Note Both ListView and GridView use the ListViewSelectionMode enumeration for their SelectionMode
properties. IsItemClickEnabled is False by default, so you need to set it only to enable click mode.

This table shows the ways a user can interact with a list view, and how you can respond to the interaction.

TO ENABLE THIS INTERACTION:   USE THESE SETTINGS:                                    HANDLE THIS EVENT:   GET THE SELECTED ITEM WITH:

No interaction                SelectionMode = None, IsItemClickEnabled = False       N/A                  N/A

Single selection              SelectionMode = Single, IsItemClickEnabled = False     SelectionChanged     SelectedItem, SelectedIndex

Multiple selection            SelectionMode = Multiple, IsItemClickEnabled = False   SelectionChanged     SelectedItems

Extended selection            SelectionMode = Extended, IsItemClickEnabled = False   SelectionChanged     SelectedItems

Click                         SelectionMode = None, IsItemClickEnabled = True        ItemClick            N/A

Note Starting in Windows 10, you can enable IsItemClickEnabled to raise an ItemClick event while
SelectionMode is also set to Single, Multiple, or Extended. If you do this, the ItemClick event is raised first, and
then the SelectionChanged event is raised. In some cases, like if you navigate to another page in the ItemClick
event handler, the SelectionChanged event is not raised and the item is not selected.

You can set these properties in XAML or in code, as shown here.


XAML

<ListView x:Name="myListView" SelectionMode="Multiple"/>

<GridView x:Name="myGridView" SelectionMode="None" IsItemClickEnabled="True"/>


C#

myListView.SelectionMode = ListViewSelectionMode.Multiple;

myGridView.SelectionMode = ListViewSelectionMode.None;
myGridView.IsItemClickEnabled = true;

Read-only
You can set the SelectionMode property to ListViewSelectionMode.None to disable item selection. This puts the
control in read-only mode; it displays data, but users can't interact with it. The control itself is not disabled;
only item selection is disabled.
Single selection
This table describes the keyboard, mouse, and touch interactions when SelectionMode is Single.

MODIFIER KEY   INTERACTION

None           A user can select a single item using the space bar, mouse click, or touch tap.

Ctrl           A user can deselect a single item using the space bar, mouse click, or touch tap.
               Using the arrow keys, a user can move focus independently of selection.

When SelectionMode is Single, you can get the selected data item from the SelectedItem property. You can get
the index in the collection of the selected item using the SelectedIndex property. If no item is selected,
SelectedItem is null, and SelectedIndex is -1.
If you try to set an item that is not in the Items collection as the SelectedItem, the operation is ignored and
SelectedItem is null. However, if you try to set the SelectedIndex to an index that's out of the range of the Items
in the list, a System.ArgumentException exception occurs.
Multiple selection
This table describes the keyboard, mouse, and touch interactions when SelectionMode is Multiple.

MODIFIER KEY   INTERACTION

None           A user can select multiple items using the space bar, mouse click, or touch tap to toggle
               selection on the focused item. Using the arrow keys, a user can move focus independently
               of selection.

Shift          A user can select multiple contiguous items by clicking or tapping the first item in the
               selection and then the last item in the selection. Using the arrow keys, a user can create
               a contiguous selection starting with the item selected when Shift is pressed.

Extended selection
This table describes the keyboard, mouse, and touch interactions when SelectionMode is Extended.

MODIFIER KEY   INTERACTION

None           The behavior is the same as Single selection.

Ctrl           A user can select multiple items using the space bar, mouse click, or touch tap to toggle
               selection on the focused item. Using the arrow keys, a user can move focus independently
               of selection.

Shift          A user can select multiple contiguous items by clicking or tapping the first item in the
               selection and then the last item in the selection. Using the arrow keys, a user can create
               a contiguous selection starting with the item selected when Shift is pressed.

When SelectionMode is Multiple or Extended, you can get the selected data items from the SelectedItems
property.
The SelectedIndex, SelectedItem, and SelectedItems properties are synchronized. For example, if you set
SelectedIndex to -1, SelectedItem is set to null and SelectedItems is empty; if you set SelectedItem to null,
SelectedIndex is set to -1 and SelectedItems is empty.
In multi-select mode, SelectedItem contains the item that was selected first, and SelectedIndex contains the
index of the item that was selected first.
Respond to selection changes
To respond to selection changes in a list view, handle the SelectionChanged event. In the event handler code,
you can get the list of selected items from the SelectionChangedEventArgs.AddedItems property. You can get
any items that were deselected from the SelectionChangedEventArgs.RemovedItems property. The
AddedItems and RemovedItems collections contain at most 1 item unless the user selects a range of items by
holding down the Shift key.
This example shows how to handle the SelectionChanged event and access the various items collections.
XAML

<StackPanel HorizontalAlignment="Right">
<ListView x:Name="listView1" SelectionMode="Multiple"
SelectionChanged="ListView1_SelectionChanged">
<x:String>Item 1</x:String>
<x:String>Item 2</x:String>
<x:String>Item 3</x:String>
<x:String>Item 4</x:String>
<x:String>Item 5</x:String>
</ListView>
<TextBlock x:Name="selectedItem"/>
<TextBlock x:Name="selectedIndex"/>
<TextBlock x:Name="selectedItemCount"/>
<TextBlock x:Name="addedItems"/>
<TextBlock x:Name="removedItems"/>
</StackPanel>

C#
private void ListView1_SelectionChanged(object sender, SelectionChangedEventArgs e)
{
    if (listView1.SelectedItem != null)
    {
        selectedItem.Text = "Selected item: " + listView1.SelectedItem.ToString();
    }
    else
    {
        selectedItem.Text = "Selected item: null";
    }
    selectedIndex.Text = "Selected index: " + listView1.SelectedIndex.ToString();
    selectedItemCount.Text = "Items selected: " + listView1.SelectedItems.Count.ToString();
    addedItems.Text = "Added: " + e.AddedItems.Count.ToString();
    removedItems.Text = "Removed: " + e.RemovedItems.Count.ToString();
}

Click mode
You can change a list view so that a user clicks items like buttons instead of selecting them. For example, this is
useful when your app navigates to a new page when your user clicks an item in a list or grid. To enable this
behavior:
Set SelectionMode to None.
Set IsItemClickEnabled to true.
Handle the ItemClick event to do something when your user clicks an item.
Here's a list view with clickable items. The code in the ItemClick event handler navigates to a new page.
XAML

<ListView SelectionMode="None"
IsItemClickEnabled="True"
ItemClick="ListView1_ItemClick">
<x:String>Page 1</x:String>
<x:String>Page 2</x:String>
<x:String>Page 3</x:String>
<x:String>Page 4</x:String>
<x:String>Page 5</x:String>
</ListView>

C#
private void ListView1_ItemClick(object sender, ItemClickEventArgs e)
{
    switch (e.ClickedItem.ToString())
    {
        case "Page 1":
            this.Frame.Navigate(typeof(Page1));
            break;
        case "Page 2":
            this.Frame.Navigate(typeof(Page2));
            break;
        case "Page 3":
            this.Frame.Navigate(typeof(Page3));
            break;
        case "Page 4":
            this.Frame.Navigate(typeof(Page4));
            break;
        case "Page 5":
            this.Frame.Navigate(typeof(Page5));
            break;
        default:
            break;
    }
}

Select a range of items programmatically


Sometimes, you need to manipulate a list view's item selection programmatically. For example, you might have a
Select all button to let a user select all items in a list. In this case, it's usually not very efficient to add and remove
items from the SelectedItems collection one by one. Each item change causes a SelectionChanged event to occur,
and when you work with the items directly instead of working with index values, the item is de-virtualized.
The SelectAll, SelectRange, and DeselectRange methods provide a more efficient way to modify the selection
than using the SelectedItems property. These methods select or deselect using ranges of item indexes. Items that
are virtualized remain virtualized, because only the index is used. All items in the specified range are selected (or
deselected), regardless of their original selection state. The SelectionChanged event occurs only once for each call
to these methods.

Important You should call these methods only when the SelectionMode property is set to Multiple or
Extended. If you call SelectRange when the SelectionMode is Single or None, an exception is thrown.

When you select items using index ranges, use the SelectedRanges property to get all selected ranges in the list.
If the ItemsSource implements IItemsRangeInfo, and you use these methods to modify the selection, the
AddedItems and RemovedItems properties are not set in the SelectionChangedEventArgs. Setting these
properties requires de-virtualizing the item object. Use the SelectedRanges property to get the items instead.
You can select all items in a collection by calling the SelectAll method. However, there is no corresponding method
to deselect all items. You can deselect all items by calling DeselectRange and passing an ItemIndexRange with a
FirstIndex value of 0 and a Length value equal to the number of items in the collection.
XAML
<StackPanel Width="160">
<Button Content="Select all" Click="SelectAllButton_Click"/>
<Button Content="Deselect all" Click="DeselectAllButton_Click"/>
<ListView x:Name="listView1" SelectionMode="Multiple">
<x:String>Item 1</x:String>
<x:String>Item 2</x:String>
<x:String>Item 3</x:String>
<x:String>Item 4</x:String>
<x:String>Item 5</x:String>
</ListView>
</StackPanel>

C#

private void SelectAllButton_Click(object sender, RoutedEventArgs e)
{
if (listView1.SelectionMode == ListViewSelectionMode.Multiple ||
listView1.SelectionMode == ListViewSelectionMode.Extended)
{
listView1.SelectAll();
}
}

private void DeselectAllButton_Click(object sender, RoutedEventArgs e)
{
if (listView1.SelectionMode == ListViewSelectionMode.Multiple ||
listView1.SelectionMode == ListViewSelectionMode.Extended)
{
listView1.DeselectRange(new ItemIndexRange(0, (uint)listView1.Items.Count));
}
}
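
To select a specific range of items, call SelectRange with an ItemIndexRange. Here's a minimal sketch that builds on the same listView1; the button handler name is hypothetical. It selects the first three items and then reads the selection back through SelectedRanges, so no item is de-virtualized.

C#

private void SelectFirstThreeButton_Click(object sender, RoutedEventArgs e)
{
    if (listView1.SelectionMode == ListViewSelectionMode.Multiple ||
        listView1.SelectionMode == ListViewSelectionMode.Extended)
    {
        // Select the items at index 0, 1, and 2. Only indexes are used,
        // so virtualized items stay virtualized.
        listView1.SelectRange(new ItemIndexRange(0, 3));

        // The selection is reported as index ranges.
        foreach (ItemIndexRange range in listView1.SelectedRanges)
        {
            System.Diagnostics.Debug.WriteLine(
                "Selected range: " + range.FirstIndex + " - " + range.LastIndex);
        }
    }
}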

For info about how to change the look of selected items, see List view item templates.
Drag and drop
ListView and GridView controls support drag and drop of items within themselves, and between themselves and
other ListView and GridView controls. For more info about implementing the drag and drop pattern, see Drag and
drop.

Get the sample code


XAML ListView and GridView sample
This sample shows the usage of ListView and GridView controls.
XAML UI basics sample
See all of the XAML controls in an interactive format.

Related articles
Lists
List view item templates
Drag and drop
Item containers and templates

ListView and GridView controls manage how their items are arranged (horizontal, vertical, wrapping, etc.) and
how a user interacts with the items, but not how the individual items are shown on the screen. Item visualization is
managed by item containers. When you add items to a list view, they are automatically placed in a container. The
default item container for ListView is ListViewItem; for GridView, it's GridViewItem.

Important APIs
ListView class
GridView class
ItemTemplate property
ItemContainerStyle property

ListView and GridView both derive from the ListViewBase class, so they have the same functionality, but
display data differently. In this article, when we talk about list view, the info applies to both the ListView and
GridView controls unless otherwise specified. We may refer to classes like ListView or ListViewItem, but the
List prefix can be replaced with Grid for the corresponding grid equivalent (GridView or GridViewItem).

These container controls consist of two important parts that combine to create the final visuals shown for an item:
the data template and the control template.
Data template - You assign a DataTemplate to the ItemTemplate property of the list view to specify how
individual data items are shown.
Control template - The control template provides the part of the item visualization that the framework is
responsible for, like visual states. You can use the ItemContainerStyle property to modify the control
template. Typically, you do this to modify the list view colors to match your branding, or change how selected
items are shown.
This image shows how the control template and the data template combine to create the final visual for an item.

Here's the XAML that creates this item. We explain the templates later.
<ListView Width="220" SelectionMode="Multiple">
<ListView.ItemTemplate>
<DataTemplate x:DataType="x:String">
<Grid Background="Yellow">
<Grid.ColumnDefinitions>
<ColumnDefinition Width="54"/>
<ColumnDefinition/>
</Grid.ColumnDefinitions>
<Image Source="Assets/placeholder.png" Width="44" Height="44"
HorizontalAlignment="Left"/>
<TextBlock Text="{x:Bind}" Foreground="Blue"
FontSize="36" Grid.Column="1"/>
</Grid>
</DataTemplate>
</ListView.ItemTemplate>
<ListView.ItemContainerStyle>
<Style TargetType="ListViewItem">
<Setter Property="Background" Value="Green"/>
</Style>
</ListView.ItemContainerStyle>
<x:String>Item 1</x:String>
<x:String>Item 2</x:String>
<x:String>Item 3</x:String>
<x:String>Item 4</x:String>
<x:String>Item 5</x:String>
</ListView>

Prerequisites
We assume that you know how to use a list view control. For more info, see the ListView and GridView article.
We also assume that you understand control styles and templates, including how to use a style inline or as a
resource. For more info, see Styling controls and Control templates.

The data
Before we look deeper into how to show data items in a list view, we need to understand the data to be shown. In
this example, we create a data type called NamedColor. It combines a color name, color value, and a
SolidColorBrush for the color, which are exposed as 3 properties: Name, Color, and Brush.
We then populate a List with a NamedColor object for each named color in the Colors class. The list is set as the
ItemsSource for the list view.
Here's the code to define the class and populate the NamedColors list.
C#
using System.Collections.Generic;
using System.Linq;
using System.Reflection;
using Windows.UI;
using Windows.UI.Xaml.Controls;
using Windows.UI.Xaml.Media;

namespace ColorsListApp
{
public sealed partial class MainPage : Page
{
// The list of colors won't change after it's populated, so we use List<T>.
// If the data can change, we should use an ObservableCollection<T> instead.
List<NamedColor> NamedColors = new List<NamedColor>();

public MainPage()
{
this.InitializeComponent();

// Use reflection to get all the properties of the Colors class.
IEnumerable<PropertyInfo> propertyInfos = typeof(Colors).GetRuntimeProperties();

// For each property, create a NamedColor with the property name (color name),
// and property value (color value). Add it to the NamedColors list.
for (int i = 0; i < propertyInfos.Count(); i++)
{
NamedColors.Add(new NamedColor(propertyInfos.ElementAt(i).Name,
(Color)propertyInfos.ElementAt(i).GetValue(null)));
}

colorsListView.ItemsSource = NamedColors;
}
}

class NamedColor
{
public NamedColor(string colorName, Color colorValue)
{
Name = colorName;
Color = colorValue;
}

public string Name { get; set; }

public Color Color { get; set; }

public SolidColorBrush Brush
{
get { return new SolidColorBrush(Color); }
}
}
}

Data template
You specify a data template to tell the list view how your data item should be shown.
By default, a data item is displayed in the list view as the string representation of the data object it's bound to. If
you show the 'NamedColors' data in a list view without telling the list view how it should look, it just shows
whatever the ToString method returns, like this.
XAML
<ListView x:Name="colorsListView"/>

You can show the string representation of a particular property of the data item by setting the
DisplayMemberPath to that property. Here, you set DisplayMemberPath to the Name property of the
NamedColor item.

XAML

<ListView x:Name="colorsListView" DisplayMemberPath="Name" />

The list view now displays items by name, as shown here. It's more useful, but it's not very interesting and leaves a
lot of information hidden.

You typically want to show a richer presentation of your data. To specify exactly how items in the list view are
displayed, you create a DataTemplate. The XAML in the DataTemplate defines the layout and appearance of
controls used to display an individual item. The controls in the layout can be bound to properties of a data object,
or have static content defined inline. You assign the DataTemplate to the ItemTemplate property of the list
control.

IMPORTANT
You can't use an ItemTemplate and DisplayMemberPath at the same time. If both properties are set, an exception occurs.

Here, you define a DataTemplate that shows a Rectangle in the color of the item, along with the color name and
RGB values.

NOTE
When you use the x:Bind markup extension in a DataTemplate, you have to specify the DataType ( x:DataType ) on the
DataTemplate.

XAML
<ListView x:Name="colorsListView">
<ListView.ItemTemplate>
<DataTemplate x:DataType="local:NamedColor">
<Grid>
<Grid.ColumnDefinitions>
<ColumnDefinition MinWidth="54"/>
<ColumnDefinition Width="32"/>
<ColumnDefinition Width="32"/>
<ColumnDefinition Width="32"/>
<ColumnDefinition/>
</Grid.ColumnDefinitions>
<Grid.RowDefinitions>
<RowDefinition/>
<RowDefinition/>
</Grid.RowDefinitions>
<Rectangle Width="44" Height="44" Fill="{x:Bind Brush}" Grid.RowSpan="2"/>
<TextBlock Text="{x:Bind Name}" Grid.Column="1" Grid.ColumnSpan="4"/>
<TextBlock Text="{x:Bind Color.R}" Grid.Column="1" Grid.Row="1" Foreground="Red"/>
<TextBlock Text="{x:Bind Color.G}" Grid.Column="2" Grid.Row="1" Foreground="Green"/>
<TextBlock Text="{x:Bind Color.B}" Grid.Column="3" Grid.Row="1" Foreground="Blue"/>
</Grid>
</DataTemplate>
</ListView.ItemTemplate>
</ListView>

Here's what the data items look like when they're displayed with this data template.

You might want to show the data in a GridView. Here's another data template that displays the data in a way that's
more appropriate for a grid layout. This time, the data template is defined as a resource rather than inline with the
XAML for the GridView.
XAML
<Page.Resources>
<DataTemplate x:Key="namedColorItemGridTemplate" x:DataType="local:NamedColor">
<Grid>
<Grid.ColumnDefinitions>
<ColumnDefinition Width="32"/>
<ColumnDefinition Width="32"/>
<ColumnDefinition Width="32"/>
</Grid.ColumnDefinitions>
<Grid.RowDefinitions>
<RowDefinition Height="96"/>
<RowDefinition/>
<RowDefinition/>
</Grid.RowDefinitions>

<Rectangle Width="96" Height="96" Fill="{x:Bind Brush}" Grid.ColumnSpan="3" />


<!-- Name -->
<Border Background="#AAFFFFFF" Grid.ColumnSpan="3" Height="40" VerticalAlignment="Top">
<TextBlock Text="{x:Bind Name}" TextWrapping="Wrap" Margin="4,0,0,0"/>
</Border>
<!-- RGB -->
<Border Background="Gainsboro" Grid.Row="1" Grid.ColumnSpan="3"/>
<TextBlock Text="{x:Bind Color.R}" Foreground="Red"
Grid.Column="0" Grid.Row="1" HorizontalAlignment="Center"/>
<TextBlock Text="{x:Bind Color.G}" Foreground="Green"
Grid.Column="1" Grid.Row="1" HorizontalAlignment="Center"/>
<TextBlock Text="{x:Bind Color.B}" Foreground="Blue"
Grid.Column="2" Grid.Row="1" HorizontalAlignment="Center"/>
<!-- HEX -->
<Border Background="Gray" Grid.Row="2" Grid.ColumnSpan="3">
<TextBlock Text="{x:Bind Color}" Foreground="White" Margin="4,0,0,0"/>
</Border>
</Grid>
</DataTemplate>
</Page.Resources>

...

<GridView x:Name="colorsGridView"
ItemTemplate="{StaticResource namedColorItemGridTemplate}"/>

When the data is shown in a grid using this data template, it looks like this.

Performance considerations
Data templates are the primary way you define the look of your list view. They can also have a significant impact
on performance if your list displays a large number of items.
An instance of every XAML element in a data template is created for each item in the list view. For example, the
grid template in the previous example has 10 XAML elements (1 Grid, 1 Rectangle, 3 Borders, 5 TextBlocks). A
GridView that shows 20 items on screen using this data template creates at least 200 elements (20*10=200).
Reducing the number of elements in a data template can greatly reduce the total number of elements created for
your list view. For more info, see ListView and GridView UI optimization: Element count reduction per item.
Consider this section of the grid data template. Let's look at a few things that reduce the element count.
XAML

<!-- RGB -->
<Border Background="Gainsboro" Grid.Row="1" Grid.ColumnSpan="3"/>
<TextBlock Text="{x:Bind Color.R}" Foreground="Red"
Grid.Column="0" Grid.Row="1" HorizontalAlignment="Center"/>
<TextBlock Text="{x:Bind Color.G}" Foreground="Green"
Grid.Column="1" Grid.Row="1" HorizontalAlignment="Center"/>
<TextBlock Text="{x:Bind Color.B}" Foreground="Blue"
Grid.Column="2" Grid.Row="1" HorizontalAlignment="Center"/>

First, the layout uses a single Grid. You could have a single-column Grid and place these 3 TextBlocks in a
StackPanel, but in a data template that gets created many times, you should look for ways to avoid embedding
layout panels within other layout panels.
Second, you can use a Border control to render a background without actually placing items within the Border
element. A Border element can have only one child element, so you would need to add an additional layout
panel to host the 3 TextBlock elements within the Border element in XAML. By not making the TextBlocks
children of the Border, you eliminate the need for a panel to hold the TextBlocks.
Finally, you could place the TextBlocks inside a StackPanel, and set the border properties on the StackPanel
rather than using an explicit Border element. However, the Border element is a more lightweight control than a
StackPanel, so it has less of an impact on performance when rendered many times over.

Control template
An item's control template contains the visuals that display state, like selection, pointer over, and focus. These
visuals are rendered either on top of or below the data template. Some of the common default visuals drawn by
the ListView control template are shown here.
Hover: A light gray rectangle drawn below the data template.
Selection: A light blue rectangle drawn below the data template.
Keyboard focus: A black and white dotted border drawn on top of the item template.

The list view combines the elements from the data template and control template to create the final visuals
rendered on the screen. Here, the state visuals are shown in the context of a list view.
ListViewItemPresenter
As we noted previously about data templates, the number of XAML elements created for each item can have a
significant impact on the performance of a list view. Because the data template and control template are combined
to display each item, the actual number of elements needed to display an item includes the elements in both
templates.
The ListView and GridView controls are optimized to reduce the number of XAML elements created per item. The
ListViewItem visuals are created by the ListViewItemPresenter, which is a special XAML element that displays
complex visuals for focus, selection, and other visual states, without the overhead of numerous UIElements.

NOTE
In UWP apps for Windows 10, both ListViewItem and GridViewItem use ListViewItemPresenter; the
GridViewItemPresenter is deprecated and you should not use it. ListViewItem and GridViewItem set different property
values on ListViewItemPresenter to achieve different default looks.

To modify the look of the item container, use the ItemContainerStyle property and provide a Style with its
TargetType set to ListViewItem or GridViewItem.
In this example, you add padding to the ListViewItem to create some space between the items in the list.

<ListView x:Name="colorsListView">
<ListView.ItemTemplate>
<!-- DataTemplate XAML shown in previous ListView example -->
</ListView.ItemTemplate>

<ListView.ItemContainerStyle>
<Style TargetType="ListViewItem">
<Setter Property="Padding" Value="0,4"/>
</Style>
</ListView.ItemContainerStyle>
</ListView>

Now the list view looks like this with space between the items.

In the ListViewItem default style, the ListViewItemPresenter ContentMargin property has a TemplateBinding to
the ListViewItem Padding property ( <ListViewItemPresenter ContentMargin="{TemplateBinding Padding}"/> ). When we set the
Padding property, that value is really being passed to the ListViewItemPresenter ContentMargin property.
To modify other ListViewItemPresenter properties that aren't template bound to ListViewItem's properties, you
need to retemplate the ListViewItem with a new ListViewItemPresenter that you can modify properties on.
NOTE
ListViewItem and GridViewItem default styles set a lot of properties on ListViewItemPresenter. You should always start with
a copy of the default style and modify only the properties you need to. Otherwise, the visuals will probably not show up
the way you expect because some properties won't be set correctly.

To make a copy of the default template in Visual Studio


1. Open the Document Outline pane (View > Other Windows > Document Outline).
2. Select the list or grid element to modify. In this example, you modify the colorsGridView element.
3. Right-click and select Edit Additional Templates > Edit Generated Item Container (ItemContainerStyle)
> Edit a Copy.

4. In the Create Style Resource dialog, enter a name for the style. In this example, you use colorsGridViewItemStyle.
(Image: Visual Studio Create Style Resource dialog.)
A copy of the default style is added to your app as a resource, and the GridView.ItemContainerStyle property is
set to that resource, as shown in this XAML.
<Style x:Key="colorsGridViewItemStyle" TargetType="GridViewItem">
<Setter Property="FontFamily" Value="{ThemeResource ContentControlThemeFontFamily}"/>
<Setter Property="FontSize" Value="{ThemeResource ControlContentThemeFontSize}" />
<Setter Property="Background" Value="Transparent"/>
<Setter Property="Foreground" Value="{ThemeResource SystemControlForegroundBaseHighBrush}"/>
<Setter Property="TabNavigation" Value="Local"/>
<Setter Property="IsHoldingEnabled" Value="True"/>
<Setter Property="HorizontalContentAlignment" Value="Center"/>
<Setter Property="VerticalContentAlignment" Value="Center"/>
<Setter Property="Margin" Value="0,0,4,4"/>
<Setter Property="MinWidth" Value="{ThemeResource GridViewItemMinWidth}"/>
<Setter Property="MinHeight" Value="{ThemeResource GridViewItemMinHeight}"/>
<Setter Property="Template">
<Setter.Value>
<ControlTemplate TargetType="GridViewItem">
<ListViewItemPresenter
CheckBrush="{ThemeResource SystemControlForegroundBaseMediumHighBrush}"
ContentMargin="{TemplateBinding Padding}"
CheckMode="Overlay"
ContentTransitions="{TemplateBinding ContentTransitions}"
CheckBoxBrush="{ThemeResource SystemControlBackgroundChromeMediumBrush}"
DragForeground="{ThemeResource ListViewItemDragForegroundThemeBrush}"
DragOpacity="{ThemeResource ListViewItemDragThemeOpacity}"
DragBackground="{ThemeResource ListViewItemDragBackgroundThemeBrush}"
DisabledOpacity="{ThemeResource ListViewItemDisabledThemeOpacity}"
FocusBorderBrush="{ThemeResource SystemControlForegroundAltHighBrush}"
FocusSecondaryBorderBrush="{ThemeResource SystemControlForegroundBaseHighBrush}"
HorizontalContentAlignment="{TemplateBinding HorizontalContentAlignment}"
PointerOverForeground="{ThemeResource SystemControlForegroundBaseHighBrush}"
PressedBackground="{ThemeResource SystemControlHighlightListMediumBrush}"
PlaceholderBackground="{ThemeResource ListViewItemPlaceholderBackgroundThemeBrush}"
PointerOverBackground="{ThemeResource SystemControlHighlightListLowBrush}"
ReorderHintOffset="{ThemeResource GridViewItemReorderHintThemeOffset}"
SelectedPressedBackground="{ThemeResource SystemControlHighlightListAccentHighBrush}"
SelectionCheckMarkVisualEnabled="True"
SelectedForeground="{ThemeResource SystemControlForegroundBaseHighBrush}"
SelectedPointerOverBackground="{ThemeResource SystemControlHighlightListAccentMediumBrush}"
SelectedBackground="{ThemeResource SystemControlHighlightAccentBrush}"
VerticalContentAlignment="{TemplateBinding VerticalContentAlignment}"/>
</ControlTemplate>
</Setter.Value>
</Setter>
</Style>

...

<GridView x:Name="colorsGridView" ItemContainerStyle="{StaticResource colorsGridViewItemStyle}"/>

You can now modify properties on the ListViewItemPresenter to control the selection check box, item positioning,
and brush colors for visual states.
Inline and Overlay selection visuals
ListView and GridView indicate selected items in different ways depending on the control and the
SelectionMode. For more info about list view selection, see ListView and GridView.
When SelectionMode is set to Multiple, a selection check box is shown as part of the item's control template.
You can use the SelectionCheckMarkVisualEnabled property to turn off the selection check box in Multiple
selection mode. However, this property is ignored in other selection modes, so you can't turn on the check box in
Extended or Single selection mode.
You can set the CheckMode property to specify whether the check box is shown using the inline style or overlay
style.
Inline: This style shows the check box to the left of the content, and colors the background of the item
container to indicate selection. This is the default style for ListView.
Overlay: This style shows the check box on top of the content, and colors only the border of the item container
to indicate selection. This is the default style for GridView.
This table shows the default visuals used to indicate selection.
(Images here illustrate the Inline and Overlay styles in the Single/Extended and Multiple selection modes.)

NOTE
In this and the following examples, simple string data items are shown without data templates to emphasize the visuals
provided by the control template.

There are also several brush properties to change the colors of the check box. We'll look at these next along with
other brush properties.
Brushes
Many of the properties specify the brushes used for different visual states. You might want to modify these to
match the color of your brand.
This table shows the Common and Selection visual states for ListViewItem, and the brushes used to render the
visuals for each state. The images show the effects of the brushes on both the inline and overlay selection visual
styles.

NOTE
In this table, the modified color values for the brushes are hardcoded named colors and the colors are selected to make it
more apparent where they are applied in the template. These are not the default colors for the visual states. If you modify
the default colors in your app, you should use brush resources to modify the color values as done in the default template.
STATE / BRUSHES (applied to both the inline and overlay styles unless noted)

Normal
CheckBoxBrush="Red"

PointerOver
PointerOverForeground="DarkOrange"
PointerOverBackground="MistyRose"
CheckBoxBrush="Red"

Pressed
PressedBackground="LightCyan"
PointerOverForeground="DarkOrange"
CheckBoxBrush="Red"

Selected
SelectedForeground="Navy"
SelectedBackground="Khaki"
CheckBrush="Green"
CheckBoxBrush="Red" (inline only)

PointerOverSelected
SelectedPointerOverBackground="Lavender"
SelectedForeground="Navy"
SelectedBackground="Khaki" (overlay only)
CheckBrush="Green"
CheckBoxBrush="Red" (inline only)

PressedSelected
SelectedPressedBackground="MediumTurquoise"
SelectedForeground="Navy"
SelectedBackground="Khaki" (overlay only)
CheckBrush="Green"
CheckBoxBrush="Red" (inline only)

Focused
FocusBorderBrush="Crimson"
FocusSecondaryBorderBrush="Gold"
CheckBoxBrush="Red"
ListViewItemPresenter has other brush properties for data placeholders and drag states. If you use incremental
loading or drag and drop in your list view, you should consider whether you need to also modify these additional
brush properties. See the ListViewItemPresenter class for the complete list of properties you can modify.
Expanded XAML item templates
If you need to make more modifications than what is allowed by the ListViewItemPresenter properties - if you
need to change the position of the check box, for example - you can use the ListViewItemExpanded or
GridViewItemExpanded templates. These templates are included with the default styles in generic.xaml. They
follow the standard XAML pattern of building all the visuals from individual UIElements.
As mentioned previously, the number of UIElements in an item template has a significant impact on the
performance of your list view. Replacing ListViewItemPresenter with the expanded XAML templates greatly
increases the element count, and is not recommended when your list view will show a large number of items or
when performance is a concern.

NOTE
ListViewItemPresenter is supported only when the list view's ItemsPanel is an ItemsWrapGrid or ItemsStackPanel. If
you change the ItemsPanel to use VariableSizedWrapGrid, WrapGrid, or StackPanel, then the item template is
automatically switched to the expanded XAML template. For more info, see ListView and GridView UI optimization.

To customize an expanded XAML template, you need to make a copy of it in your app, and set the
ItemContainerStyle property to your copy.
To copy the expanded template
1. Set the ItemContainerStyle property as shown here for your ListView or GridView.
XAML

<ListView ItemContainerStyle="{StaticResource ListViewItemExpanded}"/>
<GridView ItemContainerStyle="{StaticResource GridViewItemExpanded}"/>
2. In the Visual Studio Properties pane, expand the Miscellaneous section and find the ItemContainerStyle
property. (Make sure the ListView or GridView is selected.)
3. Click the property marker for the ItemContainerStyle property. (It's the small box next to the TextBox. It's
colored green to show that it's set to a StaticResource.) The property menu opens.
4. In the property menu, click Convert to New Resource.
5. In the Create Style Resource dialog, enter a name for the resource and click OK.
A copy of the expanded template from generic.xaml is created in your app, which you can modify as needed.

Related articles
Lists
ListView and GridView
Inverted lists

You can use a list view to present a conversation in a chat experience with items that are visually distinct to
represent the sender/receiver. Using different colors and horizontal alignment to separate messages from the
sender/receiver helps the user quickly orient themselves in a conversation.
You will typically need to present the list such that it appears to grow from the bottom up instead of from the top
down. When a new message arrives and is added to the end, the previous messages slide up to make room,
drawing the user's attention to the latest arrival. However, if a user has scrolled up to view previous replies, then the
arrival of a new message must not cause a visual shift that would disrupt their focus.

Important APIs
ListView class
ItemsStackPanel class
ItemsUpdatingScrollMode property

Create an inverted list


To create an inverted list, use a list view with an ItemsStackPanel as its items panel. On the ItemsStackPanel, set
the ItemsUpdatingScrollMode to KeepLastItemInView.
IMPORTANT
The KeepLastItemInView enum value is available starting with Windows 10, version 1607. You can't use this value when
your app runs on earlier versions of Windows 10.
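
If your app also runs on earlier versions, you can guard the setting in code with ApiInformation. This is a minimal sketch; messagesPanel stands in for a reference to the list's ItemsStackPanel (for example, one found in the visual tree).

C#

// Apply KeepLastItemInView only where the OS supports it (version 1607 and later).
if (Windows.Foundation.Metadata.ApiInformation.IsEnumNamedValuePresent(
    "Windows.UI.Xaml.Controls.ItemsUpdatingScrollMode", "KeepLastItemInView"))
{
    // messagesPanel is a hypothetical reference to the ItemsStackPanel.
    messagesPanel.ItemsUpdatingScrollMode = ItemsUpdatingScrollMode.KeepLastItemInView;
}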

This example shows how to align the list view's items to the bottom and indicate that when the items change, the
last item should remain in view.
XAML

<ListView>
<ListView.ItemsPanel>
<ItemsPanelTemplate>
<ItemsStackPanel VerticalAlignment="Bottom"
ItemsUpdatingScrollMode="KeepLastItemInView"/>
</ItemsPanelTemplate>
</ListView.ItemsPanel>
</ListView>
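
The list is typically bound to an observable collection of messages. As a minimal sketch (Messages, SendButton_Click, and messageTextBox are hypothetical names, with ItemsSource="{x:Bind Messages}" on the ListView), appending to the collection is all that's needed; the ItemsStackPanel keeps the newest message in view.

C#

// Requires: using System.Collections.ObjectModel;
private ObservableCollection<string> Messages = new ObservableCollection<string>();

private void SendButton_Click(object sender, RoutedEventArgs e)
{
    // Add the new message to the end of the collection. Because
    // ItemsUpdatingScrollMode is KeepLastItemInView, the list scrolls to show it,
    // unless the user has scrolled up to read earlier messages.
    Messages.Add(messageTextBox.Text);
    messageTextBox.Text = string.Empty;
}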

Do's and don'ts


Align messages from the sender/receiver on opposite sides to make the flow of conversation clear to users.
Animate the existing messages out of the way to display the latest message if the user is already at the end of
the conversation awaiting the next message.
Don't disrupt the user's focus by moving items if they're not reading the end of the conversation.
Pull to refresh

The pull-to-refresh pattern lets a user pull down on a list of data using touch in order to retrieve more data.
Pull-to-refresh is widely used on mobile apps, but is useful on any device with a touch screen. You can handle
manipulation events to implement pull-to-refresh in your app.
The pull-to-refresh sample shows how to extend a ListView control to support this pattern. In this article, we use
this sample to explain the key points of implementing pull-to-refresh.

Is this the right pattern?


Use the pull-to-refresh pattern when you have a list or grid of data that the user might want to refresh regularly,
and your app is likely to be running on mobile, touch-first devices.

Implement pull-to-refresh
To implement pull-to-refresh, you need to handle manipulation events to detect when a user has pulled the list
down, provide visual feedback, and refresh the data. Here, we look at how this is done in the pull-to-refresh sample.
We don't show all the code here, so you should download the sample or view the code on GitHub.
The pull-to-refresh sample creates a custom control called RefreshableListView that extends the ListView control. This
control adds a refresh indicator to provide visual feedback and handles the manipulation events on the list view's
internal scroll viewer. It also adds 2 events to notify you when the list is pulled and when the data should be
refreshed. RefreshableListView only provides notification that the data should be refreshed. You need to handle the
event in your app to update the data, and that code will be different for every app.
RefreshableListView provides an 'auto refresh' mode that determines when the refresh is requested and how the
refresh indicator goes out of view. Auto refresh can be on or off.
Off: A refresh is requested only if the list is released while the PullThreshold is exceeded. The indicator animates
out of view when the user releases the scroller. The status bar indicator is shown if it's available (on phone).
On: A refresh is requested as soon as the PullThreshold is exceeded, whether released or not. The indicator
remains in view until the new data is retrieved, then animates out of view. A Deferral is used to notify the app
when fetching the data is complete.

Note The code in the sample is also applicable to a GridView. To modify a GridView, derive the custom class from
GridView instead of ListView and modify the default GridView template.
Add a refresh indicator
It's important to provide visual feedback to the user so they know that your app supports pull-to-refresh.
RefreshableListView has a RefreshIndicatorContent property that lets you set the indicator visual in your XAML. It also
includes a default text indicator that it falls back to if you don't set the RefreshIndicatorContent .
Here are recommended guidelines for the refresh indicator.

Modify the list view template


In the pull-to-refresh sample, the RefreshableListView control template modifies the standard ListView template by
adding a refresh indicator. The refresh indicator is placed in a Grid above the ItemsPresenter, which is the part
that shows the list items.

Note The DefaultRefreshIndicatorContent text block provides a fallback text indicator that is shown only if the
RefreshIndicatorContent property is not set.

Here's the part of the control template that's modified from the default ListView template.
XAML

<!-- Styles/Styles.xaml -->


<Grid x:Name="ScrollerContent" VerticalAlignment="Top">
<Grid.RowDefinitions>
<RowDefinition Height="Auto"/>
<RowDefinition Height="*"/>
<RowDefinition Height="Auto"/>
</Grid.RowDefinitions>
<Border x:Name="RefreshIndicator" VerticalAlignment="Top" Grid.Row="1">
<Grid>
<TextBlock x:Name="DefaultRefreshIndicatorContent" HorizontalAlignment="Center"
Foreground="White" FontSize="20" Margin="20, 35, 20, 20"/>
<ContentPresenter Content="{TemplateBinding RefreshIndicatorContent}"></ContentPresenter>
</Grid>
</Border>
<ItemsPresenter FooterTransitions="{TemplateBinding FooterTransitions}"
FooterTemplate="{TemplateBinding FooterTemplate}"
Footer="{TemplateBinding Footer}"
HeaderTemplate="{TemplateBinding HeaderTemplate}"
Header="{TemplateBinding Header}"
HeaderTransitions="{TemplateBinding HeaderTransitions}"
Padding="{TemplateBinding Padding}"
Grid.Row="1"
x:Name="ItemsPresenter"/>
</Grid>

Set the content in XAML


You set the content of the refresh indicator in the XAML for your list view. The XAML content you set is displayed by
the refresh indicator's ContentPresenter ( <ContentPresenter Content="{TemplateBinding RefreshIndicatorContent}"> ). If you
don't set this content, the default text indicator is shown instead.
XAML

<!-- MainPage.xaml -->


<c:RefreshableListView
<!-- ... See sample for removed code. -->
AutoRefresh="{x:Bind Path=UseAutoRefresh, Mode=OneWay}"
ItemsSource="{x:Bind Items}"
PullProgressChanged="listView_PullProgressChanged"
RefreshRequested="listView_RefreshRequested">

<c:RefreshableListView.RefreshIndicatorContent>
<Grid Height="100" Background="Transparent">
<FontIcon
Margin="0,0,0,30"
HorizontalAlignment="Center"
VerticalAlignment="Bottom"
FontFamily="Segoe MDL2 Assets"
FontSize="20"
Glyph="&#xE72C;"
RenderTransformOrigin="0.5,0.5">
<FontIcon.RenderTransform>
<RotateTransform x:Name="SpinnerTransform" Angle="0" />
</FontIcon.RenderTransform>
</FontIcon>
</Grid>
</c:RefreshableListView.RefreshIndicatorContent>

<!-- ... See sample for removed code. -->

</c:RefreshableListView>

Animate the spinner


When the list is pulled down, RefreshableListView's PullProgressChanged event occurs. You handle this event in your
app to control the refresh indicator. In the sample, this storyboard is started to animate the indicator's
RotateTransform and spin the refresh indicator.
XAML

<!-- MainPage.xaml -->


<Storyboard x:Name="SpinnerStoryboard">
<DoubleAnimation
Duration="00:00:00.5"
FillBehavior="HoldEnd"
From="0"
RepeatBehavior="Forever"
Storyboard.TargetName="SpinnerTransform"
Storyboard.TargetProperty="Angle"
To="360" />
</Storyboard>

Handle scroll viewer manipulation events


The list view control template includes a built-in ScrollViewer that lets a user scroll through the list items. To
implement pull-to-refresh, you have to handle the manipulation events on the built-in scroll viewer, as well as
several related events. For more info about manipulation events, see Touch interactions.
OnApplyTemplate
To get access to the scroll viewer and other template parts so that you can add event handlers and call them later in
your code, you must override the OnApplyTemplate method. In OnApplyTemplate, you call GetTemplateChild
to get a reference to a named part in the control template, which you can save to use later in your code.
In the sample, the variables used to store the template parts are declared in the Private Variables region. After they
are retrieved in the OnApplyTemplate method, event handlers are added for the DirectManipulationStarted,
DirectManipulationCompleted, ViewChanged, and PointerPressed events.
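A minimal sketch of the override looks something like this; the field, part, and handler names are illustrative rather than the sample's exact code.

C#

protected override void OnApplyTemplate()
{
    base.OnApplyTemplate();

    // Get references to named parts of the control template.
    m_scroller = GetTemplateChild("ScrollViewer") as ScrollViewer;
    m_scrollerContent = GetTemplateChild("ScrollerContent") as Grid;
    m_refreshIndicator = GetTemplateChild("RefreshIndicator") as Border;

    if (m_scroller != null)
    {
        // Attach the handlers used to detect and track the pull gesture.
        m_scroller.DirectManipulationStarted += Scroller_DirectManipulationStarted;
        m_scroller.DirectManipulationCompleted += Scroller_DirectManipulationCompleted;
        m_scroller.ViewChanged += Scroller_ViewChanged;
        this.PointerPressed += RefreshableListView_PointerPressed;
    }
}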
DirectManipulationStarted
In order to initiate a pull-to-refresh action, the content has to be scrolled to the top of the scroll viewer when the
user starts to pull down. Otherwise, it's assumed that the user is pulling in order to pan up in the list. The code in
this handler determines whether the manipulation started with the content at the top of the scroll viewer, and can
result in the list being refreshed. The control's 'refreshable' status is set accordingly.
If the control can be refreshed, event handlers for animations are also added.
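In outline, the handler might look like this. This is a sketch using the same assumed field names, not the sample's exact code.

C#

private void Scroller_DirectManipulationStarted(object sender, object e)
{
    // A pull can become a refresh only if it started from a touch pointer
    // with the content already scrolled to the top.
    m_refreshable = m_pointerPressed && m_scroller.VerticalOffset == 0;
    m_pointerPressed = false;

    if (m_refreshable)
    {
        // Hook the per-frame callback that animates the refresh indicator.
        Windows.UI.Xaml.Media.CompositionTarget.Rendering += CompositionTarget_Rendering;
    }
}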
DirectManipulationCompleted
When the user stops pulling the list down, the code in this handler checks whether a refresh was activated during
the manipulation. If a refresh was activated, the RefreshRequested event is raised and the RefreshCommand command is
executed.
The event handlers for animations are also removed.
Based on the value of the AutoRefresh property, the list can animate back up immediately, or wait until the refresh is
complete and then animate back up. A Deferral object is used to mark the completion of the refresh. At that
point the refresh indicator UI is hidden.
This part of the DirectManipulationCompleted event handler raises the RefreshRequested event and gets the Deferral
if needed.
C#

if (this.RefreshRequested != null)
{
RefreshRequestedEventArgs refreshRequestedEventArgs = new RefreshRequestedEventArgs(
this.AutoRefresh ? new DeferralCompletedHandler(RefreshCompleted) : null);
this.RefreshRequested(this, refreshRequestedEventArgs);
if (this.AutoRefresh)
{
m_scrollerContent.ManipulationMode = ManipulationModes.None;
if (!refreshRequestedEventArgs.WasDeferralRetrieved)
{
// The Deferral object was not retrieved in the event handler.
// Animate the content up right away.
this.RefreshCompleted();
}
}
}

ViewChanged
Two cases are handled in the ViewChanged event handler.
First, if the view changed due to the scroll viewer zooming, the control's 'refreshable' status is canceled.
Second, if the content finished animating up at the end of an auto refresh, the padding rectangles are hidden, touch
interactions with the scroll viewer are re-enabled, and the VerticalOffset is set to 0.
PointerPressed
Pull-to-refresh happens only when the list is pulled down by a touch manipulation. In the PointerPressed event
handler, the code checks what kind of pointer caused the event and sets a variable ( m_pointerPressed ) to indicate
whether it was a touch pointer. This variable is used in the DirectManipulationStarted handler. If the pointer is not a
touch pointer, the DirectManipulationStarted handler returns without doing anything.
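A sketch of that check, using the field name assumed earlier:

C#

private void RefreshableListView_PointerPressed(object sender, PointerRoutedEventArgs e)
{
    // Remember whether the press came from touch;
    // only touch pulls can trigger pull-to-refresh.
    m_pointerPressed =
        e.Pointer.PointerDeviceType == Windows.Devices.Input.PointerDeviceType.Touch;
}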

Add pull and refresh events


'RefreshableListView' adds 2 events that you can handle in your app to refresh the data and manage the refresh
indicator.
For more info about events, see Events and routed events overview.
RefreshRequested
The 'RefreshRequested' event notifies your app that the user has pulled the list to refresh it. You handle this event
to fetch new data and update your list.
Here's the event handler from the sample. The important thing to notice is that it checks the list view's AutoRefresh
property and gets a Deferral if it's true. With a Deferral, the refresh indicator is not stopped and hidden until the
refresh is complete.
C#

private async void listView_RefreshRequested(object sender, RefreshableListView.RefreshRequestedEventArgs e)
{
using (Deferral deferral = listView.AutoRefresh ? e.GetDeferral() : null)
{
await FetchAndInsertItemsAsync(_rand.Next(1, 5));

if (SpinnerStoryboard.GetCurrentState() != Windows.UI.Xaml.Media.Animation.ClockState.Stopped)
{
SpinnerStoryboard.Stop();
}
}
}

PullProgressChanged
In the sample, content for the refresh indicator is provided and controlled by the app. The 'PullProgressChanged'
event notifies your app when the user is pulling the list so that you can start, stop, and reset the refresh indicator.
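A possible handler shape is shown below; PullProgressArgs and its PullProgress property (0.0 to 1.0) are illustrative stand-ins for the sample's event args, and SpinnerStoryboard is the storyboard defined earlier.

C#

private void listView_PullProgressChanged(object sender, PullProgressArgs e)
{
    if (e.PullProgress == 1.0)
    {
        // The list is pulled past the threshold: start the spinner.
        if (SpinnerStoryboard.GetCurrentState() != Windows.UI.Xaml.Media.Animation.ClockState.Active)
        {
            SpinnerStoryboard.Begin();
        }
    }
    else if (SpinnerStoryboard.GetCurrentState() != Windows.UI.Xaml.Media.Animation.ClockState.Stopped)
    {
        // The pull was released or reversed: stop and reset the spinner.
        SpinnerStoryboard.Stop();
    }
}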

Composition animations
By default, content in a scroll viewer stops when the scrollbar reaches the top. To let the user continue to pull the
list down, you need to access the visual layer and animate the list content. The sample uses composition animations
for this; specifically, expression animations.
In the sample, this work is done primarily in the CompositionTarget_Rendering event handler and the
UpdateCompositionAnimations method.
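As a simplified illustration of the technique (not the sample's code), an expression animation can bind the content visual's offset to the scroll viewer's manipulation property set; m_scroller and m_scrollerContent are the assumed fields from the earlier sketches.

C#

// Requires: using Windows.UI.Composition; using Windows.UI.Xaml.Hosting;
Visual contentVisual = ElementCompositionPreview.GetElementVisual(m_scrollerContent);
CompositionPropertySet scrollProps =
    ElementCompositionPreview.GetScrollViewerManipulationPropertySet(m_scroller);

// Translation.Y goes positive when the user overpans past the top;
// follow it so the content keeps moving with the finger.
ExpressionAnimation pull = contentVisual.Compositor.CreateExpressionAnimation(
    "Max(0, scroller.Translation.Y)");
pull.SetReferenceParameter("scroller", scrollProps);
contentVisual.StartAnimation("Offset.Y", pull);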

Related articles
Styling controls
Touch interactions
List view and grid view
List view item templates
Expression animations
Nested UI in list items

Nested UI is a user interface (UI) that exposes nested actionable controls enclosed inside a container that also can
take independent focus.
You can use nested UI to present a user with additional options that help accelerate taking important actions.
However, the more actions you expose, the more complicated your UI becomes. You need to take extra care when
you choose to use this UI pattern. This article provides guidelines to help you determine the best course of action
for your particular UI.
In this article, we discuss the creation of nested UI in ListView and GridView items. While this section does not talk
about other nested UI cases, these concepts are transferrable. Before you start, you should be familiar with the
general guidance for using ListView or GridView controls in your UI, which is found in the Lists and List view and
grid view articles.
In this article, we use the terms list, list item, and nested UI as defined here:
List refers to a collection of items contained in a list view or grid view.
List item refers to an individual item that a user can take action on in a list.
Nested UI refers to UI elements within a list item that a user can take action on separate from taking action on
the list item itself.

NOTE ListView and GridView both derive from the ListViewBase class, so they have the same functionality, but
display data differently. In this article, when we talk about lists, the info applies to both the ListView and
GridView controls.

Primary and secondary actions


When creating UI with a list, consider what actions the user might take from those list items.
Can a user click on the item to perform an action?
Typically, clicking a list item initiates an action, but it doesn't have to.
Is there more than one action the user can take?
For example, tapping an email in a list opens that email. However, there might be other actions, like
deleting the email, that the user would want to take without opening it first. It would benefit the user to
access this action directly in the list.
How should the actions be exposed to the user?
Consider all input types. Some forms of nested UI work great with one method of input, but might not
work with other methods.
The primary action is what the user expects to happen when they press the list item.
Secondary actions are typically accelerators associated with list items. These accelerators can be for list
management or actions related to the list item.

Options for secondary actions


When creating list UI, you first need to make sure you account for all input methods that UWP supports. For more
info about different kinds of input, see Input primer.
After you have made sure that your app supports all inputs that UWP supports, you should decide if your app's
secondary actions are important enough to expose as accelerators in the main list. Remember that the more
actions you expose, the more complicated your UI becomes. Do you really need to expose the secondary actions in
the main list UI, or can you put them somewhere else?
You might consider exposing additional actions in the main list UI when those actions need to be accessible by any
input at all times.
If you decide that putting secondary actions in the main list UI is not necessary, there are several other ways you
can expose them to the user. Here are some options you can consider for where to place secondary actions.
Put secondary actions on the detail page
Put the secondary actions on the page that the list item navigates to when it's pressed. When you use the
master/details pattern, the detail page is often a good place to put secondary actions.
For more info, see the Master/detail pattern.
Put secondary actions in a context menu
Put the secondary actions in a context menu that the user can access via right-click or press-and-hold. This
provides the benefit of letting the user perform an action, such as deleting an email, without having to load the
detail page. It's a good practice to also make these options available on the detail page, as context menus are
intended to be accelerators rather than primary UI.
To expose secondary actions when input is from a gamepad or remote control, we recommend that you use a
context menu.
For more info, see Context menus and flyouts.
Put secondary actions in hover UI to optimize for pointer input
If you expect your app to be used frequently with pointer input such as mouse and pen, and want to make
secondary actions readily available only to those inputs, then you can show the secondary actions only on hover.
This accelerator is visible only when a pointer input is used, so be sure to use the other options to support other
input types as well.

For more info, see Mouse interactions.


UI placement for primary and secondary actions
If you decide that secondary actions should be exposed in the main list UI, we recommend the following guidelines.
When you create a list item with primary and secondary actions, place the primary action to the left and secondary
actions to the right. In left-to-right reading cultures, users associate actions on the left side of list item as the
primary action.
In these examples, we talk about list UI where the item flows more horizontally (it is wider than it is tall).
However, you might have list items that are more square in shape, or taller than their width. Typically, these are
items used in a grid. For these items, if the list doesn't scroll vertically, you can place the secondary actions at the
bottom of the list item rather than to the right side.

Consider all inputs


When deciding to use nested UI, also evaluate the user experience with all input types. As mentioned earlier, nested
UI works great for some input types. However, it does not always work great for others. In particular,
keyboard, controller, and remote inputs can have difficulty accessing nested UI elements. Be sure to follow the
guidance below to ensure your UWP app works with all input types.

Nested UI handling
When you have more than one action nested in the list item, we recommend this guidance to handle navigation
with a keyboard, gamepad, remote control, or other non-pointer input.
Nested UI where list items perform an action
If your list UI with nested elements supports actions such as invoking, selection (single or multiple), or drag-and-
drop operations, we recommend these arrowing techniques to navigate through your nested UI elements.

Gamepad
When input is from a gamepad, provide this user experience:
From A, right directional key puts focus on B.
From B, right directional key puts focus on C.
From C, right directional key is either no op, or if there is a focusable UI element to the right of List, put the
focus there.
From C, left directional key puts focus on B.
From B, left directional key puts focus on A.
From A, left directional key is either no op, or if there is a focusable UI element to the left of List, put the focus
there.
From A, B, or C, down directional key puts focus on D.
From UI element to the left of List Item, right directional key puts focus on A.
From UI element to the right of List Item, left directional key puts focus on A.
Keyboard
When input is from a keyboard, this is the experience user gets:
From A, tab key puts focus on B.
From B, tab key puts focus on C.
From C, tab key puts focus on next focusable UI element in the tab order.
From C, shift+tab key puts focus on B.
From B, shift+tab or left arrow key puts focus on A.
From A, shift+tab key puts focus on next focusable UI element in the reverse tab order.
From A, B, or C, down arrow key puts focus on D.
From UI element to the left of List Item, tab key puts focus on A.
From UI element to the right of List Item, shift+tab key puts focus on C.
To achieve this UI, set IsItemClickEnabled to true on your list. SelectionMode can be any value.
For the code to implement this, see the Example section of this article.
Nested UI where list items do not perform an action
You might use a list view because it provides virtualization and optimized scrolling behavior, but not have an
action associated with a list item. These UIs typically use the list item only to group elements and ensure they scroll
as a set.
This kind of UI tends to be much more complicated than the previous examples, with a lot of nested elements that
the user can take action on.

To achieve this UI, set the following properties on your list:


SelectionMode to None.
IsItemClickEnabled to false.
IsFocusEngagementEnabled to true.
<ListView SelectionMode="None" IsItemClickEnabled="False" >
<ListView.ItemContainerStyle>
<Style TargetType="ListViewItem">
<Setter Property="IsFocusEngagementEnabled" Value="True"/>
</Style>
</ListView.ItemContainerStyle>
</ListView>

When the list items do not perform an action, we recommend this guidance to handle navigation with a gamepad
or keyboard.
Gamepad
When input is from a gamepad, provide this user experience:
From List Item, down directional key puts focus on next List Item.
From List Item, left/right key is either no op, or if there is a focusable UI element to the right of List, put the
focus there.
From List Item, 'A' button puts the focus on Nested UI in top/down left/right priority.
While inside Nested UI, follow the XY Focus navigation model. Focus can only navigate around Nested UI
contained inside the current List Item until user presses 'B' button, which puts the focus back onto the List Item.
Keyboard
When input is from a keyboard, this is the experience user gets:
From List Item, down arrow key puts focus on the next List Item.
From List Item, pressing left/right key is no op.
From List Item, pressing the tab key puts focus on the next tab stop amongst the Nested UI items.
From one of the Nested UI items, pressing tab traverses the nested UI items in tab order. Once all the Nested UI
items are traveled to, it puts the focus onto the next control in tab order after ListView.
Shift+Tab behaves in reverse direction from tab behavior.

Example
This example shows how to implement nested UI where list items perform an action.

<ListView SelectionMode="None" IsItemClickEnabled="True"


ChoosingItemContainer="listview1_ChoosingItemContainer"/>

private void OnListViewItemKeyDown(object sender, KeyRoutedEventArgs e)
{
// Code to handle going in/out of nested UI with gamepad and remote only.
if (e.Handled == true)
{
return;
}

var focusedElementAsListViewItem = FocusManager.GetFocusedElement() as ListViewItem;

if (focusedElementAsListViewItem != null)
{
// Focus is on the ListViewItem.
// Go in with Right arrow.
Control candidate = null;

switch (e.OriginalKey)
{
case Windows.System.VirtualKey.GamepadDPadRight:
case Windows.System.VirtualKey.GamepadLeftThumbstickRight:
var rawPixelsPerViewPixel = DisplayInformation.GetForCurrentView().RawPixelsPerViewPixel;
GeneralTransform generalTransform = focusedElementAsListViewItem.TransformToVisual(null);
Point startPoint = generalTransform.TransformPoint(new Point(0, 0));
Rect hintRect = new Rect(startPoint.X * rawPixelsPerViewPixel, startPoint.Y * rawPixelsPerViewPixel, 1,
focusedElementAsListViewItem.ActualHeight * rawPixelsPerViewPixel);
candidate = FocusManager.FindNextFocusableElement(FocusNavigationDirection.Right, hintRect) as Control;
break;
}

if (candidate != null)
{
candidate.Focus(FocusState.Keyboard);
e.Handled = true;
}
}
else
{
// Focus is inside the ListViewItem.
FocusNavigationDirection direction = FocusNavigationDirection.None;
switch (e.OriginalKey)
{
case Windows.System.VirtualKey.GamepadDPadUp:
case Windows.System.VirtualKey.GamepadLeftThumbstickUp:
direction = FocusNavigationDirection.Up;
break;
case Windows.System.VirtualKey.GamepadDPadDown:
case Windows.System.VirtualKey.GamepadLeftThumbstickDown:
direction = FocusNavigationDirection.Down;
break;
case Windows.System.VirtualKey.GamepadDPadLeft:
case Windows.System.VirtualKey.GamepadLeftThumbstickLeft:
direction = FocusNavigationDirection.Left;
break;
case Windows.System.VirtualKey.GamepadDPadRight:
case Windows.System.VirtualKey.GamepadLeftThumbstickRight:
direction = FocusNavigationDirection.Right;
break;
default:
break;
}

if (direction != FocusNavigationDirection.None)
{
Control candidate = FocusManager.FindNextFocusableElement(direction) as Control;
if (candidate != null)
{
ListViewItem listViewItem = sender as ListViewItem;

// If the next focusable candidate to the left is outside of ListViewItem,
// put the focus on ListViewItem.
if (direction == FocusNavigationDirection.Left &&
!listViewItem.IsAncestorOf(candidate))
{
listViewItem.Focus(FocusState.Keyboard);
}
else
{
candidate.Focus(FocusState.Keyboard);
}
}

e.Handled = true;
}
}
}

private void listview1_ChoosingItemContainer(ListViewBase sender, ChoosingItemContainerEventArgs args)
{
if (args.ItemContainer == null)
{
args.ItemContainer = new ListViewItem();
args.ItemContainer.KeyDown += OnListViewItemKeyDown;
}
}

// DependencyObjectExtensions.cs definition.
public static class DependencyObjectExtensions
{
public static bool IsAncestorOf(this DependencyObject parent, DependencyObject child)
{
DependencyObject current = child;
bool isAncestor = false;

while (current != null && !isAncestor)
{
if (current == parent)
{
isAncestor = true;
}

current = VisualTreeHelper.GetParent(current);
}

return isAncestor;
}
}
Master/details pattern

The master/details pattern has a master pane (usually with a list view) and a details pane for content. When an
item in the master list is selected, the details pane is updated. This pattern is frequently used for email and
address books.

Is this the right pattern?


The master/details pattern works well if you want to:
Build an email app, address book, or any app that is based on a list-details layout.
Locate and prioritize a large collection of content.
Allow the quick addition and removal of items from a list while working back-and-forth between contexts.

Choose the right style


When implementing the master/details pattern, we recommend that you use either the stacked style or the side-
by-side style, based on the amount of available screen space.

AVAILABLE WINDOW WIDTH RECOMMENDED STYLE

320 epx-719 epx Stacked

720 epx or wider Side-by-side

Stacked style
In the stacked style, only one pane is visible at a time: the master or the details.
The user starts at the master pane and "drills down" to the details pane by selecting an item in the master list. To
the user, it appears as though the master and details views exist on two separate pages.
Create a stacked master/details pattern
One way to create the stacked master/details pattern is to use separate pages for the master pane and the details
pane. Place the list view that provides the master list on one page, and the content element for the details pane on
a separate page.

For the master pane, a list view control works well for presenting lists that can contain images and text.
For the details pane, use the content element that makes the most sense. If you have a lot of separate fields,
consider using a grid layout to arrange elements into a form.
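
For example, here's a minimal sketch of the navigation wiring, using the ItemClick pattern described in ListView and GridView; DetailsPage and the handler names are hypothetical.

C#

// On the master page: navigate to the details page, passing the clicked item.
private void MasterList_ItemClick(object sender, ItemClickEventArgs e)
{
    this.Frame.Navigate(typeof(DetailsPage), e.ClickedItem);
}

// On the details page: retrieve the item in OnNavigatedTo and bind the UI to it.
protected override void OnNavigatedTo(NavigationEventArgs e)
{
    base.OnNavigatedTo(e);
    DataContext = e.Parameter;
}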

Side-by-side style
In the side-by-side style, the master pane and details pane are visible at the same time.
The list in the master pane has a selection visual to indicate the currently selected item. Selecting a new item in
the master list updates the details pane.
Create a side-by-side master/details pattern
For the master pane, a list view control works well for presenting lists that can contain images and text.
For the details pane, use the content element that makes the most sense. If you have a lot of separate fields,
consider using a grid layout to arrange elements into a form.
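
A minimal sketch of the side-by-side wiring, with hypothetical control names: handle SelectionChanged on the master list and push the selection into the details pane.

C#

// masterList is the master pane's ListView; detailsPresenter is a
// ContentPresenter (or other content element) in the details pane.
private void MasterList_SelectionChanged(object sender, SelectionChangedEventArgs e)
{
    // Selecting a new item in the master list updates the details pane.
    detailsPresenter.Content = masterList.SelectedItem;
}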

Examples
This design of an app that tracks the stock market uses a master/details pattern. In this example of the app as it
would appear on phone, the master pane/list is on the left, with the details pane on the right.

This design of an app that tracks the stock market uses a master/details pattern. In this example of the app as it
would appear on desktop, the master pane/list and details pane are both visible and full-screen. The master pane
features a search box at the top and a command bar at the bottom.

For sample code that shows the master/details pattern, see


ListView and GridView sample
RSS Reader sample

Related articles
Lists
Search
App and command bars
ListView class (XAML)
Media player

The media player is used to view and listen to video and audio. Media playback can be inline (embedded in a page
or with a group of other controls) or in a dedicated full-screen view. You can modify the player's button set,
change the background of the control bar, and arrange layouts as you see fit. Just keep in mind that users expect a
basic control set (play/pause, skip back, skip forward).

Important APIs
MediaPlayerElement class
MediaTransportControls class

Note MediaPlayerElement is only available in Windows 10, version 1607 and up. If you are developing an
app for an earlier version of Windows 10 you will need to use MediaElement instead. All of the
recommendations on this page apply to MediaElement as well.

Is this the right control?


Use a media player when you want to play audio or video in your app. To display a collection of images, use a Flip
view.

Examples
A media player in the Windows 10 Get Started app.
Create a media player
Add media to your app by creating a MediaPlayerElement object in XAML and setting the Source to a
MediaSource that points to an audio or video file.
This XAML creates a MediaPlayerElement and sets its Source property to the URI of a video file that's local to
the app. The MediaPlayerElement begins playing when the page loads. To suppress media from starting right
away, you can set the AutoPlay property to false.

<MediaPlayerElement x:Name="mediaSimple"
Source="Videos/video1.mp4"
Width="400" AutoPlay="True"/>

This XAML creates a MediaPlayerElement with the built in transport controls enabled and the AutoPlay
property set to false.

<MediaPlayerElement x:Name="mediaPlayer"
Source="Videos/video1.mp4"
Width="400"
AutoPlay="False"
AreTransportControlsEnabled="True"/>

Media transport controls


MediaPlayerElement has built-in transport controls that handle play, stop, pause, volume, mute,
seeking/progress, closed captions, and audio track selection. To enable these controls, set
AreTransportControlsEnabled to true. To disable them, set
AreTransportControlsEnabled to false. The transport controls are represented by the
MediaTransportControls class. You can use the transport controls as-is, or customize them in various ways. For
more info, see the MediaTransportControls class reference and Create custom transport controls.
The transport controls support single- and double-row layouts. The first example here is a single-row layout, with
the play/pause button located to the left of the media timeline. This layout is best reserved for inline media
playback and compact screens.
The double-row controls layout (below) is recommended for most usage scenarios, especially on larger screens.
This layout provides more space for controls and makes the timeline easier for the user to operate.
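For example, you can opt into the single-row layout by setting IsCompact on the transport controls (a sketch;
the double-row layout is the default):

<MediaPlayerElement Source="Videos/video1.mp4"
                    AreTransportControlsEnabled="True">
    <MediaPlayerElement.TransportControls>
        <!-- IsCompact="True" gives the single-row layout. -->
        <MediaTransportControls IsCompact="True"/>
    </MediaPlayerElement.TransportControls>
</MediaPlayerElement>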

System media transport controls


MediaPlayerElement is automatically integrated with the system media transport controls. The system media
transport controls are the controls that pop up when hardware media keys are pressed, such as the media buttons
on keyboards. For more info, see SystemMediaTransportControls.

Note MediaElement does not automatically integrate with the system media transport controls so you
must connect them yourself. For more information, see System Media Transport Controls.
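If you need to adjust that integration, for example to disable a command your app doesn't support, you can
reach the system controls through the MediaPlayer (a sketch, using the mediaPlayer element from the earlier
examples):

// Get the system media transport controls that MediaPlayerElement
// is already integrated with, and disable the Next command.
var smtc = mediaPlayer.MediaPlayer.SystemMediaTransportControls;
smtc.IsNextEnabled = false;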

Set the media source


To play files on the network or files embedded with the app, set the Source property to a MediaSource with the
path of the file.
Tip To open files from the internet, you need to declare the Internet (Client) capability in your app's manifest
(Package.appxmanifest). For more info about declaring capabilities, see App capability declarations.
This code attempts to set the Source property of the MediaPlayerElement defined in XAML to the path of a file
entered into a TextBox.

<TextBox x:Name="txtFilePath" Width="400"
         FontSize="20"
         KeyUp="TxtFilePath_KeyUp"
         Header="File path"
         PlaceholderText="Enter file path"/>

private void TxtFilePath_KeyUp(object sender, KeyRoutedEventArgs e)
{
if (e.Key == Windows.System.VirtualKey.Enter)
{
TextBox tbPath = sender as TextBox;

if (tbPath != null)
{
LoadMediaFromString(tbPath.Text);
}
}
}

private void LoadMediaFromString(string path)
{
try
{
Uri pathUri = new Uri(path);
mediaPlayer.Source = MediaSource.CreateFromUri(pathUri);
}
catch (Exception ex)
{
if (ex is FormatException)
{
// handle exception.
// For example: Log error or notify user problem with file
}
}
}

To set the media source to a media file embedded in the app, initialize a Uri with the path prefixed with
ms-appx:///, create a MediaSource with the Uri, and then set the Source to the MediaSource. For example, for
a file called video1.mp4 that is in a Videos subfolder, the path would look like: ms-appx:///Videos/video1.mp4
This code sets the Source property of the MediaPlayerElement defined previously in XAML to
ms-appx:///Videos/video1.mp4.

private void LoadEmbeddedAppFile()
{
try
{
Uri pathUri = new Uri("ms-appx:///Videos/video1.mp4");
mediaPlayer.Source = MediaSource.CreateFromUri(pathUri);
}
catch (Exception ex)
{
if (ex is FormatException)
{
// handle exception.
// For example: Log error or notify user problem with file
}
}
}

Open local media files


To open files on the local system or from OneDrive, you can use the FileOpenPicker to get the file and Source to
set the media source, or you can programmatically access the user media folders.
If your app needs access without user interaction to the Music or Video folders, for example, if you are
enumerating all the music or video files in the user's collection and displaying them in your app, then you need to
declare the Music Library and Video Library capabilities. For more info, see Files and folders in the Music,
Pictures, and Videos libraries.
The FileOpenPicker does not require special capabilities to access files on the local file system, such as the user's
Music or Video folders, since the user has complete control over which file is being accessed. From a security and
privacy standpoint, it is best to minimize the number of capabilities your app uses.
To open local media using FileOpenPicker
1. Call FileOpenPicker to let the user pick a media file.
Use the FileOpenPicker class to select a media file. Set the FileTypeFilter to specify which file types the
FileOpenPicker displays. Call PickSingleFileAsync to launch the file picker and get the file.
2. Use a MediaSource to set the chosen media file as the MediaPlayerElement.Source.
To use the StorageFile returned from the FileOpenPicker, you need to call the CreateFromStorageFile
method on MediaSource and set it as the Source of MediaPlayerElement. Then call Play on the
MediaPlayerElement.MediaPlayer to start the media.
This example shows how to use the FileOpenPicker to choose a file and set the file as the Source of a
MediaPlayerElement.

<MediaPlayerElement x:Name="mediaPlayer"/>
...
<Button Content="Choose file" Click="Button_Click"/>

private async void Button_Click(object sender, RoutedEventArgs e)
{
    await SetLocalMedia();
}

private async System.Threading.Tasks.Task SetLocalMedia()
{
var openPicker = new Windows.Storage.Pickers.FileOpenPicker();

openPicker.FileTypeFilter.Add(".wmv");
openPicker.FileTypeFilter.Add(".mp4");
openPicker.FileTypeFilter.Add(".wma");
openPicker.FileTypeFilter.Add(".mp3");

var file = await openPicker.PickSingleFileAsync();

// mediaPlayer is a MediaPlayerElement defined in XAML


if (file != null)
{
mediaPlayer.Source = MediaSource.CreateFromStorageFile(file);

mediaPlayer.MediaPlayer.Play();
}
}

Set the poster source


You can use the PosterSource property to provide your MediaPlayerElement with a visual representation
before the media is loaded. A PosterSource is an image, such as a screen shot or movie poster, that is displayed
in place of the media. The PosterSource is displayed in the following situations:
When a valid source is not set. For example, Source is not set, Source was set to Null, or the source is invalid
(as is the case when a MediaFailed event occurs).
While media is loading. For example, a valid source is set, but the MediaOpened event has not occurred yet.
When media is streaming to another device.
When the media is audio only.
Here's a MediaPlayerElement with its Source set to an album track, and its PosterSource set to an image of
the album cover.

<MediaPlayerElement Source="Media/Track1.mp4" PosterSource="Media/AlbumCover.png"/>

Keep the device's screen active


Typically, a device dims the display (and eventually turns it off) to save battery life when the user is away, but
video apps need to keep the screen on so the user can see the video. To prevent the display from being
deactivated when user action is no longer detected, such as when an app is playing video, you can call
DisplayRequest.RequestActive. The DisplayRequest class lets you tell Windows to keep the display turned on
so the user can see the video.
To conserve power and battery life, you should call DisplayRequest.RequestRelease to release the display
request when it is no longer required. Windows automatically deactivates your app's active display requests when
your app moves off screen, and re-activates them when your app comes back to the foreground.
Here are some situations when you should release the display request:
Video playback is paused, for example, by user action, buffering, or adjustment due to limited bandwidth.
Playback stops. For example, the video is done playing or the presentation is over.
A playback error has occurred. For example, network connectivity issues or a corrupted file.

Note If MediaPlayerElement.IsFullWindow is set to true and media is playing, the display will
automatically be prevented from deactivating.

To keep the screen active


1. Create a global DisplayRequest variable. Initialize it to null.

// Create this variable at a global scope. Set it to null.
private DisplayRequest appDisplayRequest = null;

2. Call RequestActive to notify Windows that the app requires the display to remain on.
3. Call RequestRelease to release the display request whenever video playback is stopped, paused, or
interrupted by a playback error. When your app no longer has any active display requests, Windows saves
battery life by dimming the display (and eventually turning it off) when the device is not being used.
Each MediaPlayerElement.MediaPlayer has a PlaybackSession of type MediaPlaybackSession that
controls various aspects of media playback such as PlaybackRate, PlaybackState and Position. Here,
you use the PlaybackStateChanged event on MediaPlayer.PlaybackSession to detect situations when
you should release the display request. Then, use the NaturalVideoHeight property to determine whether
an audio or video file is playing, and keep the screen active only if video is playing.

<MediaPlayerElement x:Name="mpe" Source="Media/video1.mp4"/>


protected override void OnNavigatedTo(NavigationEventArgs e)
{
mpe.MediaPlayer.PlaybackSession.PlaybackStateChanged += MediaPlayerElement_CurrentStateChanged;
base.OnNavigatedTo(e);
}

private void MediaPlayerElement_CurrentStateChanged(MediaPlaybackSession sender, object args)
{
    MediaPlaybackSession playbackSession = sender;
    if (playbackSession != null && playbackSession.NaturalVideoHeight != 0)
    {
        if (playbackSession.PlaybackState == MediaPlaybackState.Playing)
        {
            if (appDisplayRequest == null)
            {
                // This call creates an instance of the DisplayRequest object
                appDisplayRequest = new DisplayRequest();
                appDisplayRequest.RequestActive();
            }
        }
        else // PlaybackState is Buffering, None, Opening or Paused
        {
            if (appDisplayRequest != null)
            {
                // Deactivate the display request and set the var to null
                appDisplayRequest.RequestRelease();
                appDisplayRequest = null;
            }
        }
    }
}

Control the media player programmatically


MediaPlayerElement provides numerous properties, methods, and events for controlling audio and video
playback through the MediaPlayerElement.MediaPlayer property. For a full listing of properties, methods, and
events, see the MediaPlayer reference page.
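For example, a minimal sketch (using the mediaPlayer element defined earlier) that drives playback through the
underlying MediaPlayer:

var player = mediaPlayer.MediaPlayer;
player.Play();                    // start or resume playback
player.Pause();                   // pause playback
player.Volume = 0.5;              // volume ranges from 0.0 to 1.0
player.PlaybackSession.Position = TimeSpan.FromSeconds(30);  // seek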
Advanced media playback scenarios
For more complex media playback scenarios like playing a playlist, switching between audio languages or creating
custom metadata tracks set the MediaPlayerElement.Source to a MediaPlaybackItem or a
MediaPlaybackList. See the Media playback page in the dev center for more information on how to enable
various advanced media functionality.
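A sketch of the playlist case (the second video file is an assumption for illustration):

var playbackList = new MediaPlaybackList();
playbackList.Items.Add(new MediaPlaybackItem(
    MediaSource.CreateFromUri(new Uri("ms-appx:///Videos/video1.mp4"))));
playbackList.Items.Add(new MediaPlaybackItem(
    MediaSource.CreateFromUri(new Uri("ms-appx:///Videos/video2.mp4"))));

// MediaPlaybackList is itself a valid playback source.
mediaPlayer.Source = playbackList;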
Enable full window video rendering
Set the IsFullWindow property to enable and disable full window rendering. When you programmatically set full
window rendering in your app, you should always use IsFullWindow instead of doing it manually.
IsFullWindow ensures that system-level optimizations are performed that improve performance and battery life.
If full window rendering is not set up correctly, these optimizations may not be enabled.
Here is some code that creates an AppBarButton that toggles full window rendering.

<AppBarButton Icon="FullScreen"
              Label="Full Window"
              Click="FullWindow_Click"/>

private void FullWindow_Click(object sender, object e)
{
    mediaPlayer.IsFullWindow = !mediaPlayer.IsFullWindow;
}

Resize and stretch video


Use the Stretch property to change how the video content and/or the PosterSource fills the container it's in. This
resizes and stretches the video depending on the Stretch value. The Stretch states are similar to picture size
settings on many TV sets. You can hook this up to a button and allow the user to choose which setting they prefer.
None displays the native resolution of the content in its original size.
Uniform fills up as much of the space as possible while preserving the aspect ratio and the image content. This
can result in horizontal or vertical black bars at the edges of the video. This is similar to wide-screen modes.
UniformToFill fills up the entire space while preserving the aspect ratio. This can result in some of the image
being cropped. This is similar to full-screen modes.
Fill fills up the entire space, but does not preserve the aspect ratio. None of the image is cropped, but stretching
may occur. This is similar to stretch modes.

Here, an AppBarButton is used to cycle through the Stretch options. A switch statement checks the current state
of the Stretch property and sets it to the next value in the Stretch enumeration. This lets the user cycle through
the different stretch states.

<AppBarButton Icon="Switch"
Label="Resize Video"
Click="PictureSize_Click" />

private void PictureSize_Click(object sender, RoutedEventArgs e)
{
switch (mediaPlayer.Stretch)
{
case Stretch.Fill:
mediaPlayer.Stretch = Stretch.None;
break;
case Stretch.None:
mediaPlayer.Stretch = Stretch.Uniform;
break;
case Stretch.Uniform:
mediaPlayer.Stretch = Stretch.UniformToFill;
break;
case Stretch.UniformToFill:
mediaPlayer.Stretch = Stretch.Fill;
break;
default:
break;
}
}
Enable low-latency playback
Set the RealTimePlayback property to true on a MediaPlayerElement.MediaPlayer to enable the media
player element to reduce the initial latency for playback. This is critical for two-way communications apps, and can
be applicable to some gaming scenarios. Be aware that this mode is more resource intensive and less
power-efficient.
This example creates a MediaPlayerElement and sets RealTimePlayback to true.

MediaPlayerElement mp = new MediaPlayerElement();
mp.MediaPlayer.RealTimePlayback = true;

Recommendations
The media player supports both light and dark themes, but dark theme provides a better experience for most
entertainment scenarios. The dark background provides better contrast, in particular for low-light conditions, and
keeps the control bar from interfering with the viewing experience.
When playing video content, encourage a dedicated viewing experience by promoting full-screen mode over
inline mode. The full-screen viewing experience is optimal, and options are restricted in the inline mode.
If you have the screen real estate or are designing for the 10-foot experience, go with the double-row layout. It
provides more space for controls than the compact single-row layout and it is easier to navigate using gamepad
for 10-foot.

Note Visit the Designing for Xbox and TV article for more information on optimizing your application for the
10-foot experience.

The default controls have been optimized for media playback; however, you can add any custom options
you need to the media player in order to provide the best experience for your app. Visit Create custom transport
controls to learn more about adding custom controls.

Related articles
Command design basics for UWP apps
Content design basics for UWP apps
Create custom transport controls

MediaPlayerElement has customizable XAML transport controls to manage control of audio and video content
within a Universal Windows Platform (UWP) app. Here, we demonstrate how to customize the
MediaTransportControls template. We'll show you how to work with the overflow menu, add a custom button, and
modify the slider.
Before starting, you should be familiar with the MediaPlayerElement and the MediaTransportControls classes. For
more info, see the MediaPlayerElement control guide.

TIP
The examples in this topic are based on the Media Transport Controls sample. You can download the sample to view and run
the completed code.

Important APIs
MediaPlayerElement
MediaPlayerElement.AreTransportControlsEnabled
MediaTransportControls

NOTE
MediaPlayerElement is only available in Windows 10, version 1607 and up. If you are developing an app for an earlier
version of Windows 10, you will need to use MediaElement instead. All of the examples on this page work with
MediaElement as well.

When should you customize the template?


MediaPlayerElement has built-in transport controls that are designed to work well without modification in most
video and audio playback apps. They're provided by the MediaTransportControls class and include buttons to
play, stop, and navigate media, adjust volume, toggle full screen, cast to a second device, enable captions, switch
audio tracks, and adjust the playback rate. MediaTransportControls has properties that let you control whether
each button is shown and enabled. You can also set the IsCompact property to specify whether the controls are
shown in one row or two.
However, there may be scenarios where you need to further customize the look of the control or change its
behavior. Here are some examples:
Change the icons, slider behavior, and colors.
Move less commonly used command buttons into an overflow menu.
Change the order in which commands drop out when the control is resized.
Provide a command button that's not in the default set.
NOTE
The buttons visible on screen will drop out of the built-in transport controls in a predefined order if there is not enough
room on screen. To change this ordering or put commands that don't fit into an overflow menu, you will need to customize
the controls.

You can customize the appearance of the control by modifying the default template. To modify the control's
behavior or add new commands, you can create a custom control that's derived from MediaTransportControls.

TIP
Customizable control templates are a powerful feature of the XAML platform, but there are also consequences that you
should take into consideration. When you customize a template, it becomes a static part of your app and therefore will not
receive any platform updates that are made to the template by Microsoft. If template updates are made by Microsoft, you
should take the new template and re-modify it in order to get the benefits of the updated template.

Template structure
The ControlTemplate is part of the default style. The transport control's default style is shown in the
MediaTransportControls class reference page. You can copy this default style into your project to modify it. The
ControlTemplate is divided into sections similar to other XAML control templates.
The first section of the template contains the Style definitions for the various components of the
MediaTransportControls.
The second section defines the various visual states that are used by the MediaTransportControls.
The third section contains the Grid that holds the various MediaTransportControls elements together and
defines how the components are laid out.

NOTE
For more info about modifying templates, see Control templates. You can use a text editor or similar editors in your IDE to
open the XAML files in (Program Files)\Windows Kits\10\DesignTime\CommonConfiguration\Neutral\UAP\(SDK
version)\Generic. The default style and template for each control is defined in the generic.xaml file. You can find the
MediaTransportControls template in generic.xaml by searching for "MediaTransportControls".

In the following sections, you learn how to customize several of the main elements of the transport controls:
Slider: allows a user to scrub through their media and also displays progress
CommandBar: contains all of the buttons. For more info, see the Anatomy section of the
MediaTransportControls reference topic.

Customize the transport controls


If you want to modify only the appearance of the MediaTransportControls, you can create a copy of the default
control style and template, and modify that. However, if you also want to add to or modify the functionality of the
control, you need to create a new class that derives from MediaTransportControls.
Re-template the control
To customize the MediaTransportControls default style and template
1. Copy the default style from MediaTransportControls styles and templates into a ResourceDictionary in your
project.
2. Give the Style an x:Key value to identify it, like this.
<Style TargetType="MediaTransportControls" x:Key="myTransportControlsStyle">
    <!-- Style content ... -->
</Style>
3. Add a MediaPlayerElement with MediaTransportControls to your UI.
4. Set the Style property of the MediaTransportControls element to your custom Style resource, as shown here.
<MediaPlayerElement AreTransportControlsEnabled="True">
    <MediaPlayerElement.TransportControls>
        <MediaTransportControls Style="{StaticResource myTransportControlsStyle}"/>
    </MediaPlayerElement.TransportControls>
</MediaPlayerElement>

For more info about modifying styles and templates, see Styling controls and Control templates.
Create a derived control
To add to or modify the functionality of the transport controls, you must create a new class that's derived from
MediaTransportControls. A derived class called CustomMediaTransportControls is shown in the Media Transport
Controls sample and the remaining examples on this page.
To create a new class derived from MediaTransportControls
1. Add a new class file to your project.
In Visual Studio, select Project > Add Class. The Add New Item dialog opens.
In the Add New Item dialog, enter a name for the class file, then click Add. (In the Media Transport
Controls sample, the class is named CustomMediaTransportControls.)
2. Modify the class code to derive from the MediaTransportControls class.
public sealed class CustomMediaTransportControls : MediaTransportControls
{
}
3. Copy the default style for MediaTransportControls into a ResourceDictionary in your project. This is the style
and template you modify. (In the Media Transport Controls sample, a new folder called "Themes" is created, and
a ResourceDictionary file called generic.xaml is added to it.)
4. Change the TargetType of the style to the new custom control type. (In the sample, the TargetType is changed
to local:CustomMediaTransportControls.)
xmlns:local="using:CustomMediaTransportControls"
...
<Style TargetType="local:CustomMediaTransportControls">
5. Set the DefaultStyleKey of your custom class. This tells your custom class to use a Style with a TargetType of
local:CustomMediaTransportControls.
public sealed class CustomMediaTransportControls : MediaTransportControls
{
    public CustomMediaTransportControls()
    {
        this.DefaultStyleKey = typeof(CustomMediaTransportControls);
    }
}
6. Add a MediaPlayerElement to your XAML markup and add the custom transport controls to it. One thing to
note is that the APIs to hide, show, disable, and enable the default buttons still work with a customized template.
<MediaPlayerElement Name="MediaPlayerElement1" AreTransportControlsEnabled="True" Source="video.mp4">
    <MediaPlayerElement.TransportControls>
        <local:CustomMediaTransportControls x:Name="customMTC"
                                            IsFastForwardButtonVisible="True" IsFastForwardEnabled="True"
                                            IsFastRewindButtonVisible="True" IsFastRewindEnabled="True"
                                            IsPlaybackRateButtonVisible="True" IsPlaybackRateEnabled="True"
                                            IsCompact="False">
        </local:CustomMediaTransportControls>
    </MediaPlayerElement.TransportControls>
</MediaPlayerElement>
You can now modify the control style and template to update the look of your custom control, and the control
code to update its behavior.
Working with the overflow menu
You can move MediaTransportControls command buttons into an overflow menu, so that less commonly used
commands are hidden until the user needs them.
In the MediaTransportControls template, the command buttons are contained in a CommandBar element. The
command bar has the concept of primary and secondary commands. The primary commands are the buttons that
appear in the control by default and are always visible (unless you disable the button, hide the button or there is
not enough room). The secondary commands are shown in an overflow menu that appears when a user clicks the
ellipsis (...) button. For more info, see the App bars and command bars article.
To move an element from the command bar primary commands to the overflow menu, you need to edit the XAML
control template.
To move a command to the overflow menu:
1. In the control template, find the CommandBar element named MediaControlsCommandBar.
2. Add a SecondaryCommands section to the XAML for the CommandBar. Put it after the closing tag for the
PrimaryCommands.
<CommandBar x:Name="MediaControlsCommandBar" ... >
    <CommandBar.PrimaryCommands>
        ...
        <AppBarButton x:Name='PlaybackRateButton'
                      Style='{StaticResource AppBarButtonStyle}'
                      MediaTransportControlsHelper.DropoutOrder='4'
                      Visibility='Collapsed'>
            <AppBarButton.Icon>
                <FontIcon Glyph="&#xEC57;"/>
            </AppBarButton.Icon>
        </AppBarButton>
        ...
    </CommandBar.PrimaryCommands>
    <!-- Add secondary commands (overflow menu) here -->
    <CommandBar.SecondaryCommands>
        ...
    </CommandBar.SecondaryCommands>
</CommandBar>
3. To populate the menu with commands, cut and paste the XAML for the desired AppBarButton objects from
the PrimaryCommands to the SecondaryCommands. In this example, we move the PlaybackRateButton to the
overflow menu.
4. Add a label to the button and remove the styling information, as shown here. Because the overflow menu is
composed of text buttons, you must add a text label to the button and also remove the style that sets the
height and width of the button. Otherwise, it won't appear correctly in the overflow menu.

<CommandBar.SecondaryCommands>
<AppBarButton x:Name='PlaybackRateButton'
Label='Playback Rate'>
</AppBarButton>
</CommandBar.SecondaryCommands>

IMPORTANT
You must still make the button visible and enable it in order to use it in the overflow menu. In this example, the
PlaybackRateButton element isn't visible in the overflow menu unless the IsPlaybackRateButtonVisible property is true. It's
not enabled unless the IsPlaybackRateEnabled property is true. Setting these properties is shown in the previous section.

Adding a custom button


One reason you might want to customize MediaTransportControls is to add a custom command to the control.
Whether you add it as a primary command or a secondary command, the procedure for creating the command
button and modifying its behavior is the same. In the Media Transport Controls sample, a "rating" button is added
to the primary commands.
To add a custom command button
1. Create an AppBarButton object and add it to the CommandBar in the control template.

<AppBarButton x:Name="LikeButton"
Icon="Like"
Style="{StaticResource AppBarButtonStyle}"
MediaTransportControlsHelper.DropoutOrder="3"
VerticalAlignment="Center" />

You must add it to the CommandBar in the appropriate location. (For more info, see the Working with the
overflow menu section.) How it's positioned in the UI is determined by where the button is in the markup.
For example, if you want this button to appear as the last element in the primary commands, add it at the
very end of the primary commands list.
You can also customize the icon for the button. For more info, see the AppBarButton reference.
2. In the OnApplyTemplate override, get the button from the template and register a handler for its Click
event. This code goes in the CustomMediaTransportControls class.
public sealed class CustomMediaTransportControls : MediaTransportControls
{
// ...

protected override void OnApplyTemplate()
{
// Find the custom button and create an event handler for its Click event.
var likeButton = GetTemplateChild("LikeButton") as Button;
likeButton.Click += LikeButton_Click;
base.OnApplyTemplate();
}

//...
}

3. Add code to the Click event handler to perform the action that occurs when the button is clicked. Here's the
complete code for the class.

public sealed class CustomMediaTransportControls : MediaTransportControls
{
public event EventHandler<EventArgs> Liked;

public CustomMediaTransportControls()
{
this.DefaultStyleKey = typeof(CustomMediaTransportControls);
}

protected override void OnApplyTemplate()
{
// Find the custom button and create an event handler for its Click event.
var likeButton = GetTemplateChild("LikeButton") as Button;
likeButton.Click += LikeButton_Click;
base.OnApplyTemplate();
}

private void LikeButton_Click(object sender, RoutedEventArgs e)
{
// Raise an event on the custom control when 'like' is clicked.
var handler = Liked;
if (handler != null)
{
handler(this, EventArgs.Empty);
}
}
}

Custom media transport controls with a "Like" button added

Modifying the slider


The "seek" control of the MediaTransportControls is provided by a Slider element. One way you can customize it is
to change the granularity of the seek behavior.
The default seek slider is divided into 100 parts, so the seek behavior is limited to that many sections. You can
change the granularity of the seek slider by getting the Slider from the XAML visual tree in your MediaOpened
event handler on MediaPlayerElement.MediaPlayer. This example shows how to use VisualTreeHelper to get
a reference to the Slider, then change the default step frequency of the slider from 1% to 0.1% (1000 steps) if the
media is longer than 120 minutes. The MediaPlayerElement is named MediaPlayerElement1.

protected override void OnNavigatedTo(NavigationEventArgs e)
{
MediaPlayerElement1.MediaPlayer.MediaOpened += MediaPlayerElement_MediaPlayer_MediaOpened;
base.OnNavigatedTo(e);
}

private void MediaPlayerElement_MediaPlayer_MediaOpened(MediaPlayer sender, object args)
{
FrameworkElement transportControlsTemplateRoot = (FrameworkElement)VisualTreeHelper.GetChild(MediaPlayerElement1.TransportControls, 0);
Slider sliderControl = (Slider)transportControlsTemplateRoot.FindName("ProgressSlider");
if (sliderControl != null && MediaPlayerElement1.MediaPlayer.PlaybackSession.NaturalDuration.TotalMinutes > 120)
{
// Default is 1%. Change to 0.1% for more granular seeking.
sliderControl.StepFrequency = 0.1;
}
}

Related articles
Media playback
Menus and context menus

Menus and context menus display a list of commands or options when the user requests them.

Important APIs
MenuFlyout class
ContextFlyout property
FlyoutBase.AttachedFlyout property

Is this the right control?


Menus and context menus save space by organizing commands and hiding them until the user needs them. If a
particular command will be used frequently and you have the space available, consider placing it directly in its
own element, rather than in a menu, so that users don't have to go through a menu to get to it.
Menus and context menus are for organizing commands; to display arbitrary content, such as a notification, or to
request confirmation, use a dialog or a flyout.

Menus vs. context menus


Menus and context menus are identical in how they look and what they can contain. In fact, you use the same
control, MenuFlyout, to create them. The only difference is how you let the user access it.
When should you use a menu or a context menu?
If the host element is a button or some other command element whose primary role is to present additional
commands, use a menu.
If the host element is some other type of element that has another primary purpose (such as presenting text or
an image), use a context menu.
For example, use a menu on a button in a navigation pane to provide additional navigation options. In this
scenario, the primary purpose of the button control is to provide access to a menu.
If you want to add commands (such as cut, copy, and paste) to a text element, use a context menu instead of a
menu. In this scenario, the primary role of the text element is to present and edit text; additional commands (such
as cut, copy, and paste) are secondary and belong in a context menu.
Menus

Have a single entry point (a File menu at the top of the screen, for example) that is always displayed.
Are usually attached to a button or a parent menu item.
Are invoked by left-clicking (or an equivalent action, such as tapping with your finger).

Are associated with an element via its Flyout or FlyoutBase.AttachedFlyout properties.

Context menus
Are attached to a single element, but are only accessible when the context makes sense.
Are invoked by right-clicking (or an equivalent action, such as pressing and holding with your finger).
Are associated with an element via its ContextFlyout property.

Create a menu or a context menu


To create a menu or a context menu, you use the MenuFlyout class. You define the contents of the menu by
adding MenuFlyoutItem, ToggleMenuFlyoutItem, and MenuFlyoutSeparator objects to the MenuFlyout. These
objects are for:
MenuFlyoutItem: performing an immediate action.
ToggleMenuFlyoutItem: switching an option on or off.
MenuFlyoutSeparator: visually separating menu items.
This example creates a MenuFlyout and uses the ContextFlyout property, a property available to most
controls, to show the MenuFlyout as a context menu.

<Rectangle
Height="100" Width="100"
Tapped="Rectangle_Tapped">
<Rectangle.ContextFlyout>
<MenuFlyout>
<MenuFlyoutItem Text="Change color" Click="ChangeColorItem_Click" />
</MenuFlyout>
</Rectangle.ContextFlyout>
<Rectangle.Fill>
<SolidColorBrush x:Name="rectangleFill" Color="Red" />
</Rectangle.Fill>
</Rectangle>
private void ChangeColorItem_Click(object sender, RoutedEventArgs e)
{
// Change the color from red to blue or blue to red.
if (rectangleFill.Color == Windows.UI.Colors.Red)
{
rectangleFill.Color = Windows.UI.Colors.Blue;
}
else
{
rectangleFill.Color = Windows.UI.Colors.Red;
}
}

The next example is nearly identical, but instead of using the ContextFlyout property to show the MenuFlyout
as a context menu, the example uses the FlyoutBase.ShowAttachedFlyout method to show it as a menu.

<Rectangle
Height="100" Width="100"
Tapped="Rectangle_Tapped">
<FlyoutBase.AttachedFlyout>
<MenuFlyout>
<MenuFlyoutItem Text="Change color" Click="ChangeColorItem_Click" />
</MenuFlyout>
</FlyoutBase.AttachedFlyout>
<Rectangle.Fill>
<SolidColorBrush x:Name="rectangleFill" Color="Red" />
</Rectangle.Fill>
</Rectangle>

private void Rectangle_Tapped(object sender, TappedRoutedEventArgs e)
{
FlyoutBase.ShowAttachedFlyout((FrameworkElement)sender);
}

private void ChangeColorItem_Click(object sender, RoutedEventArgs e)
{
// Change the color from red to blue or blue to red.
if (rectangleFill.Color == Windows.UI.Colors.Red)
{
rectangleFill.Color = Windows.UI.Colors.Blue;
}
else
{
rectangleFill.Color = Windows.UI.Colors.Red;
}
}

Light dismiss controls, such as menus, context menus, and other flyouts, trap keyboard and gamepad focus
inside the transient UI until dismissed. To provide a visual cue for this behavior, light dismiss controls on Xbox
will draw an overlay that dims the visibility of out of scope UI. This behavior can be modified with the new
LightDismissOverlayMode property. By default, transient UIs will draw the light dismiss overlay on Xbox but
not other device families, but apps can choose to force the overlay to be always On or always Off.

<MenuFlyout LightDismissOverlayMode="Off">

Get the sample code


XAML UI basics sample
See all of the XAML controls in an interactive format.

Related articles
MenuFlyout class
Nav panes

A navigation pane (or just "nav" pane) is a pattern that allows for many top-level navigation items while
conserving screen real estate. The nav pane is widely used for mobile apps, but also works well on larger
screens. When used as an overlay, the pane remains collapsed and out of the way until the user presses the
button, which is handy for smaller screens. When used in its docked mode, the pane remains open, which allows
greater utility if there's enough screen real estate.

Important APIs
SplitView class

Is this the right pattern?


The nav pane works well for:
Apps with many top-level navigation items that are of similar type. For example, a sports app with categories
like Football, Baseball, Basketball, Soccer, and so on.
Providing a consistent navigational experience across apps. Nav pane should include only navigational
elements, not actions.
A medium-to-high number (5-10+) of top-level navigational categories.
Preserving screen real estate (as an overlay).
Navigation items that are infrequently accessed (as an overlay).
Building a nav pane
The nav pane pattern consists of a pane for navigation categories, a content area, and an optional button to open
or close the pane. The easiest way to build a nav pane is with a split view control, which comes with an empty
pane and a content area that's always visible.
To try out code implementing this pattern, download the XAML Navigation solution from GitHub.
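As a minimal sketch of the pattern (the names and categories here are illustrative), a SplitView supplies the pane
and the content area, and you provide the button:

<Grid>
    <Grid.RowDefinitions>
        <RowDefinition Height="Auto"/>
        <RowDefinition Height="*"/>
    </Grid.RowDefinitions>
    <!-- The nav pane ("hamburger") button opens and closes the pane. -->
    <Button FontFamily="Segoe MDL2 Assets" Content="&#xE700;"
            Click="NavPaneButton_Click"/>
    <SplitView x:Name="navSplitView" Grid.Row="1"
               DisplayMode="Overlay" OpenPaneLength="240">
        <SplitView.Pane>
            <!-- Pane: headers for the navigational categories. -->
            <ListView>
                <ListViewItem Content="Football"/>
                <ListViewItem Content="Baseball"/>
                <ListViewItem Content="Basketball"/>
            </ListView>
        </SplitView.Pane>
        <!-- Content area: shows the selected nav location. -->
        <Frame x:Name="contentFrame"/>
    </SplitView>
</Grid>

In the button's Click handler, toggle navSplitView.IsPaneOpen to open and close the pane.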
Pane
Headers for navigational categories go in the pane. Entry points to app settings and account management, if
applicable, also go in the pane. Navigation headers are usually a list of items for the user to choose from.

Content area
The content area is where information for the selected nav location is displayed. It can contain individual
elements or other sub-level navigation.
Button
When present, the button allows users to open and close the pane. The button remains visible in a fixed position
and does not move with the pane. We recommend placing the button in the upper-left corner of your app. The
nav pane button is visualized as three stacked horizontal lines and is commonly referred to as the "hamburger"
button.

The button is usually associated with a text string. At the top level of the app, the app title can be displayed next
to the button. At lower levels of the app, the text string may be the title of the page that the user is currently on.

Nav pane variations


The nav pane has three modes: overlay, compact and inline. An overlay collapses and expands as needed. When
compact, the pane always shows as a narrow sliver which can be expanded. An inline pane remains open by
default.
Overlay
An overlay can be used on any screen size and in either portrait or landscape orientation. In its default
(collapsed) state, the overlay takes up no real estate, with only the button shown.
Provides on-demand navigation that conserves screen real estate. Ideal for apps on phones and phablets.
The pane is hidden by default, with only the button visible.
Pressing the nav pane button opens and closes the overlay.
The expanded state is transient and is dismissed when a selection is made, when the back button is used, or
when the user taps outside of the pane.
The overlay draws over the top of content and does not reflow content.
Compact
Compact mode can be specified as CompactOverlay, which overlays content when opened, or CompactInline,
which pushes content out of its way. We recommend using CompactOverlay.
Compact panes provide some indication of the selected location while using a small amount of screen real
estate.
This mode is better suited for medium screens like tablets.
By default, the pane is closed with only navigation icons and the button visible.
Pressing the nav pane button opens and closes the pane, which behaves like overlay or inline depending on
the specified display mode.
The selection should be shown on the list icons to highlight where the user is in the navigation tree.
Inline
The navigation pane remains open. This mode is better suited for larger screens.
Supports drag-and-drop scenarios to and from the pane.
The nav pane button is not required for this state. If the button is used, then the content area is pushed out
and the content within that area will reflow.
The selection should be shown on the list items to highlight where the user is in the navigation tree.

Adaptability
To maximize usability on a variety of devices, we recommend utilizing breakpoints and adjusting the nav pane's
mode based on the width of its app window.
Small window
Less than or equal to 640px wide.
Nav pane should be in overlay mode, closed by default.
Medium window
Greater than 640px and less than or equal to 1007px wide.
Nav pane should be in compact mode, closed by default.
Large window
Greater than 1007px wide.
Nav pane should be in inline mode, opened by default.
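A sketch of these breakpoints using adaptive triggers, assuming a SplitView named navSplitView as in the earlier
sketch:

<VisualStateManager.VisualStateGroups>
    <VisualStateGroup>
        <!-- Small window: overlay, closed by default. -->
        <VisualState>
            <VisualState.StateTriggers>
                <AdaptiveTrigger MinWindowWidth="0"/>
            </VisualState.StateTriggers>
            <VisualState.Setters>
                <Setter Target="navSplitView.DisplayMode" Value="Overlay"/>
            </VisualState.Setters>
        </VisualState>
        <!-- Medium window: compact, closed by default. -->
        <VisualState>
            <VisualState.StateTriggers>
                <AdaptiveTrigger MinWindowWidth="641"/>
            </VisualState.StateTriggers>
            <VisualState.Setters>
                <Setter Target="navSplitView.DisplayMode" Value="CompactOverlay"/>
            </VisualState.Setters>
        </VisualState>
        <!-- Large window: inline, opened by default. -->
        <VisualState>
            <VisualState.StateTriggers>
                <AdaptiveTrigger MinWindowWidth="1008"/>
            </VisualState.StateTriggers>
            <VisualState.Setters>
                <Setter Target="navSplitView.DisplayMode" Value="Inline"/>
                <Setter Target="navSplitView.IsPaneOpen" Value="True"/>
            </VisualState.Setters>
        </VisualState>
    </VisualStateGroup>
</VisualStateManager.VisualStateGroups>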

Tailoring
To optimize your app's 10-foot experience, consider tailoring the nav pane by altering the visual appearance of its
navigational elements. Depending on the interaction context, it may be more important to call the user's
attention to either the selected nav item or to the focused nav item. For the 10-foot experience, where gamepad is
the most common input device, ensuring that the user can easily keep track of the currently focused item's
location on screen is particularly important.
Related topics
Split view control
Master/details
Navigation basics
Progress controls

A progress control provides feedback to the user that a long-running operation is underway. It can mean that the
user cannot interact with the app when the progress indicator is visible, and can also indicate how long the wait
time might be, depending on the indicator used.

Important APIs
ProgressBar class
IsIndeterminate property
ProgressRing class
IsActive property

Types of progress
There are two controls to show the user that an operation is underway: a ProgressBar or a ProgressRing.
The ProgressBar determinate state shows the percentage completed of a task. This should be used during an
operation whose duration is known, but whose progress should not block the user's interaction with the app.
The ProgressBar indeterminate state shows that an operation is underway, does not block user interaction with
the app, and its completion time is unknown.
The ProgressRing only has an indeterminate state, and should be used when any further user interaction is
blocked until the operation has completed.
Additionally, a progress control is read-only and not interactive, meaning that the user cannot invoke or use these
controls directly.
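For example, a minimal sketch of the three indicators:

<!-- Determinate: duration known; update Value as the task progresses. -->
<ProgressBar Minimum="0" Maximum="100" Value="50" Width="200"/>
<!-- Indeterminate: underway, completion time unknown, non-blocking. -->
<ProgressBar IsIndeterminate="True" Width="200"/>
<!-- ProgressRing: indeterminate only; use when interaction is blocked. -->
<ProgressRing IsActive="True"/>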

Top to bottom - Indeterminate ProgressBar and a determinate ProgressBar

An indeterminate ProgressRing

When to use each control


It's not always obvious what control or what state (determinate vs. indeterminate) to use when trying to show
something is happening. Sometimes a task is obvious enough that it doesn't require a progress control at all, and
sometimes even if a progress control is used, a line of text is still necessary in order to explain to the user what
operation is underway.
ProgressBar
Does the control have a defined duration or predictable end?
Use a determinate ProgressBar then, and update the percentage or value accordingly.
Can the user continue without having to monitor the operation's progress?
When a ProgressBar is in use, interaction is non-modal, typically meaning that the user is not blocked by
that operation's completion, and can continue to use the app in other ways until that aspect has completed.
Keywords
If your operation falls around these keywords, or if you're showing text alongside the progress
operation that matches these keywords, consider using a ProgressBar:
Loading...
Retrieving
Working...
ProgressRing
Will the operation cause the user to wait to continue?
If an operation requires all (or a large portion of) interaction with the app to wait until it has been
completed, then the ProgressRing is the better choice. The ProgressRing control is used for modal
interactions, meaning that the user is blocked until the ProgressRing disappears.
Is the app waiting for the user to complete a task?
If so, use a ProgressRing, as they're meant to indicate an unknown wait time for the user.
Keywords
If your operation falls around these keywords, or if you're showing text alongside the progress operation
that matches these keywords, consider using a ProgressRing:
Refreshing
Signing in...
Connecting...
No progress indication necessary
Does the user need to know that something is happening?
For example, if the app is downloading something in the background and the user didn't initiate the
download, the user doesn't necessarily need to know about it.
Is the operation a background activity that doesn't block user activity and is of minimal (but still
some) interest to the user?
Use text when your app is performing tasks that don't have to be visible all the time, but you still need to
show the status.
Does the user only care about the completion of the operation?
Sometimes it's best to show a notice only when the operation is completed, or give a visual that the
operation has been completed immediately, and run the finishing touches in the background.

Progress controls best practices


Sometimes it's best to see some visual representations of when and where to use these different progress
controls:
ProgressBar - Determinate

The first example is the determinate ProgressBar. When the duration of the operation is known (when installing,
downloading, setting up, and so on), a determinate ProgressBar is best.
ProgressBar - Indeterminate

When it is not known how long the operation will take, use an indeterminate ProgressBar. Indeterminate
ProgressBars are also good when filling a virtualized list, and for creating a smooth visual transition between an
indeterminate and a determinate ProgressBar.
Is the operation in a virtualized collection?
If so, do not put a progress indicator on each list item as they appear. Instead, use a ProgressBar and place it
at the top of the collection of items being loaded in, to show that the items are being fetched.
ProgressRing - Indeterminate

The indeterminate ProgressRing is used when any further user interaction with the app is halted, or the app is
waiting for the user's input to continue. The signing-in example above is a perfect scenario for the
ProgressRing; the user cannot continue using the app until the sign-in has completed.

Customizing a progress control


Both progress controls are rather simple, but some visual features of the controls are not obvious to customize.
Sizing the ProgressRing
The ProgressRing can be sized as large as you want, but can only be as small as 20x20 epx. In order to resize a
ProgressRing, you must set its height and width. If only the height or the width is set, the control will assume
minimum sizing (20x20 epx); conversely, if the height and width are set to two different sizes, the smaller of the
sizes will be assumed. To ensure your ProgressRing is correct for your needs, set both the height and the width
to the same value:

<ProgressRing Height="100" Width="100"/>

To make your ProgressRing visible and animated, you must set the IsActive property to true:

<ProgressRing IsActive="True" Height="100" Width="100"/>

progressRing.IsActive = true;

Coloring the progress controls


By default, the main coloring of the progress controls is set to the accent color of the system. To override this
brush, simply change the Foreground property on either control.

<ProgressRing IsActive="True" Height="100" Width="100" Foreground="Blue"/>


<ProgressBar Width="100" Foreground="Green"/>

Changing the foreground color for the ProgressRing will change the colors of the dots. The Foreground property
for the ProgressBar will change the fill color of the bar. To alter the unfilled portion of the bar, simply override
the Background property.
Showing a wait cursor
Sometimes it's best to just show a brief wait cursor, when the app or operation needs time to think, and you need
to indicate to the user that the app or area where the wait cursor is visible should not be interacted with until the
wait cursor has disappeared.

Window.Current.CoreWindow.PointerCursor = new Windows.UI.Core.CoreCursor(Windows.UI.Core.CoreCursorType.Wait, 10);

Related articles
ProgressBar class
ProgressRing class
For developers (XAML)
Adding progress controls
How to create a custom indeterminate progress bar for Windows Phone
Radio buttons

Radio buttons let users select one option from two or more choices. Each option is represented by one radio
button; a user can select only one radio button in a radio button group.
(If you're curious about the name, radio buttons are named for the channel preset buttons on a radio.)

Important APIs
RadioButton class
Checked event
IsChecked property

Is this the right control?


Use radio buttons to present users with two or more mutually exclusive options, as here.

Radio buttons add clarity and weight to very important options in your app. Use radio buttons when the options
being presented are important enough to command more screen space and where the clarity of the choice
demands very explicit options.
Radio buttons emphasize all options equally, and that may draw more attention to the options than necessary.
Consider using other controls, unless the options deserve extra attention from the user. For example, if the
default option is recommended for most users in most situations, use a drop-down list instead.
If there are only two mutually exclusive options, combine them into a single checkbox or toggle switch. For
example, use a checkbox for "I agree" instead of two radio buttons for "I agree" and "I don't agree."

When the user can select multiple options, use a checkbox or list box control instead.
Don't use radio buttons when the options are numbers that have fixed steps, like 10, 20, 30. Use a slider control
instead.
If there are more than 8 options, use a drop-down list, a single-select list box, or a list box instead.
If the available options are based on the app's current context, or can otherwise vary dynamically, use a single-
select list box instead.

Example
Radio buttons in the Microsoft Edge browser settings.

Create a radio button


Radio buttons work in groups. There are 2 ways you can group radio button controls:
Put them inside the same parent container.
Set the GroupName property on each radio button to the same value.

Note A group of radio buttons behaves like a single control when accessed via the keyboard. Only the
selected choice is accessible using the Tab key, but users can cycle through the group using the arrow keys.

In this example, the first group of radio buttons is implicitly grouped by being in the same stack panel. The
second group is divided between 2 stack panels, so they're explicitly grouped by GroupName.
<StackPanel>
<StackPanel>
<TextBlock Text="Background" Style="{ThemeResource BaseTextBlockStyle}"/>
<StackPanel Orientation="Horizontal">
<RadioButton Content="Green" Tag="Green" Checked="BGRadioButton_Checked"/>
<RadioButton Content="Yellow" Tag="Yellow" Checked="BGRadioButton_Checked"/>
<RadioButton Content="Blue" Tag="Blue" Checked="BGRadioButton_Checked"/>
<RadioButton Content="White" Tag="White" Checked="BGRadioButton_Checked" IsChecked="True"/>
</StackPanel>
</StackPanel>
<StackPanel>
<TextBlock Text="BorderBrush" Style="{ThemeResource BaseTextBlockStyle}"/>
<StackPanel Orientation="Horizontal">
<StackPanel>
<RadioButton Content="Green" GroupName="BorderBrush" Tag="Green" Checked="BorderRadioButton_Checked"/>
<RadioButton Content="Yellow" GroupName="BorderBrush" Tag="Yellow" Checked="BorderRadioButton_Checked"
IsChecked="True"/>
</StackPanel>
<StackPanel>
<RadioButton Content="Blue" GroupName="BorderBrush" Tag="Blue" Checked="BorderRadioButton_Checked"/>
<RadioButton Content="White" GroupName="BorderBrush" Tag="White" Checked="BorderRadioButton_Checked"/>
</StackPanel>
</StackPanel>
</StackPanel>
<Border x:Name="BorderExample1" BorderThickness="10" BorderBrush="#FFFFD700" Background="#FFFFFFFF" Height="50"
Margin="0,10,0,10"/>
</StackPanel>
private void BGRadioButton_Checked(object sender, RoutedEventArgs e)
{
RadioButton rb = sender as RadioButton;

if (rb != null && BorderExample1 != null)
{
string colorName = rb.Tag.ToString();
switch (colorName)
{
case "Yellow":
BorderExample1.Background = new SolidColorBrush(Colors.Yellow);
break;
case "Green":
BorderExample1.Background = new SolidColorBrush(Colors.Green);
break;
case "Blue":
BorderExample1.Background = new SolidColorBrush(Colors.Blue);
break;
case "White":
BorderExample1.Background = new SolidColorBrush(Colors.White);
break;
}
}
}

private void BorderRadioButton_Checked(object sender, RoutedEventArgs e)
{
RadioButton rb = sender as RadioButton;

if (rb != null && BorderExample1 != null)
{
string colorName = rb.Tag.ToString();
switch (colorName)
{
case "Yellow":
BorderExample1.BorderBrush = new SolidColorBrush(Colors.Gold);
break;
case "Green":
BorderExample1.BorderBrush = new SolidColorBrush(Colors.DarkGreen);
break;
case "Blue":
BorderExample1.BorderBrush = new SolidColorBrush(Colors.DarkBlue);
break;
case "White":
BorderExample1.BorderBrush = new SolidColorBrush(Colors.White);
break;
}
}
}

The radio button groups look like this.

A radio button has two states: selected or cleared. When a radio button is selected, its IsChecked property is
true. When a radio button is cleared, its IsChecked property is false. A radio button can be cleared by clicking
another radio button in the same group, but it cannot be cleared by clicking it again. However, you can clear a
radio button programmatically by setting its IsChecked property to false.
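For example (rb here is an assumed reference to a RadioButton):

// Clicking a selected radio button again won't clear it,
// but you can clear it in code.
rb.IsChecked = false;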

Recommendations
Make sure that the purpose and current state of a set of radio buttons is clear.
Always give visual feedback when the user taps a radio button.
Give visual feedback as the user interacts with radio buttons. Normal, pressed, checked, and disabled are
examples of radio button states. A user taps a radio button to activate the related option. Tapping an
activated option doesn't deactivate it, but tapping another option transfers activation to that option.
Reserve visual effects and animations for touch feedback, and for the checked state; in the unchecked state,
radio button controls should appear unused or inactive (but not disabled).
Limit the radio button's text content to a single line. You can customize the radio button's visuals to display a
description of the option in smaller font size below the main line of text.
If the text content is dynamic, consider how the button will resize and what will happen to visuals around it.
Use the default font unless your brand guidelines tell you to use another.
Enclose the radio button in a label element so that tapping the label selects the radio button.
Place the label text after the radio button control, not before or above it.
Consider customizing your radio buttons. By default, a radio button consists of two concentric circles (the
inner one is filled and shown when the radio button is checked; the outer one is stroked) and some text
content. But we encourage you to be creative. Users are comfortable interacting directly with the content of
an app. So you may choose to show the actual content on offer, whether that's presented with graphics or as
subtle textual toggle buttons.
Don't put more than 8 options in a radio button group. When you need to present more options, use a drop-
down list, list box, or a list view instead.
Don't put two radio button groups next to each other. When two radio button groups are right next to each
other, it's difficult to determine which buttons belong to which group. Use group labels to separate them.

Additional usage guidance


This illustration shows the proper way to position and space radio buttons.

Related topics
For designers
Guidelines for buttons
Guidelines for toggle switches
Guidelines for checkboxes
Guidelines for drop-down lists
Guidelines for list view and grid view controls
Guidelines for sliders
Guidelines for the select control
For developers (XAML)
Windows.UI.Xaml.Controls RadioButton class
Scroll bars

Panning and scrolling allow users to reach content that extends beyond the bounds of the screen.
A scroll viewer control is composed of as much content as will fit in the viewport, and either one or two scroll bars.
Touch gestures can be used to pan and zoom (the scroll bars fade in only during manipulation), and the pointer
can be used to scroll. The flick gesture pans with inertia.
Note Windows has two scroller visualizations, which are based on the user's input mode: scroll indicators when
using touch or gamepad; and interactive scroll bars for other input devices including mouse, keyboard, and pen.

Important APIs
ScrollViewer class
ScrollBar class

Examples
A ScrollViewer enables content to be displayed in a smaller area than its actual size. When the content of the
scroll viewer is not entirely visible, the scroll viewer displays scrollbars that the user can use to move the content
area that is visible. The area that includes all of the content of the scroll viewer is the extent. The visible area of the
content is the viewport.

Create a scroll viewer


To add vertical scrolling to your page, wrap the page content in a scroll viewer.
<Page
    x:Class="App1.MainPage"
    xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
    xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
    xmlns:local="using:App1">

    <ScrollViewer>
        <StackPanel>
            <TextBlock Text="My Page Title" Style="{StaticResource TitleTextBlockStyle}"/>
            <!-- more page content -->
        </StackPanel>
    </ScrollViewer>
</Page>

This XAML shows how to place an image in a scroll viewer and enable zooming.

<ScrollViewer ZoomMode="Enabled" MaxZoomFactor="10"
              HorizontalScrollMode="Enabled" HorizontalScrollBarVisibility="Visible"
              Height="200" Width="200">
    <Image Source="Assets/Logo.png" Height="400" Width="400"/>
</ScrollViewer>

ScrollViewer in a control template


It's typical for a ScrollViewer control to exist as a composite part of other controls. A ScrollViewer part, along with
the ScrollContentPresenter class for support, will display a viewport along with scrollbars only when the host
control's layout space is being constrained smaller than the expanded content size. This is often the case for lists,
so ListView and GridView templates always include a ScrollViewer. TextBox and RichEditBox also include a
ScrollViewer in their templates.
When a ScrollViewer part exists in a control, the host control often has built-in event handling for certain input
events and manipulations that enable the content to scroll. For example, a GridView interprets a swipe gesture and
this causes the content to scroll horizontally. The input events and raw manipulations that the host control receives
are considered handled by the control, and lower-level events such as PointerPressed won't be raised and won't
bubble to any parent containers either. You can change some of the built-in control handling by overriding a
control class and the On* virtual methods for events, or by retemplating the control. But in either case it's not
trivial to reproduce the original default behavior, which is typically there so that the control reacts in expected ways
to events and to a user's input actions and gestures. So you should consider whether you really need that input
event to fire. You might want to investigate whether there are other input events or gestures that are not being
handled by the control, and use those in your app or control interaction design.
To make it possible for controls that include a ScrollViewer to influence some of the behavior and properties that
are from within the ScrollViewer part, ScrollViewer defines a number of XAML attached properties that can be set
in styles and used in template bindings. For more info about attached properties, see Attached properties overview.
ScrollViewer XAML attached properties
ScrollViewer defines the following XAML attached properties:
ScrollViewer.BringIntoViewOnFocusChange
ScrollViewer.HorizontalScrollBarVisibility
ScrollViewer.HorizontalScrollMode
ScrollViewer.IsDeferredScrollingEnabled
ScrollViewer.IsHorizontalRailEnabled
ScrollViewer.IsHorizontalScrollChainingEnabled
ScrollViewer.IsScrollInertiaEnabled
ScrollViewer.IsVerticalRailEnabled
ScrollViewer.IsVerticalScrollChainingEnabled
ScrollViewer.IsZoomChainingEnabled
ScrollViewer.IsZoomInertiaEnabled
ScrollViewer.VerticalScrollBarVisibility
ScrollViewer.VerticalScrollMode
ScrollViewer.ZoomMode
These XAML attached properties are intended for cases where the ScrollViewer is implicit, such as when the
ScrollViewer exists in the default template for a ListView or GridView, and you want to be able to influence the
scrolling behavior of the control without accessing template parts.
For example, here's how to make the vertical scroll bars always visible for a ListView's built-in scroll viewer.

<ListView ScrollViewer.VerticalScrollBarVisibility="Visible"/>

For cases where a ScrollViewer is explicit in your XAML, as is shown in the example code, you don't need to use
attached property syntax. Just use attribute syntax, for example <ScrollViewer VerticalScrollBarVisibility="Visible"/>.
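
If you need to change these attached properties at run time rather than in markup, you can use the static setter
methods that ScrollViewer provides for them. A minimal sketch, assuming a ListView named itemsListView is
declared elsewhere in your XAML:

C#

// Make the vertical scroll bar of the ListView's internal ScrollViewer
// always visible, from code-behind. itemsListView is a hypothetical name.
ScrollViewer.SetVerticalScrollBarVisibility(itemsListView, ScrollBarVisibility.Visible);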

Do's and don'ts


Whenever possible, design for vertical scrolling rather than horizontal.
Use one-axis panning for content regions that extend beyond one viewport boundary (vertical or horizontal).
Use two-axis panning for content regions that extend beyond both viewport boundaries (vertical and
horizontal).
Use the built-in scroll functionality in the list view, grid view, combo box, list box, text input box, and hub
controls. With those controls, if there are too many items to show all at once, the user is able to scroll either
horizontally or vertically over the list of items.
If you want the user to pan in both directions around a larger area, and possibly to zoom, too, for example, if
you want to allow the user to pan and zoom over a full-sized image (rather than an image sized to fit the
screen) then place the image inside a scroll viewer.
If the user will scroll through a long passage of text, configure the scroll viewer to scroll vertically only.
Use a scroll viewer to contain one object only. Note that the one object can be a layout panel, in turn containing
any number of objects of its own.
Don't place a Pivot control inside a scroll viewer to avoid conflicts with pivot's scrolling logic.

Related topics
For developers (XAML)
ScrollViewer class
Search and find-in-page

Search is one of the top ways users can find content in your app. The guidance in this article covers elements of
the search experience, search scopes, implementation, and examples of search in context.

Important APIs
AutoSuggestBox class (XAML)

Elements of the search experience


Input. Text is the most common mode of search input and is the focus of this guidance. Other common input
modes include voice and camera, but these typically require the ability to interface with device hardware and may
require additional controls or custom UI within the app.
Zero input. Once the user has activated the input field, but before the user has entered text, you can display
what's called a "zero input canvas." The zero input canvas will commonly appear in the app canvas, so that auto-
suggest replaces this content when the user begins to input their query. Recent search history, trending searches,
contextual search suggestions, hints and tips are all good candidates for the zero input state.

Query formulation/auto-suggest. Query formulation replaces zero input content as soon as the user begins to
enter input. As the user enters a query string, they are provided with a continuously updated set of query
suggestions or disambiguation options to help them expedite the input process and formulate an effective query.
This behavior of query suggestions is built into the auto-suggest control, which also provides a way to show an
icon inside the search box (like a microphone or a commit icon). Any behavior beyond this falls to the app.
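
For example, here's a minimal sketch of supplying query suggestions as the user types (searchBox is a
hypothetical AutoSuggestBox, and GetSuggestions is a hypothetical method that queries your app's suggestion
source):

C#

// Update the suggestion list only when the change comes from user typing,
// not from a suggestion being chosen programmatically.
private void searchBox_TextChanged(AutoSuggestBox sender, AutoSuggestBoxTextChangedEventArgs args)
{
    if (args.Reason == AutoSuggestionBoxTextChangeReason.UserInput)
    {
        sender.ItemsSource = GetSuggestions(sender.Text); // GetSuggestions is hypothetical
    }
}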
Results set. Search results commonly appear directly under the search input field. While this isn't a requirement,
the juxtaposition of input and results maintains context and provides the user with immediate access to edit the
previous query or enter a new query. This connection can be further communicated by replacing the hint text with
the query that created the results set.
One method to enable efficient access to both edit the previous query and enter a new query is to highlight the
previous query when the field is reactivated. This way, any keystroke will replace the previous string, but the string
is maintained so that the user can position a cursor to edit or append the previous string.
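
One way to get this behavior, assuming the search input is a TextBox named searchInputBox (a hypothetical
name), is to select the existing text when the control regains focus:

C#

// Highlight the previous query so any keystroke replaces it,
// while the user can still reposition the cursor to edit or append.
private void searchInputBox_GotFocus(object sender, RoutedEventArgs e)
{
    searchInputBox.SelectAll();
}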
The results set can appear in any form that best communicates the content. A list view provides a good deal of
flexibility and is well-suited to most searches. A grid view works well for images or other media, and a map can be
used to communicate spatial distribution.

Search scopes
Search is a common feature, and users will encounter search UI in the shell and within many apps. Although
search entry points tend to be similarly visualized, they can provide access to results that range from broad (web
or device searches) to narrow (a user's contact list). The search entry point should be juxtaposed against the
content being searched.
Some common search scopes include:
Global and contextual/refine. Search across multiple sources of cloud and local content. Varied results include
URLs, documents, media, actions, apps, and more.
Web. Search a web index. Results include pages, entities, and answers.
My stuff. Search across device(s), cloud, social graphs, and more. Results are varied, but are constrained by the
connection to user account(s).
Use hint text to communicate search scope. Examples include:
"Search Windows and the Web"
"Search contacts list"
"Search mailbox"
"Search settings"
"Search for a place"

By effectively communicating the scope of a search input point, you can help to ensure that the user expectation
will be met by the capabilities of the search you are performing and reduce the possibility of frustration.
Implementation
For most apps, it's best to have a text input field as the search entry point, which provides a prominent visual
footprint. In addition, hint text helps with discoverability and communicating the search scope. When search is a
more secondary action, or when space is constrained, the search icon can serve as an entry point without the
accompanying input field. When visualized as an icon, be sure that there's room for a modal search box, as seen in
the below examples.
Before clicking search icon:

After clicking search icon:

Search always uses a right-pointing magnifying glass glyph for the entry point. The glyph to use is from the Segoe
UI Symbol font, hex character code 0xE094, usually at a 15 epx font size.
The search entry point can be placed in a number of different areas, and its placement communicates both search
scope and context. Searches that gather results from across an experience or external to the app are typically
located within top-level app chrome, such as global command bars or navigation.
As the search scope becomes more narrow or contextual, the placement will typically be more directly associated
with the content to be searched, such as on a canvas, as a list header, or within contextual command bars. In all
cases, the connection between search input and results or filtered content should be visually clear.
In the case of scrollable lists, it's helpful for the search input to always be visible. We recommend making the search
input sticky and having content scroll behind it.
Zero input and query formulation functionality is optional for contextual/refine searches in which the list will be
filtered in real-time by user input. Exceptions include cases where query formatting suggestions may be available,
such as inbox filtering options (to:<input string>, from: <input string>, subject: <input string>, and so on).

Example
The examples in this section show search placed in context.
Search as an action in the Windows tool bar:
Search as an input on the app canvas:

Search in a navigation pane:

Inline search is best reserved for cases where search is infrequently accessed or is highly contextual:
Guidelines for find-in-page
Find-in-page enables users to find text matches in the current body of text. Document viewers, readers, and
browsers are the most typical apps that provide find-in-page.

Do's and don'ts


Place a command bar in your app with find-in-page functionality to let the user search for on-page text. For
placement details, see the Examples section.
Apps that provide find-in-page should have all necessary controls in a command bar.
If your app includes a lot of functionality beyond find-in-page, you can provide a Find button in the top-
level command bar as an entry point to another command bar that contains all of your find-in-page
controls.
The find-in-page command bar should remain visible when the user is interacting with the touch
keyboard. The touch keyboard appears when a user taps the input box. The find-in-page command
bar should move up, so it's not obscured by the touch keyboard.
Find-in-page should remain available while the user interacts with the view. Users need to interact
with the in-view text while using find-in-page. For example, users may want to zoom in or out of a
document or pan the view to read the text. Once the user starts using find-in-page, the command bar
should remain available with a Close button to exit find-in-page.
Enable the keyboard shortcut (CTRL+F). Implement the keyboard shortcut CTRL+F to enable the user
to invoke the find-in-page command bar quickly (see the sketch after this list).
Include the basics of find-in-page functionality. These are the UI elements that you need in order to
implement find-in-page:
Input box
Previous and Next buttons
A match count
Close (desktop-only)
The view should highlight matches and scroll to show the next match on screen. Users can move
quickly through the document by using the Previous and Next buttons and by using scroll bars or
direct manipulation with touch.
Find-and-replace functionality should work alongside the basic find-in-page functionality. For apps
that have find-and-replace, ensure that find-in-page doesn't interfere with find-and-replace
functionality.
Include a match counter to indicate to the user the number of text matches there are on the page.
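Here's a minimal sketch of detecting CTRL+F in a page's KeyDown handler; ShowFindInPageBar is a hypothetical
method that displays your find-in-page command bar:

C#

using Windows.System;
using Windows.UI.Core;

// Invoke find-in-page when the user presses CTRL+F.
private void Page_KeyDown(object sender, Windows.UI.Xaml.Input.KeyRoutedEventArgs e)
{
    var ctrlState = Window.Current.CoreWindow.GetKeyState(VirtualKey.Control);
    if (e.Key == VirtualKey.F && ctrlState.HasFlag(CoreVirtualKeyStates.Down))
    {
        ShowFindInPageBar(); // hypothetical: shows the find-in-page command bar
        e.Handled = true;
    }
}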

Examples
Provide an easy way to access the find-in-page feature. In this example on a mobile UI, "Find on page" appears
after two "Add to..." commands in an expandable menu:

After selecting find-in-page, the user enters a search term. Text suggestions can appear when a search term is
being entered:
If there isn't a text match in the search, a "No results" text string should appear in the results box:

If there is a text match in the search, the first term should be highlighted in a distinct color, with succeeding
matches in a more subtle tone of that same color palette, as seen in this example:
Find-in-page has a match counter:

Implementing find-in-page
Document viewers, readers, and browsers are the likeliest app types to provide find-in-page, and enable the
user to have a full screen viewing/reading experience.
Find-in-page functionality is secondary and should be located in a command bar.
For more info about adding commands to your command bar, see Command bar.

Related articles
Auto-suggest box
Semantic zoom

Semantic zoom lets the user switch between two different views of the same content so that they can quickly
navigate through a large set of grouped data.
The zoomed-in view is the main view of the content. This is the main view where you show individual data
items.
The zoomed-out view is a higher-level view of the same content. You typically show the group headers for a
grouped data set in this view.
For example, when viewing an address book, the user could zoom out to quickly jump to the letter "W", and zoom
in on that letter to see the names associated with it.

Important APIs
SemanticZoom class
ListView class
GridView class

Features:
The size of the zoomed-out view is constrained by the bounds of the semantic zoom control.
Tapping on a group header toggles views. Pinching as a way to toggle between views can be enabled.
Active headers switch between views.

Is this the right control?


Use a SemanticZoom control when you need to show a grouped data set that's large enough that it can't all be
shown on one or two pages.
Don't confuse semantic zooming with optical zooming. While they share the same interaction and basic
behavior (displaying more or less detail based on a zoom factor), optical zoom refers to the adjustment of
magnification for a content area or object such as a photograph. For info about a control that performs optical
zooming, see the ScrollViewer control.

Examples
Photos app
Here's a semantic zoom used in the Photos app. Photos are grouped by month. Selecting a month header in the
default grid view zooms out to the month list view for quicker navigation.
Address book
An address book is another example of a data set that can be much easier to navigate using semantic zoom. You
can use the zoomed-out view to quickly jump to the letter you need (left image), while the zoomed-in view
displays the individual data items (right image).

Create a semantic zoom


The SemanticZoom control doesn't have any visual representation of its own. It's a host control that manages
the transition between 2 other controls that provide the views of your content, typically ListView or GridView
controls. You set the view controls to the ZoomedInView and ZoomedOutView properties of the
SemanticZoom.
The 3 elements you need for a semantic zoom are:
A grouped data source
A zoomed-in view that shows the item-level data.
A zoomed-out view that shows the group-level data.
Before you use a semantic zoom, you should understand how to use a list view with grouped data. For more info,
see List view and grid view and Grouping items in a list.
Note To define the zoomed-in view and the zoomed-out view of the SemanticZoom control, you can use any
two controls that implement the ISemanticZoomInformation interface. The XAML framework provides 3
controls that implement this interface: ListView, GridView, and Hub.

This XAML shows the structure of the SemanticZoom control. You assign other controls to the ZoomedInView and
ZoomedOutView properties.
XAML

<SemanticZoom>
    <SemanticZoom.ZoomedInView>
        <!-- Put the GridView for the zoomed in view here. -->
    </SemanticZoom.ZoomedInView>

    <SemanticZoom.ZoomedOutView>
        <!-- Put the ListView for the zoomed out view here. -->
    </SemanticZoom.ZoomedOutView>
</SemanticZoom>

The examples here are taken from the SemanticZoom page of the XAML UI Basics sample. You can download the
sample to see the complete code including the data source. This semantic zoom uses a GridView to supply the
zoomed-in view and a ListView for the zoomed-out view.
Define the zoomed-in view
Here's the GridView control for the zoomed-in view. The zoomed-in view should display the individual data items
in groups. This example shows how to display the items in a grid with an image and text.
XAML

<SemanticZoom.ZoomedInView>
    <GridView ItemsSource="{x:Bind cvsGroups.View}"
              ScrollViewer.IsHorizontalScrollChainingEnabled="False"
              SelectionMode="None"
              ItemTemplate="{StaticResource ZoomedInTemplate}">
        <GridView.GroupStyle>
            <GroupStyle HeaderTemplate="{StaticResource ZoomedInGroupHeaderTemplate}"/>
        </GridView.GroupStyle>
    </GridView>
</SemanticZoom.ZoomedInView>

The look of the group headers is defined in the ZoomedInGroupHeaderTemplate resource. The look of the items is
defined in the ZoomedInTemplate resource.
XAML
<DataTemplate x:Key="" x:DataType="data:ControlInfoDataGroup">
<TextBlock Text="{x:Bind Title}"
Foreground="{ThemeResource ApplicationForegroundThemeBrush}"
Style="{StaticResource SubtitleTextBlockStyle}"/>
</DataTemplate>

<DataTemplate x:Key="ZoomedInTemplate" x:DataType="data:ControlInfoDataItem">


<StackPanel Orientation="Horizontal" MinWidth="200" Margin="12,6,0,6">
<Image Source="{x:Bind ImagePath}" Height="80" Width="80"/>
<StackPanel Margin="20,0,0,0">
<TextBlock Text="{x:Bind Title}"
Style="{StaticResource BaseTextBlockStyle}"/>
<TextBlock Text="{x:Bind Subtitle}"
TextWrapping="Wrap" HorizontalAlignment="Left"
Width="300" Style="{StaticResource BodyTextBlockStyle}"/>
</StackPanel>
</StackPanel>
</DataTemplate>

Define the zoomed-out view


This XAML defines a ListView control for the zoomed-out view. This example shows how to display the group
headers as text in a list.
XAML

<SemanticZoom.ZoomedOutView>
    <ListView ItemsSource="{x:Bind cvsGroups.View.CollectionGroups}"
              SelectionMode="None"
              ItemTemplate="{StaticResource ZoomedOutTemplate}" />
</SemanticZoom.ZoomedOutView>

The look is defined in the ZoomedOutTemplate resource.


XAML

<DataTemplate x:Key="ZoomedOutTemplate" x:DataType="wuxdata:ICollectionViewGroup">


<TextBlock Text="{x:Bind Group.(data:ControlInfoDataGroup.Title)}"
Style="{StaticResource SubtitleTextBlockStyle}" TextWrapping="Wrap"/>
</DataTemplate>

Synchronize the views


The zoomed-in view and zoomed-out view should be synchronized, so if a user selects a group in the zoomed-out
view, the details of that same group are shown in the zoomed-in view. You can use a CollectionViewSource or
add code to synchronize the views.
Any controls that you bind to the same CollectionViewSource always have the same current item. If both views
use the same CollectionViewSource as their data source, the CollectionViewSource synchronizes the views
automatically. For more info, see CollectionViewSource.
If you don't use a CollectionViewSource to synchronize the views, you should handle the ViewChangeStarted
event and synchronize the items in the event handler, as shown here.
XAML

<SemanticZoom x:Name="semanticZoom" ViewChangeStarted="SemanticZoom_ViewChangeStarted">

C#
private void SemanticZoom_ViewChangeStarted(object sender, SemanticZoomViewChangedEventArgs e)
{
    if (e.IsSourceZoomedInView == false)
    {
        e.DestinationItem.Item = e.SourceItem.Item;
    }
}

Recommendations
When using semantic zoom in your app, be sure that the item layout and panning direction don't change
based on the zoom level. Layouts and panning interactions should be consistent and predictable across zoom
levels.
Semantic zoom enables the user to jump quickly to content, so limit the number of pages/screens to three in
the zoomed-out mode. Too much panning diminishes the practicality of semantic zoom.
Avoid using semantic zoom to change the scope of the content. For example, a photo album shouldn't switch
to a folder view in File Explorer.
Use a structure and semantics that are essential to the view.
Use group names for items in a grouped collection.
Use sort ordering for a collection that is ungrouped but sorted, such as chronological for dates or alphabetical
for a list of names.

Get the sample code


XAML UI Basics sample

Related articles
Navigation design basics
List view and grid view
List view item templates
Sliders

A slider is a control that lets the user select from a range of values by moving a thumb control along a track.

Important APIs
Slider class
Value property
ValueChanged event

Is this the right control?


Use a slider when you want your users to be able to set defined, contiguous values (such as volume or
brightness) or a range of discrete values (such as screen resolution settings).
A slider is a good choice when you know that users think of the value as a relative quantity, not a numeric value.
For example, users think about setting their audio volume to low or medium, not about setting the value to 2 or
5.
Don't use a slider for binary settings. Use a toggle switch instead.
Here are some additional factors to consider when deciding whether to use a slider:
Does the setting seem like a relative quantity? If not, use radio buttons or a list box.
Is the setting an exact, known numeric value? If so, use a numeric text box.
Would a user benefit from instant feedback on the effect of setting changes? If so, use a slider. For
example, users can choose a color more easily by immediately seeing the effect of changes to hue, saturation,
or luminosity values.
Does the setting have a range of four or more values? If not, use radio buttons.
Can the user change the value? Sliders are for user interaction. If a user can't ever change the value, use
read-only text instead.
If you are deciding between a slider and a numeric text box, use a numeric text box if:
Screen space is tight.
The user is likely to prefer using the keyboard.
Use a slider if:
Users will benefit from instant feedback.

Examples
A slider to control the volume on Windows Phone.
A slider to change text size in Windows display settings.

Create a slider
Here's how to create a slider in XAML.

<Slider x:Name="volumeSlider" Header="Volume" Width="200"
        ValueChanged="Slider_ValueChanged"/>

Here's how to create a slider in code.

Slider volumeSlider = new Slider();
volumeSlider.Header = "Volume";
volumeSlider.Width = 200;
volumeSlider.ValueChanged += Slider_ValueChanged;

// Add the slider to a parent container in the visual tree.
stackPanel1.Children.Add(volumeSlider);

You get and set the value of the slider from the Value property. To respond to value changes, you can use data
binding to bind to the Value property, or handle the ValueChanged event.
private void Slider_ValueChanged(object sender, RangeBaseValueChangedEventArgs e)
{
    Slider slider = sender as Slider;
    if (slider != null)
    {
        media.Volume = slider.Value;
    }
}

Recommendations
Size the control so that users can easily set the value they want. For settings with discrete values, make sure
the user can easily select any value using the mouse. Make sure the endpoints of the slider always fit within
the bounds of a view.
Give immediate feedback while or after a user makes a selection (when practical). For example, the Windows
volume control beeps to indicate the selected audio volume.
Use labels to show the range of values. Exception: If the slider is vertically oriented and the top label is
Maximum, High, More, or equivalent, you can omit the other labels because the meaning is clear.
Disable all associated labels or feedback visuals when you disable the slider.
Consider the direction of text when setting the flow direction and/or orientation of your slider. Script flows
from left to right in some languages, and from right to left in others.
Don't use a slider as a progress indicator.
Don't change the size of the slider thumb from the default size.
Don't create a continuous slider if the range of values is large and users will most likely select one of several
representative values from the range. Instead, use those values as the only steps allowed. For example, if the time
value might be up to 1 month but users only need to pick from 1 minute, 1 hour, 1 day, or 1 month, then
create a slider with only 4 step points.

Additional usage guidance


Choosing the right layout: horizontal or vertical
You can orient your slider horizontally or vertically. Use these guidelines to determine which layout to use.
Use a natural orientation. For example, if the slider represents a real-world value that is normally shown
vertically (such as temperature), use a vertical orientation.
If the control is used to seek within media, like in a video app, use a horizontal orientation.
When using a slider in a page that can be panned in one direction (horizontally or vertically), use a different
orientation for the slider than the panning direction. Otherwise, users might swipe the slider and change its
value accidentally when they try to pan the page.
If you're still not sure which orientation to use, use the one that best fits your page layout.
Range direction
The range direction is the direction you move the slider when you slide it from its current value to its max value.
For a vertical slider, put the largest value at the top of the slider, regardless of reading direction. For example, for
a volume slider, always put the maximum volume setting at the top of the slider. For other types of values
(such as days of the week), follow the reading direction of the page.
For horizontal styles, put the lower value on the left side of the slider for left-to-right page layout, and on the
right for right-to-left page layout.
The one exception to the previous guideline is for media seek bars: always put the lower value on the left side
of the slider.
Steps and tick marks
Use step points if you don't want the slider to allow arbitrary values between min and max. For example, if you
use a slider to specify the number of movie tickets to buy, don't allow floating point values. Give it a step value
of 1.
If you specify steps (also known as snap points), make sure that the final step aligns to the slider's max value.
Use tick marks when you want to show users the location of major or significant values. For example, a slider
that controls a zoom might have tick marks for 50%, 100%, and 200% (see the sketch after this list).
Show tick marks when users need to know the approximate value of the setting.
Show tick marks and a value label when users need to know the exact value of the setting they choose,
without interacting with the control. Otherwise, they can use the value tooltip to see the exact value.
Always show tick marks when step points aren't obvious. For example, if the slider is 200 pixels wide and has
200 snap points, you can hide the tick marks because users won't notice the snapping behavior. But if there
are only 10 snap points, show tick marks.
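
For example, here's a minimal sketch of the zoom slider described above, configured in code (zoomSlider is a
hypothetical name; StepFrequency, TickFrequency, and TickPlacement are Slider properties):

C#

// Snap to 25% steps and show tick marks at major values.
Slider zoomSlider = new Slider();
zoomSlider.Minimum = 50;
zoomSlider.Maximum = 200;
zoomSlider.StepFrequency = 25;  // allowed values: 50, 75, 100, ..., 200
zoomSlider.TickFrequency = 50;  // tick marks at 50, 100, 150, 200
zoomSlider.TickPlacement = Windows.UI.Xaml.Controls.Primitives.TickPlacement.BottomRight;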
Labels
Slider labels
The slider label indicates what the slider is used for.
Use a label with no ending punctuation (this is the convention for all control labels).
Position labels above the slider when the slider is in a form that places most of its labels above their
controls.
Position labels to the sides when the slider is in a form that places most of its labels to the side of their
controls.
Avoid placing labels below the slider because the user's finger might occlude the label when the user
touches the slider.
Range labels
The range, or fill, labels describe the slider's minimum and maximum values.
Label the two ends of the slider range, unless a vertical orientation makes this unnecessary.
Use only one word, if possible, for each label.
Don't use ending punctuation.
Make sure these labels are descriptive and parallel. Examples: Maximum/Minimum, More/Less,
Low/High, Soft/Loud.
Value labels
A value label displays the current value of the slider.
If you need a value label, display it below the slider.
Center the text relative to the control and include the units (such as pixels).
Since the slider's thumb is covered during scrubbing, consider showing the current value some other
way, with a label or other visual. A slider setting text size could render some sample text of the right size
beside the slider.
Appearance and interaction
A slider is composed of a track and a thumb. The track is a bar (which can optionally show various styles of tick
marks) representing the range of values that can be input. The thumb is a selector, which the user can position by
either tapping the track or by scrubbing back and forth on it.
A slider has a large touch target. To maintain touch accessibility, a slider should be positioned far enough away
from the edge of the display.
When you're designing a custom slider, consider ways to present all the necessary info to the user with as little
clutter as possible. Use a value label if a user needs to know the units in order to understand the setting; find
creative ways to represent these values graphically. A slider that controls volume, for example, could display a
speaker graphic without sound waves at the minimum end of the slider, and a speaker graphic with sound waves
at the maximum end.

Related topics
Toggle switches
Slider class
Split view control

A split view control has an expandable/collapsible pane and a content area.

Important APIs
SplitView class

Here is an example of the Microsoft Edge app using SplitView to show its Hub.

A split view's content area is always visible. The pane can expand and collapse or remain in an open state, and can
present itself from either the left side or right side of an app window. The pane has four modes:
Overlay
The pane is hidden until opened. When open, the pane overlays the content area.
Inline
The pane is always visible and doesn't overlay the content area. The pane and content areas divide the
available screen real estate.
CompactOverlay
A narrow portion of the pane is always visible in this mode, which is just wide enough to show icons. The
default closed pane width is 48px, which can be modified with CompactPaneLength. If the pane is opened, it
will overlay the content area.
CompactInline
A narrow portion of the pane is always visible in this mode, which is just wide enough to show icons. The
default closed pane width is 48px, which can be modified with CompactPaneLength. If the pane is opened, it
will reduce the space available for content, pushing the content out of its way.

Is this the right control?


The split view control can be used to make a navigation pane. To build this pattern, add an expand/collapse button
(the "hamburger" button) and a list view representing the nav items.
The split view control can also be used to create any "drawer" experience where users can open and close the
supplemental pane.

Create a split view


Here's a SplitView control with an open Pane appearing inline next to the Content.

<SplitView IsPaneOpen="True"
           DisplayMode="Inline"
           OpenPaneLength="296">
    <SplitView.Pane>
        <TextBlock Text="Pane"
                   FontSize="24"
                   VerticalAlignment="Center"
                   HorizontalAlignment="Center"/>
    </SplitView.Pane>

    <Grid>
        <TextBlock Text="Content"
                   FontSize="24"
                   VerticalAlignment="Center"
                   HorizontalAlignment="Center"/>
    </Grid>
</SplitView>
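
To build the nav pane pattern, you typically toggle the pane from the hamburger button's Click handler. A
minimal sketch, assuming the SplitView above is named splitView and a Button named hamburgerButton exists
in your XAML:

C#

// Toggle the SplitView pane open and closed from a hamburger button.
private void hamburgerButton_Click(object sender, RoutedEventArgs e)
{
    splitView.IsPaneOpen = !splitView.IsPaneOpen;
}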

Related topics
Nav pane pattern
List view
Pivot and tabs

The Pivot control and related tabs pattern are used for navigating frequently accessed, distinct content categories.
Pivots allow for navigation between two or more content panes and rely on text headers to articulate the
different sections of content.

Tabs are a visual variant of Pivot that use a combination of icons and text or just icons to articulate section
content. Tabs are built using the Pivot control. The Pivot sample shows how to customize the Pivot control into
the tabs pattern.

Important APIs
Pivot class

The pivot pattern


When building an app with pivot, there are a few key design variables to consider.
Header labels. Headers can have an icon with text, icon only, or text only.
Header alignment. Headers can be left-justified or centered.
Top-level or sub-level navigation. Pivots can be used for either level of navigation. Optionally, navigation
pane can serve as the primary level with pivot acting as secondary.
Touch gesture support. For devices that support touch gestures, you can use one of two interaction sets to
navigate between content categories:
1. Tap on a tab/pivot header to navigate to that category.
2. Swipe left or right on the content area to navigate to the adjacent category.

Examples
Pivot control on phone.
Tabs pattern in the Alarms & Clock app.

Create a pivot control


The Pivot control comes with the basic functionality described in this section.
This XAML creates a basic pivot control with 3 sections of content.
<Pivot x:Name="rootPivot" Title="Pivot Title">
    <PivotItem Header="Pivot Item 1">
        <!--Pivot content goes here-->
        <TextBlock Text="Content of pivot item 1."/>
    </PivotItem>
    <PivotItem Header="Pivot Item 2">
        <!--Pivot content goes here-->
        <TextBlock Text="Content of pivot item 2."/>
    </PivotItem>
    <PivotItem Header="Pivot Item 3">
        <!--Pivot content goes here-->
        <TextBlock Text="Content of pivot item 3."/>
    </PivotItem>
</Pivot>

Pivot items
Pivot is an ItemsControl, so it can contain a collection of items of any type. Any item you add to the Pivot that is
not explicitly a PivotItem is implicitly wrapped in a PivotItem. Because a Pivot is often used to navigate between
pages of content, it's common to populate the Items collection directly with XAML UI elements. Or, you can set
the ItemsSource property to a data source. Items bound in the ItemsSource can be of any type, but if they aren't
explicitly PivotItems, you must define an ItemTemplate and HeaderTemplate to specify how the items are
displayed.
You can use the SelectedItem property to get or set the Pivot's active item. Use the SelectedIndex property to
get or set the index of the active item.
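
For example, here's a minimal sketch of responding when the user navigates to a different section (rootPivot is
the control from the XAML above; SelectionChanged is a Pivot event):

C#

// Respond when the active pivot item changes.
private void rootPivot_SelectionChanged(object sender, SelectionChangedEventArgs e)
{
    int index = rootPivot.SelectedIndex;                   // index of the active item
    PivotItem item = rootPivot.SelectedItem as PivotItem;  // the active item itself
    // Load or refresh the content for the selected section here.
}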
Pivot headers
You can use the LeftHeader and RightHeader properties to add other controls to the Pivot header.
Pivot interaction
The control features these touch gesture interactions:
Tapping on a pivot item header navigates to that header's section content.
Swiping left or right on a pivot item header navigates to the adjacent section.
Swiping left or right on section content navigates to the adjacent section.

The control comes in two modes:


Stationary
Pivots are stationary when all pivot headers fit within the allowed space.
Tapping on a pivot label navigates to the corresponding page, though the pivot itself will not move. The active
pivot is highlighted.
Carousel
Pivots carousel when all pivot headers don't fit within the allowed space.
Tapping a pivot label navigates to the corresponding page, and the active pivot label will carousel into the first
position.
Pivot items in a carousel loop from last to first pivot section.

Recommendations
Base the alignment of tab/pivot headers on screen size. For screen widths below 720 epx, center-aligning
usually works better, while left-aligning for screen widths above 720 epx is recommended in most cases.
Avoid using more than 5 headers when using carousel (round-trip) mode, as looping more than 5 can
become confusing.
Use the tabs pattern only if your pivot items have distinct icons.
Include text in pivot item headers to help users understand the meaning of each pivot section. Icons are not
necessarily self-explanatory to all users.

Get the sample code


Pivot sample
See how to customize the Pivot control into the tabs pattern.
XAML UI basics sample
See all of the XAML controls in an interactive format.

Related topics
Navigation design basics
Pivot sample
Text controls

Text controls consist of text input boxes, password boxes, auto-suggest boxes, and text blocks. The XAML
framework provides several controls for rendering, entering, and editing text, and a set of properties for
formatting the text.
The controls for displaying read-only text are TextBlock and RichTextBlock.
The controls for text entry and editing are: TextBox, AutoSuggestBox, PasswordBox, and RichEditBox.

Important APIs
AutoSuggestBox class
PasswordBox class
RichEditBox class
RichTextBlock class
TextBlock class
TextBox class

Is this the right control?


The text control you use depends on your scenario. Use this info to pick the right text control to use in your
app.
Render read-only text
Use a TextBlock to display most read-only text in your app. You can use it to display single-line or multi-line
text, inline hyperlinks, and text with formatting like bold, italic, or underlined.
TextBlock is typically easier to use and provides better text rendering performance than RichTextBlock, so it's
preferred for most app UI text. You can easily access and use text from a TextBlock in your app by getting the
value of the Text property.
It also provides many of the same formatting options for customizing how your text is rendered. Although
you can put line breaks in the text, TextBlock is designed to display a single paragraph and doesn't support
text indentation.
Use a RichTextBlock when you need support for multiple paragraphs, multi-column text or other complex
text layouts, or inline UI elements like images. RichTextBlock provides several features for advanced text
layout.
The content property of RichTextBlock is the Blocks property, which supports paragraph based text via the
Paragraph element. It doesn't have a Text property that you can use to easily access the control's text content
in your app.
Text input
Use a TextBox control to let a user enter and edit unformatted text, such as in a form. You can use the Text
property to get and set the text in a TextBox.
You can make a TextBox read-only, but this should be a temporary, conditional state. If the text is never
editable, consider using a TextBlock instead.
Use a PasswordBox control to collect a password or other private data, such as a Social Security number. A
password box is a text input box that conceals the characters typed in it for the sake of privacy. A password
box looks like a text input box, except that it renders bullets in place of the text that has been entered. The
bullet character can be customized.
Use an AutoSuggestBox control to show the user a list of suggestions to choose from as they type. An auto-
suggest box is a text entry box that triggers a list of basic search suggestions. Suggested terms can draw from
a combination of popular search terms and historical user-entered terms.
You should also use an AutoSuggestBox control to implement a search box.
Use a RichEditBox to display and edit text files. You don't use a RichEditBox to get user input into your app
the way you use other standard text input boxes. Rather, you use it to work with text files that are separate
from your app. You typically save text entered into a RichEditBox to a .rtf file.
Is text input the best option?
There are many ways you can get user input in your app. These questions will help answer whether one of the
standard text input boxes or another control is the best fit for getting user input.
Is it practical to efficiently enumerate all valid values? If so, consider using one of the selection
controls, such as a check box, drop-down list, list box, radio button, slider, toggle switch, date picker, or
time picker.
Is there a fairly small set of valid values? If so, consider a drop-down list or a list box, especially if the
values are more than a few characters long.
Is the valid data completely unconstrained? Or is the valid data only constrained by format
(constrained length or character types)? If so, use a text input control. You can limit the number of
characters that can be entered, and you can validate the format in your app code.
Does the value represent a data type that has a specialized common control? If so, use the
appropriate control instead of a text input control. For example, use a DatePicker instead of a text input
control to accept a date entry.
If the data is strictly numeric:
Is the value being entered approximate and/or relative to another quantity on the same
page? If so, use a slider.
Would the user benefit from instant feedback on the effect of setting changes? If so, use a
slider, possibly with an accompanying control.
Is the value entered likely to be adjusted after the result is observed, such as with volume
or screen brightness? If so, use a slider.

Examples
Text box

Auto suggest box


Password box

Create a text control


See these articles for info and examples specific to each text control.
AutoSuggestBox
PasswordBox
RichEditBox
RichTextBlock
TextBlock
TextBox

Font and style guidelines


See these articles for font guidelines:
Font guidelines
Segoe MDL2 icon list and guidelines

Choose the right keyboard for your text control


Applies to: TextBox, PasswordBox, RichEditBox
To help users to enter data using the touch keyboard, or Soft Input Panel (SIP), you can set the input scope of
the text control to match the kind of data the user is expected to enter.

Tip This info applies only to the SIP. It does not apply to hardware keyboards or the On-Screen Keyboard
available in the Windows Ease of Access options.

The touch keyboard can be used for text entry when your app runs on a device with a touch screen. The touch
keyboard is invoked when the user taps on an editable input field, such as a TextBox or RichEditBox. You can
make it much faster and easier for users to enter data in your app by setting the input scope of the text control
to match the kind of data you expect the user to enter. The input scope provides a hint to the system about the
type of text input expected by the control so the system can provide a specialized touch keyboard layout for
the input type.
For example, if a text box is used only to enter a 4-digit PIN, set the InputScope property to Number. This tells
the system to show the number keypad layout, which makes it easier for the user to enter the PIN.
Important
The input scope does not cause any input validation to be performed, and does not prevent the user from
providing any input through a hardware keyboard or other input device. You are still responsible for
validating the input in your code as needed.
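
Here's a minimal sketch of that PIN scenario in code (pinTextBox is a hypothetical TextBox; the input scope
types are in Windows.UI.Xaml.Input):

C#

using Windows.UI.Xaml.Input;

// Show the number keypad layout on the touch keyboard for PIN entry.
var scope = new InputScope();
scope.Names.Add(new InputScopeName(InputScopeNameValue.Number));
pinTextBox.InputScope = scope; // pinTextBox is a hypothetical TextBox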

For more info, see Use input scope to change the touch keyboard.

Color fonts
Applies to: TextBlock, RichTextBlock, TextBox, RichEditBox
Windows has the ability for fonts to include multiple colored layers for each glyph. For example, the Segoe UI
Emoji font defines color versions of the Emoticon and other Emoji characters.
The standard and rich text controls support displaying color fonts. By default, the IsColorFontEnabled property
is true and fonts with these additional layers are rendered in color. The default color font on the system is
Segoe UI Emoji and the controls will fall back to this font to display the glyphs in color.

<TextBlock FontSize="30">Hello ☺</TextBlock>

The rendered text looks like this:

For more info, see the IsColorFontEnabled property.

Guidelines for line and paragraph separators


Applies to: TextBlock, RichTextBlock, multi-line TextBox, RichEditBox
Use the line separator character (0x2028) and the paragraph separator character (0x2029) to divide plain text.
A new line is begun after each line separator. A new paragraph is begun after each paragraph separator.
It isn't necessary to start the first line or paragraph in a file with these characters or to end the last line or
paragraph with them; doing so indicates that there is an empty line or paragraph in that location.
Your app can use the line separator to indicate an unconditional end of line. However, line separators do not
correspond to the separate carriage return and linefeed characters, or to a combination of these characters.
Line separators must be processed separately from carriage return and linefeed characters.
Your app can insert a paragraph separator between paragraphs of text. Use of this separator allows creation
of plain text files that can be formatted with different line widths on different operating systems. The target
system can ignore any line separators and break paragraphs only at the paragraph separators.
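
For example, here's a minimal sketch of building plain text with these separators in C# (outputTextBlock is a
hypothetical TextBlock; \u2028 and \u2029 are the line and paragraph separator characters):

C#

// Divide plain text with the Unicode line and paragraph separators.
string text = "First line\u2028Second line, same paragraph\u2029A new paragraph";
outputTextBlock.Text = text;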

Guidelines for spell checking


Applies to: TextBox, RichEditBox
During text entry and editing, spell checking informs the user that a word is misspelled by highlighting it with
a red squiggle and provides a way for the user to correct the misspelling.
Here's an example of the built-in spell checker:
Use spell checking with text input controls for these two purposes:
To auto-correct misspellings
The spell checking engine automatically corrects misspelled words when it's confident about the
correction. For example, the engine automatically changes "teh" to "the."
To show alternate spellings
When the spell checking engine is not confident about the corrections, it adds a red line under the
misspelled word and displays the alternates in a context menu when you tap or right-click the word.
Use spell checking to help users as they enter words or sentences into text input controls. Spell
checking works with touch, mouse, and keyboard inputs.
Don't use spell checking when a word is not likely to be in the dictionary or if users wouldn't value spell
checking. For example, don't turn it on if the text box is intended to capture a telephone number or name.
Don't disable spell checking just because the current spell checking engine doesn't support your app
language. When the spell checker doesn't support a language, it doesn't do anything, so there's no harm in
leaving the option on. Also, some users might use an Input Method Editor (IME) to enter another language
into your app, and that language might be supported. For example, when building a Japanese language
app, even though the spell checking engine might not currently recognize that language, don't turn spell
checking off. The user may switch to an English IME and type English into the app; if spell checking is
enabled, the English will get spell checked.
For TextBox and RichEditBox controls, spell checking is turned on by default. You can turn it off by setting the
IsSpellCheckEnabled property to false.
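
For example (commentTextBox is a hypothetical TextBox used to capture text, such as a name, that shouldn't be
spell checked):

C#

// Turn off spell checking for input that isn't dictionary text.
commentTextBox.IsSpellCheckEnabled = false;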

Related articles
For designers
Font guidelines
Segoe MDL2 icon list and guidelines
Adding search
For developers (XAML)
TextBox class
Windows.UI.Xaml.Controls PasswordBox class
String.Length property
Labels

A label is the name or title of a control or a group of related controls.

Important APIs
Header property
TextBlock class

In XAML, many controls have a built-in Header property that you use to display the label. For controls that don't
have a Header property, or to label groups of controls, you can use a TextBlock instead.

Example

Recommendations
Use a label to indicate to the user what they should enter into an adjacent control. You can also label a group of
related controls, or display instructional text near a group of related controls.
When labeling controls, write the label as a noun or a concise noun phrase, not as a sentence, and not as
instructional text. Avoid colons or other punctuation.
When you do have instructional text in a label, you can be more generous with text-string length and also use
punctuation.

Get the sample code


XAML UI basics sample

Related topics
Text controls
For developers
TextBox.Header property
PasswordBox.Header property
ToggleSwitch.Header property
DatePicker.Header property
TimePicker.Header property
Slider.Header property
ComboBox.Header property
RichEditBox.Header property
TextBlock class
Password box

A password box is a text input box that conceals the characters typed into it for the purpose of privacy. A password
box looks like a text box, except that it renders placeholder characters in place of the text that has been entered.
You can configure the placeholder character.
By default, the password box provides a way for the user to view their password by holding down a reveal button.
You can disable the reveal button, or provide an alternate mechanism to reveal the password, such as a check box.

Important APIs
PasswordBox class
Password property
PasswordChar property
PasswordRevealMode property
PasswordChanged event

Is this the right control?


Use a PasswordBox control to collect a password or other private data, such as a Social Security number.
For more info about choosing the right text control, see the Text controls article.

Examples
The password box has several states, including these notable ones.
A password box at rest can show hint text so that the user knows its purpose:

When the user types in a password box, the default behavior is to show bullets that hide the text being entered:

Pressing the "reveal" button on the right gives a peek at the password text being entered:

Create a password box


Use the Password property to get or set the contents of the PasswordBox. You can do this in the handler for the
PasswordChanged event to perform validation while the user enters the password. Or, you can use another event,
like a button Click, to perform validation after the user completes the text entry.
Here's the XAML for a password box control that demonstrates the default look of the PasswordBox. When the
user enters a password, you check to see if it's the literal value, "Password". If it is, you display a message to the
user.

<StackPanel>
    <PasswordBox x:Name="passwordBox" Width="200" MaxLength="16"
                 PasswordChanged="passwordBox_PasswordChanged"/>
    <TextBlock x:Name="statusText" Margin="10" HorizontalAlignment="Center" />
</StackPanel>

private void passwordBox_PasswordChanged(object sender, RoutedEventArgs e)
{
    if (passwordBox.Password == "Password")
    {
        statusText.Text = "'Password' is not allowed as a password.";
    }
    else
    {
        statusText.Text = string.Empty;
    }
}

Here's the result when this code runs and the user enters "Password".

Password character
You can change the character used to mask the password by setting the PasswordChar property. Here, the default
bullet is replaced with an asterisk.

<PasswordBox x:Name="passwordBox" Width="200" PasswordChar="*"/>

The result looks like this.

Headers and placeholder text


You can use the Header and PlaceholderText properties to provide context for the PasswordBox. This is especially
useful when you have multiple boxes, such as on a form to change a password.

<PasswordBox x:Name="passwordBox" Width="200" Header="Password" PlaceholderText="Enter your password"/>

Maximum length
Specify the maximum number of characters that the user can enter by setting the MaxLength property. There is no
property to specify a minimum length, but you can check the password length, and perform any other validation,
in your app code.
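
For example, here's a minimal sketch of enforcing a minimum length in the PasswordChanged handler
(passwordBox and statusText are the controls from the earlier example; the eight-character minimum is an
assumed value for illustration):

C#

// MaxLength enforces only the maximum; check the minimum in app code.
private void passwordBox_PasswordChanged(object sender, RoutedEventArgs e)
{
    statusText.Text = passwordBox.Password.Length < 8
        ? "Password must be at least 8 characters."
        : string.Empty;
}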
Password reveal mode
The PasswordBox has a built-in button that the user can press to display the password text. When the user
releases it, the password is automatically hidden again.

Peek mode
By default, the password reveal button (or "peek" button) is shown. The user must continuously press the button to
view the password, so that a high level of security is maintained.
The value of the PasswordRevealMode property is not the only factor that determines whether a password reveal
button is visible to the user. Other factors include whether the control is displayed above a minimum width,
whether the PasswordBox has focus, and whether the text entry field contains at least one character. The password
reveal button is shown only when the PasswordBox receives focus for the first time and a character is entered. If
the PasswordBox loses focus and then regains focus, the reveal button is not shown again unless the password is
cleared and character entry starts over.

Caution Prior to Windows 10, the password reveal button was not shown by default. If the security of your
app requires that the password is always obscured, be sure to set PasswordRevealMode to Hidden.

Hidden and Visible modes


The other PasswordRevealMode enumeration values, Hidden and Visible, hide the password reveal button and
let you programmatically manage whether the password is obscured.
To always obscure the password, set PasswordRevealMode to Hidden. Unless you need the password to be always
obscured, you can provide a custom UI to let the user toggle the PasswordRevealMode between Hidden and
Visible.
In previous versions of Windows Phone, PasswordBox used a check box to toggle whether the password was
obscured. You can create a similar UI for your app, as shown in the following example. You can also use other
controls, like ToggleButton, to let the user switch modes.
This example shows how to use a CheckBox to let a user switch the reveal mode of a PasswordBox.

<StackPanel Width="200">
    <PasswordBox Name="passwordBox1"
                 PasswordRevealMode="Hidden"/>
    <CheckBox Name="revealModeCheckBox" Content="Show password"
              IsChecked="False"
              Checked="CheckBox_Changed" Unchecked="CheckBox_Changed"/>
</StackPanel>

private void CheckBox_Changed(object sender, RoutedEventArgs e)
{
    if (revealModeCheckBox.IsChecked == true)
    {
        passwordBox1.PasswordRevealMode = PasswordRevealMode.Visible;
    }
    else
    {
        passwordBox1.PasswordRevealMode = PasswordRevealMode.Hidden;
    }
}
This PasswordBox looks like this.

Choose the right keyboard for your text control


To help users to enter data using the touch keyboard, or Soft Input Panel (SIP), you can set the input scope of the
text control to match the kind of data the user is expected to enter. PasswordBox supports only the Password and
NumericPin input scope values. Any other value is ignored.
For more info about how to use input scopes, see Use input scope to change the touch keyboard.

Recommendations
Use a label or placeholder text if the purpose of the password box isn't clear. A label is visible whether or not
the text input box has a value. Placeholder text is displayed inside the text input box and disappears once a
value has been entered.
Give the password box an appropriate width for the range of values that can be entered. Word length varies
between languages, so take localization into account if you want your app to be world-ready.
Don't put another control right next to a password input box. The password box has a password reveal button
for users to verify the passwords they have typed, and having another control right next to it might make users
accidentally reveal their passwords when they try to interact with the other control. To prevent this from
happening, put some spacing between the password input box and the other control, or put the other control
on the next line.
Consider presenting two password boxes for account creation: one for the new password, and a second to
confirm the new password.
Only show a single password box for logins.
When a password box is used to enter a PIN, consider providing an instant response as soon as the last number
is entered instead of using a confirmation button.

Related articles
Text controls
Guidelines for spell checking
Adding search
Guidelines for text input
TextBox class
Windows.UI.Xaml.Controls PasswordBox class
String.Length property
Rich edit box

You can use a RichEditBox control to enter and edit rich text documents that contain formatted text, hyperlinks,
and images. You can make a RichEditBox read-only by setting its IsReadOnly property to true.
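For instance, a read-only RichEditBox can be declared like this (a minimal sketch; the header text is illustrative):

<RichEditBox IsReadOnly="True" Header="License agreement"/>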

Important APIs
RichEditBox class
Document property
IsReadOnly property
IsSpellCheckEnabled property

Is this the right control?


Use a RichEditBox to display and edit text files. You don't use a RichEditBox to get user input into your app the
way you use other standard text input boxes. Rather, you use it to work with text files that are separate from your
app. You typically save text entered into a RichEditBox to a .rtf file.
If the primary purpose of the multi-line text box is for creating documents (such as blog entries or the contents
of an email message), and those documents require rich text, use a rich text box.
If you want users to be able to format their text, use a rich text box.
When capturing text that will only be consumed and not redisplayed to users, use a plain text input control.
For all other scenarios, use a plain text input control.
For more info about choosing the right text control, see the Text controls article.

Examples
This rich edit box has a rich text document open in it. The formatting and file buttons aren't part of the rich edit
box, but you should provide at least a minimal set of styling buttons and implement their actions.

Create a rich edit box


By default, the RichEditBox supports spell checking. To disable the spell checker, set the IsSpellCheckEnabled
property to false. For more info, see the Guidelines for spell checking article.
You use the Document property of the RichEditBox to get its content. The content of a RichEditBox is a
Windows.UI.Text.ITextDocument object, unlike the RichTextBlock control, which uses
Windows.UI.Xaml.Documents.Block objects as its content. The ITextDocument interface provides a way to load and
save the document to a stream, retrieve text ranges, get the active selection, undo and redo changes, set default
formatting attributes, and so on.
This example shows how to edit, load, and save a Rich Text Format (.rtf) file in a RichEditBox.

<RelativePanel Margin="20" HorizontalAlignment="Stretch">


<RelativePanel.Resources>
<Style TargetType="AppBarButton">
<Setter Property="IsCompact" Value="True"/>
</Style>
</RelativePanel.Resources>
<AppBarButton x:Name="openFileButton" Icon="OpenFile"
Click="OpenButton_Click" ToolTipService.ToolTip="Open file"/>
<AppBarButton Icon="Save" Click="SaveButton_Click"
ToolTipService.ToolTip="Save file"
RelativePanel.RightOf="openFileButton" Margin="8,0,0,0"/>

<AppBarButton Icon="Bold" Click="BoldButton_Click" ToolTipService.ToolTip="Bold"


RelativePanel.LeftOf="italicButton" Margin="0,0,8,0"/>
<AppBarButton x:Name="italicButton" Icon="Italic" Click="ItalicButton_Click"
ToolTipService.ToolTip="Italic" RelativePanel.LeftOf="underlineButton" Margin="0,0,8,0"/>
<AppBarButton x:Name="underlineButton" Icon="Underline" Click="UnderlineButton_Click"
ToolTipService.ToolTip="Underline" RelativePanel.AlignRightWithPanel="True"/>

<RichEditBox x:Name="editor" Height="200" RelativePanel.Below="openFileButton"


RelativePanel.AlignLeftWithPanel="True" RelativePanel.AlignRightWithPanel="True"/>
</RelativePanel>

private async void OpenButton_Click(object sender, RoutedEventArgs e)
{
    // Open a text file.
    Windows.Storage.Pickers.FileOpenPicker open =
        new Windows.Storage.Pickers.FileOpenPicker();
    open.SuggestedStartLocation =
        Windows.Storage.Pickers.PickerLocationId.DocumentsLibrary;
    open.FileTypeFilter.Add(".rtf");

    Windows.Storage.StorageFile file = await open.PickSingleFileAsync();

    if (file != null)
    {
        try
        {
            Windows.Storage.Streams.IRandomAccessStream randAccStream =
                await file.OpenAsync(Windows.Storage.FileAccessMode.Read);

            // Load the file into the Document property of the RichEditBox.
            editor.Document.LoadFromStream(Windows.UI.Text.TextSetOptions.FormatRtf, randAccStream);
        }
        catch (Exception)
        {
            ContentDialog errorDialog = new ContentDialog()
            {
                Title = "File open error",
                Content = "Sorry, I couldn't open the file.",
                PrimaryButtonText = "Ok"
            };

            await errorDialog.ShowAsync();
        }
    }
}
private async void SaveButton_Click(object sender, RoutedEventArgs e)
{
    Windows.Storage.Pickers.FileSavePicker savePicker = new Windows.Storage.Pickers.FileSavePicker();
    savePicker.SuggestedStartLocation = Windows.Storage.Pickers.PickerLocationId.DocumentsLibrary;

    // Dropdown of file types the user can save the file as.
    savePicker.FileTypeChoices.Add("Rich Text", new List<string>() { ".rtf" });

    // Default file name if the user does not type one in or select a file to replace.
    savePicker.SuggestedFileName = "New Document";

    Windows.Storage.StorageFile file = await savePicker.PickSaveFileAsync();

    if (file != null)
    {
        // Prevent updates to the remote version of the file until we
        // finish making changes and call CompleteUpdatesAsync.
        Windows.Storage.CachedFileManager.DeferUpdates(file);

        // Write to the file.
        Windows.Storage.Streams.IRandomAccessStream randAccStream =
            await file.OpenAsync(Windows.Storage.FileAccessMode.ReadWrite);

        editor.Document.SaveToStream(Windows.UI.Text.TextGetOptions.FormatRtf, randAccStream);

        // Let Windows know that we're finished changing the file so the
        // other app can update the remote version of the file.
        Windows.Storage.Provider.FileUpdateStatus status = await Windows.Storage.CachedFileManager.CompleteUpdatesAsync(file);
        if (status != Windows.Storage.Provider.FileUpdateStatus.Complete)
        {
            Windows.UI.Popups.MessageDialog errorBox =
                new Windows.UI.Popups.MessageDialog("File " + file.Name + " couldn't be saved.");
            await errorBox.ShowAsync();
        }
    }
}

private void BoldButton_Click(object sender, RoutedEventArgs e)
{
    Windows.UI.Text.ITextSelection selectedText = editor.Document.Selection;
    if (selectedText != null)
    {
        Windows.UI.Text.ITextCharacterFormat charFormatting = selectedText.CharacterFormat;
        charFormatting.Bold = Windows.UI.Text.FormatEffect.Toggle;
        selectedText.CharacterFormat = charFormatting;
    }
}

private void ItalicButton_Click(object sender, RoutedEventArgs e)
{
    Windows.UI.Text.ITextSelection selectedText = editor.Document.Selection;
    if (selectedText != null)
    {
        Windows.UI.Text.ITextCharacterFormat charFormatting = selectedText.CharacterFormat;
        charFormatting.Italic = Windows.UI.Text.FormatEffect.Toggle;
        selectedText.CharacterFormat = charFormatting;
    }
}

private void UnderlineButton_Click(object sender, RoutedEventArgs e)
{
    Windows.UI.Text.ITextSelection selectedText = editor.Document.Selection;
    if (selectedText != null)
    {
        Windows.UI.Text.ITextCharacterFormat charFormatting = selectedText.CharacterFormat;
        if (charFormatting.Underline == Windows.UI.Text.UnderlineType.None)
        {
            charFormatting.Underline = Windows.UI.Text.UnderlineType.Single;
        }
        else
        {
            charFormatting.Underline = Windows.UI.Text.UnderlineType.None;
        }
        selectedText.CharacterFormat = charFormatting;
    }
}

Choose the right keyboard for your text control


To help users enter data using the touch keyboard, or Soft Input Panel (SIP), you can set the input scope of the
text control to match the kind of data the user is expected to enter. The default keyboard layout is usually
appropriate for working with rich text documents.
For more info about how to use input scopes, see Use input scope to change the touch keyboard.

Do's and don'ts


When you create a rich text box, provide styling buttons and implement their actions.
Use a font that's consistent with the style of your app.
Make the height of the text control tall enough to accommodate typical entries.
Don't let your text input controls grow in height while users type.
Don't use a multi-line text box when users only need a single line.
Don't use a rich text control if a plain text control is adequate.

Related articles
Text controls
Guidelines for spell checking
Adding search
Guidelines for text input
TextBox class
Windows.UI.Xaml.Controls PasswordBox class
Rich text block

Rich text blocks provide several features for advanced text layout that you can use when you need support for
paragraphs, inline UI elements, or complex text layouts.

Important APIs
RichTextBlock class
RichTextBlockOverflow class
Paragraph class
Typography class

Is this the right control?


Use a RichTextBlock when you need support for multiple paragraphs, multi-column or other complex text
layouts, or inline UI elements like images.
Use a TextBlock to display most read-only text in your app. You can use it to display single-line or multi-line text,
inline hyperlinks, and text with formatting like bold, italic, or underlined. TextBlock provides a simpler content
model, so it's typically easier to use, and it can provide better text rendering performance than RichTextBlock. It's
preferred for most app UI text. Although you can put line breaks in the text, TextBlock is designed to display a
single paragraph and doesn't support text indentation.
For more info about choosing the right text control, see the Text controls article.

Create a rich text block


The content property of RichTextBlock is the Blocks property, which supports paragraph-based text via the
Paragraph element. It doesn't have a Text property that you can use to easily access the control's text content in
your app. However, RichTextBlock provides several unique features that TextBlock doesn't provide.
RichTextBlock supports:
Multiple paragraphs. Set the indentation for paragraphs by setting the TextIndent property.
Inline UI elements. Use an InlineUIContainer to display UI elements, such as images, inline with your text.
Overflow containers. Use RichTextBlockOverflow elements to create multi-column text layouts.
Paragraphs
You use Paragraph elements to define the blocks of text to display within a RichTextBlock control. Every
RichTextBlock should include at least one Paragraph.
You can set the indent amount for all paragraphs in a RichTextBlock by setting the RichTextBlock.TextIndent
property. You can override this setting for specific paragraphs in a RichTextBlock by setting the
Paragraph.TextIndent property to a different value.
<RichTextBlock TextIndent="12">
<Paragraph TextIndent="24">First paragraph.</Paragraph>
<Paragraph>Second paragraph.</Paragraph>
<Paragraph>Third paragraph. <Bold>With an inline.</Bold></Paragraph>
</RichTextBlock>

Inline UI elements
The InlineUIContainer class lets you embed any UIElement inline with your text. A common scenario is to place
an Image inline with your text, but you can also use interactive elements, like a Button or CheckBox.
If you want to embed more than one element inline in the same position, consider using a panel as the single
InlineUIContainer child, and then place the multiple elements within that panel.
This example shows how to use an InlineUIContainer to insert an image into a RichTextBlock.

<RichTextBlock>
    <Paragraph>
        <Italic>This is an inline image.</Italic>
        <InlineUIContainer>
            <Image Source="Assets/Square44x44Logo.png" Height="30" Width="30"/>
        </InlineUIContainer>
        Mauris auctor tincidunt auctor.
    </Paragraph>
</RichTextBlock>
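And here's a sketch of the panel approach mentioned above for placing more than one element at the same inline position; the asset path is hypothetical:

<RichTextBlock>
    <Paragraph>
        Rating:
        <InlineUIContainer>
            <StackPanel Orientation="Horizontal">
                <Image Source="Assets/Star.png" Height="16" Width="16"/>
                <Image Source="Assets/Star.png" Height="16" Width="16"/>
                <Image Source="Assets/Star.png" Height="16" Width="16"/>
            </StackPanel>
        </InlineUIContainer>
    </Paragraph>
</RichTextBlock>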

Overflow containers
You can use a RichTextBlock with RichTextBlockOverflow elements to create multi-column or other advanced
page layouts. The content for a RichTextBlockOverflow element always comes from a RichTextBlock element. You
link RichTextBlockOverflow elements by setting them as the OverflowContentTarget of a RichTextBlock or another
RichTextBlockOverflow.
Here's a simple example that creates a two column layout. See the Examples section for a more complex example.

<Grid>
    <Grid.ColumnDefinitions>
        <ColumnDefinition/>
        <ColumnDefinition/>
    </Grid.ColumnDefinitions>
    <RichTextBlock Grid.Column="0"
                   OverflowContentTarget="{Binding ElementName=overflowContainer}" >
        <Paragraph>
            Proin ac metus at quam luctus ultricies.
        </Paragraph>
    </RichTextBlock>
    <RichTextBlockOverflow x:Name="overflowContainer" Grid.Column="1"/>
</Grid>

Formatting text
Although the RichTextBlock stores plain text, you can apply various formatting options to customize how the text is
rendered in your app. You can set standard control properties like FontFamily, FontSize, FontStyle, Foreground,
and CharacterSpacing to change the look of the text. You can also use inline text elements and Typography
attached properties to format your text. These options affect only how the RichTextBlock displays the text locally,
so if you copy and paste the text into a rich text control, for example, no formatting is applied.
Inline elements
The Windows.UI.Xaml.Documents namespace provides a variety of inline text elements that you can use to format
your text, such as Bold, Italic, Run, Span, and LineBreak. A typical way to apply formatting to sections of text is to
place the text in a Run or Span element, and then set properties on that element.
Here's a Paragraph with the first phrase shown in bold, blue, 16pt text.

<Paragraph>
    <Bold><Span Foreground="DarkSlateBlue" FontSize="16">Lorem ipsum dolor sit amet</Span></Bold>
    , consectetur adipiscing elit.
</Paragraph>

Typography
The attached properties of the Typography class provide access to a set of Microsoft OpenType typography
properties. You can set these attached properties either on the RichTextBlock, or on individual inline text elements,
as shown here.

<RichTextBlock Typography.StylisticSet4="True">
<Paragraph>
<Span Typography.Capitals="SmallCaps">Lorem ipsum dolor sit amet</Span>
, consectetur adipiscing elit.
</Paragraph>
</RichTextBlock>

Recommendations
See Typography and Guidelines for fonts.

Related articles
Text controls
For designers
Guidelines for spell checking
Adding search
Guidelines for text input
For developers (XAML)
TextBox class
Windows.UI.Xaml.Controls PasswordBox class
For developers (other)
String.Length property
Text block

Text block is the primary control for displaying read-only text in apps. You can use it to display single-line or
multi-line text, inline hyperlinks, and text with formatting like bold, italic, or underlined.

Important APIs
TextBlock class
Text property
Inlines property

Is this the right control?


A text block is typically easier to use and provides better text rendering performance than a rich text block, so it's
preferred for most app UI text. You can easily access and use text from a text block in your app by getting the
value of the Text property. It also provides many of the same formatting options for customizing how your text is
rendered.
Although you can put line breaks in the text, text block is designed to display a single paragraph and doesn't
support text indentation. Use a RichTextBlock when you need support for multiple paragraphs, multi-column
text or other complex text layouts, or inline UI elements like images.
For more info about choosing the right text control, see the Text controls article.

Create a text block


Here's how to define a simple TextBlock control and set its Text property to a string.

<TextBlock Text="Hello, world!" />

TextBlock textBlock1 = new TextBlock();
textBlock1.Text = "Hello, world!";

<TextBlock Text="Hello, world!" />

TextBlock textBlock1 = new TextBlock();


textBlock1.Text = "Hello, world!";

Content model
There are two properties you can use to add content to a TextBlock: Text and Inlines.
The most common way to display text is to set the Text property to a string value, as shown in the previous
example.
You can also add content by placing inline flow content elements in the TextBlock.Inlines property, like this.
<TextBlock><Run>Text can be <Bold>bold</Bold>, <Italic>italic</Italic>, or <Bold><Italic>both</Italic></Bold>.</Run></TextBlock>

Elements derived from the Inline class, such as Bold, Italic, Run, Span, and LineBreak, enable different formatting
for different parts of the text. For more info, see the Formatting text section. The inline Hyperlink element lets
you add a hyperlink to your text. However, using Inlines also disables fast path text rendering, which is discussed
in the next section.
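For example, here's a sketch of an inline Hyperlink in a TextBlock (the URI is illustrative):

<TextBlock>
    Read more on the <Hyperlink NavigateUri="https://docs.microsoft.com">documentation site</Hyperlink>.
</TextBlock>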

Performance considerations
Whenever possible, XAML uses a more efficient code path to layout text. This fast path both decreases overall
memory use and greatly reduces the CPU time to do text measuring and arranging. This fast path applies only to
TextBlock, so it should be preferred when possible over RichTextBlock.
Certain conditions require TextBlock to fall back to a more feature-rich and CPU intensive code path for text
rendering. To keep text rendering on the fast path, be sure to follow these guidelines when setting the properties
listed here.
Text: The most important condition is that the fast path is used only when you set text by explicitly setting the
Text property, either in XAML or in code (as shown in the previous examples). Setting the text via the TextBlock's
Inlines collection (such as <TextBlock>Inline text</TextBlock>) will disable the fast path, due to the potential
complexity of multiple formats.
CharacterSpacing: Only the default value of 0 is fast path.
TextTrimming: Only the None, CharacterEllipsis, and WordEllipsis values are fast path. The Clip value
disables the fast path.

Note Prior to Windows 10, version 1607, additional properties also affect the fast path. If your app is run on
an earlier version of Windows, these conditions will also cause your text to render on the slow path. For more
info about versions, see Version adaptive code.
Typography: Only the default values for the various Typography properties are fast path.
LineStackingStrategy: If LineHeight is not 0, the BaselineToBaseline and MaxHeight values disable the
fast path.
IsTextSelectionEnabled: Only false is fast path. Setting this property to true disables the fast path.

You can set the DebugSettings.IsTextPerformanceVisualizationEnabled property to true during debugging to
determine whether text is using fast path rendering. When this property is set to true, the text that is on the fast
path displays in a bright green color.

Tip This feature is explained in depth in this session from Build 2015, XAML Performance: Techniques for
Maximizing Universal Windows App Experiences Built with XAML.

You typically set debug settings in the OnLaunched method override in the code-behind page for App.xaml, like
this.
protected override void OnLaunched(LaunchActivatedEventArgs e)
{
#if DEBUG
    if (System.Diagnostics.Debugger.IsAttached)
    {
        this.DebugSettings.IsTextPerformanceVisualizationEnabled = true;
    }
#endif

    // ...
}

In this example, the first TextBlock is rendered using the fast path, while the second is not.

<StackPanel>
    <TextBlock Text="This text is on the fast path."/>
    <TextBlock>This text is NOT on the fast path.</TextBlock>
</StackPanel>

When you run this XAML in debug mode with IsTextPerformanceVisualizationEnabled set to true, the result looks
like this.

Caution The color of text that is not on the fast path is not changed. If you have text in your app with its color
specified as bright green, it is still displayed in bright green when it's on the slower rendering path. Be careful
to not confuse text that is set to green in the app with text that is on the fast path and green because of the
debug settings.

Formatting text
Although the Text property stores plain text, you can apply various formatting options to the TextBlock control to
customize how the text is rendered in your app. You can set standard control properties like FontFamily, FontSize,
FontStyle, Foreground, and CharacterSpacing to change the look of the text. You can also use inline text elements
and Typography attached properties to format your text. These options affect only how the TextBlock displays the
text locally, so if you copy and paste the text into a rich text control, for example, no formatting is applied.

Note Remember, as noted in the previous section, inline text elements and non-default typography values
are not rendered on the fast path.

Inline elements
The Windows.UI.Xaml.Documents namespace provides a variety of inline text elements that you can use to format
your text, such as Bold, Italic, Run, Span, and LineBreak.
You can display a series of strings in a TextBlock, where each string has different formatting. You can do this by
using a Run element to display each string with its formatting and by separating each Run element with a
LineBreak element.
Here's how to define several differently formatted text strings in a TextBlock by using Run objects separated with a
LineBreak.
<TextBlock FontFamily="Arial" Width="400" Text="Sample text formatting runs">
<LineBreak/>
<Run Foreground="Gray" FontFamily="Courier New" FontSize="24">
Courier New 24
</Run>
<LineBreak/>
<Run Foreground="Teal" FontFamily="Times New Roman" FontSize="18" FontStyle="Italic">
Times New Roman Italic 18
</Run>
<LineBreak/>
<Run Foreground="SteelBlue" FontFamily="Verdana" FontSize="14" FontWeight="Bold">
Verdana Bold 14
</Run>
</TextBlock>

Here's the result.

Typography
The attached properties of the Typography class provide access to a set of Microsoft OpenType typography
properties. You can set these attached properties either on the TextBlock, or on individual inline text elements.
These examples show both.

<TextBlock Text="Hello, world!"


Typography.Capitals="SmallCaps"
Typography.StylisticSet4="True"/>

TextBlock textBlock1 = new TextBlock();


textBlock1.Text = "Hello, world!";
Windows.UI.Xaml.Documents.Typography.SetCapitals(textBlock1, FontCapitals.SmallCaps);
Windows.UI.Xaml.Documents.Typography.SetStylisticSet4(textBlock1, true);

<TextBlock>12 x <Run Typography.Fraction="Slashed">1/3</Run> = 4.</TextBlock>

Related articles
Text controls
Guidelines for spell checking
Adding search
Guidelines for text input
TextBox class
Windows.UI.Xaml.Controls PasswordBox class
String.Length property
Text box

The TextBox control lets a user type text into an app. It's typically used to capture a single line of text, but can be
configured to capture multiple lines of text. The text displays on the screen in a simple, uniform, plaintext format.
TextBox has a number of features that can simplify text entry. It comes with a familiar, built-in context menu with
support for copying and pasting text. The "clear all" button lets a user quickly delete all text that has been entered.
It also has spell checking capabilities built in and enabled by default.

Important APIs
TextBox class
Text property

Is this the right control?


Use a TextBox control to let a user enter and edit unformatted text, such as in a form. You can use the Text
property to get and set the text in a TextBox.
You can make a TextBox read-only, but this should be a temporary, conditional state. If the text is never editable,
consider using a TextBlock instead.
Use a PasswordBox control to collect a password or other private data, such as a Social Security number. A
password box looks like a text input box, except that it renders bullets in place of the text that has been entered.
Use an AutoSuggestBox control to let the user enter search terms or to show the user a list of suggestions to
choose from as they type.
Use a RichEditBox to display and edit rich text files.
For more info about choosing the right text control, see the Text controls article.

Examples

Create a text box


Here's the XAML for a simple text box with a header and placeholder text.

<TextBox Width="500" Header="Notes" PlaceholderText="Type your notes here"/>

TextBox textBox = new TextBox();
textBox.Width = 500;
textBox.Header = "Notes";
textBox.PlaceholderText = "Type your notes here";
// Add the TextBox to the visual tree.
rootGrid.Children.Add(textBox);
Here's the text box that results from this XAML.

Use a text box for data input in a form


It's common to use a text box to accept data input on a form, and use the Text property to get the complete text
string from the text box. You typically use an event like a submit button click to access the Text property, but you
can handle the TextChanged or TextChanging event if you need to do something when the text changes.
You can add a Header (or label) and PlaceholderText (or watermark) to the text box to give the user an indication
of what the text box is for. To customize the look of the header, you can set the HeaderTemplate property instead
of Header. For design info, see Guidelines for labels.
You can restrict the number of characters the user can type by setting the MaxLength property. However,
MaxLength does not restrict the length of pasted text. Use the Paste event to modify pasted text if this is important
for your app.
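As a sketch of that approach, the following handler trims pasted text so the total length stays within MaxLength; the control name and handler wiring are assumptions, not part of the original sample:

// Assumes a TextBox declared as:
// <TextBox x:Name="limitedTextBox" MaxLength="20" Paste="LimitedTextBox_Paste"/>
private async void LimitedTextBox_Paste(object sender, TextControlPasteEventArgs e)
{
    var textBox = (TextBox)sender;
    e.Handled = true; // Suppress the default paste so we can insert trimmed text ourselves.

    var content = Windows.ApplicationModel.DataTransfer.Clipboard.GetContent();
    if (content.Contains(Windows.ApplicationModel.DataTransfer.StandardDataFormats.Text))
    {
        string pasteText = await content.GetTextAsync();
        // Room left in the box, counting the selection that the paste will replace.
        int room = textBox.MaxLength - textBox.Text.Length + textBox.SelectionLength;
        textBox.SelectedText = pasteText.Length > room
            ? pasteText.Substring(0, Math.Max(room, 0))
            : pasteText;
    }
}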
The text box includes a clear all button ("X") that appears when text is entered in the box. When a user clicks the
"X", the text in the text box is cleared. It looks like this.

The clear all button is shown only for editable, single-line text boxes that contain text and have focus.
The clear all button is not shown in any of these cases:
IsReadOnly is true
AcceptsReturn is true
TextWrapping has a value other than NoWrap
Make a text box read-only
You can make a text box read-only by setting the IsReadOnly property to true. You typically toggle this property
in your app code based on conditions in your app. For example, you might have a TextBox for a user to enter
comments that is enabled only under certain conditions; you can make the TextBox read-only until the conditions
are met. If you need text that is always read-only, consider using a TextBlock or RichTextBlock instead.
A read-only text box looks the same as a read/write text box, so it might be confusing to a user. A user can still
select and copy the text. If the user shouldn't interact with the text at all, set IsEnabled to false instead.
Enable multi-line input
There are two properties that you can use to control whether the text box displays text on more than one line. You
typically set both properties to make a multi-line text box.
To let the text box allow and display the newline or return characters, set the AcceptsReturn property to true.
To enable text wrapping, set the TextWrapping property to Wrap. This causes the text to wrap when it
reaches the edge of the text box, independent of line separator characters.

Note TextBox and RichEditBox don't support the WrapWholeWords value for their TextWrapping
properties. If you try to use WrapWholeWords as a value for TextBox.TextWrapping or
RichEditBox.TextWrapping an invalid argument exception is thrown.
A multi-line text box will continue to grow vertically as text is entered unless it's constrained by its Height or
MaxHeight property, or by a parent container. You should test that a multi-line text box doesn't grow beyond its
visible area, and constrain its growth if it does. We recommend that you always specify an appropriate height for a
multi-line text box, and not let it grow in height as the user types.
Scrolling using a scroll-wheel or touch is automatically enabled when needed. However, the vertical scrollbars are
not visible by default. You can show the vertical scrollbars by setting the ScrollViewer.VerticalScrollBarVisibility to
Auto on the embedded ScrollViewer, as shown here.

<TextBox AcceptsReturn="True" TextWrapping="Wrap"


MaxHeight="172" Width="300" Header="Description"
ScrollViewer.VerticalScrollBarVisibility="Auto"/>

TextBox textBox = new TextBox();
textBox.AcceptsReturn = true;
textBox.TextWrapping = TextWrapping.Wrap;
textBox.MaxHeight = 172;
textBox.Width = 300;
textBox.Header = "Description";
ScrollViewer.SetVerticalScrollBarVisibility(textBox, ScrollBarVisibility.Auto);

Here's what the text box looks like after text is added.

Format the text display


Use the TextAlignment property to align text within a text box. To align the text box within the layout of the
page, use the HorizontalAlignment and VerticalAlignment properties.
While the text box supports only unformatted text, you can customize how the text is displayed in the text box to
match your branding. You can set standard Control properties like FontFamily, FontSize, FontStyle, Background,
Foreground, and CharacterSpacing to change the look of the text. These properties affect only how the text box
displays the text locally, so if you were to copy and paste the text into a rich text control, for example, no
formatting would be applied.
This example shows a read-only text box with several properties set to customize the appearance of the text.

<TextBox Text="Sample Text" IsReadOnly="True"


FontFamily="Verdana" FontSize="24"
FontWeight="Bold" FontStyle="Italic"
CharacterSpacing="200" Width="300"
Foreground="Blue" Background="Beige"/>
TextBox textBox = new TextBox();
textBox.Text = "Sample Text";
textBox.IsReadOnly = true;
textBox.FontFamily = new FontFamily("Verdana");
textBox.FontSize = 24;
textBox.FontWeight = Windows.UI.Text.FontWeights.Bold;
textBox.FontStyle = Windows.UI.Text.FontStyle.Italic;
textBox.CharacterSpacing = 200;
textBox.Width = 300;
textBox.Background = new SolidColorBrush(Windows.UI.Colors.Beige);
textBox.Foreground = new SolidColorBrush(Windows.UI.Colors.Blue);
// Add the TextBox to the visual tree.
rootGrid.Children.Add(textBox);

The resulting text box looks like this.

Modify the context menu


By default, the commands shown in the text box context menu depend on the state of the text box. For example,
the following commands can be shown when the text box is editable.

COMMAND        SHOWN WHEN...
Copy           text is selected.
Cut            text is selected.
Paste          the clipboard contains text.
Select all     the TextBox contains text.
Undo           text has been changed.

To modify the commands shown in the context menu, handle the ContextMenuOpening event. For an example of
this, see Scenario 2 of the ContextMenu sample. For design info, see Guidelines for context menus.
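As a rough sketch of that approach, the handler below suppresses the default menu and shows a custom flyout; the command itself and the handler wiring are illustrative:

// Assumes ContextMenuOpening="TextBox_ContextMenuOpening" on the TextBox.
private void TextBox_ContextMenuOpening(object sender, ContextMenuEventArgs e)
{
    e.Handled = true; // Suppress the built-in menu.
    var flyout = new MenuFlyout();
    flyout.Items.Add(new MenuFlyoutItem { Text = "Clear" });
    flyout.ShowAt((TextBox)sender, new Windows.Foundation.Point(e.CursorLeft, e.CursorTop));
}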
Select, copy, and paste
You can get or set the selected text in a text box using the SelectedText property. Use the SelectionStart and
SelectionLength properties, and the Select and SelectAll methods, to manipulate the text selection. Handle the
SelectionChanged event to do something when the user selects or de-selects text. You can change the color used
to highlight the selected text by setting the SelectionHighlightColor property.
TextBox supports copy and paste by default. You can provide custom handling of the Paste event on editable text
controls in your app. For example, you might remove the line breaks from a multi-line address when pasting it
into a single-line search box. Or, you might check the length of the pasted text and warn the user if it exceeds the
maximum length that can be saved to a database. For more info and examples, see the Paste event.
Here, we have an example of these properties and methods in use. When you select text in the first text box, the
selected text is displayed in the second text box, which is read-only. The values of the SelectionLength and
SelectionStart properties are shown in two text blocks. This is done using the SelectionChanged event.
<StackPanel>
    <TextBox x:Name="textBox1" Height="75" Width="300" Margin="10"
             Text="The text that is selected in this TextBox will show up in the read only TextBox below."
             TextWrapping="Wrap" AcceptsReturn="True"
             SelectionChanged="TextBox1_SelectionChanged" />
    <TextBox x:Name="textBox2" Height="75" Width="300" Margin="5"
             TextWrapping="Wrap" AcceptsReturn="True" IsReadOnly="True"/>
    <TextBlock x:Name="label1" HorizontalAlignment="Center"/>
    <TextBlock x:Name="label2" HorizontalAlignment="Center"/>
</StackPanel>

private void TextBox1_SelectionChanged(object sender, RoutedEventArgs e)
{
    textBox2.Text = textBox1.SelectedText;
    label1.Text = "Selection length is " + textBox1.SelectionLength.ToString();
    label2.Text = "Selection starts at " + textBox1.SelectionStart.ToString();
}

Here's the result of this code.

Choose the right keyboard for your text control


To help users enter data using the touch keyboard, or Soft Input Panel (SIP), you can set the input scope of the
text control to match the kind of data the user is expected to enter.
The touch keyboard can be used for text entry when your app runs on a device with a touch screen. The touch
keyboard is invoked when the user taps on an editable input field, such as a TextBox or RichEditBox. You can make
it much faster and easier for users to enter data in your app by setting the input scope of the text control to match
the kind of data you expect the user to enter. The input scope provides a hint to the system about the type of text
input expected by the control so the system can provide a specialized touch keyboard layout for the input type.
For example, if a text box is used only to enter a 4-digit PIN, set the InputScope property to Number. This tells the
system to show the number keypad layout, which makes it easier for the user to enter the PIN.
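For example, a minimal sketch of such a PIN box (header and length are illustrative):

<TextBox Header="PIN" InputScope="Number" MaxLength="4"/>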

Important The input scope does not cause any input validation to be performed, and does not prevent the
user from providing any input through a hardware keyboard or other input device. You are still responsible
for validating the input in your code as needed.

Other properties that affect the touch keyboard are IsSpellCheckEnabled, IsTextPredictionEnabled, and
PreventKeyboardDisplayOnProgrammaticFocus. (IsSpellCheckEnabled also affects the TextBox when a hardware
keyboard is used.)
For more info and examples, see Use input scope to change the touch keyboard and the property documentation.
Recommendations
Use a label or placeholder text if the purpose of the text box isn't clear. A label is visible whether or not the text
input box has a value. Placeholder text is displayed inside the text input box and disappears once a value has
been entered.
Give the text box an appropriate width for the range of values that can be entered. Word length varies between
languages, so take localization into account if you want your app to be world-ready.
A text input box is typically single-line ( TextWrapping = "NoWrap" ). When users need to enter or edit a long string, set
the text input box to multi-line ( TextWrapping = "Wrap" ).
Generally, a text input box is used for editable text. But you can make a text input box read-only so that its
content can be read, selected, and copied, but not edited.
If you need to reduce clutter in a view, consider making a set of text input boxes appear only when a
controlling checkbox is checked. You can also bind the enabled state of a text input box to a control such as a
checkbox.
Consider how you want a text input box to behave when it contains a value and the user taps it. The default
behavior is appropriate for editing the value rather than replacing it; the insertion point is placed between
words and nothing is selected. If replacing is the most common use case for a given text input box, you can
select all the text in the field whenever the control receives focus, and typing replaces the selection (see the
sketch after this list).
Single-line input boxes
Use several single-line text boxes to capture many small pieces of text information. If the text boxes are
related in nature, group those together.
Make the size of single-line text boxes slightly wider than the longest anticipated input. If doing so makes
the control too wide, separate it into two controls. For example, you could split a single address input into
"Address line 1" and "Address line 2".
Set a maximum length for characters that can be entered. If the backing data source doesn't allow a long input
string, limit the input and use a validation popup to let users know when they reach the limit.
Use single-line text input controls to gather small pieces of text from users.
The following example shows a single-line text box to capture an answer to a security question. The answer
is expected to be short, and so a single-line text box is appropriate here.

Use a set of short, fixed-sized, single-line text input controls to enter data with a specific format.

Use a single-line, unconstrained text input control to enter or edit strings, combined with a command
button that helps users select valid values.

Multi-line text input controls


When you create a rich text box, provide styling buttons and implement their actions.
Use a font that's consistent with the style of your app.
Make the height of the text control tall enough to accommodate typical entries.
When capturing long spans of text with a maximum character or word count, use a plain text box and
provide a live-running counter to show the user how many characters or words they have left before they
reach the limit. You'll need to create the counter yourself; place it below the text box and dynamically
update it as the user enters each character or word.

Don't let your text input controls grow in height while users type.
Don't use a multi-line text box when users only need a single line.
Don't use a rich text control if a plain text control is adequate.
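Here's a minimal sketch of the select-all-on-focus behavior referenced earlier in this list; the handler wiring is an assumption:

// Assumes GotFocus="SearchBox_GotFocus" on the TextBox.
private void SearchBox_GotFocus(object sender, RoutedEventArgs e)
{
    // Select everything so that typing replaces the current value.
    ((TextBox)sender).SelectAll();
}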

Related articles
Text controls
Guidelines for spell checking
Adding search
Guidelines for text input
TextBox class
Windows.UI.Xaml.Controls PasswordBox class
String.Length property
Tiles, badges, and notifications for UWP apps

Learn how to use tiles, badges, toasts, and notifications to provide entry points into your app and keep users
up-to-date.
A tile is an app's representation on the Start menu. Every UWP app has a tile. You
can enable different tile sizes (small, medium, wide, and large).
You can use a tile notification to update the tile to communicate new information
to the user, such as news headlines, or the subject of the most recent unread
message.
You can use a badge to provide status or summary info in the form of a system-
provided glyph or a number from 1-99. Badges also appear on the task bar icon
for an app.
A toast notification is a notification that your app sends to the user via a pop-up
UI element called a toast (or banner). The notification can be seen whether the
user is in your app or not.
A push notification or raw notification is a notification sent to your app either from Windows Push Notification
Services (WNS) or from a background task. Your app can respond to these notifications by notifying the user
that something of interest happened (via a badge update, tile update, or toast), or in any other way you choose.

Tiles
Create tiles: Customize the default tile for your app and provide assets for different screen sizes.

Create adaptive tiles: Adaptive tile templates are a new feature in Windows 10, allowing you to design your own
tile notification content using a simple and flexible markup language that adapts to different screen densities.
This article tells you how to create adaptive live tiles for your Universal Windows Platform (UWP) app.

Adaptive tiles schema: Here are the elements and attributes you use to create adaptive tiles.

Special tile templates: Special tile templates are unique templates that are either animated, or just allow you to
do things that aren't possible with adaptive tiles.

App icon assets: App icon assets, which appear in a variety of forms throughout the Windows 10 operating
system, are the calling cards for your Universal Windows Platform (UWP) app. These guidelines detail where app
icon assets appear in the system, and provide in-depth design tips on how to create the most polished icons.

Notifications
Adaptive and interactive toast notifications: Adaptive and interactive toast notifications let you create flexible
pop-up notifications with more content, optional inline images, and optional user interaction.

Notifications Visualizer: Notifications Visualizer is a new Universal Windows Platform (UWP) app in the Store
that helps developers design adaptive live tiles for Windows 10.

Choose a notification delivery method: This article covers the four notification options (local, scheduled,
periodic, and push) that deliver tile and badge updates and toast notification content.

Send a local tile notification: This article describes how to send a local tile notification to a primary tile and a
secondary tile using adaptive tile templates.

Periodic notification overview: Periodic notifications, which are also called polled notifications, update tiles and
badges at a fixed interval by downloading content from a cloud service.

Windows Push Notification Services (WNS) overview: The Windows Push Notification Services (WNS) enables
third-party developers to send toast, tile, badge, and raw updates from their own cloud service. This provides a
mechanism to deliver new updates to your users in a power-efficient and dependable way.

Code generated by the push notification wizard: By using a wizard in Visual Studio, you can generate push
notifications from a mobile service that was created with Azure Mobile Services. The Visual Studio wizard
generates code to help you get started. This topic explains how the wizard modifies your project, what the
generated code does, how to use this code, and what you can do next to get the most out of push notifications.
See Windows Push Notification Services (WNS) overview.

Raw notification overview: Raw notifications are short, general purpose push notifications. They are strictly
instructional and do not include a UI component. As with other push notifications, the WNS feature delivers raw
notifications from your cloud service to your app.
Tiles for UWP apps

A tile is an app's representation on the Start menu. Every app has a tile. When you create a new Universal
Windows Platform (UWP) app project in Microsoft Visual Studio, it includes a default tile that displays your app's
name and logo. Windows displays this tile when your app is first installed. After your app is installed, you can
change your tile's content through notifications; for example, you can change the tile to communicate new
information to the user, such as news headlines, or the subject of the most recent unread message.

Configure the default tile


When you create a new project in Visual Studio, it creates a simple default tile that displays your app's name and
logo.

<Applications>
  <Application Id="App"
    Executable="$targetnametoken$.exe"
    EntryPoint="ExampleApp.App">
    <uap:VisualElements
      DisplayName="ExampleApp"
      Square150x150Logo="Assets\Logo.png"
      Square44x44Logo="Assets\SmallLogo.png"
      Description="ExampleApp"
      BackgroundColor="#464646">
      <uap:SplashScreen Image="Assets\SplashScreen.png" />
    </uap:VisualElements>
  </Application>
</Applications>

There are a few items you should update:

DisplayName: Replace this value with the name you want to display on your tile.
ShortName: Because there is limited room for your display name to fit on tiles, we recommend that you
specify a ShortName as well, to make sure your app's name doesn't get truncated.
Logo images:
You should replace these images with your own. You have the option of supplying images for different
visual scales, but you are not required to supply them all. To ensure that your app looks good on a range of
devices, we recommend that you provide 100%, 200%, and 400% scale versions of each image.
Scaled images follow this naming convention:
<image name>.scale-<scale factor>.<image file extension>
For example: SmallLogo.scale-100.png
When you refer to the image, you refer to it as <image name>.<image file extension> ("SmallLogo.png" in
this example). The system will automatically select the appropriate scaled image for the device from the
images you've provided.
You don't have to, but we highly recommend supplying logos for wide and large tile sizes so that the user
can resize your app's tile to those sizes. To provide these additional images, you create a DefaultTile element
and use the Wide310x150Logo and Square310x310Logo attributes to specify the additional images:
<Applications>
  <Application Id="App"
    Executable="$targetnametoken$.exe"
    EntryPoint="ExampleApp.App">
    <uap:VisualElements
      DisplayName="ExampleApp"
      Square150x150Logo="Assets\Logo.png"
      Square44x44Logo="Assets\SmallLogo.png"
      Description="ExampleApp"
      BackgroundColor="#464646">
      <uap:DefaultTile
        Wide310x150Logo="Assets\WideLogo.png"
        Square310x310Logo="Assets\LargeLogo.png">
      </uap:DefaultTile>
      <uap:SplashScreen Image="Assets\SplashScreen.png" />
    </uap:VisualElements>
  </Application>
</Applications>

Use notifications to customize your tile


After your app is installed, you can use notifications to customize your tile. You can do this the first time your app
launches or in response to some event, such as a push notification.
1. Create an XML payload (in the form of a Windows.Data.Xml.Dom.XmlDocument) that describes the tile.
   Windows 10 introduces a new adaptive tile schema you can use. For instructions, see Adaptive tiles.
   For the schema, see the Adaptive tiles schema.
   You can also use the Windows 8.1 tile templates to define your tile. For more info, see Creating tiles and
   badges (Windows 8.1).
2. Create a tile notification object and pass it the XmlDocument you created. There are several types of
   notification objects:
   A Windows.UI.Notifications.TileNotification object for updating the tile immediately.
   A Windows.UI.Notifications.ScheduledTileNotification object for updating the tile at some point in
   the future.
3. Use the Windows.UI.Notifications.TileUpdateManager.CreateTileUpdaterForApplication method to create a
   TileUpdater object.
4. Call the TileUpdater.Update method and pass it the tile notification object you created in step 2.
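Putting these steps together, here's a minimal sketch that updates the primary tile immediately; the payload content is illustrative:

// 1. Create the XML payload that describes the tile.
string tileXml =
    "<tile><visual><binding template='TileMedium'>"
    + "<text>Hello</text>"
    + "</binding></visual></tile>";
var doc = new Windows.Data.Xml.Dom.XmlDocument();
doc.LoadXml(tileXml);

// 2. Create a tile notification object from the payload.
var notification = new Windows.UI.Notifications.TileNotification(doc);

// 3. Create a TileUpdater for the application, and
// 4. pass the notification to its Update method.
Windows.UI.Notifications.TileUpdateManager.CreateTileUpdaterForApplication().Update(notification);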
Create adaptive tiles

Adaptive tile templates are a new feature in Windows 10, allowing you to design your own tile notification
content using a simple and flexible markup language that adapts to different screen densities. This article tells you
how to create adaptive live tiles for your Universal Windows Platform (UWP) app. For the complete list of
adaptive elements and attributes, see the Adaptive tiles schema.
(If you'd like, you can still use the preset templates from the Windows 8 tile template catalog when designing
notifications for Windows 10.)

Getting started
Install the Notifications library. If you'd like to use C# instead of XML to generate notifications, install the NuGet
package named Microsoft.Toolkit.Uwp.Notifications (search for "notifications uwp"). The C# samples provided in
this article use version 1.0.0 of the NuGet package.
Install Notifications Visualizer. This free UWP app helps you design adaptive live tiles by providing an instant
visual preview of your tile as you edit it, similar to Visual Studio's XAML editor/design view. You can read this
blog post for more information, and you can download Notifications Visualizer here.

How to send a tile notification


Please read our Quickstart on sending local tile notifications. The documentation on this page explains all the
visual UI possibilities you have with adaptive tiles.

Usage guidance
Adaptive templates are designed to work across different form factors and notification types. Elements such as
group and subgroup link together content and don't imply a particular visual behavior on their own. The final
appearance of a notification should be based on the specific device on which it will appear, whether it's a phone,
tablet, desktop, or another device.
Hints are optional attributes that can be added to elements in order to achieve a specific visual behavior. Hints can
be device-specific or notification-specific.

A basic example
This example demonstrates what the adaptive tile templates can produce.
<tile>
<visual>

<binding template="TileMedium">
...
</binding>

<binding template="TileWide">
<text hint-style="subtitle">Jennifer Parker</text>
<text hint-style="captionSubtle">Photos from our trip</text>
<text hint-style="captionSubtle">Check out these awesome photos I took while in New Zealand!</text>
</binding>

<binding template="TileLarge">
...
</binding>

</visual>
</tile>

TileContent content = new TileContent()
{
    Visual = new TileVisual()
    {
        TileMedium = ...

        TileWide = new TileBinding()
        {
            Content = new TileBindingContentAdaptive()
            {
                Children =
                {
                    new AdaptiveText()
                    {
                        Text = "Jennifer Parker",
                        HintStyle = AdaptiveTextStyle.Subtitle
                    },

                    new AdaptiveText()
                    {
                        Text = "Photos from our trip",
                        HintStyle = AdaptiveTextStyle.CaptionSubtle
                    },

                    new AdaptiveText()
                    {
                        Text = "Check out these awesome photos I took while in New Zealand!",
                        HintStyle = AdaptiveTextStyle.CaptionSubtle
                    }
                }
            }
        },

        TileLarge = ...
    }
};

Result:
Tile sizes
Content for each tile size is individually specified in separate <binding> elements within the XML payload. Choose
the target size by setting the template attribute to one of the following values:
TileSmall
TileMedium
TileWide
TileLarge (only for desktop)
For a single tile notification XML payload, provide <binding> elements for each tile size that you'd like to support,
as shown in this example:

<tile>
<visual>

<binding template="TileSmall">
<text>Small</text>
</binding>

<binding template="TileMedium">
<text>Medium</text>
</binding>

<binding template="TileWide">
<text>Wide</text>
</binding>

<binding template="TileLarge">
<text>Large</text>
</binding>

</visual>
</tile>
TileContent content = new TileContent()
{
    Visual = new TileVisual()
    {
        TileSmall = new TileBinding()
        {
            Content = new TileBindingContentAdaptive()
            {
                Children =
                {
                    new AdaptiveText() { Text = "Small" }
                }
            }
        },

        TileMedium = new TileBinding()
        {
            Content = new TileBindingContentAdaptive()
            {
                Children =
                {
                    new AdaptiveText() { Text = "Medium" }
                }
            }
        },

        TileWide = new TileBinding()
        {
            Content = new TileBindingContentAdaptive()
            {
                Children =
                {
                    new AdaptiveText() { Text = "Wide" }
                }
            }
        },

        TileLarge = new TileBinding()
        {
            Content = new TileBindingContentAdaptive()
            {
                Children =
                {
                    new AdaptiveText() { Text = "Large" }
                }
            }
        }
    }
};

Result:
Branding
You can control the branding on the bottom of a live tile (the display name and corner logo) by using the
branding attribute on the notification payload. You can choose to display "none," only the "name," only the "logo,"
or both with "nameAndLogo."
Note Windows Mobile doesn't support the corner logo, so "logo" and "nameAndLogo" default to "name" on
Mobile.

<visual branding="logo">
...
</visual>

new TileVisual()
{
    Branding = TileBranding.Logo,
    ...
}

Result:

Branding can be applied for specific tile sizes one of two ways:
1. By applying the attribute on the <binding> element
2. By applying the attribute on the <visual> element, which affects the entire notification payload
If you don't specify branding for a binding, it will use the branding that's provided on the visual element.

<tile>
  <visual branding="nameAndLogo">

    <binding template="TileMedium" branding="logo">
      ...
    </binding>

    <!--Inherits branding from visual-->
    <binding template="TileWide">
      ...
    </binding>

  </visual>
</tile>
TileContent content = new TileContent()
{
    Visual = new TileVisual()
    {
        Branding = TileBranding.NameAndLogo,

        TileMedium = new TileBinding()
        {
            Branding = TileBranding.Logo,
            ...
        },

        // Inherits branding from Visual
        TileWide = new TileBinding()
        {
            ...
        }
    }
};

Default branding result:

If you don't specify the branding in your notification payload, the base tile's properties will determine the
branding. If the base tile shows the display name, then the branding will default to "name." Otherwise, the
branding will default to "none" if the display name isn't shown.
Note This is a change from Windows 8.x, in which the default branding was "logo."

Display name
You can override the display name of a notification by entering the text string of your choice with the
displayName attribute. As with branding, you can specify this on the <visual> element, which affects the entire
notification payload, or on the <binding> element, which only affects individual tiles.
Known Issue On Windows Mobile, if you specify a ShortName for your Tile, the display name provided in your
notification will not be used (the ShortName will always be displayed).
<tile>
  <visual branding="nameAndLogo" displayName="Wednesday 22">

    <binding template="TileMedium" displayName="Wed. 22">
      ...
    </binding>

    <!--Inherits displayName from visual-->
    <binding template="TileWide">
      ...
    </binding>

  </visual>
</tile>

TileContent content = new TileContent()
{
    Visual = new TileVisual()
    {
        Branding = TileBranding.NameAndLogo,
        DisplayName = "Wednesday 22",

        TileMedium = new TileBinding()
        {
            DisplayName = "Wed. 22",
            ...
        },

        // Inherits DisplayName from Visual
        TileWide = new TileBinding()
        {
            ...
        }
    }
};

Result:

Text
The <text> element is used to display text. You can use hints to modify how text appears.

<text>This is a line of text</text>

new AdaptiveText()
{
Text = "This is a line of text"
};

Result:
Text wrapping
By default, text doesn't wrap and will continue off the edge of the tile. Use the hint-wrap attribute to set text
wrapping on a text element. You can also control the minimum and maximum number of lines by using
hint-minLines and hint-maxLines, both of which accept positive integers.

<text hint-wrap="true">This is a line of wrapping text</text>

new AdaptiveText()
{
Text = "This is a line of wrapping text",
HintWrap = true
};

Result:

Text styles
Styles control the font size, color, and weight of text elements. There are a number of available styles, including a
"subtle" variation of each style that sets the opacity to 60%, which usually makes the text color a shade of light
gray.

<text hint-style="base">Header content</text>


<text hint-style="captionSubtle">Subheader content</text>

new AdaptiveText()
{
Text = "Header content",
HintStyle = AdaptiveTextStyle.Base
},

new AdaptiveText()
{
Text = "Subheader content",
HintStyle = AdaptiveTextStyle.CaptionSubtle
}

Result:
Note The style defaults to caption if hint-style isn't specified.
Basic text styles

<text hint-style="*" /> Font height Font weight

caption 12 effective pixels (epx) Regular

body 15 epx Regular

base 15 epx Semibold

subtitle 20 epx Regular

title 24 epx Semilight

subheader 34 epx Light

header 46 epx Light

Numeral text style variations


These variations reduce the line height so that content above and below comes much closer to the text.

titleNumeral

subheaderNumeral

headerNumeral

Subtle text style variations


Each style has a subtle variation that gives the text a 60% opacity, which usually makes the text color a shade of
light gray.

captionSubtle

bodySubtle

baseSubtle

subtitleSubtle

titleSubtle
titleNumeralSubtle

subheaderSubtle

subheaderNumeralSubtle

headerSubtle

headerNumeralSubtle

Text alignment
Text can be horizontally aligned left, center, or right. In left-to-right languages like English, text defaults to
left-aligned. In right-to-left languages like Arabic, text defaults to right-aligned. You can manually set alignment
with the hint-align attribute on elements.

<text hint-align="center">Hello</text>

new AdaptiveText()
{
Text = "Hello",
HintAlign = AdaptiveTextAlign.Center
};

Result:

Groups and subgroups


Groups allow you to semantically declare that the content inside the group is related and must be displayed in its
entirety for the content to make sense. For example, you might have two text elements, a header, and a
subheader, and it would not make sense for only the header to be shown. By grouping those elements inside a
subgroup, the elements will either all be displayed (if they can fit) or not be displayed at all (because they can't fit).
To provide the best experience across devices and screens, provide multiple groups. Having multiple groups
allows your tile to adapt to larger screens.
Note The only valid child of a group is a subgroup.
<binding template="TileWide" branding="nameAndLogo">
<group>
<subgroup>
<text hint-style="subtitle">Jennifer Parker</text>
<text hint-style="captionSubtle">Photos from our trip</text>
<text hint-style="captionSubtle">Check out these awesome photos I took while in New Zealand!</text>
</subgroup>
</group>

<text />

<group>
<subgroup>
<text hint-style="subtitle">Steve Bosniak</text>
<text hint-style="captionSubtle">Build 2015 Dinner</text>
<text hint-style="captionSubtle">Want to go out for dinner after Build tonight?</text>
</subgroup>
</group>
</binding>
TileWide = new TileBinding()
{
Branding = TileBranding.NameAndLogo,
Content = new TileBindingContentAdaptive()
{
Children =
{
CreateGroup(
from: "Jennifer Parker",
subject: "Photos from our trip",
body: "Check out these awesome photos I took while in New Zealand!"),

// For spacing
new AdaptiveText(),

CreateGroup(
from: "Steve Bosniak",
subject: "Build 2015 Dinner",
body: "Want to go out for dinner after Build tonight?")
}
}
}

...

private static AdaptiveGroup CreateGroup(string from, string subject, string body)
{
    return new AdaptiveGroup()
    {
        Children =
        {
            new AdaptiveSubgroup()
            {
                Children =
                {
                    new AdaptiveText()
                    {
                        Text = from,
                        HintStyle = AdaptiveTextStyle.Subtitle
                    },
                    new AdaptiveText()
                    {
                        Text = subject,
                        HintStyle = AdaptiveTextStyle.CaptionSubtle
                    },
                    new AdaptiveText()
                    {
                        Text = body,
                        HintStyle = AdaptiveTextStyle.CaptionSubtle
                    }
                }
            }
        }
    };
}

Result:
Subgroups (columns)
Subgroups also allow you to divide data into semantic sections within a group. For live tiles, this visually
translates to columns.
The hint-weight attribute lets you control the widths of columns. The value of hint-weight is expressed as a
weighted proportion of available space, which is identical to GridUnitType.Star behavior. For equal-width
columns, assign each subgroup a weight of 1.

hint-weight    Percentage of width

1              25%
1              25%
1              25%
1              25%

Total weight: 4

To make one column twice as large as another column, assign the smaller column a weight of 1 and the larger
column a weight of 2.

hint-weight    Percentage of width

1              33.3%
2              66.7%

Total weight: 3

If you want your first column to take up 20% of the total width and your second column to take up 80% of the
total width, assign the first weight to 20 and the second weight to 80. If your total weights equal 100, they'll act as
percentages.

hint-weight    Percentage of width

20             20%
80             80%

Total weight: 100
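
For instance, here's a minimal sketch of the 20/80 split above using the Notifications library (the text values are hypothetical placeholders):

new AdaptiveGroup()
{
    Children =
    {
        new AdaptiveSubgroup()
        {
            // Takes 20% of the available width
            HintWeight = 20,
            Children = { new AdaptiveText() { Text = "First" } }
        },
        new AdaptiveSubgroup()
        {
            // Takes 80% of the available width
            HintWeight = 80,
            Children = { new AdaptiveText() { Text = "Second" } }
        }
    }
}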

Note An 8-pixel margin is automatically added between the columns.


When you have more than two subgroups, you should specify the hint-weight, which only accepts positive
integers. If you don't specify hint-weight for the first subgroup, it will be assigned a weight of 50. The next
subgroup that doesn't have a specified hint-weight will be assigned a weight equal to 100 minus the sum of the
preceding weights, or to 1 if the result is zero. The remaining subgroups that don't have specified hint-weights
will be assigned a weight of 1.
Here's sample code for a weather tile that shows how you can achieve a tile with five columns of equal width:
<binding template="TileWide" displayName="Seattle" branding="name">
<group>
<subgroup hint-weight="1">
<text hint-align="center">Mon</text>
<image src="Assets\Weather\Mostly Cloudy.png" hint-removeMargin="true"/>
<text hint-align="center">63</text>
<text hint-align="center" hint-style="captionsubtle">42</text>
</subgroup>
<subgroup hint-weight="1">
<text hint-align="center">Tue</text>
<image src="Assets\Weather\Cloudy.png" hint-removeMargin="true"/>
<text hint-align="center">57</text>
<text hint-align="center" hint-style="captionsubtle">38</text>
</subgroup>
<subgroup hint-weight="1">
<text hint-align="center">Wed</text>
<image src="Assets\Weather\Sunny.png" hint-removeMargin="true"/>
<text hint-align="center">59</text>
<text hint-align="center" hint-style="captionsubtle">43</text>
</subgroup>
<subgroup hint-weight="1">
<text hint-align="center">Thu</text>
<image src="Assets\Weather\Sunny.png" hint-removeMargin="true"/>
<text hint-align="center">62</text>
<text hint-align="center" hint-style="captionsubtle">42</text>
</subgroup>
<subgroup hint-weight="1">
<text hint-align="center">Fri</text>
<image src="Assets\Weather\Sunny.png" hint-removeMargin="true"/>
<text hint-align="center">71</text>
<text hint-align="center" hint-style="captionsubtle">66</text>
</subgroup>
</group>
</binding>
TileWide = new TileBinding()
{
DisplayName = "Seattle",
Branding = TileBranding.Name,
Content = new TileBindingContentAdaptive()
{
Children =
{
new AdaptiveGroup()
{
Children =
{
CreateSubgroup("Mon", "Mostly Cloudy.png", "63", "42"),
CreateSubgroup("Tue", "Cloudy.png", "57", "38"),
CreateSubgroup("Wed", "Sunny.png", "59", "43"),
CreateSubgroup("Thu", "Sunny.png", "62", "42"),
CreateSubgroup("Fri", "Sunny.png", "71", "66")
}
}
}
}
}

...

private static AdaptiveSubgroup CreateSubgroup(string day, string image, string highTemp, string lowTemp)
{
return new AdaptiveSubgroup()
{
HintWeight = 1,
Children =
{
new AdaptiveText()
{
Text = day,
HintAlign = AdaptiveTextAlign.Center
},
new AdaptiveImage()
{
Source = "Assets/Weather/" + image,
HintRemoveMargin = true
},
new AdaptiveText()
{
Text = highTemp,
HintAlign = AdaptiveTextAlign.Center
},
new AdaptiveText()
{
Text = lowTemp,
HintAlign = AdaptiveTextAlign.Center,
HintStyle = AdaptiveTextStyle.CaptionSubtle
}
}
};
}

Result:
Images
The <image> element is used to display images on the tile notification. Images can be placed inline within the tile
content (default), as a background image behind your content, or as a peek image that animates in from the top
of the notification.
Note There are restrictions on the file size and dimensions of images.
With no extra behaviors specified, images will uniformly shrink or expand to fill the available width. The sample
below shows a tile using two columns and inline images. The inline images stretch to fill the width of the column.

<binding template="TileMedium" displayName="Seattle" branding="name">


<group>
<subgroup>
<text hint-align="center">Mon</text>
<image src="Assets\Apps\Weather\Mostly Cloudy.png" hint-removeMargin="true"/>
<text hint-align="center">63</text>
<text hint-style="captionsubtle" hint-align="center">42</text>
</subgroup>
<subgroup>
<text hint-align="center">Tue</text>
<image src="Assets\Apps\Weather\Cloudy.png" hint-removeMargin="true"/>
<text hint-align="center">57</text>
<text hint-style="captionSubtle" hint-align="center">38</text>
</subgroup>
</group>
</binding>
TileMedium = new TileBinding()
{
DisplayName = "Seattle",
Branding = TileBranding.Name,
Content = new TileBindingContentAdaptive()
{
Children =
{
new AdaptiveGroup()
{
Children =
{
CreateSubgroup("Mon", "Mostly Cloudy.png", "63", "42"),
CreateSubgroup("Tue", "Cloudy.png", "57", "38")
}
}
}
}
}
...
private static AdaptiveSubgroup CreateSubgroup(string day, string image, string highTemp, string lowTemp)
{
return new AdaptiveSubgroup()
{
Children =
{
new AdaptiveText()
{
Text = day,
HintAlign = AdaptiveTextAlign.Center
},
new AdaptiveImage()
{
Source = "Assets/Weather/" + image,
HintRemoveMargin = true
},
new AdaptiveText()
{
Text = highTemp,
HintAlign = AdaptiveTextAlign.Center
},
new AdaptiveText()
{
Text = lowTemp,
HintAlign = AdaptiveTextAlign.Center,
HintStyle = AdaptiveTextStyle.CaptionSubtle
}
}
};
}

Result:

Images placed in the <binding> root, or in the first group, will also stretch to fit the available height.
Image alignment
Images can be set to align left, center, or right using the hint-align attribute. This will also cause images to
display at their native resolution instead of stretching to fill width.
<binding template="TileLarge">
<image src="Assets/fable.jpg" hint-align="center"/>
</binding>

TileLarge = new TileBinding()
{
Content = new TileBindingContentAdaptive()
{
Children =
{
new AdaptiveImage()
{
Source = "Assets/fable.jpg",
HintAlign = AdaptiveImageAlign.Center
}
}
}
}

Result:

Image margins
By default, inline images have an 8-pixel margin between any content above or below the image. This margin can
be removed by using the hint-removeMargin attribute on the image. However, images always retain the 8-pixel
margin from the edge of the tile, and subgroups (columns) always retain the 8-pixel padding between columns.

<binding template="TileMedium" branding="none">


<group>
<subgroup>
<text hint-align="center">Mon</text>
<image src="Assets\Numbers\4.jpg" hint-removeMargin="true"/>
<text hint-align="center">63</text>
<text hint-style="captionsubtle" hint-align="center">42</text>
</subgroup>
<subgroup>
<text hint-align="center">Tue</text>
<image src="Assets\Numbers\3.jpg" hint-removeMargin="true"/>
<text hint-align="center">57</text>
<text hint-style="captionsubtle" hint-align="center">38</text>
</subgroup>
</group>
</binding>
TileMedium = new TileBinding()
{
Branding = TileBranding.None,
Content = new TileBindingContentAdaptive()
{
Children =
{
new AdaptiveGroup()
{
Children =
{
CreateSubgroup("Mon", "4.jpg", "63", "42"),
CreateSubgroup("Tue", "3.jpg", "57", "38")
}
}
}
}
}

...

private static AdaptiveSubgroup CreateSubgroup(string day, string image, string highTemp, string lowTemp)
{
return new AdaptiveSubgroup()
{
HintWeight = 1,
Children =
{
new AdaptiveText()
{
Text = day,
HintAlign = AdaptiveTextAlign.Center
},
new AdaptiveImage()
{
Source = "Assets/Numbers/" + image,
HintRemoveMargin = true
},
new AdaptiveText()
{
Text = highTemp,
HintAlign = AdaptiveTextAlign.Center
},
new AdaptiveText()
{
Text = lowTemp,
HintAlign = AdaptiveTextAlign.Center,
HintStyle = AdaptiveTextStyle.CaptionSubtle
}
}
};
}

Image cropping
Images can be cropped into a circle using the hint-crop attribute, which currently only supports the values
"none" (default) or "circle."

<binding template="TileLarge" hint-textStacking="center">


<group>
<subgroup hint-weight="1"/>
<subgroup hint-weight="2">
<image src="Assets/Apps/Hipstame/hipster.jpg" hint-crop="circle"/>
</subgroup>
<subgroup hint-weight="1"/>
</group>

<text hint-style="title" hint-align="center">Hi,</text>


<text hint-style="subtitleSubtle" hint-align="center">MasterHip</text>
</binding>

TileLarge = new TileBinding()
{
Content = new TileBindingContentAdaptive()
{
TextStacking = TileTextStacking.Center,
Children =
{
new AdaptiveGroup()
{
Children =
{
new AdaptiveSubgroup() { HintWeight = 1 },
new AdaptiveSubgroup()
{
HintWeight = 2,
Children =
{
new AdaptiveImage()
{
Source = "Assets/Apps/Hipstame/hipster.jpg",
HintCrop = AdaptiveImageCrop.Circle
}
}
},
new AdaptiveSubgroup() { HintWeight = 1 }
}
},
new AdaptiveText()
{
Text = "Hi,",
HintStyle = AdaptiveTextStyle.Title,
HintAlign = AdaptiveTextAlign.Center
},
new AdaptiveText()
{
Text = "MasterHip",
HintStyle = AdaptiveTextStyle.SubtitleSubtle,
HintAlign = AdaptiveTextAlign.Center
}
}
}
}

Result:
Background image
To set a background image, place an image element in the root of the <binding> and set the placement attribute
to "background."

<binding template="TileWide">
<image src="Assets\Mostly Cloudy-Background.jpg" placement="background"/>
<group>
<subgroup hint-weight="1">
<text hint-align="center">Mon</text>
<image src="Assets\Weather\Mostly Cloudy.png" hint-removeMargin="true"/>
<text hint-align="center">63</text>
<text hint-align="center" hint-style="captionsubtle">42</text>
</subgroup>
...
</group>
</binding>
TileWide = new TileBinding()
{
Content = new TileBindingContentAdaptive()
{
BackgroundImage = new TileBackgroundImage()
{
Source = "Assets/Mostly Cloudy-Background.jpg"
},

Children =
{
new AdaptiveGroup()
{
Children =
{
CreateSubgroup("Mon", "Mostly Cloudy.png", "63", "42")
...
}
}
}
}
}

...

private static AdaptiveSubgroup CreateSubgroup(string day, string image, string highTemp, string lowTemp)
{
return new AdaptiveSubgroup()
{
HintWeight = 1,
Children =
{
new AdaptiveText()
{
Text = day,
HintAlign = AdaptiveTextAlign.Center
},
new AdaptiveImage()
{
Source = "Assets/Weather/" + image,
HintRemoveMargin = true
},
new AdaptiveText()
{
Text = highTemp,
HintAlign = AdaptiveTextAlign.Center
},
new AdaptiveText()
{
Text = lowTemp,
HintAlign = AdaptiveTextAlign.Center,
HintStyle = AdaptiveTextStyle.CaptionSubtle
}
}
};
}

Result:
Peek image
You can specify an image that "peeks" in from the top of the tile. The peek image uses an animation to slide
down/up from the top of the tile, peeking into view, and then later sliding back out to reveal the main content on
the tile. To set a peek image, place an image element in the root of the <binding>, and set the placement attribute
to "peek."

<binding template="TileMedium" branding="name">


<image placement="peek" src="Assets/Apps/Hipstame/hipster.jpg"/>
<text>New Message</text>
<text hint-style="captionsubtle" hint-wrap="true">Hey, have you tried Windows 10 yet?</text>
</binding>

TileMedium = new TileBinding()
{
Branding = TileBranding.Name,
Content = new TileBindingContentAdaptive()
{
PeekImage = new TilePeekImage()
{
Source = "Assets/Apps/Hipstame/hipster.jpg"
},
Children =
{
new AdaptiveText()
{
Text = "New Message"
},
new AdaptiveText()
{
Text = "Hey, have you tried Windows 10 yet?",
HintStyle = AdaptiveTextStyle.CaptionSubtle,
HintWrap = true
}
}
}
}

Circle crop for peek and background images


Use the hint-crop attribute on peek and background images to do a circle crop:

<image placement="peek" hint-crop="circle" src="Assets/Apps/Hipstame/hipster.jpg"/>

new TilePeekImage()
{
HintCrop = TilePeekImageCrop.Circle,
Source = "Assets/Apps/Hipstame/hipster.jpg"
}

The result will look like this:


Use both peek and background image
To use both a peek and a background image on a tile notification, specify both a peek image and a background
image in your notification payload.
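No new attributes are required; a minimal sketch that combines the two previous examples (reusing the same placeholder image paths from above) looks like this:

Content = new TileBindingContentAdaptive()
{
    PeekImage = new TilePeekImage()
    {
        Source = "Assets/Apps/Hipstame/hipster.jpg"
    },
    BackgroundImage = new TileBackgroundImage()
    {
        Source = "Assets/Mostly Cloudy-Background.jpg"
    }
    // ... plus your adaptive content in Children
}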
The result will look like this:

Peek and background image overlays


You can set a black overlay on your background and peek images using hint-overlay, which accepts integers
from 0-100, with 0 being no overlay and 100 being full black overlay. You can use the overlay to help ensure that
text on your tile is readable.
Use hint-overlay on a background image
Your background image will default to 20% overlay as long as you have some text elements in your payload
(otherwise it will default to 0% overlay).

<binding template="TileWide">
<image placement="background" hint-overlay="60" src="Assets\Mostly Cloudy-Background.jpg"/>
...
</binding>

TileWide = new TileBinding()
{
Content = new TileBindingContentAdaptive()
{
BackgroundImage = new TileBackgroundImage()
{
Source = "Assets/Mostly Cloudy-Background.jpg",
HintOverlay = 60
},

...
}
}

Result:
Use hint-overlay on a peek image
In Version 1511 of Windows 10, we support an overlay for your peek image too, just like your background image.
Specify hint-overlay on the peek image element as an integer from 0-100. The default overlay for peek images is
0 (no overlay).

<binding template="TileMedium">
<image hint-overlay="20" src="Assets\Map.jpg" placement="peek"/>
...
</binding>

TileMedium = new TileBinding()
{
Content = new TileBindingContentAdaptive()
{
PeekImage = new TilePeekImage()
{
Source = "Assets/Map.jpg",
HintOverlay = 20
},
...
}
}

This example shows a peek image with a 20% overlay (left) and with no overlay (right):

Vertical alignment (text stacking)


You can control the vertical alignment of content on your tile by using the hint-textStacking attribute on both
the <binding> element and <subgroup> element. By default, everything is vertically aligned to the top, but you
can also align content to the bottom or center.
Text stacking on binding element
When applied at the <binding> level, text stacking sets the vertical alignment of the notification content as a
whole, aligning in the available vertical space above the branding/badge area.

<binding template="TileMedium" hint-textStacking="center" branding="logo">


<text hint-style="base" hint-align="center">Hi,</text>
<text hint-style="captionSubtle" hint-align="center">MasterHip</text>
</binding>

TileMedium = new TileBinding()
{
Branding = TileBranding.Logo,
Content = new TileBindingContentAdaptive()
{
TextStacking = TileTextStacking.Center,
Children =
{
new AdaptiveText()
{
Text = "Hi,",
HintStyle = AdaptiveTextStyle.Base,
HintAlign = AdaptiveTextAlign.Center
},

new AdaptiveText()
{
Text = "MasterHip",
HintStyle = AdaptiveTextStyle.CaptionSubtle,
HintAlign = AdaptiveTextAlign.Center
}
}
}
}

Text stacking on subgroup element


When applied at the <subgroup> level, text stacking sets the vertical alignment of the subgroup (column) content,
aligning in the available vertical space within the entire group.
<binding template="TileWide" branding="nameAndLogo">
<group>
<subgroup hint-weight="33">
<image src="Assets/Apps/Hipstame/hipster.jpg" hint-crop="circle"/>
</subgroup>
<subgroup hint-textStacking="center">
<text hint-style="subtitle">Hi,</text>
<text hint-style="bodySubtle">MasterHip</text>
</subgroup>
</group>
</binding>

TileWide = new TileBinding()
{
Branding = TileBranding.NameAndLogo,
Content = new TileBindingContentAdaptive()
{
Children =
{
new AdaptiveGroup()
{
Children =
{
// Image column
new AdaptiveSubgroup()
{
HintWeight = 33,
Children =
{
new AdaptiveImage()
{
Source = "Assets/Apps/Hipstame/hipster.jpg",
HintCrop = AdaptiveImageCrop.Circle
}
}
},

// Text column
new AdaptiveSubgroup()
{
// Vertical align its contents
TextStacking = TileTextStacking.Center,
Children =
{
new AdaptiveText()
{
Text = "Hi,",
HintStyle = AdaptiveTextStyle.Subtitle
},

new AdaptiveText()
{
Text = "MasterHip",
HintStyle = AdaptiveTextStyle.BodySubtle
}
}
}
}
}
}
}
}
Related topics
Adaptive tiles schema
Quickstart: Send a local tile notification
Notifications library on GitHub
Special tile templates catalog
Adaptive tile templates: schema and guidance

Here are the elements and attributes you use to create adaptive tiles. For instructions and examples, see Create
adaptive tiles.

tile element
<tile>

<!-- Child elements -->
visual

</tile>

visual element
<visual
version? = integer
lang? = string
baseUri? = anyURI
branding? = "none" | "logo" | "name" | "nameAndLogo"
addImageQuery? = boolean
contentId? = string
displayName? = string >

<!-- Child elements -->
binding+

</visual>

binding element
<binding
template = tileTemplateNameV3
fallback? = tileTemplateNameV1
lang? = string
baseUri? = anyURI
branding? = "none" | "logo" | "name" | "nameAndLogo"
addImageQuery? = boolean
contentId? = string
displayName? = string
hint-textStacking? = "top" | "center" | "bottom"
hint-overlay? = [0-100] >

<!-- Child elements -->
( image
| text
| group
)*

</binding>
image element
<image
src = string
placement? = "inline" | "background" | "peek"
alt? = string
addImageQuery? = boolean
hint-crop? = "none" | "circle"
hint-removeMargin? = boolean
hint-align? = "stretch" | "left" | "center" | "right" />

text element
<text
lang? = string
hint-style? = textStyle
hint-wrap? = boolean
hint-maxLines? = integer
hint-minLines? = integer
hint-align? = "left" | "center" | "right" >

<!-- text goes here -->

</text>

textStyle values: caption, captionSubtle, body, bodySubtle, base, baseSubtle, subtitle, subtitleSubtle, title, titleSubtle, titleNumeral, titleNumeralSubtle, subheader, subheaderSubtle, subheaderNumeral, subheaderNumeralSubtle, header, headerSubtle, headerNumeral, headerNumeralSubtle

group element
<group>

<!-- Child elements -->
subgroup+

</group>

subgroup element
<subgroup
hint-weight? = [0-100]
hint-textStacking? = "top" | "center" | "bottom" >

<!-- Child elements -->
( text
| image
)*

</subgroup>

Related topics
Create adaptive tiles
Guidelines for tile and icon assets

App icon assets, which appear in a variety of forms throughout the Windows 10 operating system, are the calling
cards for your Universal Windows Platform (UWP) app. These guidelines detail where app icon assets appear in
the system, and provide in-depth design tips on how to create the most polished icons.

Adaptive scaling
First, a brief overview on adaptive scaling to better understand how scaling works with assets. Windows 10
introduces an evolution of the existing scaling model. In addition to scaling vector content, there is a unified set of
scale factors that provides a consistent size for UI elements across a variety of screen sizes and display resolutions.
The scale factors are also compatible with the scale factors of other operating systems such as iOS and Android,
which makes it easier to share assets between these platforms.
The Store picks the assets to download based in part on the DPI of the device. Only the assets that best match the
device are downloaded.

Tile elements
The basic components of a Start tile consist of a back plate, an icon, a branding bar, margins, and an app title:

The branding bar at the bottom of a tile is where the app name, badging, and counter (if used) appear:
The height of the branding bar is based on the scale factor of the device on which it appears:

SCALE FACTOR    PIXELS

100%            32
125%            40
150%            48
200%            64
400%            128

Tile margins are set by the system and cannot be modified. Most content appears inside the margins, as seen in this
example:

Margin width is based on the scale factor of the device on which it appears:

SCALE FACTOR    PIXELS

100%            8
125%            10
150%            12
200%            16
400%            32

Tile assets
Each tile asset is the same size as the tile on which it is placed. You can brand your app's tiles with two different
representations of an asset:
1. An icon or logo centered with padding. This lets the back plate color show through:
2. A full-bleed, branded tile without padding:

For consistency across devices, each tile size (small, medium, wide, and large) has its own sizing relationship. In
order to achieve a consistent icon placement across tiles, we recommend a few basic padding guidelines for the
following tile sizes. The area where the two purple overlays intersect represents the ideal footprint for an icon.
Although icons won't always fit inside the footprint, the visual volume of an icon should be roughly equivalent to
the provided examples.
Small tile sizing:

Medium tile sizing:

Wide tile sizing:


Large tile sizing:

In this example, the icon is too large for the tile:

In this example, the icon is too small for the tile:

The following padding ratios are optimal for horizontally or vertically oriented icons.
For small tiles, limit the icon width and height to 66% of the tile size:

For medium tiles, limit the icon width to 66% and height to 50% of tile size. This prevents overlapping of elements
in the branding bar:
For wide tiles, limit the icon width to 66% and height to 50% of tile size. This prevents overlapping of elements in
the branding bar:

For large tiles, limit the icon width and height to 50% of tile size:

Some icons are designed to be horizontally or vertically oriented, while others have more complex shapes that
prevent them from fitting squarely within the target dimensions. Icons that appear to be centered can be weighted
to one side. In this case, parts of an icon may hang outside the recommended footprint, provided it occupies the
same visual weight as a squarely fitted icon:

With full-bleed assets, take into account elements that interact within the margins and edges of the tiles. Maintain
margins of at least 16% of the height or width of the tile. This percentage represents double the width of the
margins at the smallest tile sizes:

In this example, margins are too tight:

Tile assets in list views


Tiles can also appear in a list view. Sizing guidelines for tile assets that appear in list views are a bit different
from the guidelines for tile assets previously outlined. This section details those sizing specifics.

Limit icon width and height to 75% of the tile size:


For vertical and horizontal icon formats, limit width and height to 75% of the tile size:

For full bleed artwork of important brand elements, maintain margins of at least 12.5%:

In this example, the icon is too big inside its tile:

In this example, the icon is too small inside its tile:


Target-based assets
Target-based assets are for icons and tiles that appear on the Windows taskbar, task view, ALT+TAB, snap-assist,
and the lower-right corner of Start tiles. You don't have to add padding to these assets; Windows adds padding if
needed. These assets should account for a minimum footprint of 16 pixels. Here's an example of these assets as
they appear in icons on the Windows taskbar:

Although these UI surfaces use a target-based asset on top of a colored backplate by default, you can also use a
target-based unplated asset. Unplated assets should be created with the possibility in mind that they may appear on
various background colors:

These are size recommendations for target-based assets, at 100% scale:

Iconic template app assets


The iconic template (also known as the "IconWithBadge" template) lets you display a small image in the center of
the tile. Windows 10 supports the template on both phone and tablet/desktop. (Learn about creating iconic tiles in
the Special tile templates article.)
Apps that use the iconic template, such as Messaging, Phone, and Store, have target-based assets that can feature
a badge (with the live counter). As with other target-based assets, no padding is needed. Iconic assets aren't part of
the app manifest, but are part of a live tile payload. Assets are scaled to fit and centered within a 3:2 ratio container:
For square assets, automatic centering within the container occurs:

For non-square assets, automatic horizontal/vertical centering and snapping to the width/height of the container
occurs:

Splash screen assets


The splash screen image can be given either as a direct path to an image file or as a resource. By using a resource
reference, you can supply images of different scales so that Windows can choose the best size for the device and
screen resolution. You can also supply high contrast images for accessibility and localized images to match
different UI languages.
If you open "Package.appxmanifest" in a text editor, the SplashScreen element appears as a child of the
VisualElements element. The default splash screen markup in the manifest file looks like this in a text editor:

<uap:SplashScreen Image="Assets\SplashScreen.png" />

The splash screen asset is centered on whichever device it appears on:


High-contrast assets
High-contrast mode makes use of separate sets of assets for high-contrast white (white background with black
text) and high-contrast black (black background with white text). If you don't provide high-contrast assets for your
app, standard assets will be used.
If your app's standard assets provide an acceptable viewing experience when rendered on a black-and-white
background, then your app should look at least satisfactory in high-contrast mode. If your standard assets don't
afford an acceptable viewing experience when rendered on a black-and-white background, consider specifically
including high-contrast assets. These examples illustrate the two types of high-contrast assets:

If you decide to provide high-contrast assets, you need to include both sets: white-on-black and black-on-white.
When including these assets in your package, you could create a "contrast-black" folder for white-on-black
assets, and a "contrast-white" folder for black-on-white assets.

Asset size tables


At a bare minimum, we strongly recommend that you provide assets for the 100, 200, and 400 scale factors.
Providing assets for all scale factors will provide the optimal user experience.
Scale-based assets

CATEGORY               ELEMENT NAME        AT 100% SCALE   AT 125% SCALE   AT 150% SCALE   AT 200% SCALE   AT 400% SCALE

Small                  Square71x71Logo     71x71           89x89           107x107         142x142         284x284
Medium                 Square150x150Logo   150x150         188x188         225x225         300x300         600x600
Wide                   Square310x150Logo   310x150         388x188         465x225         620x300         1240x600
Large (desktop only)   Square310x310Logo   310x310         388x388         465x465         620x620         1240x1240
App list (icon)        Square44x44Logo     44x44           55x55           66x66           88x88           176x176

File name examples for scale-based assets

CATEGORY               ELEMENT NAME        FILE NAME EXAMPLE

Small                  Square71x71Logo     AppNameSmallTile.scale-100.png
Medium                 Square150x150Logo   AppNameMedTile.scale-100.png
Wide                   Square310x150Logo   AppNameWideTile.scale-100.png
Large (desktop only)   Square310x310Logo   AppNameLargeTile.scale-100.png
App list (icon)        Square44x44Logo     AppNameAppList.scale-100.png

The same naming pattern applies at the other scale factors: substitute scale-125, scale-150, scale-200, or scale-400 for scale-100.

Target-based assets
Target-based assets are used across multiple scale factors. The element name for target-based assets is
Square44x44Logo. We strongly recommend submitting the following assets as a bare minimum:
16x16, 24x24, 32x32, 48x48, 256x256
The following table lists all target-based asset sizes and corresponding file name examples:

ASSET SIZE    FILE NAME EXAMPLE

16x16* AppNameAppList.targetsize-16.png

24x24* AppNameAppList.targetsize-24.png

32x32* AppNameAppList.targetsize-32.png

48x48* AppNameAppList.targetsize-48.png

256x256* AppNameAppList.targetsize-256.png

20x20 AppNameAppList.targetsize-20.png

30x30 AppNameAppList.targetsize-30.png

36x36 AppNameAppList.targetsize-36.png

40x40 AppNameAppList.targetsize-40.png

60x60 AppNameAppList.targetsize-60.png

64x64 AppNameAppList.targetsize-64.png

72x72 AppNameAppList.targetsize-72.png

80x80 AppNameAppList.targetsize-80.png

96x96 AppNameAppList.targetsize-96.png

* Submit these asset sizes as a baseline

Asset types
Listed here are all asset types, their uses, and recommended file names.
Tile assets
Centered assets are generally used on Start to showcase your app.
File name format: *Tile.scale-*.PNG
Impacted apps: Every UWP app
Uses:
Default Start tiles (desktop and mobile)
Action center (desktop and mobile)
Task switcher (mobile)
Share picker (mobile)
Picker (mobile)
Store
Scalable list assets with plate
These assets are used on surfaces that request scale factors. Assets either get plated by the system or come
with their own background color if the app includes that.
File name format: *AppList.scale-*.PNG
Impacted apps: Every UWP app
Uses:
Start all apps list (desktop)
Start most-frequently used list (desktop)
Task manager (desktop)
Cortana search results
Start all apps list (mobile)
Settings
Target-size list assets with plate
These are fixed asset sizes that don't scale with plateaus. Mostly used for legacy experiences. Assets are checked
by the system.
File name format: *AppList.targetsize-*.PNG
Impacted apps: Every UWP app
Uses:
Start jump list (desktop)
Start lower corner of tile (desktop)
Shortcuts (desktop)
Control Panel (desktop)
Target-size list assets without plate
These are assets that don't get plated or scaled by the system.
File name format: *AppList.targetsize-*_altform-unplated.PNG
Impacted apps: Every UWP app
Uses:
Taskbar and taskbar thumbnail (desktop)
Taskbar jumplist
Task view
ALT+TAB
File extension assets
These are assets specific to file extensions. They appear next to Win32-style file association icons in File Explorer
and must be theme-agnostic. Sizing is different on desktop and mobile platforms.
File name format: *LogoExtensions.targetsize-*.PNG
Impacted apps: Music, Video, Photos, Microsoft Edge, Microsoft Office
Uses:
File Explorer
Cortana
Various UI surfaces (desktop)
Splash screen
The asset that appears on your app's splash screen. Automatically scales on both desktop and mobile platforms.
File name format: *SplashScreen.screen-100.PNG
Impacted apps: Every UWP app
Uses:
App's splash screen
Iconic tile assets
These are assets for apps that make use of the iconic template.
File name format: Not applicable
Impacted apps: Messaging, Phone, Store, more
Uses:
Iconic tile

Related topics
Special tile templates
Special tile templates

Special tile templates are unique templates that are either animated, or just allow you to do things that aren't
possible with adaptive tiles. Each special tile template was specifically built for Windows 10, except for the iconic
tile template, a classic special template that has been updated for Windows 10. This article covers three special tile
templates: Iconic, Photos, and People.

Iconic tile template


The iconic template (also known as the "IconWithBadge" template) lets you display a small image in the center of
the tile. Windows 10 supports the template on both phone and tablet/desktop.

How to create an iconic tile


The following steps cover everything you need to know to create an iconic tile for Windows 10. At a high level, you
need your iconic image asset, then you send a notification to the tile using the iconic template, and finally you
send a badge notification that provides the number to be displayed on the tile.

Step 1: Create your image assets in PNG format


Create the icon assets for your tile and place them in your project resources with your other assets. At a bare
minimum, create a 200x200 pixel icon, which works for both small and medium tiles on phone and desktop. To
provide the best user experience, create an icon for each size. See sizing details in the image below.
Save icon assets in PNG format and with transparency. On Windows Phone, every non-transparent pixel is
displayed as white (RGB 255, 255, 255). For consistency and simplicity, use white for desktop icons as well.
Windows 10 on tablet, laptop, and desktop only supports square icon assets. Phone supports both square assets
and assets that are taller than they are wide, up to a 2:3 width:height ratio, which is useful for images such as a
phone icon.

Step 2: Create your base tile


You can use the iconic template on both primary and secondary tiles. If you're using it on a secondary tile, you'll
first have to create the secondary tile or use an already-pinned secondary tile. Primary tiles are implicitly pinned
and can always be sent notifications.
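For example, a minimal sketch of pinning a secondary tile (the tile ID, display name, arguments, and asset path here are hypothetical):

// Requires the Windows.UI.StartScreen namespace.
SecondaryTile tile = new SecondaryTile(
    "MyIconicTileId",      // hypothetical tile ID
    "Iconic tile",         // short display name
    "launchArgs",          // activation arguments
    new Uri("ms-appx:///Assets/Square150x150Logo.png"),
    TileSize.Square150x150);
await tile.RequestCreateAsync();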
Step 3: Send a notification to your tile
Although this step can vary based on whether the notification is sent locally or via server push, the XML payload
that you send remains the same. To send a local tile notification, create a TileUpdater for your tile (either primary
or secondary tile), then send a notification to the tile that uses the iconic tile template as seen below. Ideally, you
should also include bindings for wide and large tile sizes using adaptive tile templates.
Here's sample code for the XML payload:

<tile>
<visual>

<binding template="TileSquare150x150IconWithBadge">
<image id="1" src="Iconic.png" alt="alt text"/>
</binding>

<binding template="TileSquare71x71IconWithBadge">
<image id="1" src="Iconic.png" alt="alt text"/>
</binding>

</visual>
</tile>
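
To send this payload locally, a minimal sketch (assuming the XML above is stored in a string named xml, a hypothetical variable) might be:

// Requires Windows.UI.Notifications and Windows.Data.Xml.Dom.
XmlDocument doc = new XmlDocument();
doc.LoadXml(xml);
TileUpdateManager.CreateTileUpdaterForApplication().Update(new TileNotification(doc));
// For a secondary tile, use CreateTileUpdaterForSecondaryTile("MyIconicTileId") instead.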

This iconic tile template XML payload uses an image element that points to the image that you created in Step 1.
Now your tile is ready to display the badge next to your icon; all that's left is sending badge notifications.
Step 4: Send a badge notification to your tile
As with step 3, this step can vary based on whether the notification is sent locally or via server push, yet the XML
payload that you send remains the same. To send a local badge notification, create a BadgeUpdater for your tile
(either primary or secondary tile), then send a badge notification with your desired value (or clear the badge).
Here's sample code for the XML payload:

<badge value="2"/>

The tile's badge will update accordingly.


Step 5: Putting it all together
The following image illustrates how the various APIs and payloads are associated with each aspect of the iconic tile
template. A tile notification (which contains those <binding> elements) is used to specify the iconic template and
the image asset; a badge notification specifies the numerical value; tile properties control your tile's display name,
color, and more.

Photos tile template


The photos tile template lets you display a slideshow of photos on your live tile. The template is supported on all
tile sizes, including small, and behaves the same on each tile size. The below example shows five frames of a
medium tile that uses the photos template. The template has a zoom and cross-fade animation that cycles through
selected photos and loops indefinitely.

How to use the photos template


Using the photos template is easy if you've installed the Notifications library. Although you can use raw XML, we
highly recommend using the library so you don't have to worry about generating valid XML or XML-escaping
content.
Windows Phone displays up to 9 photos in a slideshow; tablet, laptop, and desktop display up to 12.
For information about sending the tile notification, see the Send notifications article.
<!--

To use the Photos template...

- On any adaptive tile binding (like TileMedium or TileWide)
- Set the hint-presentation attribute to "photos"
- Add up to 12 images as children of the binding.

-->

<tile>
<visual>

<binding template="TileMedium" hint-presentation="photos">

<image src="Assets/1.jpg" />


<image src="ms-appdata:///local/Images/2.jpg"/>
<image src="http://msn.com/images/3.jpg"/>

<!--TODO: Can have 12 images total-->

</binding>

<!--TODO: Add bindings for other tile sizes-->

</visual>
</tile>

/*

To use the Photos template...

- On any TileBinding object
- Set Content property to new instance of TileBindingContentPhotos
- Add up to 12 images to Images property of TileBindingContentPhotos.

*/

TileContent content = new TileContent()
{
Visual = new TileVisual()
{
TileMedium = new TileBinding()
{
Content = new TileBindingContentPhotos()
{
Images =
{
new TileBasicImage() { Source = "Assets/1.jpg" },
new TileBasicImage() { Source = "ms-appdata:///local/Images/2.jpg" },
new TileBasicImage() { Source = "http://msn.com/images/3.jpg" }

// TODO: Can have 12 images total
}
}
}

// TODO: Add other tile sizes
}
};

People tile template


The People app in Windows 10 uses a special tile template that displays a collection of images in circles that slide
around vertically or horizontally on the tile. This tile template has been available since Windows 10 Build 10572,
and anyone is welcome to use it in their app.
The People tile template works on tiles of these sizes:
Medium tile (TileMedium)

Wide tile (TileWide)

Large tile (desktop only) (TileLarge)

If you're using the Notifications library, all you have to do to make use of the People tile template is create a new
TileBindingContentPeople object for your TileBinding content. The TileBindingContentPeople class has an Images
property where you add your images.
If you're using raw XML, set the hint-presentation to "people" and add your images as children of the binding
element.
The following C# code sample assumes that you're using the Notifications library.
TileContent content = new TileContent()
{
Visual = new TileVisual()
{
TileMedium = new TileBinding()
{
Content = new TileBindingContentPeople()
{
Images =
{
new TileBasicImage() { Source = "Assets/ProfilePics/1.jpg" },
new TileBasicImage() { Source = "Assets/ProfilePics/2.jpg" },
new TileBasicImage() { Source = "Assets/ProfilePics/3.jpg" },
new TileBasicImage() { Source = "Assets/ProfilePics/4.jpg" },
new TileBasicImage() { Source = "Assets/ProfilePics/5.jpg" },
new TileBasicImage() { Source = "Assets/ProfilePics/6.jpg" },
new TileBasicImage() { Source = "Assets/ProfilePics/7.jpg" },
new TileBasicImage() { Source = "Assets/ProfilePics/8.jpg" },
new TileBasicImage() { Source = "Assets/ProfilePics/9.jpg" }
}
}
}
}
};

<tile>
<visual>

<binding template="TileMedium" hint-presentation="people">


<image src="Assets/ProfilePics/1.jpg"/>
<image src="Assets/ProfilePics/2.jpg"/>
<image src="Assets/ProfilePics/3.jpg"/>
<image src="Assets/ProfilePics/4.jpg"/>
<image src="Assets/ProfilePics/5.jpg"/>
<image src="Assets/ProfilePics/6.jpg"/>
<image src="Assets/ProfilePics/7.jpg"/>
<image src="Assets/ProfilePics/8.jpg"/>
<image src="Assets/ProfilePics/9.jpg"/>
</binding>

</visual>
</tile>

For the best user experience, we recommend that you provide the following number of photos for each tile size:
Medium tile: 9 photos
Wide tile: 15 photos
Large tile: 20 photos
Having that number of photos allows for a few empty circles, which means that the tile won't be too visually busy.
Feel free to tweak the number of photos to get the look that works best for you.
To send the notification, see Choose a notification delivery method.

Related topics
Full code sample on GitHub
Notifications library
Tiles, badges, and notifications
Create adaptive tiles
Adaptive tile templates: schema and documentation
Adaptive and interactive toast notifications

Adaptive and interactive toast notifications let you create flexible pop-up notifications with more content, optional
inline images, and optional user interaction.
The adaptive and interactive toast notifications model has these updates over the legacy toast template catalog:
The option to include buttons and inputs on the notifications.
Three different activation types for the main toast notification and for each action.
The option to create a notification for certain scenarios, including alarms, reminders, and incoming calls.
Note To see the legacy templates from Windows 8.1 and Windows Phone 8.1, see the legacy toast template
catalog.

Getting started
Install Notifications library. If you'd like to use C# instead of XML to generate notifications, install the NuGet
package named Microsoft.Toolkit.Uwp.Notifications (search for "notifications uwp"). The C# samples provided in
this article use version 1.0.0 of the NuGet package.
Install Notifications Visualizer. This free UWP app helps you design interactive toast notifications by providing
an instant visual preview of your toast as you edit it, similar to Visual Studio's XAML editor/design view. You can
read this blog post for more information, and you can download Notifications Visualizer here.

Toast notification structure


Toast notifications are constructed using XML, which typically contains these key elements:
<visual> covers the content available for the users to visually see, including text and images
<actions> contains buttons/inputs the developer wants to add inside the notification
<audio> specifies the sound played when the notification pops
Here's a code example:

<toast launch="app-defined-string">
<visual>
<binding template="ToastGeneric">
<text>Sample</text>
<text>This is a simple toast notification example</text>
<image placement="AppLogoOverride" src="oneAlarm.png" />
</binding>
</visual>
<actions>
<action content="check" arguments="check" imageUri="check.png" />
<action content="cancel" arguments="cancel" />
</actions>
<audio src="ms-winsoundevent:Notification.Reminder"/>
</toast>
ToastContent content = new ToastContent()
{
Launch = "app-defined-string",

Visual = new ToastVisual()
{
BindingGeneric = new ToastBindingGeneric()
{
Children =
{
new AdaptiveText()
{
Text = "Sample"
},

new AdaptiveText()
{
Text = "This is a simple toast notification example"
}
},

AppLogoOverride = new ToastGenericAppLogo()
{
Source = "oneAlarm.png"
}
}
},

Actions = new ToastActionsCustom()
{
Buttons =
{
new ToastButton("check", "check")
{
ImageUri = "check.png"
},

new ToastButton("cancel", "cancel")


{
ImageUri = "cancel.png"
}
}
},

Audio = new ToastAudio()
{
Src = new Uri("ms-winsoundevent:Notification.Reminder")
}
};

Next we need to convert the toast into an XmlDocument object. If you defined the toast in an XML file (here named
"content.xml"), use this code:

string xmlText = File.ReadAllText("content.xml");


XmlDocument xmlContent = new XmlDocument();
xmlContent.LoadXml(xmlText);

Or, if you defined the toast template in C#, use this:

XmlDocument xmlContent = content.GetXml();

Regardless of how you create the XMLDocument, you can then use this code to create and send the toast:
ToastNotification notification = new ToastNotification(xmlContent);
ToastNotificationManager.CreateToastNotifier().Show(notification);

To see a complete app that shows toast notifications in action, see the Quickstart on Sending a local toast
notifications.
Here is a visual representation of the structure:

Visual
Inside the visual element, you must have exactly one binding element that contains the visual content of the toast.
Tile notifications in Universal Windows Platform (UWP) apps support multiple templates that are based on
different tile sizes. Toast notifications, however, have only one template name: ToastGeneric. Having just the one
template name means:
You can change the toast content, such as adding another line of text, adding an inline image, or changing the
thumbnail image from displaying the app icon to something else, and do any of these things without worrying
about changing the entire template or creating an invalid payload due to a mismatch between the template
name and the content.
You can use the same code to construct the same payload for a toast notification delivered to different types of
Microsoft Windows devices, including phones, tablets, PCs, and Xbox One. Each of these devices will accept the
notification and display it to the user under its UI policies with the appropriate visual affordances and
interaction model.
For all attributes supported in the visual section and its child elements, see the Schema section below. For more
examples, see the XML examples section below.
Your app's identity is conveyed via your app icon. However, if you use appLogoOverride, we will display your app
name beneath your lines of text.

Normal toast (left) compared with a toast using appLogoOverride (right):


Actions
In UWP apps, you can add buttons and other inputs to your toast notifications, which lets users do more outside of
the app. These actions are specified under the <actions> element, of which there are two types that you can
specify:
<action> This appears as a button on desktop and mobile devices. You can specify up to five custom or system
actions inside a toast notification.
<input> This allows users to provide input, such as quick replying to a message, or selecting an option from a
drop-down menu.
Both <action> and <input> are adaptive within the Windows family of devices. For example, on mobile or desktop
devices, an <action> to a user is a button on which to tap/click. A text <input> is a box in which users can input
text using either a physical keyboard or an on-screen keyboard. These elements will also adapt to future interaction
scenarios, such as an action announced by voice or a text input taken by dictation.
When the user takes an action, you can do one of the following by specifying the activationType attribute
inside the <action> element:
Activate the app in the foreground, with an action-specific argument that can be used to navigate to a specific
page/context.
Activate the app's background task without affecting the user.
Activate another app via protocol launch.
Specify a system action to perform. The currently available system actions are snoozing and dismissing a scheduled
alarm/reminder, which will be explained further in a section below.
For all attributes supported in the actions section and its child elements, see the Schema section below. For more
examples, see the XML examples section below.
Audio
Custom audio has always been supported by Mobile, and is supported in Desktop Version 1511 (build 10586) or
newer. Custom audio can be referenced via the following paths:
ms-appx:///
ms-appdata:///
Alternatively, you can pick from the list of ms-winsoundevents, which have always been supported on both
platforms.
See the audio schema page for information on audio in toast notifications. To learn how to send a toast using
custom audio, see this blog post.
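
For example, a minimal sketch that points the toast at a hypothetical sound file packaged with the app:

Audio = new ToastAudio()
{
    // Hypothetical asset path; any ms-appx:/// or ms-appdata:/// audio file works
    Src = new Uri("ms-appx:///Assets/Audio/custom.mp3")
}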

Alarms, reminders, and incoming calls


You can use toast notifications for alarms, reminders, and incoming calls. These special toasts have an appearance
that's consistent with standard toasts, though special toasts feature some custom, scenario-based UI and patterns:
A reminder toast notification will stay on screen until the user dismisses it or takes action. On Windows Mobile,
the reminder toast notifications will also show up pre-expanded.
In addition to sharing the above behaviors with reminder notifications, alarm notifications also automatically
play looping audio.
Incoming call notifications are displayed full screen on Windows Mobile devices. This is done by specifying the
scenario attribute inside the root element of a toast notification <toast>: <toast scenario=" { default | alarm |
reminder | incomingCall } " >

XML examples
Note The toast notification screenshots for these examples were taken from an app on desktop. On mobile
devices, a toast notification may be collapsed when it pops up, with a grabber at the bottom of the toast to expand
it.
Notification with rich visual contents
This example shows how you can have multiple lines of text, an optional small image to override the application
logo, and an optional inline image thumbnail.

<toast launch="app-defined-string">
<visual>
<binding template="ToastGeneric">
<text>Photo Share</text>
<text>Andrew sent you a picture</text>
<text>See it in full size!</text>
<image src="https://unsplash.it/360/180?image=1043" />
<image placement="appLogoOverride" src="https://unsplash.it/64?image=883" hint-crop="circle" />
</binding>
</visual>
</toast>

ToastContent content = new ToastContent()
{
Launch = "app-defined-string",

Visual = new ToastVisual()
{
BindingGeneric = new ToastBindingGeneric()
{
Children =
{
new AdaptiveText()
{
Text = "Photo Share"
},

new AdaptiveText()
{
Text = "Andrew sent you a picture"
},

new AdaptiveText()
{
Text = "See it in full size!"
},

new AdaptiveImage()
{
Source = "https://unsplash.it/360/180?image=1043"
}
},

AppLogoOverride = new ToastGenericAppLogo()
{
Source = "https://unsplash.it/64?image=883",
HintCrop = ToastGenericAppLogoCrop.Circle
}
}
}
};
Notification with actions
This example creates a notification with two possible response actions.

<toast launch="app-defined-string">
<visual>
<binding template="ToastGeneric">
<text>Microsoft Company Store</text>
<text>New Halo game is back in stock!</text>
</binding>
</visual>
<actions>
<action activationType="foreground" content="See more details" arguments="details"/>
<action activationType="background" content="Remind me later" arguments="later"/>
</actions>
</toast>
ToastContent content = new ToastContent()
{
Launch = "app-defined-string",

Visual = new ToastVisual()
{
BindingGeneric = new ToastBindingGeneric()
{
Children =
{
new AdaptiveText()
{
Text = "Microsoft Company Store"
},

new AdaptiveText()
{
Text = "New Halo game is back in stock!"
}
}
}
},

Actions = new ToastActionsCustom()
{
Buttons =
{
new ToastButton("See more details", "details"),

new ToastButton("Remind me later", "later")


{
ActivationType = ToastActivationType.Background
}
}
}
};

Notification with text input and actions, example 1


This example creates a notification that accepts text input, along with two response actions.

<toast launch="developer-defined-string">
<visual>
<binding template="ToastGeneric">
<text>Andrew B.</text>
<text>Shall we meet up at 8?</text>
<image placement="appLogoOverride" src="https://unsplash.it/64?image=883" hint-crop="circle" />
</binding>
</visual>
<actions>
<input id="message" type="text" placeHolderContent="Type a reply" />
<action activationType="background" content="Reply" arguments="reply" />
<action activationType="foreground" content="Video call" arguments="video" />
</actions>
</toast>
ToastContent content = new ToastContent()
{
Launch = "app-defined-string",

Visual = new ToastVisual()
{
BindingGeneric = new ToastBindingGeneric()
{
Children =
{
new AdaptiveText()
{
Text = "Andrew B."
},

new AdaptiveText()
{
Text = "Shall we meet up at 8?"
}
},

AppLogoOverride = new ToastGenericAppLogo()
{
Source = "https://unsplash.it/64?image=883",
HintCrop = ToastGenericAppLogoCrop.Circle
}
}
},

Actions = new ToastActionsCustom()
{
Inputs =
{
new ToastTextBox("message")
{
PlaceholderContent = "Type a reply"
}
},

Buttons =
{
new ToastButton("Reply", "reply")
{
ActivationType = ToastActivationType.Background
},

new ToastButton("Video call", "video")


{
ActivationType = ToastActivationType.Foreground
}
}
}
};

Notification with text input and actions, example 2


This example creates a notification that accepts text input and a single action.
<toast launch="developer-defined-string">
<visual>
<binding template="ToastGeneric">
<text>Andrew B.</text>
<text>Shall we meet up at 8?</text>
<image placement="appLogoOverride" src="https://unsplash.it/64?image=883" hint-crop="circle" />
</binding>
</visual>
<actions>
<input id="message" type="text" placeHolderContent="Type a reply" />
<action activationType="background" content="Reply" arguments="reply" hint-inputId="message" imageUri="Assets/Icons/send.png"/>
</actions>
</toast>

ToastContent content = new ToastContent()
{
Launch = "app-defined-string",

Visual = new ToastVisual()
{
BindingGeneric = new ToastBindingGeneric()
{
Children =
{
new AdaptiveText()
{
Text = "Andrew B."
},

new AdaptiveText()
{
Text = "Shall we meet up at 8?"
}
},

AppLogoOverride = new ToastGenericAppLogo()
{
Source = "https://unsplash.it/64?image=883",
HintCrop = ToastGenericAppLogoCrop.Circle
}
}
},

Actions = new ToastActionsCustom()
{
Inputs =
{
new ToastTextBox("message")
{
PlaceholderContent = "Type a reply"
}
},

Buttons =
{
new ToastButton("Reply", "reply")
{
TextBoxId = "message",
ImageUri = "Assets/Icons/send.png",
ActivationType = ToastActivationType.Background
}
}
}
};
Notification with selection input and actions
This example creates a notification with a drop-down selection menu, and two possible actions.

<toast launch="developer-defined-string">
<visual>
<binding template="ToastGeneric">
<text>Spicy Heaven</text>
<text>When do you plan to come in tomorrow?</text>
</binding>
</visual>
<actions>
<input id="time" type="selection" defaultInput="2" >
<selection id="1" content="Breakfast" />
<selection id="2" content="Lunch" />
<selection id="3" content="Dinner" />
</input>
<action activationType="background" content="Reserve" arguments="reserve" />
<action activationType="foreground" content="Call Restaurant" arguments="call" />
</actions>
</toast>
ToastContent content = new ToastContent()
{
Launch = "app-defined-string",

Visual = new ToastVisual()
{
BindingGeneric = new ToastBindingGeneric()
{
Children =
{
new AdaptiveText()
{
Text = "Spicy Heaven"
},

new AdaptiveText()
{
Text = "When do you plan to come in tomorrow?"
}
}
}
},

Actions = new ToastActionsCustom()
{
Inputs =
{
new ToastSelectionBox("time")
{
DefaultSelectionBoxItemId = "2",
Items =
{
new ToastSelectionBoxItem("1", "Breakfast"),
new ToastSelectionBoxItem("2", "Lunch"),
new ToastSelectionBoxItem("3", "Dinner")
}
}
},

Buttons =
{
new ToastButton("Reserve", "reserve")
{
ActivationType = ToastActivationType.Background
},

new ToastButton("Call Restaurant", "call")


{
ActivationType = ToastActivationType.Foreground
}
}
}
};

Reminder notification
Using a selection menu and two actions as in the previous example, we can create a reminder notification:
<toast scenario="reminder" launch="action=viewEvent&amp;eventId=1983">

<visual>
<binding template="ToastGeneric">
<text>Adaptive Tiles Meeting</text>
<text>Conf Room 2001 / Building 135</text>
<text>10:00 AM - 10:30 AM</text>
</binding>
</visual>

<actions>

<input id="snoozeTime" type="selection" defaultInput="15">


<selection id="1" content="1 minute"/>
<selection id="15" content="15 minutes"/>
<selection id="60" content="1 hour"/>
<selection id="240" content="4 hours"/>
<selection id="1440" content="1 day"/>
</input>

<action activationType="system" arguments="snooze" hint-inputId="snoozeTime" content="" />

<action activationType="system" arguments="dismiss" content=""/>

</actions>

</toast>
ToastContent content = new ToastContent()
{
Launch = "action=viewEvent&eventId=1983",
Scenario = ToastScenario.Reminder,

Visual = new ToastVisual()
{
BindingGeneric = new ToastBindingGeneric()
{
Children =
{
new AdaptiveText()
{
Text = "Adaptive Tiles Meeting"
},

new AdaptiveText()
{
Text = "Conf Room 2001 / Building 135"
},

new AdaptiveText()
{
Text = "10:00 AM - 10:30 AM"
}
}
}
},

Actions = new ToastActionsCustom()
{
Inputs =
{
new ToastSelectionBox("snoozeTime")
{
DefaultSelectionBoxItemId = "15",
Items =
{
new ToastSelectionBoxItem("5", "5 minutes"),
new ToastSelectionBoxItem("15", "15 minutes"),
new ToastSelectionBoxItem("60", "1 hour"),
new ToastSelectionBoxItem("240", "4 hours"),
new ToastSelectionBoxItem("1440", "1 day")
}
}
},

Buttons =
{
new ToastButtonSnooze()
{
SelectionBoxId = "snoozeTime"
},

new ToastButtonDismiss()
}
}
};
Handling activation (foreground and background)
To learn how to handle toast activations (the user clicking your toast or its buttons), see Quickstart:
Sending a local toast notification and handling activation.
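
As a minimal sketch of the foreground case (the full pattern is in the quickstart), you can override OnActivated in App.xaml.cs and read the argument string:

// Requires Windows.ApplicationModel.Activation and Windows.UI.Notifications.
protected override void OnActivated(IActivatedEventArgs e)
{
    ToastNotificationActivatedEventArgs toastArgs = e as ToastNotificationActivatedEventArgs;
    if (toastArgs != null)
    {
        // Contains the "launch" string of the toast, or the "arguments"
        // of the <action> the user invoked.
        string arguments = toastArgs.Argument;
    }
}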

Schemas: <visual> and <audio>


In the following XML schemas, a "?" suffix means that an attribute is optional.

<toast launch? duration? activationType? scenario? >
<visual lang? baseUri? addImageQuery? >
<binding template? lang? baseUri? addImageQuery? >
<text lang? hint-maxLines? >content</text>
<image src placement? alt? addImageQuery? hint-crop? />
<group>
<subgroup hint-weight? hint-textStacking? >
<text />
<image />
</subgroup>
</group>
</binding>
</visual>
<audio src? loop? silent? />
</toast>
ToastContent content = new ToastContent()
{
    Launch = ?,
    Duration = ?,
    ActivationType = ?,
    Scenario = ?,

    Visual = new ToastVisual()
    {
        Language = ?,
        BaseUri = ?,
        AddImageQuery = ?,
        BindingGeneric = new ToastBindingGeneric()
        {
            Children =
            {
                new AdaptiveText()
                {
                    Text = ?,
                    Language = ?,
                    HintMaxLines = ?
                },

                new AdaptiveGroup()
                {
                    Children =
                    {
                        new AdaptiveSubgroup()
                        {
                            HintWeight = ?,
                            HintTextStacking = ?,
                            Children =
                            {
                                new AdaptiveText(),
                                new AdaptiveImage()
                            }
                        }
                    }
                },

                new AdaptiveImage()
                {
                    Source = ?,
                    AddImageQuery = ?,
                    AlternateText = ?,
                    HintCrop = ?
                }
            }
        }
    },

    Audio = new ToastAudio()
    {
        Src = ?,
        Loop = ?,
        Silent = ?
    }
};

Attributes in <toast>
launch?
launch? = string
This is an optional attribute.
A string that is passed to the application when it is activated by the toast.
Depending on the value of activationType, this value can be received by the app in the foreground, inside the
background task, or by another app that's protocol launched from the original app.
The format and contents of this string are defined by the app for its own use.
When the user taps or clicks the toast to launch its associated app, the launch string provides the context to the
app that allows it to show the user a view relevant to the toast content, rather than launching in its default way.
If the activation happened because the user clicked an action, rather than the body of the toast, the app
retrieves the "arguments" pre-defined in that <action> tag, instead of the "launch" string pre-defined in the
<toast> tag.
duration?
duration? = "short|long"
This is an optional attribute. Default value is "short".
This is only here for specific scenarios and appCompat. You don't need this for the alarm scenario anymore.
We don't recommend using this property.
activationType?
activationType? = "foreground | background | protocol | system"
This is an optional attribute.
The default value is "foreground".
scenario?
scenario? = "default | alarm | reminder | incomingCall"
This is an optional attribute, default value is "default".
You do not need this unless your scenario is to pop an alarm, reminder, or incoming call.
Do not use this just for keeping your notification persistent on screen.
Attributes in <visual>
lang?
See this element schema article for details on this optional attribute.
baseUri?
See this element schema article for details on this optional attribute.
addImageQuery?
See this element schema article for details on this optional attribute.
Attributes in <binding>
template?
[Important] template? = "ToastGeneric"
If you are using any of the new adaptive and interactive notification features, please make sure you start using
"ToastGeneric" template instead of the legacy template.
Using the legacy templates with the new actions might work now, but that is not the intended use case, and we
cannot guarantee that will continue working.
lang?
See this element schema article for details on this optional attribute.
baseUri?
See this element schema article for details on this optional attribute.
addImageQuery?
See this element schema article for details on this optional attribute.
Attributes in <text>
lang?
See this element schema article for details on this optional attribute.
Attributes in <image>
src
See this element schema article for details on this required attribute.
placement?
placement? = "inline" | "appLogoOverride"
This attribute is optional.
This specifies where this image will be displayed.
"inline" means inside the toast body, below the text; "appLogoOverride" means replace the application icon (that
shows up on the top left corner of the toast).
You can have up to one image for each placement value.
alt?
See this element schema article for details on this optional attribute.
addImageQuery?
See this element schema article for details on this optional attribute.
hint-crop?
hint-crop? = "none" | "circle"
This attribute is optional.
"none" is the default value which means no cropping.
"circle" crops the image to a circular shape. Use this for profile images of a contact, images of a person, and so
on.
Attributes in <audio>
src?
See this element schema article for details on this optional attribute.
loop?
See this element schema article for details on this optional attribute.
silent?
See this element schema article for details on this optional attribute.

Schemas: <action>
In the following XML schemas, a "?" suffix means that an attribute is optional.

<toast>
    <visual>
    </visual>
    <audio />
    <actions>
        <input id type title? placeHolderContent? defaultInput? >
            <selection id content />
        </input>
        <action content arguments activationType? imageUri? hint-inputId />
    </actions>
</toast>

ToastContent content = new ToastContent()
{
    Visual = ...,

    Actions = new ToastActionsCustom()
    {
        Inputs =
        {
            new ToastSelectionBox("id")
            {
                Title = ?,
                DefaultSelectionBoxItemId = ?,
                Items =
                {
                    new ToastSelectionBoxItem("id", "content")
                }
            },

            new ToastTextBox("id")
            {
                Title = ?,
                PlaceholderContent = ?,
                DefaultInput = ?
            }
        },

        Buttons =
        {
            new ToastButton("content", "args")
            {
                ActivationType = ?,
                ImageUri = ?,
                TextBoxId = ?
            },

            new ToastButtonSnooze("content")
            {
                SelectionBoxId = "snoozeTime"
            },

            new ToastButtonDismiss("content")
        }
    }
};

Attributes in <input>
id
id = string
This attribute is required.
The id attribute is required and is used by developers to retrieve user inputs once the app is activated (in the
foreground or background).
type
type = "text | selection"
This attribute is required.
It is used to specify a text input or input from a list of pre-defined selections.
On mobile and desktop, this is to specify whether you want a textbox input or a listbox input.
title?
title? = string
The title attribute is optional and is for developers to specify a title for the input for shells to render when there
is affordance.
For mobile and desktop, this title will be displayed above the input.
placeHolderContent?
placeHolderContent? = string
The placeHolderContent attribute is optional and is the grey-out hint text for text input type. This attribute is
ignored when the input type is not "text".
defaultInput?
defaultInput? = string
The defaultInput attribute is optional and is used to provide a default input value.
If the input type is "text", this will be treated as a string input.
If the input type is "selection", this is expected to be the id of one of the available selections inside this input's
elements.
Attributes in <selection>
id
This attribute is required. It's used to identify user selections. The id is returned to your app.
content
This attribute is required. It provides the string to display for this selection element.
Attributes in <action>
content
content = string
The content attribute is required. It provides the text string displayed on the button.
arguments
arguments = string
The arguments attribute is required. It describes the app-defined data that the app can later retrieve once it is
activated by the user taking this action.
activationType?
activationType? = "foreground | background | protocol | system"
The activationType attribute is optional and its default value is "foreground".
It describes the kind of activation this action will cause: foreground, background, or launching another app via
protocol launch, or invoking a system action.
imageUri?
imageUri? = string
imageUri is optional and is used to provide an image icon for this action to display inside the button along with
the text content.
hint-inputId
hint-inputId = string
The hint-inputId attribute is required. It's specifically used for the quick reply scenario.
The value needs to be the id of the input element desired to be associated with.
In mobile and desktop, this will put the button right next to the input box.

Attributes for system-handled actions


The system can handle actions for snoozing and dismissing notifications if you don't want your app to handle the
snoozing/rescheduling of notifications as a background task. System-handled actions can be combined (or
individually specified), but we don't recommend implementing a snooze action without a dismiss action.
System commands combo: SnoozeAndDismiss

<toast>
    <visual>
    </visual>
    <actions hint-systemCommands="SnoozeAndDismiss" />
</toast>

ToastContent content = new ToastContent()
{
    Visual = ...,

    Actions = new ToastActionsSnoozeAndDismiss()
};

Individual system-handled actions

<toast>
    <visual>
    </visual>
    <actions>
        <input id="snoozeTime" type="selection" defaultInput="10">
            <selection id="5" content="5 minutes" />
            <selection id="10" content="10 minutes" />
            <selection id="20" content="20 minutes" />
            <selection id="30" content="30 minutes" />
            <selection id="60" content="1 hour" />
        </input>
        <action activationType="system" arguments="snooze" hint-inputId="snoozeTime" content=""/>
        <action activationType="system" arguments="dismiss" content=""/>
    </actions>
</toast>
ToastContent content = new ToastContent()
{
    Visual = ...,

    Actions = new ToastActionsCustom()
    {
        Inputs =
        {
            new ToastSelectionBox("snoozeTime")
            {
                DefaultSelectionBoxItemId = "10",
                Items =
                {
                    new ToastSelectionBoxItem("5", "5 minutes"),
                    new ToastSelectionBoxItem("10", "10 minutes"),
                    new ToastSelectionBoxItem("20", "20 minutes"),
                    new ToastSelectionBoxItem("30", "30 minutes"),
                    new ToastSelectionBoxItem("60", "1 hour")
                }
            }
        },

        Buttons =
        {
            new ToastButtonSnooze()
            {
                SelectionBoxId = "snoozeTime"
            },

            new ToastButtonDismiss()
        }
    }
};

To construct individual snooze and dismiss actions, do the following:

Specify activationType="system".
Specify arguments="snooze" | "dismiss".
Specify content:
    If you want localized strings of "snooze" and "dismiss" to be displayed on the actions, specify content to be an empty string: <action content="" />
    If you want a customized string, just provide its value: <action content="Remind me later" />
Specify input:
    If you don't want the user to select a snooze interval and instead just want your notification to snooze only once for a system-defined time interval (that is consistent across the OS), then don't construct any <input> at all.
    If you want to provide snooze interval selections:
        Specify hint-inputId in the snooze action.
        Match the id of the input with the hint-inputId of the snooze action: <input id="snoozeTime"></input><action hint-inputId="snoozeTime"/>
        Specify each selection id to be a nonNegativeInteger which represents the snooze interval in minutes: <selection id="240" /> means snoozing for 4 hours.
        Make sure that the value of defaultInput in <input> matches one of the ids of the <selection> child elements.
        Provide up to (but no more than) 5 <selection> values.

Related topics
Quickstart: Send a local toast and handle activation
Notifications library on GitHub
Badge notifications for UWP apps

A notification badge conveys summary or status information specific to your app. Badges can be numeric (1-99)
or one of a set of system-provided glyphs. Examples of information best conveyed through a badge include
network connection status in an online game, user status in a messaging app, the number of unread mails in a
mail app, and the number of new posts in a social media app. (The image accompanying this article shows a tile
with a numeric badge displaying the number 63 to indicate 63 unread mails.)
Notification badges appear on your app's taskbar icon and in the lower-right corner of its start tile, regardless of
whether the app is running. Badges can be displayed on all tile sizes.

NOTE
You cannot provide your own badge image; only system-provided badge images can be used.

Numeric badges

VALUE: A number from 1 to 99. A value of 0 is equivalent to the glyph value "none" and will clear the badge.
XML: <badge value="1"/>

VALUE: Any number greater than 99.
XML: <badge value="100"/>

Glyph badges
Instead of a number, a badge can display one of a non-extensible set of status glyphs.

STATUS XML

none (No badge shown.) <badge value="none"/>

activity <badge value="activity"/>

alarm <badge value="alarm"/>

alert <badge value="alert"/>


attention <badge value="attention"/>

available <badge value="available"/>

away <badge value="away"/>

busy <badge value="busy"/>

error <badge value="error"/>

newMessage <badge value="newMessage"/>

paused <badge value="paused"/>

playing <badge value="playing"/>

unavailable <badge value="unavailable"/>

Create a badge
These examples show you how to create a badge update.
Create a numeric badge

private void setBadgeNumber(int num)
{
    // Get the blank badge XML payload for a badge number
    XmlDocument badgeXml =
        BadgeUpdateManager.GetTemplateContent(BadgeTemplateType.BadgeNumber);

    // Set the value of the badge in the XML to our number
    XmlElement badgeElement = badgeXml.SelectSingleNode("/badge") as XmlElement;
    badgeElement.SetAttribute("value", num.ToString());

    // Create the badge notification
    BadgeNotification badge = new BadgeNotification(badgeXml);

    // Create the badge updater for the application
    BadgeUpdater badgeUpdater =
        BadgeUpdateManager.CreateBadgeUpdaterForApplication();

    // And update the badge
    badgeUpdater.Update(badge);
}

Create a glyph badge

private void updateBadgeGlyph()
{
    string badgeGlyphValue = "alert";

    // Get the blank badge XML payload for a badge glyph
    XmlDocument badgeXml =
        BadgeUpdateManager.GetTemplateContent(BadgeTemplateType.BadgeGlyph);

    // Set the value of the badge in the XML to our glyph value
    Windows.Data.Xml.Dom.XmlElement badgeElement =
        badgeXml.SelectSingleNode("/badge") as Windows.Data.Xml.Dom.XmlElement;
    badgeElement.SetAttribute("value", badgeGlyphValue);

    // Create the badge notification
    BadgeNotification badge = new BadgeNotification(badgeXml);

    // Create the badge updater for the application
    BadgeUpdater badgeUpdater =
        BadgeUpdateManager.CreateBadgeUpdaterForApplication();

    // And update the badge
    badgeUpdater.Update(badge);
}

Clear a badge

private void clearBadge()
{
    BadgeUpdateManager.CreateBadgeUpdaterForApplication().Clear();
}

Get the sample code


Notifications sample
Shows how to create live tiles, send badge updates, and display toast notifications.
Related articles
Adaptive and interactive toast notifications
Create tiles
Create adaptive tiles
Notifications Visualizer

Notifications Visualizer is a new Universal Windows Platform (UWP) app in the Store that helps developers design
adaptive live tiles for Windows 10.

Overview
The Notifications Visualizer app provides instant visual previews of your tile as you edit, similar to Visual Studio's
XAML editor/design view. The app also checks for errors, which ensures that you create a valid tile payload.
This screenshot from the app shows the XML payload and how tile sizes appear on a selected device:

With Notifications Visualizer, you can create and test adaptive tile payloads without having to edit and deploy the
app itself. Once you've created a payload with ideal visual results you can integrate that into your app. See Send a
local tile notification to learn more.
Note Notifications Visualizer's simulation of the Windows Start menu isn't always completely accurate, and it
doesn't support some payload properties like baseUri. When you have the tile design you want, test it by pinning
the tile to the actual Start menu to verify that it appears as you intend.

Features
Notifications Visualizer comes with a number of sample payloads to showcase what's possible with adaptive live
tiles and to help you get started. You can experiment with all the different text options, groups/subgroups,
background images, and you can see how the tile adapts to different devices and screens. Once you've made
changes, you can save your updated payload to a file for future use.
The editor provides real-time errors and warnings. For example, tile payloads are limited to less than 5 KB (a
platform limitation), and Notifications Visualizer warns you if your payload exceeds that limit. It also warns you
about incorrect attribute names or values, which helps you debug visual issues.
You can control tile properties such as the display name, color, logos, ShowName, and badge value. These options
help you instantly understand how your tile properties and tile notification payloads interact, and the results they produce.
This screenshot from the app shows the tile editor:

Related topics
Get Notifications Visualizer in the Store
Create adaptive tiles
Adaptive tile templates: schema and documentation
Tiles and toasts (MSDN blog)
Choose a notification delivery method

This article covers the four notification options (local, scheduled, periodic, and push) that deliver tile and badge
updates and toast notification content. A tile or a toast notification can get information to your user even when the
user is not directly engaged with your app. The nature and content of your app and the information that you want
to deliver can help you determine which notification method or methods are best for your scenario.

Notification delivery methods overview


There are four mechanisms that an app can use to deliver a notification:
Local
Scheduled
Periodic
Push
This table summarizes the notification delivery types.

DELIVERY METHOD: Local
USE WITH: Tile, Badge, Toast
DESCRIPTION: A set of API calls that send notifications while your app is running, directly updating the tile or badge, or sending a toast notification.
EXAMPLES:
A music app updates its tile to show what's "Now Playing".
A game app updates its tile with the user's high score when the user leaves the game.
A badge whose glyph indicates that there's new info in the app is cleared when the app is activated.

DELIVERY METHOD: Scheduled
USE WITH: Tile, Toast
DESCRIPTION: A set of API calls that schedule a notification in advance, to update at the time you specify.
EXAMPLES:
A calendar app sets a toast notification reminder for an upcoming meeting.

DELIVERY METHOD: Periodic
USE WITH: Tile, Badge
DESCRIPTION: Notifications that update tiles and badges regularly at a fixed time interval by polling a cloud service for new content.
EXAMPLES:
A weather app updates its tile, which shows the forecast, at 30-minute intervals.
A "daily deals" site updates its deal-of-the-day every morning.
A tile that displays the days until an event updates the displayed countdown each day at midnight.

DELIVERY METHOD: Push
USE WITH: Tile, Badge, Toast, Raw
DESCRIPTION: Notifications sent from a cloud server, even if your app isn't running.
EXAMPLES:
A shopping app sends a toast notification to let a user know about a sale on an item that they're watching.
A news app updates its tile with breaking news as it happens.
A sports app keeps its tile up-to-date during an ongoing game.
A communication app provides alerts about incoming messages or phone calls.

Local notifications
Updating the app tile or badge or raising a toast notification while the app is running is the simplest of the
notification delivery mechanisms; it only requires local API calls. Every app can have useful or interesting
information to show on the tile, even if that content only changes after the user launches and interacts with the
app. Local notifications are also a good way to keep the app tile current, even if you also use one of the other
notification mechanisms. For instance, a photo app tile could show photos from a recently added album.
We recommend that your app update its tile locally on first launch, or at least immediately after the user makes
a change that your app would normally reflect on the tile. That update isn't seen until the user leaves the app, but
making the change while the app is in use ensures that the tile is already up-to-date when the user
departs.
While the API calls are local, the notifications can reference web images. If the web image is not available for
download, is corrupted, or doesn't meet the image specifications, tiles and toast respond differently:
Tiles: The update is not shown
Toast: The notification is displayed, but your image is dropped
By default, local toast notifications expire in three days, and local tile notifications never expire. We recommend
overriding these defaults with an explicit expiration time that makes sense for your notifications (toasts have a max
of three days).
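As a minimal sketch of a local toast with an explicit expiration (assuming the Windows.UI.Notifications and Windows.Data.Xml.Dom namespaces; the payload text and expiration value are illustrative):

XmlDocument doc = new XmlDocument();
doc.LoadXml("<toast><visual><binding template='ToastGeneric'>" +
            "<text>Album uploaded</text></binding></visual></toast>");

ToastNotification toast = new ToastNotification(doc)
{
    // Override the three-day default with an explicit expiration.
    ExpirationTime = DateTimeOffset.Now.AddHours(2)
};

ToastNotificationManager.CreateToastNotifier().Show(toast);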
For more information, see these topics:
Send a local tile notification
Send a local toast notification
Universal Windows Platform (UWP) notifications code samples

Scheduled notifications
Scheduled notifications are the subset of local notifications that can specify the precise time when a tile should be
updated or a toast notification should be shown. Scheduled notifications are ideal in situations where the content
to be updated is known in advance, such as a meeting invitation. If you don't have advance knowledge of the
notification content, you should use a push or periodic notification.
Note that scheduled notifications cannot be used for badge notifications; badge notifications are best served by
local, periodic, or push notifications.
By default, scheduled notifications expire three days from the time they are delivered. You can override this default
expiration time on scheduled tile notifications, but you cannot override the expiration time on scheduled toasts.
For more information, see these topics:
Universal Windows Platform (UWP) notifications code samples
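As a minimal sketch of scheduling a toast in advance (assuming the Windows.UI.Notifications and Windows.Data.Xml.Dom namespaces; the payload and delivery time are illustrative):

XmlDocument doc = new XmlDocument();
doc.LoadXml("<toast><visual><binding template='ToastGeneric'>" +
            "<text>Meeting starts in 15 minutes</text></binding></visual></toast>");

// Deliver the toast at a precise, known-in-advance time.
var scheduledToast = new ScheduledToastNotification(
    doc, DateTimeOffset.Now.AddMinutes(45));

ToastNotificationManager.CreateToastNotifier().AddToSchedule(scheduledToast);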

Periodic notifications
Periodic notifications give you live tile updates with a minimal cloud service and client investment. They are also
an excellent method of distributing the same content to a wide audience. Your client code specifies the URL of a
cloud location that Windows polls for tile or badge updates, and how often the location should be polled. At each
polling interval, Windows contacts the URL to download the specified XML content and display it on the tile.
Periodic notifications require the app to host a cloud service, and this service will be polled at the specified interval
by all users who have the app installed. Note that periodic updates cannot be used for toast notifications; toast
notifications are best served by scheduled or push notifications.
By default, periodic notifications expire three days from the time polling occurs. If needed, you can override this
default with an explicit expiration time.
For more information, see these topics:
Periodic notification overview
Universal Windows Platform (UWP) notifications code samples
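As a minimal sketch of starting periodic tile polling (the URL is a hypothetical placeholder for your own cloud endpoint):

// Windows polls this URI for tile XML roughly every half hour.
TileUpdateManager.CreateTileUpdaterForApplication().StartPeriodicUpdate(
    new Uri("https://example.com/tile-feed.xml"),
    PeriodicUpdateRecurrence.HalfHour);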

Push notifications
Push notifications are ideal to communicate real-time data or data that is personalized for your user. Push
notifications are used for content that is generated at unpredictable times, such as breaking news, social network
updates, or instant messages. Push notifications are also useful in situations where the data is time-sensitive in a
way that would not suit periodic notifications, such as sports scores during a game.
Push notifications require a cloud service that manages push notification channels and chooses when and to
whom to send notifications.
By default, push notifications expire three days from the time they are received by the device. If needed, you can
override this default with an explicit expiration time (toasts have a max of three days).
For more information, see:
Windows Push Notification Services (WNS) overview
Guidelines for push notifications
Universal Windows Platform (UWP) notifications code samples

Related topics
Send a local tile notification
Send a local toast notification
Guidelines for push notifications
Guidelines for toast notifications
Periodic notification overview
Windows Push Notification Services (WNS) overview
Universal Windows Platform (UWP) notifications code samples on GitHub
Send a local tile notification

Primary app tiles in Windows 10 are defined in your app manifest, while secondary tiles are programmatically
created and defined by your app code. This article describes how to send a local tile notification to a primary tile
and a secondary tile using adaptive tile templates. (A local notification is one that's sent from app code as opposed
to one that's pushed or pulled from a web server.)

NOTE
Learn about creating adaptive tiles and adaptive tile template schema.

Install the NuGet package


We recommend installing the Notifications library NuGet package, which simplifies things by generating tile
payloads with objects instead of raw XML.
The inline code examples in this article are for C# using the Notifications library. (If you'd prefer to create your
own XML, you can find code examples without the Notifications library toward the end of the article.)

Add namespace declarations


To access the tile APIs, include the Windows.UI.Notifications namespace. We also recommend including the
Microsoft.Toolkit.Uwp.Notifications namespace so that you can take advantage of the tile helper APIs (you must
install the Notifications library NuGet package to access these APIs).

using Windows.UI.Notifications;
using Microsoft.Toolkit.Uwp.Notifications; // Notifications library

Create the notification content


In Windows 10, tile payloads are defined using adaptive tile templates, which allow you to create custom visual
layouts for your notifications. (To learn what's possible with adaptive tiles, see the Create adaptive tiles and
Adaptive tile templates articles.)
This code example creates adaptive tile content for medium and wide tiles.
// In a real app, these would be initialized with actual data
string from = "Jennifer Parker";
string subject = "Photos from our trip";
string body = "Check out these awesome photos I took while in New Zealand!";

// Construct the tile content
TileContent content = new TileContent()
{
    Visual = new TileVisual()
    {
        TileMedium = new TileBinding()
        {
            Content = new TileBindingContentAdaptive()
            {
                Children =
                {
                    new AdaptiveText()
                    {
                        Text = from
                    },

                    new AdaptiveText()
                    {
                        Text = subject,
                        HintStyle = AdaptiveTextStyle.CaptionSubtle
                    },

                    new AdaptiveText()
                    {
                        Text = body,
                        HintStyle = AdaptiveTextStyle.CaptionSubtle
                    }
                }
            }
        },

        TileWide = new TileBinding()
        {
            Content = new TileBindingContentAdaptive()
            {
                Children =
                {
                    new AdaptiveText()
                    {
                        Text = from,
                        HintStyle = AdaptiveTextStyle.Subtitle
                    },

                    new AdaptiveText()
                    {
                        Text = subject,
                        HintStyle = AdaptiveTextStyle.CaptionSubtle
                    },

                    new AdaptiveText()
                    {
                        Text = body,
                        HintStyle = AdaptiveTextStyle.CaptionSubtle
                    }
                }
            }
        }
    }
};
The notification content looks like the following when displayed on a medium tile:

Create the notification


Once you have your notification content, you'll need to create a new TileNotification. The TileNotification
constructor takes a Windows Runtime XmlDocument object, which you can obtain from the
TileContent.GetXml method if you're using the Notifications library.
This code example creates a notification for a new tile.

// Create the tile notification


var notification = new TileNotification(content.GetXml());

Set an expiration time for the notification (optional)


By default, local tile and badge notifications don't expire, while push, periodic, and scheduled notifications expire
after three days. Because tile content shouldn't persist longer than necessary, it's a best practice to set an
expiration time that makes sense for your app, especially on local tile and badge notifications.
This code example creates a notification that expires and will be removed from the tile after ten minutes.

notification.ExpirationTime = DateTimeOffset.UtcNow.AddMinutes(10);

Send the notification


Although locally sending a tile notification is simple, sending the notification to a primary or secondary tile is a bit
different.
Primary tile
To send a notification to a primary tile, use the TileUpdateManager to create a tile updater for the primary tile,
and send the notification by calling "Update". Regardless of whether it's visible, your app's primary tile always
exists, so you can send notifications to it even when it's not pinned. If the user pins your primary tile later, the
notifications that you sent will appear then.
This code example sends a notification to a primary tile.

// Send the notification to the primary tile


TileUpdateManager.CreateTileUpdaterForApplication().Update(notification);

Secondary tile
To send a notification to a secondary tile, first make sure that the secondary tile exists. If you try to create a tile
updater for a secondary tile that doesn't exist (for example, if the user unpinned the secondary tile), an exception
will be thrown. You can use SecondaryTile.Exists(tileId) to discover if your secondary tile is pinned, and then
create a tile updater for the secondary tile and send the notification.
This code example sends a notification to a secondary tile.
// If the secondary tile is pinned
if (SecondaryTile.Exists("MySecondaryTile"))
{
    // Get its updater
    var updater = TileUpdateManager.CreateTileUpdaterForSecondaryTile("MySecondaryTile");

    // And send the notification
    updater.Update(notification);
}

Clear notifications on the tile (optional)


In most cases, you should clear a notification once the user has interacted with that content. For example, when
the user launches your app, you might want to clear all the notifications from the tile. If your notifications are
time-bound, we recommend that you set an expiration time on the notification instead of explicitly clearing the
notification.
This code example clears the tile notification for the primary tile. You can do the same for secondary tiles by
creating a tile updater for the secondary tile.

TileUpdateManager.CreateTileUpdaterForApplication().Clear();

For a tile with the notification queue enabled and notifications in the queue, calling the Clear method empties the
queue. You can't, however, clear a notification via your app's server; only the local app code can clear notifications.
Periodic or push notifications can only add new notifications or replace existing notifications. A local call to the
Clear method will clear the tile whether or not the notifications themselves came via push, periodic, or local.
Scheduled notifications that haven't yet appeared are not cleared by this method.

Next steps
Using the notification queue
Now that you have done your first tile update, you can expand the functionality of the tile by enabling a
notification queue.
Other notification delivery methods
This article shows you how to send the tile update as a notification. To explore other methods of notification
delivery, including scheduled, periodic, and push, see Delivering notifications.
XmlEncode helper method
If you're not using the Notifications library, you can use this helper method to XML-escape dynamic values before inserting them into your notification payload.

public string XmlEncode(string text)
{
    StringBuilder builder = new StringBuilder();
    using (var writer = XmlWriter.Create(builder))
    {
        writer.WriteString(text);
    }

    return builder.ToString();
}
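For example (hypothetical usage), escape each dynamic value before formatting it into the payload string:

string safeSubject = XmlEncode(subject); // "&" becomes "&amp;", "<" becomes "&lt;", and so on
string xml = $"<text>{safeSubject}</text>";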

Code examples without Notifications library


If you prefer to work with raw XML instead of the Notifications library NuGet package, use these alternate code
examples in place of the first three examples provided in this article. The rest of the code examples can be used
either with the Notifications library or with raw XML.
Add namespace declarations

using Windows.UI.Notifications;
using Windows.Data.Xml.Dom;

Create the notification content

// In a real app, these would be initialized with actual data
string from = "Jennifer Parker";
string subject = "Photos from our trip";
string body = "Check out these awesome photos I took while in New Zealand!";

// TODO - all values need to be XML escaped

// Construct the tile content as a string
string content = $@"
<tile>
    <visual>
        <binding template='TileMedium'>
            <text>{from}</text>
            <text hint-style='captionSubtle'>{subject}</text>
            <text hint-style='captionSubtle'>{body}</text>
        </binding>

        <binding template='TileWide'>
            <text hint-style='subtitle'>{from}</text>
            <text hint-style='captionSubtle'>{subject}</text>
            <text hint-style='captionSubtle'>{body}</text>
        </binding>
    </visual>
</tile>";

Create the notification

// Load the string into an XmlDocument
XmlDocument doc = new XmlDocument();
doc.LoadXml(content);

// Then create the tile notification
var notification = new TileNotification(doc);

Related topics
Create adaptive tiles
Adaptive tile templates: schema and documentation
Notifications library
Full code sample on GitHub
Windows.UI.Notifications namespace
How to use the notification queue (XAML)
Delivering notifications
Periodic notification overview

Periodic notifications, which are also called polled notifications, update tiles and badges at a fixed interval by
downloading content from a cloud service. To use periodic notifications, your client app code needs to provide two
pieces of information:
The Uniform Resource Identifier (URI) of a web location for Windows to poll for tile or badge updates for your
app
How often that URI should be polled
Periodic notifications enable your app to get live tile updates with minimal cloud service and client investment.
Periodic notifications are a good delivery method for distributing the same content to a wide audience.
Note You can learn more by downloading the Push and periodic notifications sample for Windows 8.1 and re-
using its source code in your Windows 10 app.

How it works
Periodic notifications require that your app hosts a cloud service. The service will be polled periodically by all users
who have the app installed. At each polling interval, such as once an hour, Windows sends an HTTP GET request to
the URI, downloads the requested tile or badge content (as XML) that is supplied in response to the request, and
displays the content on the app's tile.
Note that periodic updates cannot be used with toast notifications. Toast is best delivered through scheduled or
push notifications.

URI location and XML content


Any valid HTTP or HTTPS web address can be used as the URI to be polled.
The cloud server's response includes the downloaded content. The content returned from the URI must conform to
the Tile or Badge XML schema specification, and must be UTF-8 encoded. You can use defined HTTP headers to
specify the expiration time or tag for the notification.

Polling Behavior
Call one of these methods to begin polling:
StartPeriodicUpdate (Tile)
StartPeriodicUpdate (Badge)
StartPeriodicUpdateBatch (Tile)
When you call one of these methods, the URI is immediately polled and the tile or badge is updated with the
received contents. After this initial poll, Windows continues to provide updates at the requested interval. Polling
continues until you explicitly stop it (with TileUpdater.StopPeriodicUpdate), your app is uninstalled, or, in the
case of a secondary tile, the tile is removed. Otherwise, Windows continues to poll for updates to your tile or
badge even if your app is never launched again.
The recurrence interval
You specify the recurrence interval as a parameter of the methods listed above. Note that while Windows makes a
best effort to poll as requested, the interval is not precise. The requested poll interval can be delayed by up to 15
minutes at the discretion of Windows.
The start time
You optionally can specify a particular time of day to begin polling. Consider an app that changes its tile content
just once a day. In such a case, we recommend that you poll close to the time that you update your cloud service.
For example, if a daily shopping site publishes the day's offers at 8 AM, poll for new tile content shortly after 8 AM.
If you provide a start time, the first call to the method polls for content immediately. Then, regular polling starts
within 15 minutes of the provided start time.
Automatic retry behavior
The URI is polled only if the device is online. If the network is available but the URI cannot be contacted for any
reason, this iteration of the polling interval is skipped, and the URI will be polled again at the next interval. If the
device is in an off, sleep, or hibernated state when a polling interval is reached, the URI is polled when the device
returns from its off or sleep state.
Handling app updates
If you release an app update that changes your polling URI, you should add a daily time trigger background task
which calls StartPeriodicUpdate with the new URI to ensure your tiles are using the new URI. Otherwise, if users
receive your app update but don't launch your app, their tiles will still be using the old URI, which may fail to
display if the URI is now invalid or if the returned payload references local images that no longer exist.

Expiration of tile and badge notifications


By default, periodic tile and badge notifications expire three days from the time they are downloaded. When a
notification expires, the content is removed from the badge, tile, or queue and is no longer shown to the user. It is
a best practice to set an explicit expiration time on all periodic tile and badge notifications, using a time that makes
sense for your app or notification, to ensure that the content does not persist longer than it is relevant. An explicit
expiration time is essential for content with a defined life span. It also assures the removal of stale content if your
cloud service becomes unreachable, or if the user disconnects from the network for an extended period of time.
Your cloud service sets an expiration date and time for a notification by including the X-WNS-Expires HTTP header
in the response payload. The X-WNS-Expires HTTP header conforms to the HTTP-date format. For more
information, see StartPeriodicUpdate or StartPeriodicUpdateBatch.
For example, during a stock market's active trading day, you can set the expiration for a stock price update to twice
that of your polling interval (such as one hour after receipt if you are polling every half-hour). As another example,
a news app might determine that one day is an appropriate expiration time for a daily news tile update.
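As a sketch mirroring the HTTP examples elsewhere in this documentation (the date and payload are illustrative), the cloud service's response to a poll might carry the expiration alongside the tile XML:

HTTP/1.1 200 OK
Content-Type: text/xml
X-WNS-Expires: Thu, 09 Mar 2017 16:00:00 GMT

<tile> ... </tile>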

Periodic notifications in the notification queue


You can use periodic tile updates with notification cycling. By default, a tile on the Start screen shows the content
of a single notification until it is replaced by a new notification. When you enable cycling, up to five notifications
are maintained in a queue and the tile cycles through them.
If the queue has reached its capacity of five notifications, the next new notification replaces the oldest notification
in the queue. However, by setting tags on your notifications, you can affect the queue's replacement policy. A tag is
an app-specific, case-insensitive string of up to 16 alphanumeric characters, specified in the X-WNS-Tag HTTP
header in the response payload. Windows compares the tag of an incoming notification with the tags of all
notifications already in the queue. If a match is found, the new notification replaces the queued notification with
the same tag. If no match is found, the default replacement rule is applied and the new notification replaces the
oldest notification in the queue.
You can use notification queuing and tagging to implement a variety of rich notification scenarios. For example, a
stock app could send five notifications, each about a different stock and each tagged with a stock name. This
prevents the queue from ever containing two notifications for the same stock, the older of which is out of date.
For more information, see Using the notification queue.
Enabling the notification queue
To implement a notification queue, first enable the queue for your tile (see How to use the notification queue with
local notifications). The call to enable the queue needs to be done only once in your app's lifetime, but there is no
harm in calling it each time your app is launched.
Polling for more than one notification at a time
You must provide a unique URI for each notification that you'd like Windows to download for your tile. By using
the StartPeriodicUpdateBatch method, you can provide up to five URIs at once for use with the notification
queue. Each URI is polled for a single notification payload, at or near the same time. Each polled URI can return its
own expiration and tag value.
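As a minimal sketch of enabling the queue and polling a batch (the URIs are hypothetical placeholders for your own cloud endpoints):

var updater = TileUpdateManager.CreateTileUpdaterForApplication();

// Enable cycling through up to five queued notifications.
updater.EnableNotificationQueue(true);

// Poll up to five URIs, one notification payload each.
updater.StartPeriodicUpdateBatch(
    new List<Uri>
    {
        new Uri("https://example.com/stock1.xml"),
        new Uri("https://example.com/stock2.xml"),
        new Uri("https://example.com/stock3.xml")
    },
    PeriodicUpdateRecurrence.HalfHour);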

Related topics
Guidelines for periodic notifications
How to set up periodic notifications for badges
How to set up periodic notifications for tiles
Windows Push Notification Services (WNS) overview

The Windows Push Notification Services (WNS) enables third-party developers to send toast, tile, badge, and raw
updates from their own cloud service. This provides a mechanism to deliver new updates to your users in a
power-efficient and dependable way.

How it works
The following diagram shows the complete data flow for sending a push notification. It involves these steps:
1. Your app requests a push notification channel from the Universal Windows Platform.
2. Windows asks WNS to create a notification channel. This channel is returned to the calling device in the form
of a Uniform Resource Identifier (URI).
3. The notification channel URI is returned by Windows to your app.
4. Your app sends the URI to your own cloud service. You then store the URI on your own cloud service so that
you can access the URI when you send notifications. The URI is an interface between your own app and your
own service; it's your responsibility to implement this interface with safe and secure web standards.
5. When your cloud service has an update to send, it notifies WNS using the channel URI. This is done by issuing
an HTTP POST request, including the notification payload, over Secure Sockets Layer (SSL). This step requires
authentication.
6. WNS receives the request and routes the notification to the appropriate device.

Registering your app and receiving the credentials for your cloud
service
Before you can send notifications using WNS, your app must be registered with the Store Dashboard. This will
provide you with credentials for your app that your cloud service will use in authenticating with WNS. These
credentials consist of a Package Security Identifier (SID) and a secret key. To perform this registration, go to the
Windows Dev Center and select Dashboard.
Each app has its own set of credentials for its cloud service. These credentials cannot be used to send notifications
to any other app.
For more details on how to register your app, please see How to authenticate with the Windows Notification
Service (WNS).
Requesting a notification channel
When an app that is capable of receiving push notifications runs, it must first request a notification channel
through the CreatePushNotificationChannelForApplicationAsync. For a full discussion and example code,
see How to request, create, and save a notification channel. This API returns a channel URI that is uniquely linked
to the calling application and its tile, and through which all notification types can be sent.
After the app has successfully created a channel URI, it sends it to its cloud service, together with any app-specific
metadata that should be associated with this URI.
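As a minimal sketch of the channel request (error handling omitted; the resulting URI is what you transmit to your own service):

var channel = await Windows.Networking.PushNotifications.PushNotificationChannelManager
    .CreatePushNotificationChannelForApplicationAsync();

// channel.Uri is the opaque string your cloud service stores and posts to;
// channel.ExpirationTime indicates when the channel lapses.
string channelUri = channel.Uri;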
Important notes
We do not guarantee that the notification channel URI for an app will always remain the same. We advise that
the app requests a new channel every time it runs and updates its service when the URI changes. The
developer should never modify the channel URI and should consider it as a black-box string. At this time,
channel URIs expire after 30 days. If your Windows 10 app will periodically renew its channel in the
background then you can download the Push and periodic notifications sample for Windows 8.1 and re-use its
source code and/or the pattern it demonstrates.
The interface between the cloud service and the client app is implemented by you, the developer. We
recommend that the app go through an authentication process with its own service and transmit data over a
secure protocol such as HTTPS.
It is important that the cloud service always ensures that the channel URI uses the domain
"notify.windows.com". The service should never push notifications to a channel on any other domain. If the
callback for your app is ever compromised, a malicious attacker could submit a channel URI to spoof WNS.
Without inspecting the domain, your cloud service could potentially disclose information to this attacker
unknowingly.
If your cloud service attempts to deliver a notification to an expired channel, WNS will return response code
410. In response to that code, your service should no longer attempt to send notifications to that URI.

Authenticating your cloud service


To send a notification, the cloud service must be authenticated through WNS. The first step in this process occurs
when you register your app with the Windows Store Dashboard. During the registration process, your app is
given a Package security identifier (SID) and a secret key. This information is used by your cloud service to
authenticate with WNS.
The WNS authentication scheme is implemented using the client credentials profile from the OAuth 2.0 protocol.
The cloud service authenticates with WNS by providing its credentials (Package SID and secret key). In return, it
receives an access token. This access token allows a cloud service to send a notification. The token is required with
every notification request sent to the WNS.
At a high level, the information chain is as follows:
1. The cloud service sends its credentials to WNS over HTTPS following the OAuth 2.0 protocol. This
authenticates the service with WNS.
2. WNS returns an access token if the authentication was successful. This access token is used in all subsequent
notification requests until it expires.

In the authentication with WNS, the cloud service submits an HTTP request over Secure Sockets Layer (SSL). The
parameters are supplied in the "application/x-www-form-urlencoded" format. Supply your Package SID in the
"client_id" field and your secret key in the "client_secret" field. For syntax details, see the access token request
reference.
Note This is just an example, not cut-and-paste code that you can successfully use in your own code.

POST /accesstoken.srf HTTP/1.1
Content-Type: application/x-www-form-urlencoded
Host: login.live.com
Content-Length: 211

grant_type=client_credentials&client_id=ms-app%3a%2f%2fS-1-15-2-2972962901-2322836549-3722629029-1345238579-3987825745-2155616079-
650196962&client_secret=Vex8L9WOFZuj95euaLrvSH7XyoDhLJc7&scope=notify.windows.com

The WNS authenticates the cloud service and, if successful, sends a response of "200 OK". The access token is
returned in the parameters included in the body of the HTTP response, using the "application/json" media type.
After your service has received the access token, you are ready to send notifications.
The following example shows a successful authentication response, including the access token. For syntax details,
see Push notification service request and response headers.

HTTP/1.1 200 OK
Cache-Control: no-store
Content-Length: 422
Content-Type: application/json

{
"access_token":"EgAcAQMAAAAALYAAY/c+Huwi3Fv4Ck10UrKNmtxRO6Njk2MgA=",
"token_type":"bearer"
}

Important notes
The OAuth 2.0 protocol supported in this procedure follows draft version V16.
The OAuth Request for Comments (RFC) uses the term "client" to refer to the cloud service.
There might be changes to this procedure when the OAuth draft is finalized.
The access token can be reused for multiple notification requests. This allows the cloud service to authenticate
just once to send many notifications. However, when the access token expires, the cloud service must
authenticate again to receive a new access token.
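As a server-side sketch of the same token request using HttpClient (the SID and secret values are placeholders for the credentials from your Store registration):

using System.Collections.Generic;
using System.Net.Http;

var http = new HttpClient();
var form = new FormUrlEncodedContent(new Dictionary<string, string>
{
    ["grant_type"] = "client_credentials",
    ["client_id"] = "ms-app://<your package SID>",
    ["client_secret"] = "<your secret key>",
    ["scope"] = "notify.windows.com"
});

// A 200 OK response body (application/json) contains the access_token.
HttpResponseMessage response =
    await http.PostAsync("https://login.live.com/accesstoken.srf", form);
string json = await response.Content.ReadAsStringAsync();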

Sending a notification
Using the channel URI, the cloud service can send a notification whenever it has an update for the user.
The access token described above can be reused for multiple notification requests; the cloud server is not required
to request a new access token for every notification. If the access token has expired, the notification request will
return an error. We recommend that you do not try to re-send your notification more than once if the access
token is rejected. If you encounter this error, you will need to request a new access token and resend the
notification. For the exact error code, see Push notification response codes.
1. The cloud service makes an HTTP POST to the channel URI. This request must be made over SSL and
contains the necessary headers and the notification payload. The authorization header must include the
acquired access token for authorization.
An example request is shown here. For syntax details, see Push notification response codes.
For details on composing the notification payload, see Quickstart: Sending a push notification. The payload
of a tile, toast, or badge push notification is supplied as XML content that adheres to their respective
defined Adaptive tiles schema or Legacy tiles schema. The payload of a raw notification does not have a
specified structure. It is strictly app-defined.

POST https://cloud.notify.windows.com/?token=AQE%bU%2fSjZOCvRjjpILow%3d%3d HTTP/1.1


Content-Type: text/xml
X-WNS-Type: wns/tile
Authorization: Bearer EgAcAQMAAAAALYAAY/c+Huwi3Fv4Ck10UrKNmtxRO6Njk2MgA=
Host: cloud.notify.windows.com
Content-Length: 24

<body>
....

2. WNS responds to indicate that the notification has been received and will be delivered at the next available
opportunity. However, WNS does not provide end-to-end confirmation that your notification has been
received by the device or application.
This diagram illustrates the data flow:

Important notes
WNS does not guarantee the reliability or latency of a notification.
Notifications should never include confidential or sensitive data.
To send a notification, the cloud service must first authenticate with WNS and receive an access token.
An access token only allows a cloud service to send notifications to the single app for which the token was
created. One access token cannot be used to send notifications across multiple apps. Therefore, if your cloud
service supports multiple apps, it must provide the correct access token for the app when pushing a
notification to each channel URI.
When the device is offline, by default WNS will store up to five tile notifications (if queuing is enabled;
otherwise, one tile notification) and one badge notification for each channel URI, and no raw notifications. This
default caching behavior can be changed through the X-WNS-Cache-Policy header. Note that toast
notifications are never stored when the device is offline.
In scenarios where the notification content is personalized to the user, WNS recommends that the cloud
service immediately send those updates when those are received. Examples of this scenario include social
media feed updates, instant communication invitations, new message notifications, or alerts. As an alternative,
you can have scenarios in which the same generic update is frequently delivered to a large subset of your
users; for example, weather, stock, and news updates. WNS guidelines specify that the frequency of these
updates should be at most one every 30 minutes. The end user or WNS may determine more frequent routine
updates to be abusive.

Expiration of tile and badge notifications


By default, tile and badge notifications expire three days after being downloaded. When a notification expires, the
content is removed from the tile or queue and is no longer shown to the user. It's a best practice to set an
expiration (using a time that makes sense for your app) on all tile and badge notifications so that your tile's
content doesn't persist longer than it is relevant. An explicit expiration time is essential for content with a defined
lifespan. This also assures the removal of stale content if your cloud service stops sending notifications, or if the
user disconnects from the network for an extended period.
Your cloud service can set an expiration for each notification by setting the X-WNS-Expires HTTP header to specify
the time (in seconds) that your notification will remain valid after it is sent. For more information, see Push
notification service request and response headers.
For example, during a stock market's active trading day, you can set the expiration for a stock price update to
twice that of your sending interval (such as one hour after receipt if you are sending notifications every half-hour).
As another example, a news app might determine that one day is an appropriate expiration time for a daily news
tile update.

Push notifications and battery saver


Battery saver extends battery life by limiting background activity on the device. Windows 10 lets the user set
battery saver to turn on automatically when the battery drops below a specified threshold. When battery saver is
on, the receipt of push notifications is disabled to save energy. But there are a couple exceptions to this. The
following Windows 10 battery saver settings (found in the Settings app) allow your app to receive push
notifications even when battery saver is on.
Allow push notifications from any app while in battery saver: This setting lets all apps receive push
notifications while battery saver is on. Note that this setting applies only to Windows 10 for desktop editions
(Home, Pro, Enterprise, and Education).
Always allowed: This setting lets specific apps run in the background while battery saver is on - including
receiving push notifications. This list is maintained manually by the user.
There is no way to check the state of these two settings, but you can check the state of battery saver. In Windows
10, use the EnergySaverStatus property to check battery saver state. Your app can also use the
EnergySaverStatusChanged event to listen for changes to battery saver.
If your app depends heavily on push notifications, we recommend notifying users that they may not receive
notifications while battery saver is on and to make it easy for them to adjust battery saver settings. Using the
battery saver settings URI scheme in Windows 10, ms-settings:batterysaver-settings , you can provide a convenient link
to the Settings app.
Tip When notifying the user about battery saver settings, we recommend providing a way to suppress the
message in the future. For example, the dontAskMeAgainBox checkbox in the following example persists the user's
preference in LocalSettings.
Here's an example of how to check if battery saver is turned on in Windows 10. This example notifies the user and
launches the Settings app to battery saver settings. The dontAskAgainSetting lets the user suppress the message if
they don't want to be notified again.
using System;
using Windows.UI.Xaml;
using Windows.UI.Xaml.Controls;
using Windows.UI.Xaml.Navigation;
using Windows.System;
using Windows.System.Power;
...

async public void CheckForEnergySaving()
{
    // Get reminder preference from LocalSettings
    bool dontAskAgain;
    var localSettings = Windows.Storage.ApplicationData.Current.LocalSettings;
    object dontAskSetting = localSettings.Values["dontAskAgainSetting"];
    if (dontAskSetting == null)
    { // Setting does not exist
        dontAskAgain = false;
    }
    else
    { // Retrieve setting value
        dontAskAgain = Convert.ToBoolean(dontAskSetting);
    }

    // Check if battery saver is on and that it's okay to raise dialog
    if ((PowerManager.EnergySaverStatus == EnergySaverStatus.On)
        && (dontAskAgain == false))
    {
        // Check dialog results
        ContentDialogResult dialogResult = await saveEnergyDialog.ShowAsync();
        if (dialogResult == ContentDialogResult.Primary)
        {
            // Launch battery saver settings (settings are available only when a battery is present)
            await Launcher.LaunchUriAsync(new Uri("ms-settings:batterysaver-settings"));
        }

        // Save reminder preference
        if (dontAskAgainBox.IsChecked == true)
        { // Don't raise dialog again
            localSettings.Values["dontAskAgainSetting"] = "true";
        }
    }
}

This is the XAML for the ContentDialog featured in this example.

<ContentDialog x:Name="saveEnergyDialog"
PrimaryButtonText="Open battery saver settings"
SecondaryButtonText="Ignore"
Title="Battery saver is on.">
<StackPanel>
<TextBlock TextWrapping="WrapWholeWords">
<LineBreak/><Run>Battery saver is on and you may
not receive push notifications.</Run><LineBreak/>
<LineBreak/><Run>You can choose to allow this app to work normally
while in battery saver, including receiving push notifications.</Run>
<LineBreak/>
</TextBlock>
<CheckBox x:Name="dontAskAgainBox" Content="OK, got it."/>
</StackPanel>
</ContentDialog>
NOTE
This article is for Windows 10 developers writing Universal Windows Platform (UWP) apps. If you're developing for Windows
8.x or Windows Phone 8.x, see the archived documentation.

Related topics
Send a local tile notification
Quickstart: Sending a push notification
How to update a badge through push notifications
How to request, create, and save a notification channel
How to intercept notifications for running applications
How to authenticate with the Windows Push Notification Service (WNS)
Push notification service request and response headers
Guidelines and checklist for push notifications
Raw notifications
Code generated by the push notification wizard

By using a wizard in Visual Studio, you can generate push notifications from a mobile service that was created with
Azure Mobile Services. The Visual Studio wizard generates code to help you get started. This topic explains how the
wizard modifies your project, what the generated code does, how to use this code, and what you can do next to get
the most out of push notifications. See Windows Push Notification Services (WNS) overview.

How the wizard modifies your project


The push notification wizard modifies your project in the following ways:
Adds a reference to Mobile Services Managed Client (MobileServicesManagedClient.dll). Not applicable to
JavaScript projects.
Adds a file in a subfolder under services, and names the file push.register.cs, push.register.vb, push.register.cpp,
or push.register.js.
Creates a channels table on the database server for the mobile service. The table contains information that's
required to send push notifications to app instances.
Creates scripts for four functions: delete, insert, read, and update.
Creates a script with a custom API, notifyallusers.js, which sends a push notification to all clients.
Adds a declaration to your App.xaml.cs, App.xaml.vb, or App.xaml.cpp file, or adds a declaration to a new file,
services.js, for JavaScript projects. The declaration declares a MobileServiceClient object, which contains the
information that's required to connect to the mobile service. You can access this MobileServiceClient object,
which is named MyServiceNameClient, from any page in your app by using the name
App.MyServiceNameClient.
The services.js file contains the following code:

var <mobile-service-name>Client = new Microsoft.WindowsAzure.MobileServices.MobileServiceClient(
    "https://<mobile-service-name>.azure-mobile.net/",
    "<your client secret>");

Registration for push notifications


In push.register.*, the UploadChannel method registers the device to receive push notifications. The Store tracks
installed instances of your app and provides the push notification channel. See
PushNotificationChannelManager.
The client code is similar for both the JavaScript backend and the .NET backend. By default, when you add push
notifications for a JavaScript backend service, a sample call to the notifyAllUsers custom API is inserted into the
UploadChannel method.
using System;
using System.Collections.Generic;
using System.Linq;
using System.Net.Http;
using System.Text;
using System.Threading.Tasks;
using Microsoft.WindowsAzure.MobileServices;
using Newtonsoft.Json.Linq;

namespace App2
{
    internal class mymobileservice1234Push
    {
        public async static void UploadChannel()
        {
            var channel = await Windows.Networking.PushNotifications.PushNotificationChannelManager.CreatePushNotificationChannelForApplicationAsync();

            try
            {
                await App.mymobileservice1234Client.GetPush().RegisterNativeAsync(channel.Uri);
                await App.mymobileservice1234Client.InvokeApiAsync("notifyAllUsers");
            }
            catch (Exception exception)
            {
                HandleRegisterException(exception);
            }
        }

        private static void HandleRegisterException(Exception exception)
        {
        }
    }
}

Imports Microsoft.WindowsAzure.MobileServices
Imports Newtonsoft.Json.Linq

Friend Class mymobileservice1234Push

    Public Shared Async Sub UploadChannel()
        Dim channel = Await Windows.Networking.PushNotifications.PushNotificationChannelManager.CreatePushNotificationChannelForApplicationAsync()

        Try
            ' Register without tags, or (alternative) register with an array of tags.
            ' Use one of the following two calls, not both.
            Await App.mymobileservice1234Client.GetPush().RegisterNativeAsync(channel.Uri)
            Await App.mymobileservice1234Client.GetPush().RegisterNativeAsync(channel.Uri, New String() {"tag1", "tag2"})
            Await App.mymobileservice1234Client.InvokeApiAsync("notifyAllUsers")
        Catch exception As Exception
            HandleRegisterException(exception)
        End Try
    End Sub

    Private Shared Sub HandleRegisterException(exception As Exception)

    End Sub
End Class
#include "pch.h"
#include "services\mobile services\mymobileservice1234\mymobileservice1234Push.h"

using namespace AzureMobileHelper;

using namespace web;


using namespace concurrency;

using namespace Windows::Networking::PushNotifications;

void mymobileservice1234Push::UploadChannel()
{
create_task(PushNotificationChannelManager::CreatePushNotificationChannelForApplicationAsync()).
then([] (PushNotificationChannel^ newChannel)
{
return mymobileservice1234MobileService::GetClient().get_push().register_native(newChannel-&amp;gt;Uri-&amp;gt;Data());
}).then([]()
{
return mymobileservice1234MobileService::GetClient().invoke_api(L"notifyAllUsers");
}).then([](task&amp;lt;json::value&amp;gt; result)
{
try
{
result.wait();
}
catch(...)
{
HandleExceptionsComingFromTheServer();
}
});
}

void mymobileservice1234Push::HandleExceptionsComingFromTheServer()
{
}

(function () {
    "use strict";

    var app = WinJS.Application;
    var activation = Windows.ApplicationModel.Activation;

    app.addEventListener("activated", function (args) {
        if (args.detail.kind == activation.ActivationKind.launch) {
            Windows.Networking.PushNotifications.PushNotificationChannelManager.createPushNotificationChannelForApplicationAsync()
                .then(function (channel) {
                    // Register without tags, or (alternative) pass an array of tags
                    // as the second argument:
                    // mymobileservice1234Client.push.registerNative(channel.uri, new Array("tag1", "tag2"));
                    return mymobileservice1234Client.push.registerNative(channel.uri);
                })
                .done(function (registration) {
                    return mymobileservice1234Client.invokeApi("notifyAllUsers");
                }, function (error) {
                    // Error
                });
        }
    });
})();

Push notification tags provide a way to restrict notifications to a subset of clients. You can use the
registerNative method (or RegisterNativeAsync) to either register for all push notifications without
specifying tags, or register with tags by providing the second argument, an array of tags. If you register
with one or more tags, you only receive notifications that match those tags.
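
For example, in the C# version of the generated code, a tagged registration would look like this (a sketch; the tag names are placeholders):

await App.mymobileservice1234Client.GetPush().RegisterNativeAsync(
    channel.Uri, new string[] { "tag1", "tag2" });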
Server-side scripts (JavaScript backend only)
For mobile services that use the JavaScript backend, the server-side scripts run when delete, insert, read, or update
operations occur. The scripts don't implement these operations, but they run when a call from the client to the
Windows Mobile REST API triggers these events. The scripts then pass control to the operations themselves by
calling request.execute or request.respond to issue a response to the calling context. See Azure Mobile Services
REST API Reference.
A variety of functions are available in the server-side script. See Register table operations in Azure Mobile Services.
For a reference to all available functions, see Mobile Services server script reference.
The following custom API code in notifyallusers.js is also created:

exports.post = function(request, response) {
    response.send(statusCodes.OK, { message : 'Hello World!' });

    // The following call is for illustration purposes only.
    // The call and function body should be moved to a script in your app
    // where you want to send a notification.
    sendNotifications(request);
};

// The following code should be moved to the appropriate script in your app
// where the notification is sent.
function sendNotifications(request) {
    var payload = '<?xml version="1.0" encoding="utf-8"?><toast><visual><binding template="ToastText01">' +
        '<text id="1">Sample Toast</text></binding></visual></toast>';
    var push = request.service.push;
    push.wns.send(null,
        payload,
        'wns/toast', {
            success: function (pushResponse) {
                console.log("Sent push:", pushResponse);
            }
        });
}

The sendNotifications function sends a single notification as a toast notification. You can also use other types of
push notifications.
Tip For information about how to get help while editing scripts, see Enabling IntelliSense for server-side JavaScript.

Push notification types


Windows supports notifications that aren't push notifications. For general information about notifications, see
Delivering scheduled, periodic, and push notifications.
Toast notifications are easy to use, and you can review an example in the insert.js code on the channels table that's
generated for you. If you plan to use tile or badge notifications, you must create an XML template for the tile and
badge, and you must specify the encoding of packaged information in the template. See Working with tiles, badges,
and toast notifications.
Because Windows responds to push notifications, it can handle most of these notifications when the app isn't
running. For example, a push notification could let a user know when a new mail message is available even when
the local mail app isn't running. Windows handles a toast notification by displaying a message, such as the first line
of a text message. Windows handles a tile or badge notification by updating an app's live tile to reflect the number
of new mail messages. In this way, you can prompt users of your app to check it for new information. Your app can
receive raw notifications when it's running, and you can use them to send data to your app. If your app isn't
running, you can set up a background task to monitor push notifications.
You should use push notifications according to the guidelines for Universal Windows Platform (UWP) apps,
because those notifications use up a user's resources and can be distracting if overused. See Guidelines and
checklist for push notifications.
If you're updating live tiles with push notifications, you should also follow the guidelines in Guidelines and checklist
for tiles and badges.

Next steps
Using the Windows Push Notification Services (WNS)
You can call Windows Push Notification Services (WNS) directly if Mobile Services doesn't provide enough
flexibility, if you want to write your server code in C# or Visual Basic, or if you already have a cloud service and you
want to send push notifications from it. By calling WNS directly, you can send push notifications from your own
cloud service, such as a worker role that monitors data from a database or another web service. Your cloud service
must authenticate with WNS to send push notifications to your apps. See How to authenticate with the Windows
Push Notification Service (JavaScript) or (C#/C++/VB).
You can also send push notifications by running a scheduled task in your mobile service. See Schedule recurring
jobs in Mobile Services.
Warning After you've run the push notification wizard once, don't run it a second time to add registration
code for another mobile service. Running the wizard more than once per project generates code that results in
overlapping calls to the CreatePushNotificationChannelForApplicationAsync method, which leads to a
runtime exception. If you want to register for push notifications for more than one mobile service, run the wizard
once and then rewrite the registration code to ensure that calls to
CreatePushNotificationChannelForApplicationAsync do not run at the same time. For example, you can
accomplish this by moving the wizard-generated code in push.register.* (including the call to
CreatePushNotificationChannelForApplicationAsync) outside of the OnLaunched event, but the specifics of
this will depend on your app's architecture.

Related topics
Windows Push Notification Services (WNS) overview
Raw notification overview
Connecting to Windows Azure Mobile Services (JavaScript)
Connecting to Windows Azure Mobile Services (C#/C++/VB)
Quickstart: Adding push notifications for a mobile service (JavaScript)

Raw notification overview


Raw notifications are short, general purpose push notifications. They are strictly instructional and do not include a
UI component. As with other push notifications, the Windows Push Notification Services (WNS) feature delivers
raw notifications from your cloud service to your app.
You can use raw notifications for a variety of purposes, including to trigger your app to run a background task if
the user has given the app permission to do so. By using WNS to communicate with your app, you can avoid the
processing overhead of creating persistent socket connections, sending HTTP GET messages, and other service-to-
app connections.

IMPORTANT
To understand raw notifications, it's best to be familiar with the concepts discussed in the Windows Push Notification
Services (WNS) overview.

As with toast, tile, and badge push notifications, a raw notification is pushed from your app's cloud service over an
assigned channel Uniform Resource Identifier (URI) to WNS. WNS, in turn, delivers the notification to the device
and user account associated with that channel. Unlike other push notifications, raw notifications don't have a
specified format. The content of the payload is entirely app-defined.
As an illustration of an app that could benefit from raw notifications, let's look at a theoretical document
collaboration app. Consider two users who are editing the same document at the same time. The cloud service,
which hosts the shared document, could use raw notifications to notify each user when changes are made by the
other user. The raw notifications would not necessarily contain the changes to the document, but instead would
signal each user's copy of the app to contact the central location and sync the available changes. By using raw
notifications, the app and its cloud service can save the overhead of maintaining persistent connections the
entire time the document is open.

How raw notifications work


All raw notifications are push notifications. Therefore, the setup required to send and receive push notifications
applies to raw notifications as well:
You must have a valid WNS channel to send raw notifications. For more information about acquiring a push
notification channel, see How to request, create, and save a notification channel.
You must include the Internet capability in your app's manifest. In the Microsoft Visual Studio manifest editor,
you will find this option under the Capabilities tab as Internet (Client). For more information, see
Capabilities.
The body of the notification is in an app-defined format. The client receives the data as a null-terminated string
(HSTRING) that only needs to be understood by the app.
If the client is offline, raw notifications will be cached by WNS only if the X-WNS-Cache-Policy header is included in
the notification. However, only one raw notification will be cached and delivered once the device comes back
online.
There are only three possible paths for a raw notification to take on the client: it will be delivered to your
running app through a notification delivery event, sent to a background task, or dropped. Therefore, if the client is
offline and WNS attempts to deliver a raw notification, the notification is dropped.
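
For context, here's a minimal C# sketch of acquiring that channel on the client; SendChannelUriToServer is a hypothetical app-defined method that uploads the URI to your cloud service:

using Windows.Networking.PushNotifications;
...
PushNotificationChannel channel =
    await PushNotificationChannelManager.CreatePushNotificationChannelForApplicationAsync();

// The cloud service addresses raw notifications to this URI.
SendChannelUriToServer(channel.Uri); // hypothetical app-defined upload method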

Creating a raw notification


Sending a raw notification is similar to sending a tile, toast, or badge push notification, with these differences:
The HTTP Content-Type header must be set to "application/octet-stream".
The HTTP X-WNS-Type header must be set to "wns/raw".
The notification body can contain any string payload smaller than 5 KB in size.
Raw notifications are intended to be used as short messages that trigger your app to take an action, such as to
directly contact the service to sync a larger amount of data or to make a local state modification based on the
notification content. Note that WNS push notifications cannot be guaranteed to be delivered, so your app and
cloud service must account for the possibility that the raw notification might not reach the client, such as when the
client is offline.
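
To make those headers concrete, here's a hedged C# sketch of the service-side POST; it assumes you already have a WNS access token and a channel URI (see How to authenticate with the Windows Push Notification Service (WNS)):

using System;
using System.Net.Http;
using System.Net.Http.Headers;
using System.Text;
using System.Threading.Tasks;

// Sketch only: accessToken and channelUri must be obtained as described
// in the WNS authentication topic; the payload format is app-defined.
static async Task SendRawNotificationAsync(string channelUri, string accessToken)
{
    using (var client = new HttpClient())
    {
        var request = new HttpRequestMessage(HttpMethod.Post, channelUri);
        request.Headers.Authorization = new AuthenticationHeaderValue("Bearer", accessToken);
        request.Headers.Add("X-WNS-Type", "wns/raw");

        // Any string payload smaller than 5 KB, in a format your app understands.
        request.Content = new ByteArrayContent(Encoding.UTF8.GetBytes("sync-now"));
        request.Content.Headers.ContentType = new MediaTypeHeaderValue("application/octet-stream");

        HttpResponseMessage response = await client.SendAsync(request);
        // Check X-WNS-NotificationStatus and related response headers here.
    }
}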
For more information on sending push notifications, see Quickstart: Sending a push notification.

Receiving a raw notification


There are two avenues through which your app can receive raw notifications:
Through notification delivery events while your application is running.
Through background tasks triggered by the raw notification if your app is enabled to run background tasks.
An app can use both mechanisms to receive raw notifications. If an app implements both the notification delivery
event handler and background tasks that are triggered by raw notifications, the notification delivery event takes
priority when the app is running, and the app has the first opportunity to process the notification.
The notification delivery event handler can specify, by setting the event's
PushNotificationReceivedEventArgs.Cancel property to true, that the raw notification should not be passed
to its background task once the handler exits. If the Cancel property is set to false or is not set (the default
value is false), the raw notification will trigger the background task after the notification delivery event handler
has done its work.
Notification delivery events
Your app can use a notification delivery event (PushNotificationReceived) to receive raw notifications while the
app is in use. When the cloud service sends a raw notification, the running app can receive it by handling the
notification delivery event on the channel URI.
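Here's a hedged C# sketch of handling the event on the channel acquired earlier; setting Cancel to true stops the notification from also triggering a background task:

channel.PushNotificationReceived += OnPushNotificationReceived;

private void OnPushNotificationReceived(PushNotificationChannel sender,
    PushNotificationReceivedEventArgs args)
{
    if (args.NotificationType == PushNotificationType.Raw)
    {
        // The payload is the app-defined string sent by your cloud service.
        string payload = args.RawNotification.Content;

        // Prevent the raw notification from also triggering a background task.
        args.Cancel = true;
    }
}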
If your app is not running and does not use background tasks, any raw notification sent to that app is dropped by
WNS on receipt. To avoid wasting your cloud service's resources, you should consider implementing logic on the
service to track whether the app is active. There are two sources of this information: an app can explicitly tell the
service that it's ready to start receiving notifications, and WNS can tell the service when to stop.
The app notifies the cloud service: The app can contact its service to let it know that the app is running in the
foreground. The disadvantage of this approach is that the app can end up contacting your service very
frequently. However, it has the advantage that the service will always know when the app is ready to receive
incoming raw notifications. Another advantage is that when the app contacts its service, the service then knows
to send raw notifications to the specific instance of that app rather than broadcast.
The cloud service responds to WNS response messages: Your app service can use the
X-WNS-NotificationStatus and X-WNS-DeviceConnectionStatus information returned by WNS to determine when
to stop sending raw notifications to the app. When your service sends a notification to a channel as an HTTP
POST, it can receive one of these messages in the response:
X-WNS-NotificationStatus: dropped: This indicates that the notification was not received by the client.
It's a safe assumption that the dropped response is caused by your app no longer being in the
foreground on the user's device.
X-WNS-DeviceConnectionStatus: disconnected or X-WNS-DeviceConnectionStatus:
tempconnected: This indicates that the Windows client no longer has a connection to WNS. Note that
to receive this message from WNS, you have to ask for it by setting the X-WNS-RequestForStatus header
in the notification's HTTP POST.
Your app's cloud service can use the information in these status messages to cease communication
attempts through raw notifications. The service can resume sending raw notifications once it is contacted by
the app, when the app switches back into the foreground.
Note that you should not rely on X-WNS-NotificationStatus to determine whether the notification was
successfully delivered to the client.
For more information, see Push notification service request and response headers.
Background tasks triggered by raw notifications

IMPORTANT
Before using raw notification background tasks, an app must be granted background access via
BackgroundExecutionManager.RequestAccessAsync.

Your background task must be registered with a PushNotificationTrigger. If it is not registered, the task will not
run when a raw notification is received.
A background task that is triggered by a raw notification enables your app's cloud service to contact your app,
even when the app is not running (though it might trigger it to run). This happens without the app having to
maintain a continuous connection. Raw notifications are the only notification type that can trigger background
tasks. However, while toast, tile, and badge push notifications cannot trigger background tasks, background tasks
triggered by raw notifications can update tiles and invoke toast notifications through local API calls.
As an illustration of how background tasks that are triggered by raw notifications work, let's consider an app used
to read e-books. First, a user purchases a book online, possibly on another device. In response, the app's cloud
service can send a raw notification to each of the user's devices, with a payload that states that the book was
purchased and the app should download it. The app then directly contacts the app's cloud service to begin a
background download of the new book so that later, when the user launches the app, the book is already there and
ready for reading.
To use a raw notification to trigger a background task, your app must:
1. Request permission to run tasks in the background (which the user can revoke at any time) by using
BackgroundExecutionManager.RequestAccessAsync.
2. Implement the background task. For more information, see Supporting your app with background tasks
Your background task is then invoked in response to the PushNotificationTrigger, each time a raw notification is
received for your app. Your background task interprets the raw notification's app-specific payload and acts on it.
For each app, only one background task can run at a time. If a background task is triggered for an app for which a
background task is already running, the first background task must complete before the new one is run.
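
Here's a hedged C# sketch of those two steps; the task name and entry point are placeholders for your own background task class:

using Windows.ApplicationModel.Background;
...
// 1. Request background access (which the user can revoke at any time).
BackgroundAccessStatus status = await BackgroundExecutionManager.RequestAccessAsync();

// 2. Register a background task that runs when a raw notification arrives.
var builder = new BackgroundTaskBuilder
{
    Name = "RawNotificationTask",                // placeholder task name
    TaskEntryPoint = "Tasks.RawNotificationTask" // placeholder entry point
};
builder.SetTrigger(new PushNotificationTrigger());
BackgroundTaskRegistration registration = builder.Register();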

Other resources
You can learn more by downloading the Raw notifications sample for Windows 8.1 and the Push and periodic
notifications sample for Windows 8.1, and re-using their source code in your Windows 10 app.

Related topics
Guidelines for raw notifications
Quickstart: Creating and registering a raw notification background task
Quickstart: Intercepting push notifications for running apps
RawNotification
BackgroundExecutionManager.RequestAccessAsync
Toggle switches

The toggle switch represents a physical switch that allows users to turn things on or off. Use ToggleSwitch
controls to present users with exactly two mutually exclusive options (like on/off), where choosing an option
results in an immediate action.

Important APIs
ToggleSwitch class
IsOn property
Toggled event

Is this the right control?


Use a toggle switch for binary operations that take effect right after the user flips the toggle switch. For example,
use a toggle switch to turn services or hardware components on or off, such as WiFi:

If a physical switch would work for the action, a toggle switch is probably the best control to use.
After the user toggles the switch on or off, we recommend that the corresponding action is immediately
performed.
Choosing between toggle switch and check box
For some actions, either a toggle switch or a check box might work. To decide which control would work better,
follow these tips:
Use a toggle switch for binary settings when changes become effective immediately after the user
changes them.

In the above example, it's clear with the toggle switch that the wireless is set to "On." But with the
checkbox, the user needs to think about whether the wireless is on now or whether they need to check
the box to turn wireless on.
Use a checkbox when the user has to perform extra steps for changes to be effective. For example, if the
user must click a "submit" or "next" button to apply changes, use a check box.

Use check boxes or a list box when the user can select multiple items:

Examples
Toggle switches in the general settings of the News app.

Toggle switches in the start menu settings in Windows.


Create a toggle switch
Here's how to create a simple toggle switch. This XAML creates the WiFi toggle switch shown previously.

<ToggleSwitch x:Name="wiFiToggle" Header="Wifi"/>

Here's how to create the same toggle switch in code.

ToggleSwitch wiFiToggle = new ToggleSwitch();
wiFiToggle.Header = "WiFi";

// Add the toggle switch to a parent container in the visual tree.
stackPanel1.Children.Add(wiFiToggle);

IsOn
The switch can be either on or off. Use the IsOn property to determine the state of the switch. When the switch
is used to control the state of another binary property, you can use a binding as shown here.

<StackPanel Orientation="Horizontal">
<ToggleSwitch x:Name="ToggleSwitch1" IsOn="True"/>
<ProgressRing IsActive="{x:Bind ToggleSwitch1.IsOn, Mode=OneWay}" Width="130"/>
</StackPanel>
Toggled
In other cases, you can handle the Toggled event to respond to changes in the state.
This example shows how to add a Toggled event handler in XAML and in code. The Toggled event is handled to
turn a progress ring on or off, and change its visibility.

<ToggleSwitch x:Name="toggleSwitch1" IsOn="True"


Toggled="ToggleSwitch_Toggled"/>

Here's how to create the same toggle switch in code.

// Create a new toggle switch and add a Toggled event handler.
ToggleSwitch toggleSwitch1 = new ToggleSwitch();
toggleSwitch1.Toggled += ToggleSwitch_Toggled;

// Add the toggle switch to a parent container in the visual tree.
stackPanel1.Children.Add(toggleSwitch1);

Here's the handler for the Toggled event.

private void ToggleSwitch_Toggled(object sender, RoutedEventArgs e)
{
ToggleSwitch toggleSwitch = sender as ToggleSwitch;
if (toggleSwitch != null)
{
if (toggleSwitch.IsOn == true)
{
progress1.IsActive = true;
progress1.Visibility = Visibility.Visible;
}
else
{
progress1.IsActive = false;
progress1.Visibility = Visibility.Collapsed;
}
}
}

On/Off labels
By default, the toggle switch includes literal On and Off labels, which are localized automatically. You can replace
these labels by setting the OnContent and OffContent properties.
This example replaces the On/Off labels with Show/Hide labels.

<ToggleSwitch x:Name="imageToggle" Header="Show images"


OffContent="Show" OnContent="Hide"
Toggled="ToggleSwitch_Toggled"/>

You can also use more complex content by setting the OnContentTemplate and OffContentTemplate
properties.

Recommendations
Replace the On and Off labels when there are more specific labels for the setting. If there are short (3-4
characters) labels that represent binary opposites that are more appropriate for a particular setting, use
those. For example, you could use "Show/Hide" if the setting is "Show images." Using more specific labels
can help when localizing the UI.
Avoid replacing the On and Off labels unless you must; stick with the default labels unless the situation calls
for custom ones.
Labels should be no more than 4 characters long.

Related articles
ToggleSwitch class
Radio buttons
Toggle switches
Check boxes
Tooltips

A tooltip is a short description that is linked to another control or object. Tooltips help users understand unfamiliar
objects that aren't described directly in the UI. They display automatically when the user moves focus to, presses
and holds, or hovers the mouse pointer over a control. The tooltip disappears after a few seconds, or when the
user moves the finger, pointer, or keyboard/gamepad focus.

Important APIs
ToolTip class
ToolTipService class

Is this the right control?


Use a tooltip to reveal more info about a control before asking the user to perform an action. Tooltips should be
used sparingly, and only when they are adding distinct value for the user who is trying to complete a task. One
rule of thumb is that if the information is available elsewhere in the same experience, you do not need a tooltip. A
valuable tooltip will clarify an unclear action.
When should you use a tooltip? To decide, consider these questions:
Should info become visible based on pointer hover? If not, use another control. Display tips only as
the result of user interaction, never display them on their own.
Does a control have a text label? If not, use a tooltip to provide the label. It is a good UX design practice
to label most controls inline and for these you don't need tooltips. Toolbar controls and command buttons
showing only icons need tooltips.
Does an object benefit from a description or further info? If so, use a tooltip. But the text must be
supplemental; that is, not essential to the primary tasks. If it is essential, put it directly in the UI so that
users don't have to discover or hunt for it.
Is the supplemental info an error, warning, or status? If so, use another UI element, such as a flyout.
Do users need to interact with the tip? If so, use another control. Users can't interact with tips because
moving the mouse makes them disappear.
Do users need to print the supplemental info? If so, use another control.
Will users find the tips annoying or distracting? If so, consider using another solution, including
doing nothing at all. If you do use tips where they might be distracting, allow users to turn them off.
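
When a tooltip is warranted, attaching one takes a single call. Here's a minimal C# sketch using the ToolTipService attached property; saveButton stands in for a hypothetical icon-only toolbar button:

// Attach a short, supplemental description to an icon-only button.
ToolTipService.SetToolTip(saveButton, "Save the document");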

Example
A tooltip in the Bing Maps app.
Recommendations
Use tooltips sparingly (or not at all). Tooltips are an interruption. A tooltip can be as distracting as a pop-up, so
don't use them unless they add significant value.
Keep the tooltip text concise. Tooltips are perfect for short sentences and sentence fragments. Large blocks of
text can be overwhelming and the tooltip may time out before the user has finished reading.
Create helpful, supplemental tooltip text. Tooltip text must be informative. Don't make it obvious or just repeat
what is already on the screen. Because tooltip text isn't always visible, it should be supplemental info that users
don't have to read. Communicate important info using self-explanatory control labels or in-place supplemental
text.
Use images when appropriate. Sometimes it's better to use an image in a tooltip. For example, when the user
hovers over a hyperlink, you can use a tooltip to show a preview of the linked page.
Don't use a tooltip to display text already visible in the UI. For example, don't put a tooltip on a button that
shows the same text of the button.
Don't put interactive controls inside the tooltip.
Don't put images that look like they are interactive inside the tooltip.
Related topics
ToolTip class
Web view

A web view control embeds a view into your app that renders web content using the Microsoft Edge rendering
engine. Hyperlinks can also appear and function in a web view control.

Important APIs
WebView class

Is this the right control?


Use a web view control to display richly formatted HTML content from a remote web server, dynamically generated
code, or content files in your app package. Rich content can also contain script code and communicate between the
script and your app's code.

Create a web view
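
As a minimal sketch, you can create a web view in code and add it to the visual tree; rootGrid is a hypothetical parent panel, and webView1 matches the name used in the examples that follow:

// Create a web view, size it, and add it to a parent container.
WebView webView1 = new WebView();
webView1.Width = 600;
webView1.Height = 400;
rootGrid.Children.Add(webView1); // rootGrid is an app-defined panel
webView1.Navigate(new Uri("http://www.contoso.com"));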


Modify the appearance of a web view
WebView is not a Control subclass, so it doesn't have a control template. However, you can set various properties
to control some visual aspects of the web view.
To constrain the display area, set the Width and Height properties.
To translate, scale, skew, and rotate a web view, use the RenderTransform property.
To control the opacity of the web view, set the Opacity property.
To specify a color to use as the web page background when the HTML content does not specify a color, set the
DefaultBackgroundColor property.
Get the web page title
You can get the title of the HTML document currently displayed in the web view by using the DocumentTitle
property.
Input events and tab order
Although WebView is not a Control subclass, it will receive keyboard input focus and participate in the tab
sequence. It provides a Focus method, and GotFocus and LostFocus events, but it has no tab-related properties.
Its position in the tab sequence is the same as its position in the XAML document order. The tab sequence includes
all elements in the web view content that can receive input focus.
As indicated in the Events table on the WebView class page, web view doesn't support most of the user input
events inherited from UIElement, such as KeyDown, KeyUp, and PointerPressed. Instead, you can use
InvokeScriptAsync with the JavaScript eval function to use the HTML event handlers, and to use
window.external.notify from the HTML event handler to notify the application using WebView.ScriptNotify.
Navigating to content
Web view provides several APIs for basic navigation: GoBack, GoForward, Stop, Refresh, CanGoBack, and
CanGoForward. You can use these to add typical web browsing capabilities to your app.
To set the initial content of the web view, set the Source property in XAML. The XAML parser automatically
converts the string to a Uri.
<!-- Source file is on the web. -->
<WebView x:Name="webView1" Source="http://www.contoso.com"/>

<!-- Source file is in local storage. -->
<WebView x:Name="webView2" Source="ms-appdata:///local/intro/welcome.html"/>

<!-- Source file is in the app package. -->
<WebView x:Name="webView3" Source="ms-appx-web:///help/about.html"/>

The Source property can be set in code, but rather than doing so, you typically use one of the Navigate methods
to load content in code.
To load web content, use the Navigate method with a Uri that uses the http or https scheme.

webView1.Navigate("http://www.contoso.com");

To navigate to a URI with a POST request and HTTP headers, use the NavigateWithHttpRequestMessage
method. This method supports only HttpMethod.Post and HttpMethod.Get for the
HttpRequestMessage.Method property value.
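
For example, here's a hedged sketch of a POST navigation with a custom header; note that this method uses the Windows.Web.Http types, not System.Net.Http:

using Windows.Web.Http;
...
var requestMessage = new HttpRequestMessage(HttpMethod.Post,
    new Uri("http://www.contoso.com/login"));
requestMessage.Headers.Append("X-Custom-Header", "value");               // hypothetical header
requestMessage.Content = new HttpStringContent("user=name&pass=secret"); // hypothetical form body
webView1.NavigateWithHttpRequestMessage(requestMessage);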
To load uncompressed and unencrypted content from your app's LocalFolder or TemporaryFolder data
stores, use the Navigate method with a Uri that uses the ms-appdata scheme. The web view support for this
scheme requires you to place your content in a subfolder under the local or temporary folder. This enables
navigation to URIs such as ms-appdata:///local/folder/file.html and ms-appdata:///temp/folder/file.html. (To load
compressed or encrypted files, see NavigateToLocalStreamUri.)
Each of these first-level subfolders is isolated from the content in other first-level subfolders. For example, you can
navigate to ms-appdata:///temp/folder1/file.html, but you can't have a link in this file to ms-
appdata:///temp/folder2/file.html. However, you can still link to HTML content in the app package using the ms-
appx-web scheme, and to web content using the http and https URI schemes.

webView1.Navigate("ms-appdata:///local/intro/welcome.html");

To load content from your app package, use the Navigate method with a Uri that uses the ms-appx-web
scheme.

webView1.Navigate("ms-appx-web:///help/about.html");

You can load local content through a custom resolver using the NavigateToLocalStreamUri method. This
enables advanced scenarios such as downloading and caching web-based content for offline use, or extracting
content from a compressed file.
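
Here's a hedged sketch of the pattern: implement IUriToStreamResolver, build an ms-local-stream URI with BuildLocalStreamUri, and navigate. The mapping onto a /Web folder in the app package is an assumption for illustration.

using System;
using System.Threading.Tasks;
using Windows.Foundation;
using Windows.Storage;
using Windows.Storage.Streams;
using Windows.Web;

// Resolves requested paths onto content in the app package under /Web.
public sealed class StreamUriResolver : IUriToStreamResolver
{
    public IAsyncOperation<IInputStream> UriToStreamAsync(Uri uri)
    {
        return GetContentAsync(uri.AbsolutePath).AsAsyncOperation();
    }

    private async Task<IInputStream> GetContentAsync(string path)
    {
        Uri localUri = new Uri("ms-appx:///Web" + path); // assumed content location
        StorageFile file = await StorageFile.GetFileFromApplicationUriAsync(localUri);
        IRandomAccessStream stream = await file.OpenAsync(FileAccessMode.Read);
        return stream.GetInputStreamAt(0);
    }
}

// Usage:
Uri url = webView1.BuildLocalStreamUri("MyContent", "/default.html");
webView1.NavigateToLocalStreamUri(url, new StreamUriResolver());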
Responding to navigation events
The web view control provides several events that you can use to respond to navigation and content loading states.
The events occur in the following order for the root web view content: NavigationStarting, ContentLoading,
DOMContentLoaded, NavigationCompleted
NavigationStarting - Occurs before the web view navigates to new content. You can cancel navigation in a
handler for this event by setting the WebViewNavigationStartingEventArgs.Cancel property to true.
webView1.NavigationStarting += webView1_NavigationStarting;

private void webView1_NavigationStarting(object sender, WebViewNavigationStartingEventArgs args)
{
    // Cancel navigation if URL is not allowed. (Implementation of IsAllowedUri not shown.)
    if (!IsAllowedUri(args.Uri))
        args.Cancel = true;
}

ContentLoading - Occurs when the web view has started loading new content.

webView1.ContentLoading += webView1_ContentLoading;

private void webView1_ContentLoading(WebView sender, WebViewContentLoadingEventArgs args)
{
// Show status.
if (args.Uri != null)
{
statusTextBlock.Text = "Loading content for " + args.Uri.ToString();
}
}

DOMContentLoaded - Occurs when the web view has finished parsing the current HTML content.

webView1.DOMContentLoaded += webView1_DOMContentLoaded;

private void webView1_DOMContentLoaded(WebView sender, WebViewDOMContentLoadedEventArgs args)
{
// Show status.
if (args.Uri != null)
{
statusTextBlock.Text = "Content for " + args.Uri.ToString() + " has finished loading";
}
}

NavigationCompleted - Occurs when the web view has finished loading the current content or if navigation has
failed. To determine whether navigation has failed, check the IsSuccess and WebErrorStatus properties of the
WebViewNavigationCompletedEventArgs class.

webView1.NavigationCompleted += webView1_NavigationCompleted;

private void webView1_NavigationCompleted(WebView sender, WebViewNavigationCompletedEventArgs args)
{
if (args.IsSuccess == true)
{
statusTextBlock.Text = "Navigation to " + args.Uri.ToString() + " completed successfully.";
}
else
{
statusTextBlock.Text = "Navigation to: " + args.Uri.ToString() +
" failed with error " + args.WebErrorStatus.ToString();
}
}

Similar events occur in the same order for each iframe in the web view content:
FrameNavigationStarting - Occurs before a frame in the web view navigates to new content.
FrameContentLoading - Occurs when a frame in the web view has started loading new content.
FrameDOMContentLoaded - Occurs when a frame in the web view has finished parsing its current HTML
content.
FrameNavigationCompleted - Occurs when a frame in the web view has finished loading its content.
Responding to potential problems
You can respond to potential problems with the content such as long running scripts, content that web view can't
load, and warnings of unsafe content.
Your app might appear unresponsive while scripts are running. The LongRunningScriptDetected event occurs
periodically while the web view executes JavaScript and provides an opportunity to interrupt the script. To
determine how long the script has been running, check the ExecutionTime property of the
WebViewLongRunningScriptDetectedEventArgs. To halt the script, set the event args
StopPageScriptExecution property to true. The halted script will not execute again unless it is reloaded during a
subsequent web view navigation.
The web view control cannot host arbitrary file types. When an attempt is made to load content that the web view
can't host, the UnviewableContentIdentified event occurs. You can handle this event and notify the user, or use
the Launcher class to redirect the file to an external browser or another app.
Similarly, the UnsupportedUriSchemeIdentified event occurs when a URI scheme that's not supported is
invoked in the web content, such as fbconnect:// or mailto://. You can handle this event to provide custom behavior
instead of allowing the default system launcher to launch the URI.
The UnsafeContentWarningDisplaying event occurs when the web view shows a warning page for content that
was reported as unsafe by the SmartScreen Filter. If the user chooses to continue the navigation, subsequent
navigation to the page will not display the warning nor fire the event.
Handling special cases for web view content
You can use the ContainsFullScreenElement property and ContainsFullScreenElementChanged event to
detect, respond to, and enable full-screen experiences in web content, such as full-screen video playback. For
example, you may use the ContainsFullScreenElementChanged event to resize the web view to occupy the entirety
of your app view, or, as the following example illustrates, put a windowed app in full screen mode when a full
screen web experience is desired.

// Assume webView is defined in XAML
webView.ContainsFullScreenElementChanged += webView_ContainsFullScreenElementChanged;

private void webView_ContainsFullScreenElementChanged(WebView sender, object args)
{
var applicationView = ApplicationView.GetForCurrentView();

if (sender.ContainsFullScreenElement)
{
applicationView.TryEnterFullScreenMode();
}
else if (applicationView.IsFullScreenMode)
{
applicationView.ExitFullScreenMode();
}
}

You can use the NewWindowRequested event to handle cases where hosted web content requests a new
window to be displayed, such as a popup window. You can use another WebView control to display the contents of
the requested window.
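As a sketch, one simple policy is to open requested windows in the user's default browser instead:

webView1.NewWindowRequested += webView1_NewWindowRequested;

private async void webView1_NewWindowRequested(WebView sender,
    WebViewNewWindowRequestedEventArgs args)
{
    // Open the requested content externally and suppress the default behavior.
    await Windows.System.Launcher.LaunchUriAsync(args.Uri);
    args.Handled = true;
}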
Use the PermissionRequested event to enable web features that require special capabilities. These currently include
geolocation, IndexedDB storage, and user audio and video (for example, from a microphone or webcam). If your
app accesses user location or user media, you are still required to declare this capability in the app manifest. For
example, an app that uses geolocation needs the following capability declarations at minimum in
Package.appxmanifest:

<Capabilities>
<Capability Name="internetClient" />
<DeviceCapability Name="location" />
</Capabilities>

In addition to the app handling the PermissionRequested event, the user will have to approve standard system
dialogs for apps requesting location or media capabilities in order for these features to be enabled.
Here is an example of how an app would enable geolocation in a map from Bing:

// Assume webView is defined in XAML
webView.PermissionRequested += webView_PermissionRequested;

private void webView_PermissionRequested(WebView sender, WebViewPermissionRequestedEventArgs args)
{
if (args.PermissionRequest.PermissionType == WebViewPermissionType.Geolocation &&
args.PermissionRequest.Uri.Host == "www.bing.com")
{
args.PermissionRequest.Allow();
}
}

If your app requires user input or other asynchronous operations to respond to a permission request, use the
Defer method of WebViewPermissionRequest to create a WebViewDeferredPermissionRequest that can be
acted upon at a later time. See WebViewPermissionRequest.Defer.
If users must securely log out of a website hosted in a web view, or in other cases when security is important, call
the static method ClearTemporaryWebDataAsync to clear out all locally cached content from a web view
session. This prevents malicious users from accessing sensitive data.
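For example:

// Clear locally cached content and IndexedDB data from the web view session.
await WebView.ClearTemporaryWebDataAsync();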
Interacting with web view content
You can interact with the content of the web view by using the InvokeScriptAsync method to invoke or inject
script into the web view content, and the ScriptNotify event to get information back from the web view content.
To invoke JavaScript inside the web view content, use the InvokeScriptAsync method. The invoked script can
return only string values.
For example, if the content of a web view named webView1 contains a function named setDate that takes 3
parameters, you can invoke it like this.

string[] args = {"January", "1", "2000"};
string returnValue = await webView1.InvokeScriptAsync("setDate", args);

You can use InvokeScriptAsync with the JavaScript eval function to inject content into the web page.
Here, the text of a XAML text box ( nameTextBox.Text ) is written to a div in an HTML page hosted in webView1 .

private async void Button_Click(object sender, RoutedEventArgs e)
{
string functionString = String.Format("document.getElementById('nameDiv').innerText = 'Hello, {0}';", nameTextBox.Text);
await webView1.InvokeScriptAsync("eval", new string[] { functionString });
}

Scripts in the web view content can use window.external.notify with a string parameter to send information
back to your app. To receive these messages, handle the ScriptNotify event.
To enable an external web page to fire the ScriptNotify event when calling window.external.notify, you must
include the page's URI in the ApplicationContentUriRules section of the app manifest. (You can do this in
Microsoft Visual Studio on the Content URIs tab of the Package.appxmanifest designer.) The URIs in this list must
use HTTPS, and may contain subdomain wildcards (for example, https://*.microsoft.com ) but they cannot contain
domain wildcards (for example, https://*.com and https://*.* ). The manifest requirement does not apply to content
that originates from the app package, uses an ms-local-stream:// URI, or is loaded using NavigateToString.
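
Here's a minimal sketch of the receiving side; e.Value carries the string passed to window.external.notify, and statusTextBlock is the same status control used in the earlier examples:

webView1.ScriptNotify += webView1_ScriptNotify;

private void webView1_ScriptNotify(object sender, NotifyEventArgs e)
{
    // e.CallingUri identifies the source page; e.Value is the app-defined string.
    statusTextBlock.Text = "Script said: " + e.Value;
}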
Accessing the Windows Runtime in a web view
You can use the AddWebAllowedObject method to inject an instance of a native class from a Windows Runtime
component into the JavaScript context of the web view. This allows full access to the native methods, properties,
and events of that object in the JavaScript content of that web view. The class must be decorated with the
AllowForWeb attribute.
For example, this code injects an instance of MyClass imported from a Windows Runtime component into a web
view.

private void webView_NavigationStarting(WebView sender, WebViewNavigationStartingEventArgs args)
{
if (args.Uri.Host == "www.contoso.com")
{
webView.AddWebAllowedObject("nativeObject", new MyClass());
}
}

For more info, see WebView.AddWebAllowedObject.


In addition, trusted JavaScript content in a web view can be allowed to directly access Windows Runtime APIs. This
provides powerful native capabilities for web apps hosted in a web view. To enable this feature, the URI for trusted
content must be whitelisted in the ApplicationContentUriRules of the app in Package.appxmanifest, with
WindowsRuntimeAccess specifically set to "all".
This example shows a section of the app manifest. Here, a local URI is given access to the Windows Runtime.

<Applications>
<Application Id="App"
...

<uap:ApplicationContentUriRules>
<uap:Rule Match="ms-appx-web:///Web/App.html" WindowsRuntimeAccess="all" Type="include"/>
</uap:ApplicationContentUriRules>
</Application>
</Applications>

Options for web content hosting


You can use the WebView.Settings property (of type WebViewSettings) to control whether JavaScript and
IndexedDB are enabled. For example, if you use a web view to display strictly static content, you might want to
disable JavaScript for best performance.
Capturing web view content
To enable sharing web view content with other apps, use the CaptureSelectedContentToDataPackageAsync
method, which returns the selected content as a DataPackage. This method is asynchronous, so you must use a
deferral to prevent your DataRequested event handler from returning before the asynchronous call is complete.
To get a preview image of the web view's current content, use the CapturePreviewToStreamAsync method. This
method creates an image of the current content and writes it to the specified stream.
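
A sketch of capturing the preview into an in-memory stream, which you could then display or save:

using Windows.Storage.Streams;
...
var stream = new InMemoryRandomAccessStream();
await webView1.CapturePreviewToStreamAsync(stream);
// stream now contains an image of the web view's current content.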
Threading behavior
By default, web view content is hosted on the UI thread on devices in the desktop device family, and off the UI
thread on all other devices. You can use the WebView.DefaultExecutionMode static property to query the
default threading behavior for the current client. If necessary, you can use the
WebView(WebViewExecutionMode) constructor to override this behavior.

Note There might be performance issues when hosting content on the UI thread on mobile devices, so be sure
to test on all target devices when you change DefaultExecutionMode.
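
For example, to host a web view's content off the UI thread regardless of the device family (a sketch):

// Host this web view's content on a separate thread.
WebView webView = new WebView(WebViewExecutionMode.SeparateThread);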

A web view that hosts content off the UI thread is not compatible with parent controls that require gestures to
propagate up from the web view control to the parent, such as FlipView, ScrollViewer, and other related controls.
These controls will not be able to receive gestures initiated in the off-thread web view. In addition, printing off-
thread web content is not directly supported; you should print an element with a WebViewBrush fill instead.

Recommendations
Make sure that the website loaded is formatted correctly for the device and uses colors, typography, and
navigation that are consistent with the rest of your app.
Input fields should be appropriately sized. Users may not realize that they can zoom in to enter text.
If a web view doesn't look like the rest of your app, consider alternative controls or ways to accomplish relevant
tasks. If your web view matches the rest of your app, users will see it all as one seamless experience.

Related topics
WebView class
Inputs and devices

UWP apps automatically handle a wide variety of inputs and run on a variety of devices; there's nothing extra you
need to do to enable touch input or make your app run on a phone, for example.
But there are times when you might want to optimize your app for certain types of input or devices. For example, if
you're creating a painting app, you might want to customize the way you handle pen input.
The design and coding instructions in this section help you customize your UWP app for specific types of inputs
and devices.

Input primer
See our Input primer to familiarize yourself with each input device type and its behaviors, capabilities, and
limitations when paired with certain form factors.

Inputs and interactions


Surface Dial
Learn how to integrate this brand new category of input device into your Windows apps.
This device is intended as a secondary, multi-modal input device that complements or modifies input from a
primary device.

Cortana
Extend the basic functionality of Cortana with voice commands that launch and execute a single action in an
external application.

Speech
Integrate speech recognition and text-to-speech (also known as TTS, or speech synthesis) directly into the user
experience of your app.

Pen
Optimize your UWP app for pen input to provide both standard pointer device functionality and the best
Windows Ink experience for your users.

Keyboard
Keyboard input is an important part of the overall user interaction experience for apps. The keyboard is
indispensable to people with certain disabilities or users who just consider it a more efficient way to interact
with an app.

Touch
UWP includes a number of different mechanisms for handling touch input, all of which enable you to create an
immersive experience that your users can explore with confidence.

Touchpad
A touchpad combines both indirect multi-touch input with the precision input of a pointing device, such as a
mouse. This combination makes the touchpad suited to both a touch-optimized UI and the smaller targets of
productivity apps.

Mouse
Mouse input is best suited for user interactions that require precision when pointing and clicking. This inherent
precision is naturally supported by the UI of Windows, which is optimized for the imprecise nature of touch.

Gamepad and remote control


UWP apps now support gamepad and remote control input. Gamepads and remote controls are the primary
input devices for Xbox and TV experiences.

Multiple inputs
To accommodate as many users and devices as possible, we recommend that you design your apps to work with as
many input types as possible (gesture, speech, touch, touchpad, mouse, and keyboard). Doing so will maximize
flexibility, usability, and accessibility.

Identify input devices


Identify the input devices connected to a Universal Windows Platform (UWP) device and identify their
capabilities and attributes.

Handle pointer input


Receive, process, and manage input data from pointing devices, such as touch, mouse, pen/stylus, and
touchpad, in Universal Windows Platform (UWP) apps.

Custom text input


The core text APIs in the Windows.UI.Text.Core namespace enable a UWP app to receive text input from any text
service supported on Windows devices. This enables the app to receive text in any language and from any input
type, like keyboard, speech, or pen.

Selecting text and images


This article describes selecting and manipulating text, images, and controls and provides user experience
guidelines that should be considered when using these mechanisms in your apps.

Panning
Panning or scrolling lets users navigate within a single view, to display the content of the view that does not fit
within the viewport.

Optical zoom and resizing


This article describes Windows zooming and resizing elements and provides user experience guidelines for
using these interaction mechanisms in your apps.

Rotation
This article describes the new Windows UI for rotation and provides user experience guidelines that should be
considered when using this new interaction mechanism in your UWP app.
Targeting
Touch targeting in Windows uses the full contact area of each finger that is detected by a touch digitizer. The
larger, more complex set of input data reported by the digitizer is used to increase precision when determining
the user's intended (or most likely) target.

Visual feedback
Use visual feedback to show users when their interactions are detected, interpreted, and handled. Visual
feedback can help users by encouraging interaction. It indicates the success of an interaction, which improves
the user's sense of control. It also relays system status and reduces errors.

Devices
Getting to know the devices that support UWP apps will help you offer the best user experience for each form
factor. When designing for a particular device, the main considerations include how the app will appear on that
device, where, when, and how the app will be used on that device, and how the user will interact with that device.

Device primer
Getting to know the devices that support UWP apps will help you offer the best user experience for each form
factor.

Designing for Xbox and TV


Design your Universal Windows Platform (UWP) app so that it looks good and functions well on Xbox One and
television screens.
Interaction primer

User interactions in the Universal Windows Platform (UWP) are a combination of input and output sources (such
as mouse, keyboard, pen, touch, touchpad, speech, Cortana, controller, gesture, gaze, and so on), along with
various modes or modifiers that enable extended experiences (including mouse wheel and buttons, pen eraser
and barrel buttons, touch keyboard, and background app services).
The UWP uses a "smart" contextual interaction system that, in most cases, eliminates the need to individually
handle the unique types of input received by your app. This includes handling touch, touchpad, mouse, and pen
input as a generic pointer type to support static gestures such as tap or press-and-hold, manipulation gestures
such as slide for panning, or rendering digital ink.
Familiarize yourself with each input device type and its behaviors, capabilities, and limitations when paired with
certain form factors. This can help you decide whether the platform controls and affordances are sufficient for
your app, or require you to provide customized interaction experiences.

Surface Dial
For Windows 10 Anniversary Update, we're introducing a new category of input device called Windows Wheel.
The Surface Dial is the first in this class of device.
Device support
Tablet
PCs and laptops
Typical usage
With a form factor based on a rotate action (or gesture), the Surface Dial is intended as a secondary, multi-modal
input device that complements or modifies input from a primary device. In most cases, the device is manipulated
by a user's non-dominant hand while they perform a task with their dominant hand (such as inking with a pen).
More info
Surface Dial design guidelines

Cortana
In Windows 10, Cortana extensibility lets you handle voice commands from a user and launch your application to
carry out a single action.
Device support
Phones and phablets
Tablet
PCs and laptops
Surface Hub
IoT
Xbox
HoloLens

Typical usage
A voice command is a single utterance, defined in a Voice Command Definition (VCD) file, directed at an installed
app through Cortana. The app can be launched in the foreground or background, depending on the level and
complexity of the interaction. For instance, voice commands that require additional context or user input are best
handled in the foreground, while basic commands can be handled in the background.
Integrating the basic functionality of your app, and providing a central entry point for the user to accomplish most
of the tasks without opening your app directly, lets Cortana become a liaison between your app and the user. In
many cases, this can save the user significant time and effort. For more info, see Cortana design guidelines.
More info
Cortana design guidelines

Speech
Speech is an effective and natural way for people to interact with applications. It's an easy and accurate way to
communicate with applications, and lets people be productive and stay informed in a variety of situations.
Speech can complement or, in many cases, be the primary input type, depending on the user's device. For
example, devices such as HoloLens and Xbox do not support traditional input types (aside from a software
keyboard in specific scenarios). Instead, they rely on speech input and output (often combined with other non-
traditional input types such as gaze and gesture) for most user interactions.
Text-to-speech (also known as TTS, or speech synthesis) is used to inform or direct the user.
Device support
Phones and phablets
Tablet
PCs and laptops
Surface Hub
IoT
Xbox
HoloLens

Typical usage
There are three modes of Speech interaction:
Natural language
Natural language is how we verbally interact with people on a regular basis. Our speech varies from person to
person and situation to situation, and is generally understood. When it's not, we often use different words and
word order to get the same idea across.
Natural language interactions with an app are similar: we speak to the app through our device as if it were a
person and expect it to understand and react accordingly.
Natural language is the most advanced mode of speech interaction, and can be implemented and exposed
through Cortana.
Command and control
Command and control is the use of verbal commands to activate controls and functionality such as clicking a
button or selecting a menu item.
As command and control is critical to a successful user experience, a single input type is generally not
recommended. Speech is typically one of several input options for a user based on their preferences or hardware
capabilities.
Dictation
The most basic speech input method. Each utterance is converted to text.
Dictation is typically used when an app doesn't need to understand meaning or intent.
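As a rough sketch, a single dictated utterance can be captured with the SpeechRecognizer class. This assumes the app has declared the Microphone capability in its manifest:

using System.Threading.Tasks;
using Windows.Media.SpeechRecognition;

// Capture one dictated utterance and return the recognized text.
private async Task<string> DictateAsync()
{
    using (var recognizer = new SpeechRecognizer())
    {
        // With no constraints added, the recognizer defaults to
        // free-text dictation.
        await recognizer.CompileConstraintsAsync();

        SpeechRecognitionResult result = await recognizer.RecognizeAsync();
        return result.Text;
    }
}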
More info
Speech design guidelines

Pen
A pen (or stylus) can serve as a pixel-precise pointing device, like a mouse, and is the optimal device for digital ink
input.
Note There are two types of pen devices: active and passive.
Passive pens do not contain electronics, and effectively emulate touch input from a finger. They require a basic
device display that recognizes input based on contact pressure. Because users often rest their hand as they
write on the input surface, input data can become polluted due to unsuccessful palm rejection.
Active pens contain electronics and can work with complex device displays to provide much more extensive
input data (including hover, or proximity data) to the system and your app. Palm rejection is much more
robust.
When we refer to pen devices here, we are referring to active pens that provide rich input data and are used
primarily for precise ink and pointing interactions.
Device support
Phones and phablets
Tablet
PCs and laptops
Surface Hub
IoT

Typical usage
The Windows ink platform, together with a pen, provides a natural way to create handwritten notes, drawings, and
annotations. The platform supports capturing ink data from digitizer input, generating ink data, rendering that
data as ink strokes on the output device, managing the ink data, and performing handwriting recognition. In
addition to capturing the spatial movements of the pen as the user writes or draws, your app can also collect info
such as pressure, shape, color, and opacity, to offer user experiences that closely resemble drawing on paper with
a pen, pencil, or brush.
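As a minimal sketch of capturing and rendering ink with the platform InkCanvas control (assuming an InkCanvas named "inkCanvas", a hypothetical name, is declared in the page's XAML), the input device types can be opted in explicitly; by default the control accepts pen input only:

using Windows.UI.Core;

public MainPage()
{
    this.InitializeComponent();

    // Let the InkCanvas render strokes from pen, touch, and mouse input.
    // (The InkCanvas default is pen-only input.)
    inkCanvas.InkPresenter.InputDeviceTypes =
        CoreInputDeviceTypes.Pen |
        CoreInputDeviceTypes.Touch |
        CoreInputDeviceTypes.Mouse;
}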
Where pen and touch input diverge is the ability for touch to emulate direct manipulation of UI elements on the
screen through physical gestures performed on those objects (such as swiping, sliding, dragging, rotating, and so
on).
You should provide pen-specific UI commands, or affordances, to support these interactions. For example, use
previous and next (or + and -) buttons to let users flip through pages of content, or rotate, resize, and zoom
objects.
More info
Pen design guidelines

Touch
With touch, physical gestures from one or more fingers can be used to either emulate the direct manipulation of
UI elements (such as panning, rotating, resizing, or moving), as an alternative input method (similar to mouse or
pen), or as a complementary input method (to modify aspects of other input, such as smudging an ink stroke
drawn with a pen). Tactile experiences such as this can provide more natural, real-world sensations for users as
they interact with elements on a screen.
Device support
Phones and phablets
Tablet
PCs and laptops
Surface Hub
IoT

Typical usage
Support for touch input can vary significantly, depending on the device.
Some devices don't support touch at all, some devices support a single touch contact, while others support multi-
touch (two or more contacts).
Most devices that support multi-touch input typically recognize ten unique, concurrent contacts.
Surface Hub devices recognize 100 unique, concurrent touch contacts.
In general, touch is:
Single user, unless being used with a Microsoft Team device like Surface Hub, where collaboration is
emphasized.
Not constrained to device orientation.
Used for all interactions, including text input (touch keyboard) and inking (app-configured).
More info
Touch design guidelines

Touchpad
A touchpad combines indirect multi-touch input with the precision input of a pointing device, such as a
mouse. This combination makes the touchpad suited to both a touch-optimized UI and the smaller targets of
productivity apps.
Device support
PCs and laptops
IoT

Typical usage
Touchpads typically support a set of touch gestures that provide support similar to touch for direct manipulation
of objects and UI.
Because of this convergence of interaction experiences supported by touchpads, we recommend also providing
mouse-style UI commands, or affordances, rather than relying solely on support for touch input. For example, use
previous and next (or + and -) buttons to let users flip through pages of content, or rotate, resize, and zoom
objects.
More info
Touchpad design guidelines

Keyboard
A keyboard is the primary input device for text, and is often indispensable to people with certain disabilities or
users who consider it a faster and more efficient way to interact with an app.
With Continuum for Phone, a new experience for compatible Windows 10 mobile devices, users can connect their
phones to a mouse and keyboard to make their phones work like a laptop.
Device support
Phones and phablets
Tablet
PCs and laptops
Surface Hub
IoT
Xbox
HoloLens

Typical usage
Users can interact with Universal Windows apps through a hardware keyboard and two software keyboards: the
On-Screen Keyboard (OSK) and the touch keyboard.
The OSK is a visual, software keyboard that you can use instead of the physical keyboard to type and enter data
using touch, mouse, pen/stylus or other pointing device (a touch screen is not required). The OSK is provided for
systems that don't have a physical keyboard, or for users whose mobility impairments prevent them from using
traditional physical input devices. The OSK emulates most, if not all, the functionality of a hardware keyboard.
The touch keyboard is a visual, software keyboard used for text entry with touch input. The touch keyboard is not
a replacement for the OSK as it is used for text input only (it doesn't emulate the hardware keyboard) and appears
only when a text field or other editable text control gets focus. The touch keyboard does not support app or
system commands.
Note The OSK has priority over the touch keyboard, which won't be shown if the OSK is present.
In general, a keyboard is:
Single user.
Not constrained to device orientation.
Used for text input, navigation, gameplay, and accessibility.
Always available, either proactively or reactively.
More info
Keyboard design guidelines

Mouse
A mouse is best suited for productivity apps and high-density UI where user interactions require pixel-level
precision for targeting and commanding.
Device support
Phones and phablets
Tablet
PCs and laptops
Surface Hub
IoT

Typical usage
Mouse input can be modified with the addition of various keyboard keys (Ctrl, Shift, Alt, and so on). These keys
can be combined with the left mouse button, the right mouse button, the wheel button, and the X buttons for an
expanded mouse-optimized command set. (Some Microsoft mouse devices have two additional buttons, referred
to as X buttons, typically used to navigate back and forward in Web browsers).
Similar to pen, where mouse and touch input diverge is the ability for touch to emulate direct manipulation of UI
elements on the screen through physical gestures performed on those objects (such as swiping, sliding, dragging,
rotating, and so on).
You should provide mouse-specific UI commands, or affordances, to support these interactions. For example, use
previous and next (or + and -) buttons to let users flip through pages of content, or rotate, resize, and zoom
objects.
More info
Mouse design guidelines

Gesture
A gesture is any form of user movement that is recognized as input for controlling or interacting with an
application. Gestures take many forms, from simply using a hand to target something on the screen, to specific,
learned patterns of movement, to long stretches of continuous movement using the entire body. Be careful when
designing custom gestures, as their meaning can vary depending on locale and culture.
Device support
PCs and laptops
IoT
Xbox
HoloLens

Typical usage
Static gesture events are fired after an interaction is complete.
Static gesture events include Tapped, DoubleTapped, RightTapped, and Holding.
Manipulation gesture events indicate an ongoing interaction. They start firing when the user touches an element
and continue until the user lifts their finger(s), or the manipulation is canceled.
Manipulation events include multi-touch interactions such as zooming, panning, or rotating, and
interactions that use inertia and velocity data such as dragging. (The information provided by the
manipulation events doesn't identify the interaction, but rather provides data such as position, translation
delta, and velocity.)
Pointer events such as PointerPressed and PointerMoved provide low-level details for each touch contact,
including pointer motion and the ability to distinguish press and release events.
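As a minimal sketch (assuming a XAML UIElement named "myElement", a hypothetical name), the following wires up one static gesture and one manipulation gesture for panning:

// Allow horizontal and vertical panning on the element.
var transform = new TranslateTransform();
myElement.RenderTransform = transform;
myElement.ManipulationMode =
    ManipulationModes.TranslateX | ManipulationModes.TranslateY;

// Static gesture: fires once, after the tap interaction is complete.
myElement.Tapped += (sender, e) =>
{
    // React to the completed tap here.
};

// Manipulation gesture: fires repeatedly while the interaction is ongoing.
myElement.ManipulationDelta += (sender, e) =>
{
    // e.Delta reports the change since the last event, not the user's
    // intent; here we interpret it as a translation (pan).
    transform.X += e.Delta.Translation.X;
    transform.Y += e.Delta.Translation.Y;
};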
Because of the convergence of interaction experiences supported by Windows, we recommend also providing
mouse-style UI commands or affordances rather than relying solely on support for touch input. For example, use
previous and next (or + and -) buttons to let users flip through pages of content, or rotate, resize, and zoom
objects.

Gamepad/Controller
The gamepad/controller is a highly specialized device typically dedicated to playing games. However, it is also
used to emulate basic keyboard input and provides a UI navigation experience very similar to the keyboard.
Device support
PCs and laptops
IoT
Xbox

Typical usage
Playing games and interacting with a specialized console.

Multiple inputs
Accommodating as many users and devices as possible and designing your apps to work with as many input
types (gesture, speech, touch, touchpad, mouse, and keyboard) as possible maximizes flexibility, usability, and
accessibility.
Device support
Phones and phablets
Tablet
PCs and laptops
Surface Hub
IoT
Xbox
HoloLens

Typical usage
Just as people use a combination of voice and gesture when communicating with each other, multiple types and
modes of input can also be useful when interacting with an app. However, these combined interactions need to be
as intuitive and natural as possible, because poorly coordinated input combinations can create a very confusing experience.
Surface Dial interactions

Surface Dial with Surface Studio and Pen (available for purchase at the Microsoft Store).

Overview
Windows Wheel devices, such as the Surface Dial, are a new category of input device that enable a host of
compelling and unique user interaction experiences for Windows and Windows apps.

IMPORTANT
In this topic, we refer specifically to Surface Dial interactions, but the info is applicable to all Windows Wheel devices.

Videos: Surface Dial app partners; Surface Dial for devs

With a form factor based on a rotate action (or gesture), the Surface Dial is intended as a secondary, multi-modal
input device that complements input from a primary device. In most cases, the device is manipulated by a user's
non-dominant hand while performing a task with their dominant hand (such as inking with a pen). It is not
designed for precision pointer input (like touch, pen, or mouse).
The Surface Dial also supports both a press and hold action and a click action. Press and hold has a single function:
display a menu of commands. If the menu is active, the rotate and click input is processed by the menu. Otherwise,
the input is passed to your app for processing.
As with all Windows input devices, you can customize and tailor the Surface Dial interaction experience
to suit the functionality in your apps.
TIP
Used together, the Surface Dial and the new Surface Studio can provide an even more distinctive user experience.
In addition to the default press and hold menu experience described, the Surface Dial can also be placed directly on the
screen of the Surface Studio. This enables a special "on-screen" menu.
By detecting both the contact location and bounds of the Surface Dial, the system uses this info to handle occlusion by the
device and display a larger version of the menu that wraps around the outside of the Dial. This same info can also be used by
your app to adapt the UI for both the presence of the device and its anticipated usage, such as the placement of the user's
hand and arm.

Surface Dial off-screen menu and Surface Dial on-screen menu

System integration
The Surface Dial is tightly integrated with Windows and supports a set of built-in tools on the menu: system
volume, scroll, zoom in/out, and undo/redo.
This collection of built-in tools adapts to the current system context to include:
A system brightness tool when the user is on the Windows Desktop
A previous/next track tool when media is playing
In addition to this general platform support, the Surface Dial is also tightly integrated with the Windows Ink
platform controls (InkCanvas and InkToolbar).

Surface Dial with Surface Pen


When used with the Surface Dial, these controls enable additional functionality for modifying ink attributes and
controlling the ink toolbar's ruler stencil.
When you open the Surface Dial Menu in an inking application that uses the ink toolbar, the menu now includes
tools for controlling pen type and brush thickness. When the ruler is enabled, a corresponding tool is added to the
menu that lets the device control the position and angle of the ruler.
Surface Dial menu with pen selection tool for the Windows Ink toolbar

Surface Dial menu with stroke size tool for the Windows Ink toolbar
Surface Dial menu with ruler tool for the Windows Ink toolbar

User customization
Users can customize some aspects of their Dial experience through the Windows Settings -> Devices -> Wheel
page, including default tools, vibration (or haptic feedback), and writing (or dominant) hand.
When customizing the Surface Dial user experience, you should always ensure that a particular function or
behavior is available and enabled by the user.

Custom tools
Here we discuss both UX and developer guidance for customizing the tools exposed on the Surface Dial menu.
UX guidance
Ensure your tools correspond to the current context
When you make it clear and intuitive what a tool does and how the Surface Dial interaction works, you help users learn quickly and stay focused on their task.
Minimize the number of app tools as much as possible
The Surface Dial menu has room for seven items. If there are eight or more items, the user needs to turn the Dial to
see which tools are available in an overflow flyout, making the menu difficult to navigate and tools difficult to
discover and select.
We recommend providing a single custom tool for your app or app context. Doing so enables you to set that tool
based on what the user is doing without requiring them to activate the Surface Dial menu and select a tool.
Dynamically update the collection of tools
Because Surface Dial menu items do not support a disabled state, you should dynamically add and remove tools
(including built-in, default tools) based on user context (current view or focused window). If a tool is not relevant to
the current activity or it's redundant, remove it.

IMPORTANT
When you add an item to the menu, ensure the item does not already exist.
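A minimal sketch of that check, where myController and myItem are the RadialController and RadialControllerMenuItem objects from the "Add a custom tool" example later in this topic:

// Add the custom tool only if it isn't already on the Surface Dial menu.
if (!myController.Menu.Items.Contains(myItem))
{
    myController.Menu.Items.Add(myItem);
}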

Don't remove the built-in system volume setting tool


Volume control is almost always required by users. They might be listening to music while using your app, so
volume and next track tools should always be accessible from the Surface Dial menu. (The next track tool is
automatically added to the menu when media is playing.)
Be consistent with menu organization
This helps users with discovering and learning what tools are available when using your app, and helps improve
their efficiency when switching tools.
Provide high-quality icons consistent with the built-in icons
Icons can convey professionalism and excellence, and inspire trust in users.
Provide a high-quality 64 x 64 pixel PNG image (44 x 44 is the smallest supported)
Ensure the background is transparent
The icon should fill most of the image
A white icon should have a black outline to be visible in high contrast mode

Icon with alpha background; icon displayed on wheel menu with default theme; icon displayed on wheel menu with High Contrast White theme

Use concise and descriptive names


The tool name is displayed in the tool menu along with the tool icon and is also used by screen readers.
Names should be short to fit inside the central circle of the wheel menu
Names should clearly identify the primary action (a complementary action can be implied):
Scroll indicates the effect of both rotation directions
Undo specifies a primary action, but redo (the complementary action) can be inferred and easily
discovered by the user
Developer guidance
You can customize the Surface Dial experience to complement the functionality in your apps through a
comprehensive set of Windows Runtime APIs.
As previously mentioned, the default Surface Dial menu is pre-populated with a set of built-in tools covering a
broad range of basic system features (system volume, system brightness, scroll, zoom, undo, and media control
when the system detects ongoing audio or video playback). However, these default tools might not provide the
functionality required by your app.
In the following sections, we describe how to add a custom tool to the Surface Dial menu and specify which built-in
tools are exposed.
Add a custom tool
In this example, we add a basic custom tool that passes the input data from both the rotation and click events to
some XAML UI controls.
1. First, we declare our UI (just a slider and toggle button) in XAML.

The sample app UI

<Grid Background="{ThemeResource ApplicationPageBackgroundThemeBrush}">
<Grid.RowDefinitions>
<RowDefinition Height="Auto"/>
<RowDefinition Height="*"/>
</Grid.RowDefinitions>
<StackPanel x:Name="HeaderPanel"
Orientation="Horizontal"
Grid.Row="0">
<TextBlock x:Name="Header"
Text="RadialController customization sample"
VerticalAlignment="Center"
Style="{ThemeResource HeaderTextBlockStyle}"
Margin="10,0,0,0" />
</StackPanel>
<StackPanel Orientation="Vertical"
VerticalAlignment="Center"
HorizontalAlignment="Center"
Grid.Row="1">
<!-- Slider for rotation input -->
<Slider x:Name="RotationSlider"
Width="300"
HorizontalAlignment="Left"/>
<!-- Switch for click input -->
<ToggleSwitch x:Name="ButtonToggle"
HorizontalAlignment="Left"/>
</StackPanel>
</Grid>

2. Then, in code-behind, we add a custom tool to the Surface Dial menu and declare the RadialController
input handlers.
We get a reference to the RadialController object for the Surface Dial (myController) by calling
CreateForCurrentView.
We then create an instance of a RadialControllerMenuItem (myItem) by calling
RadialControllerMenuItem.CreateFromIcon.
Next, we append that item to the collection of menu items.
We declare the input event handlers (ButtonClicked and RotationChanged) for the RadialController
object.
Finally, we define the event handlers.

public sealed partial class MainPage : Page
{
    RadialController myController;

    public MainPage()
    {
        this.InitializeComponent();

        // Create a reference to the RadialController.
        myController = RadialController.CreateForCurrentView();

        // Create an icon for the custom tool.
        RandomAccessStreamReference icon =
            RandomAccessStreamReference.CreateFromUri(
                new Uri("ms-appx:///Assets/StoreLogo.png"));

        // Create a menu item for the custom tool.
        RadialControllerMenuItem myItem =
            RadialControllerMenuItem.CreateFromIcon("Sample", icon);

        // Add the custom tool to the RadialController menu.
        myController.Menu.Items.Add(myItem);

        // Declare input handlers for the RadialController.
        myController.ButtonClicked += MyController_ButtonClicked;
        myController.RotationChanged += MyController_RotationChanged;
    }

    // Handler for rotation input from the RadialController.
    private void MyController_RotationChanged(RadialController sender,
        RadialControllerRotationChangedEventArgs args)
    {
        if (RotationSlider.Value + args.RotationDeltaInDegrees > 100)
        {
            RotationSlider.Value = 100;
            return;
        }
        else if (RotationSlider.Value + args.RotationDeltaInDegrees < 0)
        {
            RotationSlider.Value = 0;
            return;
        }
        RotationSlider.Value += args.RotationDeltaInDegrees;
    }

    // Handler for click input from the RadialController.
    private void MyController_ButtonClicked(RadialController sender,
        RadialControllerButtonClickedEventArgs args)
    {
        ButtonToggle.IsOn = !ButtonToggle.IsOn;
    }
}
When we run the app, we use the Surface Dial to interact with it. First, we press and hold to open the menu and
select our custom tool. Once the custom tool is activated, the slider control can be adjusted by rotating the Dial and
the switch can be toggled by clicking the Dial.

The sample app UI activated using the Surface Dial custom tool
Specify the built-in tools
You can use the RadialControllerConfiguration class to customize the collection of built-in menu items for your
app.
For example, if your app doesn't have any scrolling or zooming regions and doesn't require undo/redo
functionality, these tools can be removed from the menu. This opens space on the menu to add custom tools for
your app.

IMPORTANT
The Surface Dial menu must have at least one menu item. If all default tools are removed before you add one of your custom
tools, the default tools are restored and your tool is appended to the default collection.

Per the design guidelines, we do not recommend removing the media control tools (volume and previous/next
track) as users often have background music playing while they perform other tasks.
Here, we show how to configure the Surface Dial menu to include only media controls for volume and
next/previous track.
public MainPage()
{
...
//Remove a subset of the default system tools
RadialControllerConfiguration myConfiguration =
RadialControllerConfiguration.GetForCurrentView();
myConfiguration.SetDefaultMenuItems(new[]
{
RadialControllerSystemMenuItemKind.Volume,
RadialControllerSystemMenuItemKind.NextPreviousTrack
});
}

Custom interactions
As mentioned, the Surface Dial supports three gestures (press and hold, rotate, click) with corresponding default
interactions.
Ensure any custom interactions based on these gestures make sense for the selected action or tool.

NOTE
The interaction experience is dependent on the state of the Surface Dial menu. If the menu is active, it processes the input;
otherwise, your app does.

Press and hold


This gesture activates and shows the Surface Dial menu; there is no app functionality associated with this gesture.
By default, the menu is displayed at the center of the user's screen. However, the user can grab it and move it
anywhere they choose.

NOTE
When the Surface Dial is placed on the screen of the Surface Studio, the menu is centered at the on-screen location of the
Surface Dial.

Rotate
The Surface Dial is primarily designed to support rotation for interactions that involve smooth, incremental
adjustments to analog values or controls.
The device can be rotated both clockwise and counter-clockwise, and can also provide haptic feedback to indicate
discrete distances.

NOTE
Haptic feedback can be disabled by the user in the Windows Settings -> Devices -> Wheel page.

UX guidance
Tools with continuous or high rotational sensitivity should disable haptic feedback
Haptic feedback matches the rotational sensitivity of the active tool. We recommend disabling haptic feedback for
tools with continuous or high rotational sensitivity as the user experience can get uncomfortable.
Dominant hand should not affect rotation-based interactions
The Surface Dial cannot detect which hand is being used, but the user can set the writing (or dominant) hand in
Windows Settings -> Devices -> Pen & Windows Ink.
Locale should be considered for all rotation interactions
Maximize customer satisfaction by accommodating and adapting your interactions to locale and right-to-left layouts.
The built-in tools and commands on the Dial menu follow these guidelines for rotation-based interactions:

CONCEPTUAL DIRECTION    MAPPING TO SURFACE DIAL    CLOCKWISE ROTATION    COUNTER-CLOCKWISE ROTATION

Horizontal    Left and right mapping based on the top of the Surface Dial    Right    Left

Vertical    Up and down mapping based on the left side of the Surface Dial    Down    Up

Z-axis    In (or nearer) mapped to up/right; Out (or further) mapped to down/left    In    Out

Developer guidance
As the user rotates the device, RadialController.RotationChanged events are fired based on a delta
(RadialControllerRotationChangedEventArgs.RotationDeltaInDegrees) relative to the direction of rotation.
The sensitivity (or resolution) of the data can be set with the RadialController.RotationResolutionInDegrees
property.

NOTE
By default, a rotational input event is delivered to a RadialController object only when the device is rotated a minimum of
10 degrees. Each input event causes the device to vibrate.

In general, we recommend disabling haptic feedback when the rotation resolution is set to less than 5 degrees. This
provides a smoother experience for continuous interactions.
You can enable and disable haptic feedback for custom tools by setting the
RadialController.UseAutomaticHapticFeedback property.

NOTE
You cannot override the haptic behavior for system tools such as the volume control. For these tools, haptic feedback can be
disabled only by the user from the Wheel settings page.
Here's an example of how to customize the resolution of the rotation data and enable or disable haptic feedback.

private void MyController_ButtonClicked(RadialController sender,
    RadialControllerButtonClickedEventArgs args)
{
    ButtonToggle.IsOn = !ButtonToggle.IsOn;

    if (ButtonToggle.IsOn)
    {
        // High resolution mode.
        RotationSlider.LargeChange = 1;
        myController.UseAutomaticHapticFeedback = false;
        myController.RotationResolutionInDegrees = 1;
    }
    else
    {
        // Low resolution mode.
        RotationSlider.LargeChange = 10;
        myController.UseAutomaticHapticFeedback = true;
        myController.RotationResolutionInDegrees = 10;
    }
}

Click
Clicking the Surface Dial is similar to clicking the left mouse button (the rotation state of the device has no effect on
this action).
UX guidance
Do not map an action or command to this gesture if the user cannot easily recover from the result
Any action taken by your app based on the user clicking the Surface Dial must be reversible. Always enable the
user to easily traverse the app back stack and restore a previous app state.
Binary operations such as mute/unmute or show/hide provide good user experiences with the click gesture.
Modal tools should not be enabled or disabled by clicking the Surface Dial
Some app/tool modes can conflict with, or disable, interactions that rely on rotation. Tools such as the ruler in the
Windows Ink toolbar should be toggled on or off through other UI affordances (the Ink Toolbar provides a built-in
ToggleButton control).
For modal tools, map the active Surface Dial menu item to the target tool or to the previously selected menu item.
Developer guidance
When the Surface Dial is clicked, a RadialController.ButtonClicked event is fired. The
RadialControllerButtonClickedEventArgs include a Contact property that contains the location and bounding
area of the Surface Dial contact on the Surface Studio screen. If the Surface Dial is not in contact with the screen,
this property is null.
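For example, a minimal sketch (a variant of the click handler shown earlier) that branches on whether the Dial is on the screen:

private void MyController_ButtonClicked(RadialController sender,
    RadialControllerButtonClickedEventArgs args)
{
    if (args.Contact != null)
    {
        // On-screen click: Position is the center of the Dial in the
        // coordinate space of the app.
        Windows.Foundation.Point center = args.Contact.Position;
        // Position contextual UI around 'center' here.
    }
    else
    {
        // Off-screen click: fall back to the default click behavior.
    }
}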
On-screen
As described earlier, the Surface Dial can be used in conjunction with the Surface Studio to display the Surface Dial
menu in a special on-screen mode.
When in this mode, you can integrate and customize your Dial interaction experiences with your apps even further.
Examples of unique experiences only possible with the Surface Dial and Surface Studio include:
Displaying contextual tools (such as a color palette) based on the position of the Surface Dial, which makes
them easier to find and use
Setting the active tool based on the UI the Surface Dial is placed on
Magnifying a screen area based on location of the Surface Dial
Unique game interactions based on screen location
UX guidance
Apps should respond when the Surface Dial is detected on-screen
Visual feedback helps indicate to users that your app has detected the device on the screen of the Surface Studio.
Adjust Surface Dial-related UI based on device location
The device (and the user's body) can occlude critical UI depending on where the user places it.
Adjust Surface Dial-related UI based on user interaction
In addition to hardware occlusion, a user's hand and arm can occlude part of the screen when using the device.
The occluded area depends on which hand is being used with the device. As the device is designed to be used
primarily with the non-dominant hand, Surface Dial-related UI should adjust for the opposite hand specified by the
user (Windows Settings > Devices > Pen & Windows Ink > Choose which hand you write with setting).
Interactions should respond to Surface Dial position rather than movement
The foot of the device is designed to stick to the screen rather than slide, as it is not a precision pointing device.
Therefore, we expect it to be more common for users to lift and place the Surface Dial rather than drag it across the
screen.
Use screen position to determine user intent
Setting the active tool based on UI context, such as proximity to a control, canvas, or window, can improve the user
experience by reducing the steps required to perform a task.
Developer guidance
When the Surface Dial is placed onto the digitizer surface of the Surface Studio, a
RadialController.ScreenContactStarted event is fired and the contact info
(RadialControllerScreenContactStartedEventArgs.Contact) is provided to your app.
Similarly, if the Surface Dial is clicked when in contact with the digitizer surface of the Surface Studio, a
RadialController.ButtonClicked event is fired and the contact info
(RadialControllerButtonClickedEventArgs.Contact) is provided to your app.
The contact info (RadialControllerScreenContact) includes the X/Y coordinate of the center of the Surface Dial in
the coordinate space of the app (RadialControllerScreenContact.Position), as well as the bounding rectangle
(RadialControllerScreenContact.Bounds) in Device Independent Pixels (DIPs). This info is very useful for
providing context to the active tool and providing device-related visual feedback to the user.
In the following example, we've created a basic app with four different sections, each of which includes one slider
and one toggle. We then use the on-screen position of the Surface Dial to dictate which set of sliders and toggles
is controlled by the Surface Dial.
1. First, we declare our UI (four sections, each with a slider and toggle button) in XAML.
The sample app UI

<Grid Background="{ThemeResource ApplicationPageBackgroundThemeBrush}">
<Grid.RowDefinitions>
<RowDefinition Height="Auto"/>
<RowDefinition Height="*"/>
</Grid.RowDefinitions>
<StackPanel x:Name="HeaderPanel"
Orientation="Horizontal"
Grid.Row="0">
<TextBlock x:Name="Header"
Text="RadialController customization sample"
VerticalAlignment="Center"
Style="{ThemeResource HeaderTextBlockStyle}"
Margin="10,0,0,0" />
</StackPanel>
<Grid Grid.Row="1" x:Name="RootGrid">
<Grid.RowDefinitions>
<RowDefinition Height="*"/>
<RowDefinition Height="*"/>
</Grid.RowDefinitions>
<Grid.ColumnDefinitions>
<ColumnDefinition Width="*"/>
<ColumnDefinition Width="*"/>
</Grid.ColumnDefinitions>
<Grid x:Name="Grid0"
Grid.Row="0"
Grid.Column="0">
<StackPanel Orientation="Vertical"
VerticalAlignment="Center"
HorizontalAlignment="Center">
<!-- Slider for rotational input -->
<Slider x:Name="RotationSlider0"
Width="300"
HorizontalAlignment="Left"/>
<!-- Switch for button input -->
<ToggleSwitch x:Name="ButtonToggle0"
HorizontalAlignment="Left"/>
</StackPanel>
</Grid>
<Grid x:Name="Grid1"
Grid.Row="0"
Grid.Column="1">
<StackPanel Orientation="Vertical"
VerticalAlignment="Center"
HorizontalAlignment="Center">
<!-- Slider for rotational input -->
<Slider x:Name="RotationSlider1"
Width="300"
HorizontalAlignment="Left"/>
<!-- Switch for button input -->
<ToggleSwitch x:Name="ButtonToggle1"
HorizontalAlignment="Left"/>
</StackPanel>
</Grid>
<Grid x:Name="Grid2"
Grid.Row="1"
Grid.Column="0">
<StackPanel Orientation="Vertical"
VerticalAlignment="Center"
HorizontalAlignment="Center">
<!-- Slider for rotational input -->
<Slider x:Name="RotationSlider2"
Width="300"
HorizontalAlignment="Left"/>
<!-- Switch for button input -->
<ToggleSwitch x:Name="ButtonToggle2"
HorizontalAlignment="Left"/>
</StackPanel>
</Grid>
<Grid x:Name="Grid3"
Grid.Row="1"
Grid.Column="1">
<StackPanel Orientation="Vertical"
VerticalAlignment="Center"
HorizontalAlignment="Center">
<!-- Slider for rotational input -->
<Slider x:Name="RotationSlider3"
Width="300"
HorizontalAlignment="Left"/>
<!-- Switch for button input -->
<ToggleSwitch x:Name="ButtonToggle3"
HorizontalAlignment="Left"/>
</StackPanel>
</Grid>
</Grid>
</Grid>

2. Here's the code-behind with handlers defined for Surface Dial screen position.

Slider ActiveSlider;
ToggleSwitch ActiveSwitch;
Grid ActiveGrid;

public MainPage()
{
    ...

    myController.ScreenContactStarted +=
        MyController_ScreenContactStarted;
    myController.ScreenContactContinued +=
        MyController_ScreenContactContinued;
    myController.ScreenContactEnded +=
        MyController_ScreenContactEnded;
    myController.ControlLost += MyController_ControlLost;

    // Set initial grid for Surface Dial input.
    ActiveGrid = Grid0;
    ActiveSlider = RotationSlider0;
    ActiveSwitch = ButtonToggle0;
}

private void MyController_ScreenContactStarted(RadialController sender,
    RadialControllerScreenContactStartedEventArgs args)
{
    // Find grid at contact location; update visuals, selection.
    ActivateGridAtLocation(args.Contact.Position);
}

private void MyController_ScreenContactContinued(RadialController sender,
    RadialControllerScreenContactContinuedEventArgs args)
{
    // If a new grid is under the contact location, update visuals, selection.
    if (!VisualTreeHelper.FindElementsInHostCoordinates(
        args.Contact.Position, RootGrid).Contains(ActiveGrid))
    {
        ActiveGrid.Background = new
            SolidColorBrush(Windows.UI.Colors.White);
        ActivateGridAtLocation(args.Contact.Position);
    }
}

private void MyController_ScreenContactEnded(RadialController sender, object args)
{
    // Return grid color to normal when contact leaves the screen.
    ActiveGrid.Background = new
        SolidColorBrush(Windows.UI.Colors.White);
}

private void MyController_ControlLost(RadialController sender, object args)
{
    // Return grid color to normal when focus is lost.
    ActiveGrid.Background = new
        SolidColorBrush(Windows.UI.Colors.White);
}

private void ActivateGridAtLocation(Point Location)
{
    var elementsAtContactLocation =
        VisualTreeHelper.FindElementsInHostCoordinates(Location,
            RootGrid);

    foreach (UIElement element in elementsAtContactLocation)
    {
        if (element as Grid == Grid0)
        {
            ActiveSlider = RotationSlider0;
            ActiveSwitch = ButtonToggle0;
            ActiveGrid = Grid0;
            ActiveGrid.Background = new SolidColorBrush(
                Windows.UI.Colors.LightGoldenrodYellow);
            return;
        }
        else if (element as Grid == Grid1)
        {
            ActiveSlider = RotationSlider1;
            ActiveSwitch = ButtonToggle1;
            ActiveGrid = Grid1;
            ActiveGrid.Background = new SolidColorBrush(
                Windows.UI.Colors.LightGoldenrodYellow);
            return;
        }
        else if (element as Grid == Grid2)
        {
            ActiveSlider = RotationSlider2;
            ActiveSwitch = ButtonToggle2;
            ActiveGrid = Grid2;
            ActiveGrid.Background = new SolidColorBrush(
                Windows.UI.Colors.LightGoldenrodYellow);
            return;
        }
        else if (element as Grid == Grid3)
        {
            ActiveSlider = RotationSlider3;
            ActiveSwitch = ButtonToggle3;
            ActiveGrid = Grid3;
            ActiveGrid.Background = new SolidColorBrush(
                Windows.UI.Colors.LightGoldenrodYellow);
            return;
        }
    }
}

When we run the app, we use the Surface Dial to interact with it. First, we place the device on the Surface Studio
screen, which the app detects and associates with the lower right section (see image). We then press and hold the
Surface Dial to open the menu and select our custom tool. Once the custom tool is activated, the slider control can
be adjusted by rotating the Surface Dial and the switch can be toggled by clicking the Surface Dial.

The sample app UI activated using the Surface Dial custom tool

Summary
This topic provides an overview of the Surface Dial input device with UX and developer guidance on how to
customize the user experience for off-screen scenarios as well as on-screen scenarios when used with Surface
Studio.
Feedback
Please send your questions, suggestions, and feedback to radialcontroller@microsoft.com.

Related articles
API reference
RadialController class
RadialControllerButtonClickedEventArgs class
RadialControllerConfiguration class
RadialControllerControlAcquiredEventArgs class
RadialControllerMenu class
RadialControllerMenuItem class
RadialControllerRotationChangedEventArgs class
RadialControllerScreenContact class
RadialControllerScreenContactContinuedEventArgs class
RadialControllerScreenContactStartedEventArgs class
RadialControllerMenuKnownIcon enum
RadialControllerSystemMenuItemKind enum
Samples
Universal Windows Platform samples (C# and C++)
Windows classic desktop sample
Cortana interactions in UWP apps

Cortana offers a robust and comprehensive extensibility framework that enables you to seamlessly incorporate
functionality from your app or service into the Cortana experience.

We've moved
All developer documentation for Cortana features and services is now available through the Cortana dev center.
To get started, see the Overview of Cortana extensibility.
To learn how to extend Cortana with functionality from a UWP app using voice commands, see Cortana voice
commands.

Related articles
VCD elements and attributes v1.2
Designers
Speech design guidelines
Cortana design guidelines for voice commands
Samples
Cortana voice command sample
Cortana design guidelines

This topic has been moved to https://msdn.microsoft.com/en-us/cortana/voicecommands/voicecommand-design-guidelines.
GitHub: https://github.com/Microsoft/cortana-docs/blob/master/docs/voicecommands/voicecommand-design-guidelines.md
Activate a foreground app with voice commands
through Cortana

This topic has been moved to https://msdn.microsoft.com/en-us/cortana/voicecommands/launch-a-foreground-app-with-voice-commands-in-cortana.
GitHub: https://github.com/Microsoft/cortana-docs/blob/master/docs/voicecommands/launch-a-foreground-app-with-voice-commands-in-cortana.md
Dynamically modify VCD phrase lists

This topic has been moved to https://msdn.microsoft.com/en-us/cortana/voicecommands/dynamically-modify-voice-command-definition--vcd--phrase-lists.
GitHub: https://github.com/Microsoft/cortana-docs/blob/master/docs/voicecommands/dynamically-modify-voice-command-definition--vcd--phrase-lists.md
Activate a background app with voice commands
through Cortana

This topic has been moved to https://msdn.microsoft.com/en-us/cortana/voicecommands/launch-a-background-app-with-voice-commands-in-cortana.
GitHub: https://github.com/Microsoft/cortana-docs/blob/master/docs/voicecommands/launch-a-background-app-with-voice-commands-in-cortana.md
Interact with a background app in Cortana

This topic has been moved to https://msdn.microsoft.com/en-us/cortana/voicecommands/interact-with-a-background-app-in-cortana.
GitHub: https://github.com/Microsoft/cortana-docs/blob/master/docs/voicecommands/interact-with-a-background-app-in-cortana.md
Deep link from Cortana to a background app

This topic has been moved to https://msdn.microsoft.com/en-us/cortana/voicecommands/deep-link-into-your-app-from-cortana.
GitHub: https://github.com/Microsoft/cortana-docs/blob/master/docs/voicecommands/deep-link-into-your-app-from-cortana.md
Support natural language voice commands in
Cortana

This topic has been moved to https://msdn.microsoft.com/en-us/cortana/voicecommands/support-natural-language-voice-commands-in-cortana.
GitHub: https://github.com/Microsoft/cortana-docs/blob/master/docs/voicecommands/support-natural-language-voice-commands-in-cortana.md
Keyboard interactions

Keyboard input is an important part of the overall user interaction experience for apps. The keyboard is
indispensable to people with certain disabilities or users who just consider it a more efficient way to interact with
an app. For example, users should be able to navigate your app by using Tab and arrow keys, activate UI elements
by using Spacebar and Enter, and access commands by using keyboard shortcuts.

Important APIs
KeyDown
KeyUp
KeyRoutedEventArgs

A well-designed keyboard UI is an important aspect of software accessibility. It enables users with vision
impairments or who have certain motor disabilities to navigate an app and interact with its features. Such users
might not be able to operate a mouse and instead rely on various assistive technologies such as keyboard
enhancement tools, on-screen keyboards, screen enlargers, screen readers, and voice input utilities.
Users can interact with universal apps through a hardware keyboard and two software keyboards: the On-Screen
Keyboard (OSK) and the touch keyboard.
On-Screen Keyboard
The On-Screen Keyboard is a visual, software keyboard that you can use instead of the physical keyboard to type
and enter data using touch, mouse, pen/stylus or other pointing device (a touch screen is not required). The On-
Screen Keyboard is provided for systems that don't have a physical keyboard, or for users whose mobility
impairments prevent them from using traditional physical input devices. The On-Screen Keyboard emulates most,
if not all, the functionality of a hardware keyboard.
The On-Screen Keyboard can be turned on from the Keyboard page in Settings > Ease of access.
Note The On-Screen Keyboard has priority over the touch keyboard, which won't be shown if the On-Screen
Keyboard is present.

On-Screen Keyboard

Touch keyboard
The touch keyboard is a visual, software keyboard used for text entry with touch input. It is not a replacement for
the On-Screen Keyboard as it's used for text input only (it doesn't emulate the hardware keyboard).
Depending on the device, the touch keyboard appears when a text field or other editable text control gets focus, or
when the user manually enables it through the Notification Center:

Note The user might have to go to the Tablet mode screen in Settings > System and turn on "Make Windows
more touch-friendly when using your device as a tablet" to enable the automatic appearance of the touch
keyboard.
If your app sets focus programmatically to a text input control, the touch keyboard is not invoked. This eliminates
unexpected behaviors not instigated directly by the user. However, the keyboard does automatically hide when
focus is moved programmatically to a non-text input control.
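For example, as a minimal sketch (assuming a TextBox named "myTextBox", a hypothetical name), moving focus in code uses FocusState.Programmatic, which, per the behavior described above, does not invoke the touch keyboard:

// Move focus without invoking the touch keyboard.
myTextBox.Focus(FocusState.Programmatic);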
The touch keyboard typically remains visible while the user navigates between controls in a form. This behavior
can vary based on the other control types within the form.
The following is a list of non-edit controls that can receive focus during a text entry session using the touch
keyboard without dismissing the keyboard. Rather than needlessly churn the UI and potentially disorient the user,
the touch keyboard remains in view because the user is likely to go back and forth between these controls and text
entry with the touch keyboard.
Check box
Combo box
Radio button
Scroll bar
Tree
Tree item
Menu
Menu bar
Menu item
Toolbar
List
List item
Here are examples of different modes for the touch keyboard. The first image is the default layout, the second is
the thumb layout (which might not be available in all languages).

The touch keyboard in default layout mode:


The touch keyboard in expanded layout mode:

The touch keyboard in default thumb layout mode:

The touch keyboard in numeric thumb layout mode:

Successful keyboard interactions enable users to accomplish basic app scenarios using only the keyboard; that is,
users can reach all interactive elements and activate default functionality. A number of factors can affect the degree
of success, including keyboard navigation, access keys for accessibility, and accelerator (or shortcut) keys for
advanced users.
Note The touch keyboard does not support toggle keys and most system commands (see Patterns).

Navigation
To use a control (including navigation elements) with the keyboard, the control must have focus. One way for a
control to receive keyboard focus is to make it accessible via tab navigation. A well designed keyboard navigation
model provides a logical and predictable tab order that enables a user to explore and use your app quickly and
efficiently.
All interactive controls should have tab stops (unless they are in a group), whereas non-interactive controls, such as
labels, should not.
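As a minimal sketch, assuming a TextBox named NameInput, a Button named SubmitButton, and a secondary Button named HelpButton declared in XAML (all hypothetical names), tab behavior can be set in code (or equally in markup):

// Explicit, predictable tab order for interactive controls.
NameInput.TabIndex = 1;
SubmitButton.TabIndex = 2;

// Remove a control from tab navigation without disabling it.
HelpButton.IsTabStop = false;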
A set of related controls can be made into a control group and assigned a single tab stop. Control groups are used
for sets of controls that behave like a single control, such as radio buttons. They can also be used when there are too
many controls to navigate efficiently with the Tab key alone. The arrow keys, Home, End, Page Up, and Page Down
move input focus among the controls within a group (it is not possible to navigate out of a control group using
these keys).
You should set initial keyboard focus on the element that users will intuitively (or most likely) interact with first
when your app starts. Often, this is the main content view of the app so that a user can immediately start using the
arrow keys to scroll the app content.
Don't set initial keyboard focus on an element with potentially negative, or even disastrous, results. This helps
prevent accidental loss of data or system access.
Try to rank and present the most important commands, controls, and content first in both the tab order and the
display order (or visual hierarchy). However, the actual display position can depend on the parent layout container
and certain properties of the child elements that influence the layout. In particular, layouts that use a grid metaphor
or a table metaphor can have a reading order quite different from the tab order. This is not always a problem, but
you should test your app's functionality, both as a touchable UI and as a keyboard-accessible UI.
Tab order should follow reading order, whenever possible. This can reduce confusion and is dependent on locale
and language.
Associate keyboard buttons with appropriate UI (back and forward buttons) in your app.
Try to make navigating back to the start screen of your app and between key content as easy and straightforward
as possible.
Use the arrow keys as keyboard shortcuts for proper inner navigation among child elements of composite
elements. If tree view nodes have separate child elements for handling expand/collapse and node activation, use
the left and right arrow keys to provide keyboard expand/collapse functionality. This is consistent with the
platform controls.
Because the touch keyboard occludes a large portion of the screen, the Universal Windows Platform (UWP)
ensures that the input field with focus scrolls into view as a user navigates through the controls on the form,
including controls that are not currently in view. Custom controls should emulate this behavior.

In some cases, there are UI elements that should stay on the screen the entire time. Design the UI so that the form
controls are contained in a panning region and the important UI elements are static. For example:

Activation
A control can be activated in a number of different ways, whether it currently has focus or not.
Spacebar, Enter, and Esc
The spacebar should activate the control with input focus. The Enter key should activate a default control or the
control with input focus. A default control is the control with initial focus or one that responds exclusively to the
Enter key (typically it changes with input focus). In addition, the Esc key should close or exit transitory UI, such as
menus and dialogs.
The Calculator app shown here uses the spacebar to activate the button with focus, locks the Enter key to the =
button, and locks the Esc key to the C button.
Keyboard modifiers
Keyboard modifiers fall into the following categories:

CATEGORY DESCRIPTION

Shortcut key Perform a common action without UI, such as "Ctrl+S" for Save. Implement keyboard shortcuts for key app functionality. Not every command has, or requires, a shortcut.

Access key/Hot key Assigned to every visible, top-level control, such as "Alt+F" for the File menu. An access key does not invoke or activate a command.

Accelerator key Perform default system or app-defined commands, such as "Alt+PrtScrn" for screen capture, "Alt+Tab" to switch apps, or "F1" for help. A command associated with an accelerator key does not have to be a menu item.

Application key/Menu key Show the context menu.

Windows key/Command key Activate system commands such as System Menu, Lock Screen, or Show Desktop.

Access keys and accelerator keys support interaction with controls directly instead of navigating to them using the
Tab key.

While some controls have intrinsic labels, such as command buttons, check boxes, and radio buttons, other
controls have external labels, such as list views. For controls with external labels, the access key is assigned to
the label, which, when invoked, sets focus to an element or value within the associated control.
The example here shows the access keys for the Page Layout tab in Word.

Here, the Indent Left text field value is highlighted after entering the access key identified in the associated label.

Usability and accessibility


A well-designed keyboard interaction experience is an important aspect of software accessibility. It enables users
with vision impairments or who have certain motor disabilities to navigate an app and interact with its features.
Such users might be unable to operate a mouse and must, instead, rely on various assistive technologies that
include keyboard enhancement tools and on-screen keyboards (along with screen enlargers, screen readers, and
voice input utilities). For these users, comprehensiveness is more important than consistency.
Experienced users often have a strong preference for using the keyboard, because keyboard-based commands can
be entered more quickly and don't require removing their hands from the keyboard. For these users, efficiency and
consistency are crucial; comprehensiveness is important only for the most frequently used commands.
There are subtle distinctions when designing for usability and accessibility, which is why two different keyboard
access mechanisms are supported.
Access keys have the following characteristics:
An access key is a shortcut to a UI element in your app.
They use the Alt key plus an alphanumeric key.
They are primarily for accessibility.
They are assigned to all menus and most dialog box controls.
They aren't intended to be memorized, so they are documented directly in the UI by underlining the
corresponding control label character.
They have effect only in the current window, and navigate to the corresponding menu item or control.
They aren't assigned consistently because they can't always be. However, access keys should be assigned
consistently for commonly used commands, especially commit buttons.
They are localized.
Because access keys aren't intended to be memorized, they are assigned to a character that is early in the label to
make them easy to find, even if there is a keyword that appears later in the label.
In contrast, accelerator keys have the following characteristics:
An accelerator key is a shortcut to an app command.
They primarily use Ctrl and Function key sequences (Windows system shortcut keys also use Alt+non-
alphanumeric keys and the Windows logo key).
They are primarily for efficiency for advanced users.
They are assigned only to the most commonly used commands.
They are intended to be memorized, and are documented only in menus, tooltips, and Help.
They have effect throughout the entire program, but have no effect if they don't apply.
They must be assigned consistently because they are memorized and not directly documented.
They aren't localized.
Because accelerator keys are intended to be memorized, the most frequently used accelerator keys ideally use
letters from the first or most memorable characters within the command's keywords, such as Ctrl+C for Copy and
Ctrl+Q for Request.
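As a minimal sketch of wiring up such an accelerator, an app can watch for Ctrl plus a character key at the window level (SaveDocument is a hypothetical app method):

using Windows.System;
using Windows.UI.Core;

// Detect a Ctrl+S accelerator at the window level.
Window.Current.CoreWindow.KeyDown += (window, args) =>
{
    // Check whether Ctrl is currently held down.
    bool ctrlDown = (window.GetKeyState(VirtualKey.Control) &
        CoreVirtualKeyStates.Down) == CoreVirtualKeyStates.Down;

    if (ctrlDown && args.VirtualKey == VirtualKey.S)
    {
        args.Handled = true;   // keep the key from routing further
        SaveDocument();        // hypothetical app method
    }
};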
Users should be able to accomplish all tasks supported by your app using only the hardware keyboard or the On-
Screen Keyboard.
You should provide an easy way for users who rely on screen readers and other assistive technology to discover
your app's accelerator keys. Communicate accelerator keys by using tooltips, accessible names, accessible
descriptions, or some other form of on-screen communication. At a minimum, access and accelerator keys should
be well documented in your app's Help content.
Don't assign well-known or standard accelerator keys to other functionality. For example, Ctrl+F is typically used
for find or search.
Don't bother trying to assign access keys to all interactive controls in a dense UI. Just ensure the most important
and most frequently used controls have access keys, or use control groups and assign an access key to the control group label.
Don't change commands using keyboard modifiers. Doing so is undiscoverable and can cause confusion.
Don't disable a control while it has input focus. This can interfere with keyboard input.
To ensure successful keyboard interaction experiences, it is critical to test your app thoroughly and exclusively with
the keyboard.

Text input
Always query the device capabilities when relying on keyboard input. On some devices (such as phone), the touch
keyboard can only be used for text input as it does not provide many of the accelerators or command keys found
on a hardware keyboard (such as Alt, the function keys, or the Windows logo key).
Don't make users navigate the app using the touch keyboard. Depending on the control getting focus, the touch
keyboard might get dismissed.
Try to display the keyboard throughout the entire interaction with your form. This eliminates UI churn that can
disorient the user in the middle of a form or text entry flow.
Ensure that users can always see the input field that they're typing into. The touch keyboard occludes half of the
screen, so the input field with focus should scroll into view as the user traverses the form.
A standard hardware keyboard or OSK consists of seven types of keys, each supporting unique functionality:
Character key: sends a literal character to the window with input focus.
Modifier key: alters the function of a primary key when pressed simultaneously, such as Ctrl, Alt, Shift, and the
Windows logo key.
Navigation key: moves input focus or text input location, such as the Tab, Home, End, Page Up, Page Down, and
directional arrow keys.
Editing key: manipulates text, such as the Shift, Tab, Enter, Insert, Backspace, and Delete keys.
Function key: performs a special function, such as F1 through F12 keys.
Toggle key: puts the system into a mode, such as Caps Lock, ScrLk, and Num Lock keys.
Command key: performs a system task or command activation, such as Spacebar, Enter, Esc, Pause/Break, and
Print Screen keys.
In addition to these categories, a secondary class of keys and key combinations exist that can be used as shortcuts
to app functionality:
Access key: exposes controls or menu items by pressing the Alt key with a character key, indicated by
underlining of the access key character assignment in a menu, or displaying of the access key character(s) in an
overlay.
Accelerator key: exposes app commands by pressing a function key or the Ctrl key with a character key. Your
app might or might not have UI that corresponds to the command.
Another class of key combinations, known as secure attention sequence (SAS), cannot be intercepted by an app.
This is a security feature intended to protect the user's system during login, and includes Ctrl-Alt-Del and Win-L.
The Notepad app is shown here with the expanded File menu that includes both access keys and accelerator keys.

Keyboard commands
The following is a comprehensive list of the keyboard interactions provided across the various devices that support
keyboard input. Some devices and platforms require native keystrokes and interactions; these are noted.
When designing custom controls and interactions, use this keyboard language consistently to make your app feel
familiar, dependable, and easy to learn.
Don't redefine the default keyboard shortcuts.
The following tables list frequently used keyboard commands. For a complete list of keyboard commands, see
Windows Keyboard Shortcut Keys.
Navigation commands
Back: Alt+Left or the back button on special keyboards
Forward: Alt+Right
Up: Alt+Up
Cancel or Escape from current mode: Esc
Move through items in a list: Arrow key (Left, Right, Up, Down)
Jump to next list of items: Ctrl+Left
Semantic zoom: Ctrl++ or Ctrl+-
Jump to a named item in a collection: Start typing item name
Next page: Page Up, Page Down, or Spacebar
Next tab: Ctrl+Tab
Previous tab: Ctrl+Shift+Tab
Open app bar: Windows+Z
Activate or navigate into an item: Enter
Select: Spacebar
Continuously select: Shift+Arrow key
Select all: Ctrl+A

Common commands
Pin an item: Ctrl+Shift+1
Save: Ctrl+S
Find: Ctrl+F
Print: Ctrl+P
Copy: Ctrl+C
Cut: Ctrl+X
New item: Ctrl+N
Paste: Ctrl+V
Open: Ctrl+O
Open address (for example, a URL in Internet Explorer): Ctrl+L or Alt+D

Media navigation commands
Play/Pause: Ctrl+P
Next item: Ctrl+F
Previous item: Ctrl+B

Note: The media navigation key commands for Play/Pause and Next item are the same as the key commands for
Print and Find, respectively. Common commands should take priority over media navigation commands. For
example, if an app both plays media and prints, the key command Ctrl+P should print.

Visual feedback
Use focus rectangles only with keyboard interactions. If the user initiates a touch interaction, make the keyboard UI
gradually fade away. This keeps the UI clean and uncluttered.
Don't display visual feedback if an element doesn't support interaction (such as static text). Again, this keeps the UI
clean and uncluttered.
Try to display visual feedback concurrently for all elements that represent the same input target.
Try to provide on-screen buttons (such as + and -) as hints for emulating touch-based manipulations such as
panning, rotating, zooming, and so on.
For more general guidance on visual feedback, see Guidelines for visual feedback.

Keyboard events and focus


The following keyboard events can occur for both hardware and touch keyboards.

KeyDown: Occurs when a key is pressed.
KeyUp: Occurs when a key is released.

Important
Some Windows Runtime controls handle input events internally. In these cases, it might appear that an input event
doesn't occur because your event listener doesn't invoke the associated handler. Typically, this subset of keys is
processed by the class handler to provide built-in support for basic keyboard accessibility. For example, the Button
class overrides the OnKeyDown event for both the Space key and the Enter key (as well as OnPointerPressed)
and routes them to the Click event of the control. When a key press is handled by the control class, the KeyDown
and KeyUp events are not raised.
This provides a built-in keyboard equivalent for invoking the button, similar to tapping it with a finger or clicking it
with a mouse. Keys other than Space or Enter still fire KeyDown and KeyUp events. For more info about how
class-based handling of events works (specifically, the "Input event handlers in controls" section), see Events and
routed events overview.
Controls in your UI generate keyboard events only when they have input focus. An individual control gains focus
when the user clicks or taps directly on that control in the layout, or uses the Tab key to step into a tab sequence
within the content area.
You can also call a control's Focus method to force focus. This is necessary when you implement shortcut keys,
because keyboard focus is not set by default when your UI loads. For more info, see the Shortcut keys example
later in this topic.
For a control to receive input focus, it must be enabled, visible, and have IsTabStop and IsHitTestVisible property
values of true. This is the default state for most controls. When a control has input focus, it can raise and respond
to keyboard input events as described later in this topic. You can also respond to a control that is receiving or
losing focus by handling the GotFocus and LostFocus events.
By default, the tab sequence of controls is the order in which they appear in the Extensible Application Markup
Language (XAML). However, you can modify this order by using the TabIndex property. For more info, see
Implementing keyboard accessibility.
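For example, here is a minimal sketch in which TabIndex values reverse the default document-order tab sequence (the control names and content are illustrative):

<StackPanel>
    <!-- Reached second when tabbing, because TabIndex is 2. -->
    <Button x:Name="SaveButton" Content="Save" TabIndex="2"/>
    <!-- Reached first when tabbing, because TabIndex is 1. -->
    <Button x:Name="CancelButton" Content="Cancel" TabIndex="1"/>
</StackPanel>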

Keyboard event handlers


An input event handler implements a delegate that provides the following information:
The sender of the event. The sender reports the object where the event handler is attached.
Event data. For keyboard events, that data will be an instance of KeyRoutedEventArgs. The delegate for
handlers is KeyEventHandler. The most relevant properties of KeyRoutedEventArgs for most handler
scenarios are Key and possibly KeyStatus.
OriginalSource. Because the keyboard events are routed events, the event data provides OriginalSource. If
you deliberately allow events to bubble up through an object tree, OriginalSource is sometimes the object of
concern rather than sender. However, that depends on your design. For more information about how you might
use OriginalSource rather than sender, see the "Keyboard Routed Events" section of this topic, or Events and
routed events overview.
Attaching a keyboard event handler
You can attach keyboard event-handler functions for any object that includes the event as a member. This includes
any UIElement derived class. The following XAML example shows how to attach handlers for the KeyUp event for
a Grid.

<Grid KeyUp="Grid_KeyUp">
...
</Grid>

You can also attach an event handler in code. For more info, see Events and routed events overview.
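For example, a minimal sketch of attaching the same handler in C# (assuming the Grid is named with x:Name="myGrid"; the name is illustrative):

// Attach the KeyUp handler in code instead of in XAML.
myGrid.KeyUp += Grid_KeyUp;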
Defining a keyboard event handler
The following example shows the incomplete event handler definition for the KeyUp event handler that was
attached in the preceding example.

void Grid_KeyUp(object sender, KeyRoutedEventArgs e)
{
    // handling code here
}

Private Sub Grid_KeyUp(ByVal sender As Object, ByVal e As KeyRoutedEventArgs)
    ' handling code here
End Sub

void MyProject::MainPage::Grid_KeyUp(
    Platform::Object^ sender,
    Windows::UI::Xaml::Input::KeyRoutedEventArgs^ e)
{
    // handling code here
}

Using KeyRoutedEventArgs
All keyboard events use KeyRoutedEventArgs for event data, and KeyRoutedEventArgs contains the following
properties:
Key
KeyStatus
Handled
OriginalSource (inherited from RoutedEventArgs)
Key
The KeyDown event is raised if a key is pressed. Likewise, KeyUp is raised if a key is released. Usually, you listen
to the events to process a specific key value. To determine which key is pressed or released, check the Key value in
the event data. Key returns a VirtualKey value. The VirtualKey enumeration includes all the supported keys.
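For example, a minimal sketch of a KeyDown handler that responds to the Enter key (the handler name is illustrative):

private void Grid_KeyDown(object sender, KeyRoutedEventArgs e)
{
    // Key identifies the pressed key as a VirtualKey value.
    if (e.Key == Windows.System.VirtualKey.Enter)
    {
        // Respond to the Enter key here.
    }
}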
Modifier keys
Modifier keys are keys such as Ctrl or Shift that users typically press in combination with other keys. Your app can
use these combinations as keyboard shortcuts to invoke app commands.
You detect shortcut key combinations by using code in your KeyDown and KeyUp event handlers. You can then
track the pressed state of the modifier keys you are interested in. When a keyboard event occurs for a non-
modifier key, you can check whether a modifier key is in the pressed state at the same time.

NOTE
The Alt key is represented by the VirtualKey.Menu value.

Shortcut keys example


The following example demonstrates how to implement shortcut keys. In this example, users can control media
playback using Play, Pause, and Stop buttons or Ctrl+P, Ctrl+A, and Ctrl+S keyboard shortcuts. The button XAML
shows the shortcuts by using tooltips and AutomationProperties properties in the button labels. This self-
documentation is important to increase the usability and accessibility of your app. For more info, see Keyboard
accessibility.
Note also that the page sets input focus to itself when it is loaded. Without this step, no control has initial input
focus, and the app does not raise input events until the user sets the input focus manually (for example, by tabbing
to or clicking a control).
<Grid KeyDown="Grid_KeyDown" KeyUp="Grid_KeyUp">
    <Grid.RowDefinitions>
        <RowDefinition Height="Auto" />
        <RowDefinition Height="Auto" />
    </Grid.RowDefinitions>

    <MediaElement x:Name="DemoMovie" Source="xbox.wmv"
        Width="500" Height="500" Margin="20" HorizontalAlignment="Center" />

    <StackPanel Grid.Row="1" Margin="10"
        Orientation="Horizontal" HorizontalAlignment="Center">

        <Button x:Name="PlayButton" Click="MediaButton_Click"
            ToolTipService.ToolTip="Shortcut key: Ctrl+P"
            AutomationProperties.AcceleratorKey="Control P">
            <TextBlock>Play</TextBlock>
        </Button>

        <Button x:Name="PauseButton" Click="MediaButton_Click"
            ToolTipService.ToolTip="Shortcut key: Ctrl+A"
            AutomationProperties.AcceleratorKey="Control A">
            <TextBlock>Pause</TextBlock>
        </Button>

        <Button x:Name="StopButton" Click="MediaButton_Click"
            ToolTipService.ToolTip="Shortcut key: Ctrl+S"
            AutomationProperties.AcceleratorKey="Control S">
            <TextBlock>Stop</TextBlock>
        </Button>

    </StackPanel>
</Grid>
// Showing implementations but not header definitions.
void MainPage::OnNavigatedTo(NavigationEventArgs^ e)
{
    (void) e; // Unused parameter
    this->Loaded += ref new RoutedEventHandler(this, &MainPage::ProgrammaticFocus);
}

void MainPage::ProgrammaticFocus(Object^ sender, RoutedEventArgs^ e)
{
    this->Focus(Windows::UI::Xaml::FocusState::Programmatic);
}

void KeyboardSupport::MainPage::MediaButton_Click(Platform::Object^ sender, Windows::UI::Xaml::RoutedEventArgs^ e)
{
    FrameworkElement^ fe = safe_cast<FrameworkElement^>(sender);
    if (fe->Name == "PlayButton") { DemoMovie->Play(); }
    if (fe->Name == "PauseButton") { DemoMovie->Pause(); }
    if (fe->Name == "StopButton") { DemoMovie->Stop(); }
}

void KeyboardSupport::MainPage::Grid_KeyDown(Platform::Object^ sender, Windows::UI::Xaml::Input::KeyRoutedEventArgs^ e)
{
    if (e->Key == VirtualKey::Control) isCtrlKeyPressed = true;
}

void KeyboardSupport::MainPage::Grid_KeyUp(Platform::Object^ sender, Windows::UI::Xaml::Input::KeyRoutedEventArgs^ e)
{
    if (e->Key == VirtualKey::Control) isCtrlKeyPressed = false;
    else if (isCtrlKeyPressed)
    {
        if (e->Key == VirtualKey::P) { DemoMovie->Play(); }
        if (e->Key == VirtualKey::A) { DemoMovie->Pause(); }
        if (e->Key == VirtualKey::S) { DemoMovie->Stop(); }
    }
}
protected override void OnNavigatedTo(NavigationEventArgs e)
{
    // Set the input focus to ensure that keyboard events are raised.
    this.Loaded += delegate { this.Focus(FocusState.Programmatic); };
}

private void MediaButton_Click(object sender, RoutedEventArgs e)
{
    switch ((sender as Button).Name)
    {
        case "PlayButton": DemoMovie.Play(); break;
        case "PauseButton": DemoMovie.Pause(); break;
        case "StopButton": DemoMovie.Stop(); break;
    }
}

private void Grid_KeyUp(object sender, KeyRoutedEventArgs e)
{
    if (e.Key == VirtualKey.Control) isCtrlKeyPressed = false;
}

private void Grid_KeyDown(object sender, KeyRoutedEventArgs e)
{
    if (e.Key == VirtualKey.Control) isCtrlKeyPressed = true;
    else if (isCtrlKeyPressed)
    {
        switch (e.Key)
        {
            case VirtualKey.P: DemoMovie.Play(); break;
            case VirtualKey.A: DemoMovie.Pause(); break;
            case VirtualKey.S: DemoMovie.Stop(); break;
        }
    }
}
Private isCtrlKeyPressed As Boolean

Protected Overrides Sub OnNavigatedTo(e As Navigation.NavigationEventArgs)
    ' Set the input focus to ensure that keyboard events are raised.
    AddHandler Me.Loaded, Sub() Me.Focus(FocusState.Programmatic)
End Sub

Private Sub Grid_KeyUp(sender As Object, e As KeyRoutedEventArgs)
    If e.Key = Windows.System.VirtualKey.Control Then
        isCtrlKeyPressed = False
    End If
End Sub

Private Sub Grid_KeyDown(sender As Object, e As KeyRoutedEventArgs)
    If e.Key = Windows.System.VirtualKey.Control Then isCtrlKeyPressed = True
    If isCtrlKeyPressed Then
        Select Case e.Key
            Case Windows.System.VirtualKey.P
                DemoMovie.Play()
            Case Windows.System.VirtualKey.A
                DemoMovie.Pause()
            Case Windows.System.VirtualKey.S
                DemoMovie.Stop()
        End Select
    End If
End Sub

Private Sub MediaButton_Click(sender As Object, e As RoutedEventArgs)
    Dim fe As FrameworkElement = CType(sender, FrameworkElement)
    Select Case fe.Name
        Case "PlayButton"
            DemoMovie.Play()
        Case "PauseButton"
            DemoMovie.Pause()
        Case "StopButton"
            DemoMovie.Stop()
    End Select
End Sub

NOTE
Setting AutomationProperties.AcceleratorKey or AutomationProperties.AccessKey in XAML provides string
information, which documents the shortcut key for invoking that particular action. The information is captured by Microsoft
UI Automation clients such as Narrator, and is typically provided directly to the user.
Setting AutomationProperties.AcceleratorKey or AutomationProperties.AccessKey does not have any action on its
own. You still need to attach handlers for KeyDown or KeyUp events in order to actually implement the keyboard
shortcut behavior in your app. Also, the underline text decoration for an access key is not provided automatically. If you
want to show underlined text in the UI, you must explicitly underline the text for the specific key in your mnemonic
using inline Underline formatting.
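For example, a minimal sketch of underlining the mnemonic character in a button label (the button content is illustrative):

<Button>
    <!-- Underline only the mnemonic character. -->
    <TextBlock><Underline>P</Underline>aste</TextBlock>
</Button>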

Keyboard routed events


Certain events are routed events, including KeyDown and KeyUp. Routed events use the bubbling routing
strategy. The bubbling routing strategy means that an event originates from a child object and is then routed up to
successive parent objects in the object tree. This presents another opportunity to handle the same event and
interact with the same event data.
Consider the following XAML example, which handles KeyUp events for a StackPanel and two Button objects. In this
case, if you release a key while focus is held by either Button object, it raises the KeyUp event. The event is then
bubbled up to the parent StackPanel.
<StackPanel KeyUp="StackPanel_KeyUp">
<Button Name="ButtonA" Content="Button A"/>
<Button Name="ButtonB" Content="Button B"/>
<TextBlock Name="statusTextBlock"/>
</StackPanel>

The following example shows how to implement the KeyUp event handler for the corresponding XAML content in
the preceding example.

void StackPanel_KeyUp(object sender, KeyRoutedEventArgs e)
{
    statusTextBlock.Text = String.Format(
        "The key {0} was pressed while focus was on {1}",
        e.Key.ToString(), (e.OriginalSource as FrameworkElement).Name);
}

Notice the use of the OriginalSource property in the preceding handler. Here, OriginalSource reports the object
that raised the event. The object cannot be the StackPanel, because the StackPanel is not a control and cannot
have focus. Only one of the two buttons within the StackPanel could possibly have raised the event, but which
one? You use OriginalSource to distinguish the actual event source object, if you are handling the event on a
parent object.
The Handled property in event data
Depending on your event handling strategy, you might want only one event handler to react to a bubbling event.
For instance, if you have a specific KeyUp handler attached to one of the Button controls, it would have the first
opportunity to handle that event. In this case, you might not want the parent panel to also handle the event. For
this scenario, you can use the Handled property in the event data.
The purpose of the Handled property in a routed event data class is to report that another handler you registered
earlier on the event route has already acted. This influences the behavior of the routed event system. When you set
Handled to true in an event handler, that event stops routing and is not sent to successive parent elements.
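For example, a minimal sketch (the handler name is illustrative) that stops a KeyUp event from bubbling to parent elements:

private void ButtonA_KeyUp(object sender, KeyRoutedEventArgs e)
{
    // Mark the event as handled so it is not routed to the parent StackPanel.
    e.Handled = true;
}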
AddHandler and already-handled keyboard events
You can use a special technique for attaching handlers that can act on events that you already marked as handled.
This technique uses the AddHandler method to register a handler, rather than using XAML attributes or language-
specific syntax for adding handlers, such as += in C#.
A general limitation of this technique is that the AddHandler API takes a parameter of type RoutedEvent
identifying the routed event in question. Not all routed events provide a RoutedEvent identifier, and this
consideration thus affects which routed events can still be handled in the Handled case. The KeyDown and
KeyUp events have routed event identifiers (KeyDownEvent and KeyUpEvent) on UIElement. However, other
events such as TextBox.TextChanged do not have routed event identifiers and thus cannot be used with the
AddHandler technique.
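For example, a minimal sketch of registering a KeyDown handler that is invoked even for already-handled events (the element and handler names are illustrative; the final Boolean parameter is handledEventsToo):

// Register a handler that also runs when KeyDown was already marked Handled,
// for example by a Button's built-in key processing.
PlayButton.AddHandler(UIElement.KeyDownEvent,
    new KeyEventHandler(PlayButton_KeyDown), true);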
Overriding keyboard events and behavior
You can override key events for specific controls (such as GridView) to provide consistent focus navigation for
various input devices, including keyboard and gamepad.
In the following example, we subclass the control and override the KeyDown behavior to move focus to the
GridView content when any arrow key is pressed.
public class CustomGridView : GridView
{
    protected override void OnKeyDown(KeyRoutedEventArgs e)
    {
        // Override arrow key behaviors.
        if (e.Key != Windows.System.VirtualKey.Left &&
            e.Key != Windows.System.VirtualKey.Right &&
            e.Key != Windows.System.VirtualKey.Down &&
            e.Key != Windows.System.VirtualKey.Up)
        {
            base.OnKeyDown(e);
        }
        else
        {
            FocusManager.TryMoveFocus(FocusNavigationDirection.Down);
        }
    }
}

NOTE
If using a GridView for layout only, consider using other controls such as ItemsControl with ItemsWrapGrid.

Commanding
A small number of UI elements provide built-in support for commanding. Commanding uses input-related routed
events in its underlying implementation. It enables processing of related UI input, such as a certain pointer action
or a specific accelerator key, by invoking a single command handler.
If commanding is available for a UI element, consider using its commanding APIs instead of any discrete input
events. For more info, see ButtonBase.Command.
You can also implement ICommand to encapsulate command functionality that you invoke from ordinary event
handlers. This enables you to use commanding even when there is no Command property available.
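As an illustration, here is a minimal ICommand sketch (the RelayCommand name is illustrative, not a platform type) that could be assigned to a Command property such as ButtonBase.Command; it assumes "using System;" and "using System.Windows.Input;":

public class RelayCommand : ICommand
{
    private readonly Action execute;

    public RelayCommand(Action execute)
    {
        this.execute = execute;
    }

    public event EventHandler CanExecuteChanged;

    // This sketch is always executable; a fuller implementation would also
    // accept a canExecute delegate and raise CanExecuteChanged when state changes.
    public bool CanExecute(object parameter) => true;

    public void Execute(object parameter) => this.execute();
}

Reusing names from the earlier shortcut keys example, you could then write, for instance: PlayButton.Command = new RelayCommand(() => DemoMovie.Play());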

Text input and controls


Certain controls react to keyboard events with their own handling. For instance, TextBox is a control that is
designed to capture and then visually represent text that was entered by using the keyboard. It uses KeyUp and
KeyDown in its own logic to capture keystrokes, then also raises its own TextChanged event if the text actually
changed.
You can still generally add handlers for KeyUp and KeyDown to a TextBox, or any related control that is intended
to process text input. However, as part of its intended design, a control might not respond to all key values that are
directed to it through key events. Behavior is specific to each control.
As an example, ButtonBase (the base class for Button) processes KeyUp so that it can check for the Spacebar or
Enter key. ButtonBase considers KeyUp equivalent to a mouse left button down for purposes of raising a Click
event. This processing of the event is accomplished when ButtonBase overrides the virtual method OnKeyUp. In
its implementation, it sets Handled to true. The result is that any parent of a button that is listening for a key
event, in the case of a Spacebar press, would not receive the already-handled event for its own handlers.
Another example is TextBox. Some keys, such as the ARROW keys, are not considered text by TextBox and are
instead considered specific to the control UI behavior. The TextBox marks these event cases as handled.
Custom controls can implement their own similar override behavior for key events by overriding OnKeyDown /
OnKeyUp. If your custom control processes specific accelerator keys, or has control or focus behavior that is
similar to the scenario described for TextBox, you should place this logic in your own OnKeyDown / OnKeyUp
overrides.
The touch keyboard
Text input controls provide automatic support for the touch keyboard. When the user sets the input focus to a text
control by using touch input, the touch keyboard appears automatically. When the input focus is not on a text
control, the touch keyboard is hidden.
When the touch keyboard appears, it automatically repositions your UI to ensure that the focused element remains
visible. This can cause other important areas of your UI to move off screen. However, you can disable the default
behavior and make your own UI adjustments when the touch keyboard appears. For more info, see Responding to
the appearance of the on-screen keyboard sample.
If you create a custom control that requires text input, but does not derive from a standard text input control, you
can add touch keyboard support by implementing the correct UI Automation control patterns. For more info, see
Respond to the presence of the touch keyboard and the Touch keyboard sample.
Key presses on the touch keyboard raise KeyDown and KeyUp events just like key presses on hardware
keyboards. However, the touch keyboard will not raise input events for Ctrl+A, Ctrl+Z, Ctrl+X, Ctrl+C, and Ctrl+V,
which are reserved for text manipulation in the input control.
You can make it much faster and easier for users to enter data in your app by setting the input scope of the text
control to match the kind of data you expect the user to enter. The input scope provides a hint at the type of text
input expected by the control so the system can provide a specialized touch keyboard layout for the input type. For
example, if a text box is used only to enter a 4-digit PIN, set the InputScope property to Number. This tells the
system to show the numeric keypad layout, which makes it easier for the user to enter the PIN. For more detail, see
Use input scope to change the touch keyboard.
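For example, a minimal sketch for the 4-digit PIN case (the Header and MaxLength values are illustrative):

<TextBox Header="PIN" InputScope="Number" MaxLength="4"/>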

Additional articles in this section


Respond to the presence of the touch keyboard: Learn how to tailor the UI of your app when showing or
hiding the touch keyboard.

Related articles
Developers
Identify input devices
Respond to the presence of the touch keyboard
Designers
Keyboard design guidelines
Samples
Basic input sample
Low latency input sample
Focus visuals sample
Archive Samples
Input sample
Input: Device capabilities sample
Input: Touch keyboard sample
Responding to the appearance of the on-screen keyboard sample
XAML text editing sample
Access keys

Users who have difficulty using a mouse, such as those with motor disabilities, often rely on the keyboard to
navigate and interact with an app. The XAML framework enables you to provide keyboard access to UI elements
through tab navigation and access keys.
Tab navigation is a basic keyboard accessibility affordance (enabled by default) that lets users move focus
between UI elements using the tab and arrow keys on the keyboard.
Access keys are a supplementary accessibility affordance (that you implement in your app) for quick access to
app commands using a combination of keyboard modifier (Alt key) and one or more alphanumeric keys (typically
a letter associated with the command). Common access keys include Alt+F to open the File menu and Alt+AL to
align left.
For more info about keyboard navigation and accessibility, see Keyboard interaction and Keyboard accessibility.
This article assumes you understand the concepts discussed in those articles.

Access key overview


Access keys let users directly invoke buttons or set focus with the keyboard without requiring them to repeatedly
press the arrow keys and tab. Access keys are intended to be easily discoverable, so you should document them
directly in the UI; for example, a floating badge over the control with the access key.

Figure 1: Example of access keys and associated key tips in Microsoft Word.
An access key is one or several alphanumeric characters associated with a UI element. For example, Microsoft Word
uses H for the Home tab, 2 for the Undo button, or JI for the Draw tab.
Access key scope
An access key belongs to a specific scope. For example, in Figure 1, F, H, N, and JI belong to the pages scope. When
the user presses H, the scope changes to the Home tab's scope and its access keys are shown, as seen in Figure 2.
The access keys V, FP, FF, and FS belong to the Home tab's scope.

Figure 2: Example of access keys and associated key tips for the Home tab scope in Microsoft Word.
Two elements can have the same access keys if the elements belong to different scopes. For example, 2 is the access
key for Undo in the pages scope (Figure 1), and also for Italic in the Home tab's scope (Figure 2). All access keys
belong to the default scope unless another scope is specified.
Access key sequence
Access key combinations are typically pressed one key at a time to achieve the action rather than pressing the keys
simultaneously. (There is an exception to this that we discuss in the next section.) The sequence of keystrokes
needed to achieve the action is an access key sequence. The user presses the Alt key to initiate the access key
sequence. An access key is invoked when the user presses the last key in an access key sequence. For example, to
open the View tab in Word, the user would press the Alt, W access key sequence.
A user can invoke several access keys in an access key sequence. For example, to open the Format Painter in a Word
document, the user presses Alt to initialize the sequence, then presses H to navigate to the Home section and
change the access key scope, then F, and eventually P. H and FP are the access keys for the Home tab and the
Format Painter button respectively.
Some elements finalize an access key sequence after they're invoked (like the Format Painter button) and others
don't (like the Home tab). Invoking an access key can result in executing a command, moving the focus, changing
the access key scope, or some other action associated with it.

Access Key User Interaction


To understand the access key APIs, it is necessary to first understand the user interaction model. Below is a
summary of the access key user interaction model:
When the user presses the Alt key, the access key sequence starts, even when the focus is on an input control.
Then, the user can press the access key to invoke the associated action. This user interaction requires that you
document the available access keys within the UI with some visual affordance, such as floating badges, that are
shown when the Alt key is pressed.
When the user presses the Alt key plus the access key simultaneously, the access key is invoked immediately.
This is similar to having a keyboard shortcut defined by Alt+access key. In this case, the access key visual
affordances are not shown. However, invoking an access key could result in changing the access key scope. In
this case, an access key sequence is initiated and the visual affordances are shown for the new scope. (Note that
only access keys with one character can take advantage of this user interaction; the Alt+access key
combination is not supported for access keys with more than one character.)
When several multi-character access keys share characters, pressing a shared character filters the
access keys. For example, assume there are three access keys shown: A1, A2, and C. If
the user presses A, then only the A1 and A2 access keys are shown and the visual affordance for C is hidden.
The Esc key removes one level of filtering. For example, if there are access keys B, ABC, ACD, and ABD and the
user presses A, then only ABC, ACD, and ABD are shown. If the user then presses B, only ABC and ABD are shown.
If the user presses Esc, one level of filtering is removed and the ABC, ACD, and ABD access keys are shown. If the user
presses Esc again, another level of filtering is removed and all the access keys (B, ABC, ACD, and ABD) are
enabled and their visual affordances are shown.
The Esc key navigates back to the previous scope. Access keys can belong to different scopes to make it easier to
navigate across apps that have a lot of commands. The access key sequence always starts on the main scope. All
access keys belong to the main scope except those that specify a particular UI element as their scope owner.
When the user invokes the access key of an element that is a scope owner, the XAML framework automatically
moves the scope to it and adds it to an internal access key navigation stack. The Esc key moves back through the
access key navigation stack.
There are several ways to dismiss the access key sequence:
The user can press Alt to dismiss an access key sequence that is in progress. Remember that pressing Alt
initiates the access key sequence as well.
The Esc key dismisses the access key sequence if it is in the main scope and is not filtered. (Note that the
Esc keystroke is passed to the UI layer to be handled there as well.)
The Tab key dismisses the access key sequence and returns to Tab navigation.
The Enter key dismisses the access key sequence and sends the keystroke to the element that has the
focus.
The arrow keys dismiss the access key sequence and send the keystroke to the element that has the focus.
A pointer down event, such as a mouse click or a touch, dismisses the access key sequence.
By default, when an access key is invoked, the access key sequence is dismissed. However, you can
override this behavior by setting the ExitDisplayModeOnAccessKeyInvoked property to false.
Access key collisions occur when a deterministic finite automaton is not possible. Access key collisions are
not desirable but can happen because of a large number of commands, localization issues, or runtime
generation of access keys.
There are two cases where collisions happen:
When two UI elements have the same access key value and belong to the same access key scope. For
example, an access key A1 for button1 and an access key A1 for button2, both belonging to the default
scope. In this case, the system resolves the collision by processing the access key of the first element
added to the visual tree. The rest are ignored.
When there is more than one computational option in the same access key scope. For example, A and A1.
When the user presses A, the system has two options: invoke the A access key, or keep going and consume
the A character from the A1 access key. In this case, the system will process only the first access key
invocation reached by the automaton. For the example of A and A1, the system will only invoke the A access
key.
When the user presses an invalid access key value in an access key sequence, nothing happens. There are two
categories of keys considered as valid access keys in an access key sequence:
Special keys that exit the access key sequence: Esc, Alt, the arrow keys, Enter, and Tab.
The alphanumeric characters assigned to the access keys.

Access key APIs


To support the access key user interaction, the XAML framework provides the APIs described here.
AccessKeyManager
The AccessKeyManager is a helper class that you can use to manage your UI when access keys are shown or hidden.
The IsDisplayModeEnabledChanged event is raised each time the app enters and exits from the access key
sequence. You can query the IsDisplayModeEnabled property to determine whether the visual affordances are
shown or hidden. You can also call ExitDisplayMode to force dismissal of an access key sequence.

NOTE
There is no built-in implementation of the access key's visual; you have to provide it.

AccessKey
The AccessKey property lets you specify an access key on a UIElement or TextElement. If two elements have the
same access key and the same scope, only the first element added to the visual tree will be processed.
To ensure the XAML Framework processes the access keys, the UI elements must be realized in the visual tree. If
there are no elements in the visual tree with an access key, no access key events are raised.
Access key APIs don't support characters that need two keystrokes to be generated. An individual character must
correspond to a key on a particular language's native keyboard layout.
AccessKeyDisplayRequested/Dismissed
The AccessKeyDisplayRequested and the AccessKeyDisplayDismissed events are raised when an access key visual
affordance should be displayed or dismissed. These events are not raised for elements with their Visibility property
set to Collapsed. The AccessKeyDisplayRequested event is raised during an access key sequence every time the
user presses a character that is used by the access key. For example, if an access key is set to AB, this event is raised
when the user presses Alt, and again when the user presses A. When the user presses B, the
AccessKeyDisplayDismissed event is raised.
AccessKeyInvoked
The AccessKeyInvoked event is raised when a user reaches the last character of an access key. An access key can
have one or several characters. For example, for access keys A and BC, when a user presses Alt, A, or Alt, B, C, the
event is raised, but not when the user presses just Alt, B. This event is raised when the key is pressed, not when it's
released.
IsAccessKeyScope
The IsAccessKeyScope property lets you specify that a UIElement is the root of an access key scope. The
AccessKeyDisplayRequested event is raised for this element, but not for its children. When a user invokes this
element, the XAML framework changes the scope automatically and raises the AccessKeyDisplayRequested event
on its children and the AccessKeyDisplayDismissed event on other UI elements (including the parent). The access
key sequence is not exited when the scope is changed.
AccessKeyScopeOwner
To make an element participate in the scope of another element (the source) that is not its parent in the visual tree,
you can set the AccessKeyScopeOwner property. The element bound to the AccessKeyScopeOwner property must
have IsAccessKeyScope set to true. Otherwise, an exception is thrown.
ExitDisplayModeOnAccessKeyInvoked
By default, when an access key is invoked and the element is not a scope owner, the access key sequence is finalized
and the AccessKeyManager.IsDisplayModeEnabledChanged event is raised. You can set the
ExitDisplayModeOnAccessKeyInvoked property to false to override this behavior and prevent exiting from the
access key sequence after its invoked. (This property is on both UIElement and TextElement).

NOTE
If the element is a scope owner ( IsAccessKeyScope="True" ), the app enters a new access key scope and the
IsDisplayModeEnabledChanged event is not raised.
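For example, a minimal sketch that keeps the access key display mode active after the button is invoked (the content and access key values are illustrative):

<Button Content="Zoom" AccessKey="Z" ExitDisplayModeOnAccessKeyInvoked="False"/>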

Localization
Access keys can be localized in multiple languages and loaded at runtime using the ResourceLoader APIs.
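For example, a minimal sketch that loads a localized access key at runtime (assuming a Resources.resw entry named "PasteButtonAccessKey" and a button named pasteButton; the names are illustrative):

var resourceLoader = Windows.ApplicationModel.Resources.ResourceLoader.GetForCurrentView();
// Assign the localized access key from the app's resource file.
pasteButton.AccessKey = resourceLoader.GetString("PasteButtonAccessKey");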

Control patterns used when an access key is invoked


Control patterns are interface implementations that expose common control functionality; for example, buttons
implement the Invoke control pattern and this raises the Click event. When an access key is invoked, the XAML
framework looks up whether the invoked element implements a control pattern and executes it if it does. If the
element has more than one control pattern, only one is invoked; the rest are ignored. Control patterns are searched
in the following order:
1. Invoke. For example, a Button.
2. Toggle. For example, a Checkbox.
3. Selection. For example, a RadioButton.
4. Expand/Collapse. For example, a ComboBox.
If a control pattern is not found, the access key invocation will appear as a no-op and a debug message is recorded
to assist you in debugging this situation: "No automation patterns for this component found. Implement desired
behavior in the event handler for AccessKeyInvoked. Setting Handled to true in your event handler will suppress
this message."

NOTE
The debugger's Application process type must be Mixed (Managed and Native) or Native in Visual Studio's Debug Settings
to see this message.

If you do not want an access key to execute its default control pattern, or if the element does not have a control
pattern, you should handle the AccessKeyInvoked event and implement the desired behavior.

private void OnAccessKeyInvoked(UIElement sender, AccessKeyInvokedEventArgs args)
{
    args.Handled = true;
    // Do something
}

For more info about control patterns, see UI Automation Control Patterns Overview.

Access keys and Narrator


Windows Runtime has UI Automation providers that expose properties on Microsoft UI Automation elements.
These properties enable UI Automation client applications to discover information about pieces of the user
interface. The AutomationProperties.AccessKey property lets clients, such as Narrator, discover the access key
associated with an element. Narrator will read this property every time an element gets focus. If
AutomationProperties.AccessKey does not have a value, the XAML framework returns the AccessKey property value
from the UIElement or TextElement. You don't need to set up AutomationProperties.AccessKey if the AccessKey
property already has a value.

Example: Access key for button


This example shows how to create an access key for a Button. It uses Tooltips as a visual affordance to implement a
floating badge that contains the access key.

NOTE
Tooltip is used for simplicity, but we recommend that you create your own control to display it using, for example, Popup.

The XAML framework automatically calls the handler for the Click event, so you don't need to handle the
AccessKeyInvoked event. The example provides visual affordances for only the characters that remain to
invoke the access key by using the AccessKeyDisplayRequestedEventArgs.PressedKeys property. For example, if
there are three displayed access keys, A1, A2, and C, and the user presses A, then only the A1 and A2 access keys are
unfiltered, and they are displayed as 1 and 2 instead of A1 and A2.
<StackPanel
VerticalAlignment="Center"
HorizontalAlignment="Center"
Background="{ThemeResource ApplicationPageBackgroundThemeBrush}">
<Button Content="Press"
AccessKey="PB"
AccessKeyDisplayDismissed="OnAccessKeyDisplayDismissed"
AccessKeyDisplayRequested="OnAccessKeyDisplayRequested"
Click="DoSomething" />
<TextBlock Text="" x:Name="textBlock" />
</StackPanel>

public sealed partial class ButtonSample : Page
{
    public ButtonSample()
    {
        this.InitializeComponent();
    }

    private void DoSomething(object sender, RoutedEventArgs args)
    {
        textBlock.Text = "Access Key is working!";
    }

    private void OnAccessKeyDisplayRequested(UIElement sender, AccessKeyDisplayRequestedEventArgs args)
    {
        var tooltip = ToolTipService.GetToolTip(sender) as ToolTip;

        if (tooltip == null)
        {
            tooltip = new ToolTip();
            tooltip.Background = new SolidColorBrush(Windows.UI.Colors.Black);
            tooltip.Foreground = new SolidColorBrush(Windows.UI.Colors.White);
            tooltip.Padding = new Thickness(4, 4, 4, 4);
            tooltip.VerticalOffset = -20;
            tooltip.Placement = PlacementMode.Bottom;
            ToolTipService.SetToolTip(sender, tooltip);
        }

        if (string.IsNullOrEmpty(args.PressedKeys))
        {
            tooltip.Content = sender.AccessKey;
        }
        else
        {
            // Show only the characters that remain to be pressed.
            tooltip.Content = sender.AccessKey.Remove(0, args.PressedKeys.Length);
        }

        tooltip.IsOpen = true;
    }

    private void OnAccessKeyDisplayDismissed(UIElement sender, AccessKeyDisplayDismissedEventArgs args)
    {
        var tooltip = ToolTipService.GetToolTip(sender) as ToolTip;
        if (tooltip != null)
        {
            tooltip.IsOpen = false;
            // Remove the tooltip so it doesn't also show on mouse hover.
            ToolTipService.SetToolTip(sender, null);
        }
    }
}

Example: Scoped access keys


This example shows how to create scoped access keys. The PivotItem's IsAccessKeyScope property prevents the
access keys of the PivotItem's child elements from showing when the user presses Alt. These access keys are shown
only when the user invokes the PivotItem, because the XAML framework automatically switches the scope. The
framework also hides the access keys of the other scopes.
This example also shows how to handle the AccessKeyInvoked event. The PivotItem doesn't implement any control
pattern, so the XAML framework doesn't invoke any action by default. This implementation shows how to select the
PivotItem that was invoked using the access key.
Finally, the example shows the IsDisplayModeEnabledChanged event, where you can do something when the display mode
changes. In this example, the Pivot control is collapsed until the user presses Alt. When the user finishes interacting
with the Pivot, it collapses again. You can use IsDisplayModeEnabled to check if the access key display mode is
enabled or disabled.

<Grid Background="{ThemeResource ApplicationPageBackgroundThemeBrush}">
<Pivot x:Name="MyPivot" VerticalAlignment="Center" HorizontalAlignment="Center" >
<Pivot.Items>
<PivotItem
x:Name="PivotItem1"
AccessKey="A"
AccessKeyInvoked="OnAccessKeyInvoked"
AccessKeyDisplayDismissed="OnAccessKeyDisplayDismissed"
AccessKeyDisplayRequested="OnAccessKeyDisplayRequested"
IsAccessKeyScope="True">
<PivotItem.Header>
<TextBlock Text="A Options"/>
</PivotItem.Header>
<StackPanel Orientation="Horizontal" >
<Button Content="ButtonAA" AccessKey="A"
AccessKeyDisplayDismissed="OnAccessKeyDisplayDismissed"
AccessKeyDisplayRequested="OnAccessKeyDisplayRequested" />
<Button Content="ButtonAD1" AccessKey="D1"
AccessKeyDisplayDismissed="OnAccessKeyDisplayDismissed"
AccessKeyDisplayRequested="OnAccessKeyDisplayRequested" />
<Button Content="ButtonAD2" AccessKey="D2"
AccessKeyDisplayDismissed="OnAccessKeyDisplayDismissed"
AccessKeyDisplayRequested="OnAccessKeyDisplayRequested"/>
</StackPanel>
</PivotItem>
<PivotItem
x:Name="PivotItem2"
AccessKeyInvoked="OnAccessKeyInvoked"
AccessKeyDisplayDismissed="OnAccessKeyDisplayDismissed"
AccessKeyDisplayRequested="OnAccessKeyDisplayRequested"
AccessKey="B"
IsAccessKeyScope="true">
<PivotItem.Header>
<TextBlock Text="B Options"/>
</PivotItem.Header>
<StackPanel Orientation="Horizontal">
<Button AccessKey="B" Content="ButtonBB"
AccessKeyDisplayDismissed="OnAccessKeyDisplayDismissed"
AccessKeyDisplayRequested="OnAccessKeyDisplayRequested" />
<Button AccessKey="F1" Content="ButtonBF1"
AccessKeyDisplayDismissed="OnAccessKeyDisplayDismissed"
AccessKeyDisplayRequested="OnAccessKeyDisplayRequested" />
<Button AccessKey="F2" Content="ButtonBF2"
AccessKeyDisplayDismissed="OnAccessKeyDisplayDismissed"
AccessKeyDisplayRequested="OnAccessKeyDisplayRequested"/>
</StackPanel>
</PivotItem>
</Pivot.Items>
</Pivot>
</Grid>
public sealed partial class ScopedAccessKeys : Page
{
    public ScopedAccessKeys()
    {
        this.InitializeComponent();
        AccessKeyManager.IsDisplayModeEnabledChanged += OnDisplayModeEnabledChanged;
        this.Loaded += OnLoaded;
    }

    void OnLoaded(object sender, object e)
    {
        // To let the framework discover the access keys, the elements should be
        // realized on the visual tree. If there are no elements in the visual
        // tree with an access key, the framework won't raise the events.
        // In this sample, if you define the Pivot as collapsed in the constructor,
        // the Pivot will be loaded lazily and the access keys won't be enabled.
        // For this reason, we make it visible when creating the object
        // and we collapse it when we load the page.
        MyPivot.Visibility = Visibility.Collapsed;
    }

    void OnAccessKeyInvoked(UIElement sender, AccessKeyInvokedEventArgs args)
    {
        args.Handled = true;
        MyPivot.SelectedItem = sender as PivotItem;
    }

    void OnDisplayModeEnabledChanged(object sender, object e)
    {
        if (AccessKeyManager.IsDisplayModeEnabled)
        {
            MyPivot.Visibility = Visibility.Visible;
        }
        else
        {
            MyPivot.Visibility = Visibility.Collapsed;
        }
    }

    DependencyObject AdjustTarget(UIElement sender)
    {
        DependencyObject target = sender;
        if (sender is PivotItem)
        {
            // Attach the visual affordance to the PivotItem's header text.
            target = (sender as PivotItem).Header as TextBlock;
        }
        return target;
    }

    void OnAccessKeyDisplayRequested(UIElement sender, AccessKeyDisplayRequestedEventArgs args)
    {
        DependencyObject target = AdjustTarget(sender);
        var tooltip = ToolTipService.GetToolTip(target) as ToolTip;

        if (tooltip == null)
        {
            tooltip = new ToolTip();
            tooltip.Background = new SolidColorBrush(Windows.UI.Colors.Black);
            tooltip.Foreground = new SolidColorBrush(Windows.UI.Colors.White);
            tooltip.Padding = new Thickness(4, 4, 4, 4);
            tooltip.VerticalOffset = -20;
            tooltip.Placement = PlacementMode.Bottom;
            ToolTipService.SetToolTip(target, tooltip);
        }

        if (string.IsNullOrEmpty(args.PressedKeys))
        {
            tooltip.Content = sender.AccessKey;
        }
        else
        {
            // Show only the characters that remain to be pressed.
            tooltip.Content = sender.AccessKey.Remove(0, args.PressedKeys.Length);
        }

        tooltip.IsOpen = true;
    }

    void OnAccessKeyDisplayDismissed(UIElement sender, AccessKeyDisplayDismissedEventArgs args)
    {
        DependencyObject target = AdjustTarget(sender);
        var tooltip = ToolTipService.GetToolTip(target) as ToolTip;

        if (tooltip != null)
        {
            tooltip.IsOpen = false;
            // Remove the tooltip so it doesn't also show on mouse hover.
            ToolTipService.SetToolTip(target, null);
        }
    }
}
Respond to the presence of the touch keyboard

Learn how to tailor the UI of your app when showing or hiding the touch keyboard.

Important APIs
AutomationPeer
InputPane

The touch keyboard in default layout mode

The touch keyboard enables text entry for devices that support touch. Universal Windows Platform (UWP) text
input controls invoke the touch keyboard by default when a user taps on an editable input field. The touch
keyboard typically remains visible while the user navigates between controls in a form, but this behavior can vary
based on the other control types within the form.
To support corresponding touch keyboard behavior in a custom text input control that does not derive from a
standard text input control, you must use the AutomationPeer class to expose your controls to Microsoft UI
Automation and implement the correct UI Automation control patterns. See Keyboard accessibility and Custom
automation peers.
Once this support has been added to your custom control, you can respond appropriately to the presence of the
touch keyboard.
Prerequisites:
This topic builds on Keyboard interactions.
You should have a basic understanding of standard keyboard interactions, handling keyboard input and events,
and UI Automation.
If you're new to developing Universal Windows Platform (UWP) apps, have a look through these topics to get
familiar with the technologies discussed here.
Create your first app
Learn about events with Events and routed events overview
User experience guidelines:
For helpful tips about designing a useful and engaging app optimized for keyboard input, see Keyboard design
guidelines.

Touch keyboard and a custom UI


Here are a few basic recommendations for custom text input controls.
Display the touch keyboard throughout the entire interaction with your form.
Ensure that your custom controls have the appropriate UI Automation AutomationControlType for the
keyboard to persist when focus moves from a text input field while in the context of text entry. For example,
if you have a menu that's opened in the middle of a text-entry scenario, and you want the keyboard to
persist, the menu must have the AutomationControlType of Menu.
Don't manipulate UI Automation properties to control the touch keyboard. Other accessibility tools rely on
the accuracy of UI Automation properties.
Ensure that users can always see the input field that they're interacting with.
Because the touch keyboard occludes a large portion of the screen, the UWP ensures that the input field
with focus scrolls into view as a user navigates through the controls on the form, including controls that are
not currently in view.
When customizing your UI, provide similar behavior on the appearance of the touch keyboard by handling
the Showing and Hiding events exposed by the InputPane object.

In some cases, there are UI elements that should stay on the screen the entire time. Design the UI so that the
form controls are contained in a panning region and the important UI elements are static. For example:

Handling the Showing and Hiding events


Here's an example of attaching event handlers for the showing and hiding events of the touch keyboard.
public class MyApplication
{
    public MyApplication()
    {
        // Grab the input pane for the main application window and attach
        // touch keyboard event handlers.
        Windows.UI.ViewManagement.InputPane inputPane =
            Windows.UI.ViewManagement.InputPane.GetForCurrentView();
        inputPane.Showing += _OnInputPaneShowing;
        inputPane.Hiding += _OnInputPaneHiding;
    }

    private void _OnInputPaneShowing(
        Windows.UI.ViewManagement.InputPane sender,
        Windows.UI.ViewManagement.InputPaneVisibilityEventArgs eventArgs)
    {
        // If the size of this window is going to be too small, the app uses
        // the Showing event to begin some element removal animations.
        if (eventArgs.OccludedRect.Top < 400)
        {
            _StartElementRemovalAnimations();

            // Don't use framework scroll- or visibility-related
            // animations that might conflict with the app's logic.
            eventArgs.EnsuredFocusedElementInView = true;
        }
    }

    private void _OnInputPaneHiding(
        Windows.UI.ViewManagement.InputPane sender,
        Windows.UI.ViewManagement.InputPaneVisibilityEventArgs eventArgs)
    {
        if (_ResetToDefaultElements())
        {
            eventArgs.EnsuredFocusedElementInView = true;
        }
    }

    private void _StartElementRemovalAnimations()
    {
        // This function starts the process of removing elements
        // and starting the animation.
    }

    private bool _ResetToDefaultElements()
    {
        // This function resets the window's elements to their default state.
        return true;
    }
}

Related articles
Keyboard interactions
Keyboard accessibility
Custom automation peers
Archive samples
Input: Touch keyboard sample
Responding to the appearance of the on-screen keyboard sample
XAML text editing sample
XAML accessibility sample
Use input scope to change the touch keyboard
3/6/2017 9 min to read Edit on GitHub

To help users to enter data using the touch keyboard, or Soft Input Panel (SIP), you can set the input scope of the
text control to match the kind of data the user is expected to enter.

Important APIs
InputScope
InputScopeNameValue

The touch keyboard can be used for text entry when your app runs on a device with a touch screen. The touch
keyboard is invoked when the user taps on an editable input field, such as a TextBox or RichEditBox. You can
make it much faster and easier for users to enter data in your app by setting the input scope of the text control to
match the kind of data you expect the user to enter. The input scope provides a hint to the system about the type of
text input expected by the control so the system can provide a specialized touch keyboard layout for the input type.
For example, if a text box is used only to enter a 4-digit PIN, set the InputScope property to Number. This tells the
system to show the number keypad layout, which makes it easier for the user to enter the PIN.

Important
This info applies only to the SIP. It does not apply to hardware keyboards or the On-Screen Keyboard
available in the Windows Ease of Access options.
The input scope does not cause any input validation to be performed, and does not prevent the user from
providing any input through a hardware keyboard or other input device. You are still responsible for
validating the input in your code as needed.

Changing the input scope of a text control


The input scopes that are available to your app are members of the InputScopeNameValue enumeration. You can
set the InputScope property of a TextBox or RichEditBox to one of these values.

Important The InputScope property on PasswordBox supports only the Password and NumericPin values.
Any other value is ignored.

Here, you change the input scope of several text boxes to match the expected data for each text box.
To change the input scope in XAML
1. In the XAML file for your page, locate the tag for the text control that you want to change.
2. Add the InputScope attribute to the tag and specify the InputScopeNameValue value that matches the
expected input.
Here are some text boxes that might appear on a common customer-contact form. With the InputScope set,
a touch keyboard with a suitable layout for the data shows for each text box.
<StackPanel Width="300">
<TextBox Header="Name" InputScope="Default"/>
<TextBox Header="Email Address" InputScope="EmailSmtpAddress"/>
<TextBox Header="Telephone Number" InputScope="TelephoneNumber"/>
<TextBox Header="Web site" InputScope="Url"/>
</StackPanel>

To change the input scope in code


1. In the XAML file for your page, locate the tag for the text control that you want to change. If it's not set, set
the x:Name attribute so you can reference the control in your code.

<TextBox Header="Telephone Number" x:Name="phoneNumberTextBox"/>

2. Instantiate a new InputScope object.

InputScope scope = new InputScope();

3. Instantiate a new InputScopeName object.

InputScopeName scopeName = new InputScopeName();

4. Set the NameValue property of the InputScopeName object to a value of the InputScopeNameValue
enumeration.

scopeName.NameValue = InputScopeNameValue.TelephoneNumber;

5. Add the InputScopeName object to the Names collection of the InputScope object.

scope.Names.Add(scopeName);

6. Set the InputScope object as the value of the text control's InputScope property.

phoneNumberTextBox.InputScope = scope;

Here's the code all together.

InputScope scope = new InputScope();


InputScopeName scopeName = new InputScopeName();
scopeName.NameValue = InputScopeNameValue.TelephoneNumber;
scope.Names.Add(scopeName);
phoneNumberTextBox.InputScope = scope;

The same steps can be condensed into this shorthand code.

phoneNumberTextBox.InputScope = new InputScope()


{
Names = {new InputScopeName(InputScopeNameValue.TelephoneNumber)}
};
Text prediction, spell checking, and auto-correction
The TextBox and RichEditBox controls have several properties that influence the behavior of the SIP. To provide
the best experience for your users, it's important to understand how these properties affect text input using touch.
IsSpellCheckEnabled: When spell check is enabled for a text control, the control interacts with the
system's spell-check engine to mark words that are not recognized. You can tap a word to see a list of
suggested corrections. Spell checking is enabled by default.
For the Default input scope, this property also enables automatic capitalization of the first word in a
sentence and auto-correction of words as you type. These auto-correction features might be disabled in
other input scopes. For more info, see the tables later in this topic.
IsTextPredictionEnabled: When text prediction is enabled for a text control, the system shows a list of
words that you might be beginning to type. You can select from the list so you don't have to type the whole
word. Text prediction is enabled by default.
Text prediction might be disabled if the input scope is other than Default, even if the
IsTextPredictionEnabled property is true. For more info, see the tables later in this topic.
Note: On the Mobile device family, text predictions and spelling corrections are shown in the SIP in the area
above the keyboard. If IsTextPredictionEnabled is set to false, this part of the SIP is hidden and auto-
correction is disabled, even if IsSpellCheckEnabled is true.
PreventKeyboardDisplayOnProgrammaticFocus: When this property is true, it prevents the system
from showing the SIP when focus is programmatically set on a text control. Instead, the keyboard is shown
only when the user interacts with the control.
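For example, a minimal sketch of the last property in XAML (the Header value is illustrative):

<TextBox Header="Notes" PreventKeyboardDisplayOnProgrammaticFocus="True"/>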

Touch keyboard index for Windows and Windows Phone


These tables show the Soft Input Panel (SIP) layouts on desktop and mobile devices for common input scope values.
The effect of the input scope on the features enabled by the IsSpellCheckEnabled and IsTextPredictionEnabled
properties is listed for each input scope. This is not a comprehensive list of available input scopes.

Note The smaller size of the SIP on mobile devices makes it particularly important for mobile apps that you set
the correct input scope. As we show here, Windows Phone provides a greater variety of specialized keyboard
layouts. A text field that doesn't need to have its input scope set in a Windows Store app might benefit from
having it set in a Windows Phone Store app.
Tip You can toggle most touch keyboards between an alphabetic layout and a numbers-and-symbols layout.
On Windows, toggle the &123 key. On Windows Phone, press the &123 key to change to the numbers-and-
symbols layout, and press the abcd key to change to the alphabetic layout.

Default
<TextBox InputScope="Default"/>

The default keyboard.

(Keyboard layout images for Windows and Windows Phone not shown.)

Availability of features:
Spell check: enabled if IsSpellCheckEnabled = true, disabled if IsSpellCheckEnabled = false
Auto-correction: enabled if IsSpellCheckEnabled = true, disabled if IsSpellCheckEnabled = false
Automatic capitalization: enabled if IsSpellCheckEnabled = true, disabled if IsSpellCheckEnabled = false
Text prediction: enabled if IsTextPredictionEnabled = true, disabled if IsTextPredictionEnabled = false
CurrencyAmountAndSymbol
<TextBox InputScope="CurrencyAmountAndSymbol"/>

The default numbers and symbols keyboard layout.

Availability of features:

Windows
Spell check: on by default, can be disabled
Auto-correction: always disabled
Automatic capitalization: always disabled
Text prediction: always disabled
Same layout as Number and TelephoneNumber.

Windows Phone
Also includes page left/right keys to show more symbols.
Spell check: on by default, can be disabled
Auto-correction: on by default, can be disabled
Automatic capitalization: always disabled
Text prediction: on by default, can be disabled

Url
<TextBox InputScope="Url"/>

Includes the .com and (Go) keys. Press and hold the .com key to display additional options (.org, .net, and
region-specific suffixes).
Availability of features:

Windows
Spell check: on by default, can be disabled
Auto-correction: on by default, can be disabled
Automatic capitalization: always disabled
Text prediction: always disabled

Windows Phone
Also includes the :, -, and / keys. Press and hold the period key to display additional options ( - + " / & : , ).
Spell check: off by default, can be enabled
Auto-correction: off by default, can be enabled
Automatic capitalization: off by default, can be enabled
Text prediction: off by default, can be enabled

EmailSmtpAddress
<TextBox InputScope="EmailSmtpAddress"/>

Includes the @ and .com keys. Press and hold the .com key to display additional options (.org, .net, and region-
specific suffixes).

Availability of features:

Windows
Spell check: on by default, can be disabled
Auto-correction: on by default, can be disabled
Automatic capitalization: always disabled
Text prediction: always disabled

Windows Phone
Also includes the _ and - keys. Press and hold the period key to display additional options ( - _ , ; ).
Spell check: off by default, can be enabled
Auto-correction: off by default, can be enabled
Automatic capitalization: off by default, can be enabled
Text prediction: off by default, can be enabled

Number
<TextBox InputScope="Number"/>

Windows
Same as CurrencyAmountAndSymbol and TelephoneNumber.

Windows Phone
Keyboard contains numbers and a decimal point. Press and hold the decimal point key to display additional options ( , - ).
Availability of features:
Spell check: always disabled
Auto-correction: always disabled
Automatic capitalization: always disabled
Text prediction: always disabled

TelephoneNumber
<TextBox InputScope="TelephoneNumber"/>

Windows
Same as CurrencyAmountAndSymbol and Number.

Windows Phone
Keyboard mimics the telephone keypad. Press and hold the period key to display additional options ( , ( ) X . ). Press and hold the 0 key to enter +.
Availability of features:
Spell check: always disabled
Auto-correction: always disabled
Automatic capitalization: always disabled
Text prediction: always disabled

Search
<TextBox InputScope="Search"/>

Includes the Search key instead of the Enter key.

Availability of features:

Windows
Spell check: on by default, can be disabled
Auto-correction: always disabled
Automatic capitalization: always disabled
Text prediction: on by default, can be disabled

Windows Phone
Spell check: on by default, can be disabled
Auto-correction: on by default, can be disabled
Automatic capitalization: always disabled
Text prediction: on by default, can be disabled

SearchIncremental
<TextBox InputScope="SearchIncremental"/>

Windows
Same layout as Default.
Availability of features:
Spell check: off by default, can be enabled
Auto-correction: always disabled
Automatic capitalization: always disabled
Text prediction: always disabled

Windows Phone
Same as Default.

Formula
<TextBox InputScope="Formula"/>

Includes the = key.

Availability of features:

Windows
Spell check: off by default, can be enabled
Auto-correction: always disabled
Automatic capitalization: always disabled
Text prediction: always disabled

Windows Phone
Also includes the %, $, and + keys. Press and hold the period key to display additional options ( - ! ? , ). Press and hold the = key to display additional options ( ( ) : < > ).
Spell check: on by default, can be disabled
Auto-correction: on by default, can be disabled
Automatic capitalization: always disabled
Text prediction: on by default, can be disabled

Chat
<TextBox InputScope="Chat"/>

Windows
Same layout as Default.
Availability of features:
Spell check: off by default, can be enabled
Auto-correction: always disabled
Automatic capitalization: always disabled
Text prediction: always disabled

Windows Phone
Same layout as Default.
Availability of features:
Spell check: on by default, can be disabled
Auto-correction: on by default, can be disabled
Automatic capitalization: on by default, can be disabled
Text prediction: on by default, can be disabled

NameOrPhoneNumber
<TextBox InputScope="NameOrPhoneNumber"/>

Windows
Same layout as Default.
Availability of features:
Spell check: on by default, can be disabled
Auto-correction: always disabled
Automatic capitalization: always enabled
Text prediction: always disabled

Windows Phone
Includes the ; and @ keys. The &123 key is replaced by the 123 key, which opens the phone keypad (see TelephoneNumber).
Availability of features:
Spell check: off by default, can be enabled
Auto-correction: off by default, can be enabled
Automatic capitalization: off by default, can be enabled. First letter of each word is capitalized.
Text prediction: off by default, can be enabled

Mouse interactions

Optimize your Universal Windows Platform (UWP) app design for touch input and get basic mouse support by
default.

Mouse input is best suited for user interactions that require precision when pointing and clicking. The Windows UI
supports this inherent precision, even though it is optimized for the imprecise nature of touch.
Where mouse and touch input diverge is in the ability of touch to more closely emulate the direct manipulation of UI
elements through physical gestures performed directly on those objects (such as swiping, sliding, dragging,
rotating, and so on). Manipulations with a mouse typically require some other UI affordance, such as the use of
handles to resize or rotate an object.
This topic describes design considerations for mouse interactions.

The UWP app mouse language


A concise set of mouse interactions is used consistently throughout the system.

Hover to learn
Hover over an element to display more detailed info or teaching visuals (such as a tooltip) without a commitment to an action.

Left-click for primary action
Left-click an element to invoke its primary action (such as launching an app or executing a command).

Scroll to change view
Display scroll bars to move up, down, left, and right within a content area. Users can scroll by clicking scroll bars or rotating the mouse wheel. Scroll bars can indicate the location of the current view within the content area (panning with touch displays a similar UI).

Right-click to select and command
Right-click to display the navigation bar (if available) and the app bar with global commands. Right-click an element to select it and display the app bar with contextual commands for the selected element.
Note Right-click to display a context menu if selection or app bar commands are not appropriate UI behaviors. But we strongly recommend that you use the app bar for all command behaviors.

UI commands to zoom
Display UI commands in the app bar (such as + and -), or press Ctrl and rotate the mouse wheel, to emulate pinch and stretch gestures for zooming.

UI commands to rotate
Display UI commands in the app bar, or press Ctrl+Shift and rotate the mouse wheel, to emulate the turn gesture for rotating. Rotate the device itself to rotate the entire screen.

Left-click and drag to rearrange
Left-click and drag an element to move it.

Left-click and drag to select text
Left-click within selectable text and drag to select it. Double-click to select a word.

Mouse events
Respond to mouse input in your apps by handling the same basic pointer events that you use for touch and pen
input.
Use UIElement events to implement basic input functionality without having to write code for each pointer input
device. However, you can still take advantage of the special capabilities of each device (such as mouse wheel
events) using the pointer, gesture, and manipulation events of this object.
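
For example, a UIElement pointer event handler can branch on the pointer device type and read mouse wheel data. A minimal sketch, assuming a XAML element named canvasArea with its PointerWheelChanged event wired to this handler:

private void CanvasArea_PointerWheelChanged(object sender, Windows.UI.Xaml.Input.PointerRoutedEventArgs e)
{
    // Wheel input comes only from a mouse, but checking the device
    // type keeps the intent explicit.
    if (e.Pointer.PointerDeviceType == Windows.Devices.Input.PointerDeviceType.Mouse)
    {
        // MouseWheelDelta is positive for forward rotation, negative for backward.
        int delta = e.GetCurrentPoint(canvasArea).Properties.MouseWheelDelta;
        // Use delta to scroll or zoom the content.
    }
}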
Samples: See this functionality in action in our app samples.
Input: Device capabilities sample
Input sample
Input: Gestures and manipulations with GestureRecognizer

Guidelines for visual feedback


When a mouse is detected (through move or hover events), show mouse-specific UI to indicate functionality
exposed by the element. If the mouse doesn't move for a certain amount of time, or if the user initiates a touch
interaction, make the mouse UI gradually fade away. This keeps the UI clean and uncluttered.
Don't use the cursor for hover feedback; the feedback provided by the element is sufficient (see Cursors below).
Don't display visual feedback if an element doesn't support interaction (such as static text).
Don't use focus rectangles with mouse interactions. Reserve these for keyboard interactions.
Display visual feedback concurrently for all elements that represent the same input target.
Provide buttons (such as + and -) for emulating touch-based manipulations such as panning, rotating, zooming,
and so on.
For more general guidance on visual feedback, see Guidelines for visual feedback.

Cursors
A set of standard cursors is available for a mouse pointer. These are used to indicate the primary action of an
element.
Each standard cursor has a corresponding default image associated with it. The user or an app can replace the
default image associated with any standard cursor at any time. Specify a cursor image through the
CoreWindow.PointerCursor property.
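
Here's a minimal sketch of swapping the cursor while the pointer is over a resizable element (the element and handler names are illustrative; the second CoreCursor argument, 0, selects the default system image):

// Show the horizontal resize cursor while the pointer is over the element.
private void ResizeHandle_PointerEntered(object sender, Windows.UI.Xaml.Input.PointerRoutedEventArgs e)
{
    Window.Current.CoreWindow.PointerCursor =
        new Windows.UI.Core.CoreCursor(Windows.UI.Core.CoreCursorType.SizeWestEast, 0);
}

// Restore the arrow cursor when the pointer leaves.
private void ResizeHandle_PointerExited(object sender, Windows.UI.Xaml.Input.PointerRoutedEventArgs e)
{
    Window.Current.CoreWindow.PointerCursor =
        new Windows.UI.Core.CoreCursor(Windows.UI.Core.CoreCursorType.Arrow, 0);
}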
If you need to customize the mouse cursor:
Always use the arrow cursor for clickable elements. Don't use the pointing hand cursor for links or other
interactive elements. Instead, use hover effects (described earlier).
Use the text cursor for selectable text.
Use the move cursor when moving is the primary action (such as dragging or cropping). Don't use the
move cursor for elements where the primary action is navigation (such as Start tiles).
Use the horizontal, vertical, and diagonal resize cursors when an object is resizable.
Use the grasping hand cursors when panning content within a fixed canvas (such as a map).

Related articles
Handle pointer input
Identify input devices
Samples
Basic input sample
Low latency input sample
User interaction mode sample
Focus visuals sample
Archive Samples
Input: Device capabilities sample
Input: XAML user input events sample
XAML scrolling, panning, and zooming sample
Input: Gestures and manipulations with GestureRecognizer
Gamepad and remote control interactions

Universal Windows Platform (UWP) apps now support gamepad and remote control input. Gamepads and remote
controls are the primary input devices for Xbox and TV experiences. UWP apps should be optimized for these input
device types, just like they are for keyboard and mouse input on a PC, and touch input on a phone or tablet. Making
sure that your app works well with these input devices is the most important step when optimizing for Xbox and
TV. You can now plug in and use a gamepad with UWP apps on a PC, which makes it easy to validate this work.
To ensure a successful and enjoyable user experience for your UWP app when using a gamepad or remote control,
you should consider the following:
Hardware buttons - The gamepad and remote provide very different buttons and configurations.
XY focus navigation and interaction - XY focus navigation enables the user to navigate around your app's UI
(see the XAML sketch after this list).
Mouse mode - Mouse mode lets your app emulate a mouse experience when XY focus navigation isn't
sufficient.
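
XY focus normally needs no code, but when the default spatial mapping isn't what you want, you can override the focus target per direction in XAML. A sketch (the button names are illustrative):

<StackPanel Orientation="Horizontal">
    <Button x:Name="playButton" Content="Play"
            XYFocusRight="{x:Bind pauseButton}" />
    <Button x:Name="pauseButton" Content="Pause"
            XYFocusLeft="{x:Bind playButton}" />
</StackPanel>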
Pen interactions and Windows Ink in UWP apps

Surface Pen (available for purchase at the Microsoft Store).

Overview
Optimize your Universal Windows Platform (UWP) app for pen input to provide both standard pointer device
functionality and the best Windows Ink experience for your users.

NOTE
This topic focuses on the Windows Ink platform. For general pointer input handling (similar to mouse, touch, and touchpad),
see Handle pointer input.

VIDEOS

Using ink in your UWP app
Use Windows Pen and Ink to build more engaging enterprise apps

The Windows Ink platform, together with a pen device, provides a natural way to create digital handwritten notes,
drawings, and annotations. The platform supports capturing digitizer input as ink data, generating ink data,
managing ink data, rendering ink data as ink strokes on the output device, and converting ink to text through
handwriting recognition.
In addition to capturing the basic position and movement of the pen as the user writes or draws, your app can also
track and collect the varying amounts of pressure used throughout a stroke. This information, along with settings
for pen tip shape, size, and rotation, ink color, and purpose (plain ink, erasing, highlighting, and selecting), enables
you to provide user experiences that closely resemble writing or drawing on paper with a pen, pencil, or brush.
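
For example, the pressure of the active stroke is exposed through the InkPresenter stroke input events. A minimal sketch (the event wiring, shown in the comment, is assumed to happen in the page constructor):

// inkCanvas.InkPresenter.StrokeInput.StrokeContinued += StrokeInput_StrokeContinued;
private void StrokeInput_StrokeContinued(InkStrokeInput sender, Windows.UI.Core.PointerEventArgs args)
{
    // Pressure ranges from 0.0 to 1.0.
    float pressure = args.CurrentPoint.Properties.Pressure;
    // Use the value, for example, to drive custom feedback UI.
}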

NOTE
Your app can also support ink input from other pointer-based devices, including touch digitizers and mouse devices.

The ink platform is very flexible. It is designed to support various levels of functionality, depending on your
requirements.
For Windows Ink UX guidelines, see Inking controls.

Components of the Windows Ink platform


InkCanvas
A XAML UI platform control that, by default, receives and displays all input from a pen as either an ink stroke or an erase stroke.
For more information about how to use the InkCanvas, see Recognize Windows Ink strokes as text and Store and retrieve Windows Ink stroke data.

InkPresenter
A code-behind object, instantiated along with an InkCanvas control (exposed through the InkCanvas.InkPresenter property). This object provides all default inking functionality exposed by the InkCanvas, along with a comprehensive set of APIs for additional customization and personalization.
For more information about how to use the InkPresenter, see Recognize Windows Ink strokes as text and Store and retrieve Windows Ink stroke data.

InkToolbar
A XAML UI platform control containing a customizable and extensible collection of buttons that activate ink-related features in an associated InkCanvas.
For more information about how to use the InkToolbar, see Add an InkToolbar to a Universal Windows Platform (UWP) inking app.

IInkD2DRenderer
Enables the rendering of ink strokes onto the designated Direct2D device context of a Universal Windows app, instead of the default InkCanvas control. This enables full customization of the inking experience.
For more information, see the Complex ink sample.

Basic inking with InkCanvas


For basic inking functionality, just place an InkCanvas anywhere on a page.
By default, the InkCanvas supports ink input only from a pen. The input is either rendered as an ink stroke using
default settings for color and thickness (a black ballpoint pen with a thickness of 2 pixels), or treated as a stroke
eraser (when input is from an eraser tip or the pen tip modified with an erase button).
NOTE
If an eraser tip or button is not present, the InkCanvas can be configured to process input from the pen tip as an erase
stroke.
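
One way to do this, sketched here, is to switch the InkPresenter's input processing mode when the user chooses an eraser mode in your UI:

// Treat all input as an erase stroke while eraser mode is active.
inkCanvas.InkPresenter.InputProcessingConfiguration.Mode =
    InkInputProcessingMode.Erasing;

// Return to normal inking when the user leaves eraser mode.
inkCanvas.InkPresenter.InputProcessingConfiguration.Mode =
    InkInputProcessingMode.Inking;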

In this example, an InkCanvas overlays a background image.

<Grid Background="{ThemeResource ApplicationPageBackgroundThemeBrush}">


<Grid.RowDefinitions>
<RowDefinition Height="Auto"/>
<RowDefinition Height="*"/>
</Grid.RowDefinitions>
<StackPanel x:Name="HeaderPanel" Orientation="Horizontal" Grid.Row="0">
<TextBlock x:Name="Header"
Text="Basic ink sample"
Style="{ThemeResource HeaderTextBlockStyle}"
Margin="10,0,0,0" />
</StackPanel>
<Grid Grid.Row="1">
<Image Source="Assets\StoreLogo.png" />
<InkCanvas x:Name="inkCanvas" />
</Grid>
</Grid>

This series of images shows how pen input is rendered by this InkCanvas control.

1. The blank InkCanvas with a background image.
2. The InkCanvas with ink strokes.
3. The InkCanvas with one stroke erased (note how erase operates on an entire stroke, not a portion).

The inking functionality supported by the InkCanvas control is provided by a code-behind object called the
InkPresenter.
For basic inking, you don't have to concern yourself with the InkPresenter. However, to customize and configure
inking behavior on the InkCanvas, you must access its corresponding InkPresenter object.

Basic customization with InkPresenter


An InkPresenter object is instantiated with each InkCanvas control.
Along with providing all default inking behaviors of its corresponding InkCanvas control, the InkPresenter
provides a comprehensive set of APIs for additional stroke customization. This includes stroke properties,
supported input device types, and whether input is processed by the object or passed to the app.

NOTE
The InkPresenter cannot be instantiated directly. Instead, it is accessed through the InkPresenter property of the
InkCanvas.
Here, we configure the InkPresenter to interpret input data from both pen and mouse as ink strokes. We also set
some initial ink stroke attributes used for rendering strokes to the InkCanvas.

public MainPage()
{
this.InitializeComponent();

// Set supported inking device types.


inkCanvas.InkPresenter.InputDeviceTypes =
Windows.UI.Core.CoreInputDeviceTypes.Mouse |
Windows.UI.Core.CoreInputDeviceTypes.Pen;

// Set initial ink stroke attributes.


InkDrawingAttributes drawingAttributes = new InkDrawingAttributes();
drawingAttributes.Color = Windows.UI.Colors.Black;
drawingAttributes.IgnorePressure = false;
drawingAttributes.FitToCurve = true;
inkCanvas.InkPresenter.UpdateDefaultDrawingAttributes(drawingAttributes);
}

Ink stroke attributes can be set dynamically to accommodate user preferences or app requirements.
Here, we let a user choose from a list of ink colors.

<Grid Background="{ThemeResource ApplicationPageBackgroundThemeBrush}">


<Grid.RowDefinitions>
<RowDefinition Height="Auto"/>
<RowDefinition Height="*"/>
</Grid.RowDefinitions>
<StackPanel x:Name="HeaderPanel" Orientation="Horizontal" Grid.Row="0">
<TextBlock x:Name="Header"
Text="Basic ink customization sample"
VerticalAlignment="Center"
Style="{ThemeResource HeaderTextBlockStyle}"
Margin="10,0,0,0" />
<TextBlock Text="Color:"
Style="{StaticResource SubheaderTextBlockStyle}"
VerticalAlignment="Center"
Margin="50,0,10,0"/>
<ComboBox x:Name="PenColor"
VerticalAlignment="Center"
SelectedIndex="0"
SelectionChanged="OnPenColorChanged">
<ComboBoxItem Content="Black"/>
<ComboBoxItem Content="Red"/>
</ComboBox>
</StackPanel>
<Grid Grid.Row="1">
<Image Source="Assets\StoreLogo.png" />
<InkCanvas x:Name="inkCanvas" />
</Grid>
</Grid>

We then handle changes to the selected color and update the ink stroke attributes accordingly.
// Update ink stroke color for new strokes.
private void OnPenColorChanged(object sender, SelectionChangedEventArgs e)
{
if (inkCanvas != null)
{
InkDrawingAttributes drawingAttributes =
inkCanvas.InkPresenter.CopyDefaultDrawingAttributes();

string value = ((ComboBoxItem)PenColor.SelectedItem).Content.ToString();

switch (value)
{
case "Black":
drawingAttributes.Color = Windows.UI.Colors.Black;
break;
case "Red":
drawingAttributes.Color = Windows.UI.Colors.Red;
break;
default:
drawingAttributes.Color = Windows.UI.Colors.Black;
break;
};

inkCanvas.InkPresenter.UpdateDefaultDrawingAttributes(drawingAttributes);
}
}

These images show how pen input is processed and customized by the InkPresenter.

The InkCanvas with default black ink strokes. The InkCanvas with user-selected red ink strokes.

To provide functionality beyond inking and erasing, such as stroke selection, your app must identify specific input
for the InkPresenter to pass through unprocessed for handling by your app.

Pass-through input for advanced processing


By default, InkPresenter processes all input as either an ink stroke or an erase stroke. This includes input modified
by a secondary hardware affordance such as a pen barrel button, a right mouse button, or similar.
When using these secondary affordances, users typically expect some additional functionality or a modified
behavior.
In some cases, you might need to expose basic ink functionality for pens without secondary affordances
(functionality not usually associated with the pen tip), other input device types, or additional functionality, or
modified behavior, based on a user selection in your app's UI.
To support this, InkPresenter can be configured to leave specific input unprocessed. This unprocessed input is
then passed through to your app for processing.
The following code example steps through how to enable stroke selection when input is modified with a pen
barrel button (or right mouse button).
For this example, we use the MainPage.xaml and MainPage.xaml.cs files to host all code.
1. First, we set up the UI in MainPage.xaml.
Here, we add a canvas (below the InkCanvas) to draw the selection stroke. Using a separate layer to draw
the selection stroke leaves the InkCanvas and its content untouched.

<Grid Background="{ThemeResource ApplicationPageBackgroundThemeBrush}">


<Grid.RowDefinitions>
<RowDefinition Height="Auto"/>
<RowDefinition Height="*"/>
</Grid.RowDefinitions>
<StackPanel x:Name="HeaderPanel" Orientation="Horizontal" Grid.Row="0">
<TextBlock x:Name="Header"
Text="Advanced ink customization sample"
VerticalAlignment="Center"
Style="{ThemeResource HeaderTextBlockStyle}"
Margin="10,0,0,0" />
</StackPanel>
<Grid Grid.Row="1">
<!-- Canvas for displaying selection UI. -->
<Canvas x:Name="selectionCanvas"/>
<!-- Inking area -->
<InkCanvas x:Name="inkCanvas"/>
</Grid>
</Grid>

2. In MainPage.xaml.cs, we declare a couple of global variables for keeping references to aspects of the
selection UI. Specifically, the selection lasso stroke and the bounding rectangle that highlights the selected
strokes.

// Stroke selection tool.


private Polyline lasso;
// Stroke selection area.
private Rect boundingRect;

3. Next, we configure the InkPresenter to interpret input data from both pen and mouse as ink strokes, and
set some initial ink stroke attributes used for rendering strokes to the InkCanvas.
Most importantly, we use the InputProcessingConfiguration property of the InkPresenter to indicate
that any modified input should be processed by the app. Modified input is specified by assigning
InputProcessingConfiguration.RightDragAction a value of
InkInputRightDragAction.LeaveUnprocessed.
We then assign listeners for the unprocessed PointerPressed, PointerMoved, and PointerReleased
events passed through by the InkPresenter. All selection functionality is implemented in the handlers for
these events.
Finally, we assign listeners for the StrokeStarted and StrokesErased events of the InkPresenter. We use
the handlers for these events to clean up the selection UI if a new stroke is started or an existing stroke is
erased.

public MainPage()
{
this.InitializeComponent();

// Set supported inking device types.


inkCanvas.InkPresenter.InputDeviceTypes =
Windows.UI.Core.CoreInputDeviceTypes.Mouse |
Windows.UI.Core.CoreInputDeviceTypes.Pen;

// Set initial ink stroke attributes.


InkDrawingAttributes drawingAttributes = new InkDrawingAttributes();
drawingAttributes.Color = Windows.UI.Colors.Black;
drawingAttributes.IgnorePressure = false;
drawingAttributes.FitToCurve = true;
inkCanvas.InkPresenter.UpdateDefaultDrawingAttributes(drawingAttributes);

// By default, the InkPresenter processes input modified by


// a secondary affordance (pen barrel button, right mouse
// button, or similar) as ink.
// To pass through modified input to the app for custom processing
// on the app UI thread instead of the background ink thread, set
// InputProcessingConfiguration.RightDragAction to LeaveUnprocessed.
inkCanvas.InkPresenter.InputProcessingConfiguration.RightDragAction =
InkInputRightDragAction.LeaveUnprocessed;

// Listen for unprocessed pointer events from modified input.


// The input is used to provide selection functionality.
inkCanvas.InkPresenter.UnprocessedInput.PointerPressed +=
UnprocessedInput_PointerPressed;
inkCanvas.InkPresenter.UnprocessedInput.PointerMoved +=
UnprocessedInput_PointerMoved;
inkCanvas.InkPresenter.UnprocessedInput.PointerReleased +=
UnprocessedInput_PointerReleased;

// Listen for new ink or erase strokes to clean up selection UI.


inkCanvas.InkPresenter.StrokeInput.StrokeStarted +=
StrokeInput_StrokeStarted;
inkCanvas.InkPresenter.StrokesErased +=
InkPresenter_StrokesErased;
}
4. We then define handlers for the unprocessed PointerPressed, PointerMoved, and PointerReleased
events passed through by the InkPresenter.
All selection functionality is implemented in these handlers, including the lasso stroke and the bounding
rectangle.

// Handle unprocessed pointer events from modified input.


// The input is used to provide selection functionality.
// Selection UI is drawn on a canvas under the InkCanvas.
private void UnprocessedInput_PointerPressed(
InkUnprocessedInput sender, PointerEventArgs args)
{
// Initialize a selection lasso.
lasso = new Polyline()
{
Stroke = new SolidColorBrush(Windows.UI.Colors.Blue),
StrokeThickness = 1,
StrokeDashArray = new DoubleCollection() { 5, 2 },
};

lasso.Points.Add(args.CurrentPoint.RawPosition);

selectionCanvas.Children.Add(lasso);
}

private void UnprocessedInput_PointerMoved(


InkUnprocessedInput sender, PointerEventArgs args)
{
// Add a point to the lasso Polyline object.
lasso.Points.Add(args.CurrentPoint.RawPosition);
}

private void UnprocessedInput_PointerReleased(


InkUnprocessedInput sender, PointerEventArgs args)
{
// Add the final point to the Polyline object and
// select strokes within the lasso area.
// Draw a bounding box on the selection canvas
// around the selected ink strokes.
lasso.Points.Add(args.CurrentPoint.RawPosition);

boundingRect =
inkCanvas.InkPresenter.StrokeContainer.SelectWithPolyLine(
lasso.Points);

DrawBoundingRect();
}
5. To conclude the PointerReleased event handler, we clear the selection layer of all content (the lasso stroke)
and then draw a single bounding rectangle around the ink strokes encompassed by the lasso area.

// Draw a bounding rectangle, on the selection canvas, encompassing


// all ink strokes within the lasso area.
private void DrawBoundingRect()
{
// Clear all existing content from the selection canvas.
selectionCanvas.Children.Clear();

// Draw a bounding rectangle only if there are ink strokes


// within the lasso area.
if (!((boundingRect.Width == 0) ||
(boundingRect.Height == 0) ||
boundingRect.IsEmpty))
{
var rectangle = new Rectangle()
{
Stroke = new SolidColorBrush(Windows.UI.Colors.Blue),
StrokeThickness = 1,
StrokeDashArray = new DoubleCollection() { 5, 2 },
Width = boundingRect.Width,
Height = boundingRect.Height
};

Canvas.SetLeft(rectangle, boundingRect.X);
Canvas.SetTop(rectangle, boundingRect.Y);

selectionCanvas.Children.Add(rectangle);
}
}

6. Finally, we define handlers for the StrokeStarted and StrokesErased InkPresenter events.
Both just call the same cleanup function to clear the current selection whenever a new stroke is started or an
existing stroke is erased.
// Handle new ink or erase strokes to clean up selection UI.
private void StrokeInput_StrokeStarted(
InkStrokeInput sender, Windows.UI.Core.PointerEventArgs args)
{
ClearSelection();
}

private void InkPresenter_StrokesErased(


InkPresenter sender, InkStrokesErasedEventArgs args)
{
ClearSelection();
}

7. Here's the function to remove all selection UI from the selection canvas when a new stroke is started or an
existing stroke is erased.

// Clean up selection UI.


private void ClearSelection()
{
var strokes = inkCanvas.InkPresenter.StrokeContainer.GetStrokes();
foreach (var stroke in strokes)
{
stroke.Selected = false;
}
ClearDrawnBoundingRect();
}

private void ClearDrawnBoundingRect()


{
if (selectionCanvas.Children.Any())
{
selectionCanvas.Children.Clear();
boundingRect = Rect.Empty;
}
}

Custom ink rendering


By default, ink input is processed on a low-latency background thread and rendered "wet" as it is drawn. When the
stroke is completed (pen or finger lifted, or mouse button released), the stroke is processed on the UI thread and
rendered "dry" to the InkCanvas layer (above the application content and replacing the wet ink).
The ink platform enables you to override this behavior and completely customize the inking experience by custom
drying the ink input.
Custom drying requires an IInkD2DRenderer object to manage the ink input and render it to the Direct2D device
context of your Universal Windows app, instead of the default InkCanvas control.
By calling ActivateCustomDrying (before the InkCanvas is loaded), an app creates an InkSynchronizer object
to customize how an ink stroke is rendered dry to a SurfaceImageSource or VirtualSurfaceImageSource. For
example, an ink stroke could be rasterized and integrated into application content instead of as a separate
InkCanvas layer.
For a full example of this functionality, see the Complex ink sample.
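
The activation itself is a single call, sketched below; it must happen before the InkCanvas is loaded, and the returned InkSynchronizer controls when wet strokes are dried:

// Call before the InkCanvas is loaded.
InkSynchronizer inkSynchronizer = inkCanvas.InkPresenter.ActivateCustomDrying();

// Later, when wet strokes are ready to dry: take ownership of the
// strokes, render them to your own surface, then remove the wet
// versions from the InkCanvas.
IReadOnlyList<InkStroke> strokesToDry = inkSynchronizer.BeginDry();
// ... render strokesToDry (for example, with IInkD2DRenderer) ...
inkSynchronizer.EndDry();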
NOTE
Custom drying and the InkToolbar
If your app overrides the default ink rendering behavior of the InkPresenter with a custom drying implementation, the
rendered ink strokes are no longer available to the InkToolbar and the built-in erase commands of the InkToolbar do not
work as expected. To provide erase functionality, you must handle all pointer events, perform hit-testing on each stroke, and
override the built-in "Erase all ink" command.

Other articles in this section


Recognize ink strokes
Convert ink strokes to text using handwriting recognition, or to shapes using custom recognition.

Store and retrieve ink strokes
Store ink stroke data in a Graphics Interchange Format (GIF) file using embedded Ink Serialized Format (ISF) metadata.

Add an InkToolbar to a UWP inking app
Add a default InkToolbar to a Universal Windows Platform (UWP) inking app, add a custom pen button to the InkToolbar, and bind the custom pen button to a custom pen definition.

Related articles
Handle pointer input
Identify input devices
APIs
Windows.Devices.Input
Windows.UI.Input.Inking
Windows.UI.Input.Inking.Core
Samples
Ink sample
Simple ink sample
Complex ink sample
Coloring book sample
Family notes sample
Basic input sample
Low latency input sample
User interaction mode sample
Focus visuals sample
Archive Samples
Input: Device capabilities sample
Input: XAML user input events sample
XAML scrolling, panning, and zooming sample
Input: Gestures and manipulations with GestureRecognizer
Recognize Windows Ink strokes as text

Convert ink strokes to text using the handwriting recognition support in Windows Ink.

Important APIs
InkCanvas
Windows.UI.Input.Inking

Handwriting recognition is built into the Windows Ink platform and supports an extensive set of locales and
languages.
For all examples here, add the namespace references required for ink functionality. This includes
"Windows.UI.Input.Inking".

Basic handwriting recognition


Here, we demonstrate how to use the handwriting recognition engine, associated with the default installed
language pack, to interpret a set of strokes on an InkCanvas.
The recognition is initiated by the user clicking a button when they are finished writing.
1. First, we set up the UI.
The UI includes a "Recognize" button, the InkCanvas, and an area to display recognition results.

<Grid Background="{ThemeResource ApplicationPageBackgroundThemeBrush}">


<Grid.RowDefinitions>
<RowDefinition Height="Auto"/>
<RowDefinition Height="*"/>
</Grid.RowDefinitions>
<StackPanel x:Name="HeaderPanel"
Orientation="Horizontal"
Grid.Row="0">
<TextBlock x:Name="Header"
Text="Basic ink recognition sample"
Style="{ThemeResource HeaderTextBlockStyle}"
Margin="10,0,0,0" />
<Button x:Name="recognize"
Content="Recognize"
Margin="50,0,10,0"/>
</StackPanel>
<Grid Grid.Row="1">
<Grid.RowDefinitions>
<RowDefinition Height="*"/>
<RowDefinition Height="Auto"/>
</Grid.RowDefinitions>
<InkCanvas x:Name="inkCanvas"
Grid.Row="0"/>
<TextBlock x:Name="recognitionResult"
Grid.Row="1"
Margin="50,0,10,0"/>
</Grid>
</Grid>
2. We then set some basic ink input behaviors.
The InkPresenter is configured to interpret input data from both pen and mouse as ink strokes
(InputDeviceTypes). Ink strokes are rendered on the InkCanvas using the specified
InkDrawingAttributes. A listener for the click event on the "Recognize" button is also declared.

public MainPage()
{
this.InitializeComponent();

// Set supported inking device types.


inkCanvas.InkPresenter.InputDeviceTypes =
Windows.UI.Core.CoreInputDeviceTypes.Mouse |
Windows.UI.Core.CoreInputDeviceTypes.Pen;

// Set initial ink stroke attributes.


InkDrawingAttributes drawingAttributes = new InkDrawingAttributes();
drawingAttributes.Color = Windows.UI.Colors.Black;
drawingAttributes.IgnorePressure = false;
drawingAttributes.FitToCurve = true;
inkCanvas.InkPresenter.UpdateDefaultDrawingAttributes(drawingAttributes);

// Listen for button click to initiate recognition.


recognize.Click += Recognize_Click;
}

3. Finally, we perform the basic handwriting recognition. For this example, we use the click event handler of
the "Recognize" button to perform the handwriting recognition.
An InkPresenter stores all ink strokes in an InkStrokeContainer object. The strokes are exposed through
the StrokeContainer property of the InkPresenter and retrieved using the GetStrokes method.

// Get all strokes on the InkCanvas.


IReadOnlyList<InkStroke> currentStrokes = inkCanvas.InkPresenter.StrokeContainer.GetStrokes();

An InkRecognizerContainer is created to manage the handwriting recognition process.

// Create a manager for the InkRecognizer object


// used in handwriting recognition.
InkRecognizerContainer inkRecognizerContainer =
new InkRecognizerContainer();

RecognizeAsync is called to retrieve a set of InkRecognitionResult objects.


Recognition results are produced for each word that is detected by an InkRecognizer.

// Recognize all ink strokes on the ink canvas.


IReadOnlyList<InkRecognitionResult> recognitionResults =
await inkRecognizerContainer.RecognizeAsync(
inkCanvas.InkPresenter.StrokeContainer,
InkRecognitionTarget.All);

Each InkRecognitionResult object contains a set of text candidates. The topmost item in this list is
considered by the recognition engine to be the best match, followed by the remaining candidates in order of
decreasing confidence.
We iterate through each InkRecognitionResult and compile the list of candidates. The candidates are then
displayed and the InkStrokeContainer is cleared (which also clears the InkCanvas).
string str = "Recognition result\n";
// Iterate through the recognition results.
foreach (var result in recognitionResults)
{
// Get all recognition candidates from each recognition result.
IReadOnlyList<string> candidates = result.GetTextCandidates();
str += "Candidates: " + candidates.Count.ToString() + "\n";
foreach (string candidate in candidates)
{
str += candidate + " ";
}
}
// Display the recognition candidates.
recognitionResult.Text = str;
// Clear the ink canvas once recognition is complete.
inkCanvas.InkPresenter.StrokeContainer.Clear();

Here's the click handler example, in full.


// Handle button click to initiate recognition.
private async void Recognize_Click(object sender, RoutedEventArgs e)
{
// Get all strokes on the InkCanvas.
IReadOnlyList<InkStroke> currentStrokes = inkCanvas.InkPresenter.StrokeContainer.GetStrokes();

// Ensure an ink stroke is present.


if (currentStrokes.Count > 0)
{
// Create a manager for the InkRecognizer object
// used in handwriting recognition.
InkRecognizerContainer inkRecognizerContainer =
new InkRecognizerContainer();

// inkRecognizerContainer is null if a recognition engine is not available.


if (!(inkRecognizerContainer == null))
{
// Recognize all ink strokes on the ink canvas.
IReadOnlyList<InkRecognitionResult> recognitionResults =
await inkRecognizerContainer.RecognizeAsync(
inkCanvas.InkPresenter.StrokeContainer,
InkRecognitionTarget.All);
// Process and display the recognition results.
if (recognitionResults.Count > 0)
{
string str = "Recognition result\n";
// Iterate through the recognition results.
foreach (var result in recognitionResults)
{
// Get all recognition candidates from each recognition result.
IReadOnlyList<string> candidates = result.GetTextCandidates();
str += "Candidates: " + candidates.Count.ToString() + "\n";
foreach (string candidate in candidates)
{
str += candidate + " ";
}
}
// Display the recognition candidates.
recognitionResult.Text = str;
// Clear the ink canvas once recognition is complete.
inkCanvas.InkPresenter.StrokeContainer.Clear();
}
else
{
recognitionResult.Text = "No recognition results.";
}
}
else
{
Windows.UI.Popups.MessageDialog messageDialog = new Windows.UI.Popups.MessageDialog("You must install a handwriting recognition engine.");
await messageDialog.ShowAsync();
}
}
else
{
recognitionResult.Text = "No ink strokes to recognize.";
}
}
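
If your app only needs the single best match rather than every candidate, a minimal variation on the candidate loop keeps just the first (highest-confidence) candidate from each result:

// Build a sentence from the top candidate of each recognized word.
string recognizedText = "";
foreach (InkRecognitionResult result in recognitionResults)
{
    IReadOnlyList<string> candidates = result.GetTextCandidates();
    if (candidates.Count > 0)
    {
        // The first candidate is the recognizer's best match.
        recognizedText += candidates[0] + " ";
    }
}
recognitionResult.Text = recognizedText;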

International recognition
A comprehensive subset of the languages supported by Windows can be used for handwriting recognition.
For a list of languages supported by the InkRecognizer, see the InkRecognizer.Name property.
Your app can query the set of installed handwriting recognition engines and use one of those, or let a user select
their preferred language.
Note
Users can see a list of installed languages by going to Settings > Time & language. Installed languages are
listed under Languages.
To install new language packs and enable handwriting recognition for that language:
1. Go to Settings > Time & language > Region & language.
2. Select Add a language.
3. Select a language from the list, then choose the region version. The language is now listed on the Region &
language page.
4. Click the language and select Options.
5. On the Language options page, download the Handwriting recognition engine (the full language pack,
speech recognition engine, and keyboard layout can also be downloaded here).
Here, we demonstrate how to use the handwriting recognition engine to interpret a set of strokes on an
InkCanvas based on the selected recognizer.
The recognition is initiated by the user clicking a button when they are finished writing.
1. First, we set up the UI.
The UI includes a "Recognize" button, a combo box that lists all installed handwriting recognizers, the
InkCanvas, and an area to display recognition results.
<Grid Background="{ThemeResource ApplicationPageBackgroundThemeBrush}">
<Grid.RowDefinitions>
<RowDefinition Height="Auto"/>
<RowDefinition Height="*"/>
</Grid.RowDefinitions>
<StackPanel x:Name="HeaderPanel"
Orientation="Horizontal"
Grid.Row="0">
<TextBlock x:Name="Header"
Text="Advanced international ink recognition sample"
Style="{ThemeResource HeaderTextBlockStyle}"
Margin="10,0,0,0" />
<ComboBox x:Name="comboInstalledRecognizers"
Margin="50,0,10,0">
<ComboBox.ItemTemplate>
<DataTemplate>
<StackPanel Orientation="Horizontal">
<TextBlock Text="{Binding Name}" />
</StackPanel>
</DataTemplate>
</ComboBox.ItemTemplate>
</ComboBox>
<Button x:Name="buttonRecognize"
Content="Recognize"
IsEnabled="False"
Margin="50,0,10,0"/>
</StackPanel>
<Grid Grid.Row="1">
<Grid.RowDefinitions>
<RowDefinition Height="*"/>
<RowDefinition Height="Auto"/>
</Grid.RowDefinitions>
<InkCanvas x:Name="inkCanvas"
Grid.Row="0"/>
<TextBlock x:Name="recognitionResult"
Grid.Row="1"
Margin="50,0,10,0"/>
</Grid>
</Grid>

2. We then set some basic ink input behaviors.


The InkPresenter is configured to interpret input data from both pen and mouse as ink strokes
(InputDeviceTypes). Ink strokes are rendered on the InkCanvas using the specified
InkDrawingAttributes.
We call an InitializeRecognizerList function to populate the recognizer combo box with a list of installed
handwriting recognizers.
We also declare listeners for the click event on the "Recognize" button and the selection changed event on
the recognizer combo box.
public MainPage()
{
this.InitializeComponent();

// Set supported inking device types.


inkCanvas.InkPresenter.InputDeviceTypes =
Windows.UI.Core.CoreInputDeviceTypes.Mouse |
Windows.UI.Core.CoreInputDeviceTypes.Pen;

// Set initial ink stroke attributes.


InkDrawingAttributes drawingAttributes = new InkDrawingAttributes();
drawingAttributes.Color = Windows.UI.Colors.Black;
drawingAttributes.IgnorePressure = false;
drawingAttributes.FitToCurve = true;
inkCanvas.InkPresenter.UpdateDefaultDrawingAttributes(drawingAttributes);

// Populate the recognizer combo box with installed recognizers.


InitializeRecognizerList();

// Listen for combo box selection.


comboInstalledRecognizers.SelectionChanged +=
comboInstalledRecognizers_SelectionChanged;

// Listen for button click to initiate recognition.


buttonRecognize.Click += Recognize_Click;
}

3. We populate the recognizer combo box with a list of installed handwriting recognizers.
An InkRecognizerContainer is created to manage the handwriting recognition process. Use this object to
call GetRecognizers and retrieve the list of installed recognizers to populate the recognizer combo box.

// Populate the recognizer combo box with installed recognizers.


private void InitializeRecognizerList()
{
// Create a manager for the handwriting recognition process.
inkRecognizerContainer = new InkRecognizerContainer();
// Retrieve the collection of installed handwriting recognizers.
IReadOnlyList<InkRecognizer> installedRecognizers =
inkRecognizerContainer.GetRecognizers();
// inkRecognizerContainer is null if a recognition engine is not available.
if (!(inkRecognizerContainer == null))
{
comboInstalledRecognizers.ItemsSource = installedRecognizers;
buttonRecognize.IsEnabled = true;
}
}

4. Update the handwriting recognizer if the recognizer combo box selection changes.
Use the InkRecognizerContainer to call SetDefaultRecognizer based on the selected recognizer from
the recognizer combo box.

// Handle recognizer change.


private void comboInstalledRecognizers_SelectionChanged(
object sender, SelectionChangedEventArgs e)
{
inkRecognizerContainer.SetDefaultRecognizer(
(InkRecognizer)comboInstalledRecognizers.SelectedItem);
}

5. Finally, we perform the handwriting recognition based on the selected handwriting recognizer. For this
example, we use the click event handler of the "Recognize" button to perform the handwriting recognition.
An InkPresenter stores all ink strokes in an InkStrokeContainer object. The strokes are exposed through
the StrokeContainer property of the InkPresenter and retrieved using the GetStrokes method.

// Get all strokes on the InkCanvas.


IReadOnlyList<InkStroke> currentStrokes =
inkCanvas.InkPresenter.StrokeContainer.GetStrokes();

RecognizeAsync is called to retrieve a set of InkRecognitionResult objects.


Recognition results are produced for each word that is detected by an InkRecognizer.

// Recognize all ink strokes on the ink canvas.


IReadOnlyList<InkRecognitionResult> recognitionResults =
await inkRecognizerContainer.RecognizeAsync(
inkCanvas.InkPresenter.StrokeContainer,
InkRecognitionTarget.All);

Each InkRecognitionResult object contains a set of text candidates. The topmost item in this list is
considered by the recognition engine to be the best match, followed by the remaining candidates in order of
decreasing confidence.
We iterate through each InkRecognitionResult and compile the list of candidates. The candidates are then
displayed and the InkStrokeContainer is cleared (which also clears the InkCanvas).

string str = "Recognition result\n";


// Iterate through the recognition results.
foreach (InkRecognitionResult result in recognitionResults)
{
// Get all recognition candidates from each recognition result.
IReadOnlyList<string> candidates =
result.GetTextCandidates();
str += "Candidates: " + candidates.Count.ToString() + "\n";
foreach (string candidate in candidates)
{
str += candidate + " ";
}
}
// Display the recognition candidates.
recognitionResult.Text = str;
// Clear the ink canvas once recognition is complete.
inkCanvas.InkPresenter.StrokeContainer.Clear();

Here's the click handler example, in full.


// Handle button click to initiate recognition.
private async void Recognize_Click(object sender, RoutedEventArgs e)
{
// Get all strokes on the InkCanvas.
IReadOnlyList<InkStroke> currentStrokes =
inkCanvas.InkPresenter.StrokeContainer.GetStrokes();

// Ensure an ink stroke is present.


if (currentStrokes.Count > 0)
{
// inkRecognizerContainer is null if a recognition engine is not available.
if (!(inkRecognizerContainer == null))
{
// Recognize all ink strokes on the ink canvas.
IReadOnlyList<InkRecognitionResult> recognitionResults =
await inkRecognizerContainer.RecognizeAsync(
inkCanvas.InkPresenter.StrokeContainer,
InkRecognitionTarget.All);
// Process and display the recognition results.
if (recognitionResults.Count > 0)
{
string str = "Recognition result\n";
// Iterate through the recognition results.
foreach (InkRecognitionResult result in recognitionResults)
{
// Get all recognition candidates from each recognition result.
IReadOnlyList<string> candidates =
result.GetTextCandidates();
str += "Candidates: " + candidates.Count.ToString() + "\n";
foreach (string candidate in candidates)
{
str += candidate + " ";
}
}
// Display the recognition candidates.
recognitionResult.Text = str;
// Clear the ink canvas once recognition is complete.
inkCanvas.InkPresenter.StrokeContainer.Clear();
}
else
{
recognitionResult.Text = "No recognition results.";
}
}
else
{
Windows.UI.Popups.MessageDialog messageDialog =
new Windows.UI.Popups.MessageDialog(
"You must install handwriting recognition engine.");
await messageDialog.ShowAsync();
}
}
else
{
recognitionResult.Text = "No ink strokes to recognize.";
}
}

Dynamic handwriting recognition


The previous two examples require the user to press a button to start recognition. Your app can also perform
dynamic recognition using stroke input paired with a basic timing function.
For this example, we'll use the same UI and stroke settings as the previous international recognition example.
1. Like the previous examples, the InkPresenter is configured to interpret input data from both pen and
mouse as ink strokes (InputDeviceTypes), and ink strokes are rendered on the InkCanvas using the
specified InkDrawingAttributes.
Instead of a button to initiate recognition, we add listeners for two InkPresenter stroke events
(StrokesCollected and StrokeStarted), and set up a basic timer (DispatcherTimer) with a one second
Tick interval.
public MainPage()
{
this.InitializeComponent();

// Set supported inking device types.


inkCanvas.InkPresenter.InputDeviceTypes =
Windows.UI.Core.CoreInputDeviceTypes.Mouse |
Windows.UI.Core.CoreInputDeviceTypes.Pen;

// Set initial ink stroke attributes.


InkDrawingAttributes drawingAttributes = new InkDrawingAttributes();
drawingAttributes.Color = Windows.UI.Colors.Black;
drawingAttributes.IgnorePressure = false;
drawingAttributes.FitToCurve = true;
inkCanvas.InkPresenter.UpdateDefaultDrawingAttributes(drawingAttributes);

// Populate the recognizer combo box with installed recognizers.


InitializeRecognizerList();

// Listen for combo box selection.


comboInstalledRecognizers.SelectionChanged +=
comboInstalledRecognizers_SelectionChanged;

// Listen for stroke events on the InkPresenter to


// enable dynamic recognition.
// StrokesCollected is fired when the user stops inking by
// lifting their pen or finger, or releasing the mouse button.
inkCanvas.InkPresenter.StrokesCollected +=
inkCanvas_StrokesCollected;
// StrokeStarted is fired when ink input is first detected.
inkCanvas.InkPresenter.StrokeInput.StrokeStarted +=
inkCanvas_StrokeStarted;

// Timer to manage dynamic recognition.


recoTimer = new DispatcherTimer();
recoTimer.Interval = new TimeSpan(0, 0, 1);
recoTimer.Tick += recoTimer_Tick;
}

2. Here are the handlers for the three events we added in the first step.
StrokesCollected
Start the recognition timer when the user stops inking by lifting their pen or finger, or releasing the mouse
button. After one second of no ink input, recognition is initiated.
StrokeStarted
If a new stroke starts before the next timer tick event, stop the timer as the new stroke is likely the
continuation of a single handwriting entry.
Tick
Call the recognition function after one second of no ink input.

// Handler for the timer tick event calls the recognition function.
private void recoTimer_Tick(object sender, object e)
{
Recognize_Tick();
}

// Handler for the InkPresenter StrokeStarted event.


// If a new stroke starts before the next timer tick event,
// stop the timer as the new stroke is likely the continuation
// of a single handwriting entry.
private void inkCanvas_StrokeStarted(InkStrokeInput sender, PointerEventArgs args)
{
recoTimer.Stop();
}

// Handler for the InkPresenter StrokesCollected event.


// Start the recognition timer when the user stops inking by
// lifting their pen or finger, or releasing the mouse button.
// After one second of no ink input, recognition is initiated.
private void inkCanvas_StrokesCollected(InkPresenter sender, InkStrokesCollectedEventArgs args)
{
recoTimer.Start();
}

3. Finally, we perform the handwriting recognition based on the selected handwriting recognizer. For this
example, we use the Tick event handler of a DispatcherTimer to initiate the handwriting recognition.
An InkPresenter stores all ink strokes in an InkStrokeContainer object. The strokes are exposed through
the StrokeContainer property of the InkPresenter and retrieved using the GetStrokes method.

// Get all strokes on the InkCanvas.


IReadOnlyList<InkStroke> currentStrokes = inkCanvas.InkPresenter.StrokeContainer.GetStrokes();

RecognizeAsync is called to retrieve a set of InkRecognitionResult objects.


Recognition results are produced for each word that is detected by an InkRecognizer.

// Recognize all ink strokes on the ink canvas.


IReadOnlyList<InkRecognitionResult> recognitionResults =
await inkRecognizerContainer.RecognizeAsync(
inkCanvas.InkPresenter.StrokeContainer,
InkRecognitionTarget.All);

Each InkRecognitionResult object contains a set of text candidates. The topmost item in this list is
considered by the recognition engine to be the best match, followed by the remaining candidates in order of
decreasing confidence.
We iterate through each InkRecognitionResult and compile the list of candidates. The candidates are then
displayed and the InkStrokeContainer is cleared (which also clears the InkCanvas).
string str = "Recognition result\n";
// Iterate through the recognition results.
foreach (InkRecognitionResult result in recognitionResults)
{
// Get all recognition candidates from each recognition result.
IReadOnlyList<string> candidates = result.GetTextCandidates();
str += "Candidates: " + candidates.Count.ToString() + "\n";
foreach (string candidate in candidates)
{
str += candidate + " ";
}
}
// Display the recognition candidates.
recognitionResult.Text = str;
// Clear the ink canvas once recognition is complete.
inkCanvas.InkPresenter.StrokeContainer.Clear();

Here's the recognition function, in full.


// Respond to timer Tick and initiate recognition.
private async void Recognize_Tick()
{
// Get all strokes on the InkCanvas.
IReadOnlyList<InkStroke> currentStrokes = inkCanvas.InkPresenter.StrokeContainer.GetStrokes();

// Ensure an ink stroke is present.


if (currentStrokes.Count > 0)
{
// inkRecognizerContainer is null if a recognition engine is not available.
if (!(inkRecognizerContainer == null))
{
// Recognize all ink strokes on the ink canvas.
IReadOnlyList<InkRecognitionResult> recognitionResults =
await inkRecognizerContainer.RecognizeAsync(
inkCanvas.InkPresenter.StrokeContainer,
InkRecognitionTarget.All);
// Process and display the recognition results.
if (recognitionResults.Count > 0)
{
string str = "Recognition result\n";
// Iterate through the recognition results.
foreach (InkRecognitionResult result in recognitionResults)
{
// Get all recognition candidates from each recognition result.
IReadOnlyList<string> candidates = result.GetTextCandidates();
str += "Candidates: " + candidates.Count.ToString() + "\n";
foreach (string candidate in candidates)
{
str += candidate + " ";
}
}
// Display the recognition candidates.
recognitionResult.Text = str;
// Clear the ink canvas once recognition is complete.
inkCanvas.InkPresenter.StrokeContainer.Clear();
}
else
{
recognitionResult.Text = "No recognition results.";
}
}
else
{
Windows.UI.Popups.MessageDialog messageDialog = new Windows.UI.Popups.MessageDialog("You must install a handwriting recognition engine.");
await messageDialog.ShowAsync();
}
}
else
{
recognitionResult.Text = "No ink strokes to recognize.";
}

// Stop the dynamic recognition timer.


recoTimer.Stop();
}

Related articles
Pen and stylus interactions
Samples
Ink sample
Simple ink sample
Complex ink sample
Coloring book sample
Family notes sample
Store and retrieve Windows Ink stroke data

UWP apps that support Windows Ink can serialize and deserialize ink strokes to an Ink Serialized Format (ISF) file.
The ISF file is a GIF image with additional metadata for all ink stroke properties and behaviors. Apps that are not
ink-enabled can view the static GIF image, including alpha-channel background transparency.

Important APIs
InkCanvas
Windows.UI.Input.Inking

NOTE
ISF is the most compact persistent representation of ink. It can be embedded within a binary document format, such as a GIF
file, or placed directly on the Clipboard.
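
Ink can also move through the clipboard directly, without an intermediate file. A minimal sketch using the InkStrokeContainer clipboard helpers (the paste position is illustrative):

// Copy the currently selected strokes to the clipboard.
inkCanvas.InkPresenter.StrokeContainer.CopySelectedToClipboard();

// Paste clipboard ink at a given position, if compatible ink data is available.
if (inkCanvas.InkPresenter.StrokeContainer.CanPasteFromClipboard())
{
    inkCanvas.InkPresenter.StrokeContainer.PasteFromClipboard(
        new Windows.Foundation.Point(10, 10));
}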

Save ink strokes to a file


Here, we demonstrate how to save ink strokes drawn on an InkCanvas control.
1. First, we set up the UI.
The UI includes "Save", "Load", and "Clear" buttons, and the InkCanvas.

<Grid Background="{ThemeResource ApplicationPageBackgroundThemeBrush}">


<Grid.RowDefinitions>
<RowDefinition Height="Auto"/>
<RowDefinition Height="*"/>
</Grid.RowDefinitions>
<StackPanel x:Name="HeaderPanel" Orientation="Horizontal" Grid.Row="0">
<TextBlock x:Name="Header"
Text="Basic ink store sample"
Style="{ThemeResource HeaderTextBlockStyle}"
Margin="10,0,0,0" />
<Button x:Name="btnSave"
Content="Save"
Margin="50,0,10,0"/>
<Button x:Name="btnLoad"
Content="Load"
Margin="50,0,10,0"/>
<Button x:Name="btnClear"
Content="Clear"
Margin="50,0,10,0"/>
</StackPanel>
<Grid Grid.Row="1">
<InkCanvas x:Name="inkCanvas" />
</Grid>
</Grid>

2. We then set some basic ink input behaviors.


The InkPresenter is configured to interpret input data from both pen and mouse as ink strokes
(InputDeviceTypes), and listeners for the click events on the buttons are declared.
public MainPage()
{
this.InitializeComponent();

// Set supported inking device types.


inkCanvas.InkPresenter.InputDeviceTypes =
Windows.UI.Core.CoreInputDeviceTypes.Mouse |
Windows.UI.Core.CoreInputDeviceTypes.Pen;

// Listen for button click to initiate save.
btnSave.Click += btnSave_Click;
// Listen for button click to initiate load.
btnLoad.Click += btnLoad_Click;
// Listen for button click to clear ink canvas.
btnClear.Click += btnClear_Click;
}

3. Finally, we save the ink in the click event handler of the Save button.
A FileSavePicker lets the user select both the file and the location where the ink data is saved.
Once a file is selected, we open an IRandomAccessStream stream set to ReadWrite.
We then call SaveAsync to serialize the ink strokes managed by the InkStrokeContainer to the stream.
// Save ink data to a file.
private async void btnSave_Click(object sender, RoutedEventArgs e)
{
// Get all strokes on the InkCanvas.
IReadOnlyList<InkStroke> currentStrokes = inkCanvas.InkPresenter.StrokeContainer.GetStrokes();

// Ensure ink strokes are present on the ink canvas.
if (currentStrokes.Count > 0)
{
// Let users choose their ink file using a file picker.
// Initialize the picker.
Windows.Storage.Pickers.FileSavePicker savePicker =
new Windows.Storage.Pickers.FileSavePicker();
savePicker.SuggestedStartLocation =
Windows.Storage.Pickers.PickerLocationId.DocumentsLibrary;
savePicker.FileTypeChoices.Add(
"GIF with embedded ISF",
new List<string>() { ".gif" });
savePicker.DefaultFileExtension = ".gif";
savePicker.SuggestedFileName = "InkSample";

// Show the file picker.
Windows.Storage.StorageFile file =
await savePicker.PickSaveFileAsync();
// When chosen, picker returns a reference to the selected file.
if (file != null)
{
// Prevent updates to the file until updates are
// finalized with call to CompleteUpdatesAsync.
Windows.Storage.CachedFileManager.DeferUpdates(file);
// Open a file stream for writing.
IRandomAccessStream stream = await file.OpenAsync(Windows.Storage.FileAccessMode.ReadWrite);
// Write the ink strokes to the output stream.
using (IOutputStream outputStream = stream.GetOutputStreamAt(0))
{
await inkCanvas.InkPresenter.StrokeContainer.SaveAsync(outputStream);
await outputStream.FlushAsync();
}
stream.Dispose();

// Finalize the write so other apps can update the file.
Windows.Storage.Provider.FileUpdateStatus status =
await Windows.Storage.CachedFileManager.CompleteUpdatesAsync(file);

if (status == Windows.Storage.Provider.FileUpdateStatus.Complete)
{
// File saved.
}
else
{
// File couldn't be saved.
}
}
// User selects Cancel and picker returns null.
else
{
// Operation cancelled.
}
}
}
NOTE
GIF is the only file format supported for saving ink data. However, the LoadAsync method (demonstrated in the next
section) does support additional formats for backward compatibility.

Load ink strokes from a file


Here, we demonstrate how to load ink strokes from a file and render them on an InkCanvas control.
1. First, we set up the UI.
The UI includes "Save", "Load", and "Clear" buttons, and the InkCanvas.

<Grid Background="{ThemeResource ApplicationPageBackgroundThemeBrush}">
<Grid.RowDefinitions>
<RowDefinition Height="Auto"/>
<RowDefinition Height="*"/>
</Grid.RowDefinitions>
<StackPanel x:Name="HeaderPanel" Orientation="Horizontal" Grid.Row="0">
<TextBlock x:Name="Header"
Text="Basic ink store sample"
Style="{ThemeResource HeaderTextBlockStyle}"
Margin="10,0,0,0" />
<Button x:Name="btnSave"
Content="Save"
Margin="50,0,10,0"/>
<Button x:Name="btnLoad"
Content="Load"
Margin="50,0,10,0"/>
<Button x:Name="btnClear"
Content="Clear"
Margin="50,0,10,0"/>
</StackPanel>
<Grid Grid.Row="1">
<InkCanvas x:Name="inkCanvas" />
</Grid>
</Grid>

2. We then set some basic ink input behaviors.


The InkPresenter is configured to interpret input data from both pen and mouse as ink strokes
(InputDeviceTypes), and listeners for the click events on the buttons are declared.

public MainPage()
{
this.InitializeComponent();

// Set supported inking device types.
inkCanvas.InkPresenter.InputDeviceTypes =
Windows.UI.Core.CoreInputDeviceTypes.Mouse |
Windows.UI.Core.CoreInputDeviceTypes.Pen;

// Listen for button click to initiate save.
btnSave.Click += btnSave_Click;
// Listen for button click to initiate load.
btnLoad.Click += btnLoad_Click;
// Listen for button click to clear ink canvas.
btnClear.Click += btnClear_Click;
}

3. Finally, we load the ink in the click event handler of the Load button.
A FileOpenPicker lets the user select both the file and the location from where to retrieve the saved ink
data.
Once a file is selected, we open an IRandomAccessStream stream set to Read.
We then call LoadAsync to read, de-serialize, and load the saved ink strokes into the InkStrokeContainer.
Loading the strokes into the InkStrokeContainer causes the InkPresenter to immediately render them to
the InkCanvas.

NOTE
All existing strokes in the InkStrokeContainer are cleared before new strokes are loaded.

// Load ink data from a file.
private async void btnLoad_Click(object sender, RoutedEventArgs e)
{
// Let users choose their ink file using a file picker.
// Initialize the picker.
Windows.Storage.Pickers.FileOpenPicker openPicker =
new Windows.Storage.Pickers.FileOpenPicker();
openPicker.SuggestedStartLocation =
Windows.Storage.Pickers.PickerLocationId.DocumentsLibrary;
openPicker.FileTypeFilter.Add(".gif");
// Show the file picker.
Windows.Storage.StorageFile file = await openPicker.PickSingleFileAsync();
// User selects a file and picker returns a reference to the selected file.
if (file != null)
{
// Open a file stream for reading.
IRandomAccessStream stream = await file.OpenAsync(Windows.Storage.FileAccessMode.Read);
// Read from file.
using (var inputStream = stream.GetInputStreamAt(0))
{
await inkCanvas.InkPresenter.StrokeContainer.LoadAsync(inputStream);
}
stream.Dispose();
}
// User selects Cancel and picker returns null.
else
{
// Operation cancelled.
}
}

NOTE
GIF is the only file format supported for saving ink data. However, the LoadAsync method does support the following
formats for backward compatibility.

FORMAT                      DESCRIPTION

InkSerializedFormat         Specifies ink that is persisted using ISF. This is the most
                            compact persistent representation of ink. It can be embedded
                            within a binary document format or placed directly on the
                            Clipboard.

Base64InkSerializedFormat   Specifies ink that is persisted by encoding the ISF as a base64
                            stream. This format is provided so ink can be encoded directly
                            in an XML or HTML file.

Gif                         Specifies ink that is persisted by using a GIF file that contains
                            ISF as metadata embedded within the file. This enables ink to
                            be viewed in applications that are not ink-enabled and
                            maintain its full ink fidelity when it returns to an ink-enabled
                            application. This format is ideal when transporting ink content
                            within an HTML file and for making it usable by ink and
                            non-ink applications.

Base64Gif                   Specifies ink that is persisted by using a base64-encoded
                            fortified GIF. This format is provided when ink is to be
                            encoded directly in an XML or HTML file for later conversion
                            into an image. A possible use of this is in an XML format
                            generated to contain all ink information and used to generate
                            HTML through Extensible Stylesheet Language
                            Transformations (XSLT).

Copy and paste ink strokes with the clipboard


Here, we demonstrate how to use the clipboard to transfer ink strokes between apps.
To support clipboard functionality, the built-in InkStrokeContainer cut and copy commands require one or more
ink strokes be selected.
For this example, we enable stroke selection when input is modified with a pen barrel button (or right mouse
button). For a complete example of how to implement stroke selection, see Pass-through input for advanced
processing in Pen and stylus interactions.
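If your app does not implement lasso selection, a simpler alternative (a sketch, not part of this sample) is to
programmatically mark every stroke as selected before invoking the clipboard commands:

// Sketch: select all strokes so that CopySelectedToClipboard and
// DeleteSelected operate on the entire ink canvas.
private void SelectAllStrokes()
{
    foreach (InkStroke stroke in inkCanvas.InkPresenter.StrokeContainer.GetStrokes())
    {
        stroke.Selected = true;
    }
}
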
1. First, we set up the UI.
The UI includes "Cut", "Copy", "Paste", and "Clear" buttons, along with the InkCanvas and a selection
canvas.
<Grid Background="{ThemeResource ApplicationPageBackgroundThemeBrush}">
<Grid.RowDefinitions>
<RowDefinition Height="Auto"/>
<RowDefinition Height="*"/>
</Grid.RowDefinitions>
<StackPanel x:Name="HeaderPanel" Orientation="Horizontal" Grid.Row="0">
<TextBlock x:Name="tbHeader"
Text="Basic ink store sample"
Style="{ThemeResource HeaderTextBlockStyle}"
Margin="10,0,0,0" />
<Button x:Name="btnCut"
Content="Cut"
Margin="20,0,10,0"/>
<Button x:Name="btnCopy"
Content="Copy"
Margin="20,0,10,0"/>
<Button x:Name="btnPaste"
Content="Paste"
Margin="20,0,10,0"/>
<Button x:Name="btnClear"
Content="Clear"
Margin="20,0,10,0"/>
</StackPanel>
<Grid x:Name="gridCanvas" Grid.Row="1">
<!-- Canvas for displaying selection UI. -->
<Canvas x:Name="selectionCanvas"/>
<!-- Inking area -->
<InkCanvas x:Name="inkCanvas"/>
</Grid>
</Grid>

2. We then set some basic ink input behaviors.


The InkPresenter is configured to interpret input data from both pen and mouse as ink strokes
(InputDeviceTypes). Listeners for the click events on the buttons as well as pointer and stroke events for
selection functionality are also declared here.
For a complete example of how to implement stroke selection, see Pass-through input for advanced
processing in Pen and stylus interactions.
public MainPage()
{
this.InitializeComponent();

// Set supported inking device types.
inkCanvas.InkPresenter.InputDeviceTypes =
Windows.UI.Core.CoreInputDeviceTypes.Mouse |
Windows.UI.Core.CoreInputDeviceTypes.Pen;

// Listen for button click to cut ink strokes.
btnCut.Click += btnCut_Click;
// Listen for button click to copy ink strokes.
btnCopy.Click += btnCopy_Click;
// Listen for button click to paste ink strokes.
btnPaste.Click += btnPaste_Click;
// Listen for button click to clear ink canvas.
btnClear.Click += btnClear_Click;

// By default, the InkPresenter processes input modified by
// a secondary affordance (pen barrel button, right mouse
// button, or similar) as ink.
// To pass through modified input to the app for custom processing
// on the app UI thread instead of the background ink thread, set
// InputProcessingConfiguration.RightDragAction to LeaveUnprocessed.
inkCanvas.InkPresenter.InputProcessingConfiguration.RightDragAction =
InkInputRightDragAction.LeaveUnprocessed;

// Listen for unprocessed pointer events from modified input.
// The input is used to provide selection functionality.
inkCanvas.InkPresenter.UnprocessedInput.PointerPressed +=
UnprocessedInput_PointerPressed;
inkCanvas.InkPresenter.UnprocessedInput.PointerMoved +=
UnprocessedInput_PointerMoved;
inkCanvas.InkPresenter.UnprocessedInput.PointerReleased +=
UnprocessedInput_PointerReleased;

// Listen for new ink or erase strokes to clean up selection UI.
inkCanvas.InkPresenter.StrokeInput.StrokeStarted +=
StrokeInput_StrokeStarted;
inkCanvas.InkPresenter.StrokesErased +=
InkPresenter_StrokesErased;
}

3. Finally, after adding stroke selection support, we implement clipboard functionality in the click event
handlers of the Cut, Copy, and Paste buttons.
For cut, we first call CopySelectedToClipboard on the InkStrokeContainer of the InkPresenter.
We then call DeleteSelected to remove the strokes from the ink canvas.
Finally, we delete all selection strokes from the selection canvas.

private void btnCut_Click(object sender, RoutedEventArgs e)
{
inkCanvas.InkPresenter.StrokeContainer.CopySelectedToClipboard();
inkCanvas.InkPresenter.StrokeContainer.DeleteSelected();
ClearSelection();
}
// Clean up selection UI.
private void ClearSelection()
{
var strokes = inkCanvas.InkPresenter.StrokeContainer.GetStrokes();
foreach (var stroke in strokes)
{
stroke.Selected = false;
}
ClearDrawnBoundingRect();
}

private void ClearDrawnBoundingRect()
{
if (selectionCanvas.Children.Any())
{
selectionCanvas.Children.Clear();
boundingRect = Rect.Empty;
}
}

For copy, we simply call CopySelectedToClipboard on the InkStrokeContainer of the InkPresenter.

private void btnCopy_Click(object sender, RoutedEventArgs e)
{
inkCanvas.InkPresenter.StrokeContainer.CopySelectedToClipboard();
}

For paste, we call CanPasteFromClipboard to ensure that the content on the clipboard can be pasted to
the ink canvas.
If so, we call PasteFromClipboard to insert the clipboard ink strokes into the InkStrokeContainer of the
InkPresenter, which then renders the strokes to the ink canvas.

private void btnPaste_Click(object sender, RoutedEventArgs e)
{
if (inkCanvas.InkPresenter.StrokeContainer.CanPasteFromClipboard())
{
inkCanvas.InkPresenter.StrokeContainer.PasteFromClipboard(
new Point(0, 0));
}
else
{
// Cannot paste from clipboard.
}
}
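
As a variation (an assumption, not shown in the sample), you can compute the paste point from the
BoundingRect of the InkStrokeContainer so that pasted strokes land below the existing ink instead of at the
canvas origin:

// Sketch: paste clipboard strokes just below any existing ink.
private void PasteBelowExistingInk()
{
    if (inkCanvas.InkPresenter.StrokeContainer.CanPasteFromClipboard())
    {
        Rect bounds = inkCanvas.InkPresenter.StrokeContainer.BoundingRect;
        inkCanvas.InkPresenter.StrokeContainer.PasteFromClipboard(
            new Point(0, bounds.Bottom + 10));
    }
}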

Related articles
Pen and stylus interactions
Samples
Ink sample
Simple ink sample
Complex ink sample
Coloring book sample
Family notes sample
Add an InkToolbar to a Universal Windows Platform
(UWP) inking app

There are two different controls that facilitate inking in Universal Windows Platform (UWP) apps: InkCanvas and
InkToolbar.
The InkCanvas control provides basic Windows Ink functionality. Use it to render pen input as either an ink stroke
(using default settings for color and thickness) or an erase stroke.

For InkCanvas implementation details, see Pen and stylus interactions in UWP apps.

As a completely transparent overlay, the InkCanvas does not provide any built-in UI for setting ink stroke
properties. If you want to change the default inking experience, let users set ink stroke properties, and support
other custom inking features, you have two options:
In code-behind, use the underlying InkPresenter object bound to the InkCanvas.
The InkPresenter APIs support extensive customization of the inking experience. For more detail, see Pen
and stylus interactions in UWP apps.
Bind an InkToolbar to the InkCanvas. By default, the InkToolbar provides a basic UI for activating ink
features and setting ink properties such as stroke size, ink color, and pen tip shape.
We discuss the InkToolbar in this topic.

Important APIs
InkCanvas class
InkToolbar class
InkPresenter class
Windows.UI.Input.Inking

Default InkToolbar
By default, the InkToolbar includes buttons for drawing, erasing, highlighting, and displaying a ruler. Depending on
the feature, other settings and commands, such as ink color, stroke thickness, and erase all ink, are provided in a flyout.
Default Windows Ink toolbar
To add a basic default InkToolbar:
1. In MainPage.xaml, declare a container object (for this example, we use a Grid control) for the inking surface.
2. Declare an InkCanvas object as a child of the container. (The InkCanvas size is inherited from the container.)
3. Declare an InkToolbar and use the TargetInkCanvas attribute to bind it to the InkCanvas. Ensure the InkToolbar
is declared after the InkCanvas. If not, the InkCanvas overlay renders the InkToolbar inaccessible.

<Grid Background="{ThemeResource ApplicationPageBackgroundThemeBrush}">
<Grid.RowDefinitions>
<RowDefinition Height="Auto"/>
<RowDefinition Height="*"/>
</Grid.RowDefinitions>
<StackPanel x:Name="HeaderPanel" Orientation="Horizontal" Grid.Row="0">
<TextBlock x:Name="Header"
Text="Basic ink sample"
Style="{ThemeResource HeaderTextBlockStyle}"
Margin="10,0,0,0" />
</StackPanel>
<Grid Grid.Row="1">
<Image Source="Assets\StoreLogo.png" />
<InkCanvas x:Name="inkCanvas" />
<InkToolbar x:Name="inkToolbar"
VerticalAlignment="Top"
TargetInkCanvas="{x:Bind inkCanvas}" />
</Grid>
</Grid>

Basic customization
In this section, we cover some basic Windows Ink toolbar customization scenarios.
Specify the selected button

Windows Ink toolbar with pencil button selected at initialization


By default, the first (or leftmost) button is selected when your app is launched and the toolbar is initialized. In the
default Windows Ink toolbar, this is the ballpoint pen button.
Because the framework defines the order of the built-in buttons, the first button might not be the pen or tool you
want to activate by default.
You can override this default behavior and specify the selected button on the toolbar.
For this example, we initialize the default toolbar with the pencil button selected and the pencil activated (instead of
the ballpoint pen).
1. Use the XAML declaration for the InkCanvas and InkToolbar from the previous example.
2. In code-behind, set up a handler for the Loaded event of the InkToolbar object.

/// <summary>
/// An empty page that can be used on its own or navigated to within a Frame.
/// Here, we set up InkToolbar event listeners.
/// </summary>
public MainPage_CodeBehind()
{
this.InitializeComponent();
// Add handlers for InkToolbar events.
inkToolbar.Loaded += inkToolbar_Loaded;
}

3. In the handler for the Loaded event:


a. Get a reference to the built-in InkToolbarPencilButton.
Passing an InkToolbarTool.Pencil object in the GetToolButton method returns an
InkToolbarToolButton object for the InkToolbarPencilButton.
b. Set ActiveTool to the object returned in the previous step.

/// <summary>
/// Handle the Loaded event of the InkToolbar.
/// By default, the active tool is set to the first tool on the toolbar.
/// Here, we set the active tool to the pencil button.
/// </summary>
/// <param name="sender"></param>
/// <param name="e"></param>
private void inkToolbar_Loaded(object sender, RoutedEventArgs e)
{
InkToolbarToolButton pencilButton = inkToolbar.GetToolButton(InkToolbarTool.Pencil);
inkToolbar.ActiveTool = pencilButton;
}
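
Because GetToolButton returns the generic InkToolbarToolButton type, you can also cast the result to the
specific button type and preset its properties. The following sketch extends the handler above; the stroke
width and brush index values are illustrative assumptions.

private void inkToolbar_Loaded(object sender, RoutedEventArgs e)
{
    // Sketch: activate the pencil and preset its stroke width and
    // palette selection (values are assumptions).
    var pencilButton =
        inkToolbar.GetToolButton(InkToolbarTool.Pencil) as InkToolbarPencilButton;
    if (pencilButton != null)
    {
        inkToolbar.ActiveTool = pencilButton;
        pencilButton.SelectedStrokeWidth = 4;
        pencilButton.SelectedBrushIndex = 0;
    }
}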

Specify the built-in buttons

Specific buttons included at initialization


As mentioned, the Windows Ink toolbar includes a collection of default, built-in buttons. These buttons are
displayed in the following order (from left to right):
InkToolbarBallpointPenButton
InkToolbarPencilButton
InkToolbarHighlighterButton
InkToolbarEraserButton
InkToolbarRulerButton
For this example, we initialize the toolbar with only the built-in ballpoint pen, pencil, and eraser buttons.
You can do this using either XAML or code-behind.
XAML
Modify the XAML declaration for the InkCanvas and InkToolbar from the first example.
Add an InitialControls attribute and set its value to "None". This clears the default collection of built-in buttons.
Add the specific InkToolbar buttons required by your app. Here, we add InkToolbarBallpointPenButton,
InkToolbarPencilButton, and InkToolbarEraserButton only.

NOTE
Buttons are added to the toolbar in the order defined by the framework, not the order specified here.

<Grid Background="{ThemeResource ApplicationPageBackgroundThemeBrush}">
<Grid.RowDefinitions>
<RowDefinition Height="Auto"/>
<RowDefinition Height="*"/>
</Grid.RowDefinitions>
<StackPanel x:Name="HeaderPanel" Orientation="Horizontal" Grid.Row="0">
<TextBlock x:Name="Header"
Text="Basic ink sample"
Style="{ThemeResource HeaderTextBlockStyle}"
Margin="10,0,0,0" />
</StackPanel>
<Grid Grid.Row="1">
<Image Source="Assets\StoreLogo.png" />
<!-- Clear the default InkToolbar buttons by setting InitialControls to None. -->
<!-- Set the active tool to the pencil button. -->
<InkCanvas x:Name="inkCanvas" />
<InkToolbar x:Name="inkToolbar"
VerticalAlignment="Top"
TargetInkCanvas="{x:Bind inkCanvas}"
InitialControls="None">
<!--
Add only the ballpoint pen, pencil, and eraser.
Note that the buttons are added to the toolbar in the order
defined by the framework, not the order we specify here.
-->
<InkToolbarEraserButton />
<InkToolbarBallpointPenButton />
<InkToolbarPencilButton/>
</InkToolbar>
</Grid>
</Grid>

Code-behind
1. Use the XAML declaration for the InkCanvas and InkToolbar from the first example.
<Grid Background="{ThemeResource ApplicationPageBackgroundThemeBrush}">
<Grid.RowDefinitions>
<RowDefinition Height="Auto"/>
<RowDefinition Height="*"/>
</Grid.RowDefinitions>
<StackPanel x:Name="HeaderPanel" Orientation="Horizontal" Grid.Row="0">
<TextBlock x:Name="Header"
Text="Basic ink sample"
Style="{ThemeResource HeaderTextBlockStyle}"
Margin="10,0,0,0" />
</StackPanel>
<Grid Grid.Row="1">
<Image Source="Assets\StoreLogo.png" />
<InkCanvas x:Name="inkCanvas" />
<InkToolbar x:Name="inkToolbar"
VerticalAlignment="Top"
TargetInkCanvas="{x:Bind inkCanvas}" />
</Grid>
</Grid>

2. In code-behind, set up a handler for the Loading event of the InkToolbar object.

/// <summary>
/// An empty page that can be used on its own or navigated to within a Frame.
/// Here, we set up InkToolbar event listeners.
/// </summary>
public MainPage_CodeBehind()
{
this.InitializeComponent();
// Add handlers for InkToolbar events.
inkToolbar.Loading += inkToolbar_Loading;
}

3. Set InitialControls to "None".


4. Create object references for the buttons required by your app. Here, we add InkToolbarBallpointPenButton,
InkToolbarPencilButton, and InkToolbarEraserButton only.

NOTE
Buttons are added to the toolbar in the order defined by the framework, not the order specified here.

5. Add the buttons to the InkToolbar.


/// <summary>
/// Handles the Loading event of the InkToolbar.
/// Here, we identify the buttons to include on the InkToolbar.
/// </summary>
/// <param name="sender">The InkToolbar</param>
/// <param name="args">The InkToolbar event data.
/// If there is no event data, this parameter is null</param>
private void inkToolbar_Loading(FrameworkElement sender, object args)
{
// Clear all built-in buttons from the InkToolbar.
inkToolbar.InitialControls = InkToolbarInitialControls.None;

// Add only the ballpoint pen, pencil, and eraser.
// Note that the buttons are added to the toolbar in the order
// defined by the framework, not the order we specify here.
InkToolbarBallpointPenButton ballpoint = new InkToolbarBallpointPenButton();
InkToolbarPencilButton pencil = new InkToolbarPencilButton();
InkToolbarEraserButton eraser = new InkToolbarEraserButton();
inkToolbar.Children.Add(eraser);
inkToolbar.Children.Add(ballpoint);
inkToolbar.Children.Add(pencil);
}

Custom buttons and inking features


You can customize and extend the collection of buttons (and associated inking features) that are provided through
the InkToolbar.
The InkToolbar consists of two distinct groups of button types:
1. A group of "tool" buttons containing the built-in drawing, erasing, and highlighting buttons. Custom pens
and tools are added here.

Note Feature selection is mutually exclusive.

2. A group of "toggle" buttons containing the built-in ruler button. Custom toggles are added here.

Note Features are not mutually exclusive and can be used concurrently with other active tools.

Depending on your application and the inking functionality required, you can add any of the following buttons
(bound to your custom ink features) to the InkToolbar:
Custom pen: a pen for which the ink color palette and pen tip properties, such as shape, rotation, and size, are
defined by the host app.
Custom tool: a non-pen tool, defined by the host app.
Custom toggle: sets the state of an app-defined feature to on or off. When turned on, the feature works in
conjunction with the active tool.

Note You cannot change the display order of the built-in buttons. The default display order is: ballpoint pen,
pencil, highlighter, eraser, and ruler. Custom pens are appended after the last default pen, custom tool buttons are
added between the last pen button and the eraser button, and custom toggle buttons are added after the ruler
button. (Custom buttons are added in the order they are specified.)

Custom pen
You can create a custom pen (activated through a custom pen button) where you define the ink color palette and
pen tip properties, such as shape, rotation, and size.
Custom calligraphic pen button
For this example, we define a custom pen with a broad tip that enables basic calligraphic ink strokes. We also
customize the collection of brushes in the palette displayed on the button flyout.
Code-behind
First, we define our custom pen and specify the drawing attributes in code-behind. We reference this custom pen
from XAML later.
1. Right-click the project in Solution Explorer and select Add -> New item.
2. Under Visual C# -> Code, add a new Class file and call it CalligraphicPen.cs.
3. In CalligraphicPen.cs, replace the default using block with the following:

using System.Numerics;
using Windows.UI;
using Windows.UI.Input.Inking;
using Windows.UI.Xaml.Controls;
using Windows.UI.Xaml.Media;

4. Specify that the CalligraphicPen class is derived from InkToolbarCustomPen.

class CalligraphicPen : InkToolbarCustomPen
{
}

5. Override CreateInkDrawingAttributesCore to specify your own brush and stroke size.

class CalligraphicPen : InkToolbarCustomPen
{
protected override InkDrawingAttributes
CreateInkDrawingAttributesCore(Brush brush, double strokeWidth)
{
}
}

6. Create an InkDrawingAttributes object and set the pen tip shape, tip rotation, stroke size, and ink color.
class CalligraphicPen : InkToolbarCustomPen
{
protected override InkDrawingAttributes
CreateInkDrawingAttributesCore(Brush brush, double strokeWidth)
{
InkDrawingAttributes inkDrawingAttributes =
new InkDrawingAttributes();
inkDrawingAttributes.PenTip = PenTipShape.Circle;
inkDrawingAttributes.Size =
new Windows.Foundation.Size(strokeWidth, strokeWidth * 20);
SolidColorBrush solidColorBrush = brush as SolidColorBrush;
if (solidColorBrush != null)
{
inkDrawingAttributes.Color = solidColorBrush.Color;
}
else
{
inkDrawingAttributes.Color = Colors.Black;
}

// Note: CreateRotation expects radians, so convert 45 degrees.
Matrix3x2 matrix = Matrix3x2.CreateRotation((float)(System.Math.PI / 4));
inkDrawingAttributes.PenTipTransform = matrix;

return inkDrawingAttributes;
}
}

XAML
Next, we add the necessary references to the custom pen in MainPage.xaml.
1. We declare a local page resource dictionary that creates a reference to the custom pen ( CalligraphicPen )
defined in CalligraphicPen.cs, and a brush collection supported by the custom pen ( CalligraphicPenPalette ).

<Page.Resources>
<!-- Add the custom CalligraphicPen to the page resources. -->
<local:CalligraphicPen x:Key="CalligraphicPen" />
<!-- Specify the colors for the palette of the custom pen. -->
<BrushCollection x:Key="CalligraphicPenPalette">
<SolidColorBrush Color="Blue" />
<SolidColorBrush Color="Red" />
</BrushCollection>
</Page.Resources>

2. We then add an InkToolbar with a child InkToolbarCustomPenButton element.


The custom pen button includes the two static resource references declared in the page resources:
CalligraphicPen and CalligraphicPenPalette .

We also specify the range for the stroke size slider (MinStrokeWidth, MaxStrokeWidth, and
SelectedStrokeWidth), the selected brush (SelectedBrushIndex), and the icon for the custom pen button
(SymbolIcon).
<Grid Grid.Row="1">
<InkCanvas x:Name="inkCanvas" />
<InkToolbar x:Name="inkToolbar"
VerticalAlignment="Top"
TargetInkCanvas="{x:Bind inkCanvas}">
<InkToolbarCustomPenButton
CustomPen="{StaticResource CalligraphicPen}"
Palette="{StaticResource CalligraphicPenPalette}"
MinStrokeWidth="1" MaxStrokeWidth="3" SelectedStrokeWidth="2"
SelectedBrushIndex ="1">
<SymbolIcon Symbol="Favorite" />
<InkToolbarCustomPenButton.ConfigurationContent>
<InkToolbarPenConfigurationControl />
</InkToolbarCustomPenButton.ConfigurationContent>
</InkToolbarCustomPenButton>
</InkToolbar>
</Grid>

Custom toggle
You can create a custom toggle (activated through a custom toggle button) to set the state of an app-defined
feature to on or off. When turned on, the feature works in conjunction with the active tool.
In this example, we define a custom toggle button that enables inking with touch input (by default, touch inking is
not enabled).

NOTE
If you need to support inking with touch, we recommended that you enable it using a CustomToggleButton, with the icon
and tooltip specified in this example.

Typically, touch input is used for direct manipulation of an object or the app UI. To demonstrate the differences in
behavior when touch inking is enabled, we place the InkCanvas within a ScrollViewer container and set the
dimensions of the ScrollViewer to be smaller than the InkCanvas.
When the app starts, only pen inking is supported and touch is used to pan or zoom the inking surface. When
touch inking is enabled, the inking surface cannot be panned or zoomed through touch input.

NOTE
See Inking controls for both InkCanvas and InkToolbar UX guidelines. The following recommendations are relevant to this
example:
The InkToolbar, and inking in general, is best experienced through an active pen. However, inking with mouse and touch
can be supported if required by your app.
If supporting inking with touch input, we recommend using the "ED5F" icon from the "Segoe MDL2 Assets" font for the
toggle button, with a "Touch writing" tooltip.

XAML
1. First, we declare an InkToolbarCustomToggleButton element (toggleButton) with a Click event listener that
specifies the event handler (CustomToggle_Click).
<Grid Background="{ThemeResource ApplicationPageBackgroundThemeBrush}">
<Grid.RowDefinitions>
<RowDefinition Height="Auto"/>
<RowDefinition Height="*"/>
</Grid.RowDefinitions>

<StackPanel Grid.Row="0"
x:Name="HeaderPanel"
Orientation="Horizontal">
<TextBlock x:Name="Header"
Text="Basic ink sample"
Style="{ThemeResource HeaderTextBlockStyle}"
Margin="10" />
</StackPanel>

<ScrollViewer Grid.Row="1"
HorizontalScrollBarVisibility="Auto"
VerticalScrollBarVisibility="Auto">

<Grid HorizontalAlignment="Left" VerticalAlignment="Top">
<Grid.RowDefinitions>
<RowDefinition Height="Auto"/>
<RowDefinition Height="*"/>
</Grid.RowDefinitions>

<InkToolbar Grid.Row="0"
Margin="10"
x:Name="inkToolbar"
VerticalAlignment="Top"
TargetInkCanvas="{x:Bind inkCanvas}">
<InkToolbarCustomToggleButton
x:Name="toggleButton"
Click="CustomToggle_Click"
ToolTipService.ToolTip="Touch Writing">
<SymbolIcon Symbol="{x:Bind TouchWritingIcon}"/>
</InkToolbarCustomToggleButton>
</InkToolbar>

<ScrollViewer Grid.Row="1"
Height="500"
Width="500"
x:Name="scrollViewer"
ZoomMode="Enabled"
MinZoomFactor=".1"
VerticalScrollMode="Enabled"
VerticalScrollBarVisibility="Auto"
HorizontalScrollMode="Enabled"
HorizontalScrollBarVisibility="Auto">

<Grid x:Name="outputGrid"
Height="1000"
Width="1000"
Background="{ThemeResource SystemControlBackgroundChromeWhiteBrush}">
<InkCanvas x:Name="inkCanvas"/>
</Grid>

</ScrollViewer>
</Grid>
</ScrollViewer>
</Grid>

Code-behind
1. In the previous snippet, we declared a Click event listener and handler (CustomToggle_Click) on the custom
toggle button for touch inking (toggleButton). This handler simply toggles support for
CoreInputDeviceTypes.Touch through the InputDeviceTypes property of the InkPresenter.
We also specified an icon for the button using the SymbolIcon element and the {x:Bind} markup extension
that binds it to a field defined in the code-behind file (TouchWritingIcon).
The following snippet includes both the Click event handler and the definition of TouchWritingIcon.

namespace Ink_Basic_InkToolbar
{
/// <summary>
/// An empty page that can be used on its own or navigated to within a Frame.
/// </summary>
public sealed partial class MainPage_AddCustomToggle : Page
{
Symbol TouchWritingIcon = (Symbol)0xED5F;

public MainPage_AddCustomToggle()
{
this.InitializeComponent();
}

// Handler for the custom toggle button that enables touch inking.
private void CustomToggle_Click(object sender, RoutedEventArgs e)
{
if (toggleButton.IsChecked == true)
{
inkCanvas.InkPresenter.InputDeviceTypes |= CoreInputDeviceTypes.Touch;
}
else
{
inkCanvas.InkPresenter.InputDeviceTypes &= ~CoreInputDeviceTypes.Touch;
}
}
}
}

Custom tool
You can create a custom tool button to invoke a non-pen tool that is defined by your app.
By default, an InkPresenter processes all input as either an ink stroke or an erase stroke. This includes input
modified by a secondary hardware affordance such as a pen barrel button, a right mouse button, or similar.
However, InkPresenter can be configured to leave specific input unprocessed, which can then be passed through
to your app for custom processing.
In this example, we define a custom tool button that, when selected, causes subsequent strokes to be processed
and rendered as a selection lasso (dashed line) instead of ink. All ink strokes within the bounds of the selection area
are set to Selected.

NOTE
See Inking controls for both InkCanvas and InkToolbar UX guidelines. The following recommendation is relevant to this
example:
If providing stroke selection, we recommend using the "EF20" icon from the "Segoe MDL2 Assets" font for the tool
button, with a "Selection tool" tooltip.

XAML
1. First, we declare an InkToolbarCustomToolButton element (customToolButton) with a Click event listener
that specifies the event handler (customToolButton_Click) where stroke selection is configured. (We've also
added a set of buttons for copying, cutting, and pasting the stroke selection.)
2. We also add a Canvas element for drawing our selection stroke. Using a separate layer to draw the selection
stroke ensures the InkCanvas and its content remain untouched.

<Grid Background="{ThemeResource ApplicationPageBackgroundThemeBrush}">
<Grid.RowDefinitions>
<RowDefinition Height="Auto"/>
<RowDefinition Height="Auto"/>
<RowDefinition Height="*"/>
</Grid.RowDefinitions>
<StackPanel x:Name="HeaderPanel" Orientation="Horizontal" Grid.Row="0">
<TextBlock x:Name="Header"
Text="Basic ink sample"
Style="{ThemeResource HeaderTextBlockStyle}"
Margin="10,0,0,0" />
</StackPanel>
<StackPanel x:Name="ToolPanel" Orientation="Horizontal" Grid.Row="1">
<InkToolbar x:Name="inkToolbar"
VerticalAlignment="Top"
TargetInkCanvas="{x:Bind inkCanvas}">
<InkToolbarCustomToolButton
x:Name="customToolButton"
Click="customToolButton_Click"
ToolTipService.ToolTip="Selection tool">
<SymbolIcon Symbol="{x:Bind SelectIcon}"/>
</InkToolbarCustomToolButton>
</InkToolbar>
<Button x:Name="cutButton"
Content="Cut"
Click="cutButton_Click"
Width="100"
Margin="5,0,0,0"/>
<Button x:Name="copyButton"
Content="Copy"
Click="copyButton_Click"
Width="100"
Margin="5,0,0,0"/>
<Button x:Name="pasteButton"
Content="Paste"
Click="pasteButton_Click"
Width="100"
Margin="5,0,0,0"/>
</StackPanel>
<Grid Grid.Row="2" x:Name="outputGrid"
Background="{ThemeResource SystemControlBackgroundChromeWhiteBrush}"
Height="Auto">
<!-- Canvas for displaying selection UI. -->
<Canvas x:Name="selectionCanvas"/>
<!-- Canvas for displaying ink. -->
<InkCanvas x:Name="inkCanvas" />
</Grid>
</Grid>

Code-behind
1. We then handle the Click event for the InkToolbarCustomToolButton in the MainPage.xaml.cs code-behind file.
This handler configures the InkPresenter to pass unprocessed input through to the app.
For a more detailed walkthrough of this code, see the Pass-through input for advanced processing section
of Pen interactions and Windows Ink in UWP apps.
We also specified an icon for the button using the SymbolIcon element and the {x:Bind} markup extension
that binds it to a field defined in the code-behind file (SelectIcon).
The following snippet includes both the Click event handler and the definition of SelectIcon.

namespace Ink_Basic_InkToolbar
{
/// <summary>
/// An empty page that can be used on its own or navigated to within a Frame.
/// </summary>
public sealed partial class MainPage_AddCustomTool : Page
{
// Icon for custom selection tool button.
Symbol SelectIcon = (Symbol)0xEF20;

// Stroke selection tool.
private Polyline lasso;
// Stroke selection area.
private Rect boundingRect;

public MainPage_AddCustomTool()
{
this.InitializeComponent();

// Listen for new ink or erase strokes to clean up selection UI.
inkCanvas.InkPresenter.StrokeInput.StrokeStarted +=
StrokeInput_StrokeStarted;
inkCanvas.InkPresenter.StrokesErased +=
InkPresenter_StrokesErased;
}

private void customToolButton_Click(object sender, RoutedEventArgs e)
{
// By default, the InkPresenter processes input modified by
// a secondary affordance (pen barrel button, right mouse
// button, or similar) as ink.
// To pass through modified input to the app for custom processing
// on the app UI thread instead of the background ink thread, set
// InputProcessingConfiguration.RightDragAction to LeaveUnprocessed.
inkCanvas.InkPresenter.InputProcessingConfiguration.RightDragAction =
InkInputRightDragAction.LeaveUnprocessed;

// Listen for unprocessed pointer events from modified input.
// The input is used to provide selection functionality.
inkCanvas.InkPresenter.UnprocessedInput.PointerPressed +=
UnprocessedInput_PointerPressed;
inkCanvas.InkPresenter.UnprocessedInput.PointerMoved +=
UnprocessedInput_PointerMoved;
inkCanvas.InkPresenter.UnprocessedInput.PointerReleased +=
UnprocessedInput_PointerReleased;
}

// Handle new ink or erase strokes to clean up selection UI.
private void StrokeInput_StrokeStarted(
InkStrokeInput sender, Windows.UI.Core.PointerEventArgs args)
{
ClearSelection();
}

private void InkPresenter_StrokesErased(
InkPresenter sender, InkStrokesErasedEventArgs args)
{
ClearSelection();
}

private void cutButton_Click(object sender, RoutedEventArgs e)
{
inkCanvas.InkPresenter.StrokeContainer.CopySelectedToClipboard();
inkCanvas.InkPresenter.StrokeContainer.DeleteSelected();
ClearSelection();
}

private void copyButton_Click(object sender, RoutedEventArgs e)
{
inkCanvas.InkPresenter.StrokeContainer.CopySelectedToClipboard();
}

private void pasteButton_Click(object sender, RoutedEventArgs e)
{
if (inkCanvas.InkPresenter.StrokeContainer.CanPasteFromClipboard())
{
inkCanvas.InkPresenter.StrokeContainer.PasteFromClipboard(
new Point(0, 0));
}
else
{
// Cannot paste from clipboard.
}
}

// Clean up selection UI.
private void ClearSelection()
{
var strokes = inkCanvas.InkPresenter.StrokeContainer.GetStrokes();
foreach (var stroke in strokes)
{
stroke.Selected = false;
}
ClearBoundingRect();
}

private void ClearBoundingRect()
{
if (selectionCanvas.Children.Any())
{
selectionCanvas.Children.Clear();
boundingRect = Rect.Empty;
}
}

// Handle unprocessed pointer events from modified input.
// The input is used to provide selection functionality.
// Selection UI is drawn on a canvas under the InkCanvas.
private void UnprocessedInput_PointerPressed(
InkUnprocessedInput sender, PointerEventArgs args)
{
// Initialize a selection lasso.
lasso = new Polyline()
{
Stroke = new SolidColorBrush(Windows.UI.Colors.Blue),
StrokeThickness = 1,
StrokeDashArray = new DoubleCollection() { 5, 2 },
};

lasso.Points.Add(args.CurrentPoint.RawPosition);

selectionCanvas.Children.Add(lasso);
}

private void UnprocessedInput_PointerMoved(
InkUnprocessedInput sender, PointerEventArgs args)
{
// Add a point to the lasso Polyline object.
lasso.Points.Add(args.CurrentPoint.RawPosition);
}

private void UnprocessedInput_PointerReleased(
InkUnprocessedInput sender, PointerEventArgs args)
{
// Add the final point to the Polyline object and
// select strokes within the lasso area.
// Draw a bounding box on the selection canvas
// around the selected ink strokes.
lasso.Points.Add(args.CurrentPoint.RawPosition);

boundingRect =
inkCanvas.InkPresenter.StrokeContainer.SelectWithPolyLine(
lasso.Points);

DrawBoundingRect();
}

// Draw a bounding rectangle, on the selection canvas, encompassing
// all ink strokes within the lasso area.
private void DrawBoundingRect()
{
// Clear all existing content from the selection canvas.
selectionCanvas.Children.Clear();

// Draw a bounding rectangle only if there are ink strokes
// within the lasso area.
if (!((boundingRect.Width == 0) ||
(boundingRect.Height == 0) ||
boundingRect.IsEmpty))
{
var rectangle = new Rectangle()
{
Stroke = new SolidColorBrush(Windows.UI.Colors.Blue),
StrokeThickness = 1,
StrokeDashArray = new DoubleCollection() { 5, 2 },
Width = boundingRect.Width,
Height = boundingRect.Height
};

Canvas.SetLeft(rectangle, boundingRect.X);
Canvas.SetTop(rectangle, boundingRect.Y);

selectionCanvas.Children.Add(rectangle);
}
}
}
}

Custom ink rendering


By default, ink input is processed on a low-latency background thread and rendered "wet" as it is drawn. When the
stroke is completed (pen or finger lifted, or mouse button released), the stroke is processed on the UI thread and
rendered "dry" to the InkCanvas layer (above the application content and replacing the wet ink).
The ink platform enables you to override this behavior and completely customize the inking experience by custom
drying the ink input.
For more info on custom drying, see Pen interactions and Windows Ink in UWP apps.

NOTE
Custom drying and the InkToolbar
If your app overrides the default ink rendering behavior of the InkPresenter with a custom drying implementation, the
rendered ink strokes are no longer available to the InkToolbar and the built-in erase commands of the InkToolbar do not
work as expected. To provide erase functionality, you must handle all pointer events, perform hit-testing on each stroke, and
override the built-in "Erase all ink" command.
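
One possible approach to that per-stroke hit-testing (a sketch under assumptions, not the sample's
implementation) is to treat each segment of the eraser's pointer movement as a line and delete any strokes it
crosses. This assumes your dried strokes are still managed in an InkStrokeContainer:

// Sketch: erase strokes crossed by the segment between two pointer
// positions ("lastPoint" would be tracked from pointer events).
private void EraseStrokesAlong(Point lastPoint, Point currentPoint)
{
    // SelectWithLine marks strokes intersecting the segment as selected
    // and returns the invalidated bounding rectangle.
    Rect invalidated = inkCanvas.InkPresenter.StrokeContainer.SelectWithLine(
        lastPoint, currentPoint);
    if (invalidated.Width > 0 && invalidated.Height > 0)
    {
        inkCanvas.InkPresenter.StrokeContainer.DeleteSelected();
    }
}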
Related articles
Pen and stylus interactions
Samples
Ink sample
Simple ink sample
Complex ink sample
Speech interactions

Integrate speech recognition and text-to-speech (also known as TTS, or speech synthesis) directly into the user
experience of your app.
Other speech components
See the Cortana design guidelines if you are exposing app functionality in the Cortana UI.
Speech recognition: converts words spoken by the user into text for form input, for text dictation, to specify an
action or command, and to accomplish tasks. Both pre-defined grammars for free-text dictation and web search,
and custom grammars authored using Speech Recognition Grammar Specification (SRGS) Version 1.0 are
supported.
TTS: uses a speech synthesis engine (voice) to convert a text string into spoken words. The input string can be
either basic, unadorned text or more complex Speech Synthesis Markup Language (SSML). SSML provides a
standard way to control characteristics of speech output, such as pronunciation, volume, pitch, rate or speed, and
emphasis.

NOTE
Using Cortana and customized voice commands, your app can be launched in the foreground (the app takes focus, just as if
it was launched from the Start menu) or activated as a background service (Cortana retains focus but provides results from
the app).
Commands that require additional context or user input (such as sending a message to a specific contact) are best handled
in a foreground app, while basic commands can be handled in Cortana through a background app.
If you are exposing functionality as a background service through voice commands in the Cortana UI, see the Cortana
design guidelines.

Designed and implemented thoughtfully, speech can be a robust and enjoyable way for people to interact with
your app, complementing, or even replacing, keyboard, mouse, touch, and gestures.

Speech interaction design


These guidelines and recommendations describe how to best integrate both speech recognition and TTS into the
interaction experience of your app.
If you are considering supporting speech interactions in your app:
What actions can be taken through speech? Can a user navigate between pages, invoke commands, or enter
data as text fields, brief notes, or long messages?
Is speech input a good option for completing a task?
How does a user know when speech input is available?
Is the app always listening, or does the user need to take an action for the app to enter listening mode?
What phrases initiate an action or behavior? Do the phrases and actions need to be enumerated on screen?
Are prompt, confirmation, and disambiguation screens or TTS required?
What is the interaction dialog between app and user?
Is a custom or constrained vocabulary (for example, medical, scientific, or locale-specific terminology) required
for the context of your app?
Is network connectivity required?

Text input
Speech for text input can range from short form (single word or phrase) to long form (continuous dictation). Short
form input must be less than 10 seconds in length, while a long form input session can be up to two minutes in
length. (Long form input can be restarted without user intervention to give the impression of continuous
dictation.)
You should provide a visual cue to indicate that speech recognition is supported and available to the user and
whether the user needs to turn it on. For example, a command bar button with a microphone glyph (see
Command bars) can be used to show both availability and state.
Provide ongoing recognition feedback to minimize any apparent lack of response while recognition is being
performed.
Let users revise recognition text using keyboard input, disambiguation prompts, suggestions, or additional speech
recognition.
Stop recognition if input is detected from a device other than speech recognition, such as touch or keyboard. This
probably indicates that the user has moved on to another task, such as correcting the recognition text or interacting
with other form fields.
Specify the length of time for which no speech input indicates that recognition is over. Do not automatically restart
recognition after this period of time as it typically indicates the user has stopped engaging with your app.
Disable all continuous recognition UI and terminate the recognition session if a network connection is not
available. Continuous recognition requires a network connection.
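
For reference, a long form dictation session of the kind described above might be started as follows. This is a
minimal sketch: the speechRecognizer field and the dictatedTextBuilder accumulator are assumptions, and error
and timeout handling are omitted.

// Sketch: start a continuous (long form) dictation session.
private async Task StartContinuousDictationAsync()
{
    speechRecognizer = new SpeechRecognizer();
    // No constraints added: the default dictation grammar is used.
    await speechRecognizer.CompileConstraintsAsync();
    speechRecognizer.ContinuousRecognitionSession.ResultGenerated +=
        (session, args) => dictatedTextBuilder.Append(args.Result.Text + " ");
    await speechRecognizer.ContinuousRecognitionSession.StartAsync();
}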

Commanding
Speech input can initiate actions, invoke commands, and accomplish tasks.
If space permits, consider displaying the supported responses for the current app context, with examples of valid
input. This reduces the potential responses your app has to process and also eliminates confusion for the user.
Try to frame your questions such that they elicit as specific a response as possible. For example, "What do you
want to do today?" is very open ended and would require a very large grammar definition due to how varied the
responses could be. Alternatively, "Would you like to play a game or listen to music?" constrains the response to
one of two valid answers with a correspondingly small grammar definition. A small grammar is much easier to
author and results in much more accurate recognition results.
Request confirmation from the user when speech recognition confidence is low. If the user's intent is unclear, it's
better to get clarification than to initiate an unintended action.
You should provide a visual cue to indicate that speech recognition is supported and available to the user and
whether the user needs to turn it on. For example, a command bar button with a microphone glyph (see
Guidelines for command bars) can be used to show both availability and state.
If the speech recognition switch is typically out of view, consider displaying a state indicator in the content area of
the app.
If recognition is initiated by the user, consider using the built-in recognition experience for consistency. The built-in
experience includes customizable screens with prompts, examples, disambiguations, confirmations, and errors.
The screens vary depending on the specified constraints:
Pre-defined grammar (dictation or web search)
The Listening screen.
The Thinking screen.
The Heard you say screen or the error screen.
List of words or phrases, or a SRGS grammar file
The Listening screen.
The Did you say screen, if what the user said could be interpreted as more than one potential result.
The Heard you say screen or the error screen.
On the Listening screen you can:
Customize the heading text.
Provide example text of what the user can say.
Specify whether the Heard you say screen is shown.
Read the recognized string back to the user on the Heard you say screen.
Here is an example of the built-in recognition flow for a speech recognizer that uses a SRGS-defined constraint. In
this example, speech recognition is successful.

Always listening
Your app can listen for and recognize speech input as soon as the app is launched, without user intervention.
You should customize the grammar constraints based on the app context. This keeps the speech recognition
experience very targeted and relevant to the current task, and minimizes errors.

"What can I say?"


When speech input is enabled, it's important to help users discover what exactly can be understood and what
actions can be performed.
If speech recognition is user enabled, consider using the command bar or a menu command to show all words
and phrases supported in the current context.
If speech recognition is always on, consider adding the phrase "What can I say?" to every page. When the user says
this phrase, display all words and phrases supported in the current context. Using this phrase provides a consistent
way for users to discover speech capabilities across the system.

Recognition failures
Speech recognition will fail. Failures happen when audio quality is poor, when only part of a phrase is recognized,
or when no input is detected at all.
Handle failure gracefully, help a user understand why recognition failed, and recover.
Your app should inform the user that they weren't understood and that they need to try again.
Consider providing examples of one or more supported phrases. The user is likely to repeat a suggested phrase,
which increases recognition success.
You should display a list of potential matches for a user to select from. This can be far more efficient than going
through the recognition process again.
You should always support alternative input types, which is especially helpful for handling repeated recognition
failures. For example, you could suggest that the user try to use a keyboard, or use touch or a mouse to select from
a list of potential matches.
Use the built-in speech recognition experience as it includes screens that inform the user that recognition was not
successful and lets the user make another recognition attempt.
Listen for and try to correct issues in the audio input. The speech recognizer can detect issues with the audio
quality that might adversely affect speech recognition accuracy. You can use the information provided by the
speech recognizer to inform the user of the issue and let them take corrective action, if possible. For example, if the
volume setting on the microphone is too low, you can prompt the user to speak louder or turn the volume up.

Constraints
Constraints, or grammars, define the spoken words and phrases that can be matched by the speech recognizer.
You can specify one of the pre-defined web service grammars or you can create a custom grammar that is
installed with your app.
Predefined grammars
Predefined dictation and web-search grammars provide speech recognition for your app without requiring you to
author a grammar. When using these grammars, speech recognition is performed by a remote web service and
the results are returned to the device.
The default free-text dictation grammar can recognize most words and phrases that a user can say in a
particular language, and is optimized to recognize short phrases. Free-text dictation is useful when you don't
want to limit the kinds of things a user can say. Typical uses include creating notes or dictating the content for a
message.
The web-search grammar, like a dictation grammar, contains a large number of words and phrases that a user
might say. However, it is optimized to recognize terms that people typically use when searching the web.

NOTE
Because predefined dictation and web-search grammars can be large, and because they are online (not on the device),
performance might not be as fast as with a custom grammar installed on the device.

These predefined grammars can be used to recognize up to 10 seconds of speech input and require no authoring
effort on your part. However, they do require a connection to a network.
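
For reference, a one-shot recognition using the predefined dictation grammar might look like the following
sketch (error handling omitted). Because no constraints are added, the default dictation grammar applies.

// Sketch: one-shot recognition with the predefined dictation grammar.
private async Task<string> RecognizeDictationAsync()
{
    using (var recognizer = new Windows.Media.SpeechRecognition.SpeechRecognizer())
    {
        await recognizer.CompileConstraintsAsync();
        var result = await recognizer.RecognizeWithUIAsync();
        return result.Text;
    }
}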
Custom grammars
A custom grammar is designed and authored by you and is installed with your app. Speech recognition using a
custom constraint is performed on the device.
Programmatic list constraints provide a lightweight approach to creating simple grammars using a list of
words or phrases. A list constraint works well for recognizing short, distinct phrases. Explicitly specifying all
words in a grammar also improves recognition accuracy, as the speech recognition engine must only process
speech to confirm a match. The list can also be programmatically updated.
An SRGS grammar is a static document that, unlike a programmatic list constraint, uses the XML format
defined by the SRGS Version 1.0. An SRGS grammar provides the greatest control over the speech
recognition experience by letting you capture multiple semantic meanings in a single recognition.
Here are some tips for authoring SRGS grammars:
Keep each grammar small. Grammars that contain fewer phrases tend to provide more accurate
recognition than larger grammars that contain many phrases. It's better to have several smaller
grammars for specific scenarios than to have a single grammar for your entire app.
Let users know what to say for each app context and enable and disable grammars as needed.
Design each grammar so users can speak a command in a variety of ways. For example, you can use the
GARBAGE rule to match speech input that your grammar does not define. This lets users speak
additional words that have no meaning to your app. For example, "give me", "and", "uh", "maybe", and so
on.
Use the sapi:subset element to help match speech input. This is a Microsoft extension to the SRGS
specification to help match partial phrases.
Try to avoid defining phrases in your grammar that contain only one syllable. Recognition tends to be
more accurate for phrases containing two or more syllables.
Avoid using phrases that sound similar. For example, phrases such as "hello", "bellow", and "fellow" can
confuse the recognition engine and result in poor recognition accuracy.

NOTE
Which type of constraint you use depends on the complexity of the recognition experience you want to create. Any
could be the best choice for a specific recognition task, and you might find uses for all types of constraints in your app.
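
For reference, a programmatic list constraint for a small command vocabulary might be created as follows; the
phrase list and the "commands" tag are illustrative assumptions.

// Sketch: a recognizer limited to a small, app-defined command list.
private async Task<SpeechRecognizer> CreateCommandRecognizerAsync()
{
    var recognizer = new SpeechRecognizer();
    var commands = new SpeechRecognitionListConstraint(
        new[] { "play", "pause", "next track" }, "commands");
    recognizer.Constraints.Add(commands);
    await recognizer.CompileConstraintsAsync();
    return recognizer;
}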

Custom pronunciations
If your app contains specialized vocabulary with unusual or fictional words, or words with uncommon
pronunciations, you might be able to improve recognition performance for those words by defining custom
pronunciations.
For a small list of words and phrases, or a list of infrequently used words and phrases, you can create custom
pronunciations in a SRGS grammar. See token Element for more info.
For larger lists of words and phrases, or frequently used words and phrases, you can create separate
pronunciation lexicon documents. See About Lexicons and Phonetic Alphabets for more info.

Testing
Test speech recognition accuracy and any supporting UI with your app's target audience. This is the best way to
determine the effectiveness of the speech interaction experience in your app. For example, are users getting poor
recognition results because your app isn't listening for a common phrase?
Either modify the grammar to support this phrase or provide users with a list of supported phrases. If you already
provide the list of supported phrases, ensure it is easily discoverable.
Text-to-speech (TTS)
TTS generates speech output from plain text or SSML.
Try to design prompts that are polite and encouraging.
Consider whether you should read long strings of text. It's one thing to listen to a text message, but quite another
to listen to a long list of search results that are difficult to remember.
You should provide media controls to let users pause, or stop, TTS.
You should listen to all TTS strings to ensure they are intelligible and sound natural.
Stringing together an unusual sequence of words or speaking part numbers or punctuation might cause a
phrase to become unintelligible.
Speech can sound unnatural when the prosody or cadence is different from how a native speaker would say a
phrase.
Both issues can be addressed by using SSML instead of plain text as input to the speech synthesizer. For more info
about SSML, see Use SSML to Control Synthesized Speech and Speech Synthesis Markup Language Reference.
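
For reference, speaking a string through a XAML MediaElement (here assumed to be named mediaElement)
might look like this sketch:

// Sketch: synthesize a plain-text string and play it back.
private async Task SpeakAsync(string text)
{
    using (var synthesizer = new Windows.Media.SpeechSynthesis.SpeechSynthesizer())
    {
        var stream = await synthesizer.SynthesizeTextToStreamAsync(text);
        mediaElement.SetSource(stream, stream.ContentType);
        mediaElement.Play();
    }
}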

Other articles in this section


TOPIC                                      DESCRIPTION

Speech recognition                         Use speech recognition to provide input, specify an action or
                                           command, and accomplish tasks.

Specify the speech recognizer language     Learn how to select an installed language to use for speech
                                           recognition.

Define custom recognition constraints      Learn how to define and use custom constraints for speech
                                           recognition.

Enable continuous dictation                Learn how to capture and recognize long-form, continuous
                                           dictation speech input.

Manage issues with audio input             Learn how to manage issues with speech-recognition
                                           accuracy caused by audio-input quality.

Set speech recognition timeouts            Set how long a speech recognizer ignores silence or
                                           unrecognizable sounds (babble) and continues listening for
                                           speech input.

Related articles
Speech interactions
Cortana interactions
Samples
Speech recognition and speech synthesis sample
Speech recognition

Use speech recognition to provide input, specify an action or command, and accomplish tasks.

Important APIs
Windows.Media.SpeechRecognition

Speech recognition is made up of a speech runtime, recognition APIs for programming the runtime, ready-to-use
grammars for dictation and web search, and a default system UI that helps users discover and use speech
recognition features.

Set up the audio feed


Ensure that your device has a microphone or the equivalent.
Set the Microphone device capability (DeviceCapability) in the app package manifest (package.appxmanifest
file) to get access to the microphone's audio feed. This allows the app to record audio from connected
microphones.
See App capability declarations.
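
In package.appxmanifest, the declaration is a DeviceCapability element inside the Capabilities element:

<Capabilities>
  <DeviceCapability Name="microphone" />
</Capabilities>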

Recognize speech input


A constraint defines the words and phrases (vocabulary) that an app recognizes in speech input. Constraints are at
the core of speech recognition and give your app great control over the accuracy of speech recognition.
You can use various types of constraints when performing speech recognition:
1. Predefined grammars (SpeechRecognitionTopicConstraint).
Predefined dictation and web-search grammars provide speech recognition for your app without requiring
you to author a grammar. When using these grammars, speech recognition is performed by a remote web
service and the results are returned to the device.
The default free-text dictation grammar can recognize most words and phrases that a user can say in a
particular language, and is optimized to recognize short phrases. The predefined dictation grammar is used
if you don't specify any constraints for your SpeechRecognizer object. Free-text dictation is useful when
you don't want to limit the kinds of things a user can say. Typical uses include creating notes or dictating the
content for a message.
The web-search grammar, like a dictation grammar, contains a large number of words and phrases that a
user might say. However, it is optimized to recognize terms that people typically use when searching the
web.
Note Because predefined dictation and web-search grammars can be large, and because they are online
(not on the device), performance might not be as fast as with a custom grammar installed on the device.
These predefined grammars can be used to recognize up to 10 seconds of speech input and require no authoring effort on your part. However,
they do require a connection to a network.

To use web-service constraints, speech input and dictation support must be enabled in Settings by turning on the "Get to know me" option
on the Settings -> Privacy -> Speech, inking, and typing page.

Here, we show how to test whether speech input is enabled and, if not, open the Settings -> Privacy -> Speech, inking, and typing page.

First, we initialize a global variable (HResultPrivacyStatementDeclined) to the HResult value of 0x80045509. See [Exception handling for Windows Runtime apps in C# or Visual Basic](https://msdn.microsoft.com/library/windows/apps/dn532194).

private static uint HResultPrivacyStatementDeclined = 0x80045509;



We then catch any standard exceptions during recognition and test whether the [HResult](https://msdn.microsoft.com/library/windows/apps/br206579) value is equal to the value of the HResultPrivacyStatementDeclined variable. If so,
we display a warning and call `await Windows.System.Launcher.LaunchUriAsync(new Uri("ms-settings:privacy-accounts"));` to open the
Settings page.

catch (Exception exception)
{
    // Handle the speech privacy policy error.
    if ((uint)exception.HResult == HResultPrivacyStatementDeclined)
    {
        resultTextBlock.Visibility = Visibility.Visible;
        resultTextBlock.Text = "The privacy statement was declined. " +
            "Go to Settings -> Privacy -> Speech, inking and typing, and ensure you " +
            "have viewed the privacy policy, and 'Get To Know You' is enabled.";
        // Open the privacy/speech, inking, and typing settings page.
        await Windows.System.Launcher.LaunchUriAsync(new Uri("ms-settings:privacy-accounts"));
    }
    else
    {
        var messageDialog = new Windows.UI.Popups.MessageDialog(exception.Message, "Exception");
        await messageDialog.ShowAsync();
    }
}

2. Programmatic list constraints (SpeechRecognitionListConstraint).


Programmatic list constraints provide a lightweight approach to creating simple grammars using a list of
words or phrases. A list constraint works well for recognizing short, distinct phrases. Explicitly specifying all
words in a grammar also improves recognition accuracy, as the speech recognition engine must only
process speech to confirm a match. The list can also be programmatically updated.
A list constraint consists of an array of strings that represents speech input that your app will accept for a
recognition operation. You can create a list constraint in your app by creating a speech-recognition list-
constraint object and passing an array of strings. Then, add that object to the constraints collection of the
recognizer. Recognition is successful when the speech recognizer recognizes any one of the strings in the
array.
3. SRGS grammars (SpeechRecognitionGrammarFileConstraint).
A Speech Recognition Grammar Specification (SRGS) grammar is a static document that, unlike a
programmatic list constraint, uses the XML format defined by the SRGS Version 1.0. An SRGS grammar
provides the greatest control over the speech recognition experience by letting you capture multiple
semantic meanings in a single recognition.
4. Voice command constraints (SpeechRecognitionVoiceCommandDefinitionConstraint).
Use a Voice Command Definition (VCD) XML file to define the commands that the user can say to initiate
actions when activating your app. For more detail, see Launch a foreground app with voice commands in
Cortana.
Note Which type of constraint you use depends on the complexity of the recognition experience you want to
create. Any one could be the best choice for a specific recognition task, and you might find uses for all types of
constraints in your app. To get started with constraints, see Define custom recognition constraints.
The predefined Universal Windows app dictation grammar recognizes most words and short phrases in a
language. It is activated by default when a speech recognizer object is instantiated without custom constraints.
In this example, we show how to:
Create a speech recognizer.
Compile the default Universal Windows app constraints (no grammars have been added to the speech
recognizer's grammar set).
Start listening for speech by using the basic recognition UI and TTS feedback provided by the
RecognizeWithUIAsync method. Use the RecognizeAsync method if the default UI is not required.

private async void StartRecognizing_Click(object sender, RoutedEventArgs e)
{
    // Create an instance of SpeechRecognizer.
    var speechRecognizer = new Windows.Media.SpeechRecognition.SpeechRecognizer();

    // Compile the dictation grammar by default.
    await speechRecognizer.CompileConstraintsAsync();

    // Start recognition.
    Windows.Media.SpeechRecognition.SpeechRecognitionResult speechRecognitionResult = await speechRecognizer.RecognizeWithUIAsync();

    // Do something with the recognition result.
    var messageDialog = new Windows.UI.Popups.MessageDialog(speechRecognitionResult.Text, "Text spoken");
    await messageDialog.ShowAsync();
}
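
If you don't need the default UI, a sketch of the same flow using RecognizeAsync might look like this (the handler name and status check are ours):

private async void StartRecognizingQuietly_Click(object sender, RoutedEventArgs e)
{
    // Create an instance of SpeechRecognizer and compile the default dictation grammar.
    var speechRecognizer = new Windows.Media.SpeechRecognition.SpeechRecognizer();
    await speechRecognizer.CompileConstraintsAsync();

    // Recognize without showing the system recognition UI.
    Windows.Media.SpeechRecognition.SpeechRecognitionResult speechRecognitionResult =
        await speechRecognizer.RecognizeAsync();

    // Only act on a successful recognition.
    if (speechRecognitionResult.Status ==
        Windows.Media.SpeechRecognition.SpeechRecognitionResultStatus.Success)
    {
        // Do something with speechRecognitionResult.Text and
        // speechRecognitionResult.Confidence.
    }
}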

Customize the recognition UI


When your app attempts speech recognition by calling SpeechRecognizer.RecognizeWithUIAsync, several
screens are shown in the following order.
If you're using a constraint based on a predefined grammar (dictation or web search):
The Listening screen.
The Thinking screen.
The Heard you say screen or the error screen.
If you're using a constraint based on a list of words or phrases, or a constraint based on an SRGS grammar file:
The Listening screen.
The Did you say screen, if what the user said could be interpreted as more than one potential result.
The Heard you say screen or the error screen.
The following image shows an example of the flow between screens for a speech recognizer that uses a constraint
based on an SRGS grammar file. In this example, speech recognition was successful.

The Listening screen can provide examples of words or phrases that the app can recognize. Here, we show how to
use the properties of the SpeechRecognizerUIOptions class (obtained by calling the
SpeechRecognizer.UIOptions property) to customize content on the Listening screen.
private async void WeatherSearch_Click(object sender, RoutedEventArgs e)
{
    // Create an instance of SpeechRecognizer.
    var speechRecognizer = new Windows.Media.SpeechRecognition.SpeechRecognizer();

    // Listen for audio input issues.
    speechRecognizer.RecognitionQualityDegrading += speechRecognizer_RecognitionQualityDegrading;

    // Add a web search grammar to the recognizer.
    var webSearchGrammar = new Windows.Media.SpeechRecognition.SpeechRecognitionTopicConstraint(Windows.Media.SpeechRecognition.SpeechRecognitionScenario.WebSearch, "webSearch");

    speechRecognizer.UIOptions.AudiblePrompt = "Say what you want to search for...";
    speechRecognizer.UIOptions.ExampleText = @"Ex. 'weather for London'";
    speechRecognizer.Constraints.Add(webSearchGrammar);

    // Compile the constraint.
    await speechRecognizer.CompileConstraintsAsync();

    // Start recognition.
    Windows.Media.SpeechRecognition.SpeechRecognitionResult speechRecognitionResult = await speechRecognizer.RecognizeWithUIAsync();

    // Do something with the recognition result.
    var messageDialog = new Windows.UI.Popups.MessageDialog(speechRecognitionResult.Text, "Text spoken");
    await messageDialog.ShowAsync();
}

Related articles
Developers
Speech interactions
Designers
Speech design guidelines
Samples
Speech recognition and speech synthesis sample
Specify the speech recognizer language

Learn how to select an installed language to use for speech recognition.

Important APIs
SupportedTopicLanguages
SupportedGrammarLanguages
Language

Here, we enumerate the languages installed on a system, identify which is the default language, and select a
different language for recognition.
Prerequisites:
This topic builds on Speech recognition.
You should have a basic understanding of speech recognition and recognition constraints.
If you're new to developing Universal Windows Platform (UWP) apps, have a look through these topics to get
familiar with the technologies discussed here.
Create your first app
Learn about events with Events and routed events overview
User experience guidelines:
For helpful tips about designing a useful and engaging speech-enabled app, see Speech design guidelines.

Identify the default language


A speech recognizer uses the system speech language as its default recognition language. This language is set by
the user on the device Settings > System > Speech > Speech Language screen.
We identify the default language by checking the SystemSpeechLanguage static property.

var language = SpeechRecognizer.SystemSpeechLanguage;



Confirm an installed language


Installed languages can vary between devices. You should verify the existence of a language if you depend on it for
a particular constraint.
Note A reboot is required after a new language pack is installed. An exception with error code
SPERR_NOT_FOUND (0x8004503a) is raised if the specified language is not supported or has not finished
installing.
Determine the supported languages on a device by checking one of two static properties of the
SpeechRecognizer class:
SupportedTopicLanguages: The collection of Language objects used with predefined dictation and web
search grammars.
SupportedGrammarLanguages: The collection of Language objects used with a list constraint or a
Speech Recognition Grammar Specification (SRGS) file.
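
For example, a sketch that lists both sets in the debugger output (the output destination is our choice):

// List the languages available for predefined (topic) grammars.
foreach (Windows.Globalization.Language language in
    Windows.Media.SpeechRecognition.SpeechRecognizer.SupportedTopicLanguages)
{
    System.Diagnostics.Debug.WriteLine("Topic language: " + language.DisplayName);
}

// List the languages available for list and SRGS grammar constraints.
foreach (Windows.Globalization.Language language in
    Windows.Media.SpeechRecognition.SpeechRecognizer.SupportedGrammarLanguages)
{
    System.Diagnostics.Debug.WriteLine("Grammar language: " + language.DisplayName);
}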

Specify a language
To specify a language, pass a Language object in the SpeechRecognizer constructor.
Here, we specify "en-US" as the recognition language.

var language = new Windows.Globalization.Language("en-US");
var recognizer = new SpeechRecognizer(language);

Remarks
A topic constraint can be configured by adding a SpeechRecognitionTopicConstraint to the Constraints
collection of the SpeechRecognizer and then calling CompileConstraintsAsync. A
SpeechRecognitionResultStatus of TopicLanguageNotSupported is returned if the recognizer is not initialized
with a supported topic language.
A list constraint is configured by adding a SpeechRecognitionListConstraint to the Constraints collection of the
SpeechRecognizer and then calling CompileConstraintsAsync. You cannot specify the language of a custom list
directly. Instead, the list will be processed using the language of the recognizer.
An SRGS grammar is an open-standard XML format represented by the
SpeechRecognitionGrammarFileConstraint class. Unlike custom lists, you can specify the language of the
grammar in the SRGS markup. CompileConstraintsAsync fails with a SpeechRecognitionResultStatus of
TopicLanguageNotSupported if the recognizer is not initialized to the same language as the SRGS markup.
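
A minimal sketch of that check, continuing with the recognizer created above:

SpeechRecognitionCompilationResult compilationResult =
    await recognizer.CompileConstraintsAsync();

if (compilationResult.Status ==
    SpeechRecognitionResultStatus.TopicLanguageNotSupported)
{
    // The recognizer language doesn't support this constraint.
    // Fall back to SpeechRecognizer.SystemSpeechLanguage, or ask the
    // user to install the appropriate language pack.
}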

Related articles
Developers
Speech interactions
Designers
Speech design guidelines
Samples
Speech recognition and speech synthesis sample
Define custom recognition constraints

Learn how to define and use custom constraints for speech recognition.

Important APIs
SpeechRecognitionTopicConstraint
SpeechRecognitionListConstraint
SpeechRecognitionGrammarFileConstraint

Speech recognition requires at least one constraint to define a recognizable vocabulary. If no constraint is specified,
the predefined dictation grammar of Universal Windows apps is used. See Speech recognition.

Add constraints
Use the SpeechRecognizer.Constraints property to add constraints to a speech recognizer.
Here, we cover the three kinds of speech recognition constraints used from within an app. (For voice command
constraints, see Launch a foreground app with voice commands in Cortana.)
SpeechRecognitionTopicConstraint: A constraint based on a predefined grammar (dictation or web search).
SpeechRecognitionListConstraint: A constraint based on a list of words or phrases.
SpeechRecognitionGrammarFileConstraint: A constraint defined in a Speech Recognition Grammar
Specification (SRGS) file.
Each speech recognizer can have one constraint collection. Only these combinations of constraints are valid:
A single topic constraint (a predefined dictation or web-search grammar). No other constraints are allowed.
A combination of list constraints and/or grammar-file constraints.
Remember: Call the [SpeechRecognizer.CompileConstraintsAsync](https://msdn.microsoft.com/library/windows/apps/dn653240) method to compile the constraints before starting
the recognition process.

Specify a web-search grammar (SpeechRecognitionTopicConstraint)


Topic constraints (dictation or web-search grammar) must be added to the constraints collection of a speech
recognizer.
Here, we add a web-search grammar to the constraints collection.
private async void WeatherSearch_Click(object sender, RoutedEventArgs e)
{
    // Create an instance of SpeechRecognizer.
    var speechRecognizer = new Windows.Media.SpeechRecognition.SpeechRecognizer();

    // Listen for audio input issues.
    speechRecognizer.RecognitionQualityDegrading += speechRecognizer_RecognitionQualityDegrading;

    // Add a web search grammar to the recognizer.
    var webSearchGrammar = new Windows.Media.SpeechRecognition.SpeechRecognitionTopicConstraint(Windows.Media.SpeechRecognition.SpeechRecognitionScenario.WebSearch, "webSearch");

    speechRecognizer.UIOptions.AudiblePrompt = "Say what you want to search for...";
    speechRecognizer.UIOptions.ExampleText = @"Ex. 'weather for London'";
    speechRecognizer.Constraints.Add(webSearchGrammar);

    // Compile the constraint.
    await speechRecognizer.CompileConstraintsAsync();

    // Start recognition.
    Windows.Media.SpeechRecognition.SpeechRecognitionResult speechRecognitionResult = await speechRecognizer.RecognizeWithUIAsync();

    // Do something with the recognition result.
    var messageDialog = new Windows.UI.Popups.MessageDialog(speechRecognitionResult.Text, "Text spoken");
    await messageDialog.ShowAsync();
}

Specify a programmatic list constraint (SpeechRecognitionListConstraint)
List constraints must be added to the constraints collection of a speech recognizer.
Keep the following points in mind:
You can add multiple list constraints to a constraints collection.
You can use any collection that implements IIterable<String> for the string values.
Here, we programmatically specify an array of words as a list constraint and add it to the constraints collection of a
speech recognizer.
private async void YesOrNo_Click(object sender, RoutedEventArgs e)
{
    // Create an instance of SpeechRecognizer.
    var speechRecognizer = new Windows.Media.SpeechRecognition.SpeechRecognizer();

    // You could create this array dynamically.
    string[] responses = { "Yes", "No" };

    // Add a list constraint to the recognizer.
    var listConstraint = new Windows.Media.SpeechRecognition.SpeechRecognitionListConstraint(responses, "yesOrNo");

    speechRecognizer.UIOptions.ExampleText = @"Ex. 'yes', 'no'";
    speechRecognizer.Constraints.Add(listConstraint);

    // Compile the constraint.
    await speechRecognizer.CompileConstraintsAsync();

    // Start recognition.
    Windows.Media.SpeechRecognition.SpeechRecognitionResult speechRecognitionResult = await speechRecognizer.RecognizeWithUIAsync();

    // Do something with the recognition result.
    var messageDialog = new Windows.UI.Popups.MessageDialog(speechRecognitionResult.Text, "Text spoken");
    await messageDialog.ShowAsync();
}

Specify an SRGS grammar constraint (SpeechRecognitionGrammarFileConstraint)
SRGS grammar files must be added to the constraints collection of a speech recognizer.
SRGS Version 1.0 is the industry-standard markup language for creating XML-format grammars for speech
recognition. Although Universal Windows apps provide alternatives to using SRGS for creating speech-recognition
grammars, you might find that using SRGS to create grammars produces the best results, particularly for more
involved speech recognition scenarios.
SRGS grammars provide a full set of features to help you architect complex voice interaction for your apps. For
example, with SRGS grammars you can:
Specify the order in which words and phrases must be spoken to be recognized.
Combine words from multiple lists and phrases to be recognized.
Link to other grammars.
Assign a weight to an alternative word or phrase to increase or decrease the likelihood that it will be used to
match speech input.
Include optional words or phrases.
Use special rules that help filter out unspecified or unanticipated input, such as random speech that doesn't
match the grammar, or background noise.
Use semantics to define what speech recognition means to your app.
Specify pronunciations, either inline in a grammar or via a link to a lexicon.
For more info about SRGS elements and attributes, see the SRGS Grammar XML Reference. To get started creating
an SRGS grammar, see How to Create a Basic XML Grammar.
Keep the following points in mind:
You can add multiple grammar-file constraints to a constraints collection.
Use the .grxml file extension for XML-based grammar documents that conform to SRGS rules.
This example uses an SRGS grammar defined in a file named srgs.grxml (described later). In the file properties, the
Package Action is set to Content with Copy to Output Directory set to Copy always:

private async void Colors_Click(object sender, RoutedEventArgs e)
{
    // Create an instance of SpeechRecognizer.
    var speechRecognizer = new Windows.Media.SpeechRecognition.SpeechRecognizer();

    // Add a grammar file constraint to the recognizer.
    var storageFile = await Windows.Storage.StorageFile.GetFileFromApplicationUriAsync(new Uri("ms-appx:///Colors.grxml"));
    var grammarFileConstraint = new Windows.Media.SpeechRecognition.SpeechRecognitionGrammarFileConstraint(storageFile, "colors");

    speechRecognizer.UIOptions.ExampleText = @"Ex. 'blue background', 'green text'";
    speechRecognizer.Constraints.Add(grammarFileConstraint);

    // Compile the constraint.
    await speechRecognizer.CompileConstraintsAsync();

    // Start recognition.
    Windows.Media.SpeechRecognition.SpeechRecognitionResult speechRecognitionResult = await speechRecognizer.RecognizeWithUIAsync();

    // Do something with the recognition result.
    var messageDialog = new Windows.UI.Popups.MessageDialog(speechRecognitionResult.Text, "Text spoken");
    await messageDialog.ShowAsync();
}

This SRGS file (srgs.grxml) includes semantic interpretation tags. These tags provide a mechanism for returning
grammar match data to your app. Grammars must conform to the World Wide Web Consortium (W3C) Semantic
Interpretation for Speech Recognition (SISR) 1.0 specification.
Here, we listen for variants of "yes" and "no".
<grammar xml:lang="en-US"
root="yesOrNo"
version="1.0"
tag-format="semantics/1.0"
xmlns="http://www.w3.org/2001/06/grammar">

<!-- The following rules recognize variants of yes and no. -->
<rule id="yesOrNo">
<one-of>
<item>
<one-of>
<item>yes</item>
<item>yeah</item>
<item>yep</item>
<item>yup</item>
<item>un huh</item>
<item>yay yus</item>
</one-of>
<tag>out="yes";</tag>
</item>
<item>
<one-of>
<item>no</item>
<item>nope</item>
<item>nah</item>
<item>uh uh</item>
</one-of>
<tag>out="no";</tag>
</item>
</one-of>
</rule>
</grammar>

Manage constraints
After a constraint collection is loaded for recognition, your app can manage which constraints are enabled for
recognition operations by setting the IsEnabled property of a constraint to true or false. The default setting is
true.
It's usually more efficient to load constraints once, enabling and disabling them as needed, rather than to load,
unload, and compile constraints for each recognition operation. Use the IsEnabled property, as required.
Restricting the number of constraints serves to limit the amount of data that the speech recognizer needs to search
and match against the speech input. This can improve both the performance and the accuracy of speech
recognition.
Decide which constraints are enabled based on the phrases that your app can expect in the context of the current
recognition operation. For example, if the current app context is to display a color, you probably don't need to
enable a constraint that recognizes the names of animals.
To prompt the user for what can be spoken, use the SpeechRecognizerUIOptions.AudiblePrompt and
SpeechRecognizerUIOptions.ExampleText properties, which are set by means of the
SpeechRecognizer.UIOptions property. Preparing users for what they can say during the recognition operation
increases the likelihood that they will speak a phrase that can be matched to an active constraint.
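
Here's a minimal sketch of this pattern, assuming a page-level speechRecognizer field that was initialized earlier; the two list constraints and their tags are ours:

var colorConstraint = new Windows.Media.SpeechRecognition.SpeechRecognitionListConstraint(
    new string[] { "red", "green", "blue" }, "colors");
var animalConstraint = new Windows.Media.SpeechRecognition.SpeechRecognitionListConstraint(
    new string[] { "cat", "dog", "bird" }, "animals");

// Load and compile both constraints once, during initialization.
speechRecognizer.Constraints.Add(colorConstraint);
speechRecognizer.Constraints.Add(animalConstraint);
await speechRecognizer.CompileConstraintsAsync();

// Later, in a context where only colors make sense, toggle rather than recompile.
colorConstraint.IsEnabled = true;
animalConstraint.IsEnabled = false;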

Related articles
Speech interactions
Samples
Speech recognition and speech synthesis sample
Continuous dictation

Learn how to capture and recognize long-form, continuous dictation speech input.

Important APIs
SpeechContinuousRecognitionSession
ContinuousRecognitionSession

In Speech recognition, you learned how to capture and recognize relatively short speech input using the
RecognizeAsync or RecognizeWithUIAsync methods of a SpeechRecognizer object, for example, when
composing a short message service (SMS) message or when asking a question.
For longer, continuous speech recognition sessions, such as dictation or email, use the
ContinuousRecognitionSession property of a SpeechRecognizer to obtain a
SpeechContinuousRecognitionSession object.

Set up
Your app needs a few objects to manage a continuous dictation session:
An instance of a SpeechRecognizer object.
A reference to a UI dispatcher to update the UI during dictation.
A way to track the accumulated words spoken by the user.
Here, we declare a SpeechRecognizer instance as a private field of the code-behind class. Your app needs to store
a reference elsewhere if you want continuous dictation to persist beyond a single Extensible Application Markup
Language (XAML) page.

private SpeechRecognizer speechRecognizer;

During dictation, the recognizer raises events from a background thread. Because a background thread cannot
directly update the UI in XAML, your app must use a dispatcher to update the UI in response to recognition events.
Here, we declare a private field that will be initialized later with the UI dispatcher.

// Speech events may originate from a thread other than the UI thread.
// Keep track of the UI thread dispatcher so that we can update the
// UI in a thread-safe manner.
private CoreDispatcher dispatcher;

To track what the user is saying, you need to handle recognition events raised by the speech recognizer. These
events provide the recognition results for chunks of user utterances.
Here, we use a StringBuilder object to hold all the recognition results obtained during the session. New results are
appended to the StringBuilder as they are processed.

private StringBuilder dictatedTextBuilder;


Initialization
During the initialization of continuous speech recognition, you must:
Fetch the dispatcher for the UI thread if you update the UI of your app in the continuous recognition event
handlers.
Initialize the speech recognizer.
Compile the built-in dictation grammar. Note Speech recognition requires at least one constraint to define a
recognizable vocabulary. If no constraint is specified, a predefined dictation grammar is used. See Speech
recognition.
Set up the event listeners for recognition events.
In this example, we initialize speech recognition in the OnNavigatedTo page event.
1. Because events raised by the speech recognizer occur on a background thread, create a reference to the
dispatcher for updates to the UI thread. OnNavigatedTo is always invoked on the UI thread.

this.dispatcher = CoreWindow.GetForCurrentThread().Dispatcher;

2. We then initialize the SpeechRecognizer instance.

this.speechRecognizer = new SpeechRecognizer();

3. We then add and compile the grammar that defines all of the words and phrases that can be recognized by
the SpeechRecognizer.
If you don't specify a grammar explicitly, a predefined dictation grammar is used by default. Typically, the
default grammar is best for general dictation.
Here, we call CompileConstraintsAsync immediately without adding a grammar.

SpeechRecognitionCompilationResult result =
await speechRecognizer.CompileConstraintsAsync();

Handle recognition events


You can capture a single, brief utterance or phrase by calling RecognizeAsync or RecognizeWithUIAsync.
However, to capture a longer, continuous recognition session, we specify event listeners to run in the background
as the user speaks and define handlers to build the dictation string.
We then use the ContinuousRecognitionSession property of our recognizer to obtain a
SpeechContinuousRecognitionSession object that provides methods and events for managing a continuous
recognition session.
Two events in particular are critical:
ResultGenerated, which occurs when the recognizer has generated some results.
Completed, which occurs when the continuous recognition session has ended.
The ResultGenerated event is raised as the user speaks. The recognizer continuously listens to the user and
periodically raises an event that passes a chunk of speech input. You must examine the speech input, using the
Result property of the event argument, and take appropriate action in the event handler, such as appending the
text to a StringBuilder object.
As an instance of SpeechRecognitionResult, the Result property is useful for determining whether you want to
accept the speech input. A SpeechRecognitionResult provides two properties for this:
Status indicates whether the recognition was successful. Recognition can fail for a variety of reasons.
Confidence indicates the relative confidence that the recognizer understood the correct words.
Here are the basic steps for supporting continuous recognition:
1. Here, we register the handler for the ResultGenerated continuous recognition event in the
OnNavigatedTo page event.

speechRecognizer.ContinuousRecognitionSession.ResultGenerated +=
ContinuousRecognitionSession_ResultGenerated;

2. We then check the Confidence property. If the value of Confidence is Medium or better, we append the text
to the StringBuilder. We also update the UI as we collect input.
Note The ResultGenerated event is raised on a background thread that cannot update the UI directly. If a
handler needs to update the UI (as the Speech and TTS sample does), you must dispatch the updates to the
UI thread through the RunAsync method of the dispatcher.

private async void ContinuousRecognitionSession_ResultGenerated(
    SpeechContinuousRecognitionSession sender,
    SpeechContinuousRecognitionResultGeneratedEventArgs args)
{
    if (args.Result.Confidence == SpeechRecognitionConfidence.Medium ||
        args.Result.Confidence == SpeechRecognitionConfidence.High)
    {
        dictatedTextBuilder.Append(args.Result.Text + " ");

        await dispatcher.RunAsync(CoreDispatcherPriority.Normal, () =>
        {
            dictationTextBox.Text = dictatedTextBuilder.ToString();
            btnClearText.IsEnabled = true;
        });
    }
    else
    {
        await dispatcher.RunAsync(CoreDispatcherPriority.Normal, () =>
        {
            dictationTextBox.Text = dictatedTextBuilder.ToString();
        });
    }
}

3. We then handle the Completed event, which indicates the end of continuous dictation.
The session ends when you call the StopAsync or CancelAsync methods (described in the next section). The
session can also end when an error occurs, or when the user has stopped speaking. Check the Status
property of the event argument to determine why the session ended (SpeechRecognitionResultStatus).
Here, we register the handler for the Completed continuous recognition event in the OnNavigatedTo page
event.

speechRecognizer.ContinuousRecognitionSession.Completed +=
ContinuousRecognitionSession_Completed;

4. The event handler checks the Status property to determine whether the recognition was successful. It also
handles the case where the user has stopped speaking. Often, a TimeoutExceeded status is considered successful
recognition, as it means the user has finished speaking. You should handle this case in your code for a good
experience.
Note The ResultGenerated event is raised on a background thread that cannot update the UI directly. If a
handler needs to update the UI (as the Speech and TTS sample does), you must dispatch the updates to the
UI thread through the RunAsync method of the dispatcher.

private async void ContinuousRecognitionSession_Completed(
    SpeechContinuousRecognitionSession sender,
    SpeechContinuousRecognitionCompletedEventArgs args)
{
    if (args.Status != SpeechRecognitionResultStatus.Success)
    {
        if (args.Status == SpeechRecognitionResultStatus.TimeoutExceeded)
        {
            await dispatcher.RunAsync(CoreDispatcherPriority.Normal, () =>
            {
                rootPage.NotifyUser(
                    "Automatic Time Out of Dictation",
                    NotifyType.StatusMessage);

                DictationButtonText.Text = " Continuous Recognition";
                dictationTextBox.Text = dictatedTextBuilder.ToString();
            });
        }
        else
        {
            await dispatcher.RunAsync(CoreDispatcherPriority.Normal, () =>
            {
                rootPage.NotifyUser(
                    "Continuous Recognition Completed: " + args.Status.ToString(),
                    NotifyType.StatusMessage);

                DictationButtonText.Text = " Continuous Recognition";
            });
        }
    }
}

Provide ongoing recognition feedback


When people converse, they often rely on context to fully understand what is being said. Similarly, the speech
recognizer often needs context to provide high-confidence recognition results. For example, by themselves, the
words "weight" and "wait" are indistinguishable until more context can be gleaned from surrounding words. Until
the recognizer has some confidence that a word, or words, have been recognized correctly, it will not raise the
ResultGenerated event.
This can result in a less than ideal experience for the user as they continue speaking and no results are provided
until the recognizer has high enough confidence to raise the ResultGenerated event.
Handle the HypothesisGenerated event to improve this apparent lack of responsiveness. This event is raised
whenever the recognizer generates a new set of potential matches for the word being processed. The event
argument provides a Hypothesis property that contains the current matches. Show these to the user as they
continue speaking and reassure them that processing is still active. Once confidence is high and a recognition result
has been determined, replace the interim Hypothesis results with the final Result provided in the
ResultGenerated event.
Here, we append the hypothesis text and an ellipsis ("...") to the current value of the output TextBox. The contents
of the text box are updated as new hypotheses are generated and until the final results are obtained from the
ResultGenerated event.

private async void SpeechRecognizer_HypothesisGenerated(
    SpeechRecognizer sender,
    SpeechRecognitionHypothesisGeneratedEventArgs args)
{
    string hypothesis = args.Hypothesis.Text;
    string textboxContent = dictatedTextBuilder.ToString() + " " + hypothesis + " ...";

    await dispatcher.RunAsync(CoreDispatcherPriority.Normal, () =>
    {
        dictationTextBox.Text = textboxContent;
        btnClearText.IsEnabled = true;
    });
}
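
This handler runs only if it was registered, typically alongside the other handlers during initialization:

speechRecognizer.HypothesisGenerated += SpeechRecognizer_HypothesisGenerated;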

Start and stop recognition


Before starting a recognition session, check the value of the speech recognizer State property. The speech
recognizer must be in an Idle state.
After checking the state of the speech recognizer, we start the session by calling the StartAsync method of the
speech recognizer's ContinuousRecognitionSession property.

if (speechRecognizer.State == SpeechRecognizerState.Idle)
{
await speechRecognizer.ContinuousRecognitionSession.StartAsync();
}

Recognition can be stopped in two ways:


StopAsync lets any pending recognition events complete (ResultGenerated continues to be raised until all
pending recognition operations are complete).
CancelAsync terminates the recognition session immediately and discards any pending results.
After checking the state of the speech recognizer, we stop the session by calling the CancelAsync method of the
speech recognizer's ContinuousRecognitionSession property.

if (speechRecognizer.State != SpeechRecognizerState.Idle)
{
await speechRecognizer.ContinuousRecognitionSession.CancelAsync();
}
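
If you want pending results to be delivered instead of discarded, call StopAsync in the same way:

if (speechRecognizer.State != SpeechRecognizerState.Idle)
{
    await speechRecognizer.ContinuousRecognitionSession.StopAsync();
}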

Note A ResultGenerated event can occur after a call to CancelAsync.
Because of multithreading, a ResultGenerated event might still remain on the stack when CancelAsync is called.
If so, the ResultGenerated event still fires.
If you set any private fields when canceling the recognition session, always confirm their values in the
ResultGenerated handler. For example, don't assume a field is initialized in your handler if you set it to null
when you cancel the session.
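
A defensive version of the ResultGenerated handler might start like this (a sketch, assuming your cancellation path sets dictatedTextBuilder to null):

private void ContinuousRecognitionSession_ResultGenerated(
    SpeechContinuousRecognitionSession sender,
    SpeechContinuousRecognitionResultGeneratedEventArgs args)
{
    // The session might have been canceled after this event was queued.
    // Re-check any state that the cancellation path resets.
    if (dictatedTextBuilder == null)
    {
        return;
    }

    dictatedTextBuilder.Append(args.Result.Text + " ");
}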

Related articles
Speech interactions
Samples
Speech recognition and speech synthesis sample
Manage issues with audio input

Learn how to manage issues with speech-recognition accuracy caused by audio-input quality.

Important APIs
SpeechRecognizer
RecognitionQualityDegrading
SpeechRecognitionAudioProblem

Assess audio-input quality


When speech recognition is active, use the RecognitionQualityDegrading event of your speech recognizer to
determine whether one or more audio issues might be interfering with speech input. The event argument
(SpeechRecognitionQualityDegradingEventArgs) provides the Problem property, which describes the issues
detected with the audio input.
Recognition can be affected by too much background noise, a muted microphone, and the volume or speed of the
speaker.
Here, we configure a speech recognizer and start listening for the RecognitionQualityDegrading event.

private async void WeatherSearch_Click(object sender, RoutedEventArgs e)
{
    // Create an instance of SpeechRecognizer.
    var speechRecognizer = new Windows.Media.SpeechRecognition.SpeechRecognizer();

    // Listen for audio input issues.
    speechRecognizer.RecognitionQualityDegrading += speechRecognizer_RecognitionQualityDegrading;

    // Add a web search grammar to the recognizer.
    var webSearchGrammar = new Windows.Media.SpeechRecognition.SpeechRecognitionTopicConstraint(Windows.Media.SpeechRecognition.SpeechRecognitionScenario.WebSearch, "webSearch");

    speechRecognizer.UIOptions.AudiblePrompt = "Say what you want to search for...";
    speechRecognizer.UIOptions.ExampleText = @"Ex. 'weather for London'";
    speechRecognizer.Constraints.Add(webSearchGrammar);

    // Compile the constraint.
    await speechRecognizer.CompileConstraintsAsync();

    // Start recognition.
    Windows.Media.SpeechRecognition.SpeechRecognitionResult speechRecognitionResult = await speechRecognizer.RecognizeWithUIAsync();

    // Do something with the recognition result.
    var messageDialog = new Windows.UI.Popups.MessageDialog(speechRecognitionResult.Text, "Text spoken");
    await messageDialog.ShowAsync();
}

Manage the speech-recognition experience


Use the description provided by the Problem property to help the user improve conditions for recognition.
Here, we create a handler for the RecognitionQualityDegrading event that checks for a low volume level. We
then use a SpeechSynthesizer object to suggest that the user try speaking louder.

private async void speechRecognizer_RecognitionQualityDegrading(
    Windows.Media.SpeechRecognition.SpeechRecognizer sender,
    Windows.Media.SpeechRecognition.SpeechRecognitionQualityDegradingEventArgs args)
{
    // Create an instance of a speech synthesis engine (voice).
    var speechSynthesizer =
        new Windows.Media.SpeechSynthesis.SpeechSynthesizer();

    // If input speech is too quiet, prompt the user to speak louder.
    if (args.Problem == Windows.Media.SpeechRecognition.SpeechRecognitionAudioProblem.TooQuiet)
    {
        // Generate the audio stream from plain text.
        Windows.Media.SpeechSynthesis.SpeechSynthesisStream stream;
        try
        {
            stream = await speechSynthesizer.SynthesizeTextToStreamAsync("Try speaking louder");
            stream.Seek(0);
        }
        catch (Exception)
        {
            stream = null;
        }

        // Send the stream to the MediaElement declared in XAML.
        await CoreApplication.MainView.CoreWindow.Dispatcher.RunAsync(Windows.UI.Core.CoreDispatcherPriority.High, () =>
        {
            this.media.SetSource(stream, stream.ContentType);
        });
    }
}

Related articles
Speech interactions
Samples
Speech recognition and speech synthesis sample
Set speech recognition timeouts

Set how long a speech recognizer ignores silence or unrecognizable sounds (babble) and continues listening for
speech input.

Important APIs
Timeouts
SpeechRecognizerTimeouts

Set a timeout
Here, we specify various Timeouts values:
InitialSilenceTimeout - The length of time that a SpeechRecognizer detects silence (before any recognition
results have been generated) and assumes speech input is not forthcoming.
BabbleTimeout - The length of time that a SpeechRecognizer continues to listen to unrecognizable sounds
(babble) before it assumes speech input has ended and finalizes the recognition operation.
EndSilenceTimeout - The length of time that a SpeechRecognizer detects silence (after recognition results have
been generated) and assumes speech input has ended.
Note Timeouts can be set on a per-recognizer basis.

// Set timeout settings.
recognizer.Timeouts.InitialSilenceTimeout = TimeSpan.FromSeconds(6.0);
recognizer.Timeouts.BabbleTimeout = TimeSpan.FromSeconds(4.0);
recognizer.Timeouts.EndSilenceTimeout = TimeSpan.FromSeconds(1.2);

Related articles
Speech interactions
Samples
Speech recognition and speech synthesis sample
Touch interactions

Design your app with the expectation that touch will be the primary input method of your users. If you use UWP
controls, support for touchpad, mouse, and pen/stylus requires no additional programming, because UWP apps
provide this for free.
However, keep in mind that a UI optimized for touch is not always superior to a traditional UI. Both provide
advantages and disadvantages that are unique to a technology and application. In the move to a touch-first UI, it is
important to understand the core differences between touch (including touchpad), pen/stylus, mouse, and
keyboard input.

Important APIs
Windows.UI.Xaml.Input
Windows.UI.Core
Windows.Devices.Input

Many devices have multi-touch screens that support using one or more fingers (or touch contacts) as input. The
touch contacts, and their movement, are interpreted as touch gestures and manipulations to support various user
interactions.
The Universal Windows Platform (UWP) includes a number of different mechanisms for handling touch input,
enabling you to create an immersive experience that your users can explore with confidence. Here, we cover the
basics of using touch input in a UWP app.
Touch interactions require three things:
A touch-sensitive display.
The direct contact (or proximity to, if the display has proximity sensors and supports hover detection) of one or
more fingers on that display.
Movement of the touch contacts (or lack thereof, based on a time threshold).
The input data provided by the touch sensor can be:
Interpreted as a physical gesture for direct manipulation of one or more UI elements (such as panning, rotating,
resizing, or moving). In contrast, interacting with an element through its properties window, dialog box, or other
UI affordance is considered indirect manipulation.
Recognized as an alternative input method, such as mouse or pen.
Used to complement or modify aspects of other input methods, such as smudging an ink stroke drawn with a
pen.
Touch input typically involves the direct manipulation of an element on the screen. The element responds
immediately to any touch contact within its hit test area, and reacts appropriately to any subsequent movement of
the touch contacts, including removal.
Custom touch gestures and interactions should be designed carefully. They should be intuitive, responsive, and
discoverable, and they should let users explore your app with confidence.
Ensure that app functionality is exposed consistently across every supported input device type. If necessary, use
some form of indirect input mode, such as text input for keyboard interactions, or UI affordances for mouse and
pen.
Remember that traditional input devices (such as mouse and keyboard) are familiar and appealing to many users.
They can offer speed, accuracy, and tactile feedback that touch might not.
Providing unique and distinctive interaction experiences for all input devices will support the widest range of
capabilities and preferences, appeal to the broadest possible audience, and attract more customers to your app.

Compare touch interaction requirements


The following table shows some of the differences between input devices that you should consider when you
design touch-optimized UWP apps.

| Factor | Touch interactions | Mouse, keyboard, pen/stylus interactions | Touchpad |
| --- | --- | --- | --- |
| Precision | The contact area of a fingertip is greater than a single x-y coordinate, which increases the chances of unintended command activations. | The mouse and pen/stylus supply a precise x-y coordinate. | Same as mouse. |
| | The shape of the contact area changes throughout the movement. | Mouse movements and pen/stylus strokes supply precise x-y coordinates. Keyboard focus is explicit. | Same as mouse. |
| | There is no mouse cursor to assist with targeting. | The mouse cursor, pen/stylus cursor, and keyboard focus all assist with targeting. | Same as mouse. |
| Human anatomy | Fingertip movements are imprecise, because a straight-line motion with one or more fingers is difficult. This is due to the curvature of hand joints and the number of joints involved in the motion. | It's easier to perform a straight-line motion with the mouse or pen/stylus because the hand that controls them travels a shorter physical distance than the cursor on the screen. | Same as mouse. |
| | Some areas on the touch surface of a display device can be difficult to reach due to finger posture and the user's grip on the device. | The mouse and pen/stylus can reach any part of the screen, while any control should be accessible by the keyboard through tab order. | Finger posture and grip can be an issue. |
| | Objects might be obscured by one or more fingertips or the user's hand. This is known as occlusion. | Indirect input devices do not cause occlusion. | Same as mouse. |
| Object state | Touch uses a two-state model: the touch surface of a display device is either touched (on) or not (off). There is no hover state that can trigger additional visual feedback. | A mouse, pen/stylus, and keyboard all expose a three-state model: up (off), down (on), and hover (focus). Hover lets users explore and learn through tooltips associated with UI elements. Hover and focus effects can relay which objects are interactive and also help with targeting. | Same as mouse. |
| Rich interaction | Supports multi-touch: multiple input points (fingertips) on a touch surface. | Supports a single input point. | Same as touch. |
| | Supports direct manipulation of objects through gestures such as tapping, dragging, sliding, pinching, and rotating. | No support for direct manipulation, as mouse, pen/stylus, and keyboard are indirect input devices. | Same as mouse. |

Note
Indirect input has had the benefit of more than 25 years of refinement. Features such as hover-triggered tooltips
have been designed to solve UI exploration specifically for touchpad, mouse, pen/stylus, and keyboard input. UI
features like this have been re-designed for the rich experience provided by touch input, without compromising
the user experience for these other devices.

Use touch feedback


Appropriate visual feedback during interactions with your app helps users recognize, learn, and adapt to how their
interactions are interpreted by both the app and Windows. Visual feedback can indicate successful interactions,
relay system status, improve the sense of control, reduce errors, help users understand the system and input
device, and encourage interaction.
Visual feedback is critical when the user relies on touch input for activities that require accuracy and precision
based on location. Display feedback whenever and wherever touch input is detected, to help the user understand
any custom targeting rules that are defined by your app and its controls.

Targeting
Targeting is optimized through:
Touch target sizes
Clear size guidelines ensure that applications provide a comfortable UI that contains objects and controls
that are easy and safe to target.
Contact geometry
The entire contact area of the finger determines the most likely target object.
Scrubbing
Items within a group are easily re-targeted by dragging the finger between them (for example, radio
buttons). The current item is activated when the touch is released.
Rocking
Densely packed items (for example, hyperlinks) are easily re-targeted by pressing the finger down and,
without sliding, rocking it back and forth over the items. Due to occlusion, the current item is identified
through a tooltip or the status bar and is activated when the touch is released.

Accuracy
Design for sloppy interactions by using:
Snap-points that can make it easier to stop at desired locations when users interact with content.
Directional "rails" that can assist with vertical or horizontal panning, even when the hand moves in a slight arc.
For more information, see Guidelines for panning.

Occlusion
Finger and hand occlusion is avoided through:
Size and positioning of UI
Make UI elements big enough so that they cannot be completely covered by a fingertip contact area.
Position menus and pop-ups above the contact area whenever possible.
Tooltips
Show tooltips when a user maintains finger contact on an object. This is useful for describing object
functionality. The user can drag the fingertip off the object to avoid invoking the tooltip.
For small objects, offset tooltips so they are not covered by the fingertip contact area. This is helpful for
targeting.
Handles for precision
Where precision is required (for example, text selection), provide selection handles that are offset to
improve accuracy. For more information, see Guidelines for selecting text and images (Windows Runtime
apps).

Timing
Avoid timed mode changes in favor of direct manipulation. Direct manipulation simulates the direct, real-time
physical handling of an object. The object responds as the fingers are moved.
A timed interaction, on the other hand, occurs after a touch interaction. Timed interactions typically depend on
invisible thresholds like time, distance, or speed to determine what command to perform. Timed interactions have
no visual feedback until the system performs the action.
Direct manipulation provides a number of benefits over timed interactions:
Instant visual feedback during interactions make users feel more engaged, confident, and in control.
Direct manipulations make it safer to explore a system because they are reversible: users can easily step back
through their actions in a logical and intuitive manner.
Interactions that directly affect objects and mimic real world interactions are more intuitive, discoverable, and
memorable. They don't rely on obscure or abstract interactions.
Timed interactions can be difficult to perform, as users must reach arbitrary and invisible thresholds.
In addition, the following are strongly recommended:
Manipulations should not be distinguished by the number of fingers used.
Interactions should support compound manipulations. For example, pinch to zoom while dragging the fingers
to pan.
Interactions should not be distinguished by time. The same interaction should have the same outcome
regardless of the time taken to perform it. Time-based activations introduce mandatory delays for users and
detract from both the immersive nature of direct manipulation and the perception of system
responsiveness.
Note An exception to this is where you use specific timed interactions to assist in learning and exploration
(for example, press and hold).
Appropriate descriptions and visual cues have a great effect on the use of advanced interactions.

App views
Tweak the user interaction experience through the pan/scroll and zoom settings of your app views. An app view
dictates how a user accesses and manipulates your app and its content. Views also provide behaviors such as
inertia, content boundary bounce, and snap points.
Pan and scroll settings of the ScrollViewer control dictate how users navigate within a single view, when the
content of the view doesn't fit within the viewport. A single view can be, for example, a page of a magazine or
book, the folder structure of a computer, a library of documents, or a photo album.
Zoom settings apply to both optical zoom (supported by the ScrollViewer control) and the Semantic Zoom
control. Semantic Zoom is a touch-optimized technique for presenting and navigating large sets of related data or
content within a single view. It works by using two distinct modes of classification, or zoom levels. This is
analogous to panning and scrolling within a single view. Panning and scrolling can be used in conjunction with
Semantic Zoom.
Use app views and events to modify the pan/scroll and zoom behaviors. This can provide a smoother interaction
experience than is possible through the handling of pointer and gesture events.
For more info about app views, see Controls, layouts, and text.

Custom touch interactions


If you implement your own interaction support, keep in mind that users expect an intuitive experience involving
direct interaction with the UI elements in your app. We recommend that you model your custom interactions on
the platform control libraries to keep things consistent and discoverable. The controls in these libraries provide the
full user interaction experience, including standard interactions, animated physics effects, visual feedback, and
accessibility. Create custom interactions only if there is a clear, well-defined requirement and basic interactions
don't support your scenario.
To provide customized touch support, you can handle various UIElement events. These events are grouped into
three levels of abstraction.
Static gesture events are triggered after an interaction is complete. Gesture events include Tapped,
DoubleTapped, RightTapped, and Holding.
You can disable gesture events on specific elements by setting IsTapEnabled, IsDoubleTapEnabled,
IsRightTapEnabled, and IsHoldingEnabled to false.
Pointer events such as PointerPressed and PointerMoved provide low-level details for each touch
contact, including pointer motion and the ability to distinguish press and release events.
A pointer is a generic input type with a unified event mechanism. It exposes basic info, such as screen
position, on the active input source, which can be touch, touchpad, mouse, or pen.
Manipulation gesture events, such as ManipulationStarted, indicate an ongoing interaction. They start
firing when the user touches an element and continue until the user lifts their finger(s), or the manipulation
is canceled.
Manipulation events include multi-touch interactions such as zooming, panning, or rotating, and
interactions that use inertia and velocity data such as dragging. The information provided by the
manipulation events doesn't identify the form of the interaction that was performed, but rather includes
data such as position, translation delta, and velocity. You can use this touch data to determine the type of
interaction that should be performed.
Here is the basic set of touch gestures supported by the UWP.

| Name | Type | Description |
| --- | --- | --- |
| Tap | Static gesture | One finger touches the screen and lifts up. |
| Press and hold | Static gesture | One finger touches the screen and stays in place. |
| Slide | Manipulation gesture | One or more fingers touch the screen and move in the same direction. |
| Swipe | Manipulation gesture | One or more fingers touch the screen and move a short distance in the same direction. |
| Turn | Manipulation gesture | Two or more fingers touch the screen and move in a clockwise or counter-clockwise arc. |
| Pinch | Manipulation gesture | Two or more fingers touch the screen and move closer together. |
| Stretch | Manipulation gesture | Two or more fingers touch the screen and move farther apart. |

Gesture events
For details about individual controls, see Controls list.

Pointer events
Pointer events are raised by a variety of active input sources, including touch, touchpad, pen, and mouse (they
replace traditional mouse events.)
Pointer events are based on a single input point (finger, pen tip, mouse cursor) and do not support velocity-based
interactions.
Here is a list of pointer events and their related event argument.

| Event or class | Description |
| --- | --- |
| PointerPressed | Occurs when a single finger touches the screen. |
| PointerReleased | Occurs when that same touch contact is lifted. |
| PointerMoved | Occurs when the pointer is dragged across the screen. |
| PointerEntered | Occurs when a pointer enters the hit test area of an element. |
| PointerExited | Occurs when a pointer exits the hit test area of an element. |
| PointerCanceled | Occurs when a touch contact is abnormally lost. |
| PointerCaptureLost | Occurs when a pointer capture is taken by another element. |
| PointerWheelChanged | Occurs when the delta value of a mouse wheel changes. |
| PointerRoutedEventArgs | Provides data for all pointer events. |

The following example shows how to use the PointerPressed, PointerReleased, and PointerExited events to
handle a tap interaction on a Rectangle object.
First, a Rectangle named touchRectangle is created in Extensible Application Markup Language (XAML).

<Grid Background="{ThemeResource ApplicationPageBackgroundThemeBrush}">
    <Rectangle Name="touchRectangle"
               Height="100" Width="200" Fill="Blue" />
</Grid>


Next, listeners for the PointerPressed, PointerReleased, and PointerExited events are specified.

MainPage::MainPage()
{
    InitializeComponent();

    // Pointer event listeners.
    touchRectangle->PointerPressed += ref new PointerEventHandler(this, &MainPage::touchRectangle_PointerPressed);
    touchRectangle->PointerReleased += ref new PointerEventHandler(this, &MainPage::touchRectangle_PointerReleased);
    touchRectangle->PointerExited += ref new PointerEventHandler(this, &MainPage::touchRectangle_PointerExited);
}

public MainPage()
{
    this.InitializeComponent();

    // Pointer event listeners.
    touchRectangle.PointerPressed += touchRectangle_PointerPressed;
    touchRectangle.PointerReleased += touchRectangle_PointerReleased;
    touchRectangle.PointerExited += touchRectangle_PointerExited;
}

Public Sub New()

&#39; This call is required by the designer.


InitializeComponent()

&#39; Pointer event listeners.


AddHandler touchRectangle.PointerPressed, AddressOf touchRectangle_PointerPressed
AddHandler touchRectangle.PointerReleased, AddressOf Me.touchRectangle_PointerReleased
AddHandler touchRectangle.PointerExited, AddressOf touchRectangle_PointerExited

End Sub

Finally, the PointerPressed event handler increases the Height and Width of the Rectangle, while the
PointerReleased and PointerExited event handlers set the Height and Width back to their starting values.
// Handler for pointer exited event.
void MainPage::touchRectangle_PointerExited(Object^ sender, PointerRoutedEventArgs^ e)
{
    Rectangle^ rect = (Rectangle^)sender;

    // Pointer moved outside Rectangle hit test area.
    // Reset the dimensions of the Rectangle.
    if (nullptr != rect)
    {
        rect->Width = 200;
        rect->Height = 100;
    }
}

// Handler for pointer released event.
void MainPage::touchRectangle_PointerReleased(Object^ sender, PointerRoutedEventArgs^ e)
{
    Rectangle^ rect = (Rectangle^)sender;

    // Reset the dimensions of the Rectangle.
    if (nullptr != rect)
    {
        rect->Width = 200;
        rect->Height = 100;
    }
}

// Handler for pointer pressed event.
void MainPage::touchRectangle_PointerPressed(Object^ sender, PointerRoutedEventArgs^ e)
{
    Rectangle^ rect = (Rectangle^)sender;

    // Change the dimensions of the Rectangle.
    if (nullptr != rect)
    {
        rect->Width = 250;
        rect->Height = 150;
    }
}
// Handler for pointer exited event.
private void touchRectangle_PointerExited(object sender, PointerRoutedEventArgs e)
{
    Rectangle rect = sender as Rectangle;

    // Pointer moved outside Rectangle hit test area.
    // Reset the dimensions of the Rectangle.
    if (null != rect)
    {
        rect.Width = 200;
        rect.Height = 100;
    }
}

// Handler for pointer released event.
private void touchRectangle_PointerReleased(object sender, PointerRoutedEventArgs e)
{
    Rectangle rect = sender as Rectangle;

    // Reset the dimensions of the Rectangle.
    if (null != rect)
    {
        rect.Width = 200;
        rect.Height = 100;
    }
}

// Handler for pointer pressed event.
private void touchRectangle_PointerPressed(object sender, PointerRoutedEventArgs e)
{
    Rectangle rect = sender as Rectangle;

    // Change the dimensions of the Rectangle.
    if (null != rect)
    {
        rect.Width = 250;
        rect.Height = 150;
    }
}
' Handler for pointer exited event.
Private Sub touchRectangle_PointerExited(sender As Object, e As PointerRoutedEventArgs)
    Dim rect As Rectangle = CType(sender, Rectangle)

    ' Pointer moved outside Rectangle hit test area.
    ' Reset the dimensions of the Rectangle.
    If (rect IsNot Nothing) Then
        rect.Width = 200
        rect.Height = 100
    End If
End Sub

' Handler for pointer released event.
Private Sub touchRectangle_PointerReleased(sender As Object, e As PointerRoutedEventArgs)
    Dim rect As Rectangle = CType(sender, Rectangle)

    ' Reset the dimensions of the Rectangle.
    If (rect IsNot Nothing) Then
        rect.Width = 200
        rect.Height = 100
    End If
End Sub

' Handler for pointer pressed event.
Private Sub touchRectangle_PointerPressed(sender As Object, e As PointerRoutedEventArgs)
    Dim rect As Rectangle = CType(sender, Rectangle)

    ' Change the dimensions of the Rectangle.
    If (rect IsNot Nothing) Then
        rect.Width = 250
        rect.Height = 150
    End If
End Sub

Manipulation events
Use manipulation events if you need to support multiple finger interactions in your app, or interactions that
require velocity data.
You can use manipulation events to detect interactions such as drag, zoom, and hold.
Here is a list of manipulation events and related event arguments.

EVENT OR CLASS | DESCRIPTION
ManipulationStarting event | Occurs when the manipulation processor is first created.
ManipulationStarted event | Occurs when an input device begins a manipulation on the UIElement.
ManipulationDelta event | Occurs when the input device changes position during a manipulation.
ManipulationInertiaStarting event | Occurs when the input device loses contact with the UIElement object during a manipulation and inertia begins.
ManipulationCompleted event | Occurs when a manipulation and inertia on the UIElement are complete.
ManipulationStartingRoutedEventArgs | Provides data for the ManipulationStarting event.
ManipulationStartedRoutedEventArgs | Provides data for the ManipulationStarted event.
ManipulationDeltaRoutedEventArgs | Provides data for the ManipulationDelta event.
ManipulationInertiaStartingRoutedEventArgs | Provides data for the ManipulationInertiaStarting event.
ManipulationVelocities | Describes the speed at which manipulations occur.
ManipulationCompletedRoutedEventArgs | Provides data for the ManipulationCompleted event.

A gesture consists of a series of manipulation events. Each gesture starts with a ManipulationStarted event, such as when a user touches the screen. Next, one or more ManipulationDelta events fire as, for example, the user drags a finger across the screen. Finally, a ManipulationCompleted event is raised when the interaction finishes.
Note If you don't have a touch-screen monitor, you can test your manipulation event code in the simulator using a
mouse and mouse wheel interface.
The following example shows how to use the ManipulationDelta events to handle a slide interaction on a
Rectangle and move it across the screen.
First, a Rectangle named touchRectangle is created in XAML with a Height and Width of 200.

<Grid Background="{ThemeResource ApplicationPageBackgroundThemeBrush}">
    <Rectangle Name="touchRectangle"
               Width="200" Height="200" Fill="Blue"
               ManipulationMode="All"/>
</Grid>

Next, a global TranslateTransform named dragTranslation is created for translating the Rectangle. A
ManipulationDelta event listener is specified on the Rectangle, and dragTranslation is added to the
RenderTransform of the Rectangle.

// Global translation transform used for changing the position of
// the Rectangle based on input data from the touch contact.
Windows::UI::Xaml::Media::TranslateTransform^ dragTranslation;

// Global translation transform used for changing the position of
// the Rectangle based on input data from the touch contact.
private TranslateTransform dragTranslation;

' Global translation transform used for changing the position of
' the Rectangle based on input data from the touch contact.
Private dragTranslation As TranslateTransform

MainPage::MainPage()
{
    InitializeComponent();

    // Listener for the ManipulationDelta event.
    touchRectangle->ManipulationDelta +=
        ref new ManipulationDeltaEventHandler(
            this,
            &MainPage::touchRectangle_ManipulationDelta);
    // New translation transform populated in
    // the ManipulationDelta handler.
    dragTranslation = ref new TranslateTransform();
    // Apply the translation to the Rectangle.
    touchRectangle->RenderTransform = dragTranslation;
}

public MainPage()
{
    this.InitializeComponent();

    // Listener for the ManipulationDelta event.
    touchRectangle.ManipulationDelta += touchRectangle_ManipulationDelta;
    // New translation transform populated in
    // the ManipulationDelta handler.
    dragTranslation = new TranslateTransform();
    // Apply the translation to the Rectangle.
    touchRectangle.RenderTransform = this.dragTranslation;
}

Public Sub New()

    ' This call is required by the designer.
    InitializeComponent()

    ' Listener for the ManipulationDelta event.
    AddHandler touchRectangle.ManipulationDelta,
        AddressOf testRectangle_ManipulationDelta
    ' New translation transform populated in
    ' the ManipulationDelta handler.
    dragTranslation = New TranslateTransform()
    ' Apply the translation to the Rectangle.
    touchRectangle.RenderTransform = dragTranslation

End Sub

Finally, in the ManipulationDelta event handler, the position of the Rectangle is updated by using the
TranslateTransform on the Delta property.
// Handler for the ManipulationDelta event.
// ManipulationDelta data is loaded into the
// translation transform and applied to the Rectangle.
void MainPage::touchRectangle_ManipulationDelta(Object^ sender,
    ManipulationDeltaRoutedEventArgs^ e)
{
    // Move the rectangle.
    dragTranslation->X += e->Delta.Translation.X;
    dragTranslation->Y += e->Delta.Translation.Y;
}

// Handler for the ManipulationDelta event.
// ManipulationDelta data is loaded into the
// translation transform and applied to the Rectangle.
void touchRectangle_ManipulationDelta(object sender,
    ManipulationDeltaRoutedEventArgs e)
{
    // Move the rectangle.
    dragTranslation.X += e.Delta.Translation.X;
    dragTranslation.Y += e.Delta.Translation.Y;
}

' Handler for the ManipulationDelta event.
' ManipulationDelta data is loaded into the
' translation transform and applied to the Rectangle.
Private Sub testRectangle_ManipulationDelta(
    sender As Object,
    e As ManipulationDeltaRoutedEventArgs)

    ' Move the rectangle.
    dragTranslation.X = (dragTranslation.X + e.Delta.Translation.X)
    dragTranslation.Y = (dragTranslation.Y + e.Delta.Translation.Y)

End Sub

Routed events
All of the pointer events, gesture events, and manipulation events mentioned here are implemented as routed events. This means that the event can potentially be handled by objects other than the one that originally raised the event. Successive parents in an object tree, such as the parent containers of a UIElement or the root Page of your app, can choose to handle these events even if the original element does not. Conversely, any object that does handle the event can mark the event as handled so that it no longer reaches any parent element. For more info about the routed event concept and how it affects how you write handlers for routed events, see Events and routed events overview.
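
As a brief illustration of this routing behavior, here is a minimal C# sketch building on the touchRectangle example above. It assumes the Rectangle sits inside a parent Grid named rootGrid (a hypothetical name); the child marks the event as handled so it stops routing, and the parent can explicitly opt back in to handled events.

// Child handler: respond to the press, then stop the event from routing to parents.
private void touchRectangle_PointerPressed(object sender, PointerRoutedEventArgs e)
{
    // ... respond to the press ...
    e.Handled = true; // parent elements no longer receive this event
}

// Parent handler: normally skipped once a child sets Handled = true ...
private void rootGrid_PointerPressed(object sender, PointerRoutedEventArgs e)
{
    // Runs only if no child marked the event as handled.
}

// ... unless the parent explicitly asks to see handled events too:
rootGrid.AddHandler(UIElement.PointerPressedEvent,
    new PointerEventHandler(rootGrid_PointerPressed),
    handledEventsToo: true);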

Dos and don'ts


Design applications with touch interaction as the primary expected input method.
Provide visual feedback for interactions of all types (touch, pen, stylus, mouse, and so on).
Optimize targeting by adjusting touch target size, contact geometry, scrubbing, and rocking.
Optimize accuracy through the use of snap points and directional "rails".
Provide tooltips and handles to help improve touch accuracy for tightly packed UI items.
Avoid timed interactions whenever possible (an example of appropriate use: touch and hold).
Avoid using the number of fingers to distinguish between manipulations whenever possible.
Related articles
Handle pointer input
Identify input devices
Samples
Basic input sample
Low latency input sample
User interaction mode sample
Focus visuals sample
Archive Samples
Input: Device capabilities sample
Input: XAML user input events sample
XAML scrolling, panning, and zooming sample
Input: Gestures and manipulations with GestureRecognizer
Touchpad design guidelines

Design your app so that users can interact with it through a touchpad. A touchpad combines indirect multi-touch input with the precision input of a pointing device, such as a mouse. This combination makes the touchpad suited to both a touch-optimized UI and the smaller targets of productivity apps.

Touchpad interactions require three things:


A standard touchpad or a Windows Precision Touchpad.
Precision touchpads are optimized for Universal Windows Platform (UWP) devices. They enable the system
to handle certain aspects of the touchpad experience natively, such as finger tracking and palm detection, for
a more consistent experience across devices.
The direct contact of one or more fingers on the touchpad.
Movement of the touch contacts (or lack thereof, based on a time threshold).
The input data provided by the touchpad sensor can be:
Interpreted as a physical gesture for direct manipulation of one or more UI elements (such as panning, rotating,
resizing, or moving). In contrast, interacting with an element through its properties window or other dialog box
is considered indirect manipulation.
Recognized as an alternative input method, such as mouse or pen.
Used to complement or modify aspects of other input methods, such as smudging an ink stroke drawn with a
pen.
A touchpad combines indirect multi-touch input with the precision input of a pointing device, such as a mouse. This
combination makes the touchpad suited to both touch-optimized UI and the typically smaller targets of productivity
apps and the desktop environment. Optimize your Windows Store app design for touch input and get touchpad
support by default.
Because of the convergence of interaction experiences supported by touchpads, we recommend using the
PointerEntered event to provide mouse-style UI commands in addition to the built-in support for touch input. For
example, use previous and next buttons to let users flip through pages of content as well as pan through the
content.
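
For example, here is a minimal C# sketch of that recommendation (the PreviousButton and NextButton names are hypothetical): the paging buttons appear only for pointer types that drive an on-screen cursor, leaving the touch UI uncluttered.

private void ContentArea_PointerEntered(object sender, PointerRoutedEventArgs e)
{
    // Touch has no hover state, so only reveal mouse-style commands
    // for cursor-driven input (mouse, touchpad, pen).
    if (e.Pointer.PointerDeviceType != Windows.Devices.Input.PointerDeviceType.Touch)
    {
        PreviousButton.Visibility = Visibility.Visible;
        NextButton.Visibility = Visibility.Visible;
    }
}
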
The gestures and guidelines discussed in this topic can help to ensure that your app supports touchpad input
seamlessly and with minimal code.
The touchpad language
A concise set of touchpad interactions are used consistently throughout the system. Optimize your app for touch
and mouse input and this language makes your app feel instantly familiar for your users, increasing their
confidence and making your app easier to learn and use.
Users can set far more Precision Touchpad gestures and interaction behaviors than they can for a standard
touchpad. These two images show the different touchpad settings pages from Settings > Devices > Mouse &
touchpad for a standard touchpad and a Precision Touchpad, respectively.

Standard touchpad settings

Windows Precision Touchpad settings

Here are some examples of touchpad-optimized gestures for performing common tasks.

TERM | DESCRIPTION
Three-finger tap | User preference to search with Cortana or show Action Center.
Three finger slide | User preference to open the virtual desktop Task View, show Desktop, or switch between open apps.
Single finger tap for primary action | Use a single finger to tap an element and invoke its primary action (such as launching an app or executing a command).
Two finger tap to right-click | Tap with two fingers simultaneously on an element to select it and display contextual commands.
Two finger slide to pan | Slide is used primarily for panning interactions but can also be used for moving, drawing, or writing.
Pinch and stretch to zoom | The pinch and stretch gestures are commonly used for resizing and Semantic Zoom.
Single finger press and slide to rearrange | Drag an element.
Single finger press and slide to select text | Press within selectable text and slide to select it. Double-tap to select a word.
Left and right click zone | Emulate the left and right button functionality of a mouse device.

Hardware
Query the mouse device capabilities (MouseCapabilities) to identify what aspects of your app UI the touchpad
hardware can access directly. We recommend providing UI for both touch and mouse input.
For more info about querying device capabilities, see Identify input devices.

Visual feedback
When a touchpad cursor is detected (through move or hover events), show mouse-specific UI to indicate
functionality exposed by the element. If the touchpad cursor doesn't move for a certain amount of time, or if the
user initiates a touch interaction, make the touchpad UI gradually fade away. This keeps the UI clean and
uncluttered.
Don't use the cursor for hover feedback; the feedback provided by the element is sufficient (see Cursors below).
Don't display visual feedback if an element doesn't support interaction (such as static text).
Don't use focus rectangles with touchpad interactions. Reserve these for keyboard interactions.
Display visual feedback concurrently for all elements that represent the same input target.
For more general guidance about visual feedback, see Guidelines for visual feedback.

Cursors
A set of standard cursors is available for a touchpad pointer. These are used to indicate the primary action of an
element.
Each standard cursor has a corresponding default image associated with it. The user or an app can replace the
default image associated with any standard cursor at any time. Windows Store apps specify a cursor image through
the PointerCursor function.
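
For instance, a minimal C# sketch of swapping in a standard system cursor through CoreWindow.PointerCursor:

using Windows.UI.Core;

// Show the I-beam cursor (for example, while the pointer is over selectable text).
Window.Current.CoreWindow.PointerCursor =
    new CoreCursor(CoreCursorType.IBeam, 0);
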
If you need to customize the mouse cursor:

Always use the arrow cursor for clickable elements. Don't use the pointing hand cursor for links or other interactive elements. Instead, use hover effects (described earlier).
Use the text cursor for selectable text.
Use the move cursor when moving is the primary action (such as dragging or cropping). Don't use the move cursor for elements where the primary action is navigation (such as Start tiles).
Use the horizontal, vertical, and diagonal resize cursors when an object is resizable.
Use the grasping hand cursors when panning content within a fixed canvas (such as a map).

Related articles
Handle pointer input
Identify input devices
Samples
Basic input sample
Low latency input sample
User interaction mode sample
Focus visuals sample
Archive Samples
Input: Device capabilities sample
Input: XAML user input events sample
XAML scrolling, panning, and zooming sample
Input: Gestures and manipulations with GestureRecognizer
Multiple inputs

Just as people use a combination of voice and gesture when communicating with each other, multiple types and
modes of input can also be useful when interacting with an app.
To accommodate as many users and devices as possible, we recommend that you design your apps to work with as
many input types as possible (gesture, speech, touch, touchpad, mouse, and keyboard). Doing so will maximize
flexibility, usability, and accessibility.
To begin, consider the various scenarios in which your app handles input. Try to be consistent throughout your app,
and remember that the platform controls provide built-in support for multiple input types.
Can users interact with the application through multiple input devices?
Are all input methods supported at all times? With certain controls? At specific times or circumstances?
Does one input method take priority?

Single (or exclusive)-mode interactions


With single-mode interactions, multiple input types are supported, but only one can be used per action. For
example, speech recognition for commands, and gestures for navigation; or, text entry using touch or gestures,
depending on proximity.

Multimodal interactions
With multimodal interactions, multiple input methods in sequence are used to complete a single action.
Speech + gesture
The user points to a product, and then says Add to cart.
Speech + touch
The user selects a photo using press and hold, and then says Send photo.
Optical zoom and resizing

This article describes zooming and resizing elements in Windows, and provides user experience guidelines for using these interaction mechanisms in your apps.

Important APIs
Windows.UI.Input
Input (XAML)

Optical zoom lets users magnify their view of the content within a content area (it is performed on the content area
itself), whereas resizing enables users to change the relative size of one or more objects without changing the view
of the content area (it is performed on the objects within the content area).
Both optical zoom and resizing interactions are performed through the pinch and stretch gestures (moving fingers
farther apart zooms in and moving them closer together zooms out), or by holding the Ctrl key down while
scrolling the mouse scroll wheel, or by holding the Ctrl key down (with the Shift key, if no numeric keypad is
available) and pressing the plus (+) or minus (-) key.
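
If you handle this input yourself rather than relying on a built-in control, a minimal C# sketch of detecting the Ctrl + mouse wheel combination might look like the following (ZoomArea is a hypothetical element name; the zoom math itself is left to the app):

private void ZoomArea_PointerWheelChanged(object sender, PointerRoutedEventArgs e)
{
    if ((e.KeyModifiers & Windows.System.VirtualKeyModifiers.Control) != 0)
    {
        // Positive delta: wheel rolled forward (zoom in); negative: zoom out.
        int delta = e.GetCurrentPoint(ZoomArea).Properties.MouseWheelDelta;
        // Apply the zoom factor change here.
        e.Handled = true; // keep the wheel from also scrolling
    }
}
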
The following diagrams demonstrate the differences between resizing and optical zooming.
Optical zoom: User selects an area, and then zooms into the entire area.

Resize: User selects an object within an area, and resizes that object.

Note
Optical zoom shouldn't be confused with Semantic Zoom. Although the same gestures are used for both
interactions, semantic zoom refers to the presentation and navigation of content organized within a single view
(such as the folder structure of a computer, a library of documents, or a photo album).
Dos and don'ts
Use the following guidelines for apps that support either resizing or optical zooming:
If maximum and minimum size constraints or boundaries are defined, use visual feedback to demonstrate when
the user reaches or exceeds those boundaries.
Use snap points to influence zooming and resizing behavior by providing logical points at which to stop the
manipulation and ensure a specific subset of content is displayed in the viewport. Provide snap points for
common zoom levels or logical views to make it easier for a user to select those levels. For example, photo
apps might provide a resizing snap point at 100% or, in the case of mapping apps, snap points might be
useful at city, state, and country views.
Snap points enable users to be imprecise and still achieve their goals. If you're using XAML, see the snap points properties of ScrollViewer (a sketch follows this list). For JavaScript and HTML, use -ms-content-zoom-snap-points.
There are two types of snap-points:
Proximity - After the contact is lifted, a snap point is selected if inertia stops within a distance threshold of
the snap point. Proximity snap points still allow a zoom or resize to end between snap points.
Mandatory - The snap point selected is the one that immediately precedes or succeeds the last snap point
crossed before the contact was lifted (depending on the direction and velocity of the gesture). A
manipulation must end on a mandatory snap point.
Use inertia physics. These include the following:
Deceleration: Occurs when the user stops pinching or stretching. This is similar to sliding to a stop on a
slippery surface.
Bounce: A slight bounce-back effect occurs when a size constraint or boundary is passed.
Space controls according to the Guidelines for targeting.
Provide scaling handles for constrained resizing. Isometric, or proportional, resizing is the default if the handles
are not specified.
Don't use zooming to navigate the UI or expose additional controls within your app; use a panning region instead. For more info on panning, see Guidelines for panning.
Don't put resizable objects within a resizable content area. Exceptions to this include:
Drawing applications where resizable items can appear on a resizable canvas or art board.
Webpages with an embedded object such as a map.
Note
In all cases, the content area is resized unless all touch points are within the resizable object.
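
As referenced in the snap-point guidance above, here is a minimal C# sketch of configuring zoom snap points on a XAML ScrollViewer (zoomViewer is a hypothetical name for a ScrollViewer with zooming enabled):

// Proximity-style behavior: zooming can still settle between snap points.
// Use SnapPointsType.Mandatory to force the zoom to end on a snap point.
zoomViewer.ZoomSnapPointsType = SnapPointsType.Optional;

// Snap to the 100% and 200% zoom levels.
zoomViewer.ZoomSnapPoints.Clear();
zoomViewer.ZoomSnapPoints.Add(1.0f);
zoomViewer.ZoomSnapPoints.Add(2.0f);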

Related articles
Samples
Basic input sample
Low latency input sample
User interaction mode sample
Focus visuals sample
Archive samples
Input: XAML user input events sample
Input: Device capabilities sample
Input: Touch hit testing sample
XAML scrolling, panning, and zooming sample
Input: Simplified ink sample
Input: Windows 8 gestures sample
Input: Manipulations and gestures (C++) sample
DirectX touch input sample
Guidelines for panning

Panning or scrolling lets users navigate within a single view, to display the content of the view that does not fit
within the viewport. Examples of views include the folder structure of a computer, a library of documents, or a
photo album.

Important APIs
Windows.UI.Input
Windows.UI.Xaml.Input

Dos and don'ts


Panning indicators and scroll bars
Ensure panning/scrolling is possible before loading content into your app.
Display panning indicators and scroll bars to provide location and size cues. Hide them if you provide a
custom navigation feature.
Note Unlike standard scroll bars, panning indicators are purely informative. They are not exposed to input
devices and cannot be manipulated in any way.
Single-axis panning (one-dimensional overflow)
Use one-axis panning for content regions that extend beyond one viewport boundary (vertical or
horizontal).
Vertical panning for a one-dimensional list of items.
Horizontal panning for a grid of items.
Don't use mandatory snap-points with single-axis panning if a user must be able to pan and stop between snap-points. Mandatory snap-points guarantee that the user will stop on a snap-point. Use proximity snap-points instead.
Freeform panning (two-dimensional overflow)
Use two-axis panning for content regions that extend beyond both viewport boundaries (vertical and
horizontal).
Override the default rails behavior and use freeform panning for unstructured content where the user is
likely to move in multiple directions.
Freeform panning is typically suited to navigating within images or maps.
Paged view
Use mandatory snap-points when the content is composed of discrete elements or you want to display an
entire element. This can include pages of a book or magazine, a column of items, or individual images.
A snap-point should be placed at each logical boundary.
Each element should be sized or scaled to fit the view.
Logical and key points
Use proximity snap-points if there are key points or logical places in the content that a user will likely stop.
For example, a section header.
If maximum and minimum size constraints or boundaries are defined, use visual feedback to demonstrate
when the user reaches or exceeds those boundaries.
Chaining embedded or nested content
Use single-axis panning (typically horizontal) and column layouts for text and grid-based content. In these
cases, content typically wraps and flows naturally from column to column and keeps the user experience
consistent and discoverable across Windows Store apps.
Don't use embedded pannable regions to display text or item lists. Because the panning indicators and
scroll bars are displayed only when the input contact is detected within the region, it is not an intuitive or
discoverable user experience.
Don't chain or place one pannable region within another pannable region if they both pan in the same
direction, as shown here. This can result in the parent area being panned unintentionally when a boundary
for the child area is reached. Consider making the panning axis perpendicular.

Additional usage guidance


Panning with touch, by using a swipe or slide gesture with one or more fingers, is like scrolling with the mouse.
The panning interaction is most similar to rotating the mouse wheel or sliding the scroll box, rather than clicking
the scroll bar. Unless a distinction is made in an API or required by some device-specific Windows UI, we simply
refer to both interactions as panning.
Depending on the input device, the user pans within a pannable region by using one of these:
A mouse, touchpad, or active pen/stylus to click the scroll arrows, drag the scroll box, or click within the scroll
bar.
The wheel button of the mouse to emulate dragging the scroll box.
The extended buttons (XBUTTON1 and XBUTTON2), if supported by the mouse.
The keyboard arrow keys to emulate dragging the scroll box or the page keys to emulate clicking within the
scroll bar.
Touch, touchpad, or passive pen/stylus to slide or swipe the fingers in the desired direction.
Sliding involves moving the fingers slowly in the panning direction. This results in a one-to-one relationship,
where the content pans at the same speed and distance as the fingers. Swiping, which involves rapidly sliding and
lifting the fingers, results in the following physics being applied to the panning animation:
Deceleration (inertia): Lifting the fingers causes panning to start decelerating. This is similar to sliding to a stop
on a slippery surface.
Absorption: Panning momentum during deceleration causes a slight bounce-back effect if either a snap point
or a content area boundary is reached.
Types of panning
Windows 8 supports three types of panning:
Single axis - panning is supported in one direction only (horizontal or vertical).
Rails - panning is supported in all directions. However, once the user crosses a distance threshold in a specific
direction, then panning is restricted to that axis.
Freeform - panning is supported in all directions.
Panning UI
The interaction experience for panning is unique to the input device while still providing similar functionality.
Pannable regions
Pannable region behaviors are exposed to developers of Windows Store apps using JavaScript at design time through Cascading Style Sheets (CSS).
There are two panning display modes based on the input device detected:
Panning indicators for touch.
Scroll bars for other input devices, including mouse, touchpad, keyboard, and stylus.
Note Panning indicators are only visible when the touch contact is within the pannable region. Similarly, the scroll
bar is only visible when the mouse cursor, pen/stylus cursor, or keyboard focus is within the scrollable region.
Panning indicators
Panning indicators are similar to the scroll box in a scroll bar. They indicate the proportion of displayed content to total pannable area and the relative position of the displayed content in the pannable area.
The following diagram shows two pannable areas of different lengths and their panning indicators.

Panning behaviors
Snap points
Panning with the swipe gesture introduces inertia behavior into the interaction when the touch contact is lifted. With inertia, the content continues to pan until some distance threshold is reached without direct input from the user. Use snap points to modify this inertia behavior.
Snap points specify logical stops in your app content. Cognitively, snap points act as a paging mechanism for the user and minimize fatigue from excessive sliding or swiping in large pannable regions. With them, you can handle imprecise user input and ensure a specific subset of content or key information is displayed in the viewport.
There are two types of snap-points:
Proximity - After the contact is lifted, a snap point is selected if inertia stops within a distance threshold of the
snap point. Panning can still stop between proximity snap points.
Mandatory - The snap point selected is the one that immediately precedes or succeeds the last snap point
crossed before the contact was lifted (depending on the direction and velocity of the gesture). Panning must
stop on a mandatory snap point.
Panning snap-points are useful for applications such as web browsers and photo albums that emulate paginated
content or have logical groupings of items that can be dynamically regrouped to fit within a viewport or display.
The following diagrams show how panning to a certain point and releasing causes the content to automatically
pan to a logical location.
Swipe to pan; lift the touch contact; the pannable region stops at the snap point, not where the touch contact was lifted.
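
In a XAML app, both snap-point types map onto ScrollViewer properties. Here is a minimal C# sketch (pageViewer is a hypothetical horizontal ScrollViewer whose content panel supplies the snap points, for example a horizontal StackPanel):

// Mandatory: panning always settles on a snap point, one page at a time.
pageViewer.HorizontalSnapPointsType = SnapPointsType.MandatorySingle;
pageViewer.HorizontalSnapPointsAlignment = SnapPointsAlignment.Near;

// Proximity alternative: panning can also stop between snap points.
// pageViewer.HorizontalSnapPointsType = SnapPointsType.Optional;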

Rails
Content can be wider and taller than the dimensions and resolution of a display device. For this reason, two-dimensional panning (horizontal and vertical) is often necessary. Rails improve the user experience in these cases by emphasizing panning along the axis of motion (vertical or horizontal).
The following diagram demonstrates the concept of rails.

Chaining embedded or nested content


After a user hits a zoom or scroll limit on an element that has been nested within another zoomable or scrollable
element, you can specify whether that parent element should continue the zooming or scrolling operation begun
in its child element. This is called zoom or scroll chaining.
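
In XAML, for example, chaining is controlled per axis through ScrollViewer properties. A minimal sketch (childViewer is a hypothetical nested ScrollViewer):

// When the nested viewer hits its vertical limit, let the parent continue panning.
childViewer.IsVerticalScrollChainingEnabled = true;

// Or keep the parent from panning when the child's horizontal boundary is reached.
childViewer.IsHorizontalScrollChainingEnabled = false;
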
Chaining is used for panning within a single-axis content area that contains one or more single-axis or freeform
panning regions (when the touch contact is within one of these child regions). When the panning boundary of the
child region is reached in a specific direction, panning is then activated on the parent region in the same direction.
When a pannable region is nested inside another pannable region it's important to specify enough space between
the container and the embedded content. In the following diagrams, one pannable region is placed inside another
pannable region, each going in perpendicular directions. There is plenty of space for users to pan in each region.
Without enough space, as shown in the following diagram, the embedded pannable region can interfere with
panning in the container and result in unintentional panning in one or more of the pannable regions.

This guidance is also useful for apps such as photo albums or mapping apps that support unconstrained panning
within an individual image or map while also supporting single-axis panning within the album (to the previous or
next images) or details area. In apps that provide a detail or options area corresponding to a freeform panning
image or map, we recommend that the page layout start with the details and options area as the unconstrained
panning area of the image or map might interfere with panning to the details area.

Related articles
Custom user interactions
Optimize ListView and GridView
Keyboard accessibility
Samples
Basic input sample
Low latency input sample
User interaction mode sample
Focus visuals sample
Archive samples
Input: XAML user input events sample
Input: Device capabilities sample
Input: Touch hit testing sample
XAML scrolling, panning, and zooming sample
Input: Simplified ink sample
Input: Windows 8 gestures sample
Input: Manipulations and gestures (C++) sample
DirectX touch input sample
Rotation

This article describes the new Windows UI for rotation and provides user experience guidelines to consider when using this interaction mechanism in your UWP app.

Important APIs
Windows.UI.Input
Windows.UI.Xaml.Input

Dos and don'ts


Use rotation to help users directly rotate UI elements.

Additional usage guidance


Overview of rotation
Rotation is the touch-optimized technique used by UWP apps to enable users to turn an object in a circular
direction (clockwise or counter-clockwise).
Depending on the input device, the rotation interaction is performed using:
A mouse or active pen/stylus to move the rotation gripper of a selected object.
Touch or passive pen/stylus to turn the object in the desired direction using the rotate gesture.
When to use rotation
Use rotation to help users directly rotate UI elements. The following diagrams show some of the supported finger
positions for the rotation interaction.

Note
Intuitively, and in most cases, the rotation point is one of the two touch points unless the user can specify a rotation
point unrelated to the contact points (for example, in a drawing or layout application). The following images
demonstrate how the user experience can be degraded if the rotation point is not constrained in this way.
This first picture shows the initial (thumb) and secondary (index finger) touch points: the index finger is touching a
tree and the thumb is touching a log.
In this second picture, rotation is performed
around the initial (thumb) touch point. After the rotation, the index finger is still touching the tree trunk and the
thumb is still touching the log (the rotation point).

In this third picture, the center of rotation has


been defined by the application (or set by the user) to be the center point of the picture. After the rotation, because
the picture did not rotate around one of the fingers, the illusion of direct manipulation is broken (unless the user
has chosen this setting).

In this last picture, the center of rotation has


been defined by the application (or set by the user) to be a point in the middle of the left edge of the picture. Again,
unless the user has chosen this setting, the illusion of direct manipulation is broken in this case.
Windows 8 supports three types of rotation: free, constrained, and combined.

TYPE | DESCRIPTION
Free rotation | Free rotation enables a user to rotate content freely anywhere in a 360 degree arc. When the user releases the object, the object remains in the chosen position. Free rotation is useful for drawing and layout applications such as Microsoft PowerPoint, Word, Visio, and Paint; and Adobe Photoshop, Illustrator, and Flash.
Constrained rotation | Constrained rotation supports free rotation during the manipulation but enforces snap points at 90 degree increments (0, 90, 180, and 270) upon release. When the user releases the object, the object automatically rotates to the nearest snap point. Constrained rotation is the most common method of rotation, and it functions in a similar way to scrolling content. Snap points let a user be imprecise and still achieve their goal. Constrained rotation is useful for applications such as web browsers and photo albums.
Combined rotation | Combined rotation supports free rotation with zones (similar to rails in Guidelines for panning) at each of the 90 degree snap points enforced by constrained rotation. If the user releases the object outside of one of the 90 degree zones, the object remains in that position; otherwise, the object automatically rotates to a snap point.

Note A user interface rail is a feature in which an area around a target constrains movement towards some specific value or location to influence its selection.

Related topics
Samples
Basic input sample
Low latency input sample
User interaction mode sample
Focus visuals sample
Archive samples
Input: XAML user input events sample
Input: Device capabilities sample
Input: Touch hit testing sample
XAML scrolling, panning, and zooming sample
Input: Simplified ink sample
Input: Gestures and manipulations with GestureRecognizer
Input: Manipulations and gestures (C++) sample
DirectX touch input sample
Selecting text and images

This article describes selecting and manipulating text, images, and controls and provides user experience guidelines
that should be considered when using these mechanisms in your apps.

Important APIs
Windows.UI.Xaml.Input
Windows.UI.Input

Dos and don'ts


Use font glyphs when implementing your own gripper UI. The gripper is a combination of two Segoe UI
fonts that are available system-wide. Using font resources simplifies rendering issues at different dpi and
works well with the various UI scaling plateaus. When implementing your own grippers, they should share
the following UI traits:
Circular shape
Visible against any background
Consistent size
Provide a margin around the selectable content to accommodate the gripper UI. If your app enables text
selection in a region that doesn't pan/scroll, allow a 1/2 gripper margin on the left and right sides of the text
area and 1 gripper height on the top and bottom sides of the text area (as shown in the following images).
This ensures that the entire gripper UI is exposed to the user and minimizes unintended interactions with
other edge-based UI.

Hide gripper UI during the interaction. This eliminates occlusion by the grippers during the interaction, which is useful when a gripper isn't completely obscured by the finger or when there are multiple text selection grippers, and it eliminates visual artifacts when displaying child windows.
Don't allow selection of UI elements such as controls, labels, images, proprietary content, and so on.
Typically, Windows applications allow selection only within specific controls. Controls such as buttons,
labels, and logos are not selectable. Assess whether selection is an issue for your app and, if so, identify the
areas of the UI where selection should be prohibited.

Additional usage guidance


Text selection and manipulation is particularly susceptible to user experience challenges introduced by touch
interactions. Mouse, pen/stylus, and keyboard input are highly granular: a mouse click or pen/stylus contact is
typically mapped to a single pixel, and a key is pressed or not pressed. Touch input is not granular; it's difficult to
map the entire surface of a fingertip to a specific x-y location on the screen to place a text caret accurately.
Considerations and recommendations
Use the built-in controls exposed through the language frameworks in Windows to build apps that provide the full
platform user interaction experience, including selection and manipulation behaviors. You'll find the interaction
functionality of the built-in controls sufficient for the majority of UWP apps.
When using standard UWP text controls, the selection behaviors and visuals described in this topic cannot be
customized.
Text selection
If your app requires a custom UI that supports text selection, we recommend that you follow the Windows selection
behaviors described here.
Editable and non-editable content
With touch, selection interactions are performed primarily through gestures such as a tap to set an insertion cursor
or select a word, and a slide to modify a selection. As with other Windows touch interactions, timed interactions are
limited to the press and hold gesture to display informational UI. For more information, see Guidelines for visual
feedback.
Windows recognizes two possible states for selection interactions, editable and non-editable, and adjusts selection
UI, feedback, and functionality accordingly.
Editable content
Tapping within the left half of a word places the cursor to the immediate left of the word, while tapping within the
right half places the cursor to the immediate right of the word.
The following image demonstrates how to place an initial insertion cursor with gripper by tapping near the
beginning or ending of a word.

The following image demonstrates how to adjust a selection by dragging the gripper.
The following images demonstrate how to invoke the context menu by tapping within the selection or on a gripper
(press and hold can also be used).

Note These interactions vary somewhat in the case of a misspelled word. Tapping a word that is marked as
misspelled will both highlight the entire word and invoke the suggested spelling context menu.
Non-editable content
The following image demonstrates how to select a word by tapping within the word (no spaces are included in the
initial selection).
Follow the same procedures as for editable text to adjust the selection and display the context menu.
Object manipulation
Wherever possible, use the same (or similar) gripper resources as text selection when implementing custom object
manipulation in a UWP app. This helps provide a consistent interaction experience across the platform.
For example, grippers can also be used in image processing apps that support resizing and cropping or media
player apps that provide adjustable progress bars, as shown in the following images.

Media player with adjustable progress bar.

Image editor with cropping grippers.

Related articles
For developers
Custom user interactions
Samples
Basic input sample
Low latency input sample
User interaction mode sample
Focus visuals sample
Archive samples
Input: XAML user input events sample
Input: Device capabilities sample
Input: Touch hit testing sample
XAML scrolling, panning, and zooming sample
Input: Simplified ink sample
Input: Windows 8 gestures sample
Input: Manipulations and gestures (C++) sample
DirectX touch input sample
Guidelines for targeting

Touch targeting in Windows uses the full contact area of each finger that is detected by a touch digitizer. The larger,
more complex set of input data reported by the digitizer is used to increase precision when determining the user's
intended (or most likely) target.

Important APIs
Windows.UI.Core
Windows.UI.Input
Windows.UI.Xaml.Input

This topic describes the use of contact geometry for touch targeting and provides best practices for targeting in
UWP apps.

Measurements and scaling


To remain consistent across different screen sizes and pixel densities, all target sizes are represented in physical
units (millimeters). Physical units can be converted to pixels by using the following equation:
Pixels = Pixel Density × Measurement
The following example uses this formula to calculate the pixel size of a 9 mm target on a 135 pixel per inch (PPI) display at a 1x scaling plateau:
Pixels = 135 PPI × 9 mm
Pixels = 135 PPI × (0.03937 inches per mm × 9 mm)
Pixels = 135 PPI × 0.35433 inches
Pixels = 48 pixels
This result must be adjusted according to each scaling plateau defined by the system.
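
Expressed as code, the conversion above might look like this small C# helper (illustrative only; a real app would also apply the system scale factor, for example from DisplayInformation):

// Converts a physical size in millimeters to raw pixels for a given
// pixel density (1 mm = 0.03937 inches).
double MillimetersToPixels(double millimeters, double pixelsPerInch)
{
    return pixelsPerInch * (0.03937 * millimeters);
}

// MillimetersToPixels(9, 135) returns approximately 48 pixels.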

Thresholds
Distance and time thresholds may be used to determine the outcome of an interaction.
For example, when a touch-down is detected, a tap is registered if the object is dragged less than 2.7 mm from the
touch-down point and the touch is lifted within 0.1 second or less of the touch-down. Moving the finger beyond
this 2.7 mm threshold results in the object being dragged and either selected or moved (for more information, see
Guidelines for cross-slide). Depending on your app, holding the finger down for longer than 0.1 second may cause
the system to perform a self-revealing interaction (for more information, see Guidelines for visual feedback).
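
To make those thresholds concrete, here is an illustrative C# sketch of the tap test described above. A shipping app should rely on the built-in Tapped event or GestureRecognizer rather than re-deriving this, and the dipsPerMm value is an assumption you would compute from the display's DPI.

Point pressPosition;
ulong pressTimestamp; // microseconds

private void Element_PointerPressed(object sender, PointerRoutedEventArgs e)
{
    PointerPoint point = e.GetCurrentPoint(null);
    pressPosition = point.Position;
    pressTimestamp = point.Timestamp;
}

private void Element_PointerReleased(object sender, PointerRoutedEventArgs e)
{
    PointerPoint point = e.GetCurrentPoint(null);
    double dx = point.Position.X - pressPosition.X;
    double dy = point.Position.Y - pressPosition.Y;
    double movedMm = Math.Sqrt(dx * dx + dy * dy) / dipsPerMm; // dipsPerMm: assumed DPI-derived constant
    double elapsedMs = (point.Timestamp - pressTimestamp) / 1000.0;

    // Tap: moved less than 2.7 mm and lifted within 0.1 second.
    bool isTap = movedMm < 2.7 && elapsedMs <= 100;
}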

Target sizes
In general, set your touch target size to 9 mm square or greater (48x48 pixels on a 135 PPI display at a 1.0x scaling
plateau). Avoid using touch targets that are less than 7 mm square.
The following diagram shows how target size is typically a combination of a visual target, actual target size, and
any padding between the actual target and other potential targets.
The following table lists the minimum and recommended sizes for the components of a touch target.

TARGET COMPONENT | MINIMUM SIZE | RECOMMENDED SIZE
Padding | 2 mm | Not applicable.
Visual target size | Less than 60% of actual size | 90-100% of actual size. (Most users won't realize a visual target is touchable if it's less than 4.2 mm square, 60% of the recommended minimum target size of 7 mm.)
Actual target size | 7 mm square | Greater than or equal to 9 mm square (48 x 48 px @ 1x).
Total target size | 11 x 11 mm (approximately 60 px: three 20-px grid units @ 1x) | 13.5 x 13.5 mm (72 x 72 px @ 1x). (This implies that the size of the actual target and padding combined should be larger than their respective minimums.)

These target size recommendations can be adjusted as required by your particular scenario. Some of the
considerations that went into these recommendations include:
Frequency of Touches: Consider making targets that are repeatedly or frequently pressed larger than the
minimum size.
Error Consequence: Targets that have severe consequences if touched in error should have greater padding and
be placed further from the edge of the content area. This is especially true for targets that are touched
frequently.
Position in the content area
Form factor and screen size
Finger posture
Touch visualizations
Hardware and touch digitizers

Targeting assistance
Windows provides targeting assistance to support scenarios where the minimum size or padding
recommendations presented here are not applicable; for example, hyperlinks on a webpage, calendar controls,
drop down lists and combo boxes, or text selection.
These targeting platform improvements and user interface behaviors work together with visual feedback
(disambiguation UI) to improve user accuracy and confidence. For more information, see Guidelines for visual
feedback.
If a touchable element must be smaller than the recommended minimum target size, the following techniques can
be used to minimize the targeting issues that result.

Tethering
Tethering is a visual cue (a connector from a contact point to the bounding rectangle of an object) used to indicate
to a user that they are connected to, and interacting with, an object even though the input contact isn't directly in
contact with the object. This can occur when:
A touch contact was first detected within some proximity threshold to an object and this object was identified as
the most likely target of the contact.
A touch contact was moved off an object but the contact is still within a proximity threshold.
This feature is not exposed to developers of Windows Store apps using JavaScript.

Scrubbing
Scrubbing means to touch anywhere within a field of targets and slide to select the desired target without lifting
the finger until it is over the desired target. This is also referred to as "take-off activation", where the object that is
activated is the one that was last touched when the finger was lifted from the screen.
Use the following guidelines when you design scrubbing interactions:
Scrubbing is used in conjunction with disambiguation UI. For more information, see Guidelines for visual
feedback.
The recommended minimum size for a scrubbing touch target is 20 px (3.75 mm @ 1x size).
Scrubbing takes precedence when performed on a pannable surface, such as a webpage.
Scrubbing targets should be close together.
An action is canceled when the user drags a finger off a scrubbing target.
Tethering to a scrubbing target is specified if the actions performed by the target are non-destructive, such as
switching between dates on a calendar.
Tethering is specified in a single direction, horizontally or vertically.

Related articles
Samples
Basic input sample
Low latency input sample
User interaction mode sample
Focus visuals sample
Archive samples
Input: XAML user input events sample
Input: Device capabilities sample
Input: Touch hit testing sample
XAML scrolling, panning, and zooming sample
Input: Simplified ink sample
Input: Windows 8 gestures sample
Input: Manipulations and gestures (C++) sample
DirectX touch input sample
Guidelines for visual feedback

Use visual feedback to show users when their interactions are detected, interpreted, and handled. Visual feedback
can help users by encouraging interaction. It indicates the success of an interaction, which improves the user's
sense of control. It also relays system status and reduces errors.

Important APIs
Windows.Devices.Input
Windows.UI.Input
Windows.UI.Core

Recommendations
Try to remain as close to the original control template as possible, for optimal control and application
performance.
Don't use touch visualizations in situations where they might interfere with the use of the app. For more info,
see ShowGestureFeedback.
Don't display feedback unless it is absolutely necessary. Keep the UI clean and uncluttered by not showing
visual feedback unless you are adding value that is not available elsewhere.
Try not to dramatically customize the visual feedback behaviors of the built-in Windows gestures, as this can
create an inconsistent and confusing user experience.

Additional usage guidance


Contact visualizations are especially critical for touch interactions that require accuracy and precision. For
example, your app should clearly indicate the location of a tap to let a user know if they missed their target, how
much they missed it by, and what adjustments they must make.
Using the default XAML platform controls available ensures that your app works correctly on all devices and in all
input situations. If your app features custom interactions that require customized feedback, you should ensure the
feedback is appropriate, spans input devices, and doesn't distract a user from their task. This can be a particular
issue in game or drawing apps, where the visual feedback might conflict with or obscure critical UI.
Important We don't recommend changing the interaction behavior of the built-in gestures.
Feedback Across Devices
Visual feedback is generally dependent on the input device (touch, touchpad, mouse, pen/stylus, keyboard, and so
on). For example, the built-in feedback for a mouse usually involves moving and changing the cursor, while touch
and pen require contact visualizations, and keyboard input and navigation uses focus rectangles and highlighting.
Use ShowGestureFeedback to set feedback behavior for the platform gestures.
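
A minimal C# sketch of turning the built-in gesture visualization off on a Windows.UI.Input.GestureRecognizer (only do this when the feedback would interfere with your app, as noted above):

using Windows.UI.Input;

GestureRecognizer recognizer = new GestureRecognizer();
recognizer.GestureSettings = GestureSettings.Tap | GestureSettings.Hold;
recognizer.ShowGestureFeedback = false; // suppress the built-in visualization
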
If customizing feedback UI, ensure you provide feedback that supports, and is suitable for, all input modes.
Here are some examples of built-in contact visualizations in Windows.
Touch visualization | Mouse/touchpad visualization | Pen visualization | Keyboard visualization

High Visibility Focus Visuals


All Windows apps display a more defined focus visual around interactable controls within the application. These focus visuals are fully customizable, and can be disabled when needed.

Color Branding & Customizing


Border Properties
There are two parts to the high visibility focus visuals: the primary border and the secondary border. The primary
border is 2px thick, and runs around the outside of the secondary border. The secondary border is 1px thick and
runs around the inside of the primary border.

To change the thickness of either border type (primary or secondary) use the FocusVisualPrimaryThickness or
FocusVisualSecondaryThickness, respectively:

<Slider Width="200" FocusVisualPrimaryThickness="5" FocusVisualSecondaryThickness="2"/>

The margin is the space between the control's visual bounds and the start of the focus visual's secondary border. The default margin is 1px away from the control bounds. Because the margin is a property of type Thickness, it can be customized to appear only on certain sides of the control. You can edit this margin on a per-control basis by changing the FocusVisualMargin property:

<Slider Width="200" FocusVisualMargin="-5"/>

A negative margin will push the border away from the center of the control, and a positive margin will move the
border closer to the center of the control.
To turn off focus visuals on the control entirely, simply disable UseSystemFocusVisuals:

<Slider Width="200" UseSystemFocusVisuals="False"/>

The thickness, the margin, and whether the app displays focus visuals at all are each determined on a per-control basis.
Color Properties
There are only two color properties for the focus visuals: the primary border color and the secondary border color. These focus visual border colors can be changed per control at the page level, or globally at the app-wide level:
To brand focus visuals app-wide, override the system brushes:

<SolidColorBrush x:Key="SystemControlFocusVisualPrimaryBrush" Color="DarkRed"/>


<SolidColorBrush x:Key="SystemControlFocusVisualSecondaryBrush" Color="Pink"/>

To change the colors on a per-control basis, just edit the focus visual properties on the desired control:

<Slider Width="200" FocusVisualPrimaryBrush="DarkRed" FocusVisualSecondaryBrush="Pink"/>


Related articles
For designers
Guidelines for panning
For developers
Custom user interactions
Samples
Basic input sample
Low latency input sample
User interaction mode sample
Focus visuals sample
Archive samples
Input: XAML user input events sample
Input: Device capabilities sample
Input: Touch hit testing sample
XAML scrolling, panning, and zooming sample
Input: Simplified ink sample
Input: Windows 8 gestures sample
Input: Manipulations and gestures (C++) sample
DirectX touch input sample
Identify input devices

Identify the input devices connected to a Universal Windows Platform (UWP) device and identify their capabilities
and attributes.

Important APIs
Windows.Devices.Input
Windows.UI.Input
Windows.UI.Xaml.Input

Retrieve mouse properties


The Windows.Devices.Input namespace contains the MouseCapabilities class used to retrieve the properties
exposed by one or more connected mice. Just create a new MouseCapabilities object and get the properties
you're interested in.
Note The values returned by the properties discussed here are based on all detected mice: Boolean properties
return non-zero if at least one mouse supports a specific capability, and numeric properties return the maximum
value exposed by any one mouse.
The following code uses a series of TextBlock elements to display the individual mouse properties and values.

private void GetMouseProperties()
{
MouseCapabilities mouseCapabilities = new Windows.Devices.Input.MouseCapabilities();
MousePresent.Text = mouseCapabilities.MousePresent != 0 ? "Yes" : "No";
VertWheel.Text = mouseCapabilities.VerticalWheelPresent != 0 ? "Yes" : "No";
HorzWheel.Text = mouseCapabilities.HorizontalWheelPresent != 0 ? "Yes" : "No";
SwappedButtons.Text = mouseCapabilities.SwapButtons != 0 ? "Yes" : "No";
NumButtons.Text = mouseCapabilities.NumberOfButtons.ToString();
}

Retrieve keyboard properties


The Windows.Devices.Input namespace contains the KeyboardCapabilities class used to retrieve whether a
keyboard is connected. Just create a new KeyboardCapabilities object and get the KeyboardPresent property.
The following code uses a TextBlock element to display the keyboard property and value.

private void GetKeyboardProperties()
{
KeyboardCapabilities keyboardCapabilities = new Windows.Devices.Input.KeyboardCapabilities();
KeyboardPresent.Text = keyboardCapabilities.KeyboardPresent != 0 ? "Yes" : "No";
}

Retrieve touch properties


The Windows.Devices.Input namespace contains the TouchCapabilities class used to retrieve whether any
touch digitizers are connected. Just create a new TouchCapabilities object and get the properties you're
interested in.
Note The values returned by the properties discussed here are based on all detected touch digitizers: Boolean
properties return non-zero if at least one digitizer supports a specific capability, and numeric properties return the
maximum value exposed by any one digitizer.
The following code uses a series of TextBlock elements to display the touch properties and values.

private void GetTouchProperties()
{
    TouchCapabilities touchCapabilities = new Windows.Devices.Input.TouchCapabilities();
    TouchPresent.Text = touchCapabilities.TouchPresent != 0 ? "Yes" : "No";
    Contacts.Text = touchCapabilities.Contacts.ToString();
}

Retrieve pointer properties


The Windows.Devices.Input namespace contains the PointerDevice class used to retrieve whether any
detected devices support pointer input (touch, touchpad, mouse, or pen). Just create a new PointerDevice object
and get the properties you're interested in.
Note The values returned by the properties discussed here are based on all detected pointer devices: Boolean
properties return non-zero if at least one device supports a specific capability, and numeric properties return the
maximum value exposed by any one pointer device.
The following code uses a table to display the properties and values for each pointer device.

private void GetPointerDevices()
{
    IReadOnlyList<PointerDevice> pointerDevices = Windows.Devices.Input.PointerDevice.GetPointerDevices();
    int gridRow = 0;
    int gridColumn = 0;

    for (int i = 0; i < pointerDevices.Count; i++)
    {
        // Pointer device type.
        TextBlock textBlock1 = new TextBlock();
        Grid_PointerProps.Children.Add(textBlock1);
        textBlock1.Text = (i + 1).ToString() + " Pointer Device Type:";
        Grid.SetRow(textBlock1, gridRow);
        Grid.SetColumn(textBlock1, gridColumn);

        TextBlock textBlock2 = new TextBlock();
        textBlock2.Text = pointerDevices[i].PointerDeviceType.ToString();
        Grid_PointerProps.Children.Add(textBlock2);
        Grid.SetRow(textBlock2, gridRow++);
        Grid.SetColumn(textBlock2, gridColumn + 1);

        // Is external?
        TextBlock textBlock3 = new TextBlock();
        Grid_PointerProps.Children.Add(textBlock3);
        textBlock3.Text = (i + 1).ToString() + " Is External?";
        Grid.SetRow(textBlock3, gridRow);
        Grid.SetColumn(textBlock3, gridColumn);

        TextBlock textBlock4 = new TextBlock();
        Grid_PointerProps.Children.Add(textBlock4);
        textBlock4.Text = pointerDevices[i].IsIntegrated.ToString();
        Grid.SetRow(textBlock4, gridRow++);
        Grid.SetColumn(textBlock4, gridColumn + 1);

        // Maximum contacts.
        TextBlock textBlock5 = new TextBlock();
        Grid_PointerProps.Children.Add(textBlock5);
        textBlock5.Text = (i + 1).ToString() + " Max Contacts:";
        Grid.SetRow(textBlock5, gridRow);
        Grid.SetColumn(textBlock5, gridColumn);

        TextBlock textBlock6 = new TextBlock();
        Grid_PointerProps.Children.Add(textBlock6);
        textBlock6.Text = pointerDevices[i].MaxContacts.ToString();
        Grid.SetRow(textBlock6, gridRow++);
        Grid.SetColumn(textBlock6, gridColumn + 1);

        // Physical device rectangle.
        TextBlock textBlock7 = new TextBlock();
        Grid_PointerProps.Children.Add(textBlock7);
        textBlock7.Text = (i + 1).ToString() + " Physical Device Rect:";
        Grid.SetRow(textBlock7, gridRow);
        Grid.SetColumn(textBlock7, gridColumn);

        TextBlock textBlock8 = new TextBlock();
        Grid_PointerProps.Children.Add(textBlock8);
        textBlock8.Text = pointerDevices[i].PhysicalDeviceRect.X.ToString() + "," +
            pointerDevices[i].PhysicalDeviceRect.Y.ToString() + "," +
            pointerDevices[i].PhysicalDeviceRect.Width.ToString() + "," +
            pointerDevices[i].PhysicalDeviceRect.Height.ToString();
        Grid.SetRow(textBlock8, gridRow++);
        Grid.SetColumn(textBlock8, gridColumn + 1);

        // Screen rectangle.
        TextBlock textBlock9 = new TextBlock();
        Grid_PointerProps.Children.Add(textBlock9);
        textBlock9.Text = (i + 1).ToString() + " Screen Rect:";
        Grid.SetRow(textBlock9, gridRow);
        Grid.SetColumn(textBlock9, gridColumn);

        TextBlock textBlock10 = new TextBlock();
        Grid_PointerProps.Children.Add(textBlock10);
        textBlock10.Text = pointerDevices[i].ScreenRect.X.ToString() + "," +
            pointerDevices[i].ScreenRect.Y.ToString() + "," +
            pointerDevices[i].ScreenRect.Width.ToString() + "," +
            pointerDevices[i].ScreenRect.Height.ToString();
        Grid.SetRow(textBlock10, gridRow++);
        Grid.SetColumn(textBlock10, gridColumn + 1);

        gridColumn += 2;
        gridRow = 0;
    }
}

Related articles
Samples
Basic input sample
Low latency input sample
User interaction mode sample
Archive samples
Input: Device capabilities sample
Custom text input
3/6/2017 · 6 min to read

The core text APIs in the Windows.UI.Text.Core namespace enable a Universal Windows Platform (UWP) app to
receive text input from any text service supported on Windows devices. The APIs are similar to the Text Services
Framework APIs in that the app is not required to have detailed knowledge of the text services. This enables the app
to receive text in any language and from any input type, like keyboard, speech, or pen.

Important APIs
Windows.UI.Text.Core
CoreTextEditContext

Why use core text APIs?


For many apps, the XAML or HTML text box controls are sufficient for text input and editing. However, if your app
handles complex text scenarios, like a word processing app, you might need the flexibility of a custom text edit
control. You could use the CoreWindow keyboard APIs to create your text edit control, but these don't provide a
way to receive composition-based text input, which is required to support East Asian languages.
Instead, use the Windows.UI.Text.Core APIs when you need to create a custom text edit control. These APIs are
designed to give you a lot of flexibility in processing text input, in any language, and let you provide the text
experience best suited to your app. Text input and edit controls built with the core text APIs can receive text input
from all existing text input methods on Windows devices, from Text Services Framework based Input Method
Editors (IMEs) and handwriting on PCs to the WordFlow keyboard (which provides auto-correction, prediction, and
dictation) on mobile devices.

Architecture
The following is a simple representation of the text input system.
"Application" represents a UWP app hosting a custom edit control built using the core text APIs.
The Windows.UI.Text.Core APIs facilitate the communication with text services through Windows.
Communication between the text edit control and the text services is handled primarily through a
CoreTextEditContext object that provides the methods and events to facilitate the communication.
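In code, that setup might look like the following sketch. The handler names here are hypothetical; sketches for them appear in later sections of this topic.

// A minimal sketch: wiring a custom edit control to the text input system.
// Requires the Windows.UI.Text.Core namespace.
CoreTextServicesManager manager = CoreTextServicesManager.GetForCurrentView();
CoreTextEditContext editContext = manager.CreateEditContext();
editContext.TextUpdating += EditContext_TextUpdating;   // text coming in from text services
editContext.TextRequested += EditContext_TextRequested; // text services asking for the control's text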
Text ranges and selection
Edit controls provide space for text entry and users expect to edit text anywhere in this space. Here, we explain the
text positioning system used by the core text APIs and how ranges and selections are represented in this system.
Application caret position
Text ranges used with the core text APIs are expressed in terms of caret positions. An "Application Caret Position
(ACP)" is a zero-based number that indicates the count of characters from the start of the text stream immediately
before the caret, as shown here.

Text ranges and selection


Text ranges and selections are represented by the CoreTextRange structure which contains two fields:

FIELD                DATA TYPE
StartCaretPosition   Number [JavaScript], System.Int32 [.NET]
EndCaretPosition     Number [JavaScript], System.Int32 [.NET]

For example, in the text range shown previously, the range [0, 5] specifies the word "Hello". StartCaretPosition
must always be less than or equal to the EndCaretPosition. The range [5, 0] is invalid.
Insertion point
The current caret position, frequently referred to as the insertion point, is represented by setting the
StartCaretPosition to be equal to the EndCaretPosition.
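For illustration, here is a small sketch of both concepts in C#, using the "Hello" range [0, 5] from the example above:

// The word "Hello" as a text range, and an insertion point immediately after it.
CoreTextRange wordRange;
wordRange.StartCaretPosition = 0; // the ACP before 'H'
wordRange.EndCaretPosition = 5;   // the ACP after the final 'o'

CoreTextRange insertionPoint;
insertionPoint.StartCaretPosition = 5;
insertionPoint.EndCaretPosition = 5; // start == end represents a caret with no selection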
Noncontiguous selection
Some edit controls support noncontiguous selections. For example, Microsoft Office apps support multiple
arbitrary selections, and many source code editors support column selection. However, the core text APIs do not
support noncontiguous selections. Edit controls must report only a single contiguous selection, most often the
active sub-range of the noncontiguous selections.
For example, consider this text stream, which contains two selections: [0, 1] and [6, 11]. The edit control must
report only one of them; either [0, 1] or [6, 11].

Working with text


The CoreTextEditContext class enables text flow between Windows and edit controls through the TextUpdating
event, the TextRequested event, and the NotifyTextChanged method.
Your edit control receives text through TextUpdating events that are generated when users interact with text input
methods like keyboards, speech, or IMEs.
When you change text in your edit control, for example, by pasting text into the control, you need to notify
Windows by calling NotifyTextChanged.
If the text service requires the new text, then a TextRequested event is raised. You must provide the new text in the
TextRequested event handler.
Accepting text updates
Your edit control should typically accept text update requests because they represent the text the user wants to
enter. In the TextUpdating event handler, these actions are expected of your edit control:
1. Insert the text specified in CoreTextTextUpdatingEventArgs.Text in the position specified in
CoreTextTextUpdatingEventArgs.Range.
2. Place selection at the position specified in CoreTextTextUpdatingEventArgs.NewSelection.
3. Notify the system that the update succeeded by setting CoreTextTextUpdatingEventArgs.Result to
CoreTextTextUpdatingResult.Succeeded.
For example, this is the state of an edit control before the user types "d". The insertion point is at [10, 10].

When the user types "d", a TextUpdating event is raised with the following CoreTextTextUpdatingEventArgs data:
Range = [10, 10]
Text = "d"
NewSelection = [11, 11]
In your edit control, apply the specified changes and set Result to Succeeded. Here's the state of the control after
the changes are applied.
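Sketched as code, a TextUpdating handler along these lines might look like the following. The _text (StringBuilder) and _selection (CoreTextRange) fields are hypothetical stand-ins for wherever your control stores its text and selection.

private void EditContext_TextUpdating(CoreTextEditContext sender, CoreTextTextUpdatingEventArgs args)
{
    CoreTextRange range = args.Range;

    // 1. Insert the specified text over the requested range.
    _text.Remove(range.StartCaretPosition, range.EndCaretPosition - range.StartCaretPosition);
    _text.Insert(range.StartCaretPosition, args.Text);

    // 2. Place the selection at the position the system requested.
    _selection = args.NewSelection;

    // 3. Notify the system that the update succeeded.
    args.Result = CoreTextTextUpdatingResult.Succeeded;
}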

Rejecting text updates


Sometimes, you cannot apply text updates because the requested range is in an area of the edit control that should
not be changed. In this case, you should not apply any changes. Instead, notify the system that the update failed by
setting CoreTextTextUpdatingEventArgs.Result to CoreTextTextUpdatingResult.Failed.
For example, consider an edit control that accepts only an e-mail address. Spaces should be rejected because e-mail
addresses cannot contain spaces, so when TextUpdating events are raised for the space key, you should simply
set Result to Failed in your edit control.
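A sketch of that rejection logic, reusing the hypothetical handler shape from the previous section:

private void EditContext_TextUpdating(CoreTextEditContext sender, CoreTextTextUpdatingEventArgs args)
{
    if (args.Text.Contains(" "))
    {
        // Leave the control's text untouched and report the failure.
        args.Result = CoreTextTextUpdatingResult.Failed;
        return;
    }
    // Otherwise, apply the update as shown earlier.
}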
Notifying text changes
Sometimes, your edit control makes changes to text such as when text is pasted or auto-corrected. In these cases,
you must notify the text services of these changes by calling the NotifyTextChanged method.
For example, this is the state of an edit control before the user pastes "World". The insertion point is at [6, 6].

The user performs the paste action and the edit control ends up with the following text:

When this happens, you should call NotifyTextChanged with these arguments:
modifiedRange = [6, 6]
newLength = 5
newSelection = [11, 11]
One or more TextRequested events will follow, which you handle to update the text that the text services are
working with.
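As a sketch, that call might look like this, where editContext is the control's CoreTextEditContext:

// Report the paste: "World" (5 characters) replaced the empty range [6, 6].
CoreTextRange modifiedRange;
modifiedRange.StartCaretPosition = 6;
modifiedRange.EndCaretPosition = 6;

CoreTextRange newSelection;
newSelection.StartCaretPosition = 11;
newSelection.EndCaretPosition = 11;

editContext.NotifyTextChanged(modifiedRange, 5, newSelection);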
Overriding text updates
In your edit control, you might want to override a text update to provide auto-correction features.
For example, consider an edit control that provides a correction feature that formalizes contractions. This is the
state of the edit control before the user types the space key to trigger the correction. The insertion point is at [3, 3].

The user presses the space key and a corresponding TextUpdating event is
raised. The edit control accepts the text update. This is the state of the edit control for a brief moment before the
correction is completed. The insertion point is at [4, 4].

Outside of the TextUpdating event handler, the edit control makes the
following correction. This is the state of the edit control after the correction is complete. The insertion point is at [5,
5].

When this happens, you should call NotifyTextChanged with these arguments:
modifiedRange = [1, 2]
newLength = 2
newSelection = [5, 5]
One or more TextRequested events will follow, which you handle to update the text that the text services are
working with.
Providing requested text
It's important for text services to have the correct text to provide features like auto-correction or prediction,
especially for text that already existed in the edit control, from loading a document, for example, or text that is
inserted by the edit control as explained in previous sections. Therefore, whenever a TextRequested event is
raised, you must provide the text currently in your edit control for the specified range.
There will be times when the Range in CoreTextTextRequest specifies a range that your edit control cannot
accommodate as-is. For example, the Range is larger than the size of the edit control at the time of the
TextRequested event, or the end of the Range is out of bounds. In these cases, you should return whatever range
makes sense, which is typically a subset of the requested range.
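A minimal sketch of such a handler, clamping the requested range to the control's text (again assuming a hypothetical _text StringBuilder):

private void EditContext_TextRequested(CoreTextEditContext sender, CoreTextTextRequestedEventArgs args)
{
    CoreTextTextRequest request = args.Request;
    // Clamp the requested range to the bounds of the text we actually have.
    int start = Math.Min(request.Range.StartCaretPosition, _text.Length);
    int end = Math.Min(request.Range.EndCaretPosition, _text.Length);
    request.Text = _text.ToString(start, end - start);
}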

Related articles
Samples
Custom Edit Control sample
Archive samples
XAML text editing sample
Handle pointer input
3/6/2017 · 22 min to read

Receive, process, and manage input data from pointing devices, such as touch, mouse, pen/stylus, and touchpad,
in Universal Windows Platform (UWP) apps.

Important APIs
Windows.Devices.Input
Windows.UI.Input
Windows.UI.Xaml.Input

Important
If you implement your own interaction support, keep in mind that users expect an intuitive experience involving
direct interaction with the UI elements in your app. We recommend that you model your custom interactions on
the Controls list to keep things consistent and discoverable. The platform controls provide the full Universal
Windows Platform (UWP) user interaction experience, including standard interactions, animated physics effects,
visual feedback, and accessibility. Create custom interactions only if there is a clear, well-defined requirement and
basic interactions don't support your scenario.

Pointers
Many interaction experiences involve the user identifying the object they want to interact with by pointing at it
using input devices such as touch, mouse, pen/stylus, and touchpad. Because the raw Human Interface Device
(HID) data provided by these input devices includes many common properties, the info is promoted into a unified
input stack and exposed as consolidated, device-agnostic pointer data. Your UWP apps can then consume this
data without worrying about the input device being used.
Note Device-specific info is also promoted from the raw HID data should your app require it.
Each input point (or contact) on the input stack is represented by a Pointer object exposed through the
PointerRoutedEventArgs parameter provided by various pointer events. In the case of multi-pen or multi-touch
input, each contact is treated as a unique input point.

Pointer events
Pointer events expose basic info such as detection state (in range or in contact) and device type, and extended info
such as location, pressure, and contact geometry. In addition, specific device properties such as which mouse
button a user pressed or whether the pen eraser tip is being used are also available. If your app needs to
differentiate between input devices and their capabilities, see Identify input devices.
UWP apps can listen for the following pointer events:
Note Call CapturePointer to constrain pointer input to a specific UI element. When a pointer is captured by an
element, only that object receives the pointer input events, even when the pointer moves outside the bounding
area of the object. You typically capture the pointer within a PointerPressed event handler as IsInContact
(mouse button pressed, touch or stylus in contact) must be true for CapturePointer to be successful.
EVENT DESCRIPTION

PointerCanceled Occurs when a pointer is canceled by the platform.
Touch pointers are canceled when a pen is detected within range of the input surface.
An active contact is not detected for more than 100 ms.
Monitor/display is changed (resolution, settings, multi-mon configuration).
The desktop is locked or the user has logged off.
The number of simultaneous contacts exceeded the number supported by the device.

PointerCaptureLost Occurs when another UI element captures the pointer, the pointer was released, or another
pointer was programmatically captured.
Note There is no corresponding pointer capture event.

PointerEntered Occurs when a pointer enters the bounding area of an element. This can happen in slightly
different ways for touch, touchpad, mouse, and pen input.
Touch requires a finger contact to fire this event, either from a direct touch down on the element or from
moving into the bounding area of the element.
Mouse and touchpad both have an on-screen cursor that is always visible and fires this event even if no mouse
or touchpad button is pressed.
Like touch, pen fires this event with a direct pen down on the element or from moving into the bounding area
of the element. However, pen also has a hover state (IsInRange) that, when true, fires this event.

PointerExited Occurs when a pointer leaves the bounding area of an element. This can happen in slightly
different ways for touch, touchpad, mouse, and pen input.
Touch requires a finger contact and fires this event when the pointer moves out of the bounding area of the
element.
Mouse and touchpad both have an on-screen cursor that is always visible and fires this event even if no mouse
or touchpad button is pressed.
Like touch, pen fires this event when moving out of the bounding area of the element. However, pen also has a
hover state (IsInRange) that fires this event when the state changes from true to false.

PointerMoved Occurs when a pointer changes coordinates, button state, pressure, tilt, or contact geometry (for
example, width and height) within the bounding area of an element. This can happen in slightly different ways
for touch, touchpad, mouse, and pen input.
Touch requires a finger contact and fires this event only when in contact within the bounding area of the
element.
Mouse and touchpad both have an on-screen cursor that is always visible and fires this event even if no mouse
or touchpad button is pressed.
Like touch, pen fires this event when in contact within the bounding area of the element. However, pen also has
a hover state (IsInRange) that, when true and within the bounding area of the element, fires this event.

PointerPressed Occurs when the pointer indicates a press action (such as a touch down, mouse button down, pen
down, or touchpad button down) within the bounding area of an element.
CapturePointer must be called from the handler for this event.

PointerReleased Occurs when the pointer indicates a release action (such as a touch up, mouse button up, pen
up, or touchpad button up) within the bounding area of an element or, if the pointer is captured, outside the
bounding area.

PointerWheelChanged Occurs when the mouse wheel is rotated.
Mouse input is associated with a single pointer assigned when mouse input is first detected. Clicking a mouse
button (left, wheel, or right) creates a secondary association between the pointer and that button through the
PointerPressed event.

Example
Here are some code examples from a basic pointer tracking app that show how to listen for and handle pointer
events and get various properties for the active pointers.
Create the UI
For this example, we use a rectangle (Target) as the target object for pointer input. The color of the target
changes when the pointer status changes.
Details for each pointer are displayed in a floating text block that moves with the pointer. The pointer events
themselves are displayed to the left of the rectangle (for reporting event sequence).
This is the Extensible Application Markup Language (XAML) for this example.
<Page
x:Class="PointerInput.MainPage"
IsTabStop="false"
xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
xmlns:local="using:PointerInput"
xmlns:d="http://schemas.microsoft.com/expression/blend/2008"
xmlns:mc="http://schemas.openxmlformats.org/markup-compatibility/2006"
mc:Ignorable="d"
Name="page">

<Grid Background="{StaticResource ApplicationForegroundThemeBrush}">


<Grid.ColumnDefinitions>
<ColumnDefinition Width="*" />
<ColumnDefinition Width="69.458" />
<ColumnDefinition Width="80.542"/>
</Grid.ColumnDefinitions>
<Grid.RowDefinitions>
<RowDefinition Height="*" />
<RowDefinition Height="320" />
<RowDefinition Height="*"/>
</Grid.RowDefinitions>
<Canvas Name="Container"
Grid.Column="0"
Grid.Row="1"
HorizontalAlignment="Center"
VerticalAlignment="Center"
Margin="245,0"
Height="320" Width="640">
<Rectangle Name="Target"
Fill="#FF0000"
Stroke="Black"
StrokeThickness="0"
Height="320" Width="640" />
</Canvas>
<Button Name="buttonClear"
Foreground="White"
Width="100"
Height="100">
clear
</Button>
<TextBox Name="eventLog"
Grid.Column="1"
Grid.Row="0"
Grid.RowSpan="3"
Background="#000000"
TextWrapping="Wrap"
Foreground="#FFFFFF"
ScrollViewer.VerticalScrollBarVisibility="Visible"
BorderThickness="0" Grid.ColumnSpan="2"/>
</Grid>
</Page>

Listen for pointer events


In most cases, we recommend that you get pointer info through the PointerRoutedEventArgs of the event
handler.
If the event argument doesn't expose the pointer details required, you can get access to extended PointerPoint
info exposed through the GetCurrentPoint and GetIntermediatePoints methods of PointerRoutedEventArgs.
For this example, we use a rectangle (Target) as the target object for pointer input. The color of the target
changes when the pointer status changes.
The following code sets up the target object, declares global variables, and identifies the various pointer event
listeners for the target.

// For this example, we track simultaneous contacts in case the
// number of contacts has reached the maximum supported by the device.
// Depending on the device, additional contacts might be ignored
// (PointerPressed not fired).
uint numActiveContacts;
Windows.Devices.Input.TouchCapabilities touchCapabilities = new Windows.Devices.Input.TouchCapabilities();

// Dictionary to maintain information about each active contact.
// An entry is added during PointerPressed/PointerEntered events and removed
// during PointerReleased/PointerCaptureLost/PointerCanceled/PointerExited events.
Dictionary<uint, Windows.UI.Xaml.Input.Pointer> contacts;

public MainPage()
{
    this.InitializeComponent();
    numActiveContacts = 0;
    // Initialize the dictionary.
    contacts = new Dictionary<uint, Windows.UI.Xaml.Input.Pointer>((int)touchCapabilities.Contacts);
    // Declare the pointer event handlers.
    Target.PointerPressed += new PointerEventHandler(Target_PointerPressed);
    Target.PointerEntered += new PointerEventHandler(Target_PointerEntered);
    Target.PointerReleased += new PointerEventHandler(Target_PointerReleased);
    Target.PointerExited += new PointerEventHandler(Target_PointerExited);
    Target.PointerCanceled += new PointerEventHandler(Target_PointerCanceled);
    Target.PointerCaptureLost += new PointerEventHandler(Target_PointerCaptureLost);
    Target.PointerMoved += new PointerEventHandler(Target_PointerMoved);
    Target.PointerWheelChanged += new PointerEventHandler(Target_PointerWheelChanged);

    buttonClear.Click += new RoutedEventHandler(ButtonClear_Click);
}

private void ButtonClear_Click(object sender, RoutedEventArgs e)
{
    eventLog.Text = "";
}

Handle pointer events


Next, we use UI feedback to demonstrate basic pointer event handlers.
This handler manages a PointerPressed event. We add the event to the event log, add the pointer to the
pointer array used for tracking the pointers of interest, and display the pointer details.
Note PointerPressed and PointerReleased events do not always occur in pairs. Your app should listen
for and handle any event that might conclude a pointer down action (such as PointerExited,
PointerCanceled, and PointerCaptureLost).
// PointerPressed and PointerReleased events do not always occur in pairs.
// Your app should listen for and handle any event that might conclude a pointer down action
// (such as PointerExited, PointerCanceled, and PointerCaptureLost).
// For this example, we track the number of contacts in case the
// number of contacts has reached the maximum supported by the device.
// Depending on the device, additional contacts might be ignored
// (PointerPressed not fired).
void Target_PointerPressed(object sender, PointerRoutedEventArgs e)
{
Windows.UI.Xaml.Input.Pointer ptr = e.Pointer;

// Update event sequence.


eventLog.Text += "\nDown: " + ptr.PointerId;

// Change background color of target when pointer contact detected.


Target.Fill = new SolidColorBrush(Windows.UI.Colors.Green);

// Prevent most handlers along the event route from handling the same event again.
e.Handled = true;

// Lock the pointer to the target.


Target.CapturePointer(e.Pointer);

// Update event sequence.


eventLog.Text += "\nPointer captured: " + ptr.PointerId;

// Check if pointer already exists (for example, enter occurred prior to press).
if (contacts.ContainsKey(ptr.PointerId))
{
return;
}
// Add contact to dictionary.
contacts[ptr.PointerId] = ptr;
++numActiveContacts;

// Display pointer details.


createInfoPop(e);
}

This handler manages a PointerEntered event. We add the event to the event log, add the pointer to the
pointer collection, and display the pointer details.
private void Target_PointerEntered(object sender, PointerRoutedEventArgs e)
{
Windows.UI.Xaml.Input.Pointer ptr = e.Pointer;

// Update event sequence.


eventLog.Text += "\nEntered: " + ptr.PointerId;

if (contacts.Count == 0)
{
// Change background color of target when pointer contact detected.
Target.Fill = new SolidColorBrush(Windows.UI.Colors.Blue);
}

// Check if pointer already exists (if enter occurred prior to down).


if (contacts.ContainsKey(ptr.PointerId))
{
return;
}

// Add contact to dictionary.


contacts[ptr.PointerId] = ptr;
++numActiveContacts;

// Prevent most handlers along the event route from handling the same event again.
e.Handled = true;

// Display pointer details.


createInfoPop(e);
}

This handler manages a PointerMoved event. We add the event to the event log and update the pointer
details.
Important Mouse input is associated with a single pointer assigned when mouse input is first detected.
Clicking a mouse button (left, wheel, or right) creates a secondary association between the pointer and that
button through the PointerPressed event. The PointerReleased event is fired only when that same
mouse button is released (no other button can be associated with the pointer until this event is complete).
Because of this exclusive association, other mouse button clicks are routed through the PointerMoved
event.
private void Target_PointerMoved(object sender, PointerRoutedEventArgs e)
{
Windows.UI.Xaml.Input.Pointer ptr = e.Pointer;

// Multiple, simultaneous mouse button inputs are processed here.


// Mouse input is associated with a single pointer assigned when
// mouse input is first detected.
// Clicking additional mouse buttons (left, wheel, or right) during
// the interaction creates secondary associations between those buttons
// and the pointer through the pointer pressed event.
// The pointer released event is fired only when the last mouse button
// associated with the interaction (not necessarily the initial button)
// is released.
// Because of this exclusive association, other mouse button clicks are
// routed through the pointer move event.
if (ptr.PointerDeviceType == Windows.Devices.Input.PointerDeviceType.Mouse)
{
// To get mouse state, we need extended pointer details.
// We get the pointer info through the getCurrentPoint method
// of the event argument.
Windows.UI.Input.PointerPoint ptrPt = e.GetCurrentPoint(Target);
if (ptrPt.Properties.IsLeftButtonPressed)
{
eventLog.Text += "\nLeft button: " + ptrPt.PointerId;
}
if (ptrPt.Properties.IsMiddleButtonPressed)
{
eventLog.Text += "\nWheel button: " + ptrPt.PointerId;
}
if (ptrPt.Properties.IsRightButtonPressed)
{
eventLog.Text += "\nRight button: " + ptrPt.PointerId;
}
}

// Prevent most handlers along the event route from handling the same event again.
e.Handled = true;

// Display pointer details.


updateInfoPop(e);
}

This handler manages a PointerWheelChanged event. We add the event to the event log, add the pointer to
the pointer array (if necessary), and display the pointer details.
private void Target_PointerWheelChanged(object sender, PointerRoutedEventArgs e)
{
Windows.UI.Xaml.Input.Pointer ptr = e.Pointer;

// Update event sequence.


eventLog.Text += "\nMouse wheel: " + ptr.PointerId;

// Check if pointer already exists (for example, enter occurred prior to wheel).
if (contacts.ContainsKey(ptr.PointerId))
{
return;
}

// Add contact to dictionary.


contacts[ptr.PointerId] = ptr;
++numActiveContacts;

// Prevent most handlers along the event route from handling the same event again.
e.Handled = true;

// Display pointer details.


createInfoPop(e);
}

This handler manages a PointerReleased event where contact with the digitizer is terminated. We add the
event to the event log, remove the pointer from the pointer collection, and update the pointer details.
void Target_PointerReleased(object sender, PointerRoutedEventArgs e)
{
Windows.UI.Xaml.Input.Pointer ptr = e.Pointer;

// Update event sequence.


eventLog.Text += "\nUp: " + ptr.PointerId;

// If event source is mouse or touchpad and the pointer is still


// over the target, retain pointer and pointer details.
// Return without removing pointer from pointers dictionary.
// For this example, we assume a maximum of one mouse pointer.
if (ptr.PointerDeviceType != Windows.Devices.Input.PointerDeviceType.Mouse)
{
// Update target UI.
Target.Fill = new SolidColorBrush(Windows.UI.Colors.Red);

destroyInfoPop(ptr);

// Remove contact from dictionary.


if (contacts.ContainsKey(ptr.PointerId))
{
contacts[ptr.PointerId] = null;
contacts.Remove(ptr.PointerId);
--numActiveContacts;
}

// Release the pointer from the target.


Target.ReleasePointerCapture(e.Pointer);

// Update event sequence.


eventLog.Text += "\nPointer released: " + ptr.PointerId;

// Prevent most handlers along the event route from handling the same event again.
e.Handled = true;
}
else
{
Target.Fill = new SolidColorBrush(Windows.UI.Colors.Blue);
}
}

This handler manages a PointerExited event where contact with the digitizer is maintained. We add the event
to the event log, remove the pointer from the pointer array, and update the pointer details.
private void Target_PointerExited(object sender, PointerRoutedEventArgs e)
{
Windows.UI.Xaml.Input.Pointer ptr = e.Pointer;

// Update event sequence.


eventLog.Text += "\nPointer exited: " + ptr.PointerId;

// Remove contact from dictionary.


if (contacts.ContainsKey(ptr.PointerId))
{
contacts[ptr.PointerId] = null;
contacts.Remove(ptr.PointerId);
--numActiveContacts;
}

// Update the UI and pointer details.


destroyInfoPop(ptr);

if (contacts.Count == 0)
{
Target.Fill = new SolidColorBrush(Windows.UI.Colors.Red);
}

// Prevent most handlers along the event route from handling the same event again.
e.Handled = true;
}

This handler manages a PointerCanceled event. We add the event to the event log, remove the pointer from
the pointer array, and update the pointer details.

// Fires for various reasons, including:


// - Touch contact canceled by pen coming into range of the surface.
// - The device doesn't report an active contact for more than 100ms.
// - The desktop is locked or the user logged off.
// - The number of simultaneous contacts exceeded the number supported by the device.
private void Target_PointerCanceled(object sender, PointerRoutedEventArgs e)
{
Windows.UI.Xaml.Input.Pointer ptr = e.Pointer;

// Update event sequence.


eventLog.Text += "\nPointer canceled: " + ptr.PointerId;

// Remove contact from dictionary.


if (contacts.ContainsKey(ptr.PointerId))
{
contacts[ptr.PointerId] = null;
contacts.Remove(ptr.PointerId);
--numActiveContacts;
}

destroyInfoPop(ptr);

if (contacts.Count == 0)
{
Target.Fill = new SolidColorBrush(Windows.UI.Colors.Black);
}
// Prevent most handlers along the event route from handling the same event again.
e.Handled = true;
}

This handler manages a PointerCaptureLost event. We add the event to the event log, remove the pointer
from the pointer array, and update the pointer details.
Note PointerCaptureLost can occur instead of PointerReleased. Pointer capture can be lost for various
reasons.

// Fires for various reasons, including:


// - User interactions
// - Programmatic capture of another pointer
// - Captured pointer was deliberately released
// PointerCaptureLost can fire instead of PointerReleased.
private void Target_PointerCaptureLost(object sender, PointerRoutedEventArgs e)
{
Windows.UI.Xaml.Input.Pointer ptr = e.Pointer;

// Update event sequence.


eventLog.Text += "\nPointer capture lost: " + ptr.PointerId;

// Remove contact from dictionary.


if (contacts.ContainsKey(ptr.PointerId))
{
contacts[ptr.PointerId] = null;
contacts.Remove(ptr.PointerId);
--numActiveContacts;
}

destroyInfoPop(ptr);

if (contacts.Count == 0)
{
Target.Fill = new SolidColorBrush(Windows.UI.Colors.Black);
}
// Prevent most handlers along the event route from handling the same event again.
e.Handled = true;
}

Get pointer properties


As stated earlier, you must get most extended pointer info from a Windows.UI.Input.PointerPoint object
obtained through the GetCurrentPoint and GetIntermediatePoints methods of PointerRoutedEventArgs.
First, we create a new TextBlock for each pointer.

void createInfoPop(PointerRoutedEventArgs e)
{
TextBlock pointerDetails = new TextBlock();
Windows.UI.Input.PointerPoint ptrPt = e.GetCurrentPoint(Target);
pointerDetails.Name = ptrPt.PointerId.ToString();
pointerDetails.Foreground = new SolidColorBrush(Windows.UI.Colors.White);
pointerDetails.Text = queryPointer(ptrPt);

TranslateTransform x = new TranslateTransform();


x.X = ptrPt.Position.X + 20;
x.Y = ptrPt.Position.Y + 20;
pointerDetails.RenderTransform = x;

Container.Children.Add(pointerDetails);
}

Then we provide a way to update the pointer info in an existing TextBlock associated with that pointer.
void updateInfoPop(PointerRoutedEventArgs e)
{
foreach (var pointerDetails in Container.Children)
{
if (pointerDetails.GetType().ToString() == "Windows.UI.Xaml.Controls.TextBlock")
{
TextBlock _TextBlock = (TextBlock)pointerDetails;
if (_TextBlock.Name == e.Pointer.PointerId.ToString())
{
// To get pointer location details, we need extended pointer info.
// We get the pointer info through the getCurrentPoint method
// of the event argument.
Windows.UI.Input.PointerPoint ptrPt = e.GetCurrentPoint(Target);
TranslateTransform x = new TranslateTransform();
x.X = ptrPt.Position.X + 20;
x.Y = ptrPt.Position.Y + 20;
pointerDetails.RenderTransform = x;
_TextBlock.Text = queryPointer(ptrPt);
}
}
}
}

Finally, we query various pointer properties.


String queryPointer(PointerPoint ptrPt)
{
String details = "";

switch (ptrPt.PointerDevice.PointerDeviceType)
{
case Windows.Devices.Input.PointerDeviceType.Mouse:
details += "\nPointer type: mouse";
break;
case Windows.Devices.Input.PointerDeviceType.Pen:
details += "\nPointer type: pen";
if (ptrPt.IsInContact)
{
details += "\nPressure: " + ptrPt.Properties.Pressure;
details += "\nrotation: " + ptrPt.Properties.Orientation;
details += "\nTilt X: " + ptrPt.Properties.XTilt;
details += "\nTilt Y: " + ptrPt.Properties.YTilt;
details += "\nBarrel button pressed: " + ptrPt.Properties.IsBarrelButtonPressed;
}
break;
case Windows.Devices.Input.PointerDeviceType.Touch:
details += "\nPointer type: touch";
details += "\nrotation: " + ptrPt.Properties.Orientation;
details += "\nTilt X: " + ptrPt.Properties.XTilt;
details += "\nTilt Y: " + ptrPt.Properties.YTilt;
break;
default:
details += "\nPointer type: n/a";
break;
}

GeneralTransform gt = Target.TransformToVisual(page);
Point screenPoint;

screenPoint = gt.TransformPoint(new Point(ptrPt.Position.X, ptrPt.Position.Y));


details += "\nPointer Id: " + ptrPt.PointerId.ToString() +
"\nPointer location (parent): " + ptrPt.Position.X + ", " + ptrPt.Position.Y +
"\nPointer location (screen): " + screenPoint.X + ", " + screenPoint.Y;
return details;
}

Complete example
The following is the C# code for this example. For links to more complex samples, see Related articles at the
bottom of this page.

using System;
using System.Collections.Generic;
using System.IO;
using System.Linq;
using Windows.Foundation;
using Windows.Foundation.Collections;
using Windows.UI.Input;
using Windows.UI.Xaml;
using Windows.UI.Xaml.Controls;
using Windows.UI.Xaml.Controls.Primitives;
using Windows.UI.Xaml.Data;
using Windows.UI.Xaml.Input;
using Windows.UI.Xaml.Media;
using Windows.UI.Xaml.Navigation;

// The Blank Page item template is documented at http://go.microsoft.com/fwlink/?LinkId=234238

namespace PointerInput
{
/// <summary>
/// An empty page that can be used on its own or navigated to within a Frame.
/// </summary>
public sealed partial class MainPage : Page
{
// For this example, we track simultaneous contacts in case the
// number of contacts has reached the maximum supported by the device.
// Depending on the device, additional contacts might be ignored
// (PointerPressed not fired).
uint numActiveContacts;
Windows.Devices.Input.TouchCapabilities touchCapabilities = new Windows.Devices.Input.TouchCapabilities();

// Dictionary to maintain information about each active contact.


// An entry is added during PointerPressed/PointerEntered events and removed
// during PointerReleased/PointerCaptureLost/PointerCanceled/PointerExited events.
Dictionary<uint, Windows.UI.Xaml.Input.Pointer> contacts;

public MainPage()
{
this.InitializeComponent();
numActiveContacts = 0;
// Initialize the dictionary.
contacts = new Dictionary<uint, Windows.UI.Xaml.Input.Pointer>((int)touchCapabilities.Contacts);
// Declare the pointer event handlers.
Target.PointerPressed += new PointerEventHandler(Target_PointerPressed);
Target.PointerEntered += new PointerEventHandler(Target_PointerEntered);
Target.PointerReleased += new PointerEventHandler(Target_PointerReleased);
Target.PointerExited += new PointerEventHandler(Target_PointerExited);
Target.PointerCanceled += new PointerEventHandler(Target_PointerCanceled);
Target.PointerCaptureLost += new PointerEventHandler(Target_PointerCaptureLost);
Target.PointerMoved += new PointerEventHandler(Target_PointerMoved);
Target.PointerWheelChanged += new PointerEventHandler(Target_PointerWheelChanged);

buttonClear.Click += new RoutedEventHandler(ButtonClear_Click);


}

private void ButtonClear_Click(object sender, RoutedEventArgs e)


{
eventLog.Text = "";
}

// PointerPressed and PointerReleased events do not always occur in pairs.


// Your app should listen for and handle any event that might conclude a pointer down action
// (such as PointerExited, PointerCanceled, and PointerCaptureLost).
// For this example, we track the number of contacts in case the
// number of contacts has reached the maximum supported by the device.
// Depending on the device, additional contacts might be ignored
// (PointerPressed not fired).
void Target_PointerPressed(object sender, PointerRoutedEventArgs e)
{
Windows.UI.Xaml.Input.Pointer ptr = e.Pointer;

// Update event sequence.


eventLog.Text += "\nDown: " + ptr.PointerId;

// Change background color of target when pointer contact detected.


Target.Fill = new SolidColorBrush(Windows.UI.Colors.Green);

// Prevent most handlers along the event route from handling the same event again.
e.Handled = true;

// Lock the pointer to the target.


Target.CapturePointer(e.Pointer);

// Update event sequence.


eventLog.Text += "\nPointer captured: " + ptr.PointerId;

// Check if pointer already exists (for example, enter occurred prior to press).
if (contacts.ContainsKey(ptr.PointerId))
{
return;
}
// Add contact to dictionary.
contacts[ptr.PointerId] = ptr;
++numActiveContacts;

// Display pointer details.


createInfoPop(e);
}

void Target_PointerReleased(object sender, PointerRoutedEventArgs e)


{
Windows.UI.Xaml.Input.Pointer ptr = e.Pointer;

// Update event sequence.


eventLog.Text += "\nUp: " + ptr.PointerId;

// If event source is mouse or touchpad and the pointer is still


// over the target, retain pointer and pointer details.
// Return without removing pointer from pointers dictionary.
// For this example, we assume a maximum of one mouse pointer.
if (ptr.PointerDeviceType != Windows.Devices.Input.PointerDeviceType.Mouse)
{
// Update target UI.
Target.Fill = new SolidColorBrush(Windows.UI.Colors.Red);

destroyInfoPop(ptr);

// Remove contact from dictionary.


if (contacts.ContainsKey(ptr.PointerId))
{
contacts[ptr.PointerId] = null;
contacts.Remove(ptr.PointerId);
--numActiveContacts;
}

// Release the pointer from the target.


Target.ReleasePointerCapture(e.Pointer);

// Update event sequence.


eventLog.Text += "\nPointer released: " + ptr.PointerId;

// Prevent most handlers along the event route from handling the same event again.
e.Handled = true;
}
else
{
Target.Fill = new SolidColorBrush(Windows.UI.Colors.Blue);
}
}

private void Target_PointerMoved(object sender, PointerRoutedEventArgs e)


{
Windows.UI.Xaml.Input.Pointer ptr = e.Pointer;

// Multiple, simultaneous mouse button inputs are processed here.


// Mouse input is associated with a single pointer assigned when
// mouse input is first detected.
// Clicking additional mouse buttons (left, wheel, or right) during
// the interaction creates secondary associations between those buttons
// and the pointer through the pointer pressed event.
// The pointer released event is fired only when the last mouse button
// associated with the interaction (not necessarily the initial button)
// is released.
// Because of this exclusive association, other mouse button clicks are
// routed through the pointer move event.
if (ptr.PointerDeviceType == Windows.Devices.Input.PointerDeviceType.Mouse)
{
// To get mouse state, we need extended pointer details.
// We get the pointer info through the getCurrentPoint method
// of the event argument.
Windows.UI.Input.PointerPoint ptrPt = e.GetCurrentPoint(Target);
if (ptrPt.Properties.IsLeftButtonPressed)
{
eventLog.Text += "\nLeft button: " + ptrPt.PointerId;
}
if (ptrPt.Properties.IsMiddleButtonPressed)
{
eventLog.Text += "\nWheel button: " + ptrPt.PointerId;
}
if (ptrPt.Properties.IsRightButtonPressed)
{
eventLog.Text += "\nRight button: " + ptrPt.PointerId;
}
}

// Prevent most handlers along the event route from handling the same event again.
e.Handled = true;

// Display pointer details.


updateInfoPop(e);
}

private void Target_PointerEntered(object sender, PointerRoutedEventArgs e)


{
Windows.UI.Xaml.Input.Pointer ptr = e.Pointer;

// Update event sequence.


eventLog.Text += "\nEntered: " + ptr.PointerId;

if (contacts.Count == 0)
{
// Change background color of target when pointer contact detected.
Target.Fill = new SolidColorBrush(Windows.UI.Colors.Blue);
}

// Check if pointer already exists (if enter occurred prior to down).


if (contacts.ContainsKey(ptr.PointerId))
{
return;
}

// Add contact to dictionary.


contacts[ptr.PointerId] = ptr;
++numActiveContacts;

// Prevent most handlers along the event route from handling the same event again.
e.Handled = true;

// Display pointer details.


createInfoPop(e);
}

private void Target_PointerWheelChanged(object sender, PointerRoutedEventArgs e)


{
Windows.UI.Xaml.Input.Pointer ptr = e.Pointer;

// Update event sequence.


eventLog.Text += "\nMouse wheel: " + ptr.PointerId;

// Check if pointer already exists (for example, enter occurred prior to wheel).
if (contacts.ContainsKey(ptr.PointerId))
{
return;
}
// Add contact to dictionary.
contacts[ptr.PointerId] = ptr;
++numActiveContacts;

// Prevent most handlers along the event route from handling the same event again.
e.Handled = true;

// Display pointer details.


createInfoPop(e);
}

// Fires for various reasons, including:


// - User interactions
// - Programmatic capture of another pointer
// - Captured pointer was deliberately released
// PointerCaptureLost can fire instead of PointerReleased.
private void Target_PointerCaptureLost(object sender, PointerRoutedEventArgs e)
{
Windows.UI.Xaml.Input.Pointer ptr = e.Pointer;

// Update event sequence.


eventLog.Text += "\nPointer capture lost: " + ptr.PointerId;

// Remove contact from dictionary.


if (contacts.ContainsKey(ptr.PointerId))
{
contacts[ptr.PointerId] = null;
contacts.Remove(ptr.PointerId);
--numActiveContacts;
}

destroyInfoPop(ptr);

if (contacts.Count == 0)
{
Target.Fill = new SolidColorBrush(Windows.UI.Colors.Black);
}
// Prevent most handlers along the event route from handling the same event again.
e.Handled = true;
}

// Fires for various reasons, including:


// - Touch contact canceled by pen coming into range of the surface.
// - The device doesn't report an active contact for more than 100ms.
// - The desktop is locked or the user logged off.
// - The number of simultaneous contacts exceeded the number supported by the device.
private void Target_PointerCanceled(object sender, PointerRoutedEventArgs e)
{
Windows.UI.Xaml.Input.Pointer ptr = e.Pointer;

// Update event sequence.


eventLog.Text += "\nPointer canceled: " + ptr.PointerId;

// Remove contact from dictionary.


if (contacts.ContainsKey(ptr.PointerId))
{
contacts[ptr.PointerId] = null;
contacts.Remove(ptr.PointerId);
--numActiveContacts;
}

destroyInfoPop(ptr);

if (contacts.Count == 0)
{
Target.Fill = new SolidColorBrush(Windows.UI.Colors.Black);
}
// Prevent most handlers along the event route from handling the same event again.
e.Handled = true;
}

private void Target_PointerExited(object sender, PointerRoutedEventArgs e)


{
Windows.UI.Xaml.Input.Pointer ptr = e.Pointer;

// Update event sequence.


eventLog.Text += "\nPointer exited: " + ptr.PointerId;

// Remove contact from dictionary.


if (contacts.ContainsKey(ptr.PointerId))
{
contacts[ptr.PointerId] = null;
contacts.Remove(ptr.PointerId);
--numActiveContacts;
}

// Update the UI and pointer details.


destroyInfoPop(ptr);

if (contacts.Count == 0)
{
Target.Fill = new SolidColorBrush(Windows.UI.Colors.Red);
}

// Prevent most handlers along the event route from handling the same event again.
e.Handled = true;
}

void createInfoPop(PointerRoutedEventArgs e)
{
TextBlock pointerDetails = new TextBlock();
Windows.UI.Input.PointerPoint ptrPt = e.GetCurrentPoint(Target);
pointerDetails.Name = ptrPt.PointerId.ToString();
pointerDetails.Foreground = new SolidColorBrush(Windows.UI.Colors.White);
pointerDetails.Text = queryPointer(ptrPt);

TranslateTransform x = new TranslateTransform();


x.X = ptrPt.Position.X + 20;
x.Y = ptrPt.Position.Y + 20;
pointerDetails.RenderTransform = x;

Container.Children.Add(pointerDetails);
}

void destroyInfoPop(Windows.UI.Xaml.Input.Pointer ptr)


{
foreach (var pointerDetails in Container.Children)
{
if (pointerDetails.GetType().ToString() == "Windows.UI.Xaml.Controls.TextBlock")
{
TextBlock _TextBlock = (TextBlock)pointerDetails;
if (_TextBlock.Name == ptr.PointerId.ToString())
{
Container.Children.Remove(pointerDetails);
}
}
}
}

void updateInfoPop(PointerRoutedEventArgs e)
{
foreach (var pointerDetails in Container.Children)
{
if (pointerDetails.GetType().ToString() == "Windows.UI.Xaml.Controls.TextBlock")
{
TextBlock _TextBlock = (TextBlock)pointerDetails;
if (_TextBlock.Name == e.Pointer.PointerId.ToString())
{
// To get pointer location details, we need extended pointer info.
// We get the pointer info through the getCurrentPoint method
// of the event argument.
Windows.UI.Input.PointerPoint ptrPt = e.GetCurrentPoint(Target);
TranslateTransform x = new TranslateTransform();
x.X = ptrPt.Position.X + 20;
x.Y = ptrPt.Position.Y + 20;
pointerDetails.RenderTransform = x;
_TextBlock.Text = queryPointer(ptrPt);
}
}
}
}

String queryPointer(PointerPoint ptrPt)


{
String details = "";

switch (ptrPt.PointerDevice.PointerDeviceType)
{
case Windows.Devices.Input.PointerDeviceType.Mouse:
details += "\nPointer type: mouse";
break;
case Windows.Devices.Input.PointerDeviceType.Pen:
details += "\nPointer type: pen";
if (ptrPt.IsInContact)
{
details += "\nPressure: " + ptrPt.Properties.Pressure;
details += "\nrotation: " + ptrPt.Properties.Orientation;
details += "\nTilt X: " + ptrPt.Properties.XTilt;
details += "\nTilt Y: " + ptrPt.Properties.YTilt;
details += "\nBarrel button pressed: " + ptrPt.Properties.IsBarrelButtonPressed;
}
break;
case Windows.Devices.Input.PointerDeviceType.Touch:
details += "\nPointer type: touch";
details += "\nrotation: " + ptrPt.Properties.Orientation;
details += "\nTilt X: " + ptrPt.Properties.XTilt;
details += "\nTilt Y: " + ptrPt.Properties.YTilt;
break;
default:
details += "\nPointer type: n/a";
break;
}

GeneralTransform gt = Target.TransformToVisual(page);
Point screenPoint;

screenPoint = gt.TransformPoint(new Point(ptrPt.Position.X, ptrPt.Position.Y));


details += "\nPointer Id: " + ptrPt.PointerId.ToString() +
"\nPointer location (parent): " + ptrPt.Position.X + ", " + ptrPt.Position.Y +
"\nPointer location (screen): " + screenPoint.X + ", " + screenPoint.Y;
return details;
}
}
}

Related articles
Samples
Basic input sample
Low latency input sample
User interaction mode sample
Focus visuals sample
Archive samples
Input: XAML user input events sample
Input: Device capabilities sample
Input: Manipulations and gestures (C++) sample
Input: Touch hit testing sample
XAML scrolling, panning, and zooming sample
Input: Simplified ink sample
Device primer for Universal Windows Platform
(UWP) apps
3/6/2017 · 7 min to read

Getting to know the devices that support Universal Windows Platform (UWP) apps will help you offer the best user
experience for each form factor. When designing for a particular device, the main considerations include how the
app will appear on that device, where, when, and how the app will be used on that device, and how the user will
interact with that device.

PCs and laptops


Windows PCs and laptops include a wide array of devices and screen sizes. In general, PCs and laptops can display
more info than phone or tablets.
Screen sizes
13" and greater
Typical usage
Apps on desktops and laptops see shared use, but by one user at a time, and usually for longer periods.
UI considerations
Apps can have a windowed view, the size of which is determined by the user. Depending on window size,
there can be between one and three frames. On larger monitors, the app can have more than three frames.
When using an app on a desktop or laptop, the user has control over app files. As an app designer, be sure
to provide the mechanisms to manage your app's content. Consider including commands and features such
as "Save As", "Recent files", and so on.
System back is optional. When an app developer chooses to show it, it appears in the app title bar.
Inputs
Mouse
Keyboard
Touch on laptops and all-in-one desktops.
Gamepads, such as the Xbox controller, are sometimes used.
Typical device capabilities
Camera
Microphone

Tablets and 2-in-1s


Ultra-portable tablet computers are equipped with touchscreens, cameras, microphones, and accelerometers.
Tablet screen sizes usually range from 7" to 13.3". 2-in-1 devices can act like either a tablet or a laptop with a
keyboard and mouse, depending on the configuration (usually involving folding the screen back or tilting it
upright).
Screen sizes
7" to 13.3" for tablet
13.3" and greater for 2-in-1

Typical usage
About 80% of tablet use is by the owner, with the other 20% being shared use.
It's most commonly used at home as a companion device while watching TV.
It's used for longer periods than phones and phablets.
Text is entered in short bursts.
UI considerations
In both landscape and portrait orientations, tablets allow two frames at a time.
System back is located on the navigation bar.
Inputs
Touch
Stylus
External keyboard (occasionally)
Mouse (occasionally)
Voice (occasionally)
Typical device capabilities
Camera
Microphone
Movement sensors
Location sensors

NOTE
Most of the considerations for PCs and laptops apply to 2-in-1s as well.

Xbox and TV
The experience of sitting on your couch across the room, using a gamepad or remote to interact with your TV, is
called the 10-foot experience. It is so named because the user is generally sitting approximately 10 feet away
from the screen. This provides unique challenges that aren't present in, say, the 2-foot experience, or interacting
with a PC. If you are developing an app for Xbox One or any other device that's connected to a TV screen and
might use a gamepad or remote for input, you should always keep this in mind.
Designing your UWP app for the 10-foot experience is very different from designing for any of the other device
categories listed here. For more information, see Designing for Xbox and TV.
Screen sizes
24" and up

Typical usage
Often shared among several people, though it is also often used by just one person.
Usually used for longer periods.
Most commonly used at home, staying in one place.
Rarely asks for text input because it takes longer with a gamepad or remote.
Orientation of the screen is fixed.
Usually only runs one app at a time, but it may be possible to snap apps to the side (such as on Xbox).
UI considerations
Apps usually stay the same size, unless another app is snapped to the side.
System back is useful functionality that is offered in most Xbox apps, accessed using the B button on the
gamepad.
Since the customer is sitting approximately 10 feet away from the screen, make sure that UI is large and clear
enough to be visible.
Inputs
Gamepad (such as an Xbox controller)
Remote
Voice (occasionally, if the customer has a Kinect or headset)
Typical device capabilities
Camera (occasionally, if the customer has a Kinect)
Microphone (occasionally, if the customer has a Kinect or headset)
Movement sensors (occasionally, if the customer has a Kinect)

Phones and phablets


The most widely used of all computing devices, phones can do a lot with limited screen real estate and basic
inputs. Phones are available in a variety of sizes; larger phones are called phablets. App experiences on phablets
are similar to those on phones, but the increased screen real estate of phablets enables some key changes in
content consumption.
With Continuum for Phones, a new experience for compatible Windows 10 mobile devices, users can connect their
phones to a monitor and even use a mouse and keyboard to make their phones work like a laptop. (For more info,
see the Continuum for Phone article.)
Screen sizes
4" to 5" for phone
5.5" to 7" for phablet

Typical usage
Primarily used in portrait orientation, mostly due to the ease of holding the phone with one hand and being
able to fully interact with it that way, but there are some experiences that work well in landscape, such as
viewing photos and video, reading a book, and composing text.
Mostly used by just one person, the owner of the device.
Always within reach, usually stashed in a pocket or a bag.
Used for brief periods of time.
Users are often multitasking when using the phone.
Text is entered in short bursts.
UI considerations
The small size of a phone's screen allows only one frame at a time to be viewed in both portrait and
landscape orientations. All hierarchical navigation patterns on a phone use the "drill" model, with the user
navigating through single-frame UI layers.
Similar to phones, phablets in portrait mode can view only one frame at a time. But with the greater screen
real estate available on a phablet, users have the ability to rotate to landscape orientation and stay there, so
two app frames can be visible at a time.
In both landscape and portrait orientations, be sure that there's enough screen real estate for the app bar
when the on-screen keyboard is up.
Inputs
Touch
Voice
Typical device capabilities
Microphone
Camera
Movement sensors
Location sensors

Surface Hub devices


Microsoft Surface Hub is a large-screen team collaboration device designed for simultaneous use by multiple
users.
Screen sizes
55'' and 84''

Typical usage
Apps on Surface Hub see shared use for short periods of time, such as in meetings.
Surface Hub devices are mostly stationary and rarely moved.
UI considerations
Apps on Surface Hub can appear in one of four states - full (standard full-screen view), background (hidden
from view while the app is still running, available in task switcher), fill (a fixed view that occupies the available
stage area), and snapped (variable view that occupies the right or left sides of the stage).
In snapped or fill modes, the system displays the Skype sidebar and shrinks the app horizontally.
System back is optional. When an app developer chooses to show it, it appears in the app title bar.
Inputs
Touch
Pen
Voice
Keyboard (on-screen/remote)
Touchpad (remote)
Typical device capabilities
Camera
Microphone

Windows IoT devices


Windows IoT devices are an emerging class of devices centered around embedding small electronics, sensors, and
connectivity within physical objects. These devices are usually connected through a network or the Internet to
report on the real-world data they sense, and in some cases act on it. These devices either have no screen (also
known as headless devices) or are connected to a small screen (known as headed devices) that is usually 3.5'' or
smaller.
Screen sizes
3.5'' or smaller
Some devices have no screen

Typical usage
Usually connected through a network or the Internet to report on the real-world data they sense, and in some
cases act on it.
These devices can run only one application at a time, unlike phones or other larger devices.
The device isn't something that is interacted with all the time; instead, it is available when you need it and out
of the way when you don't.
Apps don't have a dedicated back affordance; providing one is the developer's responsibility.
UI considerations
"headless" devices have no screen.
Display for headed devices is minimal, only showing what is necessary due to limited screen real estate and
functionality.
Orientation is most times locked, so your app only needs to consider one display direction.
Inputs
Variable, depending on the device
Typical device capabilities
Variable, depending on the device
Designing for Xbox and TV

Design your Universal Windows Platform (UWP) app so that it looks good and functions well on Xbox One and
television screens.

Overview
The Universal Windows Platform lets you create delightful experiences across multiple Windows 10 devices. Most
of the functionality provided by the UWP framework enables apps to use the same user interface (UI) across these
devices, without additional work. However, tailoring and optimizing your app to work great on Xbox One and TV
screens requires special considerations.
The experience of sitting on your couch across the room, using a gamepad or remote to interact with your TV, is
called the 10-foot experience. It is so named because the user is generally sitting approximately 10 feet away
from the screen. This provides unique challenges that aren't present in, say, the 2-foot experience, or interacting
with a PC. If you are developing an app for Xbox One or any other device that outputs to the TV screen and uses a
controller for input, you should always keep this in mind.
Not all of the steps in this article are required to make your app work well for 10-foot experiences, but
understanding them and making the appropriate decisions for your app will result in a better 10-foot experience
tailored for your app's specific needs. As you bring your app to life in the 10-foot environment, consider the
following design principles.
Simple
Designing for the 10-foot environment presents a unique set of challenges. Resolution and viewing distance can
make it difficult for people to process too much information. Try to keep your design clean, reduced to the
simplest possible components. The amount of information displayed on a TV should be comparable to what you'd
see on a mobile phone, rather than on a desktop.

Coherent
UWP apps in the 10-foot environment should be intuitive and easy to use. Make the focus clear and unmistakable.
Arrange content so that movement across the space is consistent and predictable. Give people the shortest path to
what they want to do.

All movies shown in the screenshot are available on Microsoft Movies & TV.
Captivating
The most immersive, cinematic experiences take place on the big screen. Edge-to-edge scenery, elegant motion,
and vibrant use of color and typography take your apps to the next level. Be bold and beautiful.

Optimizations for the 10-foot experience


Now that you know the principles of good UWP app design for the 10-foot experience, read through the following
overview of the specific ways you can optimize your app and make for a great user experience.
Gamepad and remote control: Making sure that your app works well with gamepad and remote is the most
important step in optimizing for 10-foot experiences. There are several gamepad- and remote-specific
improvements that you can make to optimize the user interaction experience on a device where the user's
actions are somewhat limited.

XY focus navigation and interaction: The UWP provides XY focus navigation that allows the user to navigate
around your app's UI. However, this limits the user to navigating up, down, left, and right. Recommendations
for dealing with this and other considerations are outlined in this section.

Mouse mode: In some user interfaces, such as maps and drawing surfaces, it is not possible or practical to use
XY focus navigation. For these interfaces, the UWP provides mouse mode to let the gamepad/remote navigate
freely, like a mouse on a desktop computer.

Focus visual: The focus visual is the border around the UI element that currently has focus. This helps orient
the user so that they can easily navigate your UI without getting lost. If the focus is not clearly visible, the user
could get lost in your UI and not have a great experience.

Focus engagement: Setting focus engagement on a UI element requires the user to press the A/Select button
in order to interact with it. This can help create a better experience for the user when navigating your app's UI.

UI element sizing: The Universal Windows Platform uses scaling and effective pixels to scale the UI according
to the viewing distance. Understanding sizing and applying it across your UI will help optimize your app for
the 10-foot environment.

TV-safe area: The UWP automatically avoids displaying any UI in TV-unsafe areas (areas close to the edges of
the screen) by default. However, this creates a "boxed-in" effect in which the UI looks letterboxed. For your app
to be truly immersive on TV, you will want to modify it so that it extends to the edges of the screen on TVs that
support it.

Colors: The UWP supports color themes, and an app that respects the system theme will default to dark on
Xbox One. If your app has a specific color theme, you should consider that some colors don't work well for TV
and should be avoided.

Sound: Sounds play a key role in the 10-foot experience, helping to immerse and give feedback to the user.
The UWP provides functionality that automatically turns on sounds for common controls when the app is
running on Xbox One. Find out more about the sound support built into the UWP and learn how to take
advantage of it.

Guidelines for UI controls: There are several UI controls that work well across multiple devices, but have
certain considerations when used on TV. Read about some best practices for using these controls when
designing for the 10-foot experience.

Custom visual state trigger for Xbox: To tailor your UWP app for the 10-foot experience, we recommend that
you use a custom visual state trigger to make layout changes when the app detects that it has been launched
on an Xbox console.

NOTE
Most of the code snippets in this topic are in XAML/C#; however, the principles and concepts apply to all UWP apps. If
you're developing an HTML/JavaScript UWP app for Xbox, check out the excellent TVHelpers library on GitHub.

Gamepad and remote control


Just like keyboard and mouse are for PC, and touch is for phone and tablet, gamepad and remote control are the
main input devices for the 10-foot experience. This section introduces what the hardware buttons are and what
they do. In XY focus navigation and interaction and Mouse mode, you will learn how to optimize your app when
using these input devices.
The quality of gamepad and remote behavior that you get out-of-the-box depends on how well keyboard is
supported in your app. A good way to ensure that your app will work well with gamepad/remote is to make sure
that it works well with keyboard on PC, and then test with gamepad/remote to find weak spots in your UI.
Hardware buttons
Throughout this document, buttons will be referred to by the names given in the following diagram.

As you can see from the diagram, there are some buttons that are supported on gamepad that are not supported
on remote control, and vice versa. While you can use buttons that are only supported on one input device to make
navigating the UI faster, be aware that using them for critical interactions may create a situation where the user is
unable to interact with certain parts of the UI.
The following table lists all of the hardware buttons supported by UWP apps, and which input device supports
them.

BUTTON GAMEPAD REMOTE CONTROL

A/Select button Yes Yes

B/Back button Yes Yes

Directional pad (D-pad) Yes Yes

Menu button Yes Yes

View button Yes Yes

X and Y buttons Yes No

Left stick Yes No

Right stick Yes No

Left and right triggers Yes No

Left and right bumpers Yes No

OneGuide button No Yes

Volume button No Yes

Channel button No Yes

Media control buttons No Yes

Mute button No Yes

Built-in button support


The UWP automatically maps existing keyboard input behavior to gamepad and remote control input. The
following table lists these built-in mappings.

KEYBOARD GAMEPAD/REMOTE

Arrow keys D-pad (also left stick on gamepad)

Spacebar A/Select button

Enter A/Select button

Escape B/Back button*

*When neither the KeyDown nor KeyUp events for the B button are handled by the app, the
SystemNavigationManager.BackRequested event will be fired, which should result in back navigation within the
app. However, you have to implement this yourself, as in the following code snippet:
// This code goes in the MainPage class.

public MainPage()
{
    this.InitializeComponent();

    // Handle Page back navigation behaviors.
    SystemNavigationManager.GetForCurrentView().BackRequested +=
        SystemNavigationManager_BackRequested;
}

private void SystemNavigationManager_BackRequested(
    object sender,
    BackRequestedEventArgs e)
{
    if (!e.Handled)
    {
        e.Handled = this.BackRequested();
    }
}

public Frame AppFrame { get { return this.Frame; } }

private bool BackRequested()
{
    // Get a hold of the current frame so that we can inspect the app back stack.
    if (this.AppFrame == null)
        return false;

    // Check to see if this is the top-most page on the app back stack.
    // If not, handle the event and go back to the previous page in the app.
    if (this.AppFrame.CanGoBack)
    {
        this.AppFrame.GoBack();
        return true;
    }
    return false;
}

UWP apps on Xbox One also support pressing the Menu button to open context menus. For more information, see
CommandBar and ContextFlyout.
Accelerator support
Accelerator buttons are buttons that can be used to speed up navigation through a UI. However, these buttons
may be unique to a certain input device, so keep in mind that not all users will be able to use these functions. In
fact, gamepad is currently the only input device that supports accelerator functions for UWP apps on Xbox One.
The following table lists the accelerator support built into the UWP, as well as that which you can implement on
your own. Utilize these behaviors in your custom UI to provide a consistent and friendly user experience.

INTERACTION | KEYBOARD | GAMEPAD | BUILT-IN FOR | RECOMMENDED FOR

Page up/down | Page up/down | Left/right triggers | CalendarView, ListBox, ListViewBase, ListView, ScrollViewer, Selector, LoopingSelector, ComboBox, FlipView | Views that support vertical scrolling

Page left/right | None | Left/right bumpers | Pivot, ListBox, ListViewBase, ListView, ScrollViewer, Selector, LoopingSelector, FlipView | Views that support horizontal scrolling

Zoom in/out | Ctrl +/- | Left/right triggers | None | ScrollViewer, views that support zooming in and out

Open/close nav pane | None | View button | None | Navigation panes

Search | None | Y button | None | Shortcut to the main search function in the app
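
As a sketch of implementing one of the recommended accelerators yourself, the following page-level handler maps
the Y button from the table above to the app's main search function. This is only an illustrative sketch;
SearchBox is a hypothetical AutoSuggestBox defined elsewhere in the page's XAML.

// Sketch: Y button jumps to the app's search box (a recommended accelerator).
// SearchBox is a hypothetical element; adapt this to your own UI.
protected override void OnKeyDown(KeyRoutedEventArgs e)
{
    if (e.Key == Windows.System.VirtualKey.GamepadY)
    {
        SearchBox.Focus(FocusState.Keyboard);
        e.Handled = true;
    }
    base.OnKeyDown(e);
}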

XY focus navigation and interaction


If your app supports proper focus navigation for keyboard, this will translate well to gamepad and remote control.
Navigation with the arrow keys is mapped to the D-pad (as well as the left stick on gamepad), and interaction
with UI elements is mapped to the Enter/Select key (see Gamepad and remote control).
Many events and properties are used by both keyboard and gamepad: they both fire KeyDown and KeyUp
events, and they both will only navigate to controls that have the properties IsTabStop="True" and Visibility="Visible" .
For keyboard design guidance, see Keyboard interactions.
If keyboard support is implemented properly, your app will work reasonably well; however, there may be some
extra work required to support every scenario. Think about your app's specific needs to provide the best user
experience possible.

IMPORTANT
Mouse mode is enabled by default for UWP apps running on Xbox One. To disable mouse mode and enable XY focus
navigation, set Application.RequiresPointerMode=WhenRequested .

Debugging focus issues


The FocusManager.GetFocusedElement method will tell you which element currently has focus. This is useful for
situations where the location of the focus visual may not be obvious. You can log this information to the Visual
Studio output window like so:

page.GotFocus += (object sender, RoutedEventArgs e) =>
{
    FrameworkElement focus = FocusManager.GetFocusedElement() as FrameworkElement;
    if (focus != null)
    {
        Debug.WriteLine("got focus: " + focus.Name + " (" +
            focus.GetType().ToString() + ")");
    }
};

There are three common reasons why XY navigation might not work the way you expect:
The IsTabStop or Visibility property is set incorrectly.
The control getting focus is actually bigger than you think: XY navigation looks at the total size of the control
(ActualWidth and ActualHeight), not just the portion of the control that renders something interesting.
One focusable control is on top of another: XY navigation doesn't support controls that overlap.
If XY navigation is still not working the way you expect after fixing these issues, you can manually point to the
element that you want to get focus using the method described in Overriding the default navigation.
If XY navigation is working as intended but no focus visual is displayed, one of the following issues may be the
cause:
You re-templated the control and didn't include a focus visual. Set UseSystemFocusVisuals="True" or add a focus
visual manually.
You moved the focus by calling Focus(FocusState.Pointer) . The FocusState parameter controls what happens to the
focus visual. Generally you should set this to FocusState.Programmatic , which keeps the focus visual visible if it was
visible before, and hidden if it was hidden before.
The rest of this section goes into detail about common design challenges when using XY navigation, and offers
several ways to solve them.
Inaccessible UI
Because XY focus navigation limits the user to moving up, down, left, and right, you may end up with scenarios
where parts of the UI are inaccessible. The following diagram illustrates an example of the kind of UI layout that XY
focus navigation doesn't support. Note that the element in the middle is not accessible by using gamepad/remote
because the vertical and horizontal navigation will be prioritized and the middle element will never be high
enough priority to get focus.

If for some reason rearranging the UI is not possible, use one of the techniques discussed in the next section to
override the default focus behavior.
Overriding the default navigation
While the Universal Windows Platform tries to ensure that D-pad/left stick navigation makes sense to the user, it
cannot guarantee behavior that is optimized for your app's intentions. The best way to ensure that navigation is
optimized for your app is to test it with a gamepad and confirm that every UI element can be accessed by the user
in a manner that makes sense for your app's scenarios. In case your app's scenarios call for a behavior not
achieved through the XY focus navigation provided, consider following the recommendations in the following
sections and/or overriding the behavior to place the focus on a logical item.
The following code snippet shows how you might override the XY focus navigation behavior:
<StackPanel>
<Button x:Name="MyBtnLeft"
Content="Search" />
<Button x:Name="MyBtnRight"
Content="Delete"/>
<Button x:Name="MyBtnTop"
Content="Update" />
<Button x:Name="MyBtnDown"
Content="Undo" />
<Button Content="Home"
XYFocusLeft="{x:Bind MyBtnLeft}"
XYFocusRight="{x:Bind MyBtnRight}"
XYFocusDown="{x:Bind MyBtnDown}"
XYFocusUp="{x:Bind MyBtnTop}" />
</StackPanel>

In this case, when focus is on the Home button and the user navigates to the left, focus will move to the MyBtnLeft
button; if the user navigates to the right, focus will move to the MyBtnRight button; and so on.
To prevent the focus from moving from a control in a certain direction, use the XYFocus* property to point it at
the same control:

<Button Name="HomeButton"
Content="Home"
XYFocusLeft ="{x:Bind HomeButton}" />

Using these XYFocus properties, a parent control can also force the navigation of its children when the next focus
candidate is outside its visual tree, unless the child that has focus sets the same XYFocus property itself.

<StackPanel Orientation="Horizontal" Margin="300,300">
<UserControl XYFocusRight="{x:Bind ButtonThree}">
<StackPanel>
<Button Content="One"/>
<Button Content="Two"/>
</StackPanel>
</UserControl>
<StackPanel>
<Button x:Name="ButtonThree" Content="Three"/>
<Button Content="Four"/>
</StackPanel>
</StackPanel>

In the sample above, if the focus is on Button Two and the user navigates to the right, the best focus candidate is
Button Four; however, the focus is moved to Button Three because the parent UserControl forces it to navigate
there when it is out of its visual tree.
Path of least clicks
Try to allow the user to perform the most common tasks in the smallest number of clicks. In the following example,
the TextBlock is placed between the Play button (which initially gets focus) and a commonly used element, so an
unnecessary element sits between priority tasks.
In the following example, the TextBlock is placed above the Play button instead. Simply rearranging the UI so that
unnecessary elements are not placed between priority tasks will greatly improve your app's usability.

CommandBar and ContextFlyout


When using a CommandBar, keep in mind the issue of scrolling through a list as mentioned in Problem: UI
elements located after long scrolling list/grid. The following image shows a UI layout with the CommandBar on the
bottom of a list/grid. The user would need to scroll all the way down through the list/grid to reach the CommandBar .

What if you put the CommandBar above the list/grid? While a user who scrolled down the list/grid would have to
scroll back up to reach the CommandBar , it is slightly less navigation than the previous configuration. Note that this
is assuming that your app's initial focus is placed next to or above the CommandBar ; this approach won't work as
well if the initial focus is below the list/grid. If these CommandBar items are global action items that don't have to be
accessed very often (such as a Sync button), it may be acceptable to have them above the list/grid.
While you can't stack a CommandBar 's items vertically, placing them against the scroll direction (for example, to the
left or right of a vertically scrolling list, or the top or bottom of a horizontally scrolling list) is another option you
may want to consider if it works well for your UI layout.
If your app has a CommandBar whose items need to be readily accessible by users, you may want to consider
placing these items inside a ContextFlyout and removing them from the CommandBar . ContextFlyout is a property of
UIElement and is the context menu associated with that element. On PC, when you right-click on an element with a
ContextFlyout , that context menu will pop up. On Xbox One, this will happen when you press the Menu button
while the focus is on such an element.
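For example, here is a minimal sketch of attaching a ContextFlyout to an element; the menu items and click
handlers are hypothetical:

<Grid x:Name="MovieTile">
    <Grid.ContextFlyout>
        <MenuFlyout>
            <MenuFlyoutItem Text="Add to favorites" Click="OnAddToFavorites"/>
            <MenuFlyoutItem Text="Remove from list" Click="OnRemoveFromList"/>
        </MenuFlyout>
    </Grid.ContextFlyout>
</Grid>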
UI layout challenges
Some UI layouts are more challenging due to the nature of XY focus navigation, and should be evaluated on a
case-by-case basis. While there is no single "right" way, and which solution you choose is up to your app's specific
needs, there are some techniques that you can employ to make a great TV experience.
To understand this better, let's look at an imaginary app that illustrates some of these issues and techniques to
overcome them.

NOTE
This fake app is meant to illustrate UI problems and potential solutions to them, and is not intended to show the best user
experience for your particular app.

The following is an imaginary real estate app which shows a list of houses available for sale, a map, a description
of a property, and other information. This app poses three challenges that you can overcome by using the
following techniques:
UI rearrange
Focus engagement
Mouse mode
Problem: UI elements located after long scrolling list/grid
The ListView of properties shown in the following image is a very long scrolling list. If engagement is not required
on the ListView , when the user navigates to the list, focus will be placed on the first item in the list. For the user to
reach the Previous or Next button, they must go through all the items in the list. In cases like this, where requiring
the user to traverse the entire list is painful (that is, when the list is not short enough for this experience to be
acceptable), you may want to consider other options.

Solutions
UI rearrange
Unless your initial focus is placed at the bottom of the page, UI elements placed above a long scrolling list are
typically more easily accessible than if placed below. If this new layout works for other devices, changing the
layout for all device families instead of doing special UI changes just for Xbox One might be a less costly approach.
Additionally, placing UI elements against the scrolling direction (that is, horizontally to a vertically scrolling list, or
vertically to a horizontally scrolling list) will make for even better accessibility.
Focus engagement
When engagement is required, the entire ListView becomes a single focus target. The user will be able to bypass
the contents of the list to get to the next focusable element. Read more about what controls support engagement
and how to use them in Focus engagement.

Problem: ScrollViewer without any focusable elements


Because XY focus navigation relies on navigating to one focusable UI element at a time, a ScrollViewer that doesn't
contain any focusable elements (such as one with only text, as in this example) may cause a scenario where the
user isn't able to view all of the content in the ScrollViewer . For solutions to this and other related scenarios, see
Focus engagement.

Problem: Free-scrolling UI
When your app requires a freely scrolling UI, such as a drawing surface or, in this example, a map, XY focus
navigation simply doesn't work. In such cases, you can turn on mouse mode to allow the user to navigate freely
inside a UI element.
Mouse mode
As described in XY focus navigation and interaction, on Xbox One the focus is moved by using an XY navigation
system, allowing the user to shift the focus from control to control by moving up, down, left, and right. However,
some controls, such as WebView and MapControl, require a mouse-like interaction where users can freely move
the pointer inside the boundaries of the control. There are also some apps where it makes sense for the user to be
able to move the pointer across the entire page, having an experience with gamepad/remote similar to what users
can find on a PC with a mouse.
For these scenarios, you should request a pointer (mouse mode) for the entire page, or on a control inside a page.
For example, your app could have a page that has a WebView control that uses mouse mode only while inside the
control, and XY focus navigation everywhere else. To request a pointer, you can specify whether you want it when
a control or page is engaged or when a page has focus.

NOTE
Requesting a pointer when a control gets focus is not supported.

For both XAML and hosted web apps running on Xbox One, mouse mode is turned on by default for the entire
app. It is highly recommended that you turn this off and optimize your app for XY navigation. To do this, set the
Application.RequiresPointerMode property to WhenRequested so that you only enable mouse mode when a control or
page calls for it.
To do this in a XAML app, use the following code in your App class:

public App()
{
    this.InitializeComponent();
    this.RequiresPointerMode =
        Windows.UI.Xaml.ApplicationRequiresPointerMode.WhenRequested;
    this.Suspending += OnSuspending;
}

For more information, including sample code for HTML/JavaScript, see How to disable mouse mode.
The following diagram shows the button mappings for gamepad/remote in mouse mode.

NOTE
Mouse mode is only supported on Xbox One with gamepad/remote. On other device families and input types it is silently
ignored.

Use the RequiresPointer property on a control or page to activate mouse mode on it. RequiresPointer has three
possible values: Never (the default value), WhenEngaged , and WhenFocused .

NOTE
RequiresPointer is a new API and not yet documented.

Activating mouse mode on a control


When the user engages a control with RequiresPointer="WhenEngaged" , mouse mode is activated on the control until
the user disengages it. The following code snippet demonstrates a simple MapControl that activates mouse mode
when engaged:

<Page>
<Grid>
<MapControl IsEngagementRequired="true"
RequiresPointer="WhenEngaged"/>
</Grid>
</Page>

NOTE
If a control activates mouse mode when engaged, it must also require engagement with IsEngagementRequired="true" ;
otherwise, mouse mode will never be activated.

When a control is in mouse mode, its nested controls will be in mouse mode as well. The requested mode of its
children will be ignored; it's impossible for a parent to be in mouse mode but a child not to be.
Additionally, the requested mode of a control is only inspected when it gets focus, so the mode won't change
dynamically while it has focus.
Activating mouse mode on a page
When a page has the property RequiresPointer="WhenFocused" , mouse mode will be activated for the whole page
when it gets focus. The following code snippet demonstrates giving a page this property:

<Page RequiresPointer="WhenFocused">
...
</Page>

NOTE
The WhenFocused value is only supported on Page objects. If you try to set this value on a control, an exception will be
thrown.

Disabling mouse mode for full screen content


Usually when displaying video or other types of content in full screen, you will want to hide the cursor because it
can distract the user. This scenario occurs when the rest of the app uses mouse mode, but you want to turn it off
when showing full screen content. To accomplish this, put the full screen content on its own Page , and follow the
steps below.
1. In the App object, set RequiresPointerMode="WhenRequested" .
2. In every Page object except for the full screen Page , set RequiresPointer="WhenFocused" .
3. For the full screen Page , set RequiresPointer="Never" .
This way, the cursor will never appear when showing full screen content.
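Putting these steps together, a minimal sketch of the full screen page might look like the following; the class
name and media element are placeholders:

<!-- Full screen playback page: the mouse-mode cursor is never shown here,
     while other pages request it with RequiresPointer="WhenFocused". -->
<Page x:Class="Sample.FullScreenPlayerPage"
      RequiresPointer="Never">
    <MediaElement x:Name="Player" AreTransportControlsEnabled="True"/>
</Page>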

Focus visual
The focus visual is the border around the UI element that currently has focus. This helps orient the user so that
they can easily navigate your UI without getting lost.
With a visual update and numerous customization options added to the focus visual, developers can trust that a
single focus visual will work well on PCs and Xbox One, as well as on any other Windows 10 device that supports
keyboard and/or gamepad/remote.
While the same focus visual can be used across different platforms, the context in which the user encounters it is
slightly different for the 10-foot experience. You should assume that the user is not paying full attention to the
entire TV screen, and therefore it is important that the currently focused element is clearly visible to the user at all
times to avoid the frustration of searching for the visual.
It is also important to keep in mind that the focus visual is displayed by default when using a gamepad or remote
control, but not a keyboard. Thus, even if you don't implement it, it will appear when you run your app on Xbox
One.
Initial focus visual placement
When launching an app or navigating to a page, place the focus on a UI element that makes sense as the first
element on which the user would take action. For example, a photo app may place focus on the first item in the
gallery, and a music app navigated to a detailed view of a song might place focus on the play button for ease of
playing music.
Try to put initial focus in the top left region of your app (or top right for a right-to-left flow). Most users tend to
focus on that corner first because that's where app content flow generally begins.
Making focus clearly visible
One focus visual should always be visible on the screen so that the user can pick up where they left off without
searching for the focus. Similarly, there should be a focusable item onscreen at all times; for example, don't use
pop-ups with only text and no focusable elements.
An exception to this rule would be for full-screen experiences, such as watching videos or viewing images, in which
cases it would not be appropriate to show the focus visual.
Customizing the focus visual
If you'd like to customize the focus visual, you can do so by modifying the properties related to the focus visual for
each control. There are several such properties that you can take advantage of to personalize your app.
You can even opt out of the system-provided focus visuals by drawing your own using visual states. To learn
more, see VisualState.
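For instance, assuming the focus visual customization properties exposed on FrameworkElement (such as
FocusVisualPrimaryBrush, FocusVisualPrimaryThickness, and FocusVisualMargin), a sketch of customizing a single
control might look like this:

<!-- Sketch: a thicker, white focus visual that sits slightly outside the button. -->
<Button Content="Play"
        FocusVisualPrimaryBrush="White"
        FocusVisualSecondaryBrush="Transparent"
        FocusVisualPrimaryThickness="2"
        FocusVisualMargin="-4"/>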
Light dismiss overlay
To call the user's attention to the UI elements that the user is currently manipulating with the game controller or
remote control, the UWP automatically adds a "smoke" layer that covers areas outside of the popup UI when the
app is running on Xbox One. This requires no extra work, but is something to keep in mind when designing your
UI. You can set the LightDismissOverlayMode property on any FlyoutBase to enable or disable the smoke layer; it
defaults to Auto , meaning that it is enabled on Xbox and disabled elsewhere. For more information, see Modal vs
light dismiss.
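For instance, to force the smoke layer on for a particular flyout regardless of device, a minimal sketch:

<Flyout LightDismissOverlayMode="On">
    <TextBlock Text="This flyout always dims the UI behind it."/>
</Flyout>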

Focus engagement
Focus engagement is intended to make it easier to use a gamepad or remote to interact with an app.

NOTE
Setting focus engagement does not impact keyboard or other input devices.

When the property IsFocusEngagementEnabled on a FrameworkElement object is set to True , it marks the control as
requiring focus engagement. This means that the user must press the A/Select button to "engage" the control and
interact with it. When they are finished, they can press the B/Back button to disengage the control and navigate
out of it.

NOTE
IsFocusEngagementEnabled is a new API and not yet documented.

Focus trapping
Focus trapping is what happens when a user attempts to navigate an app's UI but becomes "trapped" within a
control, making it difficult or even impossible to move outside of that control.
The following example shows UI that creates focus trapping.

If the user wants to navigate from the left button to the right button, it would be logical to assume that all they'd
have to do is press right on the D-pad/left stick twice. However, if the Slider doesn't require engagement, the
following behavior would occur: when the user presses right the first time, focus would shift to the Slider , and
when they press right again, the Slider 's handle would move to the right. The user would keep moving the handle
to the right and wouldn't be able to get to the button.
There are several approaches to getting around this issue. One is to design a different layout, similar to the real
estate app example in XY focus navigation and interaction where we relocated the Previous and Next buttons
above the ListView . Stacking the controls vertically instead of horizontally as in the following image would solve
the problem.

Now the user can navigate to each of the controls by pressing up and down on the D-pad/left stick, and when the
Slider has focus, they can press left and right to move the Slider handle, as expected.

Another approach to solving this problem is to require engagement on the Slider . If you set
IsFocusEngagementEnabled="True" , this will result in the following behavior.

When the Slider requires focus engagement, the user can get to the button on the right simply by pressing right
on the D-pad/left stick twice. This solution is great because it requires no UI adjustment and produces the expected
behavior.
Items controls
Aside from the Slider control, there are other controls which you may want to require engagement, such as:
ListBox
ListView
GridView
FlipView
Unlike the Slider control, these controls don't trap focus within themselves; however, they can cause usability
issues when they contain large amounts of data. The following is an example of a ListView that contains a large
amount of data.
Similar to the Slider example, let's try to navigate from the button at the top to the button at the bottom with a
gamepad/remote. Starting with focus on the top button, pressing down on the D-pad/stick will place the focus on
the first item in the ListView ("Item 1"). When the user presses down again, the next item in the list gets focus, not
the button on the bottom. To get to the button, the user must navigate through every item in the ListView first. If
the ListView contains a large amount of data, this could be inconvenient and not an optimal user experience.
To solve this problem, set the property IsFocusEngagementEnabled="True" on the ListView to require engagement on it.
This will allow the user to quickly skip over the ListView by simply pressing down. However, they will not be able
to scroll through the list or choose an item from it unless they engage it by pressing the A/Select button when it
has focus, and then pressing the B/Back button to disengage.
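A minimal sketch of that setting (the items source name is hypothetical):

<ListView IsFocusEngagementEnabled="True"
          ItemsSource="{x:Bind Properties}"/>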

ScrollViewer
Slightly different from these controls is the ScrollViewer, which has its own quirks to consider. If you have a
ScrollViewer with focusable content, by default navigating to the ScrollViewer will allow you to move through its
focusable elements. Like in a ListView , you must scroll through each item to navigate outside of the ScrollViewer .
If the ScrollViewer has no focusable content (for example, if it only contains text) you can set
IsFocusEngagementEnabled="True" so the user can engage the ScrollViewer by using the A/Select button. After they
have engaged, they can scroll through the text by using the D-pad/left stick, and then press the B/Back button to
disengage when they're finished.
Another approach would be to set IsTabStop="True" on the ScrollViewer so that the user doesn't have to engage the
control; they can simply place focus on it and then scroll by using the D-pad/left stick when there are no
focusable elements within the ScrollViewer .
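The following sketch shows both options for a text-only ScrollViewer; the text content is a placeholder:

<!-- Option 1: require engagement so that A/Select enters the scroller. -->
<ScrollViewer IsFocusEngagementEnabled="True">
    <TextBlock TextWrapping="Wrap" Text="Long description text..."/>
</ScrollViewer>

<!-- Option 2: make the scroller itself a tab stop; no engagement needed. -->
<ScrollViewer IsTabStop="True">
    <TextBlock TextWrapping="Wrap" Text="Long description text..."/>
</ScrollViewer>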
Focus engagement defaults
Some controls cause focus trapping commonly enough to warrant their default settings to require focus
engagement, while others have focus engagement turned off by default but can benefit from turning it on. The
following table lists these controls and their default focus engagement behaviors.

CONTROL FOCUS ENGAGEMENT DEFAULT

CalendarDatePicker On

FlipView Off

GridView Off

ListBox Off

ListView Off

ScrollViewer Off

SemanticZoom Off

Slider On

All other UWP controls will result in no behavioral or visual changes when IsFocusEngagementEnabled="True" .

UI element sizing
Because the user of an app in the 10-foot environment is using a remote control or gamepad and is sitting several
feet away from the screen, there are some UI considerations that need to be factored into your design. Make sure
that the UI has an appropriate content density and is not too cluttered so that the user can easily navigate and
select elements. Remember: simplicity is key.
Scale factor and adaptive layout
Scale factor helps with ensuring that UI elements are displayed with the right sizing for the device on which the
app is running. On desktop, this setting can be found in Settings > System > Display as a sliding value. This
same setting exists on phone as well if the device supports it.

On Xbox One, there is no such system setting; however, for UWP UI elements to be sized appropriately for TV, they
are scaled at a default of 200% for XAML apps and 150% for HTML apps. As long as UI elements are appropriately
sized for other devices, they will be appropriately sized for TV. Xbox One renders your app at 1080p (1920 x 1080
pixels). Therefore, when bringing an app from other devices such as PC, ensure that the UI looks great at 960 x 540
px at 100% scale (or 1280 x 720 px at 100% scale for HTML apps) utilizing adaptive techniques.
Designing for Xbox is a little different from designing for PC because you only need to worry about one resolution,
1920 x 1080. It doesn't matter if the user has a TV with a higher resolution; UWP apps always scale to 1080p.
Correct asset sizes from the 200% (or 150% for HTML apps) set will also be pulled in for your app when running
on Xbox One, regardless of TV resolution.
Content density
When designing your app, remember that the user will be viewing the UI from a distance and interacting with it by
using a remote or game controller, which takes more time to navigate than using mouse or touch input.
Sizes of UI controls
Interactive UI elements should be sized at a minimum height of 32 epx (effective pixels). This is the default for
common UWP controls, and when used at 200% scale, it ensures that UI elements are visible from a distance and
helps reduce content density.

Number of clicks
When the user is navigating from one edge of the TV screen to the other, it should take no more than six clicks.
Again, the principle of simplicity applies here. For more details, see Path of least clicks.

Text sizes
To make your UI visible from a distance, use the following rules of thumb:
Main text and reading content: 15 epx minimum
Non-critical text and supplemental content: 12 epx minimum
When using larger text in your UI, pick a size that does not limit screen real estate too much, taking up space that
other content could potentially fill.
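In XAML, FontSize is expressed in effective pixels, so these minimums map directly onto markup, as in this small
sketch:

<TextBlock Text="Main reading content" FontSize="15"/>
<TextBlock Text="Supplemental caption" FontSize="12"/>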
Opting out of scale factor
We recommend that your app take advantage of scale factor support, which will help it run appropriately on all
devices by scaling for each device type. However, it is possible to opt out of this behavior and design all of your UI
at 100% scale. Note that you cannot change the scale factor to anything other than 100%.
For XAML apps, you can opt out of scale factor by using the following code snippet:

bool result =
Windows.UI.ViewManagement.ApplicationViewScaling.TrySetDisableLayoutScaling(true);

result will inform you whether you successfully opted out.


For more information, including sample code for HTML/JavaScript, see How to turn off scaling.
If you opt out of scaling, be sure to calculate the appropriate sizes of UI elements by doubling the effective pixel
values mentioned in this topic to get actual pixel values (or multiplying by 1.5 for HTML apps).
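For example, the 15 epx minimum for main text becomes 30 px in a XAML app that has opted out of scaling, and the
32 epx minimum height for interactive elements becomes 64 px.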

TV-safe area
Not all TVs display content all the way to the edges of the screen due to historical and technological reasons. By
default, the UWP will avoid displaying any UI content in TV-unsafe areas and instead will only draw the page
background.
The TV-unsafe area is represented by the blue area in the following image.

You can set the background to a static or themed color, or to an image, as the following code snippets
demonstrate.
Theme color

<Page x:Class="Sample.MainPage"
Background="{ThemeResource ApplicationPageBackgroundThemeBrush}"/>

Image
<Page x:Class="Sample.MainPage"
Background="\Assets\Background.png"/>

This is what your app will look like without additional work.

This is not optimal because it gives the app a "boxed-in" effect, with parts of the UI such as the nav pane and grid
seemingly cut off. However, you can make optimizations to extend parts of the UI to the edges of the screen to
give the app a more cinematic effect.
Drawing UI to the edge
We recommend extending certain UI elements to the edges of the screen to provide more immersion to the user.
These include ScrollViewers, nav panes, and CommandBars.
On the other hand, it's also important that interactive elements and text always avoid the screen edges to ensure
that they won't be cut off on some TVs. We recommend that you draw only non-essential visuals within 5% of the
screen edges. As mentioned in UI element sizing, a UWP app following the Xbox One console's default scale factor
of 200% will utilize an area of 960 x 540 epx, so in your app's UI, you should avoid putting essential UI in the
following areas:
27 epx from the top and bottom
48 epx from the left and right sides
The following sections describe how to make your UI extend to the screen edges.
Core window bounds
For UWP apps targeting only the 10-foot experience, using core window bounds is a more straightforward option.
In the OnLaunched method of App.xaml.cs , add the following code:

Windows.UI.ViewManagement.ApplicationView.GetForCurrentView().SetDesiredBoundsMode
(Windows.UI.ViewManagement.ApplicationViewBoundsMode.UseCoreWindow);

With this line of code, the app window will extend to the edges of the screen, so you will need to move all
interactive and essential UI into the TV-safe area described earlier. Transient UI, such as context menus and opened
ComboBoxes, will automatically remain inside the TV-safe area.
Pane backgrounds
Navigation panes are typically drawn near the edge of the screen, so the background should extend into the TV-
unsafe area so as not to introduce awkward gaps. You can do this by simply changing the color of the nav pane's
background to the color of the app's background.
Using the core window bounds as previously described will allow you to draw your UI to the edges of the screen,
but you should then use positive margins on the SplitView's content to keep it within the TV-safe area.

Here, the nav pane's background has been extended to the edges of the screen, while its navigation items are kept
in the TV-safe area. The content of the SplitView (in this case, a grid of items) has been extended to the bottom of
the screen so that it looks like it continues and isn't cut off, while the top of the grid is still within the TV-safe area.
(Learn more about how to do this in Scrolling ends of lists and grids).
The following code snippet achieves this effect:
<SplitView x:Name="RootSplitView"
Margin="48,0,48,0">
<SplitView.Pane>
<ListView x:Name="NavMenuList"
ContainerContentChanging="NavMenuItemContainerContentChanging"
ItemContainerStyle="{StaticResource NavMenuItemContainerStyle}"
ItemTemplate="{StaticResource NavMenuItemTemplate}"
ItemInvoked="NavMenuList_ItemInvoked"
ItemsSource="{Binding NavMenuListItems}"/>
</SplitView.Pane>
<Frame x:Name="frame"
Navigating="OnNavigatingToPage"
Navigated="OnNavigatedToPage"/>
</SplitView>

CommandBar is another example of a pane that is commonly positioned near one or more edges of the app, and
as such on TV its background should extend to the edges of the screen. It also usually contains a More button,
represented by "..." on the right side, which should remain in the TV-safe area. The following are a few different
strategies to achieve the desired interactions and visual effects.
Option 1: Change the CommandBar background color to either transparent or the same color as the page
background:

<CommandBar x:Name="topbar"
Background="{ThemeResource SystemControlBackgroundAltHighBrush}">
...
</CommandBar>

Doing this will make the CommandBar look like it is on top of the same background as the rest of the page, so the
background seamlessly flows to the edge of the screen.
Option 2: Add a background rectangle whose fill is the same color as the CommandBar background, and have it lie
below the CommandBar and across the rest of the page:

<Rectangle VerticalAlignment="Top"
HorizontalAlignment="Stretch"
Fill="{ThemeResource SystemControlBackgroundChromeMediumBrush}"/>
<CommandBar x:Name="topbar"
VerticalAlignment="Top"
HorizontalContentAlignment="Stretch">
...
</CommandBar>

NOTE
If using this approach, be aware that the More button changes the height of the opened CommandBar if necessary, in order
to show the labels of the AppBarButton s below their icons. We recommend that you move the labels to the right of their
icons to avoid this resizing. For more information, see CommandBar labels.

Both of these approaches also apply to the other types of controls listed in this section.
Scrolling ends of lists and grids
It's common for lists and grids to contain more items than can fit onscreen at the same time. When this is the case,
we recommend that you extend the list or grid to the edge of the screen. Horizontally scrolling lists and grids
should extend to the right edge, and vertically scrolling ones should extend to the bottom.
While a list or grid is extended like this, it's important to keep the focus visual and its associated item inside the
TV-safe area.

The UWP has functionality that will keep the focus visual inside the VisibleBounds, but you need to add padding to
ensure that the list/grid items can scroll into view of the safe area. Specifically, you add a positive margin to the
ListView or GridView's ItemsPresenter, as in the following code snippet:
<Style x:Key="TitleSafeListViewStyle"
TargetType="ListView">
<Setter Property="Template">
<Setter.Value>
<ControlTemplate TargetType="ListView">
<Border BorderBrush="{TemplateBinding BorderBrush}"
Background="{TemplateBinding Background}"
BorderThickness="{TemplateBinding BorderThickness}">
<ScrollViewer x:Name="ScrollViewer"
TabNavigation="{TemplateBinding TabNavigation}"
HorizontalScrollMode="{TemplateBinding ScrollViewer.HorizontalScrollMode}"
HorizontalScrollBarVisibility="{TemplateBinding ScrollViewer.HorizontalScrollBarVisibility}"
IsHorizontalScrollChainingEnabled="{TemplateBinding ScrollViewer.IsHorizontalScrollChainingEnabled}"
VerticalScrollMode="{TemplateBinding ScrollViewer.VerticalScrollMode}"
VerticalScrollBarVisibility="{TemplateBinding ScrollViewer.VerticalScrollBarVisibility}"
IsVerticalScrollChainingEnabled="{TemplateBinding ScrollViewer.IsVerticalScrollChainingEnabled}"
IsHorizontalRailEnabled="{TemplateBinding ScrollViewer.IsHorizontalRailEnabled}"
IsVerticalRailEnabled="{TemplateBinding ScrollViewer.IsVerticalRailEnabled}"
ZoomMode="{TemplateBinding ScrollViewer.ZoomMode}"
IsDeferredScrollingEnabled="{TemplateBinding ScrollViewer.IsDeferredScrollingEnabled}"
BringIntoViewOnFocusChange="{TemplateBinding ScrollViewer.BringIntoViewOnFocusChange}"
AutomationProperties.AccessibilityView="Raw">
<ItemsPresenter Header="{TemplateBinding Header}"
HeaderTemplate="{TemplateBinding HeaderTemplate}"
HeaderTransitions="{TemplateBinding HeaderTransitions}"
Footer="{TemplateBinding Footer}"
FooterTemplate="{TemplateBinding FooterTemplate}"
FooterTransitions="{TemplateBinding FooterTransitions}"
Padding="{TemplateBinding Padding}"
Margin="0,27,0,27"/>
</ScrollViewer>
</Border>
</ControlTemplate>
</Setter.Value>
</Setter>
</Style>

You would put the previous code snippet in either the page or app resources, and then access it in the following
way:

<Page>
<Grid>
<ListView Style="{StaticResource TitleSafeListViewStyle}"
... />

NOTE
This code snippet is specifically for ListView s; for a GridView style, set the TargetType attribute for both the
ControlTemplate and the Style to GridView .

Colors
By default, the Universal Windows Platform doesn't do anything to alter your app's colors. That said, there are
improvements that you can make to the set of colors your app uses to improve the visual experience on TV.
Application theme
You can choose an Application theme (dark or light) according to what is right for your app, or you can opt out
of theming. Read more about general recommendations for themes in Color themes.
The UWP also allows apps to dynamically set the theme based on the system settings provided by the devices on
which they run. While the UWP always respects the theme settings specified by the user, each device also provides
an appropriate default theme. Because of the nature of Xbox One, which is expected to have more media
experiences than productivity experiences, it defaults to a dark system theme. If your app's theme is based on the
system settings, expect it to default to dark on Xbox One.
Accent color
The UWP provides a convenient way to expose the accent color that the user has selected from their system
settings.
On Xbox One, the user is able to select a user color, just as they can select an accent color on a PC. As long as your
app calls these accent colors through brushes or color resources, the color that the user selected in the system
settings will be used. Note that accent colors on Xbox One are per user, not per system.
Please also note that the set of user colors on Xbox One is not the same as that on PCs, phones, and other devices.
This is partly due to the fact that these colors are hand-picked to make for the best 10-foot experience on Xbox
One, following the same methodologies and strategies explained in this article.
As long as your app uses a brush resource such as SystemControlForegroundAccentBrush, or a color resource
(SystemAccentColor), or instead calls accent colors directly through the UIColorType.Accent* API, those colors
are replaced with accent colors appropriate for TV. High contrast brush colors are also pulled in from the system
the same way as on a PC and phone, but with TV-appropriate colors.
To learn more about accent color in general, see Accent color.
Color variance among TVs
When designing for TV, note that colors display quite differently depending on the TV on which they are rendered.
Don't assume colors will look exactly as they do on your monitor. If your app relies on subtle differences in color
to differentiate parts of the UI, colors could blend together and users could get confused. Try to use colors that are
different enough that users will be able to clearly differentiate them, regardless of the TV they are using.
TV-safe colors
A color's RGB values represent intensities for red, green, and blue. TVs don't handle extreme intensities very well;
therefore, you should avoid using these colors when designing for the 10-foot experience. They can produce an
odd banded effect, or appear washed out on certain TVs. Additionally, high-intensity colors may cause blooming
(nearby pixels start drawing the same colors).
While there are different schools of thought about what colors are considered TV-safe, colors within the RGB range
of 16-235 (or 10-EB in hexadecimal) are generally safe to use for TV.
Fixing TV-unsafe colors
Fixing TV-unsafe colors individually by adjusting their RGB values to be within the TV-safe range is typically
referred to as color clamping. This method may be appropriate for an app that doesn't use a rich color palette.
However, fixing colors using only this method may cause colors to collide with each other, which doesn't provide
for the best 10-foot experience.
To optimize your color palette for TV, we recommend that you first ensure that your colors are TV-safe through a
method such as color clamping, then use a method called scaling.
This involves scaling all of your colors' RGB values by a certain factor to get them within the TV-safe range. Scaling
all of your app's colors helps prevent color collision and makes for a better 10-foot experience.
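As a rough illustration of the difference between the two methods (the helper names are hypothetical, not a
platform API), per-channel clamping versus scaling might look like this:

// Sketch: two ways to bring an 8-bit color channel into the TV-safe
// 16-235 range. Clamping only touches out-of-range values; scaling
// compresses the whole range, preserving relative contrast between colors.
byte ClampTvSafe(byte channel) =>
    (byte)Math.Min(235, Math.Max(16, (int)channel));

byte ScaleTvSafe(byte channel) =>
    (byte)(16 + (channel * (235 - 16)) / 255);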

Assets
When making changes to colors, make sure to also update assets accordingly. If your app uses a color in XAML
that is supposed to look the same as an asset color, but you only update the XAML code, your assets will look off-
color.
UWP color sample
UWP color themes are designed around the app's background being either black for the dark theme or white for
the light theme. Because neither black nor white are TV-safe, these colors needed to be fixed by using clamping.
After they were fixed, all the other colors needed to be adjusted through scaling to retain the necessary contrast.
The following sample code provides a color theme that has been optimized for TV use:
<Application.Resources>
<ResourceDictionary>
<ResourceDictionary.ThemeDictionaries>
<ResourceDictionary x:Key="Default">
<SolidColorBrush x:Key="ApplicationPageBackgroundThemeBrush"
Color="#FF101010"/>
<Color x:Key="SystemAltHighColor">#FF101010</Color>
<Color x:Key="SystemAltLowColor">#33101010</Color>
<Color x:Key="SystemAltMediumColor">#99101010</Color>
<Color x:Key="SystemAltMediumHighColor">#CC101010</Color>
<Color x:Key="SystemAltMediumLowColor">#66101010</Color>
<Color x:Key="SystemBaseHighColor">#FFEBEBEB</Color>
<Color x:Key="SystemBaseLowColor">#33EBEBEB</Color>
<Color x:Key="SystemBaseMediumColor">#99EBEBEB</Color>
<Color x:Key="SystemBaseMediumHighColor">#CCEBEBEB</Color>
<Color x:Key="SystemBaseMediumLowColor">#66EBEBEB</Color>
<Color x:Key="SystemChromeAltLowColor">#FFDDDDDD</Color>
<Color x:Key="SystemChromeBlackHighColor">#FF101010</Color>
<Color x:Key="SystemChromeBlackLowColor">#33101010</Color>
<Color x:Key="SystemChromeBlackMediumLowColor">#66101010</Color>
<Color x:Key="SystemChromeBlackMediumColor">#CC101010</Color>
<Color x:Key="SystemChromeDisabledHighColor">#FF333333</Color>
<Color x:Key="SystemChromeDisabledLowColor">#FF858585</Color>
<Color x:Key="SystemChromeHighColor">#FF767676</Color>
<Color x:Key="SystemChromeLowColor">#FF1F1F1F</Color>
<Color x:Key="SystemChromeMediumColor">#FF262626</Color>
<Color x:Key="SystemChromeMediumLowColor">#FF2B2B2B</Color>
<Color x:Key="SystemChromeWhiteColor">#FFEBEBEB</Color>
<Color x:Key="SystemListLowColor">#19EBEBEB</Color>
<Color x:Key="SystemListMediumColor">#33EBEBEB</Color>
</ResourceDictionary>
<ResourceDictionary x:Key="Light">
<SolidColorBrush x:Key="ApplicationPageBackgroundThemeBrush"
Color="#FFEBEBEB" />
<Color x:Key="SystemAltHighColor">#FFEBEBEB</Color>
<Color x:Key="SystemAltLowColor">#33EBEBEB</Color>
<Color x:Key="SystemAltMediumColor">#99EBEBEB</Color>
<Color x:Key="SystemAltMediumHighColor">#CCEBEBEB</Color>
<Color x:Key="SystemAltMediumLowColor">#66EBEBEB</Color>
<Color x:Key="SystemBaseHighColor">#FF101010</Color>
<Color x:Key="SystemBaseLowColor">#33101010</Color>
<Color x:Key="SystemBaseMediumColor">#99101010</Color>
<Color x:Key="SystemBaseMediumHighColor">#CC101010</Color>
<Color x:Key="SystemBaseMediumLowColor">#66101010</Color>
<Color x:Key="SystemChromeAltLowColor">#FF1F1F1F</Color>
<Color x:Key="SystemChromeBlackHighColor">#FF101010</Color>
<Color x:Key="SystemChromeBlackLowColor">#33101010</Color>
<Color x:Key="SystemChromeBlackMediumLowColor">#66101010</Color>
<Color x:Key="SystemChromeBlackMediumColor">#CC101010</Color>
<Color x:Key="SystemChromeDisabledHighColor">#FFCCCCCC</Color>
<Color x:Key="SystemChromeDisabledLowColor">#FF7A7A7A</Color>
<Color x:Key="SystemChromeHighColor">#FFB2B2B2</Color>
<Color x:Key="SystemChromeLowColor">#FFDDDDDD</Color>
<Color x:Key="SystemChromeMediumColor">#FFCCCCCC</Color>
<Color x:Key="SystemChromeMediumLowColor">#FFDDDDDD</Color>
<Color x:Key="SystemChromeWhiteColor">#FFEBEBEB</Color>
<Color x:Key="SystemListLowColor">#19101010</Color>
<Color x:Key="SystemListMediumColor">#33101010</Color>
</ResourceDictionary>
</ResourceDictionary.ThemeDictionaries>
</ResourceDictionary>
</Application.Resources>
NOTE
The light theme SystemChromeLowColor and SystemChromeMediumLowColor are the same color on purpose and
not as a result of clamping.

NOTE
Hexadecimal colors are specified in ARGB (Alpha Red Green Blue).

We don't recommend using TV-safe colors on a monitor that can display the full color range, because the colors
will look washed out. Instead, load the resource dictionary shown in the previous sample only when your app is
running on Xbox. In the OnLaunched method of App.xaml.cs , add the following check:

if (IsTenFoot)
{
    this.Resources.MergedDictionaries.Add(new ResourceDictionary
    {
        Source = new Uri("ms-appx:///TenFootStylesheet.xaml")
    });
}

NOTE
The IsTenFoot variable is defined in Custom visual state trigger for Xbox.

This ensures that the correct colors display on whichever device the app is running on, providing the user with a better, more aesthetically pleasing experience.

Guidelines for UI controls


There are several UI controls that work well across multiple devices, but have certain considerations when used on
TV. Read about some best practices for using these controls when designing for the 10-foot experience.
Pivot control
A Pivot provides quick navigation of views within an app through selecting different headers or tabs. The control
underlines whichever header has focus, making it more obvious which header is currently selected when using
gamepad/remote.

You can set the Pivot.IsHeaderItemsCarouselEnabled property to false so that pivot headers always keep the same
position, rather than having the selected header carousel to the first position. This is a better experience
for large-screen displays such as TV, because header wrapping can be distracting to users. If all of the pivot
headers don't fit onscreen at once, there will be a scrollbar to let customers see the other headers; however, you
should make sure that they all fit on the screen to provide the best experience. For more information, see Tabs and
pivots.
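
As a minimal sketch (assuming a Pivot named rootPivot declared in XAML, and Windows 10, version 1607 or later), you can disable header carouseling in code-behind:

// Sketch: keep pivot headers in fixed positions for the 10-foot experience.
// Assumes a Pivot control named "rootPivot" declared in XAML.
rootPivot.IsHeaderItemsCarouselEnabled = false;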
Navigation pane
A navigation pane (also known as a hamburger menu) is a navigation control commonly used in UWP apps.
Typically it is a pane with several options to choose from in a list-style menu that will take the user to different
pages. Generally this pane starts out collapsed to save space, and the user can open it by clicking on a button.
While nav panes are very accessible with mouse and touch, gamepad/remote makes them less accessible since the
user has to navigate to a button to open the pane. Therefore, a good practice is to have the View button open the
nav pane, as well as allow the user to open it by navigating all the way to the left of the page. This will provide the
user with very easy access to the contents of the pane. For more information about how nav panes behave in
different screen sizes as well as best practices for gamepad/remote navigation, see Nav panes.
CommandBar labels
It is a good idea to have the labels placed to the right of the icons on a CommandBar so that its height is
minimized and stays consistent. You can do this by setting the CommandBar.DefaultLabelPosition property to
CommandBarDefaultLabelPosition.Right .

Setting this property will also cause the labels to always be displayed, which works well for the 10-foot experience
because it minimizes the number of clicks for the user. This is also a great model for other device types to follow.
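
As a minimal sketch (assuming a CommandBar named commandBar declared in XAML), the same setting in code-behind looks like this:

// Sketch: show CommandBar labels to the right of their icons.
// Requires: using Windows.UI.Xaml.Controls;
commandBar.DefaultLabelPosition = CommandBarDefaultLabelPosition.Right;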
Tooltip
The Tooltip control was introduced as a way to provide more information in the UI when the user hovers the
mouse over, or taps and holds their finger on, an element. For gamepad and remote, Tooltip appears after a brief
moment when the element gets focus, stays onscreen for a short time, and then disappears. This behavior could be
distracting if too many Tooltip s are used. Try to avoid using Tooltip when designing for TV.
Button styles
While the standard UWP buttons work well on TV, some visual styles of buttons call attention to the UI better,
which you may want to consider for all platforms, particularly in the 10-foot experience, which benefits from
clearly communicating where the focus is located. To read more about these styles, see Buttons.
Nested UI elements
Nested UI exposes nested actionable items enclosed inside a container UI element where both the nested item as
well as the container item can take independent focus from each other.
Nested UI works well for some input types, but not always for gamepad and remote, which rely on XY navigation.
Be sure to follow the guidance in this topic to ensure that your UI is optimized for the 10-foot environment, and
that the user can access all interactable elements easily. One common solution is to place nested UI elements in a
ContextFlyout (see CommandBar and ContextFlyout).

For more information on nested UI, see Nested UI in list items.


MediaTransportControls
The MediaTransportControls element lets users interact with their media by providing a default playback
experience that allows them to play, pause, turn on closed captions, and more. This control is a property of
MediaPlayerElement and supports two layout options: single-row and double-row. In the single-row layout, the
slider and playback buttons are all located in one row, with the play/pause button located to the left of the slider. In
the double-row layout, the slider occupies its own row, with the playback buttons on a separate lower row. When
designing for the 10-foot experience, the double-row layout should be used, as it provides better navigation for
gamepad. To enable the double-row layout, set IsCompact="False" on the MediaTransportControls element in the
TransportControls property of the MediaPlayerElement .
<MediaPlayerElement x:Name="mediaPlayerElement1"
                    Source="Assets/video.mp4"
                    AreTransportControlsEnabled="True">
    <MediaPlayerElement.TransportControls>
        <MediaTransportControls IsCompact="False"/>
    </MediaPlayerElement.TransportControls>
</MediaPlayerElement>

Visit Media playback to learn more about adding media to your app.

NOTE
MediaPlayerElement is only available in Windows 10, version 1607 and later. If you're developing an app
for an earlier version of Windows 10, you'll need to use MediaElement instead. The recommendations above
apply to MediaElement as well, and the TransportControls property is accessed in the same way.

Search experience
Searching for content is one of the most commonly performed functions in the 10-foot experience. If your app
provides a search experience, it is helpful for the user to have quick access to it by using the Y button on the
gamepad as an accelerator.
Most customers should already be familiar with this accelerator, but if you like you can add a visual Y glyph to the
UI to indicate that the customer can use the button to access search functionality. If you do add this cue, be sure to
use the symbol from the Segoe Xbox MDL2 Symbol font (&#xE3CC; for XAML apps, \E3CC for HTML apps) to
provide consistency with the Xbox shell and other apps.

NOTE
Because the Segoe Xbox MDL2 Symbol font is only available on Xbox, the symbol won't appear correctly on your PC.
However, it will show up on the TV once you deploy to Xbox.

Since the Y button is only available on gamepad, make sure to provide other methods of access to search, such as
buttons in the UI. Otherwise, some customers may not be able to access the functionality.
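
As a minimal sketch of wiring up the accelerator (ShowSearch is a hypothetical helper that opens your app's search UI, not an API from this article):

// Sketch: treat the gamepad Y button as a search accelerator on a XAML Page.
// Requires: using Windows.System; and using Windows.UI.Xaml.Input;
protected override void OnKeyDown(KeyRoutedEventArgs e)
{
    if (e.Key == VirtualKey.GamepadY)
    {
        e.Handled = true;
        ShowSearch(); // hypothetical: open the app's search experience
    }
    else
    {
        base.OnKeyDown(e);
    }
}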
In the 10-foot experience, it is often easier for customers to use a full screen search experience because there is
limited room on the display. Whether you have full screen or partial-screen, "in-place" search, we recommend that
when the user opens the search experience, the onscreen keyboard appears already opened, ready for the
customer to enter search terms.

Custom visual state trigger for Xbox


To tailor your UWP app for the 10-foot experience, we recommend that you make layout changes when the app
detects that it has been launched on an Xbox console. One way to do this is by using a custom visual state trigger.
Visual state triggers are most useful when you want to edit in Blend for Visual Studio. The following code
snippet shows how to create a visual state trigger for Xbox:
<VisualStateManager.VisualStateGroups>
<VisualStateGroup>
<VisualState>
<VisualState.StateTriggers>
<triggers:DeviceFamilyTrigger DeviceFamily="Windows.Xbox"/>
</VisualState.StateTriggers>
<VisualState.Setters>
<Setter Target="RootSplitView.OpenPaneLength"
Value="368"/>
<Setter Target="RootSplitView.CompactPaneLength"
Value="96"/>
<Setter Target="NavMenuList.Margin"
Value="0,75,0,27"/>
<Setter Target="Frame.Margin"
Value="0,27,48,27"/>
<Setter Target="NavMenuList.ItemContainerStyle"
Value="{StaticResource NavMenuItemContainerXboxStyle}"/>
</VisualState.Setters>
</VisualState>
</VisualStateGroup>
</VisualStateManager.VisualStateGroups>

To create the trigger, add the following class to your app. This is the class that is referenced by the XAML code
above:

// Requires: using Windows.System.Profile;
class DeviceFamilyTrigger : StateTriggerBase
{
    private string _currentDeviceFamily, _queriedDeviceFamily;

    // The device family to match, for example "Windows.Xbox".
    public string DeviceFamily
    {
        get
        {
            return _queriedDeviceFamily;
        }
        set
        {
            _queriedDeviceFamily = value;
            _currentDeviceFamily = AnalyticsInfo.VersionInfo.DeviceFamily;
            SetActive(_queriedDeviceFamily == _currentDeviceFamily);
        }
    }
}

After you've added your custom trigger, your app will automatically make the layout modifications you specified
in your XAML code whenever it detects that it is running on an Xbox One console.
Another way you can check whether your app is running on Xbox and then make the appropriate adjustments is
through code. You can use the following simple variable to check if your app is running on Xbox:

bool IsTenFoot = (Windows.System.Profile.AnalyticsInfo.VersionInfo.DeviceFamily == "Windows.Xbox");

Then, you can make the appropriate adjustments to your UI in the code block following this check. An example of
this is shown in UWP color sample.
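
As a brief sketch (the element names RootSplitView and SearchButton are assumptions for illustration, not from this article):

if (IsTenFoot)
{
    // Widen the nav pane and surface a visible search button for TV.
    RootSplitView.OpenPaneLength = 368;
    SearchButton.Visibility = Visibility.Visible;
}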

Summary
Designing for the 10-foot experience has special considerations to take into account that make it different from
designing for any other platform. While you can certainly do a straight port of your UWP app to Xbox One and it
will work, it won't necessarily be optimized for the 10-foot experience and can lead to user frustration. Following
the guidelines in this article will make sure that your app is as good as it can be on TV.

Related articles
Device primer for Universal Windows Platform (UWP) apps
Gamepad and remote control interactions
Sound in UWP apps
Usability for UWP apps

It's the little touches, the extra attention to detail, that can transform a good user experience into a truly inclusive
user experience that meets the needs of users around the globe.
The design and coding instructions in this section can make your UWP app more inclusive by adding accessibility
features, enabling globalization and localization, enabling users to customize their experience, and providing help
when users need it.

Accessibility
Accessibility is about making your app usable by people who have limitations that prevent or impede the use of
conventional user interfaces. For some situations, accessibility requirements are imposed by law. However, it's a
good idea to address accessibility issues regardless of legal requirements so that your apps have the largest
possible audience.

Accessibility overview
This article is an overview of the concepts and technologies related to accessibility scenarios for UWP apps.

Designing inclusive software


Learn about evolving inclusive design with Universal Windows Platform (UWP) apps for Windows 10. Design
and build inclusive software with accessibility in mind.

Developing inclusive Windows apps


This article is a roadmap for developing accessible UWP apps.

Accessibility testing
Testing procedures to follow to ensure that your UWP app is accessible.

Accessibility in the Store


Describes the requirements for declaring your UWP app as accessible in the Windows Store.

Accessibility checklist
Provides a checklist to help you ensure that your UWP app is accessible.

Expose basic accessibility information


Basic accessibility info is often categorized into name, role, and value. This topic describes code to help your app
expose the basic information that assistive technologies need.

Keyboard accessibility
If your app does not provide good keyboard access, users who are blind or have mobility issues can have
difficulty using your app or may not be able to use it at all.
High-contrast themes
Describes the steps needed to ensure your UWP app is usable when a high-contrast theme is active.

Accessible text requirements


This topic describes best practices for accessibility of text in an app, by assuring that colors and backgrounds
satisfy the necessary contrast ratio. This topic also discusses the Microsoft UI Automation roles that text
elements in a UWP app can have, and best practices for text in graphics.

Accessibility practices to avoid


Lists the practices to avoid if you want to create an accessible UWP app.

Custom automation peers


Describes the concept of automation peers for UI Automation, and how you can provide automation support
for your own custom UI class.

Control patterns and interfaces


Lists the Microsoft UI Automation control patterns, the classes that clients use to access them, and the interfaces
providers use to implement them.

Globalization and localization


Windows is used worldwide, by audiences that vary in culture, region, and language. A user may speak any
language, or even multiple languages. A user may be located anywhere in the world, and may speak any language
in any location. You can increase the potential market for your app by designing it to be readily adaptable using
globalization and localization.

Do's and don'ts


Follow these best practices when globalizing your apps for a wider audience and when localizing your apps for
a specific market.

Use global-ready formats


Develop a global-ready app by appropriately formatting dates, times, numbers, and currencies.

Manage language and region


Control how Windows selects UI resources and formats the UI elements of the app, by using the various
language and region settings provided by Windows.

Use patterns to format dates and times


Use the DateTimeFormatting API with custom patterns to display dates and times in exactly the format you
wish.

Adjust layout and fonts, and support RTL


Develop your app to support the layouts and fonts of multiple languages, including RTL (right-to-left) flow
direction.
Prepare your app for localization
Prepare your app for localization to other markets, languages, or regions.

Put UI strings into resources


Put string resources for your UI into resource files. You can then reference those strings from your code or
markup.

App settings
App settings let the user customize your app, optimizing it for their individual needs and preferences.
Providing the right settings and storing them properly can make a great user experience even better.

Guidelines
Best practices for creating and displaying app settings.

Store and retrieve app data


How to store and retrieve local, roaming, and temporary app data.

In-app help
No matter how well you've designed your app, some users will need a little extra help.

Guidelines for app help


Applications can be complex, and providing effective help for your users can greatly improve their experience.

Instructional UI
Sometimes it can be helpful to teach the user about functions in your app that might not be obvious to them,
such as specific touch interactions. In these cases, you need to present instructions to the user through the UI so
they can discover and use features they might have missed.

In-app help
Most of the time, it's best for help to be displayed within the app, and to be displayed when the user chooses to
view it. Consider the following guidelines when creating in-app help.

External help
Sometimes it's better to host detailed help content, such as tutorials or reference material, outside the app (for
example, on a web page) and link to it. Consider the following guidelines when creating external help.
Accessibility

Introduces accessibility concepts that relate to Universal Windows Platform (UWP) apps.
Accessibility is about building experiences that make your application available to people who use technology in a
wide range of environments and approach your user interface with a range of needs and experiences. For some
situations, accessibility requirements are imposed by law. However, it's a good idea to address accessibility issues
regardless of legal requirements so that your apps have the largest possible audience. There's also a Windows
Store declaration regarding accessibility for your app.

NOTE
Declaring the app as accessible is only relevant to the Windows Store.

Accessibility overview
This article is an overview of the concepts and technologies related to accessibility scenarios for UWP apps.

Designing inclusive software
Learn about evolving inclusive design with UWP apps for Windows 10. Design and build inclusive software with accessibility in mind.

Developing inclusive Windows apps
This article is a roadmap for developing accessible UWP apps.

Accessibility testing
Testing procedures to follow to ensure that your UWP app is accessible.

Accessibility in the Store
Describes the requirements for declaring your UWP app as accessible in the Windows Store.

Accessibility checklist
Provides a checklist to help you ensure that your UWP app is accessible.

Expose basic accessibility information
Basic accessibility info is often categorized into name, role, and value. This topic describes code to help your app expose the basic information that assistive technologies need.

Keyboard accessibility
If your app does not provide good keyboard access, users who are blind or have mobility issues can have difficulty using your app or may not be able to use it at all.

High-contrast themes
Describes the steps needed to ensure your UWP app is usable when a high-contrast theme is active.

Accessible text requirements
This topic describes best practices for accessibility of text in an app, by assuring that colors and backgrounds satisfy the necessary contrast ratio. This topic also discusses the Microsoft UI Automation roles that text elements in a UWP app can have, and best practices for text in graphics.

Accessibility practices to avoid
Lists the practices to avoid if you want to create an accessible UWP app.

Custom automation peers
Describes the concept of automation peers for UI Automation, and how you can provide automation support for your own custom UI class.

Control patterns and interfaces
Lists the Microsoft UI Automation control patterns, the classes that clients use to access them, and the interfaces providers use to implement them.

Related topics
Windows.UI.Xaml.Automation
Get started with Narrator
Accessibility overview

This article is an overview of the concepts and technologies related to accessibility scenarios for Universal Windows
Platform (UWP) apps.

Accessibility and your app


There are many possible disabilities or impairments, including limitations in mobility, vision, color perception,
hearing, speech, cognition, and literacy. However, you can address most requirements by following the guidelines
offered here. This means providing:
Support for keyboard interactions and screen readers.
Support for user customization, such as font, zoom setting (magnification), color, and high-contrast settings.
Alternatives or supplements for parts of your UI.
Controls for XAML provide built-in keyboard support and support for assistive technologies such as screen readers,
which take advantage of accessibility frameworks that already support UWP apps, HTML, and other UI
technologies. This built-in support enables a basic level of accessibility that you can customize with very little work,
by setting just a handful of properties. If you are creating your own custom XAML components and controls, you
can also add similar support to those controls by using the concept of an automation peer.
In addition, data binding, style, and template features make it easy to implement support for dynamic changes to
display settings and text for alternative UIs.

UI Automation
Accessibility support comes primarily from the integrated support for the Microsoft UI Automation framework.
That support is provided through base classes and the built-in behavior of the class implementation for control
types, and an interface representation of the UI Automation provider API. Each control class uses the UI Automation
concepts of automation peers and automation patterns that report the control's role and content to UI Automation
clients. The app is treated as a top-level window by UI Automation, and through the UI Automation framework all
the accessibility-relevant content within that app window is available to a UI Automation client. For more info about
UI Automation, see UI Automation Overview.

Assistive technology
Many user accessibility needs are met by assistive technology products installed by the user or by tools and
settings provided by the operating system. This includes functionality such as screen readers, screen magnification,
and high-contrast settings.
Assistive technology products include a wide variety of software and hardware. These products work through the
standard keyboard interface and accessibility frameworks that report information about the content and structure
of a UI to screen readers and other assistive technologies. Examples of assistive technology products include:
The On-Screen Keyboard, which enables people to use a pointer in place of a keyboard to type text.
Voice-recognition software, which converts spoken words into typed text.
Screen readers, which convert text into spoken words or other forms such as Braille.
The Narrator screen reader, which is specifically part of Windows. Narrator has a touch mode, which can
perform screen reading tasks by processing touch gestures, for when there is no keyboard available.
Programs or settings that adjust the display or areas of it, for example, high-contrast themes, dots per inch (dpi)
settings of the display, or the Magnifier tool.
Apps that have good keyboard and screen reader support usually work well with various assistive technology
products. In many cases, a UWP app works with these products without additional modification of information or
structure. However, you may want to modify some settings for optimal accessibility experience or to implement
additional support.
Some of the options that you can use for testing basic accessibility scenarios with assistive technologies are listed
in Accessibility testing.

Screen reader support and basic accessibility information


Screen readers provide access to the text in an app by rendering it in some other format, such as spoken language
or Braille output. The exact behavior of a screen reader depends on the software and on the user's configuration of
it.
For example, some screen readers read the entire app UI when the user starts or switches to the app being viewed,
which enables the user to receive all of the available informational content before attempting to navigate it. Some
screen readers also read the text associated with an individual control when it receives focus during tab navigation.
This enables users to orient themselves as they navigate among the input controls of an application. Narrator is an
example of a screen reader that provides both behaviors, depending on user choice.
The most important information that a screen reader or any other assistive technology needs in order to help users
understand or navigate an app is an accessible name for the element parts of the app. In many cases, a control or
element already has an accessible name that is calculated from other property values that you have otherwise
provided. The most common case in which you can use an already-calculated name is with an element that
supports and displays inner text. For other elements, you sometimes need to account for other ways to provide an
accessible name by following best practices for element structure. And sometimes you need to provide a name that
is explicitly intended as the accessible name for app accessibility. For a listing of how many of these calculated
values work in common UI elements, and for more info about accessible names in general, see Basic accessibility
information.
There are several other automation properties available (including the keyboard properties described in the next
section). However, not all screen readers support all automation properties. In general, you should set all
appropriate automation properties and test to provide the widest possible support for screen readers.
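
As a minimal sketch (assuming an Image element named logoImage that has no inner text from which a name could be calculated), you can set an accessible name in code:

// Sketch: provide an accessible name for an element that can't calculate one.
// Requires: using Windows.UI.Xaml.Automation;
AutomationProperties.SetName(logoImage, "Contoso company logo");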

Keyboard support
To provide good keyboard support, you must ensure that every part of your application can be used with a
keyboard. If your app uses mostly the standard controls and doesn't use any custom controls, you are most of the
way there already. The basic XAML control model provides built-in keyboard support including tab navigation, text
input, and control-specific support. The elements that serve as layout containers (such as panels) use the layout
order to establish a default tab order. That order is often the correct tab order to use for an accessible
representation of the UI. If you use ListBox and GridView controls to display data, they provide built-in arrow-key
navigation. Or if you use a Button control, it already handles the Spacebar or Enter keys for button activation.
For more info about all the aspects of keyboard support, including tab order and key-based activation or
navigation, see Keyboard accessibility.
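
When the default layout-based tab order isn't right for your UI, a sketch of overriding it (the element names are assumptions for illustration):

// Sketch: override the default tab order for three input controls
// assumed to be declared in XAML.
firstNameBox.TabIndex = 1;
lastNameBox.TabIndex = 2;
submitButton.TabIndex = 3;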

Media and captioning


You typically display audiovisual media through a MediaElement object. You can use MediaElement APIs to
control the media playback. For accessibility purposes, provide controls that enable users to play, pause, and stop
the media as needed. Sometimes, media includes additional components that are intended for accessibility, such as
captioning or alternative audio tracks that include narrative descriptions.
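
A minimal sketch of providing those controls (assuming a MediaElement named mediaElement and three buttons declared in XAML):

// Sketch: accessible transport controls wired to the MediaElement playback APIs.
playButton.Click += (s, e) => mediaElement.Play();
pauseButton.Click += (s, e) => mediaElement.Pause();
stopButton.Click += (s, e) => mediaElement.Stop();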

Accessible text
Three main aspects of text are relevant to accessibility:
Tools must determine whether the text is to be read as part of a tab-sequence traversal or only as part of an
overall document representation. You can help control this determination by choosing the appropriate element
for displaying the text or by adjusting properties of those text elements. Each text element has a specific
purpose, and that purpose often has a corresponding UI Automation role. Using the wrong element can result in
reporting the wrong role to UI Automation and creating a confusing experience for an assistive technology user.
Many users have sight limitations that make it difficult for them to read text unless it has adequate contrast
against the background. How this impacts the user is not intuitive for app designers who do not have that sight
limitation. For example, for color-blind users, poor color choices in the design can prevent some users from
being able to read the text. Accessibility recommendations that were originally made for web content define
standards for contrast that can avoid these problems in apps as well (a sketch of the calculation follows this
list). For more info, see Accessible text requirements.
Many users have difficulty reading text that is simply too small. You can prevent this issue by making the text in
your app's UI reasonably large in the first place. However, that's challenging for apps that display large
quantities of text, or text interspersed with other visual elements. In such cases, make sure that the app correctly
interacts with the system features that can scale up the display, so that any text in apps scales up along with it.
(Some users change dpi values as an accessibility option. That option is available from Make things on the
screen larger in Ease of Access, which redirects to a Control Panel UI for Appearance and Personalization
/ Display.)
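
Here is the contrast-ratio sketch referenced above. It uses the standard relative-luminance formula from the web accessibility recommendations, which is an assumption about the calculation rather than something defined in this article:

// Sketch: compute a contrast ratio between two colors using the standard
// relative-luminance formula from web accessibility guidelines.
static double Luminance(byte r, byte g, byte b)
{
    double Channel(byte c)
    {
        double s = c / 255.0;
        return s <= 0.03928 ? s / 12.92 : Math.Pow((s + 0.055) / 1.055, 2.4);
    }
    return 0.2126 * Channel(r) + 0.7152 * Channel(g) + 0.0722 * Channel(b);
}

static double ContrastRatio(double l1, double l2) =>
    (Math.Max(l1, l2) + 0.05) / (Math.Min(l1, l2) + 0.05);

For example, near-black text (#101010) on a near-white background (#EBEBEB) yields a ratio of roughly 16:1, comfortably above a 5:1 requirement.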

Supporting high-contrast themes


UI controls use a visual representation that is defined as part of a XAML resource dictionary of themes. One or
more of these themes is specifically used when the system is set for high contrast. When the user switches to high
contrast, by looking up the appropriate theme from a resource dictionary dynamically, all your UI controls will use
an appropriate high-contrast theme too. Just make sure that you haven't disabled the themes by specifying an
explicit style or using another styling technique that prevents the high-contrast themes from loading and
overriding your style changes. For more info, see High-contrast themes.
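
For assets that don't come from theme resources, such as image files, a sketch of detecting high contrast at run time (the image element and asset path are hypothetical):

// Sketch: swap in a high-contrast-friendly image variant when a
// high-contrast theme is active.
// Requires: using Windows.UI.ViewManagement; and using Windows.UI.Xaml.Media.Imaging;
var accessibilitySettings = new AccessibilitySettings();
if (accessibilitySettings.HighContrast)
{
    heroImage.Source = new BitmapImage(new Uri("ms-appx:///Assets/hero-highcontrast.png"));
}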

Design for alternative UI


When you design your apps, consider how they may be used by people with limited mobility, vision, and hearing.
Because assistive technology products make extensive use of standard UI, it is particularly important to provide
good keyboard and screen-reader support even if you make no other adjustments for accessibility.
In many cases, you can convey essential information by using multiple techniques to widen your audience. For
example, you can highlight information using both icon and color information to help users who are color blind,
and you can display visual alerts along with sound effects to help users who are hearing impaired.
If necessary, you can provide alternative, accessible user interface elements that completely remove nonessential
elements and animations, and provide other simplifications to streamline the user experience. The following code
example demonstrates how to display one UserControl instance in place of another depending on a user setting.
XAML
<StackPanel x:Name="LayoutRoot" Background="White">
    <CheckBox x:Name="ShowAccessibleUICheckBox" Click="ShowAccessibleUICheckBox_Click">
        Show Accessible UI
    </CheckBox>
    <UserControl x:Name="ContentBlock">
        <local:ContentPage/>
    </UserControl>
</StackPanel>

Visual Basic

Private Sub ShowAccessibleUICheckBox_Click(ByVal sender As Object, ByVal e As RoutedEventArgs)
    If (ShowAccessibleUICheckBox.IsChecked.Value) Then
        ContentBlock.Content = New AccessibleContentPage()
    Else
        ContentBlock.Content = New ContentPage()
    End If
End Sub

C#

private void ShowAccessibleUICheckBox_Click(object sender, RoutedEventArgs e)
{
    if ((sender as CheckBox).IsChecked.Value)
    {
        ContentBlock.Content = new AccessibleContentPage();
    }
    else
    {
        ContentBlock.Content = new ContentPage();
    }
}

Verification and publishing


For more info about accessibility declarations and publishing your app, see Accessibility in the Store.

NOTE
Declaring the app as accessible is only relevant to the Windows Store.

Assistive technology support in custom controls


When you create a custom control, we recommend that you also implement or extend one or more
AutomationPeer subclasses to provide accessibility support. In some cases, so long as you use the same peer
class as was used by the base control class, the automation support for your derived class is adequate at a basic
level. However, you should test this, and implementing a peer is still recommended as a best practice so that the
peer can correctly report the class name of your new control class. Implementing a custom automation peer has a
few steps involved. For more info, see Custom automation peers.
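
A minimal sketch of such a peer (the control name MediaDial is hypothetical):

// Sketch: a custom automation peer that reports the class name of a
// hypothetical custom control named MediaDial.
// Requires: using Windows.UI.Xaml.Automation.Peers;
public class MediaDialAutomationPeer : FrameworkElementAutomationPeer
{
    public MediaDialAutomationPeer(MediaDial owner) : base(owner) { }

    protected override string GetClassNameCore() => "MediaDial";

    protected override AutomationControlType GetAutomationControlTypeCore() =>
        AutomationControlType.Custom;
}

The control itself would then override OnCreateAutomationPeer to return a new MediaDialAutomationPeer.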

Assistive technology support in apps that support XAML / Microsoft


DirectX interop
Microsoft DirectX content that's hosted in a XAML UI (using SwapChainPanel or SurfaceImageSource) is not
accessible by default. The XAML SwapChainPanel DirectX interop sample shows how to create UI Automation peers
for the hosted DirectX content. This technique makes the hosted content accessible through UI Automation.

Related topics
Windows.UI.Xaml.Automation
Design for accessibility
XAML accessibility sample
Accessibility
Get started with Narrator
Designing inclusive software for Windows 10

Learn about evolving inclusive design with Universal Windows Platform (UWP) apps for Windows 10. Design and
build inclusive software with accessibility in mind.
At Microsoft, we're evolving our design principles and practices. These inform how our experiences look, feel,
function, and behave. We're elevating our perspective.
This new design philosophy is called inclusive design. The idea is to design software with everyone in mind from
the very beginning. This is in contrast to viewing accessibility as a technology you bolt on at the end of the
development process in order to satisfy some small group of users.
We define disability as "a mismatch between the needs of the individual and the service, product or environment
offered. Anyone can experience a disability. It is a common human trait to be excluded" (from the Inclusive video).
Inclusive design creates better products for everyone. It's about considering the full range of human diversity.
Consider the curb cutouts that you now find on most street corner sidewalks. They were clearly intended to be
used by people in wheelchairs. But now nearly everyone uses them, including people with baby strollers, bicyclists,
skateboarders. Even pedestrians will often use curb cutouts because they are there and provide a better experience.
The television remote control could be considered an assistive technology (AT) for someone with physical
limitations. And yet, today it is nearly impossible to buy a television without one. Before children learn to tie their
shoes, they can wear slip-on or easy fastening shoes. Shoes that are easy to put on and take off are often preferred
in cultures where shoes are removed before entering a home. They are also better for people with dexterity issues
such as arthritis or even a temporarily broken wrist.

Inclusive design principles


The following four principles are guiding Microsoft's shift to inclusive design:
Think universal: We focus on what unifies people: human motivations, relationships, and abilities. This drives
us to consider the broader social impact of our work. The result is an experience that has a diversity of ways for all
people to participate.
Make it personal: Next, we challenge ourselves to create emotional connections. Human-to-human interactions
can inspire better human-to-technology interaction. A person's unique circumstances can improve a design for
everyone. The result is an experience that feels like it was created for one person.
Keep it simple: We start with simplicity as the ultimate unifier. When we reduce clutter, people know what to do
next. They're inspired to move forward into spaces that are clean, light, and open. The result is an experience that's
honest and timeless.
Create delight: Delightful experiences evoke wonder and discovery. Sometimes it's magical. Sometimes it's a
detail that's just right. We design these moments to feel like a welcomed change in tempo. The result is an
experience that has momentum and flow.

Inclusive design users


There are essentially two types of users of assistive technology (AT):
1. Those who need it, because of disabilities or impairments, age-related conditions, or temporary conditions
(such as limited mobility from a broken limb)
2. Those who use it out of preference, for a more comfortable or convenient computing experience
The majority of computer users (54 percent) are aware of some form of assistive technology, and 44 percent of
computer users use some form of it, but many of them are not using AT that would benefit them (Forrester 2004).
A 2003-2004 study commissioned by Microsoft and conducted by Forrester Research found that over half (57
percent) of computer users in the United States between the ages of 18 and 64 could benefit from assistive
percent of computer users in the United States between the ages of 18 and 64 could benefit from assistive
technology. Most of these users did not identify themselves as having a disability or being impaired but expressed
certain task-related difficulties or impairments when using a computer. Forrester (2003) also found the following
number of users with these specific difficulties: One in four experiences a visual difficulty. One in four experiences
pain in the wrists or hands. One in five experiences hearing difficulty.
Besides permanent disabilities, the severity and types of difficulties an individual experiences can vary throughout
their life. There is no such thing as a "normal" human. Our capabilities are always changing. Margaret Mead said,
"We are all unique. Being all unique makes us all the same."
Microsoft is dedicated to conducting computer science and software engineering research with goals to enhance
the computing experience and invent novel computing technologies. See Current Microsoft Research and
Development Projects aimed at making the computer more accessible, and easier to see, hear, and interact with.

Practical design steps


If you're all in, then this section is for you. It describes the practical design steps to consider when implementing
inclusive design for your app.
Describe the target audience
Define the potential users of your app. Think through all of their different abilities and characteristics. For example,
age, gender, language, deaf or hard of hearing users, visual impairments, cognitive abilities, learning style, mobility
restrictions, and so on. Is your design meeting their individual needs?
Talk to actual humans with specific needs
Meet with potential users who have diverse characteristics. Make sure you are considering all of their needs when
designing your app. For example, Microsoft discovered that deaf users were turning off toast notifications on their
Xbox consoles. When we asked actual deaf users about this, we learned that toast notifications were obscuring a
section of closed captioning. The fix was to display the toast slightly higher on the screen. This was a simple solution
that was not necessarily obvious from the telemetry data that initially revealed the behavior.
Choose a development framework wisely
In the design stage, the development framework you will use (such as UWP, Win32, or web) is critical to the development
of your product. If you have the luxury of choosing your framework, think about how much effort it will take to
create your controls within the framework. What are the default or built-in accessibility properties that come with
it? Which controls will you need to customize? When choosing your framework, you are essentially choosing how
much of the accessibility controls you will get for free (that is, how much of the controls are already built-in) and
how much will require additional development costs because of control customizations.
Use standard Windows controls whenever possible. These controls are already enabled with the technology
necessary to interface with assistive technologies.
Design a logical hierarchy for your controls
Once you have your framework, design a logical hierarchy to map out your controls. The logical hierarchy of your
app includes the layout and tab order of controls. When assistive technology (AT) programs, such as screen
readers, read your UI, visual presentation is not sufficient; you must provide a programmatic alternative that makes
sense structurally to your users. A logical hierarchy can help you do that. It is a way of studying the layout of your
UI and structuring each element so that users can understand it. A logical hierarchy is mainly used:
1. To provide programs context for the logical (reading) order of the elements in the UI
2. To identify clear boundaries between custom controls and standard controls in the UI
3. To determine how pieces of the UI interact together
A logical hierarchy is a great way to address any potential usability issues. If you cannot structure the UI in a
relatively simple manner, you may have problems with usability. A logical representation of a simple dialog box
should not result in pages of diagrams. For logical hierarchies that become too deep or too wide, you may need to
redesign your UI. For more information, download the Engineering Software for Accessibility eBook.
Design appropriate visual UI settings
When designing the visual UI, ensure that your product has a high contrast setting, uses the default system fonts
and smoothing options, correctly scales to the dots per inch (dpi) screen settings, has default text with at least a 5:1
contrast ratio with the background, and has color combinations that will be easy for users with color deficiencies to
differentiate.
High contrast setting
One of the built-in accessibility features in Windows is High Contrast mode, which heightens the color contrast of
text and images. For some people, increasing the contrast in colors reduces eyestrain and makes it easier to read.
When you verify your UI in high contrast mode, you want to check that controls, such as links, have been coded
consistently and with system colors (not with hard-coded colors) to ensure that users will be able to see all the
controls on the screen that a user not using high contrast would see.
System font settings
To ensure readability and minimize any unexpected distortions to the text, make sure that your product always
adheres to the default system fonts and uses the anti-aliasing and smoothing options. If your product uses custom
fonts, users may face significant readability issues and distractions when they customize the presentation of their
UI (through the use of a screen reader or by using different font styles to view your UI, for instance).
High DPI resolutions
For users with vision impairments, having a scalable UI is important. User interfaces that do not scale correctly in
high dots-per-inch (DPI) resolutions may cause important components to overlap or hide other components and
can become inaccessible.
Color contrast ratio
The updated Section 508 of the Americans with Disabilities Act (ADA), as well as other legislation, requires that the
default color contrasts between text and its background must be 5:1. For large texts (18-point font sizes, or 14
points and bolded) the required default contrast is 3:1.
Color combinations
About 7 percent of males (and less than 1 percent of females) have some form of color deficiency. Users with
colorblindness have problems distinguishing between certain colors, so it is important that color alone is never
used to convey status or meaning in an application. As for decorative images (such as icons or backgrounds), color
combinations should be chosen in a manner that maximizes the perception of the image by colorblind users. If you
design using these color recommendations from the beginning, your app will already be taking significant steps
toward being inclusive.

Summary: seven steps for inclusive design


In summary, follow these seven steps to ensure your software is inclusive.
1. Decide if inclusive design is an important aspect to your software. If it is, learn and appreciate how it enables
real users to live, work, and play, to help guide your design.
2. As you design solutions for your requirements, use controls provided by your framework (standard controls) as
much as possible, and avoid any unnecessary effort and costs of custom controls.
3. Design a logical hierarchy for your product, noting where the standard controls, any custom controls, and
keyboard focus are in the UI.
4. Design useful system settings (such as keyboard navigation, high contrast, and high dpi) into your product.
5. Implement your design, using the Microsoft accessibility developer hub and your framework's accessibility
specification as a reference point.
6. Test your product with users who have special needs to ensure they will be able to take advantage of the
inclusive design techniques implemented in it.
7. Deliver your finished product and document your implementation for those who may work on the project after
you.

Related topics
Inclusive design
Engineering Software for Accessibility
Microsoft accessibility developer hub
Developing inclusive Windows apps
Accessibility
Developing inclusive Windows apps

This article discusses how to develop accessible Universal Windows Platform (UWP) apps. Specifically, it assumes
that you understand how to design the logical hierarchy for your app. Learn to develop accessible Windows 10
UWP apps that include keyboard navigation, color and contrast settings, and support for assistive technologies.
If you have not yet done so, please start by reading Designing inclusive software.
There are three things you should do to make sure that your app is accessible:
1. Expose your UI elements to programmatic access.
2. Ensure that your app supports keyboard navigation for people who are unable to use a mouse or touchscreen.
3. Make sure that your app supports accessible color and contrast settings.

Programmatic access
Programmatic access is critical for creating accessibility in apps. This is achieved by setting the accessible name
(required) and description (optional) for content and interactive UI elements in your app. This ensures that UI
controls are exposed to assistive technology (AT) such as screen readers (for example, Narrator) or alternative
output devices (such as Braille displays). Without programmatic access, the APIs for assistive technology cannot
interpret information correctly, leaving the user unable to use the product sufficiently, or forcing the AT to use
undocumented programming interfaces or techniques never intended to be used as an accessibility interface.
When UI controls are exposed to assistive technology, the AT is able to determine what actions and options are
available to the user.
For more information about making your app UI elements available to assistive technologies (AT), see Expose basic
accessibility information.

Keyboard navigation
For users who are blind or have mobility issues, being able to navigate the UI with a keyboard is extremely
important. However, only those UI controls that require user interaction to function should be given keyboard
focus. Components that don't require an action, such as static images, do not need keyboard focus.
It is important to remember that unlike navigating with a mouse or touch, keyboard navigation is linear. When
considering keyboard navigation, think about how your user will interact with your product and what the logical
navigation will be. In Western cultures, people read from left to right, top to bottom. It is, therefore, common
practice to follow this pattern for keyboard navigation.
When designing keyboard navigation, examine your UI, and think about these questions:
How are the controls laid out or grouped in the UI?
Are there a few significant groups of controls?
If yes, do those groups contain another level of groups?
Among peer controls, should navigation be done by tabbing around, or via special navigation (such as arrow
keys), or both?
The goal is to help the user understand how the UI is laid out and identify the controls that are actionable. If you
are finding that there are too many tab stops before the user completes the navigation loop, consider grouping
related controls together. Some controls that are related, such as a hybrid control, may need to be addressed at
this early exploration stage. After you begin to develop your product, it is difficult to rework the keyboard
navigation, so plan carefully and plan early!
To learn more about keyboard navigation among UI elements, see Keyboard accessibility.
Also, the Engineering Software for Accessibility eBook has an excellent chapter on this subject titled Designing the
Logical Hierarchy.

Color and contrast


One of the built-in accessibility features in Windows is the High Contrast mode, which heightens the color contrast
of text and images on the computer screen. For some people, increasing the contrast in colors reduces eyestrain
and makes it easier to read. When you verify your UI in high contrast, you want to check that controls have been
coded consistently and with system colors (not with hard-coded colors) to ensure that users will be able to see all
the controls on the screen that a user not using high contrast would see.
XAML

<Button Background="{ThemeResource ButtonBackgroundThemeBrush}">OK</Button>

For more information about using system colors and resources, see XAML theme resources.
As long as you haven't overridden system colors, a UWP app supports high-contrast themes by default. If a user
has chosen that they want the system to use a high-contrast theme from system settings or accessibility tools, the
framework automatically uses colors and style settings that produce a high-contrast layout and rendering for
controls and components in the UI.
For more information, see High-contrast themes.
If you have decided to use your own color theme instead of system colors, consider these guidelines:
Color contrast ratio The updated Section 508 of the Americans with Disabilities Act, as well as other legislation,
requires that the default color contrasts between text and its background must be 5:1. For large text (18-point font
sizes, or 14 points and bolded), the required default contrast is 3:1.
Color combinations About 7 percent of males (and less than 1 percent of females) have some form of color
deficiency. Users with colorblindness have problems distinguishing between certain colors, so it is important that
color alone is never used to convey status or meaning in an application. As for decorative images (such as icons or
backgrounds), color combinations should be chosen in a manner that maximizes the perception of the image by
colorblind users.

Accessibility checklist
Following is an abbreviated version of the accessibility checklist:
1. Set the accessible name (required) and description (optional) for content and interactive UI elements in your
app.
2. Implement keyboard accessibility.
3. Visually verify your UI to ensure that the text contrast is adequate, elements render correctly in the high-
contrast themes, and colors are used correctly.
4. Run accessibility tools, address reported issues, and verify the screen reading experience. (See Accessibility
testing topic.)
5. Make sure your app manifest settings follow accessibility guidelines.
6. Declare your app as accessible in the Windows Store. (See the Accessibility in the store topic.)
For more detail, please see the full Accessibility checklist topic.
Related topics
Designing inclusive software
Inclusive design
Accessibility practices to avoid
Engineering Software for Accessibility
Microsoft accessibility developer hub
Accessibility
Accessibility testing

Testing procedures to follow to ensure that your Universal Windows Platform (UWP) app is accessible.

Run accessibility testing tools


The Windows Software Development Kit (SDK) includes several accessibility testing tools such as AccScope,
Inspect and UI Accessibility Checker. These tools can help you verify the accessibility of your app. Be sure to
verify all app scenarios and UI elements.
You can launch the accessibility testing tools either from a Microsoft Visual Studio command prompt or from the
Windows SDK tools folder (the bin subdirectory of where the Windows SDK is installed on your development
machine).
AccScope
The AccScope tool enables developers and testers to evaluate the accessibility of their app during the app's
development and design, potentially in earlier prototype phases, rather than in the late testing phases of an app's
development cycle. It's particularly intended for testing Narrator accessibility scenarios with your app.
Inspect
Inspect enables you to select any UI element and view its accessibility data. You can view Microsoft UI Automation
properties and control patterns and test the navigational structure of the automation elements in the UI
Automation tree. Use Inspect as you develop the UI to verify how accessibility attributes are exposed in UI
Automation. In some cases the attributes come from the UI Automation support that is already implemented for
default XAML controls. In other cases the attributes come from specific values that you have set in your XAML
markup, as AutomationProperties attached properties.
The following image shows the Inspect tool querying the UI Automation properties of the Edit menu element in
Notepad.

UI Accessibility Checker
UI Accessibility Checker (AccChecker) helps you discover accessibility problems at run time. When your UI is
complete and functional, use AccChecker to test different scenarios, verify the correctness of runtime accessibility
information, and discover runtime issues. You can run AccChecker in UI or command line mode. To run the UI
mode tool, open the AccChecker directory in the Windows SDK bin directory, run acccheckui.exe, and click the
Help menu.
UI Automation Verify
UI Automation Verify (UIA Verify) is an automated testing and verification framework for UI Automation
implementations. UIA Verify can integrate into the test code and conduct regular, automated testing or spot
checks of UI Automation scenarios. To run UIA Verify, run VisualUIAVerifyNative.exe from the UIAVerify
subdirectory.
Accessible Event Watcher
Accessible Event Watcher (AccEvent) tests whether an app's UI elements fire proper UI Automation and
Microsoft Active Accessibility events when UI changes occur. Changes in the UI can occur when the focus changes,
or when a UI element is invoked, selected, or has a state or property change.

NOTE
Most accessibility testing tools mentioned in the documentation run on a PC, not on a phone. You can run some of the
tools while developing and using an emulator, but most of these tools can't expose the UI Automation tree within the
emulator.

Test keyboard accessibility


The best way to test your keyboard accessibility is to unplug your mouse or use the On-Screen Keyboard if you are
using a tablet device. Test keyboard accessibility navigation by using the Tab key. You should be able to cycle
through all interactive UI elements by using Tab key. For composite UI elements, verify that you can navigate
among the parts of elements by using the arrow keys. For example, you should be able to navigate lists of items
using keyboard keys. Finally, make sure that you can invoke all interactive UI elements with the keyboard once
those elements have focus, typically by using the Enter or Spacebar key.

Verify the contrast ratio of visible text


Use color contrast tools to verify that the visible text contrast ratio is acceptable. The exceptions include inactive UI
elements, and logos or decorative text that doesn't convey any information and can be rearranged without
changing the meaning. See Accessible text requirements for more information on contrast ratio and exceptions.
See Techniques for WCAG 2.0 G18 (Resources section) for tools that can test contrast ratios.

NOTE
Some of the tools listed by Techniques for WCAG 2.0 G18 can't be used interactively with a Windows Store app. You may
need to enter foreground and background color values manually in the tool, make screen captures of app UI and then run
the contrast ratio tool over the screen capture image, or run the tool while opening source bitmap files in an image editing
program rather than while that image is loaded by the app.

Verify your app in high contrast


Use your app while a high-contrast theme is active to verify that all the UI elements display correctly. All text
should be readable, and all images should be clear. Adjust the XAML theme-dictionary resources or control
templates to correct any theme issues that come from controls. In cases where prominent high-contrast issues are
not coming from themes or controls (such as from image files), provide separate versions to use when a high-
contrast theme is active.
Verify your app with display settings
Use the system display options that adjust the display's dots per inch (dpi) value, and ensure that your app UI
scales correctly when the dpi value changes. (Some users change dpi values as an accessibility option; it's available
from Ease of Access as well as display properties.) If you find any issues, follow the Guidelines for layout scaling
and provide additional resources for different scaling factors.

Verify main app scenarios by using Narrator


Use Narrator to test the screen reading experience for your app.
Use these steps to test your app using Narrator with a mouse and keyboard:
1. Start Narrator by pressing Windows logo key + Enter.
2. Navigate your app with the keyboard by using the Tab key, the arrow keys, and the Caps Lock + arrow keys.
3. As you navigate your app, listen as Narrator reads the elements of your UI and verify the following:
For each control, ensure that Narrator reads all visible content. Also ensure that Narrator reads each
control's name, any applicable state (checked, selected, and so on), and the control type (button, check
box, list item, and so on).
If the element is interactive, verify that you can use Narrator to invoke its action by pressing Caps Lock +
Enter.
For each table, ensure that Narrator correctly reads the table name, the table description (if available),
and the row and column headings.
4. Press Caps Lock + Shift + Enter to search your app and verify that all of your controls appear in the search
list, and that the control names are localized and readable.
5. Turn off your monitor and try to accomplish main app scenarios by using only the keyboard and Narrator. To
get the full list of Narrator commands and shortcuts, press Caps Lock + F1.
Starting with Windows 10 version 1607, we introduced a new developer mode in Narrator. Turn on developer
mode when Narrator is already running by pressing Caps Lock + Shift + F12. When developer mode is enabled,
the screen will be masked and will highlight only the accessible objects and the associated text that is exposed
programmatically to Narrator. This gives you a good visual representation of the information that is exposed to
Narrator.
Use these steps to test your app using Narrator's touch mode:

NOTE
Narrator automatically enters touch mode on devices that support 4+ contacts. Narrator doesn't support multi-monitor
scenarios or multi-touch digitizers on the primary screen.

1. Get familiar with the UI and explore the layout.


Navigate through the UI by using single-finger swipe gestures. Use left or right swipes to move
between items, and up or down swipes to change the category of items being navigated. Categories
include all items, links, tables, headers, and so on. Navigating with single-finger swipe gestures is similar
to navigating with Caps Lock + Arrow.
Use tab gestures to navigate through focusable elements. A three-finger swipe to the right or left
is the same as navigating with Tab and Shift + Tab on a keyboard.
Spatially investigate the UI with a single finger. Drag a single finger up and down, or left and right,
to have Narrator read the items under your finger. You can use the mouse as an alternative because it
uses the same hit-testing logic as dragging a single finger.
Read the entire window and all its contents with a three-finger swipe up. This is equivalent to
using Caps Lock + W.
If there is important UI that you cannot reach, you may have an accessibility issue.
2. Interact with a control to test its primary and secondary actions, and its scrolling behavior.
Primary actions include things like activating a button, placing a text caret, and setting focus to the control.
Secondary actions include actions such as selecting a list item or expanding a button that offers multiple
options.
To test a primary action: Double tap, or press with one finger and tap with another.
To test a secondary action: Triple tap, or press with one finger and double tap with another.
To test scrolling behavior: Use two-finger swipes to scroll in the desired direction.
Some controls provide additional actions. To display the full list, perform a single four-finger tap.
If a control responds to the mouse or keyboard but does not respond to a primary or secondary touch
interaction, the control might need to implement additional UI Automation control patterns.
You should also consider using the AccScope tool to test Narrator accessibility scenarios with your app. The
AccScope tool topic describes how to configure AccScope to test Narrator scenarios.

Examine the UI Automation representation for your app


Several of the UI Automation testing tools mentioned previously provide a way to view your app that
deliberately does not consider what the app looks like, and instead represents the app as a structure of UI
Automation elements. This is how UI Automation clients, mainly assistive technologies, interact with
your app in accessibility scenarios.
The AccScope tool provides a particularly interesting view of your app because you can see the UI Automation
elements either as a visual representation or as a list. If you use the visualization, you can drill down into the parts
in a way that you'll be able to correlate with the visual appearance of your app's UI. You can even test the
accessibility of your earliest UI prototypes before you've assigned all the logic to the UI, making sure that both the
visual interaction and accessibility-scenario navigation for your app are in balance.
One aspect that you can test is whether there are elements appearing in the UI Automation element view that you
don't want to appear there. If you find elements you want to omit from the view, or conversely if there are
elements missing, you can use the AutomationProperties.AccessibilityView XAML attached property to adjust
how XAML controls appear in accessibility views. After you've looked at the basic accessibility views, this is also a
good opportunity to recheck your tab sequences or spatial navigation as enabled by arrow keys to make sure
users can reach each of the parts that are interactive and exposed in the control view.

Related topics
Accessibility
Practices to avoid
UI Automation
Accessibility in Windows
Accessibility in the Store

Describes the requirements for declaring your Universal Windows Platform (UWP) app as accessible in the
Windows Store.
While submitting your app to the Windows Store for certification, you can declare your app as accessible.
Declaring your app as accessible makes it easier to discover for users who are interested in accessible apps, such
as users who have visual impairments. Users discover accessible apps by using the Accessible filter while
searching the Windows Store. Declaring your app as accessible also adds the Accessible tag to your app's
description.
By declaring your app as accessible, you state that it has the basic accessibility information that users need for
primary scenarios using one or more of the following:
The keyboard.
A high contrast theme.
A variable dots per inch (dpi) setting.
Common assistive technology such as the Windows accessibility features, including Narrator, Magnifier, and
On-Screen Keyboard.
You should declare your app as accessible if you built and tested it for accessibility. This means that you did the
following:
Set all the relevant accessibility information for UI elements, including name, role, value, and so on.
Implemented full keyboard accessibility, enabling the user to:
Accomplish primary app scenarios by using only the keyboard.
Tab among UI elements in a logical order.
Navigate among UI elements within a control by using the arrow keys.
Use keyboard shortcuts to reach primary app functionality.
Use Narrator touch gestures for Tab and arrow equivalency for devices with no keyboard.
Ensured that your app UI is visually accessible: has a minimum text contrast ratio of 4.5:1, does not rely on
color alone to convey information, and so on.
Used accessibility testing tools such as Inspect and UIAVerify to verify your accessibility implementation, and
resolved all priority 1 errors reported by such tools.
Verified your app's primary scenarios from end to end by using Narrator, Magnifier, On-Screen Keyboard, a
high contrast theme, and adjusted dpi settings.
See the Accessibility checklist for a review of these procedures and links to resources that will help you accomplish
them.

Related topics
Accessibility
Accessibility checklist

Provides a checklist to help you ensure that your Universal Windows Platform (UWP) app is accessible.
Here we provide a checklist you can use to ensure that your app is accessible.
1. Set the accessible name (required) and description (optional) for content and interactive UI elements in your
app.
An accessible name is a short, descriptive text string that a screen reader uses to announce a UI element.
Some UI elements such as TextBlock and TextBox promote their text content as the default accessible
name; see Basic accessibility information.
You should set the accessible name explicitly for images or other controls that do not promote inner text
content as an implicit accessible name. You should use labels for form elements so that the label text can be
used as a LabeledBy target in the Microsoft UI Automation model for correlating labels and inputs. If you
want to provide more UI guidance for users than is typically included in the accessible name, accessible
descriptions and tooltips help users understand the UI.
For more info, see Accessible name and Accessible description.
2. Implement keyboard accessibility:
Test the default tab index order for a UI. Adjust the tab index order if necessary, which may require
enabling or disabling certain controls, or changing the default values of TabIndex on some of the UI
elements.
Use controls that support arrow-key navigation for composite elements. For default controls, the arrow-
key navigation is typically already implemented.
Use controls that support keyboard activation. For default controls, particularly those that support the UI
Automation Invoke pattern, keyboard activation is typically available; check the documentation for that
control.
Set access keys or implement accelerator keys for specific parts of the UI that support interaction.
For any custom controls that you use in your UI, verify that you have implemented these controls with
correct AutomationPeer support for activation, and defined overrides for key handling as needed to
support activation, traversal and access or accelerator keys.
For more info, see Keyboard interactions.
3. Visually verify your UI to ensure that the text contrast is adequate, elements render correctly in the high-
contrast themes, and colors are used correctly.
Use the system display options that adjust the display's dots per inch (dpi) value, and ensure that your
app UI scales correctly when the dpi value changes. (Some users change dpi values as an accessibility
option; it's available from Ease of Access.)
Use a color analyzer tool to verify that the visual text contrast ratio is at least 4.5:1.
Switch to a high contrast theme and verify that the UI for your app is readable and usable.
Ensure that your UI doesn't use color as the only way to convey information.
For more info, see High-contrast themes and Accessible text requirements.
4. Run accessibility tools, address reported issues, and verify the screen reading experience.
Use tools such as Inspect to verify programmatic access, run diagnostic tools such as AccChecker to
discover common errors, and verify the screen reading experience with Narrator.
For more info, see Accessibility testing.
5. Make sure your app manifest settings follow accessibility guidelines.
6. Declare your app as accessible in the Windows Store.
If you implemented the baseline accessibility support, declaring your app as accessible in the Windows
Store can help reach more customers and get some additional good ratings.
For more info, see Accessibility in the Store.

Related topics
Accessibility
Design for accessibility
Practices to avoid
Expose basic accessibility information

Basic accessibility info is often categorized into name, role, and value. This topic describes code to help your app
expose the basic information that assistive technologies need.

Accessible name
An accessible name is a short, descriptive text string that a screen reader uses to announce a UI element. Set the
accessible name for UI elements that have a meaning that is important for understanding the content or for
interacting with the UI. Such elements typically include images, input fields, buttons, controls, and regions.
This table describes how to define or obtain an accessible name for various types of elements in a XAML UI.

ELEMENT TYPE        DESCRIPTION

Static text         For TextBlock and RichTextBlock elements, an accessible name is
                    automatically determined from the visible (inner) text. All of the
                    text in that element is used as the name. See Name from inner text.

Images              The XAML Image element does not have a direct analog to the HTML
                    alt attribute of img and similar elements. Either use
                    AutomationProperties.Name to provide a name, or use the captioning
                    technique. See Accessible names for images.

Form elements       The accessible name for a form element should be the same as the
                    label that is displayed for that element. See Labels and LabeledBy.

Buttons and links   By default, the accessible name of a button or link is based on
                    the visible text, using the same rules as described in Name from
                    inner text. In cases where a button contains only an image, use
                    AutomationProperties.Name to provide a text-only equivalent of the
                    button's intended action.

Most container elements such as panels do not promote their content as an accessible name. This is because it is the
item content that should report a name and corresponding role, not its container. The container element might
report that it is an element that has children in a Microsoft UI Automation representation, such that the assistive
technology logic can traverse it. But users of assistive technologies don't generally need to know about the
containers and thus most containers aren't named.

Role and value


The controls and other UI elements that are part of the XAML vocabulary implement UI Automation support for
reporting role and value as part of their definitions. You can use UI Automation tools to examine the role and
value information for the controls, or you can read the documentation for the AutomationPeer implementations
of each control. The available roles in a UI Automation framework are defined in the AutomationControlType
enumeration. UI Automation clients such as assistive technologies can obtain role information by calling methods
that the UI Automation framework exposes by using the control's AutomationPeer.
Not all controls have a value. Controls that do have a value report this information to UI Automation through the
peers and patterns that are supported by that control. For example, a TextBox form element does have a value.
An assistive technology can be a UI Automation client and can discover both that a value exists and what the value
is. In this specific case the TextBox supports the IValueProvider pattern through the TextBoxAutomationPeer
definitions.

NOTE
For cases where you use AutomationProperties.Name or other techniques to supply the accessible name explicitly, do
not include the same text as is used by the control role or type information in the accessible name. For example do not
include strings such as "button" or "list" in the name. The role and type information comes from a different UI Automation
property (LocalizedControlType) that is supplied by the default control support for UI Automation. Many assistive
technologies append the LocalizedControlType to the accessible name, so duplicating the role in the accessible name can
result in unnecessarily repeated words. For example, if you give a Button control an accessible name of "button" or include
"button" as the last part of the name, this might be read by screen readers as "button button". You should test this aspect
of your accessibility info using Narrator.
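
For example, app or test code can query the role a control reports through its automation peer. This is a minimal
sketch, and myTextBox is an illustrative element name:
C#

var peer = Windows.UI.Xaml.Automation.Peers.FrameworkElementAutomationPeer
    .FromElement(myTextBox);
if (peer != null)
{
    // The role, for example AutomationControlType.Edit for a TextBox.
    var role = peer.GetAutomationControlType();
    // The localized string that assistive technologies append to the accessible name.
    string localizedType = peer.GetLocalizedControlType();
}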

Influencing the UI Automation tree views


The UI Automation framework has a concept of tree views, where UI Automation clients can retrieve the
relationships between elements in a UI using three possible views: raw, control, and content. The control view is
the view that's often used by UI Automation clients because it provides a good representation and organization of
the elements in a UI that are interactive. Testing tools usually enable you to choose which tree view to use when
the tool presents the organization of elements.
By default, any Control derived class and a few other elements will appear in the control view when the UI
Automation framework represents the UI for a Universal Windows Platform (UWP) app. But sometimes you don't
want an element to appear in the control view because of UI composition, where that element is duplicating
information or presenting information that's unimportant for accessibility scenarios. Use the attached property
AutomationProperties.AccessibilityView to change how elements are exposed to the tree views. If you put an
element in the Raw tree, most assistive technologies won't report that element as part of their views. To see some
examples of how this works in existing controls, open the generic.xaml design reference XAML file in a text editor,
and search for AutomationProperties.AccessibilityView in the templates.

Name from inner text


To make it easier to use strings that already exist in the visible UI for accessible name values, many of the controls
and other UI elements provide support for automatically determining a default accessible name based on inner
text within the element, or from string values of content properties.
TextBlock, RichTextBlock, and TextBox each promote the value of the Text property as the
default accessible name.
Any ContentControl subclass uses an iterative "ToString" technique to find strings in its Content value, and
promotes these strings as the default accessible name.

NOTE
As enforced by UI Automation, the accessible name length cannot be greater than 2048 characters. If a string used for
automatic accessible name determination exceeds that limit, the accessible name is truncated at that point.

Accessible names for images


To support screen readers and to provide the basic identifying information for each element in the UI, you
sometimes must provide text alternatives to non-textual information such as images and charts (excluding any
purely decorative or structural elements). These elements don't have inner text so the accessible name won't have
a calculated value. You can set the accessible name directly by setting the AutomationProperties.Name
attached property as shown in this example.
XAML

<Image Source="product.png"
AutomationProperties.Name="An image of a customer using the product."/>

Alternatively, consider including a text caption that appears in the visible UI and that also serves as the label-
associated accessibility information for the image content. Here's an example:
XAML

<Image HorizontalAlignment="Left" Width="480" x:Name="img_MyPix"
  Source="snoqualmie-NF.jpg"
  AutomationProperties.LabeledBy="{Binding ElementName=caption_MyPix}"/>
<TextBlock x:Name="caption_MyPix">Mount Snoqualmie Skiing</TextBlock>

Labels and LabeledBy


The preferred way to associate a label with a form element is to use a TextBlock with an x:Name for label text,
and then to set the AutomationProperties.LabeledBy attached property on the form element to reference the
labeling TextBlock by its XAML name. If you use this pattern, when the user clicks the label, the focus moves to
the associated control and assistive technologies can use the label text as the accessible name for the form field.
Here's an example that shows this technique.
XAML

<StackPanel x:Name="LayoutRoot" Background="White">
  <StackPanel Orientation="Horizontal">
    <TextBlock Name="lbl_FirstName">First name</TextBlock>
    <TextBox
      AutomationProperties.LabeledBy="{Binding ElementName=lbl_FirstName}"
      Name="tbFirstName" Width="100"/>
  </StackPanel>
  <StackPanel Orientation="Horizontal">
    <TextBlock Name="lbl_LastName">Last name</TextBlock>
    <TextBox
      AutomationProperties.LabeledBy="{Binding ElementName=lbl_LastName}"
      Name="tbLastName" Width="100"/>
  </StackPanel>
</StackPanel>

Accessible description (optional)


An accessible description provides additional accessibility information about a particular UI element. You typically
provide an accessible description when an accessible name alone does not adequately convey an element's
purpose.
The Narrator screen reader reads an element's accessible description only when the user requests more
information about the element by pressing CapsLock+F.
The accessible name is meant to identify the control rather than to fully document its behavior. If a brief
description is not enough to explain the control, you can set the AutomationProperties.HelpText attached
property in addition to AutomationProperties.Name.
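
As an illustration, this minimal C# sketch sets both properties on a hypothetical submitButton element:
C#

// Name identifies the control; HelpText supplies the longer description
// that Narrator reads on demand (CapsLock+F).
Windows.UI.Xaml.Automation.AutomationProperties.SetName(submitButton, "Submit");
Windows.UI.Xaml.Automation.AutomationProperties.SetHelpText(submitButton,
    "Sends the completed form for processing.");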

Testing accessibility early and often


Ultimately, the best approach for supporting screen readers is to test your app using a screen reader yourself.
That will show you how the screen reader behaves and what basic accessibility information might be missing
from the app. Then you can adjust the UI or UI Automation property values accordingly. For more info, see
Accessibility testing.
One of the tools you can use for testing accessibility is called AccScope. The AccScope tool is particularly useful
because you can see visual representations of your UI that represent how assistive technologies might view your
app as an automation tree. In particular, there's a Narrator mode that gives a view of how Narrator gets text from
your app and how it organizes the elements in the UI. AccScope is designed so that it can be used and be useful
throughout a development cycle for an app, even during the preliminary design phase. For more info see
AccScope.

Accessible names from dynamic data


Windows supports many controls that can be used to display values that come from an associated data source,
through a feature known as data binding. When you populate lists with data items, you may need to use a
technique that sets accessible names for data-bound list items after the initial list is populated. For more info, see
"Scenario 4" in the XAML accessibility sample.

Accessible names and localization


To make sure that the accessible name is also an element that is localized, you should use correct techniques for
storing localizable strings as resources and then referencing the resource connections with x:Uid directive values.
If the accessible name is coming from an explicitly set AutomationProperties.Name usage, make sure that the
string there is also localizable.
Note that attached properties such as the AutomationProperties properties use a special qualifying syntax for
the resource name, so that the resource references the attached property as applied to a specific element. For
example, the resource name for AutomationProperties.Name as applied to a UI element named MediumButton
is: MediumButton.[using:Windows.UI.Xaml.Automation]AutomationProperties.Name .

Related topics
Accessibility
AutomationProperties.Name
XAML accessibility sample
Accessibility testing
Keyboard accessibility

If your app does not provide good keyboard access, users who are blind or have mobility issues can have difficulty
using your app or may not be able to use it at all.

Keyboard navigation among UI elements


To use the keyboard with a control, the control must have focus, and to receive focus (without using a pointer) the
control must be accessible in a UI design via tab navigation. By default, the tab order of controls is the same as the
order in which they are added to a design surface, listed in XAML, or programmatically added to a container.
In most cases, the default order based on how you defined controls in XAML is the best order, especially because
that is the order in which the controls are read by screen readers. However, the default order does not necessarily
correspond to the visual order. The actual display position might depend on the parent layout container and
certain properties that you can set on the child elements to influence the layout. To be sure your app has a good
tab order, test this behavior yourself. Especially if you have a grid metaphor or table metaphor for your layout, the
order in which users might read versus the tab order could end up different. That's not always a problem in and of
itself. But just make sure to test your app's functionality both as a touchable UI and as a keyboard-accessible UI
and verify that your UI makes sense either way.
You can make the tab order match the visual order by adjusting the XAML. Or you can override the default tab
order by setting the TabIndex property, as shown in the following example of a Grid layout that uses column-first
tab navigation.
XAML

<!--Custom tab order.-->
<Grid>
  <Grid.RowDefinitions>...</Grid.RowDefinitions>
  <Grid.ColumnDefinitions>...</Grid.ColumnDefinitions>

  <TextBlock Grid.Column="1" HorizontalAlignment="Center">Groom</TextBlock>
  <TextBlock Grid.Column="2" HorizontalAlignment="Center">Bride</TextBlock>

  <TextBlock Grid.Row="1">First name</TextBlock>
  <TextBox x:Name="GroomFirstName" Grid.Row="1" Grid.Column="1" TabIndex="1"/>
  <TextBox x:Name="BrideFirstName" Grid.Row="1" Grid.Column="2" TabIndex="3"/>

  <TextBlock Grid.Row="2">Last name</TextBlock>
  <TextBox x:Name="GroomLastName" Grid.Row="2" Grid.Column="1" TabIndex="2"/>
  <TextBox x:Name="BrideLastName" Grid.Row="2" Grid.Column="2" TabIndex="4"/>
</Grid>

You may want to exclude a control from the tab order. You typically do this only by making the control
noninteractive, for example by setting its IsEnabled property to false. A disabled control is automatically excluded
from the tab order. But occasionally you might want to exclude a control from the tab order even if it is not
disabled. In this case, you can set the IsTabStop property to false.
Any elements that can have focus are usually in the tab order by default. The exception to this is that certain text-
display types such as RichTextBlock can have focus so that they can be accessed by the clipboard for text
selection; however, they're not in the tab order because it is not expected for static text elements to be in the tab
order. They're not conventionally interactive (they can't be invoked, and don't require text input, but do support
the Text control pattern that supports finding and adjusting selection points in text). Focusable text should not
give the impression that receiving focus enables some action. Text elements will still be detected by
assistive technologies, and read aloud in screen readers, but that relies on techniques other than finding those
elements in the practical tab order.
Whether you adjust TabIndex values or use the default order, these rules apply:
UI elements with TabIndex equal to 0 are added to the tab order based on declaration order in XAML or child
collections.
UI elements with TabIndex greater than 0 are added to the tab order based on the TabIndex value.
UI elements with TabIndex less than 0 are added to the tab order and appear before any zero value. This
potentially differs from HTML's handling of its tabindex attribute (and negative tabindex was not supported
in older HTML specifications).

Keyboard navigation within a UI element


For composite elements, it is important to ensure proper inner navigation among the contained elements. A
composite element can manage its current active child to reduce the overhead of having all child elements able to
have focus. Such a composite element is included in the tab order, and it handles keyboard navigation events itself.
Many of the composite controls already have some inner navigation logic built into the control's event
handling. For example, arrow-key traversal of items is enabled by default on the ListView, GridView, ListBox and
FlipView controls.

Keyboard alternatives to pointer actions and events for specific control elements
Ensure that UI elements that can be clicked can also be invoked by using the keyboard. To use the keyboard with a
UI element, the element must have focus. Only classes that derive from Control support focus and tab navigation.
For UI elements that can be invoked, implement keyboard event handlers for the Spacebar and Enter keys. This
makes the basic keyboard accessibility support complete and enables users to accomplish basic app scenarios by
using only the keyboard; that is, users can reach all interactive UI elements and activate the default functionality.
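
As an illustration, here's a minimal C# sketch of such a handler; the handler name and the InvokePrimaryAction
helper are hypothetical stand-ins for your app's own logic.
C#

private void CustomElement_KeyDown(object sender, Windows.UI.Xaml.Input.KeyRoutedEventArgs e)
{
    // Treat Enter and Spacebar as keyboard equivalents of the pointer action.
    if (e.Key == Windows.System.VirtualKey.Enter ||
        e.Key == Windows.System.VirtualKey.Space)
    {
        InvokePrimaryAction(); // hypothetical: your app's equivalent of the click action
        e.Handled = true;      // stop further routing once the action has run
    }
}
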
In cases where an element that you want to use in the UI cannot have focus, you could create your own custom
control. You must set the IsTabStop property to true to enable focus and you must provide a visual indication of
the focused state by creating a visual state that decorates the UI with a focus indicator. However, it is often easier
to use control composition so that the support for tab stops, focus, and Microsoft UI Automation peers and
patterns are handled by the control within which you choose to compose your content.
For example, instead of handling a pointer-pressed event on an Image, you could wrap that element in a Button
to get pointer, keyboard, and focus support.
XAML

<!--Don't do this.-->
<Image Source="sample.jpg" PointerPressed="Image_PointerPressed"/>

<!--Do this instead.-->
<Button Click="Button_Click"><Image Source="sample.jpg"/></Button>

Keyboard shortcuts
In addition to implementing keyboard navigation and activation for your app, it is a good practice to implement
shortcuts for your app's functionality. Tab navigation provides a good, basic level of keyboard support, but with
complex forms you may want to add support for shortcut keys as well. This can make your application more
efficient to use, even for people who use both a keyboard and pointing devices.
A shortcut is a keyboard combination that enhances productivity by providing an efficient way for the user to
access app functionality. There are two kinds of shortcut:
An access key is a shortcut to a piece of UI in your app. Access keys consist of the Alt key plus a letter key.
An accelerator key is a shortcut to an app command. Your app may or may not have UI that corresponds
exactly to the command. Accelerator keys consist of the Ctrl key plus a letter key.
It is imperative that you provide an easy way for users who rely on screen readers and other assistive technology
to discover your app's shortcut keys. Communicate shortcut keys by using tooltips, accessible names, accessible
descriptions, or some other form of on-screen communication. At a minimum, shortcut keys should be well
documented in your app's Help content.
You can document access keys through screen readers by setting the AutomationProperties.AccessKey
attached property to a string that describes the shortcut key. There is also an
AutomationProperties.AcceleratorKey attached property for documenting non-mnemonic shortcut keys,
although screen readers generally treat both properties the same way. Try to document shortcut keys in multiple
ways, using tooltips, automation properties, and written Help documentation.
The following example demonstrates how to document shortcut keys for media play, pause, and stop buttons.
XAML

<Grid KeyDown="Grid_KeyDown">
  <Grid.RowDefinitions>
    <RowDefinition Height="Auto" />
    <RowDefinition Height="Auto" />
  </Grid.RowDefinitions>

  <MediaElement x:Name="DemoMovie" Source="xbox.wmv"
    Width="500" Height="500" Margin="20" HorizontalAlignment="Center" />

  <StackPanel Grid.Row="1" Margin="10"
    Orientation="Horizontal" HorizontalAlignment="Center">

    <Button x:Name="PlayButton" Click="MediaButton_Click"
      ToolTipService.ToolTip="Shortcut key: Ctrl+P"
      AutomationProperties.AcceleratorKey="Control P">
      <TextBlock>Play</TextBlock>
    </Button>

    <Button x:Name="PauseButton" Click="MediaButton_Click"
      ToolTipService.ToolTip="Shortcut key: Ctrl+A"
      AutomationProperties.AcceleratorKey="Control A">
      <TextBlock>Pause</TextBlock>
    </Button>

    <Button x:Name="StopButton" Click="MediaButton_Click"
      ToolTipService.ToolTip="Shortcut key: Ctrl+S"
      AutomationProperties.AcceleratorKey="Control S">
      <TextBlock>Stop</TextBlock>
    </Button>
  </StackPanel>
</Grid>
IMPORTANT
Setting AutomationProperties.AcceleratorKey or AutomationProperties.AccessKey doesn't enable keyboard
functionality. It only reports to the UI Automation framework what keys should be used, so that such information can be
passed on to users via assistive technologies. The implementation for key handling still needs to be done in code, not XAML.
You will still need to attach handlers for KeyDown or KeyUp events on the relevant control in order to actually implement
the keyboard shortcut behavior in your app. Also, the underline text decoration for an access key is not provided
automatically. You must explicitly underline the text for the specific key in your mnemonic as inline Underline formatting if
you wish to show underlined text in the UI.

For simplicity, the preceding example omits the use of resources for strings such as "Ctrl+A". However, you must
also consider shortcut keys during localization. Localizing shortcut keys is relevant because the choice of key to
use as the shortcut key typically depends on the visible text label for the element.
For more guidance about implementing shortcut keys, see Shortcut keys in the Windows User Experience
Interaction Guidelines.
Implementing a key event handler
Input events such as the key events use an event concept called routed events. A routed event can bubble up
through the child elements of a composited control, such that a common control parent can handle events for
multiple child elements. This event model is convenient for defining shortcut key actions for a control that contains
several composite parts that by design cannot have focus or be part of the tab order.
For example code that shows how to write a key event handler that includes checking for modifiers such as the
Ctrl key, see Keyboard interactions.
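
For instance, a handler wired to the Grid_KeyDown attribute in the earlier media-buttons markup might look like
the following sketch; PlayMedia, PauseMedia, and StopMedia are hypothetical helpers standing in for your app's
media logic.
C#

private void Grid_KeyDown(object sender, Windows.UI.Xaml.Input.KeyRoutedEventArgs e)
{
    // GetKeyState reports the live state of a modifier key such as Ctrl.
    var ctrlState = Windows.UI.Xaml.Window.Current.CoreWindow
        .GetKeyState(Windows.System.VirtualKey.Control);
    if (!ctrlState.HasFlag(Windows.UI.Core.CoreVirtualKeyStates.Down)) return;

    switch (e.Key)
    {
        case Windows.System.VirtualKey.P: PlayMedia(); e.Handled = true; break;
        case Windows.System.VirtualKey.A: PauseMedia(); e.Handled = true; break;
        case Windows.System.VirtualKey.S: StopMedia(); e.Handled = true; break;
    }
}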

Keyboard navigation for custom controls


We recommend the use of arrow keys as keyboard shortcuts for navigating among child elements, in cases where
the child elements have a spatial relationship to each other. If tree-view nodes have separate sub-elements for
handling expand-collapse and node activation, use the left and right arrow keys to provide keyboard expand-
collapse functionality. If you have an oriented control that supports directional traversal within the control content,
use the appropriate arrow keys.
Generally you implement custom key handling for custom controls by including an override of OnKeyDown and
OnKeyUp methods as part of the class logic.
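
A minimal sketch of that pattern follows, assuming a hypothetical tree-node control whose Expand and Collapse
methods stand in for your control's own logic.
C#

public class TreeNodeControl : Windows.UI.Xaml.Controls.Control // hypothetical custom control
{
    protected override void OnKeyDown(Windows.UI.Xaml.Input.KeyRoutedEventArgs e)
    {
        switch (e.Key)
        {
            case Windows.System.VirtualKey.Right: Expand(); e.Handled = true; break;
            case Windows.System.VirtualKey.Left: Collapse(); e.Handled = true; break;
        }
        if (!e.Handled) base.OnKeyDown(e); // let unhandled keys route normally
    }

    private void Expand() { /* expand the node's children */ }
    private void Collapse() { /* collapse the node's children */ }
}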

An example of a visual state for a focus indicator


We mentioned earlier that any custom control that enables the user to focus it should have a visual focus indicator.
Usually that focus indicator is as simple as drawing a rectangle shape immediately around the control's normal
bounding rectangle. The Rectangle for visual focus is a peer element to the rest of the control's composition in a
control template, but is initially set with a Visibility value of Collapsed because the control isn't focused yet.
Then, when the control does get focus, a visual state is invoked that specifically sets the Visibility of the focus
visual to Visible. Once focus is moved elsewhere, another visual state is called, and the Visibility becomes
Collapsed.
All of the default XAML controls will display an appropriate visual focus indicator when focused (if they can be
focused). There are also potentially different looks depending on the user's selected theme (particularly if the user
is using a high contrast mode). If you're using the XAML controls in your UI and not replacing the control
templates, you don't need to do anything extra to get visual focus indicators on controls that behave and display
correctly. But if you're intending to retemplate a control, or if you're curious about how XAML controls provide
their visual focus indicators, the remainder of this section explains how this is done in XAML and in the control
logic.
Here's some example XAML that comes from the default XAML template for a Button.
XAML

<ControlTemplate TargetType="Button">
...
<Rectangle
x:Name="FocusVisualWhite"
IsHitTestVisible="False"
Stroke="{ThemeResource FocusVisualWhiteStrokeThemeBrush}"
StrokeEndLineCap="Square"
StrokeDashArray="1,1"
Opacity="0"
StrokeDashOffset="1.5"/>
<Rectangle
x:Name="FocusVisualBlack"
IsHitTestVisible="False"
Stroke="{ThemeResource FocusVisualBlackStrokeThemeBrush}"
StrokeEndLineCap="Square"
StrokeDashArray="1,1"
Opacity="0"
StrokeDashOffset="0.5"/>
...
</ControlTemplate>

So far this is just the composition. To control the focus indicator's visibility, you define visual states that toggle the
Visibility property. This is done using the VisualStateManager.VisualStateGroups attached property, as
applied to the root element that defines the composition.
XAML

<ControlTemplate TargetType="Button">
<Grid>
<VisualStateManager.VisualStateGroups>
<!--other visual state groups here-->
<VisualStateGroup x:Name="FocusStates">
<VisualState x:Name="Focused">
<Storyboard>
<DoubleAnimation
Storyboard.TargetName="FocusVisualWhite"
Storyboard.TargetProperty="Opacity"
To="1" Duration="0"/>
<DoubleAnimation
Storyboard.TargetName="FocusVisualBlack"
Storyboard.TargetProperty="Opacity"
To="1" Duration="0"/>
</VisualState>
<VisualState x:Name="Unfocused" />
<VisualState x:Name="PointerFocused" />
</VisualStateGroup>
</VisualStateManager.VisualStateGroups>
<!--composition is here-->
</Grid>
</ControlTemplate>

Note how only one of the named states adjusts Visibility directly whereas the others are seemingly empty. The
way that visual states work is that as soon as the control uses another state from the same VisualStateGroup, any
animations applied by the previous state are immediately canceled. Because the default Visibility from
composition is Collapsed, this means the rectangle will not appear. The control logic controls this by listening for
focus events like GotFocus and changing the states with GoToState. Often this is already handled for you if you
are using a default control or customizing based on a control that already has that behavior.
Keyboard accessibility and Windows Phone
A Windows Phone device typically doesn't have a dedicated, hardware keyboard. However, a Soft Input Panel (SIP)
can support several keyboard accessibility scenarios. Screen readers can read text input from the Text SIP,
including announcing deletions. Users can discover where their fingers are because the screen reader can detect
that the user is scanning keys, and it reads the scanned key name aloud. Also, some of the keyboard-oriented
accessibility concepts can be mapped to related assistive technology behaviors that don't use a keyboard at all. For
example, even though a SIP won't include a Tab key, Narrator supports a touch gesture that's the equivalent of
pressing the Tab key, so having a useful tab order through the controls in a UI is still an important accessibility
principle. Arrow keys as used for navigating the parts within complex controls are also supported through
Narrator touch gestures. Once focus has reached a control that's not for text input, Narrator supports a gesture
that invokes that control's action.
Keyboard shortcuts aren't typically relevant for Windows Phone apps, because a SIP won't include Control or Alt
keys.

Related topics
Accessibility
Keyboard interactions
Input: Touch keyboard sample
Responding to the appearance of the on-screen keyboard sample
XAML accessibility sample
High contrast themes

Windows supports high contrast themes for the OS and apps that users may choose to enable. High contrast
themes use a small palette of contrasting colors that makes the interface easier to see.
Figure 1. Calculator shown in light theme and High Contrast Black theme.

You can switch to a high contrast theme by using Settings > Ease of access > High contrast.

NOTE
Don't confuse high contrast themes with light and dark themes, which allow a much larger color palette that isn't considered
to have high contrast. For more information on light and dark themes, see the article on color.

While common controls come with full high contrast support for free, care needs to be taken while customizing
your UI. The most common high contrast bug is caused by hard-coding a color on a control inline.

<!-- Don't do this! -->
<Grid Background="#E6E6E6">

<!-- Instead, create BrandedPageBackgroundBrush and do this. -->
<Grid Background="{ThemeResource BrandedPageBackgroundBrush}">

When the #E6E6E6 color is set inline in the first example, the Grid will retain that background color in all themes. If
the user switches to the High Contrast Black theme, they'll expect your app to have a black background. Since
#E6E6E6 is almost white, some users may not be able to interact with your app.

In the second example, the {ThemeResource} markup extension is used to reference a color in the
ThemeDictionaries collection, a dedicated property of a ResourceDictionary element. ThemeDictionaries
allows XAML to automatically swap colors for you based on the user's current theme.

Theme dictionaries
When you need to change a color from its system default, create a ThemeDictionaries collection for your app.
1. Start by creating the proper plumbing, if it doesn't already exist. In App.xaml, create a ThemeDictionaries
collection, including Default and HighContrast at a minimum.
2. In Default, create the type of Brush you need, usually a SolidColorBrush. Give it an x:Key name specific to what it
is being used for.
3. Assign the Color you want for it.
4. Copy that Brush into HighContrast.

<Application.Resources>
  <ResourceDictionary>
    <ResourceDictionary.ThemeDictionaries>
      <!-- Default is a fallback if a more precise theme isn't called
           out below -->
      <ResourceDictionary x:Key="Default">
        <SolidColorBrush x:Key="BrandedPageBackgroundBrush" Color="#E6E6E6" />
      </ResourceDictionary>

      <!-- Optional, Light is used in light theme.
           If included, Default will be used for Dark theme -->
      <ResourceDictionary x:Key="Light">
        <SolidColorBrush x:Key="BrandedPageBackgroundBrush" Color="#E6E6E6" />
      </ResourceDictionary>

      <!-- HighContrast is used in all high contrast themes -->
      <ResourceDictionary x:Key="HighContrast">
        <SolidColorBrush x:Key="BrandedPageBackgroundBrush" Color="#E6E6E6" />
      </ResourceDictionary>
    </ResourceDictionary.ThemeDictionaries>
  </ResourceDictionary>
</Application.Resources>

The last step is to determine what color to use in high contrast, which is covered in the next section.

NOTE
HighContrast is not the only available key name. There's also HighContrastBlack, HighContrastWhite, and
HighContrastCustom. In most cases, HighContrast is all you need.

High contrast colors


On the Settings > Ease of access > High contrast page, there are 4 high contrast themes by default.
Figure 2. After the user selects an option, the page shows a preview.
Figure 3. Every color swatch on the preview can be clicked to change its value. Every swatch also
directly maps to an XAML color resource.

Each SystemColor*Color resource is a variable that automatically updates color when the user switches high contrast
themes. Following are guidelines for where and when to use each resource.
RESOURCE                         USAGE

SystemColorWindowTextColor       Body copy, headings, lists; any text that can't be
                                 interacted with

SystemColorHotlightColor         Hyperlinks

SystemColorGrayTextColor         Disabled UI

SystemColorHighlightTextColor    Foreground color for text or UI that's in progress,
                                 selected, or currently being interacted with

SystemColorHighlightColor        Background color for text or UI that's in progress,
                                 selected, or currently being interacted with

SystemColorButtonTextColor       Foreground color for buttons; any UI that can be
                                 interacted with

SystemColorButtonFaceColor       Background color for buttons; any UI that can be
                                 interacted with

SystemColorWindowColor           Background of pages, panes, popups, and bars

It's often helpful to look to existing apps, Start, or the common controls to see how others have solved high
contrast design problems that are similar to your own.
Do
Respect the background/foreground pairs where possible.
Test in all 4 high contrast themes while your app is running. The user should not have to restart your app when
they switch themes.
Be consistent.
Don't
Hard code a color in the HighContrast theme; use the SystemColor*Color resources.
Choose a color resource for aesthetics. Remember, they change with the theme!
Use SystemColorGrayTextColor for body copy that's secondary or acts as a hint.
To continue the earlier example, you need to pick a resource for BrandedPageBackgroundBrush. Because the name
indicates that it will be used for a background, SystemColorWindowColor is a good choice.
<Application.Resources>
  <ResourceDictionary>
    <ResourceDictionary.ThemeDictionaries>
      <!-- Default is a fallback if a more precise theme isn't called
           out below -->
      <ResourceDictionary x:Key="Default">
        <SolidColorBrush x:Key="BrandedPageBackgroundBrush" Color="#E6E6E6" />
      </ResourceDictionary>

      <!-- Optional, Light is used in light theme.
           If included, Default will be used for Dark theme -->
      <ResourceDictionary x:Key="Light">
        <SolidColorBrush x:Key="BrandedPageBackgroundBrush" Color="#E6E6E6" />
      </ResourceDictionary>

      <!-- HighContrast is used in all high contrast themes -->
      <ResourceDictionary x:Key="HighContrast">
        <SolidColorBrush x:Key="BrandedPageBackgroundBrush" Color="{ThemeResource SystemColorWindowColor}" />
      </ResourceDictionary>
    </ResourceDictionary.ThemeDictionaries>
  </ResourceDictionary>
</Application.Resources>

Later in your app, you can now set the background.

<Grid Background="{ThemeResource BrandedPageBackgroundBrush}">

Note how {ThemeResource} is used twice, once to reference SystemColorWindowColor and again to reference
BrandedPageBackgroundBrush. Both are required for your app to theme correctly at run time. This is a good time to
test out the functionality in your app. The Grid's background will automatically update as you switch to a high
contrast theme. It will also update when switching between different high contrast themes.

When to use borders


Pages, panes, popups, and bars should all use SystemColorWindowColor for their background in high contrast. Add a
high contrast-only border where necessary to preserve important boundaries in your UI.
Figure 4. The navigation pane and the page both share the same background color in high contrast. A
high contrast-only border to divide them is essential.

List items
In high contrast, items in a ListView have their background set to SystemColorHighlightColor when they are hovered,
pressed, or selected. Complex list items commonly have a bug where the content of the list item fails to invert its
color when the item is hovered, pressed, or selected. This makes the item impossible to read.
Figure 5. A simple list in light theme (left) and High Contrast Black theme (right). The second item is
selected; note how its text color is inverted in high contrast.

List items with colored text


One culprit is setting TextBlock.Foreground in the ListView's DataTemplate. This is commonly done to establish
visual hierarchy. The Foreground property is set on the ListViewItem, and TextBlocks in the DataTemplate inherit
the correct Foreground color when the item is hovered, pressed, or selected. However, setting Foreground breaks
the inheritance.
Figure 6. Complex list in light theme (left) and High Contrast Black theme (right). Note that in high
contrast, the second line of the selected item failed to invert.

You can work around this by setting Foreground conditionally via a Style that's in a ThemeDictionaries collection.
Because the Foreground is not set by SecondaryBodyTextBlockStyle in HighContrast, its color will correctly invert.
<!-- In App.xaml... -->
<ResourceDictionary.ThemeDictionaries>
  <ResourceDictionary x:Key="Default">
    <Style
      x:Key="SecondaryBodyTextBlockStyle"
      TargetType="TextBlock"
      BasedOn="{StaticResource BodyTextBlockStyle}">
      <Setter Property="Foreground" Value="{StaticResource SystemControlForegroundBaseMediumBrush}" />
    </Style>
  </ResourceDictionary>

  <ResourceDictionary x:Key="Light">
    <Style
      x:Key="SecondaryBodyTextBlockStyle"
      TargetType="TextBlock"
      BasedOn="{StaticResource BodyTextBlockStyle}">
      <Setter Property="Foreground" Value="{StaticResource SystemControlForegroundBaseMediumBrush}" />
    </Style>
  </ResourceDictionary>

  <ResourceDictionary x:Key="HighContrast">
    <!-- The Foreground Setter is omitted in HighContrast -->
    <Style
      x:Key="SecondaryBodyTextBlockStyle"
      TargetType="TextBlock"
      BasedOn="{StaticResource BodyTextBlockStyle}" />
  </ResourceDictionary>
</ResourceDictionary.ThemeDictionaries>

<!-- Usage in your DataTemplate... -->
<DataTemplate>
  <StackPanel>
    <TextBlock Style="{StaticResource BodyTextBlockStyle}" Text="Double line list item" />

    <!-- Note how ThemeResource is used to reference the Style -->
    <TextBlock Style="{ThemeResource SecondaryBodyTextBlockStyle}" Text="Second line of text" />
  </StackPanel>
</DataTemplate>

List items with buttons and links


Sometimes list items have more complex controls in them, such as HyperlinkButton or Button. These controls have
their own states for hovered, pressed, and sometimes selected, which don't work well on top of a list item. Hyperlinks
are also yellow in High Contrast Black, which makes them difficult to read when a list item is hovered, pressed, or
selected.
Figure 7. Note how the hyperlink is difficult to read in high contrast.

A solution is to set the background of the DataTemplate to SystemColorWindowColor in high contrast. This creates the
effect of a border in high contrast.
<!-- In App.xaml... -->
<ResourceDictionary.ThemeDictionaries>
  <ResourceDictionary x:Key="Default">
    <SolidColorBrush x:Key="HighContrastOnlyBackgroundBrush" Color="Transparent" />
  </ResourceDictionary>

  <ResourceDictionary x:Key="HighContrast">
    <SolidColorBrush x:Key="HighContrastOnlyBackgroundBrush" Color="{ThemeResource SystemColorWindowColor}" />
  </ResourceDictionary>
</ResourceDictionary.ThemeDictionaries>

<!-- Usage in your ListView... -->
<ListView>
  <ListView.ItemContainerStyle>
    <Style TargetType="ListViewItem">
      <!-- Causes the DataTemplate to fill the entire width and height
           of the list item -->
      <Setter Property="HorizontalContentAlignment" Value="Stretch" />
      <Setter Property="VerticalContentAlignment" Value="Stretch" />

      <!-- Padding is handled in the DataTemplate -->
      <Setter Property="Padding" Value="0" />
    </Style>
  </ListView.ItemContainerStyle>
  <ListView.ItemTemplate>
    <DataTemplate>
      <!-- Margin of 2px allows some of the ListViewItem's background
           to shine through. An additional left padding of 10px puts the
           content a total of 12px from the left edge -->
      <StackPanel
        Margin="2,2,2,2"
        Padding="10,0,0,0"
        Background="{ThemeResource HighContrastOnlyBackgroundBrush}">

        <!-- Foreground is explicitly set so that it doesn't
             disappear on hovered, pressed, or selected -->
        <TextBlock
          Foreground="{ThemeResource SystemControlForegroundBaseHighBrush}"
          Text="Double line list item" />

        <HyperlinkButton Content="Hyperlink" />
      </StackPanel>
    </DataTemplate>
  </ListView.ItemTemplate>
</ListView>

Figure 8. The bordered effect is a good fit when you have more complex controls in your list items.

Detecting high contrast


You can programmatically check if the current theme is a high contrast theme by using members of the
AccessibilitySettings class.
NOTE
Make sure you call the AccessibilitySettings constructor from a scope where the app is initialized and is already displaying
content.
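
For example, a minimal sketch of such a check (the event-handler body is left for your app to fill in):
C#

// Call after the app is initialized and displaying content.
var accessibilitySettings = new Windows.UI.ViewManagement.AccessibilitySettings();

if (accessibilitySettings.HighContrast)
{
    // The name of the active theme, for example "High Contrast Black".
    string scheme = accessibilitySettings.HighContrastScheme;
}

// React when the user switches themes while the app is running.
accessibilitySettings.HighContrastChanged += (sender, args) =>
{
    // Re-query sender.HighContrast and sender.HighContrastScheme here.
};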

Related topics
Accessibility
UI contrast and settings sample
XAML accessibility sample
XAML high contrast sample
AccessibilitySettings
Accessible text requirements

This topic describes best practices for accessibility of text in an app, by ensuring that colors and backgrounds
satisfy the necessary contrast ratio. This topic also discusses the Microsoft UI Automation roles that text elements
in a Universal Windows Platform (UWP) app can have, and best practices for text in graphics.

Contrast ratios
Although users always have the option to switch to a high-contrast mode, your app design for text should regard
that option as a last resort. A much better practice is to make sure that your app text meets certain established
guidelines for the level of contrast between text and its background. Evaluation of the level of contrast is based on
deterministic techniques that do not consider color hue. For example, if you have red text on a green background,
that text might not be readable to someone with a color blindness impairment. Checking and correcting the
contrast ratio can prevent these types of accessibility issues.
The recommendations for text contrast documented here are based on a web accessibility standard, G18: Ensuring
that a contrast ratio of at least 4.5:1 exists between text (and images of text) and background behind the text. This
guidance exists in the W3C Techniques for WCAG 2.0 specification.
To be considered accessible, visible text must have a minimum luminosity contrast ratio of 4.5:1 against the
background. Exceptions include logos and incidental text, such as text that is part of an inactive UI component.
Text that is decorative and conveys no information is excluded. For example, if random words are used to create a
background, and the words can be rearranged or substituted without changing meaning, the words are
considered to be decorative and do not need to meet this criterion.
Use color contrast tools to verify that the visible text contrast ratio is acceptable. See Techniques for WCAG 2.0
G18 (Resources section) for tools that can test contrast ratios.

NOTE
Some of the tools listed by Techniques for WCAG 2.0 G18 can't be used interactively with a UWP app. You may need to
enter foreground and background color values manually in the tool, or make screen captures of app UI and then run the
contrast ratio tool over the screen capture image.
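
When you must enter values manually, you can also compute the ratio yourself. The following sketch implements the
WCAG 2.0 relative-luminance and contrast-ratio formulas:
C#

static class ContrastChecker
{
    // Linearize an 8-bit sRGB channel per the WCAG 2.0 definition.
    private static double Channel(byte c)
    {
        double s = c / 255.0;
        return s <= 0.03928 ? s / 12.92 : System.Math.Pow((s + 0.055) / 1.055, 2.4);
    }

    // Relative luminance of an sRGB color.
    public static double Luminance(byte r, byte g, byte b) =>
        0.2126 * Channel(r) + 0.7152 * Channel(g) + 0.0722 * Channel(b);

    // Contrast ratio (lighter + 0.05) / (darker + 0.05); accessible text
    // needs at least 4.5:1.
    public static double Ratio(double l1, double l2) =>
        (System.Math.Max(l1, l2) + 0.05) / (System.Math.Min(l1, l2) + 0.05);
}

For example, #333333 text on a #FFFFFF background evaluates to roughly 12.6:1, comfortably above the 4.5:1 minimum.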

Text element roles


A UWP app can use these default elements (commonly called text elements or textedit controls):
TextBlock: role is Text
TextBox: role is Edit
RichTextBlock (and overflow class RichTextBlockOverflow): role is Text
RichEditBox: role is Edit
When a control reports that it has a role of Edit, assistive technologies assume that there are ways for users to
change the values. So if you put static text in a TextBox, you are misreporting the role and thus misreporting the
structure of your app to the accessibility user.
In the text models for XAML, there are two elements that are primarily used for static text, TextBlock and
RichTextBlock. Neither of these are a Control subclass, and as such neither of them are keyboard-focusable or
can appear in the tab order. But that does not mean that assistive technologies can't or won't read them. Screen
readers are typically designed to support multiple modes of reading the content in an app, including a dedicated
reading mode or navigation patterns that go beyond focus and the tab order, like a "virtual cursor". So don't put
your static text into focusable containers just so that tab order gets the user there. Assistive technology users
expect that anything in the tab order is interactive, and if they encounter static text there, that is more confusing
than helpful. You should test this out yourself with Narrator to get a sense of the user experience with your app
when using a screen reader to examine your app's static text.

Auto-suggest accessibility
When a user types into an entry field and a list of potential suggestions appears, this type of scenario is called
auto-suggest. This is common in the To: line of a mail field, the Cortana search box in Windows, the URL entry
field in Microsoft Edge, the location entry field in the Weather app, and so on. If you are using a XAML
AutoSuggestBox or the HTML intrinsic controls, then this experience is already hooked up for you by default. To
make this experience accessible the entry field and the list must be associated. This is explained in the
Implementing auto-suggest section.
Narrator has been updated to make this type of experience accessible with a special suggestions mode. At a high
level, when the edit field and list are connected properly the end user will:
Know the list is present and when the list closes
Know how many suggestions are available
Know the selected item, if any
Be able to move Narrator focus to the list
Be able to navigate through a suggestion with all other reading modes

Example of a suggestion list


Implementing auto-suggest
To make this experience accessible the entry field and the list must be associated in the UIA tree. This association is
done with the UIA_ControllerForPropertyId property in desktop apps or the ControlledPeers property in UWP
apps.
At a high level there are 2 types of auto-suggest experiences.
Default selection
If a default selection is made in the list, Narrator looks for a UIA_SelectionItem_ElementSelectedEventId event
in a desktop app, or the AutomationEvents.SelectionItemPatternOnElementSelected event to be fired in a
UWP app. Every time the selection changes, when the user types another letter and the suggestions have been
updated or when a user navigates through the list, the ElementSelected event should be fired.
Example where there is a default selection
No default selection
If there is no default selection, such as in the Weather app's location box, then Narrator looks for the desktop
UIA_LayoutInvalidatedEventId event or the UWP LayoutInvalidated event to be fired on the list every time
the list is updated.

Example where there is no default selection


XAML implementation
If you are using the default XAML AutoSuggestBox, then everything is already hooked up for you. If you are
making your own auto-suggest experience using a TextBox and a list then you will need to set the list as
AutomationProperties.ControlledPeers on the TextBox. You must fire the AutomationPropertyChanged
event for the ControlledPeers property every time you add or remove this property and also fire your own
SelectionItemPatternOnElementSelected events or LayoutInvalidated events depending on your type of
scenario, which was explained previously in this article.
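For example, here's a minimal sketch of that wiring. The names suggestionInput (a TextBox) and suggestionList (the list element) are assumptions for illustration.

C#

using Windows.UI.Xaml.Automation;
using Windows.UI.Xaml.Automation.Peers;

// Associate the suggestion list with the entry field in the UIA tree.
AutomationProperties.GetControlledPeers(suggestionInput).Add(suggestionList);

// Report the change to the ControlledPeers property.
var inputPeer = FrameworkElementAutomationPeer.FromElement(suggestionInput);
inputPeer?.RaisePropertyChangedEvent(
    AutomationElementIdentifiers.ControlledPeersProperty,
    null,
    AutomationProperties.GetControlledPeers(suggestionInput));

// With no default selection, also raise LayoutInvalidated on the list
// whenever the suggestions are updated.
var listPeer = FrameworkElementAutomationPeer.FromElement(suggestionList);
listPeer?.RaiseAutomationEvent(AutomationEvents.LayoutInvalidated);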
HTML implementation
If you are using the intrinsic controls in HTML, then the UIA implementation has already been mapped for you.
Below is an example of an implementation that is already hooked up for you:

<label>Sites <input id="input1" type="text" list="datalist1" /></label>

<datalist id="datalist1">
    <option value="http://www.google.com/" label="Google"></option>
    <option value="http://www.reddit.com/" label="Reddit"></option>
</datalist>

If you are creating your own controls, you must set up your own ARIA controls, which are explained in the W3C
standards.

Text in graphics
Whenever possible, avoid including text in a graphic. For example, any text that you include in the image source
file that is displayed in the app as an Image element is not automatically accessible or readable by assistive
technologies. If you must use text in graphics, make sure that the AutomationProperties.Name value that you
provide as the equivalent of "alt text" includes that text or a summary of the text's meaning. Similar considerations
apply if you are creating text characters from vectors as part of a Path, or by using Glyphs.
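For example, here's a minimal sketch of supplying that equivalent text from code-behind; logoImage is an assumed Image element whose source file renders the words "Contoso Coffee".

C#

using Windows.UI.Xaml.Automation;

// Surface the text baked into the bitmap as the element's UIA Name.
AutomationProperties.SetName(logoImage, "Contoso Coffee");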
Text font size
Many readers have difficulty reading text in an app when that text is using a text font size that's simply too small
for them to read. You can prevent this issue by making the text in your app's UI reasonably large in the first place.
There are also assistive technologies that are part of Windows, and these enable users to change the view sizes of
apps, or the display in general.
Some users change dots per inch (dpi) values of their primary display as an accessibility option. That option is
available from Make things on the screen larger in Ease of Access, which redirects to a Control Panel UI
for Appearance and Personalization / Display. Exactly which sizing options are available can vary because
this depends on the capabilities of the display device.
The Magnifier tool can enlarge a selected area of the UI. However, it's difficult to use the Magnifier tool for
reading text.

Text scale factor


Various text elements and controls have an IsTextScaleFactorEnabled property. This property has the value true
by default. When its value is true, the Text scaling setting on the phone (Settings > Ease of access) causes the
text in that element to be scaled up. The scaling affects text that has a small FontSize to a greater degree than it
affects text that has a large FontSize. But you can disable that automatic enlargement by setting an element's
IsTextScaleFactorEnabled property to false. Try this markup, adjust the Text scaling setting on the phone, and
see what happens to the TextBlocks:
XAML

<TextBlock Text="In this case, IsTextScaleFactorEnabled has been left set to its default value of true."
Style="{StaticResource BodyTextBlockStyle}"/>

<TextBlock Text="In this case, IsTextScaleFactorEnabled has been set to false."


Style="{StaticResource BodyTextBlockStyle}" IsTextScaleFactorEnabled="False"/>

Please don't disable automatic enlargement routinely, though, because scaling UI text universally across all apps is
an important accessibility experience for users and they will expect it to work in your app too.
You can also use the TextScaleFactorChanged event and the TextScaleFactor property to find out about
changes to the Text scaling setting on the phone. Here's how:
C#

{
    ...
    var uiSettings = new Windows.UI.ViewManagement.UISettings();
    uiSettings.TextScaleFactorChanged += UISettings_TextScaleFactorChanged;
    ...
}

private async void UISettings_TextScaleFactorChanged(Windows.UI.ViewManagement.UISettings sender, object args)
{
    var messageDialog = new Windows.UI.Popups.MessageDialog(
        string.Format("It's now {0}", sender.TextScaleFactor),
        "The text scale factor has changed");
    await messageDialog.ShowAsync();
}

The value of TextScaleFactor is a double in the range [1,2]. The smallest text is scaled up by this amount. You
might be able to use the value to, say, scale graphics to match the text. But remember that not all text is scaled by
the same factor. Generally speaking, the larger text is to begin with, the less it's affected by scaling.
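For example, here's a minimal sketch of scaling a graphic by the current factor; decorativeIcon is an assumed element name, and 24 pixels is an arbitrary base size.

C#

var uiSettings = new Windows.UI.ViewManagement.UISettings();

// TextScaleFactor is in the range [1,2]; use it so the icon grows
// roughly in step with the scaled text beside it.
double factor = uiSettings.TextScaleFactor;
decorativeIcon.Width = 24 * factor;
decorativeIcon.Height = 24 * factor;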
These types have an IsTextScaleFactorEnabled property:
ContentPresenter
Control and derived classes
FontIcon
RichTextBlock
TextBlock
TextElement and derived classes

Related topics
Accessibility
Basic accessibility information
XAML text display sample
XAML text editing sample
XAML accessibility sample
Accessibility practices to avoid

If you want to create an accessible Universal Windows Platform (UWP) app, see this list of practices to avoid:
Avoid building custom UI elements if you can use the default Windows controls or controls that have
already implemented Microsoft UI Automation support. Standard Windows controls are accessible by default
and usually require adding only a few accessibility attributes that are app-specific. In contrast, implementing
the AutomationPeer support for a true custom control is somewhat more involved (see Custom automation
peers).
Don't put static text or other non-interactive elements into the tab order (for example, by setting the
TabIndex property for an element that is not interactive). If non-interactive elements are in the tab order, that
is against keyboard accessibility guidelines because it decreases efficiency of keyboard navigation for users.
Many assistive technologies use tab order and the ability to focus an element as part of their logic for how to
present an app's interface to the assistive technology user. Text-only elements in the tab order can confuse
users who expect only interactive elements in the tab order (buttons, check boxes, text input fields, combo
boxes, lists, and so on).
Avoid using absolute positioning of UI elements (such as in a Canvas element) because the presentation
order often differs from the child element declaration order (which is the de facto logical order). Whenever
possible, arrange UI elements in document or logical order to ensure that screen readers can read those
elements in the correct order. If the visible order of UI elements can diverge from the document or logical
order, use explicit tab index values (set TabIndex) to define the correct reading order.
Don't use color as the only way to convey information. Users who are color blind cannot receive
information that is conveyed only through color, such as in a color status indicator. Include other visual cues,
preferably text, to ensure that information is accessible.
Don't automatically refresh an entire app canvas unless it is really necessary for app functionality. If
you need to automatically refresh page content, update only certain areas of the page. Assistive
technologies generally must assume that a refreshed app canvas is a totally new structure, even if the
effective changes were minimal. The cost of this to the assistive technology user is that any document view
or description of the refreshed app now must be recreated and presented to the user again.
A deliberate page navigation that is initiated by the user is a legitimate case for refreshing the app's
structure. But make sure that the UI item that initiates the navigation is correctly identified or named to give
some indication that invoking it will result in a context change and page reload.

NOTE
If you do refresh content within a region, consider setting the AutomationProperties.LiveSetting accessibility
property on that element to one of the non-default settings Polite or Assertive. Some assistive technologies can
map this setting to the Accessible Rich Internet Applications (ARIA) concept of live regions and can thus inform the
user that a region of content has changed. (See the sketch after this list.)

Don't use UI elements that flash more than three times per second. Flashing elements can cause
some people to have seizures. It is best to avoid using UI elements that flash.
Don't change user context or activate functionality automatically. Context or activation changes should
occur only when the user takes a direct action on a UI element that has focus. Changes in user context include
changing focus, displaying new content, and navigating to a different page. Making context changes without
involving the user can be disorienting for users who have disabilities. The exceptions to this requirement
include displaying submenus, validating forms, displaying help text in another control, and changing context in
response to an asynchronous event.
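Here's the sketch mentioned in the note above: a minimal example of marking a region as a polite live region and announcing an in-place update. The TextBlock name statusText is an assumption for illustration.

C#

using Windows.UI.Xaml.Automation;
using Windows.UI.Xaml.Automation.Peers;

// Mark the element as a polite live region.
AutomationProperties.SetLiveSetting(statusText, AutomationLiveSetting.Polite);

// Later, after refreshing the content in place, announce the change.
statusText.Text = "Download complete";
var peer = FrameworkElementAutomationPeer.FromElement(statusText);
peer?.RaiseAutomationEvent(AutomationEvents.LiveRegionChanged);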

Related topics
Accessibility
Accessibility in the Store
Accessibility checklist
Custom automation peers

Describes the concept of automation peers for Microsoft UI Automation, and how you can provide automation
support for your own custom UI class.
UI Automation provides a framework that automation clients can use to examine or operate the user interfaces of
a variety of UI platforms and frameworks. If you are writing a Universal Windows Platform (UWP) app, the classes
that you use for your UI already provide UI Automation support. You can derive from existing, non-sealed classes
to define a new kind of UI control or support class. In the process of doing so, your class might add behavior that
should have accessibility support but that the default UI Automation support does not cover. In this case, you
should extend the existing UI Automation support by deriving from the AutomationPeer class that the base
implementation used, adding any necessary support to your peer implementation, and informing the Universal
Windows Platform (UWP) control infrastructure that it should create your new peer.
UI Automation enables not only accessibility applications and assistive technologies, such as screen readers, but
also quality-assurance (test) code. In either scenario, UI Automation clients can examine user-interface elements
and simulate user interaction with your app from other code outside your app. For info about UI Automation
across all platforms and in its wider meaning, see UI Automation Overview.
There are two distinct audiences who use the UI Automation framework.
UI Automation clients call UI Automation APIs to learn about all of the UI that is currently displayed to the
user. For example, an assistive technology such as a screen reader acts as a UI Automation client. The UI is
presented as a tree of automation elements that are related. The UI Automation client might be interested in
just one app at a time, or in the entire tree. The UI Automation client can use UI Automation APIs to navigate
the tree and to read or change information in the automation elements.
UI Automation providers contribute information to the UI Automation tree, by implementing APIs that
expose the elements in the UI that they introduced as part of their app. When you create a new control, you
should now act as a participant in the UI Automation provider scenario. As a provider, you should ensure that
all UI Automation clients can use the UI Automation framework to interact with your control for both
accessibility and testing purposes.
Typically there are parallel APIs in the UI Automation framework: one API for UI Automation clients and another,
similarly named API for UI Automation providers. For the most part, this topic covers the APIs for the UI
Automation provider, and specifically the classes and interfaces that enable provider extensibility in that UI
framework. Occasionally we mention UI Automation APIs that the UI Automation clients use, to provide some
perspective, or provide a lookup table that correlates the client and provider APIs. For more info about the client
perspective, see UI Automation Client Programmer's Guide.

NOTE
UI Automation clients don't typically use managed code and aren't typically implemented as a UWP app (they are usually
desktop apps). UI Automation is based on a standard and not a specific implementation or framework. Many existing UI
Automation clients, including assistive technology products such as screen readers, use Component Object Model (COM)
interfaces to interact with UI Automation, the system, and the apps that run in child windows. For more info on the COM
interfaces and how to write a UI Automation client using COM, see UI Automation Fundamentals.

Determining the existing state of UI Automation support for your custom UI class
Before you attempt to implement an automation peer for a custom control, you should test whether the base class
and its automation peer already provide the accessibility or automation support that you need. In many cases, the
combination of the FrameworkElementAutomationPeer implementations, specific peers, and the patterns
they implement can provide a basic but satisfactory accessibility experience. Whether this is true depends on how
much your control's exposed object model differs from that of its base class. Also, this depends on
whether your additions to base class functionality correlate to new UI elements in the template contract or to the
visual appearance of the control. In some cases your changes might introduce new aspects of user experience that
require additional accessibility support.
Even if using the existing base peer class provides the basic accessibility support, it is still a best practice to define
a peer so that you can report precise ClassName information to UI Automation for automated testing scenarios.
This consideration is especially important if you are writing a control that is intended for third-party consumption.

Automation peer classes


The UWP builds on existing UI Automation techniques and conventions used by previous managed-code UI
frameworks such as Windows Forms, Windows Presentation Foundation (WPF) and Microsoft Silverlight. Many of
the control classes and their function and purpose also have their origin in a previous UI framework.
By convention, peer class names begin with the control class name and end with "AutomationPeer". For example,
ButtonAutomationPeer is the peer class for the Button control class.

NOTE
For purposes of this topic, we treat the properties that are related to accessibility as being more important when you
implement a control peer. But for a more general concept of UI Automation support, you should implement a peer in
accordance with recommendations as documented by the UI Automation Provider Programmer's Guide and UI Automation
Fundamentals. Those topics don't cover the specific AutomationPeer APIs that you would use to provide the information
in the UWP framework for UI Automation, but they do describe the properties that identify your class or provide other
information or interaction.

Peers, patterns and control types


A control pattern is an interface implementation that exposes a particular aspect of a control's functionality to a UI
Automation client. UI Automation clients use the properties and methods exposed through a control pattern to
retrieve information about capabilities of the control, or to manipulate the control's behavior at run time.
Control patterns provide a way to categorize and expose a control's functionality independent of the control type
or the appearance of the control. For example, a control that presents a tabular interface uses the Grid control
pattern to expose the number of rows and columns in the table, and to enable a UI Automation client to retrieve
items from the table. As other examples, the UI Automation client can use the Invoke control pattern for controls
that can be invoked, such as buttons, and the Scroll control pattern for controls that have scroll bars, such as list
boxes, list views, or combo boxes. Each control pattern represents a separate type of functionality, and control
patterns can be combined to describe the full set of functionality supported by a particular control.
Control patterns relate to UI as interfaces relate to COM objects. In COM, you can query an object to ask what
interfaces it supports and then use those interfaces to access functionality. In UI Automation, UI Automation clients
can query a UI Automation element to find out which control patterns it supports, and then interact with the
element and its peered control through the properties, methods, events, and structures exposed by the supported
control patterns.
One of the main purposes of an automation peer is to report to a UI Automation client which control patterns the
UI element can support through its peer. To do this, UI Automation providers implement new peers that change
the GetPattern method behavior by overriding the GetPatternCore method. UI Automation clients make calls
that the UI Automation provider maps to calling GetPattern. UI Automation clients query for each specific pattern
that they want to interact with. If the peer supports the pattern, it returns an object reference to itself; otherwise it
returns null. If the return is not null, the UI Automation client expects that it can call APIs of the pattern interface
as a client, in order to interact with that control pattern.
A control type is a way to broadly define the functionality of a control that the peer represents. This is a different
concept than a control pattern because while a pattern informs UI Automation what info it can get or what actions
it can perform through a particular interface, the control type exists one level above that. Each control type has
guidance about these aspects of UI Automation:
UI Automation control patterns: A control type might support more than one pattern, each of which represents
a different classification of info or interaction. Each control type has a set of control patterns that the control
must support, a set that is optional, and a set that the control must not support.
UI Automation property values: Each control type has a set of properties that the control must support. These
are the general properties, as described in UI Automation Properties Overview, not the ones that are pattern-
specific.
UI Automation events: Each control type has a set of events that the control must support. Again these are
general, not pattern-specific, as described in UI Automation Events Overview.
UI Automation tree structure: Each control type defines how the control must appear in the UI Automation tree
structure.
Regardless of how automation peers for the framework are implemented, UI Automation client functionality isn't
tied to the UWP, and in fact it's likely that existing UI Automation clients such as assistive technologies will use
other programming models, such as COM. In COM, clients can QueryInterface for the COM control pattern
interface that implements the requested pattern or the general UI Automation framework for properties, events or
tree examination. For the patterns, the UI Automation framework marshals that interface code across into UWP
code running against the app's UI Automation provider and the relevant peer.
When you implement control patterns for a managed-code framework such as a Windows Store app using C# or
Microsoft Visual Basic, you can use .NET Framework interfaces to represent these patterns instead of using the
COM interface representation. For example, the UI Automation pattern interface for a Microsoft .NET provider
implementation of the Invoke pattern is IInvokeProvider.
For a list of control patterns, provider interfaces, and their purpose, see Control patterns and interfaces. For the list
of the control types, see UI Automation Control Types Overview.
Guidance for how to implement control patterns
The control patterns and what they're intended for are part of a larger definition of the UI Automation framework,
and don't just apply to the accessibility support for a Windows Store app. When you implement a control pattern
you should make sure you're implementing it in a way that matches the guidance as documented on MSDN and
also in the UI Automation specification. If you're looking for guidance, you can generally use the MSDN topics and
won't need to refer to the specification. Guidance for each pattern is documented here: Implementing UI
Automation Control Patterns. You'll notice that each topic under this area has an "Implementation Guidelines and
Conventions" section and "Required Members" section. The guidance usually refers to specific APIs of the relevant
control pattern interface in the Control Pattern Interfaces for Providers reference. Those interfaces are the
native/COM interfaces (and their APIs use COM-style syntax). But everything you see there has an equivalent in
the Windows.UI.Xaml.Automation.Provider namespace.
If you're using the default automation peers and expanding on their behavior, those peers have already been
written in conformance to UI Automation guidelines. If they support control patterns, you can rely on that pattern
support conforming with guidance at Implementing UI Automation Control Patterns. If a control peer reports that
it's representative of a control type defined by UI Automation, then the guidance documented at Supporting UI
Automation Control Types has been followed by that peer.
Nevertheless you might need additional guidance for control patterns or control types in order to follow the UI
Automation recommendations in your peer implementation. That would be particularly true if you're
implementing pattern or control type support that doesn't yet exist as a default implementation in a UWP control.
For example, the pattern for annotations isn't implemented in any of the default XAML controls. But you might
have an app that uses annotations extensively and therefore you want to surface that functionality to be
accessible. For this scenario, your peer should implement IAnnotationProvider and should probably report itself
as the Document control type with appropriate properties to indicate that your documents support annotation.
We recommend that you use the guidance that you see for the patterns under Implementing UI Automation
Control Patterns or control types under Supporting UI Automation Control Types as orientation and general
guidance. You might even try following some of the API links for descriptions and remarks as to the purpose of
the APIs. But for syntax specifics that are needed for UWP app programming, find the equivalent API within the
Windows.UI.Xaml.Automation.Provider namespace and use those reference pages for more info.

Built-in automation peer classes


In general, elements implement an automation peer class if they accept UI activity from the user, or if they contain
information needed by users of assistive technologies that represent the interactive or meaningful UI of apps. Not
all UWP visual elements have automation peers. Examples of classes that implement automation peers are
Button and TextBox. Examples of classes that do not implement automation peers are Border and classes based
on Panel, such as Grid and Canvas. A Panel has no peer because it is providing a layout behavior that is visual
only. There is no accessibility-relevant way for the user to interact with a Panel. Whatever child elements a Panel
contains are instead reported to UI Automation trees as child elements of the next available parent in the tree that
has a peer or element representation.

UI Automation and UWP process boundaries


Typically, UI Automation client code that accesses a UWP app runs out-of-process. The UI Automation framework
infrastructure enables information to get across the process boundary. This concept is explained in more detail in
UI Automation Fundamentals.

OnCreateAutomationPeer
All classes that derive from UIElement contain the protected virtual method OnCreateAutomationPeer. The
object initialization sequence for automation peers calls OnCreateAutomationPeer to get the automation peer
object for each control and thus to construct a UI Automation tree for run-time use. UI Automation code can use
the peer to get information about a control's characteristics and features and to simulate interactive use by means
of its control patterns. A custom control that supports automation must override OnCreateAutomationPeer and
return an instance of a class that derives from AutomationPeer. For example, if a custom control derives from
the ButtonBase class, the object returned by OnCreateAutomationPeer should derive from
ButtonBaseAutomationPeer.
If you're writing a custom control class and intend to also supply a new automation peer, you should override the
OnCreateAutomationPeer method for your custom control so that it returns a new instance of your peer. Your
peer class must derive directly or indirectly from AutomationPeer.
For example, the following code declares that the custom control NumericUpDown should use the peer
NumericUpDownPeer for UI Automation purposes.

C#
using Windows.UI.Xaml.Automation.Peers;
...
public class NumericUpDown : RangeBase
{
    public NumericUpDown()
    {
        // other initialization; DefaultStyleKey etc.
    }
    ...
    protected override AutomationPeer OnCreateAutomationPeer()
    {
        return new NumericUpDownAutomationPeer(this);
    }
}

Visual Basic

Public Class NumericUpDown
    Inherits RangeBase

    ' other initialization; DefaultStyleKey etc.
    Public Sub New()
    End Sub

    Protected Overrides Function OnCreateAutomationPeer() As AutomationPeer
        Return New NumericUpDownAutomationPeer(Me)
    End Function
End Class

C++

//.h
public ref class NumericUpDown sealed : Windows::UI::Xaml::Controls::Primitives::RangeBase
{
    // other initialization not shown
protected:
    virtual AutomationPeer^ OnCreateAutomationPeer() override
    {
        // Return an instance of the peer class, not the control class.
        return ref new NumericUpDownAutomationPeer(this);
    }
};

NOTE
The OnCreateAutomationPeer implementation should do nothing more than initialize a new instance of your custom
automation peer, passing the calling control as owner, and return that instance. Do not attempt additional logic in this
method. In particular, any logic that could potentially lead to destruction of the AutomationPeer within the same call may
result in unexpected runtime behavior.

In typical implementations of OnCreateAutomationPeer, the owner is specified as this or Me because the
method override is in the same scope as the rest of the control class definition.
The actual peer class definition can be done in the same code file as the control or in a separate code file. The peer
definitions all exist in the Windows.UI.Xaml.Automation.Peers namespace that is a separate namespace from
the controls that they provide peers for. You can choose to declare your peers in separate namespaces also, as
long as you reference the necessary namespaces for the OnCreateAutomationPeer method call.
Choosing the correct peer base class
Make sure that your AutomationPeer is derived from a base class that gives you the best match for the existing
peer logic of the control class you are deriving from. In the case of the previous example, because NumericUpDown
derives from RangeBase, there is a RangeBaseAutomationPeer class available that you should base your peer
on. By using the closest matching peer class in parallel to how you derive the control itself, you can avoid
overriding at least some of the IRangeValueProvider functionality because the base peer class already
implements it.
The base Control class does not have a corresponding peer class. If you need a peer class to correspond to a
custom control that derives from Control, derive the custom peer class from
FrameworkElementAutomationPeer.
If you derive from ContentControl directly, that class has no default automation peer behavior because there is
no OnCreateAutomationPeer implementation that references a peer class. So make sure either to implement
OnCreateAutomationPeer to use your own peer, or to use FrameworkElementAutomationPeer as the peer if
that level of accessibility support is adequate for your control.

NOTE
You typically derive from FrameworkElementAutomationPeer rather than directly from AutomationPeer. If you derive
directly from AutomationPeer, you'll need to duplicate a lot of basic accessibility support that would otherwise come from
FrameworkElementAutomationPeer.

Initialization of a custom peer class


The automation peer should define a type-safe constructor that uses an instance of the owner control for base
initialization. In the next example, the implementation passes the owner value on to the
RangeBaseAutomationPeer base, and ultimately it is the FrameworkElementAutomationPeer that actually
uses owner to set FrameworkElementAutomationPeer.Owner.
C#

public NumericUpDownAutomationPeer(NumericUpDown owner) : base(owner)
{
}

Visual Basic

Public Sub New(owner As NumericUpDown)
    MyBase.New(owner)
End Sub

C++

//.h
public ref class NumericUpDownAutomationPeer sealed : Windows::UI::Xaml::Automation::Peers::RangeBaseAutomationPeer
//.cpp
public: NumericUpDownAutomationPeer(NumericUpDown^ owner);

Core methods of AutomationPeer


For UWP infrastructure reasons, the overridable methods of an automation peer are part of a pair of methods: the
public access method that the UI Automation provider uses as a forwarding point for UI Automation clients, and
the protected "Core" customization method that a UWP class can override to influence the behavior. The method
pair is wired together by default in such a way that the call to the access method always invokes the parallel
"Core" method that has the provider implementation, or as a fallback, invokes a default implementation from the
base classes.
When implementing a peer for a custom control, override any of the "Core" methods from the base automation
peer class where you want to expose behavior that is unique to your custom control. UI Automation code gets
information about your control by calling public methods of the peer class. To provide information about your
control, override each method with a name that ends with "Core" when your control implementation and design
creates accessibility scenarios or other UI Automation scenarios that differ from what's supported by the base
automation peer class.
At a minimum, whenever you define a new peer class, implement the GetClassNameCore method, as shown in
the next example.
C#

protected override string GetClassNameCore()
{
    return "NumericUpDown";
}

NOTE
You might want to store the strings as constants rather than directly in the method body, but that is up to you. For
GetClassNameCore, you won't need to localize this string. The LocalizedControlType property is used any time a
localized string is needed by a UI Automation client, not ClassName.

GetAutomationControlType
Some assistive technologies use the GetAutomationControlType value directly when reporting characteristics
of the items in a UI Automation tree, as additional information beyond the UI Automation Name. If your control is
significantly different from the control you are deriving from and you want to report a different control type from
what is reported by the base peer class used by the control, you must implement a peer and override
GetAutomationControlTypeCore in your peer implementation. This is particularly important if you derive from
a generalized base class such as ItemsControl or ContentControl, where the base peer doesn't provide precise
information about control type.
Your implementation of GetAutomationControlTypeCore describes your control by returning an
AutomationControlType value. Although you can return AutomationControlType.Custom, you should return
one of the more specific control types if it accurately describes your control's main scenarios. Here's an example.
C#

protected override AutomationControlType GetAutomationControlTypeCore()
{
    return AutomationControlType.Spinner;
}

NOTE
Unless you specify AutomationControlType.Custom, you don't have to implement GetLocalizedControlTypeCore to
provide a LocalizedControlType property value to clients. UI Automation common infrastructure provides translated
strings for every possible AutomationControlType value other than AutomationControlType.Custom.

GetPattern and GetPatternCore


A peer's implementation of GetPatternCore returns the object that supports the pattern that is requested in the
input parameter. Specifically, a UI Automation client calls a method that is forwarded to the provider's GetPattern
method, and specifies a PatternInterface enumeration value that names the requested pattern. Your override of
GetPatternCore should return the object that implements the specified pattern. That object is the peer itself,
because the peer should implement the corresponding pattern interface any time that it reports that it supports a
pattern. If your peer does not have a custom implementation of a pattern, but you know that the peer's base does
implement the pattern, you can call the base type's implementation of GetPatternCore from your
GetPatternCore. A peer's GetPatternCore should return null if a pattern is not supported by the peer. However,
instead of returning null directly from your implementation, you would usually rely on the call to the base
implementation to return null for any unsupported pattern.
When a pattern is supported, the GetPatternCore implementation can return this or Me. The expectation is that
the UI Automation client will cast the GetPattern return value to the requested pattern interface whenever it is
not null.
If a peer class inherits from another peer, and all necessary support and pattern reporting is already handled by
the base class, implementing GetPatternCore isn't necessary. For example, if you are implementing a range
control that derives from RangeBase, and your peer derives from RangeBaseAutomationPeer, that peer
returns itself for PatternInterface.RangeValue and has working implementations of the IRangeValueProvider
interface that supports the pattern.
Although it is not the literal code, this example approximates the implementation of GetPatternCore already
present in RangeBaseAutomationPeer.
C#

protected override object GetPatternCore(PatternInterface patternInterface)
{
    if (patternInterface == PatternInterface.RangeValue)
    {
        return this;
    }
    return base.GetPatternCore(patternInterface);
}

If you are implementing a peer where you don't have all the support you need from a base peer class, or you want
to change or add to the set of base-inherited patterns that your peer can support, then you should override
GetPatternCore to enable UI Automation clients to use the patterns.
For a list of the provider patterns that are available in the UWP implementation of UI Automation support, see
Windows.UI.Xaml.Automation.Provider. Each such pattern has a corresponding value of the PatternInterface
enumeration, which is how UI Automation clients request the pattern in a GetPattern call.
A peer can report that it supports more than one pattern. If so, the override should include return path logic for
each supported PatternInterface value and return the peer in each matching case. It is expected that the caller
will request only one interface at a time, and it is up to the caller to cast to the expected interface.
Here's an example of a GetPatternCore override for a custom peer. It reports the support for two patterns,
IRangeValueProvider and IToggleProvider. The control here is a media display control that can display as full-
screen (the toggle mode) and that has a progress bar within which users can select a position (the range control).
This code came from the XAML accessibility sample.
C#
protected override object GetPatternCore(PatternInterface patternInterface)
{
    if (patternInterface == PatternInterface.RangeValue)
    {
        return this;
    }
    else if (patternInterface == PatternInterface.Toggle)
    {
        return this;
    }
    return null;
}

Forwarding patterns from sub-elements


A GetPatternCore method implementation can also specify a sub-element or part as a pattern provider for its
host. This example mimics how ItemsControl transfers scroll-pattern handling to the peer of its internal
ScrollViewer control. To specify a sub-element for pattern handling, this code gets the sub-element object,
creates a peer for the sub-element by using the FrameworkElementAutomationPeer.CreatePeerForElement
method, and returns the new peer.
C#

protected override object GetPatternCore(PatternInterface patternInterface)
{
    if (patternInterface == PatternInterface.Scroll)
    {
        ItemsControl owner = (ItemsControl)base.Owner;
        UIElement itemsHost = owner.ItemsHost;
        ScrollViewer element = null;
        // Walk up from the items host looking for the internal ScrollViewer.
        while (itemsHost != owner)
        {
            itemsHost = VisualTreeHelper.GetParent(itemsHost) as UIElement;
            element = itemsHost as ScrollViewer;
            if (element != null)
            {
                break;
            }
        }
        if (element != null)
        {
            AutomationPeer peer = FrameworkElementAutomationPeer.CreatePeerForElement(element);
            if ((peer != null) && (peer is IScrollProvider))
            {
                return (IScrollProvider)peer;
            }
        }
    }
    return base.GetPatternCore(patternInterface);
}

Other Core methods


Your control may need to support keyboard equivalents for primary scenarios; for more info about why this might
be necessary, see Keyboard accessibility. Implementing the key support is necessarily part of the control code and
not the peer code because that is part of a control's logic, but your peer class should override the
GetAcceleratorKeyCore and GetAccessKeyCore methods to report to UI Automation clients which keys are
used. Consider that the strings that report key information might need to be localized, and should therefore come
from resources, not hard-coded strings.
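For example, here's a minimal sketch of reporting a localized accelerator key from a peer; the resource identifier NumericUpDownAcceleratorKey is an assumption for illustration.

C#

protected override string GetAcceleratorKeyCore()
{
    // Pull the displayable key string from localized resources.
    var loader = Windows.ApplicationModel.Resources.ResourceLoader.GetForCurrentView();
    return loader.GetString("NumericUpDownAcceleratorKey"); // for example, "Ctrl+Up"
}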
If you are providing a peer for a class that supports a collection, it's best to derive from both functional classes and
peer classes that already have that kind of collection support. If you can't do so, peers for controls that maintain
child collections may have to override the collection-related peer method GetChildrenCore to properly report the
parent-child relationships to the UI Automation tree.
Implement the IsContentElementCore and IsControlElementCore methods to indicate whether your control
contains data content or fulfills an interactive role in the user interface (or both). By default, both methods return
true. These settings improve the usability of assistive technologies such as screen readers, which may use these
methods to filter the automation tree. If your GetPatternCore method transfers pattern handling to a sub-
element peer, the sub-element peer's IsControlElementCore method can return false to hide the sub-element
peer from the automation tree.
Some controls may support labeling scenarios, where a text label part supplies information for a non-text part, or
a control is intended to be in a known labeling relationship with another control in the UI. If it's possible to provide
a useful class-based behavior, you can override GetLabeledByCore to provide this behavior.
GetBoundingRectangleCore and GetClickablePointCore are used mainly for automated testing scenarios. If
you want to support automated testing for your control, you might want to override these methods. This might be
desired for range-type controls, where you can't suggest just a single point because where the user clicks in
coordinate space has a different effect on a range. For example, the default ScrollBar automation peer overrides
GetClickablePointCore to return a "not a number" Point value.
GetLiveSettingCore influences the control default for the LiveSetting value for UI Automation. You might want
to override this if you want your control to return a value other than AutomationLiveSetting.Off. For more info
on what LiveSetting represents, see AutomationProperties.LiveSetting.
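For example, a minimal sketch of such an override:

C#

protected override AutomationLiveSetting GetLiveSettingCore()
{
    // Report this control as a polite live region by default.
    return AutomationLiveSetting.Polite;
}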
You might override GetOrientationCore if your control has a settable orientation property that can map to
AutomationOrientation. The ScrollBarAutomationPeer and SliderAutomationPeer classes do this.
Base implementation in FrameworkElementAutomationPeer
The base implementation of FrameworkElementAutomationPeer provides some UI Automation information
that can be interpreted from various layout and behavior properties that are defined at the framework level.
GetBoundingRectangleCore: Returns a Rect structure based on the known layout characteristics. Returns a
0-value Rect if IsOffscreen is true.
GetClickablePointCore: Returns a Point structure based on the known layout characteristics, as long as there
is a nonzero BoundingRectangle.
GetNameCore: More extensive behavior than can be summarized here; see GetNameCore. Basically, it
attempts a string conversion on any known content of a ContentControl or related classes that have content.
Also, if there is a value for LabeledBy, that item's Name value is used as the Name.
HasKeyboardFocusCore: Evaluated based on the owner's FocusState and IsEnabled properties. Elements
that aren't controls always return false.
IsEnabledCore: Evaluated based on the owner's IsEnabled property if it is a Control. Elements that aren't
controls always return true. This doesn't mean that the owner is enabled in the conventional interaction sense;
it means that the peer reports itself as enabled even though the owner has no IsEnabled property.
IsKeyboardFocusableCore: Returns true if owner is a Control; otherwise it is false.
IsOffscreenCore: A Visibility of Collapsed on the owner element or any of its parents equates to a true
value for IsOffscreen. Exception: a Popup object can be visible even if its owner's parents are not.
SetFocusCore: Calls Focus.
GetParent: Calls FrameworkElement.Parent from the owner, and looks up the appropriate peer. This isn't an
override pair with a "Core" method, so you can't change this behavior.
NOTE
Default UWP peers implement a behavior by using internal native code that implements the UWP, not necessarily by using
actual UWP code. You won't be able to see the code or logic of the implementation through common language runtime
(CLR) reflection or other techniques. You also won't see distinct reference pages for subclass-specific overrides of base peer
behavior. For example, there might be additional behavior for GetNameCore of a TextBoxAutomationPeer, which won't
be described on the AutomationPeer.GetNameCore reference page; there is no separate reference page for
TextBoxAutomationPeer.GetNameCore.
Instead, read the reference topic for the most immediate peer class, and look for implementation notes in the Remarks
section.

Peers and AutomationProperties


Your automation peer should provide appropriate default values for your control's accessibility-related
information. Note that any app code that uses the control can override some of that behavior by including
AutomationProperties attached-property values on control instances. Callers can do this either for the default
controls or for custom controls. For example, the following XAML creates a button that has two customized UI
Automation properties: <Button AutomationProperties.Name="Special" AutomationProperties.HelpText="This is a special button."/>
For more info about AutomationProperties attached properties, see Basic accessibility information.
Some of the AutomationPeer methods exist because of the general contract of how UI Automation providers are
expected to report information, but these methods are not typically implemented in control peers. This is because
that info is expected to be provided by AutomationProperties values applied to the app code that uses the
controls in a specific UI. For example, most apps would define the labeling relationship between two different
controls in the UI by applying an AutomationProperties.LabeledBy value. However, LabeledByCore is
implemented in certain peers that represent data or item relationships in a control, such as using a header part to
label a data-field part, labeling items with their containers, or similar scenarios.

Implementing patterns
Let's look at how to write a peer for a control that implements an expand-collapse behavior by implementing the
control pattern interface for expand-collapse. The peer should enable the accessibility for the expand-collapse
behavior by returning itself whenever GetPattern is called with a value of PatternInterface.ExpandCollapse.
The peer should then inherit the provider interface for that pattern (IExpandCollapseProvider) and provide
implementations for each of the members of that provider interface. In this case the interface has three members
to override: Expand, Collapse, ExpandCollapseState.
It's helpful to plan ahead for accessibility in the API design of the class itself. Whenever you have a behavior that is
potentially requested either as a result of typical interactions with a user who is working in the UI or through an
automation provider pattern, provide a single method that either the UI response or the automation pattern can
call. For example, if your control has button parts that have wired event handlers that can expand or collapse the
control, and has keyboard equivalents for those actions, have these event handlers call the same method that you
call from within the body of the Expand or Collapse implementations for IExpandCollapseProvider in the peer.
Using a common logic method can also be a useful way to make sure that your control's visual states are updated
to show logical state in a uniform way, regardless of how the behavior was invoked.
A typical implementation is that the provider APIs first call Owner for access to the control instance at run time.
Then the necessary behavior methods can be called on that object.
C#
public class IndexCardAutomationPeer : FrameworkElementAutomationPeer, IExpandCollapseProvider
{
    private IndexCard ownerIndexCard;
    public IndexCardAutomationPeer(IndexCard owner) : base(owner)
    {
        ownerIndexCard = owner;
    }
}
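To round out the example, here's a hedged sketch of the three IExpandCollapseProvider members as they might appear inside that peer class, forwarding to the owner control. The IsExpanded property and the Expand and Collapse methods on IndexCard are assumptions for illustration.

C#

// Sketch only: assumes IndexCard exposes IsExpanded, Expand(), and Collapse().
public ExpandCollapseState ExpandCollapseState
{
    get
    {
        return ownerIndexCard.IsExpanded ?
            ExpandCollapseState.Expanded : ExpandCollapseState.Collapsed;
    }
}

public void Expand()
{
    // The same method the control's own UI calls, so visual state stays uniform.
    ownerIndexCard.Expand();
}

public void Collapse()
{
    ownerIndexCard.Collapse();
}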

An alternate implementation is that the control itself can reference its peer. This is a common pattern if you are
raising automation events from the control, because the RaiseAutomationEvent method is a peer method.

UI Automation events
UI Automation events fall into the following categories.

Property change: Fires when a property on a UI Automation element or control pattern changes. For example, if a
client needs to monitor an app's check box control, it can register to listen for a property change event on the
ToggleState property. When the check box control is checked or unchecked, the provider fires the event and the
client can act as necessary.

Element action: Fires when a change in the UI results from user or programmatic activity; for example, when a
button is clicked or invoked through the Invoke pattern.

Structure change: Fires when the structure of the UI Automation tree changes. The structure changes when new
UI items become visible, hidden, or removed on the desktop.

Global change: Fires when actions of global interest to the client occur, such as when the focus shifts from one
element to another, or when a child window closes. Some events do not necessarily mean that the state of the UI
has changed. For example, if the user tabs to a text-entry field and then clicks a button to update the field, a
TextChanged event fires even if the user did not actually change the text. When processing an event, it may be
necessary for a client application to check whether anything has actually changed before taking action.

AutomationEvents identifiers
UI Automation events are identified by AutomationEvents values. The values of the enumeration uniquely
identify the kind of event.
Raising events
UI Automation clients can subscribe to automation events. In the automation peer model, peers for custom
controls must report changes to control state that are relevant to accessibility by calling the
RaiseAutomationEvent method. Similarly, when a key UI Automation property value changes, custom control
peers should call the RaisePropertyChangedEvent method.
The next code example shows how to get the peer object from within the control definition code and call a method
to fire an event from that peer. As an optimization, the code determines whether there are any listeners for this
event type. Firing the event and creating the peer object only when there are listeners avoids unnecessary
overhead and helps the control remain responsive.
C#

if (AutomationPeer.ListenerExists(AutomationEvents.PropertyChanged))
{
    NumericUpDownAutomationPeer peer =
        FrameworkElementAutomationPeer.FromElement(nudCtrl) as NumericUpDownAutomationPeer;
    if (peer != null)
    {
        peer.RaisePropertyChangedEvent(
            RangeValuePatternIdentifiers.ValueProperty,
            (double)oldValue,
            (double)newValue);
    }
}

Peer navigation
After locating an automation peer, a UI Automation client can navigate the peer structure of an app by calling the
peer object's GetChildren and GetParent methods. Navigation among UI elements within a control is supported
by the peer's implementation of the GetChildrenCore method. The UI Automation system calls this method to
build up a tree of sub-elements contained within a control; for example, list items in a list box. The default
GetChildrenCore method in FrameworkElementAutomationPeer traverses the visual tree of elements to
build the tree of automation peers. Custom controls can override this method to expose a different representation
of child elements to automation clients, returning the automation peers of elements that convey information or
allow user interaction.
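For example, here's a hedged sketch of such an override that exposes only one meaningful template part; interactivePart is an assumed element reference.

C#

using System.Collections.Generic;
using Windows.UI.Xaml.Automation.Peers;

protected override IList<AutomationPeer> GetChildrenCore()
{
    // Expose only the interactive part, not purely visual elements.
    var children = new List<AutomationPeer>();
    var partPeer = FrameworkElementAutomationPeer.CreatePeerForElement(interactivePart);
    if (partPeer != null)
    {
        children.Add(partPeer);
    }
    return children;
}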

Native automation support for text patterns


Some of the default UWP app automation peers provide control pattern support for the text pattern
(PatternInterface.Text). But they provide this support through native methods, and the peers involved won't
note the ITextProvider interface in the (managed) inheritance. Still, if a managed or non-managed UI Automation
client queries the peer for patterns, it will report support for the text pattern, and provide behavior for parts of the
pattern when client APIs are called.
If you intend to derive from one of the UWP app text controls and also create a custom peer that derives from one
of the text-related peers, check the Remarks sections for the peer to learn more about any native-level support for
patterns. You can access the native base behavior in your custom peer if you call the base implementation from
your managed provider interface implementations, but it's difficult to modify what the base implementation does
because the native interfaces on both the peer and its owner control aren't exposed. Generally you should either
use the base implementations as-is (call base only) or completely replace the functionality with your own
managed code and don't call the base implementation. The latter is an advanced scenario; you'll need good
familiarity with the text services framework being used by your control in order to support the accessibility
requirements when using that framework.

AutomationProperties.AccessibilityView
In addition to providing a custom peer, you can also adjust the tree view representation for any control instance,
by setting AutomationProperties.AccessibilityView in XAML. This isn't implemented as part of a peer class, but
we'll mention it here because it's germane to overall accessibility support either for custom controls or for
templates you customize.
The main scenario for using AutomationProperties.AccessibilityView is to deliberately omit certain controls in
a template from the UI Automation views, because they don't meaningfully contribute to the accessibility view of
the entire control. To do this, set AutomationProperties.AccessibilityView to "Raw".
Throwing exceptions from automation peers
The APIs that you implement for your automation peer support are permitted to throw exceptions. It's expected
that any UI Automation clients that are listening are robust enough to continue on after most exceptions are
thrown. In all likelihood that listener is looking at an all-up automation tree that includes apps other than your
own, and it's an unacceptable client design to bring down the entire client just because one area of the tree threw
a peer-based exception when the client called its APIs.
For parameters that are passed in to your peer, it's acceptable to validate the input, and for example throw
ArgumentNullException if it was passed null and that's not a valid value for your implementation. However, if
there are subsequent operations performed by your peer, remember that the peer's interactions with the hosting
control have something of an asynchronous character to them. Anything a peer does won't necessarily block the
UI thread in the control (and it probably shouldn't). So you could have situations where an object was available or
had certain properties when the peer was created or when an automation peer method was first called, but in the
meantime the control state has changed. For these cases, there are two dedicated exceptions that a provider can
throw:
Throw ElementNotAvailableException if you're unable to access either the peer's owner or a related peer
element based on the original info your API was passed. For example, you might have a peer that's trying to
run its methods but the owner has since been removed from the UI, such as a modal dialog that's been closed.
For a non-.NET client, this maps to UIA_E_ELEMENTNOTAVAILABLE.
Throw ElementNotEnabledException if there still is an owner, but that owner is in a mode such as
IsEnabled = false that's blocking some of the specific programmatic changes that your peer is trying to
accomplish. For a non-.NET client, this maps to UIA_E_ELEMENTNOTENABLED.
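For example, here's a hedged sketch of guarding a provider method with these exceptions, continuing the hypothetical IndexCard peer from earlier in this topic.

C#

public void Expand()
{
    var owner = this.Owner as IndexCard;
    if (owner == null)
    {
        // The owner is gone, for example a dialog that has closed.
        throw new ElementNotAvailableException();
    }
    if (!owner.IsEnabled)
    {
        // The owner exists but is blocking programmatic changes.
        throw new ElementNotEnabledException();
    }
    owner.Expand(); // hypothetical control method
}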
Beyond this, peers should be relatively conservative regarding exceptions that they throw from their peer support.
Most clients won't be able to handle exceptions from peers and turn these into actionable choices that their users
can make when interacting with the client. So sometimes a no-op, and catching exceptions without rethrowing
within your peer implementations, is a better strategy than is throwing exceptions every time something the peer
tries to do doesn't work. Consider also that most UI Automation clients aren't written in managed code. Most are
written in COM and are just checking for S_OK in an HRESULT whenever they call a UI Automation client method
that ends up accessing your peer.

Related topics
Accessibility
XAML accessibility sample
FrameworkElementAutomationPeer
AutomationPeer
OnCreateAutomationPeer
Control patterns and interfaces
Control patterns and interfaces

Lists the Microsoft UI Automation control patterns, the classes that clients use to access them, and the interfaces
providers use to implement them.
The table in this topic describes the Microsoft UI Automation control patterns. The table also lists the classes used
by UI Automation clients to access the control patterns and the interfaces used by UI Automation providers to
implement them. The Control pattern column shows the pattern name from the UI Automation client perspective,
as a constant value listed in Control Pattern Availability Property Identifiers. From the UI Automation
provider perspective, each of these patterns is a PatternInterface constant name. The Class provider interface
column shows the name of the interface that providers implement to provide this pattern for a custom XAML
control.
For more info about how to implement custom automation peers that expose control patterns and implement the
interfaces, see Custom automation peers.
When you implement a control pattern, you should also consult the UI Automation provider documentation that
explains some of the expectations that clients will have of a control pattern regardless of which UI framework is
used to implement it. Some of the info listed in the general UI Automation provider documentation will influence
how you implement your peers and correctly support that pattern. See Implementing UI Automation Control
Patterns, and view the page that documents the pattern you intend to implement.

CONTROL PATTERN | CLASS PROVIDER INTERFACE | DESCRIPTION

Annotation | IAnnotationProvider | Used to expose the properties of an annotation in a document.
Dock | IDockProvider | Used for controls that can be docked in a docking container. For example, toolbars or tool palettes.
Drag | IDragProvider | Used to support draggable controls, or controls with draggable items.
DropTarget | IDropTargetProvider | Used to support controls that can be the target of a drag-and-drop operation.
ExpandCollapse | IExpandCollapseProvider | Used to support controls that visually expand to display more content and collapse to hide content.
Grid | IGridProvider | Used for controls that support grid functionality such as sizing and moving to a specified cell. Note that Grid itself does not implement this pattern because it provides layout but is not a control.
GridItem | IGridItemProvider | Used for controls that have cells within grids.
Invoke | IInvokeProvider | Used for controls that can be invoked, such as a Button.
ItemContainer | IItemContainerProvider | Enables applications to find an element in a container, such as a virtualized list.
MultipleView | IMultipleViewProvider | Used for controls that can switch between multiple representations of the same set of information, data, or children.
ObjectModel | IObjectModelProvider | Used to expose a pointer to the underlying object model of a document.
RangeValue | IRangeValueProvider | Used for controls that have a range of values that can be applied to the control. For example, a spinner control containing years might have a range of 1900 to the current year, while another spinner control presenting months would have a range of 1 to 12.
Scroll | IScrollProvider | Used for controls that can scroll. For example, a control that has scroll bars that are active when there is more information than can be displayed in the viewable area of the control.
ScrollItem | IScrollItemProvider | Used for controls that have individual items in a list that scrolls. For example, a list control that has individual items in the scroll list, such as a combo box control.
Selection | ISelectionProvider | Used for selection container controls. For example, ListBox and ComboBox.
SelectionItem | ISelectionItemProvider | Used for individual items in selection container controls, such as list boxes and combo boxes.
Spreadsheet | ISpreadsheetProvider | Used to expose the contents of a spreadsheet or other grid-based document.
SpreadsheetItem | ISpreadsheetItemProvider | Used to expose the properties of a cell in a spreadsheet or other grid-based document.
Styles | IStylesProvider | Used to describe a UI element that has a specific style, fill color, fill pattern, or shape.
SynchronizedInput | ISynchronizedInputProvider | Enables UI Automation client apps to direct the mouse or keyboard input to a specific UI element.
Table | ITableProvider | Used for controls that have a grid as well as header information. For example, a tabular calendar control.
TableItem | ITableItemProvider | Used for items in a table.
Text | ITextProvider | Used for edit controls and documents that expose textual information. See also ITextRangeProvider and ITextProvider2.
TextChild | ITextChildProvider | Used to access an element's nearest ancestor that supports the Text control pattern.
TextEdit | No managed class available | Provides access to a control that modifies text, for example a control that performs auto-correction or enables input composition through an Input Method Editor (IME).
TextRange | ITextRangeProvider | Provides access to a span of continuous text in a text container that implements ITextProvider. See also ITextRangeProvider2.
Toggle | IToggleProvider | Used for controls where the state can be toggled. For example, CheckBox and menu items that can be checked.
Transform | ITransformProvider | Used for controls that can be resized, moved, and rotated. Typical uses for the Transform control pattern are in designers, forms, graphical editors, and drawing applications.
Value | IValueProvider | Allows clients to get or set a value on controls that do not support a range of values.
VirtualizedItem | IVirtualizedItemProvider | Exposes items inside containers that are virtualized and need to be made fully accessible as UI Automation elements.
Window | IWindowProvider | Exposes information specific to windows, a fundamental concept to the Microsoft Windows operating system. Examples of controls that are windows are child windows and dialogs.

NOTE
You won't necessarily find implementations of all these patterns in existing XAML controls. Some of the patterns have
interfaces solely to support parity with the general UI Automation framework definition of patterns, and to support
automation peer scenarios that will require a purely custom implementation to support that pattern.
NOTE
Windows Phone Store apps do not support all the UI Automation control patterns listed here. Annotation, Dock, Drag,
DropTarget, and ObjectModel are some of the unsupported patterns.

Related topics
Custom automation peers
Accessibility
App settings and data

This section contains user experience guidelines for presenting app settings and storing those settings as app data.
App settings are the user-customizable portions of your Universal Windows Platform (UWP) app. For example, a
news reader app might let the user specify which news sources to display or how many columns to display on the
screen.
App data is data that the app itself creates and manages. It includes runtime state, app settings, reference content
(such as the dictionary definitions in a dictionary app), and other settings. App data is tied to the existence of the
app and is only meaningful to that app.

In this section
ARTICLE | DESCRIPTION

Guidelines | Best practices for creating and displaying app settings.
Store and retrieve app data | How to store and retrieve local, roaming, and temporary app data.
Guidelines for app settings

App settings are the user-customizable portions of your app and live within an app settings page. For example, app
settings in a news reader app might let the user specify which news sources to display or how many columns to
display on the screen, while a weather app's settings could let the user choose between Celsius and Fahrenheit as
the default unit of measurement. This article describes best practices for creating and displaying app settings.

Should I include a settings page in my app?


Here are examples of app options that belong in an app settings page:
Configuration options that affect the behavior of the app and don't require frequent readjustment, like choosing
between Celsius or Fahrenheit as default units for temperature in a weather app, changing account settings for
a mail app, settings for notifications, or accessibility options.
Options that depend on the user's preferences, like music, sound effects, or color themes.
App information that isn't accessed very often, such as privacy policy, help, app version, or copyright info.
Commands that are part of the typical app workflow (for example, changing the brush size in an art app) shouldn't
be in a settings page. To learn more about command placement, see Command design basics.

General recommendations
Keep settings pages simple and make use of binary (on/off) controls. A toggle switch is usually the best control
for a binary setting.
For settings that let users choose one item from a set of up to 5 mutually exclusive, related options, use radio
buttons.
Create an entry point for all app settings in your app setting's page.
Keep your settings simple. Define smart defaults and keep the number of settings to a minimum.
When a user changes a setting, the app should immediately reflect the change.
Don't include commands that are part of the common app workflow.

Entry point
The way that users get to your app settings page should be based on your app's layout.
Navigation pane
For a nav pane layout, app settings should be the last item in the list of navigational choices and be pinned to the
bottom:
App bar
If you're using an app bar or tool bar, place the settings entry point as the last item in the "More" overflow menu. If
greater discoverability for the settings entry point is important for your app, place the entry point directly on the
app bar and not within the overflow.

Hub
If you're using a hub layout, the entry point for app settings should be placed inside the "More" overflow menu of
an app bar.
Tabs/pivots
For a tabs or pivots layout, we don't recommend placing the app settings entry point as one of the top items
within the navigation. Instead, the entry point for app settings should be placed inside the "More" overflow menu
of an app bar.
Master-details
Instead of burying the app settings entry point deeply within a master-details pane, make it the last pinned item on
the top level of the master pane.
Layout
On both desktop and mobile, the app settings window should open full-screen and fill the whole window. If your
app settings menu has up to four top-level groups, those groups should cascade down one column.
Desktop:

Mobile:

"Color mode" settings


If your app allows users to choose the app's color mode, present these options using radio buttons or a combo box
with the header "Choose a mode". The options should read
Light
Dark
Windows default
We also recommend adding a hyperlink to the Colors page of the Windows Settings app where users can check the
Windows default theme. Use the string "Windows color settings" for the hyperlink text.

Detailed redlines showing preferred text strings for the "Choose a mode" section are available on UNI.

"About" section and "Give feedback" button


If you need an "About this app" section in your app, create a dedicated app settings page for that. If you want a
"Give Feedback" button, place that toward the bottom of the "About this app" page.
"Terms of Use" and "Privacy Statement" should be hyperlink buttons with wrapping text.

Recommended page content


Once you have a list of items that you want to include in your app settings page, consider these guidelines:
Group similar or related settings under one settings label.
Try to keep the total number of settings to a maximum of four or five.
Display the same settings regardless of the app context. If some settings aren't relevant in a certain context,
disable those in the app settings flyout.
Use descriptive, one-word labels for settings. For example, name the setting "Accounts" instead of "Account
settings" for account-related settings. If you only want one option for your settings and the settings don't lend
themselves to a descriptive label, use "Options" or "Defaults."
If a setting directly links to the web instead of to a flyout, let the user know this with a visual clue, such as "Help
(online)" or "Web forums" styled as a hyperlink. Consider grouping multiple links to the web into a flyout with a
single setting. For example, an "About" setting could open a flyout with links to your terms of use, privacy policy,
and app support.
Combine less-used settings into a single entry so that more common settings can each have their own entry.
Put content or links that only contain information in an "About" setting.
Don't duplicate the functionality in the "Permissions" pane. Windows provides this pane by default and you
can't modify it.
Add settings content to Settings flyouts
Present content from top to bottom in a single column, scrollable if necessary. Limit scrolling to a maximum of
twice the screen height.
Use the following controls for app settings:
Toggle switches: To let users set values on or off.
Radio buttons: To let users choose one item from a set of up to 5 mutually exclusive, related options.
Text input box: To let users enter text. Use the type of text input box that corresponds to the type of text
you're getting from the user, such as an email or password.
Hyperlinks: To take the user to another page within the app or to an external website. When a user clicks
a hyperlink, the Settings flyout will be dismissed.
Buttons: To let users initiate an immediate action without dismissing the current Settings flyout.
Add a descriptive message if one of the controls is disabled. Place this message above the disabled control.
Animate content and controls as a single block after the Settings flyout and header have animated. Animate
content using the enterPage or EntranceThemeTransition animations with a 100px left offset.
Use section headers, paragraphs, and labels to help organize and clarify content, if necessary.
If you need to repeat settings, use an additional level of UI or an expand/collapse model, but avoid hierarchies
deeper than two levels. For example, a weather app that provides per-city settings could list the cities and let the
user tap on the city to either open a new flyout or expand to show the settings options.
If loading controls or web content takes time, use an indeterminate progress control to indicate to users that
info is loading. For more info, see Guidelines for progress controls.
Don't use buttons for navigation or to commit changes. Use hyperlinks to navigate to other pages, and instead
of using a button to commit changes, automatically save changes to app settings when a user dismisses the
Settings flyout.
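For instance, committing a toggle's value the moment the user changes it (rather than behind a Save button) might look like the following minimal C# sketch, where wordWrapToggle is an assumed ToggleSwitch in the settings UI:

// Persist the new value immediately; no explicit "commit" step is needed.
void WordWrapToggle_Toggled(object sender, Windows.UI.Xaml.RoutedEventArgs e)
{
    var localSettings = Windows.Storage.ApplicationData.Current.LocalSettings;
    localSettings.Values["isWordWrapEnabled"] = wordWrapToggle.IsOn; // wordWrapToggle is hypothetical
}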

Related articles
Command design basics
Guidelines for progress controls
Store and retrieve app data
EntranceThemeTransition
Store and retrieve settings and other app data

App data is mutable data that is specific to a particular app. It includes runtime state, user preferences, and other
settings. App data is different from user data, data that the user creates and manages when using an app. User
data includes document or media files, email or communication transcripts, or database records holding content
created by the user. User data may be useful or meaningful to more than one app. Often, this is data that the user
wants to manipulate or transmit as an entity independent of the app itself, such as a document.
Important note about app data: The lifetime of the app data is tied to the lifetime of the app. If the app is
removed, all of the app data will be lost as a consequence. Don't use app data to store user data or anything that
users might perceive as valuable and irreplaceable. We recommend that the user's libraries and Microsoft
OneDrive be used to store this sort of information. App data is ideal for storing app-specific user preferences,
settings, and favorites.

Types of app data


There are two types of app data: settings and files.
Settings
Use settings to store user preferences and application state info. The app data API enables you to easily
create and retrieve settings (we'll show you some examples later in this article).
Here are data types you can use for app settings:
UInt8, Int16, UInt16, Int32, UInt32, Int64, UInt64, Single, Double
Boolean
Char16, String
DateTime, TimeSpan
GUID, Point, Size, Rect
ApplicationDataCompositeValue: A set of related app settings that must be serialized and
deserialized atomically. Use composite settings to easily handle atomic updates of interdependent
settings. The system ensures the integrity of composite settings during concurrent access and roaming.
Composite settings are optimized for small amounts of data, and performance can be poor if you use
them for large data sets.
Files
Use files to store binary data or to enable your own, customized serialized types.

Storing app data in the app data stores


When an app is installed, the system gives it its own per-user data stores for settings and files. You don't need to
know where or how this data exists, because the system is responsible for managing the physical storage, ensuring
that the data is kept isolated from other apps and other users. The system also preserves the contents of these data
stores when the user installs an update to your app and removes the contents of these data stores completely and
cleanly when your app is uninstalled.
Within its app data store, each app has system-defined root directories: one for local files, one for roaming files,
and one for temporary files. Your app can add new files and new containers to each of these root directories.
Local app data
Local app data should be used for any information that needs to be preserved between app sessions and is not
suitable for roaming app data. Data that is not applicable on other devices should be stored here as well. There is
no general size restriction on local data stored. Use the local app data store for data that does not make sense to
roam and for large data sets.
Retrieve the local app data store
Before you can read or write local app data, you must retrieve the local app data store. To retrieve the local app
data store, use the ApplicationData.LocalSettings property to get the app's local settings as an
ApplicationDataContainer object. Use the ApplicationData.LocalFolder property to get the files in a
StorageFolder object. Use the ApplicationData.LocalCacheFolder property to get the folder in the local app
data store where you can save files that are not included in backup and restore.

Windows.Storage.ApplicationDataContainer localSettings =
Windows.Storage.ApplicationData.Current.LocalSettings;
Windows.Storage.StorageFolder localFolder =
Windows.Storage.ApplicationData.Current.LocalFolder;

Create and retrieve a simple local setting


To create or write a setting, use the ApplicationDataContainer.Values property to access the settings in the
localSettings container we got in the previous step. This example creates a setting named exampleSetting.

// Simple setting

localSettings.Values["exampleSetting"] = "Hello Windows";

To retrieve the setting, you use the same ApplicationDataContainer.Values property that you used to create the
setting. This example shows how to retrieve the setting we just created.

// Simple setting
Object value = localSettings.Values["exampleSetting"];

Create and retrieve a local composite value


To create or write a composite value, create an ApplicationDataCompositeValue object. This example creates a
composite setting named exampleCompositeSetting and adds it to the localSettings container.

// Composite setting

Windows.Storage.ApplicationDataCompositeValue composite =
new Windows.Storage.ApplicationDataCompositeValue();
composite["intVal"] = 1;
composite["strVal"] = "string";

localSettings.Values["exampleCompositeSetting"] = composite;

This example shows how to retrieve the composite value we just created.
// Composite setting

Windows.Storage.ApplicationDataCompositeValue composite =
(Windows.Storage.ApplicationDataCompositeValue)localSettings.Values["exampleCompositeSetting"];

if (composite == null)
{
// No data
}
else
{
// Access data in composite["intVal"] and composite["strVal"]
}

Create and read a local file


To create and update a file in the local app data store, use the file APIs, such as
Windows.Storage.StorageFolder.CreateFileAsync and Windows.Storage.FileIO.WriteTextAsync. This
example creates a file named dataFile.txt in the localFolder container and writes the current date and time to the file.
The ReplaceExisting value from the CreationCollisionOption enumeration indicates to replace the file if it
already exists.

async void WriteTimestamp()
{
    Windows.Globalization.DateTimeFormatting.DateTimeFormatter formatter =
        new Windows.Globalization.DateTimeFormatting.DateTimeFormatter("longtime");

    StorageFile sampleFile = await localFolder.CreateFileAsync("dataFile.txt",
        CreationCollisionOption.ReplaceExisting);
await FileIO.WriteTextAsync(sampleFile, formatter.Format(DateTime.Now));
}

To open and read a file in the local app data store, use the file APIs, such as
Windows.Storage.StorageFolder.GetFileAsync,
Windows.Storage.StorageFile.GetFileFromApplicationUriAsync, and
Windows.Storage.FileIO.ReadTextAsync. This example opens the dataFile.txt file created in the previous step
and reads the date from the file. For details on loading file resources from various locations, see How to load file
resources.

async void ReadTimestamp()
{
try
{
StorageFile sampleFile = await localFolder.GetFileAsync("dataFile.txt");
String timestamp = await FileIO.ReadTextAsync(sampleFile);
// Data is contained in timestamp
}
catch (Exception)
{
// Timestamp not found
}
}

Roaming data
If you use roaming data in your app, your users can easily keep your app's app data in sync across multiple
devices. If a user installs your app on multiple devices, the OS keeps the app data in sync, reducing the amount of
setup work that the user needs to do for your app on their second device. Roaming also enables your users to
continue a task, such as composing a list, right where they left off even on a different device. The OS replicates
roaming data to the cloud when it is updated, and synchronizes the data to the other devices on which the app is
installed.
The OS limits the size of the app data that each app may roam. See ApplicationData.RoamingStorageQuota. If
the app hits this limit, none of the app's app data will be replicated to the cloud until the app's total roamed app
data is less than the limit again. For this reason, it is a best practice to use roaming data only for user preferences,
links, and small data files.
Roaming data for an app is available in the cloud as long as it is accessed by the user from some device within the
required time interval. If the user does not run an app for longer than this time interval, its roaming data is
removed from the cloud. If a user uninstalls an app, its roaming data isn't automatically removed from the cloud,
it's preserved. If the user reinstalls the app within the time interval, the roaming data is synchronized from the
cloud.
Roaming data do's and don'ts
Use roaming for user preferences and customizations, links, and small data files. For example, use roaming to
preserve a user's background color preference across all devices.
Use roaming to let users continue a task across devices. For example, roam app data like the contents of a
drafted email or the most recently viewed page in a reader app.
Handle the DataChanged event by updating app data. This event occurs when app data has just finished
syncing from the cloud.
Roam references to content rather than raw data. For example, roam a URL rather than the content of an online
article.
For important, time critical settings, use the HighPriority setting associated with RoamingSettings.
Don't roam app data that is specific to a device. Some info is only pertinent locally, such as a path name to a
local file resource. If you do decide to roam local information, make sure that the app can recover if the info
isn't valid on the secondary device.
Don't roam large sets of app data. There's a limit to the amount of app data an app may roam; use
the RoamingStorageQuota property to get this maximum. If an app hits this limit, no data can roam until the size
of the app data store no longer exceeds the limit. When you design your app, consider how to put a bound on
larger data so as to not exceed the limit. For example, if saving a game state requires 10KB each, the app might
only allow the user to store up to 10 games.
Don't use roaming for data that relies on instant syncing. Windows doesn't guarantee an instant sync; roaming
could be significantly delayed if a user is offline or on a high latency network. Ensure that your UI doesn't
depend on instant syncing.
Don't roam frequently changing data. For example, if your app tracks frequently changing info, such as the
position in a song by second, don't store this as roaming app data. Instead, pick a less frequent representation
that still provides a good user experience, like the currently playing song.
Roaming pre-requisites
Any user can benefit from roaming app data if they use a Microsoft account to log on to their device. However,
users and group policy administrators can switch off roaming app data on a device at any time. If a user chooses
not to use a Microsoft account or disables roaming data capabilities, she will still be able to use your app, but app
data will be local to each device.
Data stored in the PasswordVault will only transition if a user has made a device trusted. If a device isn't trusted,
data secured in this vault will not roam.
Conflict resolution
Roaming app data is not intended for simultaneous use on more than one device at a time. If a conflict arises
during synchronization because a particular data unit was changed on two devices, the system will always favor
the value that was written last. This ensures that the app utilizes the most up-to-date information. If the data unit is
a setting composite, conflict resolution will still occur on the level of the setting unit, which means that the
composite with the latest change will be synchronized.
When to write data
Depending on the expected lifetime of the setting, data should be written at different times. Infrequently or slowly
changing app data should be written immediately. However, app data that changes frequently should only be
written periodically at regular intervals (such as once every 5 minutes), as well as when the app is suspended. For
example, a music app might write the current song settings whenever a new song starts to play, however, the
actual position in the song should only be written on suspend.
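A minimal C# sketch of that division of labor for the music app example follows; the handler wiring and the playbackPositionSeconds field are illustrative, not prescribed API:

double playbackPositionSeconds; // updated continuously while a song plays

void OnNewSongStarted(string songTitle)
{
    // Infrequently changing data: write immediately.
    var settings = Windows.Storage.ApplicationData.Current.LocalSettings;
    settings.Values["currentSong"] = songTitle;
}

void OnSuspending(object sender, Windows.ApplicationModel.SuspendingEventArgs e)
{
    // Frequently changing data: write only on suspend (or at a coarse interval).
    var settings = Windows.Storage.ApplicationData.Current.LocalSettings;
    settings.Values["positionInSong"] = playbackPositionSeconds;
}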
Excessive usage protection
The system has various protection mechanisms in place to avoid inappropriate use of resources. If app data does
not transition as expected, it is likely that the device has been temporarily restricted. Waiting for some time will
usually resolve this situation automatically and no action is required.
Versioning
App data can utilize versioning to upgrade from one data structure to another. The version number is different
from the app version and can be set at will. Although not enforced, it is highly recommended that you use
increasing version numbers, since undesirable complications (including data loss) could occur if you try to
transition to a lower data version number that represents newer data.
App data only roams between installed apps with the same version number. For example, devices on version 2 will
transition data between each other and devices on version 3 will do the same, but no roaming will occur between a
device running version 2 and a device running version 3. If you install a new app that has used various version
numbers on other devices, the newly installed app will sync the app data associated with the highest version
number.
Testing and tools
Developers can lock their device in order to trigger a synchronization of roaming app data. If it seems that the app
data does not transition within a certain time frame, please check the following items and make sure that:
Your roaming data does not exceed the maximum size (see RoamingStorageQuota for details).
Your files are closed and released properly.
There are at least two devices running the same version of the app.
Register to receive notification when roaming data changes
To use roaming app data, you need to register for roaming data changes and retrieve the roaming data containers
so you can read and write settings.
1. Register to receive notification when roaming data changes.
The DataChanged event notifies you when roaming data changes. This example sets DataChangeHandler as
the handler for roaming data changes.

void InitHandlers()
{
Windows.Storage.ApplicationData.Current.DataChanged +=
new TypedEventHandler<ApplicationData, object>(DataChangeHandler);
}

void DataChangeHandler(Windows.Storage.ApplicationData appData, object o)
{
// TODO: Refresh your data
}

2. Get the containers for the app's settings and files.


Use the ApplicationData.RoamingSettings property to get the settings and the
ApplicationData.RoamingFolder property to get the files.

Windows.Storage.ApplicationDataContainer roamingSettings =
Windows.Storage.ApplicationData.Current.RoamingSettings;
Windows.Storage.StorageFolder roamingFolder =
Windows.Storage.ApplicationData.Current.RoamingFolder;

Create and retrieve roaming settings


Use the ApplicationDataContainer.Values property to access the settings in the roamingSettings container we got
in the previous section. This example creates a simple setting named exampleSetting and a composite setting
named exampleCompositeSetting.

// Simple setting

roamingSettings.Values["exampleSetting"] = "Hello World";


// High Priority setting, for example, last page position in book reader app
roamingSettings.Values["HighPriority"] = "65";

// Composite setting

Windows.Storage.ApplicationDataCompositeValue composite =
new Windows.Storage.ApplicationDataCompositeValue();
composite["intVal"] = 1;
composite["strVal"] = "string";

roamingSettings.Values["exampleCompositeSetting"] = composite;

This example retrieves the settings we just created.

// Simple setting

Object value = roamingSettings.Values["exampleSetting"];

// Composite setting

Windows.Storage.ApplicationDataCompositeValue composite =
(Windows.Storage.ApplicationDataCompositeValue)roamingSettings.Values["exampleCompositeSetting"];

if (composite == null)
{
// No data
}
else
{
// Access data in composite["intVal"] and composite["strVal"]
}

Create and retrieve roaming files


To create and update a file in the roaming app data store, use the file APIs, such as
Windows.Storage.StorageFolder.CreateFileAsync and Windows.Storage.FileIO.WriteTextAsync. This
example creates a file named dataFile.txt in the roamingFolder container and writes the current date and time to the
file. The ReplaceExisting value from the CreationCollisionOption enumeration indicates to replace the file if it
already exists.
async void WriteTimestamp()
{
Windows.Globalization.DateTimeFormatting.DateTimeFormatter formatter =
new Windows.Globalization.DateTimeFormatting.DateTimeFormatter("longtime");

    StorageFile sampleFile = await roamingFolder.CreateFileAsync("dataFile.txt",
        CreationCollisionOption.ReplaceExisting);
await FileIO.WriteTextAsync(sampleFile, formatter.Format(DateTime.Now));
}

To open and read a file in the roaming app data store, use the file APIs, such as
Windows.Storage.StorageFolder.GetFileAsync,
Windows.Storage.StorageFile.GetFileFromApplicationUriAsync, and
Windows.Storage.FileIO.ReadTextAsync. This example opens the dataFile.txt file created in the previous section
and reads the date from the file. For details on loading file resources from various locations, see How to load file
resources.

async void ReadTimestamp()
{
try
{
StorageFile sampleFile = await roamingFolder.GetFileAsync("dataFile.txt");
String timestamp = await FileIO.ReadTextAsync(sampleFile);
// Data is contained in timestamp
}
catch (Exception)
{
// Timestamp not found
}
}

Temporary app data


The temporary app data store works like a cache. Its files do not roam and could be removed at any time. The
System Maintenance task can automatically delete data stored at this location at any time. The user can also clear
files from the temporary data store using Disk Cleanup. Temporary app data can be used for storing temporary
information during an app session. There is no guarantee that this data will persist beyond the end of the app
session as the system might reclaim the used space if needed. The location is available via the TemporaryFolder
property.
Retrieve the temporary data container
Use the ApplicationData.TemporaryFolder property to get the files. The next steps use the temporaryFolder
variable from this step.

Windows.Storage.StorageFolder temporaryFolder = ApplicationData.Current.TemporaryFolder;

Create and read temporary files


To create and update a file in the temporary app data store, use the file APIs, such as
Windows.Storage.StorageFolder.CreateFileAsync and Windows.Storage.FileIO.WriteTextAsync. This
example creates a file named dataFile.txt in the temporaryFolder container and writes the current date and time to the
file. The ReplaceExisting value from the CreationCollisionOption enumeration indicates to replace the file if it
already exists.
C#
async void WriteTimestamp()
{
Windows.Globalization.DateTimeFormatting.DateTimeFormatter formatter =
new Windows.Globalization.DateTimeFormatting.DateTimeFormatter("longtime");

    StorageFile sampleFile = await temporaryFolder.CreateFileAsync("dataFile.txt",
        CreationCollisionOption.ReplaceExisting);
await FileIO.WriteTextAsync(sampleFile, formatter.Format(DateTime.Now));
}

To open and read a file in the temporary app data store, use the file APIs, such as
Windows.Storage.StorageFolder.GetFileAsync,
Windows.Storage.StorageFile.GetFileFromApplicationUriAsync, and
Windows.Storage.FileIO.ReadTextAsync. This example opens the dataFile.txt file created in the previous step
and reads the date from the file. For details on loading file resources from various locations, see How to load file
resources.

async void ReadTimestamp()
{
try
{
StorageFile sampleFile = await temporaryFolder.GetFileAsync("dataFile.txt");
String timestamp = await FileIO.ReadTextAsync(sampleFile);
// Data is contained in timestamp
}
catch (Exception)
{
// Timestamp not found
}
}

Organize app data with containers


To help you organize your app data settings and files, you create containers (represented by
ApplicationDataContainer objects) instead of working directly with directories. You can add containers to the
local, roaming, and temporary app data stores. Containers can be nested up to 32 levels deep.
To create a settings container, call the ApplicationDataContainer.CreateContainer method. This example
creates a local settings container named exampleContainer and adds a setting named exampleSetting. The Always
value from the ApplicationDataCreateDisposition enumeration indicates that the container is created if it
doesn't already exist.
Windows.Storage.ApplicationDataContainer localSettings =
Windows.Storage.ApplicationData.Current.LocalSettings;
Windows.Storage.StorageFolder localFolder =
Windows.Storage.ApplicationData.Current.LocalFolder;

// Setting in a container
Windows.Storage.ApplicationDataContainer container =
localSettings.CreateContainer("exampleContainer", Windows.Storage.ApplicationDataCreateDisposition.Always);

if (localSettings.Containers.ContainsKey("exampleContainer"))
{
localSettings.Containers["exampleContainer"].Values["exampleSetting"] = "Hello Windows";
}

Delete app settings and containers


To delete a simple setting that your app no longer needs, use the ApplicationDataContainerSettings.Remove
method. This example deletes the exampleSetting local setting that we created earlier.

Windows.Storage.ApplicationDataContainer localSettings =
Windows.Storage.ApplicationData.Current.LocalSettings;
Windows.Storage.StorageFolder localFolder =
Windows.Storage.ApplicationData.Current.LocalFolder;

// Delete simple setting

localSettings.Values.Remove("exampleSetting");

To delete a composite setting, use the ApplicationDataCompositeValue.Remove method. This example deletes
the local exampleCompositeSetting composite setting we created in an earlier example.

Windows.Storage.ApplicationDataContainer localSettings =
Windows.Storage.ApplicationData.Current.LocalSettings;
Windows.Storage.StorageFolder localFolder =
Windows.Storage.ApplicationData.Current.LocalFolder;

// Delete composite setting

localSettings.Values.Remove("exampleCompositeSetting");

To delete a container, call the ApplicationDataContainer.DeleteContainer method. This example deletes the
local exampleContainer settings container we created earlier.

Windows.Storage.ApplicationDataContainer localSettings =
Windows.Storage.ApplicationData.Current.LocalSettings;
Windows.Storage.StorageFolder localFolder =
Windows.Storage.ApplicationData.Current.LocalFolder;

// Delete container

localSettings.DeleteContainer("exampleContainer");

Versioning your app data


You can optionally version the app data for your app. This would enable you to create a future version of your app
that changes the format of its app data without causing compatibility problems with the previous version of your
app. The app checks the version of the app data in the data store, and if the version is less than the version the app
expects, the app should update the app data to the new format and update the version. For more info, see
the ApplicationData.Version property and the ApplicationData.SetVersionAsync method.
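A minimal C# sketch of that check-and-upgrade flow follows, assuming for the example that version 1 is the new data format and that exampleSetting is the value being migrated:

async System.Threading.Tasks.Task MigrateAppDataIfNeededAsync()
{
    var appData = Windows.Storage.ApplicationData.Current;
    if (appData.Version < 1)
    {
        await appData.SetVersionAsync(1, (Windows.Storage.SetVersionRequest request) =>
        {
            // request.CurrentVersion and request.DesiredVersion describe the upgrade
            // being performed; rewrite old-format data into the new format here.
            var settings = Windows.Storage.ApplicationData.Current.LocalSettings;
            settings.Values["exampleSetting"] = "Hello Windows";
        });
    }
}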

Related articles
Windows.Storage.ApplicationData
Windows.Storage.ApplicationData.RoamingSettings
Windows.Storage.ApplicationData.RoamingFolder
Windows.Storage.ApplicationData.RoamingStorageQuota
Windows.Storage.ApplicationDataCompositeValue
Globalization and localization

Windows is used worldwide, by audiences that vary in culture, region, and language. A user may speak any
language, or even multiple languages. A user may be located anywhere in the world, and may speak any language
in any location. You can increase the potential market for your app by designing it to be readily adaptable using
globalization and localization.
Globalization is the process of designing and developing your app to act appropriately for different global
markets without any changes or customization.
For example, you can:
Design the layout of your app to accommodate the different text lengths and font sizes of other languages in
labels and text strings.
Retrieve text and culture-dependent images from resources that can be adapted to different local markets,
instead of hard-coding them into your app's code or markup.
Use globalization APIs to display data that are formatted differently in different regions, such as numeric values,
dates, times, and currencies.
Localization is the process of adapting your app to meet the language, cultural, and political requirements of a
specific local market.
For example:
Translate the text and labels of the app for the new market, and create separate resources for its language.
Modify any culture-dependent images as necessary, and place in separate resources.
Watch this video for a brief introduction on how to prepare your app for the world: Introduction to globalization
and localization.

Articles
ARTICLE | DESCRIPTION

Do's and don'ts | Follow these best practices when globalizing your apps for a wider audience and when localizing your apps for a specific market.
Use global-ready formats | Develop a global-ready app by appropriately formatting dates, times, numbers, and currencies.
Manage language and region | Control how Windows selects UI resources and formats the UI elements of the app, by using the various language and region settings provided by Windows.
Use patterns to format dates and times | Use the Windows.Globalization.DateTimeFormatting API with custom patterns to display dates and times in exactly the format you wish.
Adjust layout and fonts, and support RTL | Develop your app to support the layouts and fonts of multiple languages, including RTL (right-to-left) flow direction.
Prepare your app for localization | Prepare your app for localization to other markets, languages, or regions.
Put UI strings into resources | Put string resources for your UI into resource files. You can then reference those strings from your code or markup.

See also the documentation originally created for Windows 8.x, which still applies to Universal Windows Platform
(UWP) apps and Windows 10.
Globalizing your app
Language matching
NumeralSystem values
International fonts
App resources and localization
Globalization and localization do's and don'ts

Follow these best practices when globalizing your apps for a wider audience and when localizing your apps for a
specific market.

Important APIs
Globalization
Globalization.NumberFormatting
Globalization.DateTimeFormatting
Resources
Resources.Core

Globalization
Prepare your app to easily adapt to different markets by choosing globally appropriate terms and images for your
UI, using Globalization APIs to format app data, and avoiding assumptions based on location or language.

RECOMMENDATION | DESCRIPTION

Use the correct formats for numbers, dates, times, addresses, and phone numbers. | The formatting used for numbers, dates, times, and other forms of data varies between cultures, regions, languages, and markets. If you're displaying numbers, dates, times, or other data, use Globalization APIs to get the format appropriate for a particular audience.
Support international paper sizes. | The most common paper sizes differ between countries, so if you include features that depend on paper size, like printing, be sure to support and test common international sizes.
Support international units of measurement and currencies. | Different units and scales are used in different countries, although the most popular are the metric system and the imperial system. If you deal with measurements, like length, temperature, or area, get the correct system measurement by using the CurrenciesInUse property.
Display text and fonts correctly. | The ideal font, font size, and direction of text vary between different markets. For more info, see Adjust layout and fonts, and support RTL.
Use Unicode for character encoding. | By default, recent versions of Microsoft Visual Studio use Unicode character encoding for all documents. If you're using a different editor, be sure to save source files in the appropriate Unicode character encodings. All Windows Runtime APIs return UTF-16 encoded strings.
Record the language of input. | When your app asks users for text input, record the language of input. This ensures that when the input is displayed later it's presented to the user with the appropriate formatting. Use the CurrentInputMethodLanguage property to get the current input language.
Don't use language to assume a user's location, and don't use location to assume a user's language. | In Windows, the user's language and location are separate concepts. A user can speak a particular regional variant of a language, like en-gb for English as spoken in Great Britain, but the user can be in an entirely different country or region. Consider whether your apps require knowledge about the user's language, like for UI text, or location, like for licensing issues. For more info, see Manage language and region.
Don't use colloquialisms and metaphors. | Language that's specific to a demographic group, such as culture and age, can be hard to understand or translate, because only people in that demographic group use that language. Similarly, metaphors might make sense to one person but mean nothing to someone else. For example, a "bluebird" means something specific to those who are part of skiing culture, but those who aren't part of that culture don't understand the reference. If you plan to localize your app and you use an informal voice or tone, be sure that you adequately explain to localizers the meaning and voice to be translated.
Don't use technical jargon, abbreviations, or acronyms. | Technical language is less likely to be understood by non-technical audiences or people from other cultures or regions, and it's difficult to translate. People don't use these kinds of words in everyday conversations. Technical language often appears in error messages to identify hardware and software issues. At times, this might be necessary, but you should rewrite strings to be non-technical.
Don't use images that might be offensive. | Images that might be appropriate in your own culture may be offensive or misinterpreted in other cultures. Avoid use of religious symbols, animals, or color combinations that are associated with national flags or political movements.
Avoid political offense in maps or when referring to regions. | Maps may include controversial regional or national boundaries, and they're a frequent source of political offense. Be careful that any UI used for selecting a nation refers to it as a "country/region". Putting a disputed territory in a list labeled "Countries", like in an address form, could get you in trouble.
Don't use string comparison by itself to compare language tags. | BCP-47 language tags are complex. There are a number of issues when comparing language tags, including issues with matching script information, legacy tags, and multiple regional variants. The resource management system in Windows takes care of matching for you. You can specify a set of resources in any languages, and the system chooses the appropriate one for the user and the app. For more on resource management, see Defining app resources.
Don't assume that sorting is always alphabetic. | For languages that don't use Latin script, sorting is based on things like pronunciation, number of pen strokes, and other factors. Even languages that use Latin script don't always use alphabetic sorting. For example, in some cultures, a phone book might not be sorted alphabetically. The system can handle sorting for you, but if you create your own sorting algorithm, be sure to take into account the sorting methods used in your target markets.
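As one concrete illustration of the first recommendation, the Windows.Globalization.NumberFormatting classes produce region-appropriate output without hard-coding a format; a minimal C# sketch, where the sample values and the expected outputs in the comments are illustrative:

// Formatters pick up the language and region of the current context.
var percentFormatter = new Windows.Globalization.NumberFormatting.PercentFormatter();
string percent = percentFormatter.Format(0.25);   // a percentage in the user's regional format

var currencyFormatter = new Windows.Globalization.NumberFormatting.CurrencyFormatter(
    Windows.Globalization.CurrencyIdentifiers.Usd);
string price = currencyFormatter.Format(9.99);    // e.g. "$9.99" for en-US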

Localization
RECOMMENDATION | DESCRIPTION

Separate resources such as UI strings and images from code. | Design your apps so that resources, like strings and images, are separated from your code. This enables them to be independently maintained, localized, and customized for different scaling factors, accessibility options, and a number of other user and machine contexts. Separate string resources from your app's code to create a single language-independent codebase. Always separate strings from app code and markup, and place them into a resource file, like a ResW or ResJSON file. Use the resource infrastructure in Windows to handle the selection of the most appropriate resources to match the user's runtime environment.
Isolate other localizable resource files. | Take other files that require localization, like images that contain text to be translated or that need to be changed due to cultural sensitivity, and place them in folders tagged with language names.
Set your default language, and mark all of your resources, even the ones in your default language. | Always set the default language for your apps appropriately in the app manifest (package.appxmanifest). The default language determines the language that's used when the user doesn't speak any of the supported languages of the app. Mark default language resources, for example en-us/Logo.png, with their language, so the system can tell which language the resource is in and how it's used in particular situations.
Determine the resources of your app that require localization. | What needs to change if your app is to be localized for other markets? Text strings require translation into other languages. Images may need to be adapted for other cultures. Consider how localization affects other resources that your app uses, like audio or video.
Use resource identifiers in the code and markup to refer to resources. | Instead of having string literals or specific file names for images in your markup, use references to the resources. Be sure to use unique identifiers for each resource. For more info, see How to name resources using qualifiers. Listen for events that fire when the system changes and it begins to use a different set of qualifiers. Reprocess the document so that the correct resources can be loaded.
Enable text size to increase. | Allocate text buffers dynamically, since text size may expand when translated. If you must use static buffers, make them extra-large (perhaps doubling the length of the English string) to accommodate potential expansion when strings are translated. There also may be limited space available for a user interface. To accommodate localized languages, ensure that your string length is approximately 40% longer than what you would need for the English language. For really short strings, such as single words, you may need as much as 300% more space. In addition, enabling multiline support and text wrapping in a control will leave more space to display each string.
Support mirroring. | Text alignment and reading order can be left-to-right, as in English, or right-to-left (RTL), as in Arabic or Hebrew. If you are localizing your product into languages that use a different reading order than your own, be sure that the layout of your UI elements supports mirroring. Even items such as back buttons, UI transition effects, and images may need to be mirrored. For more info, see Adjust layout and fonts, and support RTL.
Comment strings. | Ensure that strings are properly commented, and only the strings that need to be translated are provided to localizers. Over-localization is a common source of problems.
Use short strings. | Shorter strings are easier to translate and enable translation recycling. Translation recycling saves money because the same string isn't sent to the localizer twice. Strings longer than 8192 characters may not be supported by some localization tools, so keep string length to 4000 or less.
Provide strings that contain an entire sentence. | Provide strings that contain an entire sentence, instead of breaking the sentence into individual words, because the translation of words may depend on their position in a sentence. Also, don't assume that a phrase with multiple parameters will keep those parameters in the same order for every language.
Optimize image and audio files for localization. | Reduce localization costs by avoiding use of text in images or speech in audio files. If you're localizing to a language with a different reading direction than your own, using symmetrical images and effects makes it easier to support mirroring.
Don't re-use strings in different contexts. | Don't re-use strings in different contexts, because even simple words like "on" and "off" may be translated differently, depending on the context.
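As a small example of the first recommendation (separating strings from code), a string resource can be looked up by identifier at run time; this minimal C# sketch assumes a "Farewell" entry exists in the app's Resources.resw:

// Resolve the string against the user's language at run time.
var loader = Windows.ApplicationModel.Resources.ResourceLoader.GetForCurrentView();
string farewell = loader.GetString("Farewell");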

Related articles
Samples
Application resources and localization sample
Globalization preferences sample
Adjust layout and fonts, and support RTL

Develop your app to support the layouts and fonts of multiple languages, including RTL (right-to-left) flow
direction.

Layout guidelines
Some languages, such as German and Finnish, require more space than English for their text. The fonts for some
languages, such as Japanese, require more height. And some languages, such as Arabic and Hebrew, require that
text layout and app layout must be in right-to-left (RTL) reading order.
Use flexible layout mechanisms instead of absolute positioning, fixed widths, or fixed heights. When necessary,
particular UI elements can be adjusted based on language.
Specify a Uid for an element:

<TextBlock x:Uid="Block1">

Ensure that your app's ResW file has a resource for Block1.Width, which you can set for each language that you
localize into.
For Windows Store apps using C++, C#, or Visual Basic, use the FlowDirection property, with symmetrical
padding and margins, to enable localization for other layout directions.
XAML layout controls such as Grid scale and flip automatically with the FlowDirection property. Expose your own
FlowDirection property in your app as a resource for localizers.
Specify a Uid for the main page of your app:

<Page x:Uid="MainPage">

Ensure that your app's ResW file has a resource for MainPage.FlowDirection, which you can set for each language
that you localize into.

Mirroring images
If your app has images that must be mirrored (that is, the same image can be flipped) for RTL, you can apply the
FlowDirection property:

<!-- en-US\localized.xaml -->


<Image ... FlowDirection="LeftToRight" />

<!-- ar-SA\localized.xaml -->


<Image ... FlowDirection="RightToLeft" />

If your app requires a different image to flip the image correctly, you can use the resource management system
with the layoutdir qualifier. The system chooses an image named file.layoutdir-rtl.png when the application
language is set to an RTL language. This approach may be necessary when some part of the image is flipped, but
another part isn't.
Fonts
Use the LanguageFont font-mapping APIs for programmatic access to the recommended font family, size,
weight, and style for a particular language. The LanguageFont object provides access to the correct font info for
various categories of content including UI headers, notifications, body text, and user-editable document body
fonts.
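A minimal C# sketch of the LanguageFont lookup, assuming a TextBlock named myTextBlock:

// Query the recommended UI text font for Japanese and apply it.
var fontGroup = new Windows.Globalization.Fonts.LanguageFontGroup("ja-JP");
Windows.Globalization.Fonts.LanguageFont uiFont = fontGroup.UITextFont;

myTextBlock.FontFamily = new Windows.UI.Xaml.Media.FontFamily(uiFont.FontFamily); // myTextBlock is hypothetical
myTextBlock.FontWeight = uiFont.Weight;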

Best practices for handling Right to Left (RTL) languages


When your app is localized for Right to Left (RTL) languages, use APIs to set the default text direction for the
RootFrame. This will cause all of the controls contained within the RootFrame to respond appropriately to the
default text direction. When more than one language is supported, use the LayoutDirection for the top preferred
language to set the FlowDirection property. Most controls included in Windows use FlowDirection already. If you
are implementing custom controls, they should use FlowDirection to make appropriate layout changes for RTL and
LTR languages.
C#

// For bidirectional languages, determine flow direction for RootFrame and all derived UI.

string resourceFlowDirection = ResourceContext.GetForCurrentView().QualifierValues["LayoutDirection"];


if (resourceFlowDirection == "LTR")
{
RootFrame.FlowDirection = FlowDirection.LeftToRight;
}
else
{
RootFrame.FlowDirection = FlowDirection.RightToLeft;
}

C++:

// Get preferred app language


m_language = Windows::Globalization::ApplicationLanguages::Languages->GetAt(0);

// Set flow direction accordingly


m_flowDirection = ResourceManager::Current->DefaultContext->QualifierValues->Lookup("LayoutDirection") != "LTR" ?
FlowDirection::RightToLeft : FlowDirection::LeftToRight;

RTL FAQ

Q: Is FlowDirection set automatically based on the current language selection? For example, if I select English will it display left to right, and if I select Arabic, will it display right to left?

A: FlowDirection does not take into account the language. You set FlowDirection appropriately for the
language you are currently displaying. See the sample code above.

Q: I'm not too familiar with localization. Do the resources already contain flow direction? Is it possible to determine the flow direction from the current language?

A: If you are using current best practices, resources do not contain flow direction directly. You must
determine flow direction for the current language. Here are two ways to do this:
The preferred way is to use the LayoutDirection for the top preferred language to set the FlowDirection
property of the RootFrame. All the controls in the RootFrame inherit FlowDirection from the RootFrame.
Another way is to set the FlowDirection in the resw file for the RTL languages you are localizing for. For
example, you might have an Arabic resw file and a Hebrew resw file. In these files you could use x:Uid to set
the FlowDirection. This method is more prone to errors than the programmatic method, though.

Related topics
FlowDirection
Use patterns to format dates and times

Use the Windows.Globalization.DateTimeFormatting API with custom patterns to display dates and times in
exactly the format you wish.

Important APIs
Windows.Globalization.DateTimeFormatting
DateTimeFormatter
DateTime

Introduction
Windows.Globalization.DateTimeFormatting provides various ways to properly format dates and times for
languages and regions around the world. You can use standard formats for year, month, day, and so on, or you can
use standard string templates, such as "longdate" or "month day".
But when you want more control over the order and format of the constituents of the DateTime string you wish to
display, you can use a special syntax for the string template parameter, called a "pattern". The pattern syntax allows
you to obtain individual constituents of a DateTime object (just the month name, or just the year value, for
example) in order to display them in whatever custom format you choose. Furthermore, the pattern can be
localized to adapt to other languages and regions.
Note This is an overview of format patterns. For a more complete discussion of format templates and format
patterns see the Remarks section of the DateTimeFormatter class.

What you need to know


It's important to note that when you use patterns, you are building a custom format that is not guaranteed to be
valid across cultures. For example, consider the "month day" template:
C#

var datefmt = new Windows.Globalization.DateTimeFormatting.DateTimeFormatter("month day");

JavaScript

var datefmt = new Windows.Globalization.DateTimeFormatting.DateTimeFormatter("month day");

This creates a formatter based on the language and region value of the current context. Therefore, it always
displays the month and day together in an appropriate global format. For example, it displays "January 1" for
English (US), but "1 janvier" for French (France) and "1月1日" for Japanese. That is because the template is based on
a culture-specific pattern string, which can be accessed via the pattern property:
C#

var monthdaypattern = datefmt.Patterns;


JavaScript

var monthdaypattern = datefmt.patterns;

This yields different results depending on the language and region of the formatter. Note that different regions
may use different constituents, in different orders, with or without additional characters and spacing:

En-US: "{month.full} {day.integer}"
Fr-FR: "{day.integer} {month.full}"
Ja-JP: "{month.integer}月{day.integer}日"

You can use patterns to construct a custom DateTimeFormatter, for instance this one based on the US English
pattern:
C#

var datefmt = new Windows.Globalization.DateTimeFormatting.DateTimeFormatter("{month.full} {day.integer}");

JavaScript

var datefmt = new Windows.Globalization.DateTimeFormatting.DateTimeFormatter("{month.full} {day.integer}");

Windows returns culture-specific values for the individual constituents inside the brackets {}. But with the pattern
syntax, the constituent order is invariant. You get precisely what you ask for, which may not be culturally
appropriate:

En-US: January 1
Fr-FR: janvier 1 (inappropriate for France; non-standard order)
Ja-JP: 1月 1 (inappropriate for Japan; the day symbol 日 is missing)

Furthermore, patterns are not guaranteed to remain consistent over time. Countries or regions may change their
calendar systems, which alters a format template. Windows updates the output of the formatters to accommodate
such changes. Therefore, you should only use the pattern syntax for formatting DateTimes when:
You are not dependent on a particular output for a format.
You do not need the format to follow some culture-specific standard.
You specifically intend the pattern to be invariant across cultures.
You intend to localize the pattern.
To summarize the differences between the standard string templates and non-standard string patterns:
String templates, such as "month day":
Abstracted representation of a DateTime format that includes values for the month and the day, in some order.
Guaranteed to return a valid standard format across all language-region values supported by Windows.
Guaranteed to give you a culturally-appropriate formatted string for the given language-region.
Not all combinations of constituents are valid. For example, there is no string template for "dayofweek day".
String patterns, such as "{month.full} {day.integer}":
Explicitly ordered string that expresses the full month name, followed by a space, followed by the day integer, in
that order.
May not correspond to a valid standard format for any language-region pair.
Not guaranteed to be culturally appropriate.
Any combination of constituents may be specified, in any order.

Tasks
Suppose you wish to display the current month and day together with the current time, in a specific format. For
example, you would like US English users to see something like this:

June 25 | 1:38 PM

The date part corresponds to the "month day" template, and the time part corresponds to the "hour minute"
template. So, you can create a custom format that concatenates the patterns which make up those templates.
First, get the formatters for the relevant date and time templates, and then get the patterns of those templates:
C#

// Get formatters for the date part and the time part.
var mydate = new Windows.Globalization.DateTimeFormatting.DateTimeFormatter("month day");
var mytime = new Windows.Globalization.DateTimeFormatting.DateTimeFormatter("hour minute");

// Get the patterns from these formatters.
var mydatepattern = mydate.Patterns[0];
var mytimepattern = mytime.Patterns[0];

JavaScript

// Get formatters for the date part and the time part.
var dtf = Windows.Globalization.DateTimeFormatting;
var mydate = new dtf.DateTimeFormatter("month day");
var mytime = new dtf.DateTimeFormatter("hour minute");

// Get the patterns from these formatters.
var mydatepattern = mydate.patterns[0];
var mytimepattern = mytime.patterns[0];

You should store your custom format as a localizable resource string. For example, the string for English (United
States) would be "{date} | {time}". Localizers can adjust this string as needed. For example, they can change the
order of the constituents, if it seems more natural in some language or region to have the time precede the date.
Or, they can replace "|" with some other separator character. At runtime you replace the {date} and {time} portions
of the string with the relevant pattern:
C#

// Assemble the custom pattern. This string comes from a resource, and should be localizable.
var resourceLoader = new Windows.ApplicationModel.Resources.ResourceLoader();
var mydateplustime = resourceLoader.GetString("date_plus_time");
mydateplustime = mydateplustime.Replace("{date}", mydatepattern);
mydateplustime = mydateplustime.Replace("{time}", mytimepattern);

JavaScript

// Assemble the custom pattern. This string comes from a resource, and should be localizable.
var mydateplustime = WinJS.Resources.getString("date_plus_time");
mydateplustime = mydateplustime.replace("{date}", mydatepattern);
mydateplustime = mydateplustime.replace("{time}", mytimepattern);

Then you can construct a new formatter based on the custom pattern:
C#

// Get the custom formatter.
var mydateplustimefmt = new Windows.Globalization.DateTimeFormatting.DateTimeFormatter(mydateplustime);

JavaScript

// Get the custom formatter.
var mydateplustimefmt = new dtf.DateTimeFormatter(mydateplustime);
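
Finally, you can use the custom formatter like any other. A minimal C# sketch, reusing the mydateplustimefmt
variable from the steps above:

C#

// Format the current date and time with the assembled custom pattern,
// for example "June 25 | 1:38 PM" for English (United States).
var customFormatted = mydateplustimefmt.Format(DateTime.Now);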

Related topics
Date and time formatting sample
Windows.Globalization.DateTimeFormatting
Windows.Foundation.DateTime
Manage language and region

Control how Windows selects UI resources and formats the UI elements of the app, by using the various language
and region settings provided by Windows.

Important APIs
Windows.Globalization
Windows.ApplicationModel.Resources
WinJS.Resources Namespace

Introduction
For a sample app that demonstrates how to manage language and region settings, see Application resources and
localization sample.
A Windows user doesn't need to choose just one language from a limited set of languages. Instead, the user can
tell Windows that they speak any language in the world, even if Windows itself isn't translated into that language.
The user can even specify that they can speak multiple languages.
A Windows user can specify their location, which can be anywhere in the world. Also, the user can specify that they
speak any language in any location. The location and language do not limit each other. Just because the user
speaks French doesn't mean they are located in France, or just because the user is in France doesn't mean they
prefer to speak French.
A Windows user can run apps in a completely different language than Windows. For example, the user can run an
app in Spanish while Windows is running in English.
For Windows Store apps, a language is represented as a BCP-47 language tag. Most APIs in the Windows Runtime,
HTML, and XAML can return or accept string representations of these BCP-47 language tags. See also the IANA list
of languages.
See Supported languages for a list of the language tags specifically supported by the Windows Store.

Tasks
Users can set their language preferences.
The user language preferences list is an ordered list of languages that describe the user's languages in the order
that they prefer them.
The user sets the list in Settings > Time & language > Region & language. Alternatively, they can use Control
Panel > Clock, Language, and Region.
The user's language preferences list can contain multiple languages and regional or otherwise specific variants.
For example, the user might prefer fr-CA, but can also understand en-GB.
Specify the supported languages in the app's manifest.
Specify the list of your app's supported languages in the Resources element of the app's manifest file (typically
Package.appxmanifest), or Visual Studio automatically generates the list of languages in the manifest file based on
the languages found in the project. The manifest should accurately describe the supported languages at the
appropriate level of granularity. The languages listed in the manifest are the languages displayed to users in the
Windows Store.
Specify the default language.
Open package.appxmanifest in Visual Studio, go to the Application tab, and set your default language to the
language you are using to author your application.
An app uses the default language when it doesn't support any of the languages that the user has chosen. Visual
Studio uses the default language to add metadata to assets marked in that language, enabling the appropriate
assets to be chosen at runtime.
The default language property must also be set as the first language in the manifest to appropriately set the
application language (described in the step "Create the application language list", below). Resources in the default
language must still be qualified with their language (for example, en-US/logo.png). The default language does not
specify the implicit language of unqualified assets. To learn more, see How to name resources using qualifiers.
Qualify resources with their language.
Consider your audience carefully and the language and location of users you want to target. Many people who live
in a region don't prefer the primary language of that region. For example, there are millions of households in the
United States in which the primary language is Spanish.
When you qualify resources with language:
Include script when there is no suppress script value defined for the language. See the IANA subtag registry for
language tag details. For example, use zh-Hant, zh-Hant-TW, or zh-Hans, and not zh-CN or zh-TW.
Mark all linguistic content with a language. The default language project property is not the language of
unmarked resources (that is, neutral language); it specifies which marked language resource should be chosen
if no other marked language resource matches the user.
Mark assets with an accurate representation of the content.
Windows does complex matching, including across regional variants (such as en-US to en-GB), so applications
are free to mark assets with an accurate representation of the content and let Windows match appropriately for
each user.
The Windows Store displays what's in the manifest to users looking at the application.
Be aware that some tools and other components such as machine translators may find specific language tags,
such as regional dialect info, helpful in understanding the data.
Be sure to mark assets with full details, especially when multiple variants are available. For example, mark
en-GB and en-US if both are specific to that region.
For languages that have a single standard dialect, there is no need to add region. General tagging is reasonable
in some situations, such as marking assets with ja instead of ja-JP.
Sometimes there are situations where not all resources need to be localized.
For resources such as UI strings that come in all languages, mark them with the appropriate language they are
in and make sure to have all of these resources in the default language. There is no need to specify a neutral
resource (one not marked with a language).
For resources that come in a subset of the entire application's set of languages (partial localization), specify the
set of the languages the assets do come in and make sure to have all of these resources in the default language.
Windows picks the best language possible for the user by looking at all the languages the user speaks in their
preference order. For example, not all of an app's UI may be localized into Catalan if the app has a full set of
resources in Spanish. For users who speak Catalan and then Spanish, the resources not available in Catalan
appear in Spanish.
For resources that have specific exceptions in some languages and all other languages map to a common
resource, the resource that should be used for all languages should be marked with the undetermined
language tag 'und'. Windows interprets the 'und' language tag in a manner similar to '*', in that it matches the
top application language after any other specific match. For example, if a few resources (such as the width of an
element) are different for Finnish, but the rest of the resources are the same for all languages, the Finnish
resource should be marked with the Finnish language tag, and the rest should be marked with 'und'.
For resources that are based on a language's script instead of the language, such as a font or height of text, use
the undetermined language tag with a specified script: 'und-<script>'. For example, for Latin fonts use
und-Latn\fonts.css and for Cyrillic fonts use und-Cyrl\fonts.css.
Create the application language list.
At runtime, the system determines the user language preferences that the app declares support for in its manifest,
and creates an application language list. It uses this list to determine the language(s) that the application should
be in. The list determines the language(s) that is used for app and system resources, dates, times, and numbers,
and other components. For example, the Resource Management System
(Windows.ApplicationModel.Resources, Windows.ApplicationModel.Resources.Core and
WinJS.Resources Namespace) loads UI resources according to the application language.
Windows.Globalization also chooses formats based on the application language list. The application language
list is available using Windows.Globalization.ApplicationLanguages.Languages.
The matching of languages to resources is difficult. We recommend that you let Windows handle the matching
because there are many optional components to a language tag that influence priority of match, and these can be
encountered in practice.
Examples of optional components in a language tag are:
Script for suppress script languages. For example, en-Latn-US matches en-US.
Region. For example, en-US matches en.
Variants. For example, de-DE-1996 matches de-DE.
-x and other extensions. For example, en-US-x-Pirate matches en-US.
There are also many components to a language tag that are not of the form xx or xx-yy, and not all match.
zh-Hant does not match zh-Hans.
Windows prioritizes matching of languages in a standard well-understood way. For example, en-US matches, in
priority order, en-US, en, en-GB, and so forth.
Windows does cross regional matching. For example, en-US matches en-US, then en, and then en-*.
Windows has additional data for affinity matching within a region, such as the primary region for a language.
For example, fr-FR is a better match for fr-BE than is fr-CA.
Any future improvements in language matching in Windows will be obtained for free when you depend on
Windows APIs.
Matching for the first language in a list occurs before matching of the second language in a list, even for other
regional variants. For example, a resource for en-GB is chosen over an fr-CA resource if the application language is
en-US. Only if there are no resources for a form of en is a resource for fr-CA chosen.
The application language list is set to the user's regional variant, even if it is different than the regional variant that
the app provided. For example, if the user speaks en-GB but the app supports en-US, the application language list
would include en-GB. This ensures that dates, times, and numbers are formatted more closely to the user's
expectations (en-GB), but the UI resources are still loaded (due to language matching) in the app's supported
language (en-US).
The application language list is made up of the following items:
1. (Optional) Primary Language Override The PrimaryLanguageOverride is a simple override setting for
apps that give users their own independent language choice, or apps that have some strong reason to override
the default language choices. To learn more, see the Application resources and localization sample.
2. The user's languages supported by the app. This is a list of the user's language preferences, in order of
language preference. It is filtered by the list of supported languages in the app's manifest. Filtering the user's
languages by those supported by the app maintains consistency among software development kits (SDKs),
class libraries, dependent framework packages, and the app.
3. If 1 and 2 are empty, the default or first language supported by the app. If the user doesn't speak any
languages that the app supports, the chosen application language is the first language supported by the app.
See the Remarks section below for examples.
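In code, you can read the resulting list, and apps that offer their own language picker can set the override. A
minimal C# sketch ("fr-FR" is just an example tag):

C#

// The computed application language list, highest priority first.
var appLanguages = Windows.Globalization.ApplicationLanguages.Languages;

// Apps with an in-app language choice can set the override; it persists
// across sessions. Assigning string.Empty clears the override.
Windows.Globalization.ApplicationLanguages.PrimaryLanguageOverride = "fr-FR";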
Set the HTTP Accept Language header.
HTTP requests made from Windows Store apps and desktop apps, in typical web requests and in XMLHttpRequest
(XHR), use the standard HTTP Accept-Language header. By default, the HTTP header is set to the user's language
preferences, in the user's preferred order, as specified in Settings > Time & language > Region & language.
Each language in the list is further expanded to include neutrals of the language and a weighting (q). For example,
a user's language list of fr-FR and en-US results in an HTTP Accept-Language header of fr-FR, fr, en-US, en
("fr-FR,fr;q=0.8,en-US;q=0.5,en;q=0.3").
Use the APIs in the Windows.Globalization namespace.
Typically, the API elements in the Windows.Globalization namespace use the application language list to
determine the language. If none of the languages has a matching format, the user locale is used. This is the same
locale that is used for the system clock. The user locale is available from the Settings > Time & language >
Region & language > Additional date, time, & regional settings > Region: Change date, time, or number
formats. The Windows.Globalization APIs also accept an override to specify a list of languages to use, instead of
the application language list.
Windows.Globalization also has a Language object that is provided as a helper object. It lets apps inspect
details about the language, such as the script of the language, the display name, and the native name.
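For instance, here is a short C# sketch of both capabilities; the "ja-JP" and "ja" tags are arbitrary examples:

C#

// Override the application language list for one formatter by passing
// an explicit list of language tags.
var fmt = new Windows.Globalization.DateTimeFormatting.DateTimeFormatter("longdate", new[] { "ja-JP" });

// Use the Language helper object to inspect details about a language.
var lang = new Windows.Globalization.Language("ja");
var displayName = lang.DisplayName;   // for example, "Japanese"
var nativeName = lang.NativeName;
var script = lang.Script;             // for example, "Jpan"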
Use geographic region when appropriate.
Instead of language, you can use the user's home geographic region setting for choosing what content to display
to the user. For example, a news app might default to displaying content from a user's home location, which is set
when Windows is installed and is available in the Windows UI under Region: Change date, time, or number
formats as described in the previous task. You can retrieve the current user's home region setting by using
Windows.System.UserProfile.GlobalizationPreferences.HomeGeographicRegion.
Windows.Globalization also has a GeographicRegion object that is provided as a helper object. It lets apps
inspect details about a particular region, such as its display name, native name, and currencies in use.
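A brief C# sketch of reading the home region and inspecting it (the output depends on the user's settings):

C#

// Get the user's home region as a BCP-47 region subtag, such as "US" or "JP".
var homeRegion = Windows.System.UserProfile.GlobalizationPreferences.HomeGeographicRegion;

// Use the GeographicRegion helper object to inspect details about it.
var region = new Windows.Globalization.GeographicRegion(homeRegion);
var displayName = region.DisplayName;
var currencies = region.CurrenciesInUse;   // ISO 4217 currency codes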

Remarks
The following table contains examples of what the user would see in the app's UI under various language and
region settings.

Scenario 1
App-supported languages (defined in manifest): English (GB) (default); German (Germany)
User language preferences (set in Control Panel): English (GB)
App's primary language override (optional): none
App languages: English (GB)
What the user sees in the app: UI: English (GB). Dates/Times/Numbers: English (GB)

Scenario 2
App-supported languages (defined in manifest): German (Germany) (default); French (France); Italian (Italy)
User language preferences (set in Control Panel): French (Austria)
App's primary language override (optional): none
App languages: French (Austria)
What the user sees in the app: UI: French (France) (fallback from French (Austria)). Dates/Times/Numbers: French (Austria)

Scenario 3
App-supported languages (defined in manifest): English (US) (default); French (France); English (GB)
User language preferences (set in Control Panel): English (Canada); French (Canada)
App's primary language override (optional): none
App languages: English (Canada); French (Canada)
What the user sees in the app: UI: English (US) (fallback from English (Canada)). Dates/Times/Numbers: English (Canada)

Scenario 4
App-supported languages (defined in manifest): Spanish (Spain) (default); Spanish (Mexico); Spanish (Latin America); Portuguese (Brazil)
User language preferences (set in Control Panel): English (US)
App's primary language override (optional): none
App languages: Spanish (Spain)
What the user sees in the app: UI: Spanish (Spain) (uses default since no fallback available for English). Dates/Times/Numbers: Spanish (Spain)

Scenario 5
App-supported languages (defined in manifest): Catalan (default); Spanish (Spain); French (France)
User language preferences (set in Control Panel): Catalan; French (France)
App's primary language override (optional): none
App languages: Catalan; French (France)
What the user sees in the app: UI: mostly Catalan and some French (France), because not all the strings are in Catalan. Dates/Times/Numbers: Catalan

Scenario 6
App-supported languages (defined in manifest): English (GB) (default); French (France); German (Germany)
User language preferences (set in Control Panel): German (Germany); English (GB)
App's primary language override (optional): English (GB) (chosen by user in app's UI)
App languages: English (GB); German (Germany)
What the user sees in the app: UI: English (GB) (language override). Dates/Times/Numbers: English (GB)

Related topics
BCP-47 language tag
IANA list of languages
Application resources and localization sample
Supported languages
Prepare your app for localization

Prepare your app for localization to other markets, languages, or regions. Before you get started, be sure to read
through the do's and don'ts.

Use resource files and qualifiers.


Be sure to specify the UI strings of your app in resource files, instead of placing them in your code. For more detail,
see Put UI strings into resources.
Specify images or other file resources with the appropriate language tag in their file or folder. Be aware that it takes
a significant amount of system resources to localize images, audio, and video, so it's best to use neutral media
assets whenever you can. To learn more, see How to name resources using qualifiers.

Add contextual comments.


Add localization comments to your app resource files. The comments are visible to the localizer, and should
provide contextual information that helps the localizer to accurately translate the resources. The comments should
also provide sufficient constraint information on the resource, so that translation does not break the software.
Optionally, the comments can be logged by the Makepri.exe tool.
XAML: Resw files (resources created in Visual Studio for apps using XAML) have a comment element. For example:

<data name="String1">
<value>Hello World</value>
<comment>A greeting (This is a comment to the localizer)</comment>
</data>

Localize sentences instead of words.


Consider the following string: "The {0} could not be synchronized."
A variety of words could replace {0}, such as appointment, task, or document. While this example would appear to
work for the English language, it will not work in all cases for the corresponding sentence in German. Notice that in
the following German sentences, some of the words in the template string ("Der", "Die", "Das") need to match the
parameterized word:

ENGLISH GERMAN

The appointment could not be synchronized. Der Termin konnte nicht synchronisiert werden.

The task could not be synchronized. Die Aufgabe konnte nicht synchronisiert werden.

The document could not be synchronized. Das Dokument konnte nicht synchronisiert werden.

As another example, consider the sentence "Remind me in {0} minute(s)." While using "minute(s)" works for the
English language, other languages might use different terms. For example, the Polish language uses "minuta",
"minuty", or "minut" depending on the context.
To solve this problem, localize the entire sentence, rather than a single word. Doing this may seem like extra work
and an inelegant solution, but it is the best solution because:
A clean error message will be displayed for all languages.
Your localizer will not need to ask about what the strings will be replaced with.
You will not need to implement a costly code fix when a problem like this surfaces after your app is completed.
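
As an illustrative C# sketch, you might select between two whole-sentence resources at runtime; the resource
identifiers here ("SyncFailedAppointment", "SyncFailedDocument") are hypothetical, not part of any API:

C#

// Each localized string is a complete sentence, so translators control
// articles and word order per language.
bool isAppointment = true;   // example condition
var loader = new Windows.ApplicationModel.Resources.ResourceLoader();
string message = loader.GetString(
    isAppointment ? "SyncFailedAppointment" : "SyncFailedDocument");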

Ensure the correct parameter order.


Don't assume that all languages use parameters in the same order. For example, consider the string "Every %s %s",
where the first %s is replaced by the name of a month, and the second %s is replaced by the date of a month. This
example works for the English language, but will fail when the app is localized into the German language, where
the date and month are displayed in the reverse order.
To solve this problem, change the string to "Every %1 %2", so that the order is interchangeable depending on the
language.
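
In C#, indexed placeholders with string.Format give the same flexibility, because a translated resource can reorder
{0} and {1}. A minimal sketch with a hypothetical "EveryMonthDate" resource:

C#

// The English resource might be "Every {0} {1}", while a German resource
// can reorder the same indexes, for example "Jeden {1} {0}".
var loader = new Windows.ApplicationModel.Resources.ResourceLoader();
string template = loader.GetString("EveryMonthDate");
string result = string.Format(template, "June", "25");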

Don't over localize.


Localize specific strings, not tags. Consider the following examples:

OVER-LOCALIZED STRING CORRECTLY-LOCALIZED STRING

<link>terms of use</link> terms of use

<link>privacy policy</link> privacy policy

Including the above <link> tag in the resources means that it too will be localized. This renders the tag not valid.
Only the strings themselves should be localized. Generally, you should think of tags as code that should be kept
separate from localizable content. However, some long strings should include markup to keep context and ensure
ordering.

Do not use the same strings in dissimilar contexts.


Reusing a string may seem like the best solution, but it can cause localization problems if the same word or phrase
can have different meanings or contexts.
You can reuse strings if the two contexts are the same. For instance, you can reuse the string "Volume" for both
sound effect volume and music volume because both refer to intensity of sound. You should not reuse that same
string when referring to a hard disk volume because the context and meaning are different, and the word might be
translated differently.
Another example is the use of the strings "on" and "off". In the English language, "on" and "off" can be used for a
toggle for Flight Mode, Bluetooth, and devices. But in Italian, the translation depends on the context of what is
being turned on and off. You would need to create a pair of strings for each context.
Additionally, a string like "text" or "fax" could be used as both a verb and a noun in the English language, which can
confuse the translation process. Instead, create a separate string for both the verb and noun format. When you're
not sure whether the contexts are the same, err on the safe side and use a distinct string.

Identify resources with unique attributes.


Resource identifiers are case insensitive and must be unique per resource file. When accessing a resource, use the
resource identifier, not the actual value of the resource. Resource identifiers don't change, but the actual values of
the resources do change depending on the language.
Be sure to use meaningful resource identifiers to provide additional context for translation.
Don't change the resource identifiers after the string resources are sent to translation. Localization teams use the
resource identifier to track additions, deletions, and updates in the resources. Changes in resource identifiers (also
known as "resource identifier shift") require strings to be retranslated, because it will appear as though strings
were deleted and others added.

Choose an appropriate translation approach.


After strings are separated into resource files, they can be translated. The ideal time to translate strings is after the
strings in your project are finalized, which usually happens toward the end of a project. You can approach the
translation process in a number of ways. This may depend on the volume of strings to be translated, the number of
languages to be translated, and how the translation will be done (such as in-house versus hiring an external
vendor).
Consider the following options:
The resource files can be translated by opening them directly in the project. This approach works well
for a project that has a small volume of strings and that needs to be translated into two or three languages. It
could be suitable for a scenario where a developer speaks more than one language and is willing to handle the
translation process. This approach benefits by being quick, requires no tools, and minimizes the risk of
mistranslations, but it is not scalable. In particular, the resources in different languages can easily get out of
sync, causing bad user experiences and maintenance headaches.
The string resource files are in XML or ResJSON text format, so could be handed off for translation
using any text editor. The translated files would then be copied back into the project. This approach
carries a risk of translators accidentally editing the XML tags, but it lets translation work take place outside of
the Microsoft Visual Studio project. This approach could work well for projects that need to be translated into a
small number of languages. The XLIFF format is an XML format specifically designed for use in localization, and is
supported by many localization vendors and localization tools. You can use the Multilingual App Toolkit to
generate XLIFF files from other resource files, such as .resw or .resjson.
Handoffs to localizers may need to occur for other files, such as images or audio files. Typically, we don't
recommend creating culturally dependent files because they can be difficult to localize.
Additionally, consider the following suggestions:
Use a localization tool. A number of localization tools are available for parsing resource files and allowing
only the translatable strings to be edited by translators. This approach reduces the risk of a translator
accidentally editing the XML tags. But it has the drawback of introducing a new tool and process to the
localization process. A localization tool is good for projects with a large volume of strings, but a small number of
languages. To learn more, see How to use the Multilingual App Toolkit.
Use a localization vendor. Consider using a localization vendor if your project contains a large volume of
strings and needs to be translated for many languages. A localization vendor can give advice about tools and
processes, as well as translating your resource files. This is an ideal solution, but is also the most costly option,
and may increase the turnaround time for your translated content.
Keep your localizers informed. Inform localizers of strings that can be considered a noun or a verb. Explain
fabricated words to your localizers by using terminology tools. Keep strings grammatically correct,
unambiguous, and as nontechnical as possible to avoid confusion.

Keep access keys and labels consistent.


It is a challenge to "synchronize" the access keys used in accessibility with the display of the localized access keys,
because the two string resources are categorized in two separate sections. Be sure to provide comments for the
label string such as: Make sure that the emphasized shortcut key is synchronized with the access key.
Support Furigana for Japanese strings that can be sorted.
Japanese Kanji characters have the unique property of having more than one pronunciation depending on the
word and context they are used in. This leads to problems when you try to sort Japanese named objects, such as
application names, files, songs, and so on. Japanese Kanji have, in the past, usually been sorted in a
machine-understandable order called XJIS. Unfortunately, because this sorting order is not phonetic, it is not very
useful for humans.
Furigana works around this problem by allowing the user or creator to specify the phonetics for the characters they
are using. If you use the following procedure to add Furigana to your app name, you can ensure that it is sorted in
the proper location in the app list. If your app name contains Kanji characters and Furigana is not provided when
the user's UI language or the sort order is set to Japanese, Windows makes its best effort to generate the
appropriate pronunciation. However, there is a possibility for app names containing rare or unique readings to be
sorted under a more common reading instead. Therefore, the best practice for Japanese applications (especially
those containing Kanji characters in their names) is to provide a Furigana version of their app name as part of the
Japanese localization process.
1. Add "ms-resource:Appname" as the Package Display Name and the Application Display Name.
2. Create a ja-JP folder under strings, and add two resource files as follows:

strings\
en-us\
ja-jp\
Resources.altform-msft-phonetic.resw
Resources.resw

3. In Resources.resw for general ja-JP: Add a string resource for Appname "希蒼".
4. In Resources.altform-msft-phonetic.resw for Japanese furigana resources: Add the Furigana value for AppName
"のあ".
The user can search for the app name "希蒼" using both the Furigana value "のあ" (noa), and the phonetic value
(using the GetPhonetic function from the Input Method Editor (IME)) "まれあお" (mare-ao).
Sorting follows the Regional Control Panel format:
Under Japanese user locale,
If Furigana is enabled, "希蒼" is sorted under "のあ".
If Furigana is missing, "希蒼" is sorted under "まれあお".
Under non-Japanese user locale,
If Furigana is enabled, "希蒼" is sorted under "のあ".
If Furigana is missing, "希蒼" is sorted under "希蒼".

Related topics
Globalization and localization do's and don'ts
Put UI strings into resources
How to name resources using qualifiers
Put UI strings into resources

Put string resources for your UI into resource files. You can then reference those strings from your code or
markup.

Important APIs
ApplicationModel.Resources.ResourceLoader
WinJS.Resources.processAll

This topic shows the steps to add several language string resources to your Universal Windows app, and how to
briefly test it.

Put strings into resource files, instead of putting them directly in code
or markup.
1. Open your solution (or create a new one) in Visual Studio.
2. Open package.appxmanifest in Visual Studio, go to the Application tab, and (for this example) set the
Default language to "en-US". If there are multiple package.appxmanifest files in your solution, do this for
each one.
Note This specifies the default language for the project. The default language resources are used if the
user's preferred language or display languages do not match the language resources provided in the
application.
3. Create a folder to contain the resource files.
a. In the Solution Explorer, right-click the project (the Shared project if your solution contains multiple
projects) and select Add > New Folder.
b. Name the new folder "Strings".
c. If the new folder is not visible in Solution Explorer, select Project > Show All Files from the Microsoft
Visual Studio menu while the project is still selected.
4. Create a sub-folder and a resource file for English (United States).
a. Right-click the Strings folder and add a new folder beneath it. Name it "en-US". The resource file is to be
placed in a folder that has been named for the BCP-47 language tag. See How to name resources using
qualifiers for details on the language qualifier and a list of common language tags.
b. Right-click the en-US folder and select Add > New Item.
c. Select "Resources File (.resw)".
d. Click Add. This adds a resource file with the default name "Resources.resw". We recommend that
you use this default filename. Apps can partition their resources into other files, but you must be
careful to refer to them correctly (see How to load string resources).
e. If you have .resx files with only string resources from previous .NET projects, select Add > Existing
Item, add the .resx file, and rename it to .resw.
f. Open the file and use the editor to add these resources:
Strings/en-US/Resources.resw
[Image: adding the resources Greeting.Text, Greeting.Width, and Farewell in the Resources.resw editor]
In this example, "Greeting.Text" and "Farewell" identify the strings that are to be displayed. "Greeting.Width" identifies the Width property of
the "Greeting" string. The comments are a good place to provide any special instructions to translators who localize the strings to other
languages.

Associate controls to resources.


You need to associate every control that needs localized text with the .resw file. You do this using the x:Uid
attribute on your XAML elements like this:

<TextBlock x:Uid="Greeting" Text="" />

For the resource name, you give the Uid attribute value, plus you specify what property is to get the translated
string (in this case the Text property). You can specify other properties/values for different languages such as
Greeting.Width, but be careful with such layout-related properties. You should strive to allow the controls to lay
out dynamically based on the device's screen.
Note that attached properties are handled differently in resw files, such as AutomationProperties.Name. You need
to explicitly write out the namespace like this:

MediumButton.[using:Windows.UI.Xaml.Automation]AutomationProperties.Name

Add string resource identifiers to code and markup.


In your code, you can dynamically reference strings:
C#

var loader = new Windows.ApplicationModel.Resources.ResourceLoader();
var str = loader.GetString("Farewell");

C++

auto loader = ref new Windows::ApplicationModel::Resources::ResourceLoader();
auto str = loader->GetString("Farewell");

Add folders and resource files for two additional languages.


1. Add another folder under the Strings folder for German. Name the folder "de-DE" for Deutsch (Deutschland).
2. Create another resources file in the de-DE folder, and add the following:
strings/de-DE/Resources.resw
3. Create one more folder named "fr-FR", for français (France). Create a new resources file and add the
following:
strings/fr-FR/Resources.resw

Build and run the app.


Test the app for your default display language.
1. Press F5 to build and run the app.
2. Note that the greeting and farewell are displayed in the user's preferred language.
3. Exit the app.
Test the app for the other languages.
1. Bring up Settings on your device.
2. Select Time & language.
3. Select Region & language (or on a phone or phone emulator, Language).
4. Note that the language that was displayed when you ran the app is the highest language in the list that is
English, German, or French. If your top language is not one of these three, the app falls back to the next one on
the list that the app supports.
5. If you do not have all three of these languages on your machine, add the missing ones by clicking Add a
language and adding them to the list.
6. To test the app with another language, select the language in the list and click Set as default (or on a phone or
phone emulator, tap and hold the language in the list and then tap Move up until it is at the top). Then run the
app.

Related topics
How to name resources using qualifiers
How to load string resources
The BCP-47 language tag
Use global-ready formats

Develop a global-ready app by appropriately formatting dates, times, numbers, phone numbers, and currencies.
This permits you to adapt your app later for additional cultures, regions, and languages in the global market.

Important APIs
Windows.Globalization.Calendar
Windows.Globalization.DateTimeFormatting
Windows.Globalization.NumberFormatting
Windows.Globalization.PhoneNumberFormatting

Introduction
Many app developers naturally create their apps thinking only of their own language and culture. But when the app
begins to grow into other markets, adapting the app for new languages and regions can be difficult in unexpected
ways. For example, dates, times, numbers, calendars, currency, telephone numbers, units of measurement, and
paper sizes are all items that can be displayed differently in different cultures or languages.
The process of adapting to new markets can be simplified by taking a few things into account as you develop your
app.

Format dates and times appropriately


There are many different ways to properly display dates and times. Different regions and cultures use different
conventions for the order of day and month in the date, for the separation of hours and minutes in the time, and
even for what punctuation is used as a separator. In addition, dates may be displayed in various long formats
("Wednesday, March 28, 2012") or short formats ("3/28/12"), which can vary across cultures. And of course, the
names and abbreviations for the days of the week and months of the year differ for every language.
If you need to allow users to choose a date or select a time, use the standard date and time picker controls. These
will automatically use the date and time formats for the user's preferred language and region.
If you need to display dates or times yourself, use Date/Time and Number formatters to automatically display the
user's preferred format for dates, times, and numbers. The code below formats a given DateTime by using the
preferred language and region. For example, if the current date is 3 June 2012, the formatter gives "6/3/2012" if
the user prefers English (United States), but it gives "03.06.2012" if the user prefers German (Germany):
// Use the Windows.Globalization.DateTimeFormatting.DateTimeFormatter class
// to display dates and times using basic formatters.

// Formatters for dates and times, using shortdate format.
var sdatefmt = new Windows.Globalization.DateTimeFormatting.DateTimeFormatter("shortdate");
var stimefmt = new Windows.Globalization.DateTimeFormatting.DateTimeFormatter("shorttime");

// Obtain the date that will be formatted.
var dateToFormat = DateTime.Now;

// Perform the actual formatting.
var sdate = sdatefmt.Format(dateToFormat);
var stime = stimefmt.Format(dateToFormat);

// Results for display.
var results = "Short Date: " + sdate + "\n" +
    "Short Time: " + stime;

Format numbers and currencies appropriately


Different cultures format numbers differently. Format differences may include how many decimal digits to display,
what characters to use as decimal separators, and what currency symbol to use. Use NumberFormatting to
display decimal, percent, or permille numbers, and currencies. In most cases you simply display numbers or
currencies according to the user's current preferences. But you may also use the formatters to display a currency
for a particular region or format.
The code below gives an example of how to display currencies per the user's preferred language and region, or for
a specific given currency system:

// This scenario uses the Windows.Globalization.NumberFormatting.CurrencyFormatter class
// to format a number as a currency.

// Determine the current user's default currency.
var userCurrency = Windows.System.UserProfile.GlobalizationPreferences.Currencies[0];

// Number to be formatted.
var fractionalNumber = 12345.67;

// Currency formatter using the current user's preference settings for number formatting.
var userCurrencyFormat = new Windows.Globalization.NumberFormatting.CurrencyFormatter(userCurrency);
var currencyDefault = userCurrencyFormat.Format(fractionalNumber);

// Create a formatter initialized to a specific currency,
// in this case US Dollar (specified as an ISO 4217 code),
// but with the default number formatting for the current user.
var currencyFormatUSD = new Windows.Globalization.NumberFormatting.CurrencyFormatter("USD");
var currencyUSD = currencyFormatUSD.Format(fractionalNumber);

// Create a formatter initialized to a specific currency.
// In this case it's the Euro with the default number formatting for France.
var currencyFormatEuroFR = new Windows.Globalization.NumberFormatting.CurrencyFormatter("EUR", new[] { "fr-FR" }, "FR");
var currencyEuroFR = currencyFormatEuroFR.Format(fractionalNumber);

// Results for display.
var results = "Fixed number (" + fractionalNumber + ")\n" +
    "With user's default currency: " + currencyDefault + "\n" +
    "Formatted US Dollar: " + currencyUSD + "\n" +
    "Formatted Euro (fr-FR defaults): " + currencyEuroFR;

Use a culturally appropriate calendar


The calendar differs across regions and languages. The Gregorian calendar is not the default for every region. Users
in some regions may choose alternate calendars, such as the Japanese era calendar or Arabic lunar calendars.
Dates and times on the calendar are also sensitive to different time zones and daylight saving time.
Use the standard date and time picker controls to allow users to choose a date, to ensure that the preferred
calendar format is used. For more complex scenarios, where working directly with operations on calendar dates
may be required, Windows.Globalization provides a Calendar class that gives an appropriate calendar
representation for the given culture, region, and calendar type.
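For example, here is a brief C# sketch that asks for a Japanese-calendar representation; the language tag and
identifiers are illustrative choices, not requirements:

C#

// Create a calendar for an explicit language list, calendar system, and clock.
var cal = new Windows.Globalization.Calendar(
    new[] { "ja" },
    Windows.Globalization.CalendarIdentifiers.Japanese,
    Windows.Globalization.ClockIdentifiers.TwelveHour);

// Set it to the current moment and read culturally appropriate parts.
cal.SetToNow();
var era = cal.EraAsString();
var month = cal.MonthAsString();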

Format phone numbers appropriately


Phone numbers are formatted differently across regions. The number of digits, how the digits are grouped, and the
significance of certain parts of the phone number vary from one country to the next. Starting in Windows 10,
version 1607, you can use PhoneNumberFormatting to format phone numbers appropriately for the current
region.
PhoneNumberInfo parses a string of digits and allows you to determine if the digits are a valid phone number in
the current region, compare two numbers for equality, and to extract the different functional parts of the phone
number, such as country code or geographical area code.
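A short C# sketch of parsing and comparing numbers this way; the digits are placeholders, and the parse result
should be checked before the parsed number is used:

C#

using Windows.Globalization.PhoneNumberFormatting;

// Parse a string of digits in the context of a region.
PhoneNumberInfo parsed;
var parseResult = PhoneNumberInfo.TryParse("(425) 555-0100", "US", out parsed);

if (parseResult == PhoneNumberParseResult.Valid)
{
    // Compare two numbers for equality and extract functional parts.
    var match = parsed.CheckNumberMatch(new PhoneNumberInfo("+1 425 555 0100"));
    var regionCode = parsed.GetGeographicRegionCode();
}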
PhoneNumberFormatter formats a string of digits or a PhoneNumberInfo for display, even when the string of
digits represents a partial phone number. (You can use this partial number formatting to format a number as a
user is entering the number.)
The code below shows how to use PhoneNumberFormatter to format a phone number as it is being entered. Each
time text changes in a TextBox named gradualInput, the contents of the text box are formatted using the current
default region and displayed in a TextBlock named outBox. For demonstration purposes, the string is also
formatted using the region for New Zealand, and displayed in a TextBlock named NZOutBox.

using Windows.Globalization.PhoneNumberFormatting;

PhoneNumberFormatter currentFormatter, NZFormatter;

public MainPage()
{
    this.InitializeComponent();

    // Use the default formatter for the current region.
    currentFormatter = new PhoneNumberFormatter();

    // Create an explicit formatter for New Zealand.
    // Note that you must check the results of TryCreate before you use the formatter.
    PhoneNumberFormatter.TryCreate("NZ", out NZFormatter);
}

private void gradualInput_TextChanged(object sender, TextChangedEventArgs e)
{
    // Format for the default region into outBox.
    outBox.Text = currentFormatter.FormatPartialString(gradualInput.Text);

    // If the NZFormatter was created successfully, format the partial string for the NZOutBox.
    if (NZFormatter != null)
    {
        NZOutBox.Text = NZFormatter.FormatPartialString(gradualInput.Text);
    }
}

Respect the user's Language and Cultural Preferences


For scenarios where you provide different functionality based on the user's language, region, or cultural
preferences, Windows gives you a way to access those preferences, through
Windows.System.UserProfile.GlobalizationPreferences. When needed, use the GlobalizationPreferences
class to get the value of the user's current geographic region, preferred languages, preferred currencies, and so on.
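A few of those preferences, read in a minimal C# sketch (the values depend entirely on the user's settings):

C#

// Static properties on GlobalizationPreferences expose the user's settings.
var userLanguages = Windows.System.UserProfile.GlobalizationPreferences.Languages;
var userRegion = Windows.System.UserProfile.GlobalizationPreferences.HomeGeographicRegion;
var userCurrencies = Windows.System.UserProfile.GlobalizationPreferences.Currencies;
var firstDayOfWeek = Windows.System.UserProfile.GlobalizationPreferences.WeekStartsOn;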

Related topics
Plan for a global market
Guidelines for date and time controls
Reference
Windows.Globalization.Calendar
Windows.Globalization.DateTimeFormatting
Windows.Globalization.NumberFormatting
Windows.System.UserProfile.GlobalizationPreferences
Samples
Calendar details and math sample
Date and time formatting sample
Globalization preferences sample
Number formatting and parsing sample
Guidelines for App Help

Applications can be complex, and providing effective help for your users can greatly improve their experience. Not
all applications need to provide help for their users, and what sort of help should be provided can vary greatly,
depending on the application.
If you decide to provide help, follow these guidelines when creating it. Help that isn't helpful can be worse than no
help at all.

Intuitive Design
As useful as help content can be, your app cannot rely on it to provide a good experience for the user. If the user is
unable to immediately discover and use the critical functions of your app, the user will not use your app. No
amount or quality of help will change that first impression.
An intuitive and user-friendly design is the first step to writing useful help. Not only does it keep the user engaged
for long enough for them to use more advanced features, but it also provides them with knowledge of an app's
core functions, which they can build upon as they continue to use the app and learn.

General instructions
A user will not look for help content unless they already have a problem, so help needs to provide a quick and
effective answer to that problem. If help is not immediately useful, or if help is too complicated, then users are
more likely to ignore it.
All help, no matter what kind, should follow these principles:
Easy to understand: Help that confuses the user is worse than no help at all.
Straightforward: Users looking for help want clear answers presented directly to them.
Relevant: Users do not want to have to search for their specific issue. They want the most relevant help
presented straight to them (this is called "Contextual Help"), or they want an easily navigated interface.
Direct: When a user looks for help, they want to see help. If your app includes pages for reporting bugs,
giving feedback, viewing terms of service, or similar functions, it is fine if your help links to those pages. But
they should be included as an afterthought on the main help page, and not as items of equal or greater
importance.
Consistent: No matter the type, help is still a part of your app, and should be treated as any other part of
the UI. The same design principles of usability, accessibility, and style which are used throughout the rest of
your app should also be present in the help you offer.

Types of help
There are three primary categories of help content, each with varying strengths and suitable for different purposes.
Use any combination of them in your app, depending on your needs.
Instructional UI
Normally, users should be able to use all the core functions of your app without instruction. But sometimes, your
app will depend on use of a specific gesture, or there may be secondary features of your app which are not
immediately obvious. In this case, instructional UI should be used to educate users with instructions on how to
perform specific tasks.
See guidelines for instructional UI
In-app help
The standard method of presenting help is to display it within the application at the user's request. There are
several ways in which this can be implemented, such as in help pages or informative descriptions. This method is
ideal for general-purpose help, that directly answers a user's questions without complexity.
See guidelines for in-app help
External help
For detailed tutorials, advanced functions, or libraries of help topics too large to fit within your application, links to
external web pages are ideal. These links should be used sparingly if possible, as they remove the user from the
application experience.
See guidelines for external help
Instructional UI guidelines

In some circumstances it can be helpful to teach the user about functions in your app that might not be obvious to
them, such as specific touch interactions. In these cases, you need to present instructions to the user through the
user interface (UI), so that they can use those features they might have missed.

When to use instructional UI


Instructional UI has to be used carefully. When overused, it can be easily ignored or annoy the user, causing it to be
ineffective.
Instructional UI should be used to help the user discover important and non-obvious features of your app, such as
touch gestures or settings they may be interested in. It can also be used to inform users about new features or
changes in your app that they might have otherwise overlooked.
Unless your app is dependent on touch gestures, instructional UI should not be used to teach users the
fundamental features of your app.

Principles of writing instructional UI


Good instructional UI is relevant and educational to the user, and enhances the user experience. It should be:
Simple: Users don't want their experience to be interrupted with complicated information.
Memorable: Users don't want to see the same instructions every time they attempt a task, so instructions need
to be something they'll remember.
Immediately relevant: If the instructional UI doesn't teach a user about something that they immediately want
to do, they won't have a reason to pay attention to it.
Avoid overusing instructional UI, and be sure to choose the right topics. Do not teach:
Fundamental features: If a user needs instructions to use your app, consider making the app design more
intuitive.
Obvious features: If a user can figure out a feature on their own without instruction, then the instructional UI
will just get in the way.
Complex features: Instructional UI needs to be concise, and users interested in complex features are usually
willing to seek out instructions and don't need to be given them.
Avoid inconveniencing the user with your instructional UI. Do not:
Obscure important information: Instructional UI should never get in the way of other features of your app.
Force users to participate: Users should be able to ignore instructional UI and still progress through the app.
Display repeated information: Don't harass the user with instructional UI, even if they ignore it the first time.
Adding a setting to display instructional UI again is a better solution.

Examples of instructional UI
Here are a few instances in which instructional UI can help your users learn:
Helping users discover touch interactions. The following screen shot shows instructional UI teaching a
player how to use touch gestures in the game, Cut the Rope.
Making a great first impression. When Movie Moments launches for the first time, instructional UI
prompts the user to begin creating movies without obstructing their experience.

Guiding users to take the next step in a complicated task. In the Windows Mail app, a hint at the
bottom of the Inbox directs users to Settings to access older messages.
When the user clicks the message, the app's Settings flyout appears on the right side of the screen, allowing
the user to complete the task. These screen shots show the Mail app before and after a user clicks the
instructional UI message.

BEFORE AFTER

Related articles
Guidelines for app help
In-app help pages

Most of the time, it is best that help be displayed within the application and when the user chooses to view it.

When to use in-app help pages


In-app help should be the default method of displaying help for the user. It should be used for any help which is
simple, straightforward, and does not introduce new content to the user. Instructions, advice, and tips & tricks are
all suitable for in-app help.
Complex instructions or tutorials are not easy to reference quickly, and they take up large amounts of space.
Therefore, they should be hosted externally, and not incorporated into the app itself.
Users should not have to seek out help for basic instructions or to discover new features. If you need to have help
that educates users, use instructional UI.

Types of In-app help


In-app help can come in several forms, though they all follow the same general principles of design and usability.
Help Pages
Having a separate page or pages of help within your app is a quick and easy way of displaying useful instructions.
Be concise: A large library of help topics is unwieldy and unsuited for in-app help.
Be consistent: Make sure that users can reach your help pages the same way from any part of your app. They
should never have to search for it.
Users scan, not read: Because the help a user is looking for might be on the same page as other help topics,
make sure they can easily tell which one they need to focus on.
Popups
Popups allow for highly contextual help, displaying instructions and advice that is relevant to the specific task that
the user is attempting.
Focus on one issue: Space is even more restricted in a popup than a help page. Help popups need to refer
specifically to a single task to be effective.
Visibility is important: Because help popups can only be viewed from one location, make sure that they're
clearly visible to the user without being obstructive. If the user misses it, they might move away from the popup
in search of a help page.
Don't use too many resources: Help shouldn't lag or be slow-loading. Using videos or audio files or high
resolution images in popups is more likely to frustrate the user than it is to help them.
Descriptions
Sometimes, it can be useful to provide more information about a feature when a user inspects it. Descriptions are
similar to instructional UI, but the key difference is that instructional UI attempts to teach and educate the user
about features that they don't know about, whereas detailed descriptions enhance a user's understanding of app
features that they're already interested in.
Don't teach the basics: Assume that the user already knows the fundamentals of how to use the item being
described. Clarifying or offering further information is useful. Telling them what they already know is not.
Describe interesting interactions: One of the best uses for descriptions is to educate the user on how
features that they already know about can interact. This helps users learn more about things they already like to
use.
Stay out of the way: Much like instructional UI, descriptions need to avoid interfering with a user's enjoyment
of the app.

Related articles
Guidelines for app help
External help pages
3/6/2017 1 min to read

If your app requires detailed help for complex content, consider hosting these instructions on a web page.

When to use external help pages


External help pages are less convenient for general use or quick reference. They are suitable for help content that is
too extensive to be incorporated into the app itself, as well as for tutorials and instructions for advanced functions
of an app that won't be used by its general audience.
If your help content is brief or specific enough to be displayed in-app, you should do so. Do not direct users outside
of the app for help unless it is necessary.

Navigating external help pages


When a user is directed to an external help page, follow one of two scenarios:
They are linked directly to the page that corresponds with their known issue. This is contextual help, and should
be used when possible.
They are linked to a general help page, with a clear display of categories and subcategories to choose from.
Providing users with a way to search your help can be useful, but do not make this search the only way of
navigating your help. It can sometimes be difficult for users to describe their problems, which can make searching
difficult. Users should be able to quickly find pages relevant to their problems without needing to search.

Tutorials and detailed walkthroughs


External help pages are the ideal place to provide users with tutorials and walkthroughs, whether video or textual.
Tutorials should focus on more complicated ideas and advanced functions. Users shouldn't need a tutorial to
use your app.
Make sure that these tutorials are displayed differently from standard help instructions. Users who are looking
for advanced instructions are more eager to search for them than users who want straightforward solutions to
their problems.
Consider linking to tutorials from both a directory and from individual help pages that correspond to each
tutorial.

Related articles
Guidelines for app help
Design downloads for UWP apps
3/6/2017 1 min to read

This section contains design and UI-related downloads for UWP apps. For additional tools, such as Visual Studio,
see our main downloads page.

Design templates
PowerPoint
This deck has everything you need to quickly mock up wireframes for UWP apps, including controls and layouts.
Download the design templates for PowerPoint

Adobe Illustrator
These Adobe Illustrator templates provide controls and layouts for designing UWP apps.
Download the design templates for Adobe Illustrator

Adobe Photoshop
Controls and layouts for designing UWP apps in Adobe Photoshop.
Download the design templates for Adobe Photoshop
Tools
Tile and icon generator for Adobe Photoshop
This set of actions for Adobe Photoshop generates the 68 recommended tile and icon assets from just 7 files.
Download the tile and icon generator

Samples
Photo sharing app
This sample app demonstrates photo sharing with real-world social media. It demonstrates responsive design,
in-app purchase, Azure services, push notifications, and more.
Download the Photo sharing app sample
Read more about PhotoSharingApp

Hue Lights
This sample integrates Windows features with intelligent home automation. Specifically, it shows how you can
use Cortana and Bluetooth Low Energy (Bluetooth LE) to create an interactive experience with the Philips Hue
Lights (a Wi-Fi enabled lighting system).
Download the Hue Lights sample
Read more about the Hue Lights sample

Want more code? Check out the Windows sample page for a complete list of all our UWP app samples. Go to the
samples portal
App-to-app communication
3/6/2017 2 min to read

[ Updated for UWP apps on Windows 10. For Windows 8.x articles, see the archive ]
This section explains how to share data between Universal Windows Platform (UWP) apps, including how to use
the Share contract, copy and paste, and drag and drop.
The Share contract is one way users can quickly exchange data between apps. For example, a user might want to
share a webpage with their friends using a social networking app, or save a link in a notes app to refer to later.
Consider using a Share contract if your app receives content in scenarios that a user can quickly complete while in
the context of another app.
An app can support the Share feature in two ways. First, it can be a source app that provides content that the user
wants to share. Second, the app can be a target app that the user selects as the destination for shared content. An
app can also be both a source app and a target app. If you want your app to share content as a source app, you
need to decide what data formats your app can provide.
In addition to the Share contract, apps can also integrate classic techniques for transferring data, such as dragging
and dropping or copy and pasting. In addition to communication between UWP apps, these methods also support
sharing to and from desktop applications.

In this section
TOPIC DESCRIPTION

Share data This article explains how to support the Share contract in a
UWP app. The Share contract is an easy way to quickly share
data, such as text, links, photos, and videos, between apps.
For example, a user might want to share a webpage with their
friends using a social networking app, or save a link in a notes
app to refer to later.

Receive data This article explains how to receive content in your UWP app
shared from another app using Share contract. This Share
contract allows your app to be presented as an option when
the user invokes Share.

Copy and paste This article explains how to support copy and paste in UWP
apps using the clipboard. Copy and paste is the classic way to
exchange data either between apps, or within an app, and
almost every app can support clipboard operations to some
degree.

Drag and drop This article explains how to add dragging and dropping in
your UWP app. Drag and drop is a classic, natural way of
interacting with content such as images and files. Once
implemented, drag and drop works seamlessly in all directions,
including app-to-app, app-to-desktop, and desktop-to-app.

See also
Develop UWP apps
Share data
3/6/2017 3 min to read

[ Updated for UWP apps on Windows 10. For Windows 8.x articles, see the archive ]
This article explains how to support the Share contract in a Universal Windows Platform (UWP) app. The Share
contract is an easy way to quickly share data, such as text, links, photos, and videos, between apps. For example, a
user might want to share a webpage with their friends using a social networking app, or save a link in a notes app
to refer to later.

Set up an event handler


Add a DataRequested event handler to be called whenever a user invokes share. This can occur either when the
user taps a control in your app (such as a button or app bar command) or automatically in a specific scenario (if the
user finishes a level and gets a high score, for example).

DataTransferManager dataTransferManager = DataTransferManager.GetForCurrentView();
dataTransferManager.DataRequested += DataTransferManager_DataRequested;

When a DataRequested event occurs, your app receives a DataRequest object. This contains a DataPackage
that you can use to provide the content that the user wants to share. You must provide a title and data to share. A
description is optional, but recommended.

DataRequest request = args.Request;
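For orientation, the statement above normally appears inside the handler registered earlier. A minimal sketch of
the handler's shape, using the title and text values from the sections that follow:

private void DataTransferManager_DataRequested(DataTransferManager sender, DataRequestedEventArgs args)
{
    // Fill in the DataPackage with a title (required) and the content to share.
    DataRequest request = args.Request;
    request.Data.Properties.Title = "Share Example";
    request.Data.SetText("Hello world!");
}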

Choose data
You can share various types of data, including:
Plain text
Uniform Resource Identifiers (URIs)
HTML
Formatted text
Bitmaps
Files
Custom developer-defined data
The DataPackage object can contain one or more of these formats, in any combination. The following example
demonstrates sharing text.

request.Data.SetText("Hello world!");

Set properties
When you package data for sharing, you can supply a variety of properties that provide additional information
about the content being shared. These properties help target apps improve the user experience. For example, a
description helps when the user is sharing content with more than one app. Adding a thumbnail when sharing an
image or a link to a web page provides a visual reference to the user. For more information, see
DataPackagePropertySet.
All properties except the title are optional. The title property is mandatory and must be set.

request.Data.Properties.Title = "Share Example";
request.Data.Properties.Description = "A demonstration on how to share";

Launch the share UI


A UI for sharing is provided by the system. To launch it, call the ShowShareUI method.

DataTransferManager.ShowShareUI();

Handle errors
In most cases, sharing content is a straightforward process. However, there's always a chance that something
unexpected could happen. For example, the app might require the user to select content for sharing but the user
didn't select any. To handle these situations, use the FailWithDisplayText method, which will display a message
to the user if something goes wrong.
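For example, if the user invoked share without selecting anything, you might fail the request with a short
message. A sketch; the flag and message text here are hypothetical:

// Hypothetical check; replace with your app's own selection logic.
if (!isContentSelected)
{
    request.FailWithDisplayText("Select the content you want to share and try again.");
}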

Delay share with delegates


Sometimes, it might not make sense to prepare the data that the user wants to share right away. For example, if
your app supports sending a large image file in several different possible formats, it's inefficient to create all those
images before the user makes their selection.
To solve this problem, a DataPackage can contain a delegate, a function that is called when the receiving app
requests data. We recommend using a delegate any time that the data a user wants to share is resource-intensive.

async void OnDeferredImageRequestedHandler(DataProviderRequest request)
{
    // Provide updated bitmap data using delayed rendering
    if (this.imageStream != null)
    {
        DataProviderDeferral deferral = request.GetDeferral();
        InMemoryRandomAccessStream inMemoryStream = new InMemoryRandomAccessStream();

        // Decode the image.
        BitmapDecoder imageDecoder = await BitmapDecoder.CreateAsync(this.imageStream);

        // Re-encode the image at 50% width and height.
        BitmapEncoder imageEncoder = await BitmapEncoder.CreateForTranscodingAsync(inMemoryStream, imageDecoder);
        imageEncoder.BitmapTransform.ScaledWidth = (uint)(imageDecoder.OrientedPixelWidth * 0.5);
        imageEncoder.BitmapTransform.ScaledHeight = (uint)(imageDecoder.OrientedPixelHeight * 0.5);
        await imageEncoder.FlushAsync();

        request.SetData(RandomAccessStreamReference.CreateFromStream(inMemoryStream));
        deferral.Complete();
    }
}
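The delegate only runs when a target app actually requests the data. To wire it up, register the delegate for a
specific format from within your DataRequested handler by calling SetDataProvider. A minimal sketch, assuming
the handler above:

// Register the delegate for the Bitmap format; the system calls
// OnDeferredImageRequestedHandler only when a target app asks for the bitmap.
request.Data.SetDataProvider(StandardDataFormats.Bitmap, OnDeferredImageRequestedHandler);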

See also
App-to-app communication
Receive data
DataPackage
DataPackagePropertySet
DataRequest
DataRequested
FailWithDisplayText
ShowShareUI
Receive data
3/6/2017 5 min to read

[ Updated for UWP apps on Windows 10. For Windows 8.x articles, see the archive ]
This article explains how to receive content in your Universal Windows Platform (UWP) app shared from another
app by using the Share contract. The Share contract allows your app to be presented as an option when the user
invokes Share.

Declare your app as a share target


The system displays a list of possible target apps when a user invokes Share. In order to appear on the list, your
app needs to declare that it supports the Share contract. This lets the system know that your app is available to
receive content.
1. Open the manifest file. It should be called something like package.appxmanifest.
2. Open the Declarations tab.
3. Choose Share Target from the Available Declarations list, and then select Add.

Choose file types and formats


Next, decide what file types and data formats you support. The Share APIs support several standard formats, such
as Text, HTML, and Bitmap. You can also specify custom file types and data formats. If you do, remember that
source apps have to know what those types and formats are; otherwise, those apps can't use the formats to share
data.
Only register for formats that your app can handle. Only target apps that support the data being shared appear
when the user invokes Share.
To set file types:
1. Open the manifest file. It should be called something like package.appxmanifest.
2. In the Supported File Types section of the Declarations page, select Add New.
3. Type the file name extension that you want to support, for example, ".docx." You need to include the period. If
you want to support all file types, select the SupportsAnyFileType check box.
To set data formats:
1. Open the manifest file.
2. Open the Data Formats section of the Declarations page, and then select Add New.
3. Type the name of the data format you support, for example, "Text."
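These designer edits are stored in the package manifest as XML. For reference, the resulting entry looks roughly
like the following sketch, using the ".docx" file type and "Text" format from the steps above:

<uap:Extension Category="windows.shareTarget">
  <uap:ShareTarget>
    <uap:SupportedFileTypes>
      <uap:FileType>.docx</uap:FileType>
    </uap:SupportedFileTypes>
    <uap:DataFormat>Text</uap:DataFormat>
  </uap:ShareTarget>
</uap:Extension>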

Handle share activation


When a user selects your app (usually by selecting it from a list of available target apps in the share UI), an
OnShareTargetActivated event is raised. Your app needs to handle this event to process the data that the user
wants to share.
protected override async void OnShareTargetActivated(ShareTargetActivatedEventArgs args)
{
    // Code to handle activation goes here.
}

The data that the user wants to share is contained in a ShareOperation object. You can use this object to check the
format of the data it contains.

ShareOperation shareOperation = args.ShareOperation;

if (shareOperation.Data.Contains(StandardDataFormats.Text))
{
    string text = await shareOperation.Data.GetTextAsync();

    // To output the text from this example, you need a TextBlock control
    // with a name of "sharedContent".
    sharedContent.Text = "Text: " + text;
}

Report sharing status


In some cases, it can take time for your app to process the data it wants to share. Examples include users sharing
collections of files or images. These items are larger than a simple text string, so they take longer to process. In
these cases, call the ReportStarted method to let the system know that your app has started processing the
shared content.

shareOperation.ReportStarted();

After calling ReportStarted, don't expect any more user interaction with your app. As a result, you shouldn't call it
unless your app is at a point where it can be dismissed by the user.
With an extended share, it's possible that the user might dismiss the source app before your app has all the data
from the DataPackage object. As a result, we recommend that you let the system know when your app has
acquired the data it needs. This way, the system can suspend or terminate the source app as necessary.

shareOperation.ReportDataRetrieved();

If something goes wrong, call ReportError to send an error message to the system. The user will see the message
when they check on the status of the share. At that point, your app is shut down and the share is ended. The user
will need to start again to share the content to your app. Depending on your scenario, you may decide that a
particular error isn't serious enough to end the share operation. In that case, you can choose to not call
ReportError and to continue with the share.

shareOperation.ReportError("Could not reach the server! Try again later.");

Finally, when your app has successfully processed the shared content, you should call ReportCompleted to let the
system know.

shareOperation.ReportCompleted();

When you use these methods, you usually call them in the order just described, and you don't call them more than
once. However, there are times when a target app can call ReportDataRetrieved before ReportStarted. For
example, the app might retrieve the data as part of a task in the activation handler, but not call ReportStarted until
the user selects a Share button.
Return a QuickLink if sharing was successful
When a user selects your app to receive content, we recommend that you create a QuickLink. A QuickLink is like
a shortcut that makes it easier for users to share information with your app. For example, you could create a
QuickLink that opens a new mail message pre-configured with a friend's email address.
A QuickLink must have a title, an icon, and an Id. The title (like "Email Mom") and icon appear when the user taps
the Share charm. The Id is what your app uses to access any custom information, such as an email address or login
credentials. When your app creates a QuickLink, the app returns the QuickLink to the system by calling
ReportCompleted.
A QuickLink does not actually store data. Instead, it contains an identifier that, when selected, is sent to your app.
Your app is responsible for storing the Id of the QuickLink and the corresponding user data. When the user taps
the QuickLink, you can get its Id through the QuickLinkId property.

async void ReportCompleted(ShareOperation shareOperation, string quickLinkId, string quickLinkTitle)
{
    QuickLink quickLinkInfo = new QuickLink
    {
        Id = quickLinkId,
        Title = quickLinkTitle,

        // For quicklinks, the supported FileTypes and DataFormats are set
        // independently from the manifest
        SupportedFileTypes = { "*" },
        SupportedDataFormats = { StandardDataFormats.Text, StandardDataFormats.Uri,
                                 StandardDataFormats.Bitmap, StandardDataFormats.StorageItems }
    };

    StorageFile iconFile = await Windows.ApplicationModel.Package.Current.InstalledLocation.CreateFileAsync(
        "assets\\user.png", CreationCollisionOption.OpenIfExists);
    quickLinkInfo.Thumbnail = RandomAccessStreamReference.CreateFromFile(iconFile);
    shareOperation.ReportCompleted(quickLinkInfo);
}

See also
App-to-app communication
Share data
OnShareTargetActivated
ReportStarted
ReportError
ReportCompleted
ReportDataRetrieved
QuickLink
QuickLinkId
Copy and paste
3/6/2017 2 min to read

[ Updated for UWP apps on Windows 10. For Windows 8.x articles, see the archive ]
This article explains how to support copy and paste in Universal Windows Platform (UWP) apps using the
clipboard. Copy and paste is the classic way to exchange data either between apps, or within an app, and almost
every app can support clipboard operations to some degree.

Check for built-in clipboard support


In many cases, you do not need to write code to support clipboard operations. Many of the default XAML controls
you can use to create apps already support clipboard operations.
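For example, a standard TextBox already provides copy, cut, and paste through its context menu and the usual
keyboard shortcuts, with no additional code:

<TextBox x:Name="NotesBox" AcceptsReturn="True"/>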

Get set up
First, include the Windows.ApplicationModel.DataTransfer namespace in your app. Then, add an instance of
the DataPackage object. This object contains both the data the user wants to copy and any properties (such as a
description) that you want to include.

DataPackage dataPackage = new DataPackage();

Copy and cut


Copy and cut (also referred to as move) work almost exactly the same. Choose which operation you want by using
the RequestedOperation property.

// copy
dataPackage.RequestedOperation = DataPackageOperation.Copy;
// or cut
dataPackage.RequestedOperation = DataPackageOperation.Move;

Next, you can add the data that a user has selected to the DataPackage object. If this data is supported by the
DataPackage class, you can use one of the corresponding methods in the DataPackage object. Here's how to
add text:

dataPackage.SetText("Hello World!");

The last step is to add the DataPackage to the clipboard by calling the static SetContent method.

Clipboard.SetContent(dataPackage);

Paste
To get the contents of the clipboard, call the static GetContent method. This method returns a DataPackageView
that contains the content. This object is almost identical to a DataPackage object, except that its contents are
read-only. With that object, you can use either the AvailableFormats or the Contains method to identify what
formats are available. Then, you can call the corresponding DataPackageView method to get the data.

DataPackageView dataPackageView = Clipboard.GetContent();

if (dataPackageView.Contains(StandardDataFormats.Text))
{
    string text = await dataPackageView.GetTextAsync();
    // To output the text from this example, you need a TextBlock control
    TextOutput.Text = "Clipboard now contains: " + text;
}

Track changes to the clipboard


In addition to copy and paste commands, you may also want to track clipboard changes. Do this by handling the
clipboard's ContentChanged event.

Clipboard.ContentChanged += async (s, e) =>
{
    DataPackageView dataPackageView = Clipboard.GetContent();
    if (dataPackageView.Contains(StandardDataFormats.Text))
    {
        string text = await dataPackageView.GetTextAsync();
        // To output the text from this example, you need a TextBlock control
        TextOutput.Text = "Clipboard now contains: " + text;
    }
};

See also
App-to-app communication
DataTransfer
DataPackage
DataPackageView
DataPackagePropertySet
DataRequest
DataRequested
FailWithDisplayText
ShowShareUI
RequestedOperation
ControlsList
SetContent
GetContent
AvailableFormats
Contains
ContentChanged
Drag and drop
3/6/2017 3 min to read

[ Updated for UWP apps on Windows 10. For Windows 8.x articles, see the archive ]
This article explains how to add dragging and dropping in your Universal Windows Platform (UWP) app. Drag and
drop is a classic, natural way of interacting with content such as images and files. Once implemented, drag and
drop works seamlessly in all directions, including app-to-app, app-to-desktop, and desktop-to-app.

Set valid areas


Use the AllowDrop and CanDrag properties to designate the areas of your app valid for dragging and dropping.
The following markup shows how to set a specific area of the app as valid for dropping by using the AllowDrop in
XAML. If a user tries to drop somewhere else, the system won't let them. If you want users to be able to drop items
anywhere on your app, set the entire background as a drop target.

<Grid AllowDrop="True" DragOver="Grid_DragOver" Drop="Grid_Drop"
      Background="LightBlue" Margin="10,10,10,353">
    <TextBlock>Drop anywhere in the blue area</TextBlock>
</Grid>

With dragging, you'll usually want to be specific about what's draggable. Users will want to drag certain items, such
as pictures, not everything in your app. Here's how to set CanDrag by using XAML.

<Image x:Name="Image" CanDrag="True" Margin="10,292,10,0" Height="338"></Image>

You don't need to do any other work to allow dragging, unless you want to customize the UI (which is covered later
in this article). Dropping requires a few more steps.

Handle the DragOver event


The DragOver event fires when a user has dragged an item over your app, but not yet dropped it. In this handler,
you need to specify what kind of operations your app supports by using the AcceptedOperation property. Copy
is the most common.

private void Grid_DragOver(object sender, DragEventArgs e)
{
    e.AcceptedOperation = DataPackageOperation.Copy;
}

Process the Drop event


The Drop event occurs when the user releases items in a valid drop area. Process them by using the DataView
property.
For simplicity in the example below, we'll assume the user dropped a single photo and access it directly. In reality,
users can drop multiple items of varying formats simultaneously. Your app should handle this possibility by
checking what types of files were dropped and processing them accordingly, and by notifying the user if they're
trying to do something your app doesn't support.
private async void Grid_Drop(object sender, DragEventArgs e)
{
    if (e.DataView.Contains(StandardDataFormats.StorageItems))
    {
        var items = await e.DataView.GetStorageItemsAsync();
        if (items.Count > 0)
        {
            var storageFile = items[0] as StorageFile;
            var bitmapImage = new BitmapImage();
            bitmapImage.SetSource(await storageFile.OpenAsync(FileAccessMode.Read));
            // Set the image on the main page to the dropped image
            Image.Source = bitmapImage;
        }
    }
}

Customize the UI
The system provides a default UI for dragging and dropping. However, you can also choose to customize various
parts of the UI by setting custom captions and glyphs, or by opting not to show a UI at all. To customize the UI, use
the DragEventArgs.DragUIOverride property.

private void Grid_DragOverCustomized(object sender, DragEventArgs e)
{
    e.AcceptedOperation = DataPackageOperation.Copy;
    e.DragUIOverride.Caption = "Custom text here"; // Sets custom UI text
    e.DragUIOverride.SetContentFromBitmapImage(null); // Sets a custom glyph
    e.DragUIOverride.IsCaptionVisible = true; // Sets if the caption is visible
    e.DragUIOverride.IsContentVisible = true; // Sets if the dragged content is visible
    e.DragUIOverride.IsGlyphVisible = true; // Sets if the glyph is visible
}

Open a context menu on an item you can drag with touch


When using touch, dragging a UIElement and opening its context menu share similar touch gestures; each begins
with a press and hold. Here's how the system disambiguates between the two actions for elements in your app that
support both:
If a user presses and holds an item and begins dragging it within 500 milliseconds, the item is dragged and the
context menu is not shown.
If the user presses and holds but does not drag within 500 milliseconds, the context menu is opened.
After the context menu is open, if the user tries to drag the item (without lifting their finger), the context menu is
dismissed and the drag will start.

Designate an item in a ListView or GridView as a folder


You can specify a ListViewItem or GridViewItem as a folder. This is particularly useful for TreeView and File
Explorer scenarios. To do so, explicitly set the AllowDrop property to True on that item.
The system will automatically show the appropriate animations for dropping into a folder versus a non-folder item.
Your app code must continue to handle the Drop event on the folder item (as well as on the non-folder item) in
order to update the data source and add the dropped item to the target folder.
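A sketch of the idea in XAML; the item content and the DragOver/Drop handler names here are hypothetical:

<ListView>
    <!-- Folder item: AllowDrop marks it as a drop target, so the system shows
         the folder drop animation and raises Drop on this item. -->
    <ListViewItem AllowDrop="True" DragOver="FolderItem_DragOver" Drop="FolderItem_Drop">
        Documents
    </ListViewItem>
    <!-- Regular, non-folder item -->
    <ListViewItem>Readme.txt</ListViewItem>
</ListView>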

See also
App-to-app communication
AllowDrop
CanDrag
DragOver
AcceptedOperation
DataView
DragUIOverride
Drop
IsDragSource
Audio, video, and camera
3/6/2017 1 min to read

[ Updated for UWP apps on Windows 10. For Windows 8.x articles, see the archive ]
This section provides information about creating Universal Windows Platform (UWP) apps that capture, play back,
or edit photos, videos, or audio.

TOPIC DESCRIPTION

Camera Lists the camera features that are available for UWP apps, and
links to the how-to articles that show how to use them.

Media playback Provides information about creating UWP apps that use audio
and video playback.

Detect faces in images or videos Shows you how to use the FaceTracker for tracking faces over
time in a sequence of video frames.

Media compositions and editing Shows you how to use the APIs in the
Windows.Media.Editing namespace to quickly develop apps
that enable the users to create media compositions from
audio and video source files.

Custom video effects Describes how to create a Windows Runtime component that
implements the IBasicVideoEffect interface to allow you to
create custom effects for video streams.

Custom audio effects Describes how to create a Windows Runtime component that
implements the IBasicAudioEffect interface to allow you to
create custom effects for audio streams.

Create, edit, and save bitmap images Explains how to load and save image files by using the
SoftwareBitmap object to represent bitmap images.

Audio device information properties Lists the device information properties related to audio
devices.

Transcode media files Shows you how to use the Windows.Media.Transcoding APIs
to transcode video files from one format to another.

Process media files in the background Shows you how to use the MediaProcessingTrigger and a
background task to process media files in the background.

Audio graphs Shows you how to use the APIs in the Windows.Media.Audio
namespace to create audio graphs for audio routing, mixing,
and processing scenarios.

MIDI Shows you how to enumerate MIDI (Musical Instrument
Digital Interface) devices and send and receive MIDI messages
from a UWP app.

Import media from a device Describes how to import media from a device, including
searching for available media sources, importing files such as
videos, photos, and sidecar files, and deleting the imported
files from the source device.

Camera-independent Flashlight Shows you how to access and use a device's lamp, if one is
present. Lamp functionality is managed separately from the
device's camera and camera flash functionality.

Supported codecs Lists the audio, video, and image codec and format support
for UWP apps.

See also
Develop UWP apps
Camera
3/6/2017 3 min to read

This section provides guidance for creating Universal Windows Platform (UWP) apps that use the camera or
microphone to capture photos, video, or audio.

Use the Windows built-in camera UI


TOPIC DESCRIPTION

Capture photos and video with Windows built-in camera UI Shows how to use the CameraCaptureUI class to capture
photos or videos using the camera UI built into Windows. If
you simply want to enable the user to capture a photo or
video and return the result to your app, this is the quickest
and easiest way to do it.

Basic MediaCapture tasks


TOPIC DESCRIPTION

Display the camera preview Shows how to quickly display the camera preview stream
within a XAML page in a UWP app.

Basic photo, video, and audio capture with MediaCapture Shows the simplest way to capture photos and video using
the MediaCapture class. The MediaCapture class exposes a
robust set of APIs that provide low-level control over the
capture pipeline and enable advanced capture scenarios, but
this article is intended to help you add basic media capture
to your app quickly and easily.

Camera UI features for mobile devices Shows you how to take advantage of special camera UI
features that are only present on mobile devices.

Advanced MediaCapture tasks


TOPIC DESCRIPTION

Handle device and screen orientation with MediaCapture Shows you how to handle device orientation when capturing
photos and videos by using a helper class.

Discover and select camera capabilities with camera profiles Shows how to use camera profiles to discover and manage
the capabilities of different video capture devices. This
includes tasks such as selecting profiles that support specific
resolutions or frame rates, profiles that support simultaneous
access to multiple cameras, and profiles that support HDR.

Set format, resolution, and frame rate for MediaCapture Shows you how to use the IMediaEncodingProperties
interface to set the resolution and frame rate of the camera
preview stream and captured photos and video. It also shows
how to ensure that the aspect ratio of the preview stream
matches that of the captured media.

HDR and low-light photo capture Shows you how to use the AdvancedPhotoCapture class to
capture High Dynamic Range (HDR) and low-light photos.

Manual camera controls for photo and video capture Shows you how to use manual device controls to enable
enhanced photo and video capture scenarios including
optical image stabilization and smooth zoom.

Manual camera controls for video capture Shows you how to use manual device controls to enable
enhanced video capture scenarios including HDR video and
exposure priority.

Video stabilization effect for video capture Shows you how to use the video stabilization effect.

Scene analysis for MediaCapture Shows you how to use the SceneAnalysisEffect and the
FaceDetectionEffect to analyze the content of the media
capture preview stream.

Capture a photo sequence with VariablePhotoSequence Shows you how to capture a variable photo sequence, which
allows you to capture multiple frames of images in rapid
succession and configure each frame to use different focus,
flash, ISO, exposure, and exposure compensation settings.

Process media frames with MediaFrameReader Shows you how to use a MediaFrameReader with
MediaCapture to get media frames from one or more
available sources, including color, depth, and infrared
cameras, audio devices, or even custom frame sources such
as those that produce skeletal tracking frames. This feature is
designed to be used by apps that perform real-time
processing of media frames, such as augmented reality and
depth-aware camera apps.

Get a preview frame Shows you how to get a single preview frame from the media
capture preview stream.

UWP app samples for camera


Camera face detection sample
Camera preview frame sample
Camera HDR sample
Camera manual controls sample
Camera profile sample
Camera resolution sample
Camera starter kit
Camera video stabilization sample

Related topics
Audio, video, and camera
Capture photos and video with Windows built-in
camera UI
3/6/2017 5 min to read

[ Updated for UWP apps on Windows 10. For Windows 8.x articles, see the archive ]
This article describes how to use the CameraCaptureUI class to capture photos or videos using the camera UI built
into Windows. This feature is easy to use and allows your app to get a user-captured photo or video with just a few
lines of code.
If you want to provide your own camera UI or if your scenario requires more robust, low-level control of the
capture operation, you should use the MediaCapture object and implement your own capture experience. For
more information, see Basic photo, video, and audio capture with MediaCapture.

NOTE
You should not specify the webcam or microphone capabilities in your app manifest file if you are using CameraCaptureUI.
If you do so, your app will be displayed in the device's camera privacy settings, but even if the user denies camera access to
your app, it will not prevent the CameraCaptureUI from capturing media. This is because the Windows built-in camera app is
a trusted first-party app that requires the user to initiate photo, audio, and video capture with a button press. Your app may
fail WACK (Windows Application Certification Kit) certification when submitted to the Store if you specify the webcam or
microphone capabilities when using CameraCaptureUI. You must specify the webcam or microphone capabilities in your app
manifest file if you are using MediaCapture to capture audio, photos, or video programmatically.

Capture a photo with CameraCaptureUI


To use the camera capture UI, include the Windows.Media.Capture namespace in your project. To do file
operations with the returned image file, include Windows.Storage.

using Windows.Media.Capture;
using Windows.Storage;

To capture a photo, create a new CameraCaptureUI object. Using the object's PhotoSettings property you can
specify properties for the returned photo such as the image format of the photo. By default, the camera capture UI
allows the user to crop the photo before it is returned, although this can be disabled with the AllowCropping
property. This example sets the CroppedSizeInPixels to request that the returned image be 200 x 200 in pixels.

NOTE
Image cropping in the CameraCaptureUI is not supported for devices in the Mobile device family. The value of the
AllowCropping property is ignored when your app is running on these devices.

Call CaptureFileAsync and specify CameraCaptureUIMode.Photo to specify that a photo should be captured.
The method returns a StorageFile instance containing the image if the capture is successful. If the user cancels the
capture, the returned object is null.
CameraCaptureUI captureUI = new CameraCaptureUI();
captureUI.PhotoSettings.Format = CameraCaptureUIPhotoFormat.Jpeg;
captureUI.PhotoSettings.CroppedSizeInPixels = new Size(200, 200);

StorageFile photo = await captureUI.CaptureFileAsync(CameraCaptureUIMode.Photo);

if (photo == null)
{
    // User cancelled photo capture
    return;
}

The StorageFile containing the captured photo is given a dynamically generated name and saved in your app's
local folder. To better organize your captured photos, you may want to move the file to a different folder.

StorageFolder destinationFolder =
    await ApplicationData.Current.LocalFolder.CreateFolderAsync("ProfilePhotoFolder",
        CreationCollisionOption.OpenIfExists);

await photo.CopyAsync(destinationFolder, "ProfilePhoto.jpg", NameCollisionOption.ReplaceExisting);
await photo.DeleteAsync();

To use your photo in your app, you may want to create a SoftwareBitmap object that can be used with several
different Universal Windows app features.
First you should include the Windows.Graphics.Imaging namespace in your project.

using Windows.Storage.Streams;
using Windows.Graphics.Imaging;

Call OpenAsync to get a stream from the image file. Call BitmapDecoder.CreateAsync to get a bitmap decoder
for the stream. Then call GetSoftwareBitmap to get a SoftwareBitmap representation of the image.

IRandomAccessStream stream = await photo.OpenAsync(FileAccessMode.Read);
BitmapDecoder decoder = await BitmapDecoder.CreateAsync(stream);
SoftwareBitmap softwareBitmap = await decoder.GetSoftwareBitmapAsync();

To display the image in your UI, declare an Image control in your XAML page.

<Image x:Name="imageControl" Width="200" Height="200"/>

To use the software bitmap in your XAML page, include the Windows.UI.Xaml.Media.Imaging namespace
in your project.

using Windows.UI.Xaml.Media.Imaging;

The Image control requires that the image source be in BGRA8 format with premultiplied alpha or no alpha, so call
the static method SoftwareBitmap.Convert to create a new software bitmap with the desired format. Next, create
a new SoftwareBitmapSource object and call SetBitmapAsync to assign the software bitmap to the source.
Finally, set the Image control's Source property to display the captured photo in the UI.
SoftwareBitmap softwareBitmapBGR8 = SoftwareBitmap.Convert(softwareBitmap,
    BitmapPixelFormat.Bgra8,
    BitmapAlphaMode.Premultiplied);

SoftwareBitmapSource bitmapSource = new SoftwareBitmapSource();
await bitmapSource.SetBitmapAsync(softwareBitmapBGR8);

imageControl.Source = bitmapSource;

Capture a video with CameraCaptureUI


To capture a video, create a new CameraCaptureUI object. Using the object's VideoSettings property you can
specify properties for the returned video such as the format of the video.
Call CaptureFileAsync and specify CameraCaptureUIMode.Video to specify that a video should be captured. The method returns a
StorageFile instance containing the video if the capture is successful. If the user cancels the capture, the returned
object is null.

CameraCaptureUI captureUI = new CameraCaptureUI();
captureUI.VideoSettings.Format = CameraCaptureUIVideoFormat.Mp4;

StorageFile videoFile = await captureUI.CaptureFileAsync(CameraCaptureUIMode.Video);

if (videoFile == null)
{
    // User cancelled video capture
    return;
}

What you do with the captured video file depends on the scenario for your app. The rest of this article shows you
how to quickly create a media composition from one or more captured videos and show it in your UI.
First, add a MediaElement control in which the video composition will be displayed to your XAML page.

<MediaElement x:Name="mediaElement" Width="320" Height="240" AreTransportControlsEnabled="True"/>

Add the Windows.Media.Editing and Windows.Media.Core namespaces to your project.

using Windows.Media.Editing;
using Windows.Media.Core;

Declare member variables for a MediaComposition object and a MediaStreamSource that you want to stay in
scope for the lifetime of the page.

MediaComposition mediaComposition;
MediaStreamSource mediaStreamSource;

Before you capture any videos, create a new instance of the MediaComposition class. You only need to do this once.

mediaComposition = new MediaComposition();

With the video file returned from the camera capture UI, create a new MediaClip by calling
MediaClip.CreateFromFileAsync. Add the media clip to the composition's Clips collection.
Call GeneratePreviewMediaStreamSource to create the MediaStreamSource object from the composition.

MediaClip mediaClip = await MediaClip.CreateFromFileAsync(videoFile);
mediaComposition.Clips.Add(mediaClip);
mediaStreamSource = mediaComposition.GeneratePreviewMediaStreamSource(
    (int)mediaElement.ActualWidth,
    (int)mediaElement.ActualHeight);

Finally, set the stream source by using the media element's SetMediaStreamSource method to show the
composition in the UI.

mediaElement.SetMediaStreamSource(mediaStreamSource);

You can continue to capture video clips and add them to the composition. For more information on media
compositions, see Media compositions and editing.

NOTE
This article is for Windows 10 developers writing Universal Windows Platform (UWP) apps. If you're developing for Windows
8.x or Windows Phone 8.x, see the archived documentation.

Related topics
Camera
Basic photo, video, and audio capture with MediaCapture
CameraCaptureUI
Display the camera preview
3/6/2017 5 min to read

[ Updated for UWP apps on Windows 10. For Windows 8.x articles, see the archive ]
This article describes how to quickly display the camera preview stream within a XAML page in a Universal
Windows Platform (UWP) app. Creating an app that captures photos and videos using the camera requires you to
perform tasks like handling device and camera orientation or setting encoding options for the captured file. For
some app scenarios, you may want to just simply show the preview stream from the camera without worrying
about these other considerations. This article shows you how to do that with a minimum of code. Note that you
should always shut down the preview stream properly when you are done with it by following the steps below.
For information on writing a camera app that captures photos or videos, see Basic photo, video, and audio capture
with MediaCapture.

Add capability declarations to the app manifest


In order for your app to access a device's camera, you must declare that your app uses the webcam and
microphone device capabilities.
Add capabilities to the app manifest
1. In Microsoft Visual Studio, in Solution Explorer, open the designer for the application manifest by double-
clicking the package.appxmanifest item.
2. Select the Capabilities tab.
3. Check the box for Webcam and the box for Microphone.
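If you prefer to edit the manifest XML directly, the two checked boxes correspond to entries like this sketch:

<Capabilities>
  <DeviceCapability Name="webcam"/>
  <DeviceCapability Name="microphone"/>
</Capabilities>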

Add a CaptureElement to your page


Use a CaptureElement to display the preview stream within your XAML page.

<CaptureElement Name="PreviewControl" Stretch="Uniform"/>

Use MediaCapture to start the preview stream


The MediaCapture object is your app's interface to the device's camera. This class is a member of the
Windows.Media.Capture namespace. The example in this article also uses APIs from the
Windows.ApplicationModel and System.Threading.Tasks namespaces, in addition to those included by the
default project template.
Add using directives to include the following namespaces in your page's .cs file.

using Windows.Media.Capture;
using Windows.ApplicationModel;
using System.Threading.Tasks;
using Windows.System.Display;
using Windows.Graphics.Display;

Declare a class member variable for the MediaCapture object and a boolean to track whether the camera is
currently previewing.
MediaCapture _mediaCapture;
bool _isPreviewing;

Declare a variable of type DisplayRequest that will be used to make sure the display does not turn off while the
preview is running.

DisplayRequest _displayRequest = new DisplayRequest();

Create a new instance of the MediaCapture class and call InitializeAsync to initialize the capture device. This
method may fail (on devices that don't have a camera, for example), so you should call it from within a try block. An
UnauthorizedAccessException will be thrown when you attempt to initialize the camera if the user has disabled
camera access in the device's privacy settings. You will also see this exception during development if you have
neglected to add the proper capabilities to your app manifest.
Important On some device families, a user consent prompt is displayed to the user before your app is granted
access to the device's camera. For this reason, you must only call MediaCapture.InitializeAsync from the main
UI thread. Attempting to initialize the camera from another thread may result in initialization failure.
Connect the MediaCapture to the CaptureElement by setting the Source property. Start the preview by calling
StartPreviewAsync. Call RequestActive to make sure the device doesn't go to sleep while the preview is
running. Finally, set the DisplayInformation.AutoRotationPreferences property to Landscape to prevent the
UI and the CaptureElement from rotating when the user changes the device orientation. For more information on
handling device orientation changes, see Handle device orientation with MediaCapture.

private async Task StartPreviewAsync()
{
    try
    {
        _mediaCapture = new MediaCapture();
        await _mediaCapture.InitializeAsync();

        PreviewControl.Source = _mediaCapture;
        await _mediaCapture.StartPreviewAsync();
        _isPreviewing = true;

        _displayRequest.RequestActive();
        DisplayInformation.AutoRotationPreferences = DisplayOrientations.Landscape;
    }
    catch (UnauthorizedAccessException)
    {
        // This will be thrown if the user denied access to the camera in privacy settings
        System.Diagnostics.Debug.WriteLine("The app was denied access to the camera");
    }
    catch (Exception ex)
    {
        System.Diagnostics.Debug.WriteLine("MediaCapture initialization failed. {0}", ex.Message);
    }
}

Shut down the preview stream


When you are done using the preview stream, you should always shut down the stream and properly dispose of
the associated resources to ensure that the camera is available to other apps on the device. The required steps for
shutting down the preview stream are:
If the camera is currently previewing, call StopPreviewAsync to stop the preview stream. An exception will be
thrown if you call StopPreviewAsync while the preview is not running.
Set the Source property of the CaptureElement to null. Use CoreDispatcher.RunAsync to make sure this call
is executed on the UI thread.
Call the MediaCapture object's Dispose method to release the object. Again, use CoreDispatcher.RunAsync
to make sure this call is executed on the UI thread.
Set the MediaCapture member variable to null.
Call RequestRelease to allow the screen to turn off when inactive.

private async Task CleanupCameraAsync()
{
    if (_mediaCapture != null)
    {
        if (_isPreviewing)
        {
            await _mediaCapture.StopPreviewAsync();
        }

        await Dispatcher.RunAsync(CoreDispatcherPriority.Normal, () =>
        {
            PreviewControl.Source = null;
            if (_displayRequest != null)
            {
                _displayRequest.RequestRelease();
            }

            _mediaCapture.Dispose();
            _mediaCapture = null;
        });
    }
}
You should shut down the preview stream when the user navigates away from your page by overriding the
OnNavigatedFrom method.

protected async override void OnNavigatedFrom(NavigationEventArgs e)
{
    await CleanupCameraAsync();
}

You should also shut down the preview stream properly when your app is suspending. To do this, register a
handler for the Application.Suspending event in your page's constructor.

public MainPage()
{
    this.InitializeComponent();

    Application.Current.Suspending += Application_Suspending;
}

In the Suspending event handler, first check to make sure that the page is being displayed in the application's
Frame by comparing the page type to the CurrentSourcePageType property. If the page is not currently being
displayed, then the OnNavigatedFrom event should already have been raised and the preview stream shut down.
If the page is currently being displayed, get a SuspendingDeferral object from the event args passed into the
handler to make sure the system does not suspend your app until the preview stream has been shut down. After
shutting down the stream, call the deferral's Complete method to let the system continue suspending your app.
private async void Application_Suspending(object sender, SuspendingEventArgs e)
{
    // Handle global application events only if this page is active
    if (Frame.CurrentSourcePageType == typeof(MainPage))
    {
        var deferral = e.SuspendingOperation.GetDeferral();
        await CleanupCameraAsync();
        deferral.Complete();
    }
}

Related topics
Camera
Basic photo, video, and audio capture with MediaCapture
Get a preview frame
Basic photo, video, and audio capture with
MediaCapture
3/6/2017 9 min to read

[ Updated for UWP apps on Windows 10. For Windows 8.x articles, see the archive ]
This article shows the simplest way to capture photos and video using the MediaCapture class. The
MediaCapture class exposes a robust set of APIs that provide low-level control over the capture pipeline and
enable advanced capture scenarios, but this article is intended to help you add basic media capture to your app
quickly and easily. To learn about more of the features that MediaCapture provides, see Camera.
If you simply want to capture a photo or video and don't intend to add any additional media capture features,
or if you don't want to create your own camera UI, you may want to use the CameraCaptureUI class, which
allows you to simply launch the Windows built-in camera app and receive the photo or video file that was
captured. For more information, see Capture photos and video with Windows built-in camera UI.
The code in this article was adapted from the Camera starter kit sample. You can download the sample to see
the code used in context or to use the sample as a starting point for your own app.

Add capability declarations to the app manifest


In order for your app to access a device's camera, you must declare that your app uses the webcam and
microphone device capabilities. If you want to save captured photos and videos to the user's Pictures or
Videos library, you must also declare the picturesLibrary and videosLibrary capabilities.
To add capabilities to the app manifest
1. In Microsoft Visual Studio, in Solution Explorer, open the designer for the application manifest by double-
clicking the package.appxmanifest item.
2. Select the Capabilities tab.
3. Check the box for Webcam and the box for Microphone.
4. For access to the Pictures and Videos libraries, check the boxes for Pictures Library and Videos Library.
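In the manifest XML, the four checked boxes correspond roughly to the following sketch; note that the library
capabilities use the uap:Capability element, while camera and microphone use DeviceCapability:

<Capabilities>
  <uap:Capability Name="picturesLibrary"/>
  <uap:Capability Name="videosLibrary"/>
  <DeviceCapability Name="webcam"/>
  <DeviceCapability Name="microphone"/>
</Capabilities>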

Initialize the MediaCapture object


All of the capture methods described in this article require the first step of initializing the MediaCapture
object by calling the constructor and then calling InitializeAsync. Since the MediaCapture object will be
accessed from multiple places in your app, declare a class variable to hold the object. Implement a handler for
the MediaCapture object's Failed event to be notified if a capture operation fails.

MediaCapture _mediaCapture;
bool _isPreviewing;

_mediaCapture = new MediaCapture();
await _mediaCapture.InitializeAsync();
_mediaCapture.Failed += MediaCapture_Failed;

Set up the camera preview
It's possible to capture photos, videos, and audio using MediaCapture without showing the camera preview,
but typically you want to show the preview stream so that the user can see what's being captured. Also, a few
MediaCapture features require the preview stream to be running before they can be enabled, including auto
focus, auto exposure, and auto white balance. To see how to set up the camera preview, see Display the
camera preview.

Capture a photo to a SoftwareBitmap


The SoftwareBitmap class was introduced in Windows 10 to provide a common representation of images
across multiple features. If you want to capture a photo and then immediately use the captured image in your
app, such as displaying it in XAML, instead of capturing to a file, then you should capture to a
SoftwareBitmap. You still have the option of saving the image to disk later.
After initializing the MediaCapture object, you can capture a photo to a SoftwareBitmap using the
LowLagPhotoCapture class. Get an instance of this class by calling PrepareLowLagPhotoCaptureAsync,
passing in an ImageEncodingProperties object specifying the image format you want.
CreateUncompressed creates an uncompressed encoding with the specified pixel format. Capture a photo by
calling CaptureAsync, which returns a CapturedPhoto object. Get a SoftwareBitmap by accessing the
Frame property and then the SoftwareBitmap property.
If you want, you can capture multiple photos by repeatedly calling CaptureAsync. When you are done
capturing, call FinishAsync to shut down the LowLagPhotoCapture session and free up the associated
resources. After calling FinishAsync, to begin capturing photos again you will need to call
PrepareLowLagPhotoCaptureAsync again to reinitialize the capture session before calling CaptureAsync.

// Prepare and capture photo
var lowLagCapture = await _mediaCapture.PrepareLowLagPhotoCaptureAsync(
    ImageEncodingProperties.CreateUncompressed(MediaPixelFormat.Bgra8));

var capturedPhoto = await lowLagCapture.CaptureAsync();
var softwareBitmap = capturedPhoto.Frame.SoftwareBitmap;

await lowLagCapture.FinishAsync();

For information about working with the SoftwareBitmap object, including how to display one in a XAML
page, see Create, edit, and save bitmap images.

Capture a photo to a file


A typical photography app will save a captured photo to disk or to cloud storage and will need to add
metadata, such as photo orientation, to the file. The following example shows you how to capture a photo to a
file. You still have the option of creating a SoftwareBitmap from the image file later.
The technique shown in this example captures the photo to an in-memory stream and then transcodes the
photo from the stream to a file on disk. This example uses GetLibraryAsync to get the user's pictures library
and then the SaveFolder property to get a reference to the default save folder. Remember to add the Pictures
Library capability to your app manifest to access this folder. CreateFileAsync creates a new StorageFile to
which the photo will be saved.
Create an InMemoryRandomAccessStream and then call CapturePhotoToStreamAsync to capture a
photo to the stream, passing in the stream and an ImageEncodingProperties object specifying the image
format that should be used. You can create custom encoding properties by initializing the object yourself, but
the class provides static methods, like ImageEncodingProperties.CreateJpeg for common encoding
formats. Next, create a file stream to the output file by calling OpenAsync. Create a BitmapDecoder to
decode the image from the in memory stream and then create a BitmapEncoder to encode the image to file
by calling CreateForTranscodingAsync.
You can optionally create a BitmapPropertySet object and then call SetPropertiesAsync on the image
encoder to include metadata about the photo in the image file. For more information about encoding
properties, see Image metadata. Handling device orientation properly is essential for most photography
apps. For more information, see Handle device orientation with MediaCapture.
Finally, call FlushAsync on the encoder object to transcode the photo from the in-memory stream to the file.

var myPictures = await Windows.Storage.StorageLibrary.GetLibraryAsync(Windows.Storage.KnownLibraryId.Pictures);
StorageFile file = await myPictures.SaveFolder.CreateFileAsync("photo.jpg", CreationCollisionOption.GenerateUniqueName);

using (var captureStream = new InMemoryRandomAccessStream())
{
    await _mediaCapture.CapturePhotoToStreamAsync(ImageEncodingProperties.CreateJpeg(), captureStream);

    using (var fileStream = await file.OpenAsync(FileAccessMode.ReadWrite))
    {
        var decoder = await BitmapDecoder.CreateAsync(captureStream);
        var encoder = await BitmapEncoder.CreateForTranscodingAsync(fileStream, decoder);

        var properties = new BitmapPropertySet {
            { "System.Photo.Orientation", new BitmapTypedValue(PhotoOrientation.Normal, PropertyType.UInt16) }
        };
        await encoder.BitmapProperties.SetPropertiesAsync(properties);

        await encoder.FlushAsync();
    }
}

For more information about working with files and folders, see Files, folders, and libraries.

Capture a video
Quickly add video capture to your app by using the LowLagMediaRecording class. First, declare a class
variable for the object.

LowLagMediaRecording _mediaRecording;

Next, create a StorageFile object to which the video will be saved. Note that to save to the user's video library,
as shown in this example, you must add the Videos Library capability to your app manifest. Call
PrepareLowLagRecordToStorageFileAsync to initialize the media recording, passing in the storage file and
a MediaEncodingProfile object specifying the encoding for the video. The class provides static methods, like
CreateMp4, for creating common video encoding profiles.
Finally, call StartAsync to begin capturing video.

var myVideos = await Windows.Storage.StorageLibrary.GetLibraryAsync(Windows.Storage.KnownLibraryId.Videos);
StorageFile file = await myVideos.SaveFolder.CreateFileAsync("video.mp4", CreationCollisionOption.GenerateUniqueName);
_mediaRecording = await _mediaCapture.PrepareLowLagRecordToStorageFileAsync(
    MediaEncodingProfile.CreateMp4(VideoEncodingQuality.Auto), file);
await _mediaRecording.StartAsync();

To stop recording video, call StopAsync.


await _mediaRecording.StopAsync();

You can continue to call StartAsync and StopAsync to capture additional videos. When you are done
capturing videos, call FinishAsync to dispose of the capture session and clean up associated resources. After
this call, you must call PrepareLowLagRecordToStorageFileAsync again to reinitialize the capture session
before calling StartAsync.

await _mediaRecording.FinishAsync();

When capturing video, you should register a handler for the RecordLimitationExceeded event of the
MediaCapture object, which will be raised by the operating system if you surpass the limit for a single
recording, currently three hours. In the handler for the event, you should finalize your recording by calling
StopAsync.

_mediaCapture.RecordLimitationExceeded += MediaCapture_RecordLimitationExceeded;

private async void MediaCapture_RecordLimitationExceeded(MediaCapture sender)
{
    await _mediaRecording.StopAsync();
    System.Diagnostics.Debug.WriteLine("Record limitation exceeded.");
}

Pause and resume video recording

You can pause a video recording and then resume recording without creating a separate output file by calling
PauseAsync and then calling ResumeAsync.

await _mediaRecording.PauseAsync(Windows.Media.Devices.MediaCapturePauseBehavior.ReleaseHardwareResources);

await _mediaRecording.ResumeAsync();

Starting with Windows 10, version 1607, you can pause a video recording and receive the last frame captured
before the recording was paused. You can then overlay this frame on the camera preview to allow the user to
align the camera with the paused frame before resuming recording. Calling PauseWithResultAsync returns a
MediaCapturePauseResult object. The LastFrame property is a VideoFrame object representing the last
frame. To display the frame in XAML, get the SoftwareBitmap representation of the video frame. Currently,
only images in BGRA8 format with premultiplied or empty alpha channel are supported, so call Convert if
necessary to get the correct format. Create a new SoftwareBitmapSource object and call SetBitmapAsync to
initialize it. Finally, set the Source property of a XAML Image control to display the image. For this trick to
work, your image must be aligned with the CaptureElement control and should have an opacity value less
than one. Don't forget that you can only modify the UI on the UI thread, so make this call inside RunAsync.
PauseWithResultAsync also returns the duration of the video that was recorded in the preceding segment
in case you need to track how much total time has been recorded.

MediaCapturePauseResult result =
    await _mediaRecording.PauseWithResultAsync(Windows.Media.Devices.MediaCapturePauseBehavior.RetainHardwareResources);

var pausedFrame = result.LastFrame.SoftwareBitmap;

if (pausedFrame.BitmapPixelFormat != BitmapPixelFormat.Bgra8 || pausedFrame.BitmapAlphaMode != BitmapAlphaMode.Ignore)
{
    pausedFrame = SoftwareBitmap.Convert(pausedFrame, BitmapPixelFormat.Bgra8, BitmapAlphaMode.Ignore);
}

var source = new SoftwareBitmapSource();
await source.SetBitmapAsync(pausedFrame);

await Dispatcher.RunAsync(CoreDispatcherPriority.Normal, () =>
{
    PauseImage.Source = source;
    PauseImage.Visibility = Visibility.Visible;
});

_totalRecordedTime += result.RecordDuration;

When you resume recording, you can set the source of the image to null and hide it.

await Dispatcher.RunAsync(CoreDispatcherPriority.Normal, () =>
{
    PauseImage.Source = null;
    PauseImage.Visibility = Visibility.Collapsed;
});

await _mediaRecording.ResumeAsync();

Note that you can also get a result frame when you stop the video by calling StopWithResultAsync.
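
The pattern mirrors PauseWithResultAsync. The following minimal sketch (reusing the _mediaRecording and
_totalRecordedTime variables from the examples above) stops the recording and reads the last captured frame
and the duration of the final segment from the returned MediaCaptureStopResult.

var stopResult = await _mediaRecording.StopWithResultAsync();

// The last frame captured before the recording stopped.
SoftwareBitmap lastFrame = stopResult.LastFrame.SoftwareBitmap;

// Add the duration of the final segment to the running total.
_totalRecordedTime += stopResult.RecordDuration;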

Capture audio
You can quickly add audio capture to your app by using the same technique shown above for capturing video.
The example below creates a StorageFile in the application data folder. Call
PrepareLowLagRecordToStorageFileAsync to initialize the capture session, passing in the file and a
MediaEncodingProfile, which in this example is generated by the CreateMp3 static method. To begin
recording, call StartAsync.

_mediaCapture.RecordLimitationExceeded += MediaCapture_RecordLimitationExceeded;

var localFolder = Windows.Storage.ApplicationData.Current.LocalFolder;
StorageFile file = await localFolder.CreateFileAsync("audio.mp3", CreationCollisionOption.GenerateUniqueName);
_mediaRecording = await _mediaCapture.PrepareLowLagRecordToStorageFileAsync(
MediaEncodingProfile.CreateMp3(AudioEncodingQuality.High), file);
await _mediaRecording.StartAsync();

Call StopAsync to stop the audio recording.

await _mediaRecording.StopAsync();

You can call StartAsync and StopAsync multiple times to record several audio files. When you are done
capturing audio, call FinishAsync to dispose of the capture session and clean up associated resources. After
this call, you must call PrepareLowLagRecordToStorageFileAsync again to reinitialize the capture session
before calling StartAsync.

await _mediaRecording.FinishAsync();

Related topics
Camera
Capture photos and video with Windows built-in camera UI
Handle device orientation with MediaCapture
Create, edit, and save bitmap images
Files, folders, and libraries
Camera UI features for mobile devices

This article shows you how to take advantage of special camera UI features that are only present on mobile devices.

Add the mobile extension to your project


To use these features, you must add a reference to the Microsoft Mobile Extension SDK for Universal App Platform
to your project.
To add a reference to the mobile extension SDK for hardware camera button support
1. In Solution Explorer, right-click References and select Add Reference.
2. Expand the Windows Universal node and select Extensions.
3. Select the Microsoft Mobile Extension SDK for Universal App Platform check box.
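
These steps add an SDKReference item to your project file. If you prefer to edit the .csproj directly, the entry
looks roughly like the following sketch; the SDK identity shown here is an assumption, and the version number
will vary with the SDK installed on your machine.

<ItemGroup>
  <!-- Hypothetical version number; use the version shown in the Add Reference dialog. -->
  <SDKReference Include="WindowsMobile, Version=10.0.14393.0" />
</ItemGroup>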

Hide the status bar


Mobile devices have a StatusBar control that provides the user with status information about the device. This
control takes up space on the screen that can interfere with the media capture UI. You can hide the status bar by
calling HideAsync, but you must make this call from within a conditional block where you use the
ApiInformation.IsTypePresent method to determine if the API is available. This method will only return true on
mobile devices that support the status bar. You should hide the status bar when your app launches or when you
begin previewing from the camera.

// Hide the status bar
if (ApiInformation.IsTypePresent("Windows.UI.ViewManagement.StatusBar"))
{
await Windows.UI.ViewManagement.StatusBar.GetForCurrentView().HideAsync();
}

When your app is shutting down or when the user navigates away from the media capture page of your app, you
can make the control visible again.

// Show the status bar
if (ApiInformation.IsTypePresent("Windows.UI.ViewManagement.StatusBar"))
{
await Windows.UI.ViewManagement.StatusBar.GetForCurrentView().ShowAsync();
}

Use the hardware camera button


Some mobile devices have a dedicated hardware camera button that some users prefer over an on-screen control.
To be notified when the hardware camera button is pressed, register a handler for the
HardwareButtons.CameraPressed event. Because this API is available on mobile devices only, you must again
use IsTypePresent to make sure the API is supported on the current device before attempting to access it.

using Windows.Phone.UI.Input;
if (ApiInformation.IsTypePresent("Windows.Phone.UI.Input.HardwareButtons"))
{
HardwareButtons.CameraPressed += HardwareButtons_CameraPressed;
}

In the handler for the CameraPressed event, you can initiate a photo capture.

private async void HardwareButtons_CameraPressed(object sender, CameraEventArgs e)
{
    await TakePhotoAsync();
}

When your app is shutting down or the user moves away from the media capture page of your app, unregister the
hardware button handler.

if (ApiInformation.IsTypePresent("Windows.Phone.UI.Input.HardwareButtons"))
{
HardwareButtons.CameraPressed -= HardwareButtons_CameraPressed;
}

NOTE
This article is for Windows 10 developers writing Universal Windows Platform (UWP) apps. If you're developing for Windows
8.x or Windows Phone 8.x, see the archived documentation.

Related topics
Camera
Basic photo, video, and audio capture with MediaCapture
Handle device orientation with MediaCapture

When your app captures a photo or video that is intended to be viewed outside of your app, such as saving to a
file on the user's device or sharing online, it's important that you encode the image with the proper orientation
metadata so that when another app or device displays the image, it is oriented correctly. Determining the correct
orientation data to include in a media file can be a complex task because there are several variables to consider,
including the orientation of the device chassis, the orientation of the display, and the placement of the camera on
the chassis (whether it is a front or back-facing camera).
To simplify the process of handling orientation, we recommend using a helper class, CameraRotationHelper, for
which the full definition is provided at the end of this article. You can add this class to your project and then follow
the steps in this article to add orientation support to your camera app. The helper class also makes it easier for you
to rotate the controls in your camera UI so that they are rendered correctly from the user's point of view.

NOTE
This article builds on the code and concepts discussed in the article Basic photo, video, and audio capture with
MediaCapture. We recommend that you familiarize yourself with the basic concepts of using the MediaCapture class
before adding orientation support to your app.

Namespaces used in this article


The example code in this article uses APIs from the following namespaces that you should include in your code.

using Windows.Devices.Enumeration;
using Windows.Devices.Sensors;
using Windows.Graphics.Display;
using Windows.UI.Core;

The first step in adding orientation support to your app is to lock the display so that it doesn't automatically rotate
when the device is rotated. Automatic UI rotation works well for most types of apps, but it is unintuitive for users
when the camera preview rotates. Lock the display orientation by setting the
DisplayInformation.AutoRotationPreferences property to DisplayOrientations.Landscape.

DisplayInformation.AutoRotationPreferences = DisplayOrientations.Landscape;

Tracking the camera device location


To calculate the correct orientation for captured media, your app must determine the location of the camera device
on the chassis. Add a boolean member variable to track whether the camera is external to the device, such as a
USB web cam. Add another boolean variable to track whether the preview should be mirrored, which is the case if
a front-facing camera is used. Also, add a variable for storing a DeviceInformation object that represents the
selected camera.

private bool _externalCamera;
private bool _mirroringPreview;
private DeviceInformation _cameraDevice;

Select a camera device and initialize the MediaCapture object


The article Basic photo, video, and audio capture with MediaCapture shows you how to initialize the
MediaCapture object with just a couple of lines of code. To support camera orientation, we will add a few more
steps to the initialization process.
First, call DeviceInformation.FindAllAsync passing in the device selector DeviceClass.VideoCapture to get a
list of all available video capture devices. Next, select the first device in the list where the panel location of the
camera is known and where it matches the supplied value, which in this example is a front-facing camera. If no
camera is found on the desired panel, the first or default available camera is used.
If a camera device is found, a new MediaCaptureInitializationSettings object is created and the
VideoDeviceId property is set to the selected device. Next, create the MediaCapture object and call
InitializeAsync, passing in the settings object to tell the system to use the selected camera.
Finally, check to see if the selected device panel is null or unknown. If so, the camera is external, which means that
its rotation is unrelated to the rotation of the device. If the panel is known and is on the front of the device chassis,
we know the preview should be mirrored, so the variable tracking this is set.
var allVideoDevices = await DeviceInformation.FindAllAsync(DeviceClass.VideoCapture);
DeviceInformation desiredDevice = allVideoDevices.FirstOrDefault(x => x.EnclosureLocation != null
&& x.EnclosureLocation.Panel == Windows.Devices.Enumeration.Panel.Front);
_cameraDevice = desiredDevice ?? allVideoDevices.FirstOrDefault();

if (_cameraDevice == null)
{
System.Diagnostics.Debug.WriteLine("No camera device found!");
return;
}

var settings = new MediaCaptureInitializationSettings { VideoDeviceId = _cameraDevice.Id };

_mediaCapture = new MediaCapture();
_mediaCapture.RecordLimitationExceeded += MediaCapture_RecordLimitationExceeded;
_mediaCapture.Failed += MediaCapture_Failed;

try
{
await _mediaCapture.InitializeAsync(settings);
}
catch (UnauthorizedAccessException)
{
System.Diagnostics.Debug.WriteLine("The app was denied access to the camera");
return;
}

// Handle camera device location
if (_cameraDevice.EnclosureLocation == null ||
_cameraDevice.EnclosureLocation.Panel == Windows.Devices.Enumeration.Panel.Unknown)
{
_externalCamera = true;
}
else
{
_externalCamera = false;
_mirroringPreview = (_cameraDevice.EnclosureLocation.Panel == Windows.Devices.Enumeration.Panel.Front);
}

Initialize the CameraRotationHelper class


Now we begin using the CameraRotationHelper class. Declare a class member variable to store the object. Call
the constructor, passing in the enclosure location of the selected camera. The helper class uses this information to
calculate the correct orientation for captured media, the preview stream, and the UI. Register a handler for the
helper class's OrientationChanged event, which will be raised when we need to update the orientation of the UI
or the preview stream.

private CameraRotationHelper _rotationHelper;

_rotationHelper = new CameraRotationHelper(_cameraDevice.EnclosureLocation);
_rotationHelper.OrientationChanged += RotationHelper_OrientationChanged;

Add orientation data to the camera preview stream


Adding the correct orientation to the metadata of the preview stream does not affect how the preview appears to
the user, but it helps the system encode any frames captured from the preview stream correctly.
You start the camera preview by calling MediaCapture.StartPreviewAsync. Before you do this, check the
member variable to see if the preview should be mirrored (for a front-facing camera). If so, set the FlowDirection
property of the CaptureElement, named PreviewControl in this example, to FlowDirection.RightToLeft. After
starting the preview, call the helper method SetPreviewRotationAsync to set the preview rotation. Following is
the implementation of this method.

PreviewControl.Source = _mediaCapture;
PreviewControl.FlowDirection = _mirroringPreview ? FlowDirection.RightToLeft : FlowDirection.LeftToRight;

await _mediaCapture.StartPreviewAsync();
await SetPreviewRotationAsync();

We set the preview rotation in a separate method so that it can be updated when the phone orientation changes
without reinitializing the preview stream. If the camera is external to the device, no action is taken. Otherwise, the
CameraRotationHelper method GetCameraPreviewOrientation is called and returns the proper orientation
for the preview stream.
To set the metadata, the preview stream properties are retrieved by calling
VideoDeviceController.GetMediaStreamProperties. Next, create the GUID representing the Media Foundation
Transform (MFT) attribute for video stream rotation. In C++ you can use the constant
MF_MT_VIDEO_ROTATION, but in C# you must manually specify the GUID value.
Add a property value to the stream properties object, specifying the GUID as the key and the preview rotation as
the value. This property expects values to be in units of counterclockwise degrees, so the CameraRotationHelper
method ConvertSimpleOrientationToClockwiseDegrees is used to convert the simple orientation value.
Finally, call SetEncodingPropertiesAsync to apply the new rotation property to the stream.

private async Task SetPreviewRotationAsync()
{
if (!_externalCamera)
{
// Add rotation metadata to the preview stream to make sure the aspect ratio / dimensions match when rendering and getting preview frames
var rotation = _rotationHelper.GetCameraPreviewOrientation();
var props = _mediaCapture.VideoDeviceController.GetMediaStreamProperties(MediaStreamType.VideoPreview);
Guid RotationKey = new Guid("C380465D-2271-428C-9B83-ECEA3B4A85C1");
props.Properties.Add(RotationKey, CameraRotationHelper.ConvertSimpleOrientationToClockwiseDegrees(rotation));
await _mediaCapture.SetEncodingPropertiesAsync(MediaStreamType.VideoPreview, props, null);
}
}

Next, add the handler for the CameraRotationHelper.OrientationChanged event. This event passes in an
argument that lets you know whether the preview stream needs to be rotated. If the orientation of the device was
changed to face up or face down, this value will be false. If the preview does need to be rotated, call
SetPreviewRotationAsync which was defined previously.
Next, in the OrientationChanged event handler, update your UI if needed. Get the current recommended UI
orientation from the helper class by calling GetUIOrientation and convert the value to clockwise degrees, which
is used for XAML transforms. Create a RotateTransform from the orientation value and set the
RenderTransform property of your XAML controls. Depending on your UI layout, you may need to make
additional adjustments here in addition to simply rotating the controls. Also, remember that all updates to your UI
must be made on the UI thread, so you should place this code inside a call to RunAsync.
private async void RotationHelper_OrientationChanged(object sender, bool updatePreview)
{
    if (updatePreview)
    {
        await SetPreviewRotationAsync();
    }
    await Dispatcher.RunAsync(CoreDispatcherPriority.Normal, () => {
        // Rotate the buttons in the UI to match the rotation of the device
        var angle = CameraRotationHelper.ConvertSimpleOrientationToClockwiseDegrees(_rotationHelper.GetUIOrientation());
        var transform = new RotateTransform { Angle = angle };

        // The RenderTransform is safe to use (i.e. it won't cause layout issues) in this case, because these buttons have a 1:1 aspect ratio
        CapturePhotoButton.RenderTransform = transform;
    });
}

Capture a photo with orientation data


The article Basic photo, video, and audio capture with MediaCapture shows you how to capture a photo to a
file by capturing first to an in-memory stream and then using a decoder to read the image data from the stream
and an encoder to transcode the image data to a file. Orientation data, obtained from the
CameraRotationHelper class, can be added to the image file during the transcoding operation.
In the following example, a photo is captured to an InMemoryRandomAccessStream with a call to
CapturePhotoToStreamAsync and a BitmapDecoder is created from the stream. Next, a StorageFile is created
and opened to retrieve an IRandomAccessStream for writing to the file.
Before transcoding the file, the photo's orientation is retrieved from the helper class method
GetCameraCaptureOrientation. This method returns a SimpleOrientation object which is converted to a
PhotoOrientation object with the helper method ConvertSimpleOrientationToPhotoOrientation. Next, a
new BitmapPropertySet object is created and a property is added where the key is "System.Photo.Orientation"
and the value is the photo orientation, expressed as a BitmapTypedValue. "System.Photo.Orientation" is one of
many Windows properties that can be added as metadata to an image file. For a list of all of the photo-related
properties, see Windows Properties - Photo. For more information about working with metadata in images, see
Image metadata.
Finally, the property set, which includes the orientation data, is set on the encoder with a call to
SetPropertiesAsync, and the image is transcoded with a call to FlushAsync.

private async Task CapturePhotoWithOrientationAsync()
{
var captureStream = new InMemoryRandomAccessStream();

try
{
await _mediaCapture.CapturePhotoToStreamAsync(ImageEncodingProperties.CreateJpeg(), captureStream);
}
catch (Exception ex)
{
System.Diagnostics.Debug.WriteLine("Exception when taking a photo: {0}", ex.ToString());
return;
}

var decoder = await BitmapDecoder.CreateAsync(captureStream);

var file = await KnownFolders.PicturesLibrary.CreateFileAsync("SimplePhoto.jpeg", CreationCollisionOption.GenerateUniqueName);

using (var outputStream = await file.OpenAsync(FileAccessMode.ReadWrite))
{
var encoder = await BitmapEncoder.CreateForTranscodingAsync(outputStream, decoder);
var photoOrientation = CameraRotationHelper.ConvertSimpleOrientationToPhotoOrientation(
_rotationHelper.GetCameraCaptureOrientation());
var properties = new BitmapPropertySet {
{ "System.Photo.Orientation", new BitmapTypedValue(photoOrientation, PropertyType.UInt16) } };
await encoder.BitmapProperties.SetPropertiesAsync(properties);
await encoder.FlushAsync();
}
}

Capture a video with orientation data


Basic video capture is described in the article Basic photo, video, and audio capture with MediaCapture.
Adding orientation data to the encoding of the captured video is done using the same technique as described
earlier in the section about adding orientation data to the preview stream.
In the following example, a file is created to which the captured video will be written. An MP4 encoding profile is
created using the static method CreateMp4. The proper orientation for the video is obtained from the
CameraRotationHelper class with a call to GetCameraCaptureOrientation. Because the rotation property
requires the orientation to be expressed in counterclockwise degrees, the
ConvertSimpleOrientationToClockwiseDegrees helper method is called to convert the orientation value. Next,
create the GUID representing the Media Foundation Transform (MFT) attribute for video stream rotation. In C++
you can use the constant MF_MT_VIDEO_ROTATION, but in C# you must manually specify the GUID value. Add a
property value to the stream properties object, specifying the GUID as the key and the rotation as the value. Finally,
call StartRecordToStorageFileAsync to begin recording video encoded with orientation data.
private async Task StartRecordingWithOrientationAsync()
{
try
{
var videoFile = await KnownFolders.VideosLibrary.CreateFileAsync("SimpleVideo.mp4", CreationCollisionOption.GenerateUniqueName);

var encodingProfile = MediaEncodingProfile.CreateMp4(VideoEncodingQuality.Auto);

var rotationAngle = CameraRotationHelper.ConvertSimpleOrientationToClockwiseDegrees(
    _rotationHelper.GetCameraCaptureOrientation());
Guid RotationKey = new Guid("C380465D-2271-428C-9B83-ECEA3B4A85C1");
encodingProfile.Video.Properties.Add(RotationKey, PropertyValue.CreateInt32(rotationAngle));

await _mediaCapture.StartRecordToStorageFileAsync(encodingProfile, videoFile);
}
catch (Exception ex)
{
System.Diagnostics.Debug.WriteLine("Exception when starting video recording: {0}", ex.ToString());
}
}

CameraRotationHelper full code listing


The following code snippet lists the full code for the CameraRotationHelper class that manages the hardware
orientation sensors, calculates the proper orientation values for photos and videos, and provides helper methods
to convert between the different representations of orientation that are used by different Windows features. If you
follow the guidance shown in the article above, you can add this class to your project as-is without having to make
any changes. Of course, you can feel free to customize the following code to meet the needs of your particular
scenario.
This helper class uses the device's SimpleOrientationSensor to determine the current orientation of the device
chassis and the DisplayInformation class to determine the current orientation of the display. Each of these
classes provides events that are raised when the current orientation changes. The panel on which the capture
device is mounted - front-facing, back-facing, or external - is used to determine whether the preview stream
should be mirrored. Also, the EnclosureLocation.RotationAngleInDegreesClockwise property, supported by
some devices, is used to determine the orientation in which the camera is mounted on the chassis.
The following methods can be used to get recommended orientation values for the specified camera app tasks:
GetUIOrientation - Returns the suggested orientation for camera UI elements.
GetCameraCaptureOrientation - Returns the suggested orientation for encoding into image metadata.
GetCameraPreviewOrientation - Returns the suggested orientation for the preview stream to provide a
natural user experience.

class CameraRotationHelper
{
private EnclosureLocation _cameraEnclosureLocation;
private DisplayInformation _displayInformation = DisplayInformation.GetForCurrentView();
private SimpleOrientationSensor _orientationSensor = SimpleOrientationSensor.GetDefault();
public event EventHandler<bool> OrientationChanged;

public CameraRotationHelper(EnclosureLocation cameraEnclosureLocation)
{
_cameraEnclosureLocation = cameraEnclosureLocation;
if (!IsEnclosureLocationExternal(_cameraEnclosureLocation))
{
_orientationSensor.OrientationChanged += SimpleOrientationSensor_OrientationChanged;
}
_displayInformation.OrientationChanged += DisplayInformation_OrientationChanged;
}
private void SimpleOrientationSensor_OrientationChanged(SimpleOrientationSensor sender,
SimpleOrientationSensorOrientationChangedEventArgs args)
{
if (args.Orientation != SimpleOrientation.Faceup && args.Orientation != SimpleOrientation.Facedown)
{
HandleOrientationChanged(false);
}
}

private void DisplayInformation_OrientationChanged(DisplayInformation sender, object args)
{
HandleOrientationChanged(true);
}

private void HandleOrientationChanged(bool updatePreviewStreamRequired)
{
var handler = OrientationChanged;
if (handler != null)
{
handler(this, updatePreviewStreamRequired);
}
}

public static bool IsEnclosureLocationExternal(EnclosureLocation enclosureLocation)
{
return (enclosureLocation == null || enclosureLocation.Panel == Windows.Devices.Enumeration.Panel.Unknown);
}

private bool IsCameraMirrored()
{
// Front panel cameras are mirrored by default
return (_cameraEnclosureLocation.Panel == Windows.Devices.Enumeration.Panel.Front);
}

private SimpleOrientation GetCameraOrientationRelativeToNativeOrientation()
{
// Get the rotation angle of the camera enclosure
var enclosureAngle = ConvertClockwiseDegreesToSimpleOrientation((int)_cameraEnclosureLocation.RotationAngleInDegreesClockwise);

// Account for the fact that, on portrait-first devices, the built in camera sensor is read at a 90 degree offset to the native orientation
if (_displayInformation.NativeOrientation == DisplayOrientations.Portrait && !IsEnclosureLocationExternal(_cameraEnclosureLocation))
{
return AddOrientations(SimpleOrientation.Rotated90DegreesCounterclockwise, enclosureAngle);
}
else
{
return AddOrientations(SimpleOrientation.NotRotated, enclosureAngle);
}
}

// Gets the rotation to rotate ui elements
public SimpleOrientation GetUIOrientation()
{
if (IsEnclosureLocationExternal(_cameraEnclosureLocation))
{
// Cameras that are not attached to the device do not rotate along with it, so apply no rotation
return SimpleOrientation.NotRotated;
}

// Return the difference between the orientation of the device and the orientation of the app display
var deviceOrientation = _orientationSensor.GetCurrentOrientation();
var displayOrientation = ConvertDisplayOrientationToSimpleOrientation(_displayInformation.CurrentOrientation);
return SubOrientations(displayOrientation, deviceOrientation);
}

// Gets the rotation of the camera to rotate pictures/videos when saving to file
public SimpleOrientation GetCameraCaptureOrientation()
{
if (IsEnclosureLocationExternal(_cameraEnclosureLocation))
{
// Cameras that are not attached to the device do not rotate along with it, so apply no rotation
return SimpleOrientation.NotRotated;
}

// Get the device orientation offset by the camera hardware offset
var deviceOrientation = _orientationSensor.GetCurrentOrientation();
var result = SubOrientations(deviceOrientation, GetCameraOrientationRelativeToNativeOrientation());

// If the preview is being mirrored for a front-facing camera, then the rotation should be inverted
if (IsCameraMirrored())
{
result = MirrorOrientation(result);
}
return result;
}

// Gets the rotation of the camera to display the camera preview
public SimpleOrientation GetCameraPreviewOrientation()
{
if (IsEnclosureLocationExternal(_cameraEnclosureLocation))
{
// Cameras that are not attached to the device do not rotate along with it, so apply no rotation
return SimpleOrientation.NotRotated;
}

// Get the app display rotation offset by the camera hardware offset
var result = ConvertDisplayOrientationToSimpleOrientation(_displayInformation.CurrentOrientation);
result = SubOrientations(result, GetCameraOrientationRelativeToNativeOrientation());

// If the preview is being mirrored for a front-facing camera, then the rotation should be inverted
if (IsCameraMirrored())
{
result = MirrorOrientation(result);
}
return result;
}

public static PhotoOrientation ConvertSimpleOrientationToPhotoOrientation(SimpleOrientation orientation)
{
switch (orientation)
{
case SimpleOrientation.Rotated90DegreesCounterclockwise:
return PhotoOrientation.Rotate90;
case SimpleOrientation.Rotated180DegreesCounterclockwise:
return PhotoOrientation.Rotate180;
case SimpleOrientation.Rotated270DegreesCounterclockwise:
return PhotoOrientation.Rotate270;
case SimpleOrientation.NotRotated:
default:
return PhotoOrientation.Normal;
}
}

public static int ConvertSimpleOrientationToClockwiseDegrees(SimpleOrientation orientation)
{
switch (orientation)
{
case SimpleOrientation.Rotated90DegreesCounterclockwise:
return 270;
case SimpleOrientation.Rotated180DegreesCounterclockwise:
return 180;
case SimpleOrientation.Rotated270DegreesCounterclockwise:
return 90;
case SimpleOrientation.NotRotated:
default:
return 0;
}
}

private SimpleOrientation ConvertDisplayOrientationToSimpleOrientation(DisplayOrientations orientation)
{
SimpleOrientation result;
switch (orientation)
{
case DisplayOrientations.Landscape:
result = SimpleOrientation.NotRotated;
break;
case DisplayOrientations.PortraitFlipped:
result = SimpleOrientation.Rotated90DegreesCounterclockwise;
break;
case DisplayOrientations.LandscapeFlipped:
result = SimpleOrientation.Rotated180DegreesCounterclockwise;
break;
case DisplayOrientations.Portrait:
default:
result = SimpleOrientation.Rotated270DegreesCounterclockwise;
break;
}

// Above assumes landscape; offset is needed if native orientation is portrait
if (_displayInformation.NativeOrientation == DisplayOrientations.Portrait)
{
result = AddOrientations(result, SimpleOrientation.Rotated90DegreesCounterclockwise);
}

return result;
}

private static SimpleOrientation MirrorOrientation(SimpleOrientation orientation)
{
// This only affects the 90 and 270 degree cases, because rotating 0 and 180 degrees is the same clockwise and counter-clockwise
switch (orientation)
{
case SimpleOrientation.Rotated90DegreesCounterclockwise:
return SimpleOrientation.Rotated270DegreesCounterclockwise;
case SimpleOrientation.Rotated270DegreesCounterclockwise:
return SimpleOrientation.Rotated90DegreesCounterclockwise;
}
return orientation;
}

private static SimpleOrientation AddOrientations(SimpleOrientation a, SimpleOrientation b)
{
var aRot = ConvertSimpleOrientationToClockwiseDegrees(a);
var bRot = ConvertSimpleOrientationToClockwiseDegrees(b);
var result = (aRot + bRot) % 360;
return ConvertClockwiseDegreesToSimpleOrientation(result);
}

private static SimpleOrientation SubOrientations(SimpleOrientation a, SimpleOrientation b)
{
var aRot = ConvertSimpleOrientationToClockwiseDegrees(a);
var bRot = ConvertSimpleOrientationToClockwiseDegrees(b);
//add 360 to ensure the modulus operator does not operate on a negative
var result = (360 + (aRot - bRot)) % 360;
return ConvertClockwiseDegreesToSimpleOrientation(result);
}

private static VideoRotation ConvertSimpleOrientationToVideoRotation(SimpleOrientation orientation)
{
switch (orientation)
{
case SimpleOrientation.Rotated90DegreesCounterclockwise:
return VideoRotation.Clockwise270Degrees;
case SimpleOrientation.Rotated180DegreesCounterclockwise:
return VideoRotation.Clockwise180Degrees;
case SimpleOrientation.Rotated270DegreesCounterclockwise:
return VideoRotation.Clockwise90Degrees;
case SimpleOrientation.NotRotated:
default:
return VideoRotation.None;
}
}

private static SimpleOrientation ConvertClockwiseDegreesToSimpleOrientation(int orientation)


{
switch (orientation)
{
case 270:
return SimpleOrientation.Rotated90DegreesCounterclockwise;
case 180:
return SimpleOrientation.Rotated180DegreesCounterclockwise;
case 90:
return SimpleOrientation.Rotated270DegreesCounterclockwise;
case 0:
default:
return SimpleOrientation.NotRotated;
}
}
}

Related topics
Camera
Basic photo, video, and audio capture with MediaCapture

Discover and select camera capabilities with camera profiles

This article discusses how to use camera profiles to discover and manage the capabilities of different video capture
devices. This includes tasks such as selecting profiles that support specific resolutions or frame rates, profiles that
support simultaneous access to multiple cameras, and profiles that support HDR.

NOTE
This article builds on concepts and code discussed in Basic photo, video, and audio capture with MediaCapture, which
describes the steps for implementing basic photo and video capture. It is recommended that you familiarize yourself with the
basic media capture pattern in that article before moving on to more advanced capture scenarios. The code in this article
assumes that your app already has an instance of MediaCapture that has been properly initialized.

About camera profiles


Cameras on different devices support different capabilities including the set of supported capture resolutions,
frame rate for video captures, and whether HDR or variable frame rate captures are supported. The Universal
Windows Platform (UWP) media capture framework stores this set of capabilities in a
MediaCaptureVideoProfileMediaDescription. A camera profile, represented by a MediaCaptureVideoProfile
object, has three collections of media descriptions; one for photo capture, one for video capture, and another for
video preview.
Before initializing your MediaCapture object, you can query the capture devices on the current device to see what
profiles are supported. When you select a supported profile, you know that the capture device supports all of the
capabilities in the profile's media descriptions. This eliminates the need for a trial and error approach to
determining which combinations of capabilities are supported on a particular device.

var mediaInitSettings = new MediaCaptureInitializationSettings { VideoDeviceId = cameraDevice.Id };

The code examples in this article replace this minimal initialization with the discovery of camera profiles supporting
various capabilities, which are then used to initialize the media capture device.

Find a video device that supports camera profiles


Before searching for supported camera profiles, you should find a capture device that supports the use of camera
profiles. The GetVideoProfileSupportedDeviceIdAsync helper method defined in the example below uses the
DeviceInformation.FindAllAsync method to retrieve a list of all available video capture devices. It loops through
all of the devices in the list, calling the static method, IsVideoProfileSupported, for each device to see if it
supports video profiles. The method also checks the EnclosureLocation.Panel property for each device, allowing
you to specify whether you want a camera on the front or back of the device.
If a device that supports camera profiles is found on the specified panel, the Id value, containing the device's ID
string, is returned.
public async Task<string> GetVideoProfileSupportedDeviceIdAsync(Windows.Devices.Enumeration.Panel panel)
{
string deviceId = string.Empty;

// Finds all video capture devices
DeviceInformationCollection devices = await DeviceInformation.FindAllAsync(DeviceClass.VideoCapture);

foreach (var device in devices)
{
// Check if the device on the requested panel supports Video Profile
if (MediaCapture.IsVideoProfileSupported(device.Id) && device.EnclosureLocation.Panel == panel)
{
// We've located a device that supports Video Profiles on expected panel
deviceId = device.Id;
break;
}
}

return deviceId;
}

If the device ID returned from the GetVideoProfileSupportedDeviceIdAsync helper method is null or an empty
string, there is no device on the specified panel that supports camera profiles. In this case, you should initialize your
media capture device without using profiles.

string videoDeviceId = await GetVideoProfileSupportedDeviceIdAsync(Windows.Devices.Enumeration.Panel.Back);

if (string.IsNullOrEmpty(videoDeviceId))
{
// No devices on the specified panel support video profiles.
return;
}

Select a profile based on supported resolution and frame rate


To select a profile with particular capabilities, such as with the ability to achieve a particular resolution and frame
rate, you should first call the helper method defined above to get the ID of a capture device that supports using
camera profiles.
Create a new MediaCaptureInitializationSettings object, passing in the selected device ID. Next, call the static
method MediaCapture.FindAllVideoProfiles to get a list of all camera profiles supported by the device.
This example uses a Linq query, enabled by including the System.Linq namespace, to select a profile that
contains a SupportedRecordMediaDescription object where the Width, Height, and FrameRate properties
match the requested values. If a match is found, VideoProfile and RecordMediaDescription of the
MediaCaptureInitializationSettings are set to the values from the anonymous type returned from the Linq
query. If no match is found, the default profile is used.
var mediaInitSettings = new MediaCaptureInitializationSettings { VideoDeviceId = videoDeviceId };

IReadOnlyList<MediaCaptureVideoProfile> profiles = MediaCapture.FindAllVideoProfiles(videoDeviceId);

var match = (from profile in profiles
             from desc in profile.SupportedRecordMediaDescription
             where desc.Width == 640 && desc.Height == 480 && Math.Round(desc.FrameRate) == 30
             select new { profile, desc }).FirstOrDefault();

if (match != null)
{
mediaInitSettings.VideoProfile = match.profile;
mediaInitSettings.RecordMediaDescription = match.desc;
}
else
{
// Could not locate a WVGA 30FPS profile, use default video recording profile
mediaInitSettings.VideoProfile = profiles[0];
}

After you populate the MediaCaptureInitializationSettings with your desired camera profile, you simply call
InitializeAsync on your media capture object to configure it to the desired profile.

await _mediaCapture.InitializeAsync(mediaInitSettings);

Select a profile that supports concurrence


You can use camera profiles to determine if a device supports video capture from multiple cameras concurrently.
For this scenario, you will need to create two sets of capture objects, one for the front camera and one for the back.
For each camera, create a MediaCapture, a MediaCaptureInitializationSettings, and a string to hold the
capture device ID. Also, add a boolean variable that will track whether concurrence is supported.

MediaCapture mediaCaptureFront = new MediaCapture();
MediaCapture mediaCaptureBack = new MediaCapture();

MediaCaptureInitializationSettings mediaInitSettingsFront = new MediaCaptureInitializationSettings();
MediaCaptureInitializationSettings mediaInitSettingsBack = new MediaCaptureInitializationSettings();

string frontVideoDeviceId = string.Empty;
string backVideoDeviceId = string.Empty;

bool concurrencySupported = false;

The static method MediaCapture.FindConcurrentProfiles returns a list of the camera profiles, supported by
the specified capture device, that can also be used concurrently with another camera. Use a Linq query to find
a profile that supports concurrent use and that is supported by both the front and back cameras. If a profile that
meets these requirements is found, set the profile on each of the MediaCaptureInitializationSettings objects
and set the boolean concurrency tracking variable to true; a sketch of this query follows the next example.
// Find front and back Device ID of capture device that supports Video Profile
frontVideoDeviceId = await GetVideoProfileSupportedDeviceIdAsync(Windows.Devices.Enumeration.Panel.Front);
backVideoDeviceId = await GetVideoProfileSupportedDeviceIdAsync(Windows.Devices.Enumeration.Panel.Back);

// First check if the devices support video profiles, if not there's no reason to proceed
if (string.IsNullOrEmpty(frontVideoDeviceId) || string.IsNullOrEmpty(backVideoDeviceId))
{
// Either the front or back camera doesn't support video profiles
return;
}

mediaInitSettingsFront.VideoDeviceId = frontVideoDeviceId;
mediaInitSettingsBack.VideoDeviceId = backVideoDeviceId;
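
The profile-matching query described above is not shown in the snippet; the following is a minimal sketch of one
way to write it. It assumes that a profile supported by both cameras can be matched by comparing the
MediaCaptureVideoProfile.Id values returned by FindConcurrentProfiles for each device.

var frontProfiles = MediaCapture.FindConcurrentProfiles(frontVideoDeviceId);
var backProfiles = MediaCapture.FindConcurrentProfiles(backVideoDeviceId);

// Find a front-camera profile that the back camera can also use concurrently.
var frontProfile = frontProfiles.FirstOrDefault(
    fp => backProfiles.Any(bp => bp.Id == fp.Id));

if (frontProfile != null)
{
    mediaInitSettingsFront.VideoProfile = frontProfile;
    mediaInitSettingsBack.VideoProfile = backProfiles.First(bp => bp.Id == frontProfile.Id);
    concurrencySupported = true;
}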

Call MediaCapture.InitializeAsync for the primary camera for your app scenario. If concurrence is supported,
initialize the second camera as well.

await mediaCaptureFront.InitializeAsync(mediaInitSettingsFront);
if (concurrencySupported)
{
// Only initialize the back camera if concurrency is available.
await mediaCaptureBack.InitializeAsync(mediaInitSettingsBack);
}

Use known profiles to find a profile that supports HDR video


Selecting a profile that supports HDR begins like the other scenarios. Create a
MediaCaptureInitializationSettings and a string to hold the capture device ID. Add a boolean variable that will
track whether HDR video is supported.

MediaCaptureInitializationSettings mediaInitSettings = new MediaCaptureInitializationSettings();
string videoDeviceId = string.Empty;
bool HdrVideoSupported = false;

Use the GetVideoProfileSupportedDeviceIdAsync helper method defined above to get the device ID for a
capture device that supports camera profiles.

// Select the first video capture device found on the back of the device
videoDeviceId = await GetVideoProfileSupportedDeviceIdAsync(Windows.Devices.Enumeration.Panel.Back);

if (string.IsNullOrEmpty(videoDeviceId))
{
// No devices on the specified panel support video profiles.
return;
}

The static method MediaCapture.FindKnownVideoProfiles returns the camera profiles supported by the
specified device that are categorized by the specified KnownVideoProfile value. For this scenario, the
VideoRecording value is specified to limit the returned camera profiles to ones that support video recording.
Loop through the returned list of camera profiles. For each camera profile, loop through each
VideoProfileMediaDescription in the profile checking to see if the IsHdrVideoSupported property is true. After
a suitable media description is found, break out of the loop and assign the profile and description objects to the
MediaCaptureInitializationSettings object.
IReadOnlyList<MediaCaptureVideoProfile> profiles =
MediaCapture.FindKnownVideoProfiles(videoDeviceId, KnownVideoProfile.VideoRecording);

// Walk through available profiles, look for first profile with HDR supported Video Profile
foreach (MediaCaptureVideoProfile profile in profiles)
{
IReadOnlyList<MediaCaptureVideoProfileMediaDescription> recordMediaDescription =
profile.SupportedRecordMediaDescription;
foreach (MediaCaptureVideoProfileMediaDescription videoProfileMediaDescription in recordMediaDescription)
{
if (videoProfileMediaDescription.IsHdrVideoSupported)
{
// We've located the profile and description for HDR Video, set profile and flag
mediaInitSettings.VideoProfile = profile;
mediaInitSettings.RecordMediaDescription = videoProfileMediaDescription;
HdrVideoSupported = true;
break;
}
}

if (HdrVideoSupported)
{
// Profile with HDR support found. Stop looking.
break;
}
}

Determine if a device supports simultaneous photo and video capture


Many devices support capturing photos and video simultaneously. To determine if a capture device supports this,
call MediaCapture.FindAllVideoProfiles to get all of the camera profiles supported by the device. Use a Linq
query to find a profile that has at least one entry for both SupportedPhotoMediaDescription and
SupportedRecordMediaDescription, which means that the profile supports simultaneous capture.

var simultaneousPhotoAndVideoSupported = false;

IReadOnlyList<MediaCaptureVideoProfile> profiles = MediaCapture.FindAllVideoProfiles(videoDeviceId);

var match = (from profile in profiles
             where profile.SupportedPhotoMediaDescription.Any() &&
                   profile.SupportedRecordMediaDescription.Any()
             select profile).FirstOrDefault();

if (match != null)
{
// Simultaneous photo and video supported
simultaneousPhotoAndVideoSupported = true;
}
else
{
// Simultaneous photo and video not supported
simultaneousPhotoAndVideoSupported = false;
}

You can refine this query to look for profiles that support specific resolutions or other capabilities in addition to
simultaneous capture. You can also use the MediaCapture.FindKnownVideoProfiles method and specify the
BalancedVideoAndPhoto value to retrieve profiles that support simultaneous capture, but querying all profiles
will provide more complete results. A sketch of the known-profile approach follows.
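
The following is a minimal sketch of the known-profile approach. It reuses the videoDeviceId and
mediaInitSettings variables from the earlier examples and simply selects the first profile, if any, that the device
categorizes as KnownVideoProfile.BalancedVideoAndPhoto.

IReadOnlyList<MediaCaptureVideoProfile> balancedProfiles =
    MediaCapture.FindKnownVideoProfiles(videoDeviceId, KnownVideoProfile.BalancedVideoAndPhoto);

// Profiles in this category are intended for simultaneous photo and video capture.
if (balancedProfiles.Count > 0)
{
    mediaInitSettings.VideoProfile = balancedProfiles[0];
    simultaneousPhotoAndVideoSupported = true;
}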

Related topics
Camera
Basic photo, video, and audio capture with MediaCapture

Set format, resolution, and frame rate for MediaCapture

This article shows you how to use the IMediaEncodingProperties interface to set the resolution and frame rate of
the camera preview stream and captured photos and video. It also shows how to ensure that the aspect ratio of the
preview stream matches that of the captured media.
Camera profiles offer a more advanced way of discovering and setting the stream properties of the camera, but
they are not supported for all devices. For more information, see Camera profiles.
The code in this article was adapted from the CameraResolution sample. You can download the sample to see the
code used in context or to use the sample as a starting point for your own app.

NOTE
This article builds on concepts and code discussed in Basic photo, video, and audio capture with MediaCapture, which
describes the steps for implementing basic photo and video capture. It is recommended that you familiarize yourself with the
basic media capture pattern in that article before moving on to more advanced capture scenarios. The code in this article
assumes that your app already has an instance of MediaCapture that has been properly initialized.

A media encoding properties helper class


Creating a simple helper class to wrap the functionality of the IMediaEncodingProperties interface makes it
easier to select a set of encoding properties that meet particular criteria. This helper class is particularly useful due
to the following behavior of the encoding properties feature:
Warning
The VideoDeviceController.GetAvailableMediaStreamProperties method takes a member of the
MediaStreamType enumeration, such as VideoRecord or Photo, and returns a list of either
ImageEncodingProperties or VideoEncodingProperties objects that convey the stream encoding settings, such
as the resolution of the captured photo or video. The results of calling GetAvailableMediaStreamProperties may
include ImageEncodingProperties or VideoEncodingProperties regardless of what MediaStreamType value
is specified. For this reason, you should always check the type of each returned value and cast it to the appropriate
type before attempting to access any of the property values.
The helper class defined below handles the type checking and casting for ImageEncodingProperties or
VideoEncodingProperties so that your app code doesn't need to distinguish between the two types. In addition
to this, the helper class exposes properties for the aspect ratio of the properties, the frame rate (for video encoding
properties only), and a friendly name that makes it easier to display the encoding properties in the app's UI.
You must include the Windows.Media.MediaProperties namespace in the source file for the helper class.

using Windows.Media.MediaProperties;

class StreamPropertiesHelper
{
private IMediaEncodingProperties _properties;
public StreamPropertiesHelper(IMediaEncodingProperties properties)
{
if (properties == null)
{
throw new ArgumentNullException(nameof(properties));
}

// This helper class only uses ImageEncodingProperties or VideoEncodingProperties
if (!(properties is ImageEncodingProperties) && !(properties is VideoEncodingProperties))
{
throw new ArgumentException("Argument is of the wrong type. Required: " + typeof(ImageEncodingProperties).Name
+ " or " + typeof(VideoEncodingProperties).Name + ".", nameof(properties));
}

// Store the actual instance of the IMediaEncodingProperties for setting them later
_properties = properties;
}

public uint Width
{
get
{
if (_properties is ImageEncodingProperties)
{
return (_properties as ImageEncodingProperties).Width;
}
else if (_properties is VideoEncodingProperties)
{
return (_properties as VideoEncodingProperties).Width;
}

return 0;
}
}

public uint Height
{
get
{
if (_properties is ImageEncodingProperties)
{
return (_properties as ImageEncodingProperties).Height;
}
else if (_properties is VideoEncodingProperties)
{
return (_properties as VideoEncodingProperties).Height;
}

return 0;
}
}

public uint FrameRate
{
get
{
if (_properties is VideoEncodingProperties)
{
if ((_properties as VideoEncodingProperties).FrameRate.Denominator != 0)
{
return (_properties as VideoEncodingProperties).FrameRate.Numerator /
(_properties as VideoEncodingProperties).FrameRate.Denominator;
}
}

return 0;
}
}
public double AspectRatio
{
get { return Math.Round((Height != 0) ? (Width / (double)Height) : double.NaN, 2); }
}

public IMediaEncodingProperties EncodingProperties
{
get { return _properties; }
}

public string GetFriendlyName(bool showFrameRate = true)
{
if (_properties is ImageEncodingProperties ||
!showFrameRate)
{
return Width + "x" + Height + " [" + AspectRatio + "] " + _properties.Subtype;
}
else if (_properties is VideoEncodingProperties)
{
return Width + "x" + Height + " [" + AspectRatio + "] " + FrameRate + "FPS " + _properties.Subtype;
}

return String.Empty;
}
}

Determine if the preview and capture streams are independent


On some devices, the same hardware pin is used for both preview and capture streams. On these devices, setting
the encoding properties of one will also set the other. On devices that use different hardware pins for capture and
preview, the properties can be set for each stream independently. Use the following code to determine if the
preview and capture streams are independent. You should adjust your UI to enable or disable the setting of the
streams independently based on the result of this test.

private void CheckIfStreamsAreIdentical()
{
if (_mediaCapture.MediaCaptureSettings.VideoDeviceCharacteristic == VideoDeviceCharacteristic.AllStreamsIdentical ||
_mediaCapture.MediaCaptureSettings.VideoDeviceCharacteristic == VideoDeviceCharacteristic.PreviewRecordStreamsIdentical)
{
ShowMessageToUser("Preview and video streams for this device are identical. Changing one will affect the other");
}
}

Get a list of available stream properties


Get a list of the available stream properties for a capture device by getting the VideoDeviceController for your
app's MediaCapture object and then calling GetAvailableMediaStreamProperties and passing in one of the
MediaStreamType values, VideoPreview, VideoRecord, or Photo. In this example, Linq syntax is used to create
a list of StreamPropertiesHelper objects, defined previously in this article, for each of the
IMediaEncodingProperties values returned from GetAvailableMediaStreamProperties. This example first
uses Linq extension methods to order the returned properties based first on resolution and then on frame rate.
If your app has specific resolution or frame rate requirements, you can select a set of media encoding properties
programmatically. A typical camera app will instead expose the list of available properties in the UI and allow the
user to select their desired settings. A ComboBoxItem is created for each item in the list of
StreamPropertiesHelper objects in the list. The content is set to the friendly name returned by the helper class
and the tag is set to the helper class itself so it can be used later to retrieve the associated encoding properties. Each
ComboBoxItem is then added to the ComboBox passed into the method.

private void PopulateStreamPropertiesUI(MediaStreamType streamType, ComboBox comboBox, bool showFrameRate = true)
{
// Query all properties of the specified stream type
IEnumerable<StreamPropertiesHelper> allStreamProperties =
_mediaCapture.VideoDeviceController.GetAvailableMediaStreamProperties(streamType).Select(x => new StreamPropertiesHelper(x));

// Order them by resolution then frame rate
allStreamProperties = allStreamProperties.OrderByDescending(x => x.Height * x.Width).ThenByDescending(x => x.FrameRate);

// Populate the combo box with the entries
foreach (var property in allStreamProperties)
{
ComboBoxItem comboBoxItem = new ComboBoxItem();
comboBoxItem.Content = property.GetFriendlyName(showFrameRate);
comboBoxItem.Tag = property;
comboBox.Items.Add(comboBoxItem);
}
}

Set the desired stream properties


Tell the video device controller to use your desired encoding properties by calling
SetMediaStreamPropertiesAsync, passing in the MediaStreamType value indicating whether the photo, video,
or preview properties should be set. This example sets the requested encoding properties when the user selects an
item in one of the ComboBox objects populated with the PopulateStreamPropertiesUI helper method.

private async void PreviewSettings_Changed(object sender, RoutedEventArgs e)
{
if (_isPreviewing)
{
var selectedItem = (sender as ComboBox).SelectedItem as ComboBoxItem;
var encodingProperties = (selectedItem.Tag as StreamPropertiesHelper).EncodingProperties;
await _mediaCapture.VideoDeviceController.SetMediaStreamPropertiesAsync(MediaStreamType.VideoPreview, encodingProperties);
}
}

private async void PhotoSettings_Changed(object sender, RoutedEventArgs e)
{
if (_isPreviewing)
{
var selectedItem = (sender as ComboBox).SelectedItem as ComboBoxItem;
var encodingProperties = (selectedItem.Tag as StreamPropertiesHelper).EncodingProperties;
await _mediaCapture.VideoDeviceController.SetMediaStreamPropertiesAsync(MediaStreamType.Photo, encodingProperties);
}
}

private async void VideoSettings_Changed(object sender, RoutedEventArgs e)
{
if (_isPreviewing)
{
var selectedItem = (sender as ComboBox).SelectedItem as ComboBoxItem;
var encodingProperties = (selectedItem.Tag as StreamPropertiesHelper).EncodingProperties;
await _mediaCapture.VideoDeviceController.SetMediaStreamPropertiesAsync(MediaStreamType.VideoRecord, encodingProperties);
}
}

Match the aspect ratio of the preview and capture streams

A typical camera app will provide UI for the user to select the video or photo capture resolution but will
programmatically set the preview resolution. There are a few different strategies for selecting the best preview
stream resolution for your app:
Select the highest available preview resolution, letting the UI framework perform any necessary scaling of
the preview.
Select the preview resolution closest to the capture resolution so that the preview displays the closest
representation to the final captured media.
Select the preview resolution closest to the size of the CaptureElement so that no more pixels than
necessary are going through the preview stream pipeline (a sketch of this approach follows this list).
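
The following is a minimal sketch of the third strategy, using the StreamPropertiesHelper class defined earlier in
this article. It assumes a CaptureElement named PreviewControl that has already been measured by the layout
system, and it selects the available preview format whose height is closest to the control's rendered height.

private async Task SetPreviewToClosestResolutionAsync()
{
    // Wrap the available preview formats in the helper class.
    var previewProperties = _mediaCapture.VideoDeviceController
        .GetAvailableMediaStreamProperties(MediaStreamType.VideoPreview)
        .Select(x => new StreamPropertiesHelper(x));

    // Pick the format whose height is closest to the CaptureElement's rendered height.
    var closest = previewProperties
        .OrderBy(x => Math.Abs((double)x.Height - PreviewControl.ActualHeight))
        .FirstOrDefault();

    if (closest != null)
    {
        await _mediaCapture.VideoDeviceController.SetMediaStreamPropertiesAsync(
            MediaStreamType.VideoPreview, closest.EncodingProperties);
    }
}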
Important
It is possible, on some devices, to set a different aspect ratio for the camera's preview stream and capture stream.
Frame cropping caused by this mismatch can result in content being present in the captured media that was not
visible in the preview which can result in a negative user experience. It is strongly recommended that you use the
same aspect ratio, within a small tolerance window, for the preview and capture streams. It is fine to have entirely
different resolutions enabled for capture and preview as long as the aspect ratios match closely.
To ensure that the photo or video capture streams match the aspect ratio of the preview stream, this example calls
VideoDeviceController.GetMediaStreamProperties and passes in the VideoPreview enum value to request
the current stream properties for the preview stream. Next a small aspect ratio tolerance window is defined so that
we can include aspect ratios that are not exactly the same as the preview stream, as long as they are close. Next, a
Linq extension method is used to select just the StreamPropertiesHelper objects where the aspect ratio is within
the defined tolerance range of the preview stream.

private void MatchPreviewAspectRatio(MediaStreamType streamType, ComboBox comboBox)
{
    // Query all properties of the specified stream type
    IEnumerable<StreamPropertiesHelper> allVideoProperties =
        _mediaCapture.VideoDeviceController.GetAvailableMediaStreamProperties(streamType).Select(x => new StreamPropertiesHelper(x));

    // Query the current preview settings
    StreamPropertiesHelper previewProperties =
        new StreamPropertiesHelper(_mediaCapture.VideoDeviceController.GetMediaStreamProperties(MediaStreamType.VideoPreview));

    // Get all formats that have the same-ish aspect ratio as the preview
    // Allow for some tolerance in the aspect ratio comparison
    const double ASPECT_RATIO_TOLERANCE = 0.015;
    var matchingFormats = allVideoProperties.Where(x => Math.Abs(x.AspectRatio - previewProperties.AspectRatio) < ASPECT_RATIO_TOLERANCE);

    // Order them by resolution then frame rate
    allVideoProperties = matchingFormats.OrderByDescending(x => x.Height * x.Width).ThenByDescending(x => x.FrameRate);

    // Clear out old entries and populate the video combo box with new matching entries
    comboBox.Items.Clear();
    foreach (var property in allVideoProperties)
    {
        ComboBoxItem comboBoxItem = new ComboBoxItem();
        comboBoxItem.Content = property.GetFriendlyName();
        comboBoxItem.Tag = property;
        comboBox.Items.Add(comboBoxItem);
    }
    comboBox.SelectedIndex = -1;
}
High dynamic range (HDR) and low-light photo capture
This article shows you how to use the AdvancedPhotoCapture class to capture high dynamic range (HDR)
photos. This API also allows you to obtain a reference frame from the HDR capture before the processing of the
final image is complete.
Other articles related to HDR capture include:
You can use the SceneAnalysisEffect to allow the system to evaluate the content of the media capture
preview stream to determine if HDR processing would improve the capture result. For more information,
see Scene analysis for MediaCapture.
Use the HdrVideoControl to capture video using the Windows built-in HDR processing algorithm. For
more information, see Capture device controls for video capture.
You can use the VariablePhotoSequenceCapture to capture a sequence of photos, each with different
capture settings, and implement your own HDR or other processing algorithm. For more information, see
Variable photo sequence.
Starting with Windows 10, version 1607, AdvancedPhotoCapture can be used to capture photos using a built-in
algorithm that enhances the quality of photos captured in low-light settings.
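If your app also targets earlier releases, you can guard the low-light code path with an adaptive API check. This is a minimal sketch; Windows 10, version 1607 corresponds to version 3 of the universal API contract:

// Check for the Windows 10, version 1607 APIs before using low-light capture.
bool lowLightApiPresent = Windows.Foundation.Metadata.ApiInformation.IsApiContractPresent(
    "Windows.Foundation.UniversalApiContract", 3);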

NOTE
Concurrently recording video and photo capture using AdvancedPhotoCapture is not supported.

NOTE
If the FlashControl.Enabled property is set to true, it will override the AdvancedPhotoCapture settings and cause a
normal photo to be captured with flash. If FlashControl.Auto is set to true, the AdvancedPhotoCapture will be used as
configured and the flash will not be used.

NOTE
This article builds on concepts and code discussed in Basic photo, video, and audio capture with MediaCapture, which
describes the steps for implementing basic photo and video capture. We recommend that you familiarize yourself with the
basic media capture pattern in that article before moving on to more advanced capture scenarios. The code in this article
assumes that your app already has an instance of MediaCapture that has been properly initialized.

There is a Universal Windows Sample demonstrating the use of the AdvancedPhotoCapture class that you can
use to see the API used in context or as a starting point for your own app. For more information, see the Camera
Advanced Capture sample.

Advanced photo capture namespaces


The code examples in this article use APIs in the following namespaces in addition to the namespaces required for
basic media capture.

using Windows.Media.Core;
using Windows.Media.Devices;

HDR photo capture


Determine if HDR photo capture is supported on the current device
The HDR capture technique described in this article is performed using the AdvancedPhotoCapture object. Not
all devices support HDR capture with AdvancedPhotoCapture. Determine if the device on which your app is
currently running supports the technique by getting the MediaCapture object's VideoDeviceController and
then getting the AdvancedPhotoControl property. Check the video device controller's SupportedModes
collection to see if it includes AdvancedPhotoMode.Hdr. If it does, HDR capture using AdvancedPhotoCapture
is supported.

bool _hdrSupported;
private void IsHdrPhotoSupported()
{
    _hdrSupported = _mediaCapture.VideoDeviceController.AdvancedPhotoControl.SupportedModes.Contains(Windows.Media.Devices.AdvancedPhotoMode.Hdr);
}

Configure and prepare the AdvancedPhotoCapture object


Because you will need to access the AdvancedPhotoCapture instance from multiple places within your code, you
should declare a member variable to hold the object.

private AdvancedPhotoCapture _advancedCapture;

In your app, after you have initialized the MediaCapture object, create an AdvancedPhotoCaptureSettings
object and set the mode to AdvancedPhotoMode.Hdr. Call the AdvancedPhotoControl object's Configure
method, passing in the AdvancedPhotoCaptureSettings object you created.
Call the MediaCapture object's PrepareAdvancedPhotoCaptureAsync, passing in an
ImageEncodingProperties object specifying the type of encoding the capture should use. The
ImageEncodingProperties class provides static methods for creating the image encodings that are supported by
MediaCapture.
PrepareAdvancedPhotoCaptureAsync returns the AdvancedPhotoCapture object you will use to initiate
photo capture. You can use this object to register handlers for the OptionalReferencePhotoCaptured and
AllPhotosCaptured events, which are discussed later in this article.
if (_hdrSupported == false) return;

// Choose HDR mode
var settings = new AdvancedPhotoCaptureSettings { Mode = AdvancedPhotoMode.Hdr };

// Configure the mode
_mediaCapture.VideoDeviceController.AdvancedPhotoControl.Configure(settings);

// Prepare for an advanced capture
_advancedCapture =
    await _mediaCapture.PrepareAdvancedPhotoCaptureAsync(ImageEncodingProperties.CreateUncompressed(MediaPixelFormat.Nv12));

// Register for events published by the AdvancedCapture
_advancedCapture.AllPhotosCaptured += AdvancedCapture_AllPhotosCaptured;
_advancedCapture.OptionalReferencePhotoCaptured += AdvancedCapture_OptionalReferencePhotoCaptured;

Capture an HDR photo


Capture an HDR photo by calling the AdvancedPhotoCapture object's CaptureAsync method. This method
returns an AdvancedCapturedPhoto object that provides the captured photo in its Frame property.

try
{
    // Start capture
    AdvancedCapturedPhoto advancedCapturedPhoto = await _advancedCapture.CaptureAsync();

    using (var frame = advancedCapturedPhoto.Frame)
    {
        // Read the current orientation of the camera and the capture time
        var photoOrientation = CameraRotationHelper.ConvertSimpleOrientationToPhotoOrientation(
            _rotationHelper.GetCameraCaptureOrientation());
        var fileName = String.Format("SimplePhoto_{0}_HDR.jpg", DateTime.Now.ToString("HHmmss"));
        await SaveCapturedFrameAsync(frame, fileName, photoOrientation);
    }
}
catch (Exception ex)
{
    Debug.WriteLine("Exception when taking an HDR photo: {0}", ex.ToString());
}

Most photography apps will want to encode a captured photo's rotation into the image file so that it can be
displayed correctly by other apps and devices. This example shows the use of the helper class
CameraRotationHelper to calculate the proper orientation for the file. This class is described and listed in full in
the article Handle device orientation with MediaCapture.
The SaveCapturedFrameAsync helper method, which saves the image to disk, is discussed later in this article.
Get optional reference frame
The HDR process captures multiple frames and then composites them into a single image after all of the frames
have been captured. You can get access to a frame after it is captured but before the entire HDR process is
complete by handling the OptionalReferencePhotoCaptured event. You don't need to do this if you are only
interested in the final HDR photo result.

IMPORTANT
OptionalReferencePhotoCaptured is not raised on devices that support hardware HDR and therefore do not generate
reference frames. Your app should handle the case where this event is not raised.

Because the reference frame arrives out of context of the call to CaptureAsync, a mechanism is provided to pass
context information to the OptionalReferencePhotoCaptured handler. First, define a class that will contain your
context information. The name and contents of this class are up to you. This example defines a class that has
members to track the file name and camera orientation of the capture.

public class MyAdvancedCaptureContextObject
{
    public string CaptureFileName;
    public PhotoOrientation CaptureOrientation;
}

Create a new instance of your context object, populate its members, and then pass it into the overload of
CaptureAsync that accepts an object as a parameter.

// Read the current orientation of the camera and the capture time
var photoOrientation = CameraRotationHelper.ConvertSimpleOrientationToPhotoOrientation(
    _rotationHelper.GetCameraCaptureOrientation());
var fileName = String.Format("SimplePhoto_{0}_HDR.jpg", DateTime.Now.ToString("HHmmss"));

// Create a context object, to identify the capture in the OptionalReferencePhotoCaptured event
var context = new MyAdvancedCaptureContextObject()
{
    CaptureFileName = fileName,
    CaptureOrientation = photoOrientation
};

// Start capture, and pass the context object
AdvancedCapturedPhoto advancedCapturedPhoto = await _advancedCapture.CaptureAsync(context);

In the OptionalReferencePhotoCaptured event handler, cast the Context property of the
OptionalReferencePhotoCapturedEventArgs object to your context object class. This example modifies the file
name to distinguish the reference frame image from the final HDR image and then calls the
SaveCapturedFrameAsync helper method to save the image.

private async void AdvancedCapture_OptionalReferencePhotoCaptured(AdvancedPhotoCapture sender,
    OptionalReferencePhotoCapturedEventArgs args)
{
    // Retrieve the context (i.e. what capture does this belong to?)
    var context = args.Context as MyAdvancedCaptureContextObject;

    // Remove "_HDR" from the name of the capture to create the name of the reference
    var referenceName = context.CaptureFileName.Replace("_HDR", "");

    using (var frame = args.Frame)
    {
        await SaveCapturedFrameAsync(frame, referenceName, context.CaptureOrientation);
    }
}

Receive a notification when all frames have been captured


The HDR photo capture has two steps. First, multiple frames are captured, and then the frames are processed into
the final HDR image. You can't initiate another capture while the source HDR frames are still being captured, but
you can initiate a capture after all of the frames have been captured but before the HDR post-processing is
complete. The AllPhotosCaptured event is raised when the HDR captures are complete, letting you know that you
can initiate another capture. A typical scenario is to disable your UI's capture button when HDR capture begins and
then reenable it when AllPhotosCaptured is raised.
private void AdvancedCapture_AllPhotosCaptured(AdvancedPhotoCapture sender, object args)
{
    // Update UI to enable capture button
}
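Because the event may be raised on a thread other than the UI thread, UI updates in this handler should be dispatched back to the UI thread. A minimal sketch of a filled-in handler, assuming a hypothetical XAML Button named CaptureButton:

private async void AdvancedCapture_AllPhotosCaptured(AdvancedPhotoCapture sender, object args)
{
    // Re-enable the capture button on the UI thread.
    await Dispatcher.RunAsync(Windows.UI.Core.CoreDispatcherPriority.Normal, () =>
    {
        CaptureButton.IsEnabled = true;
    });
}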

Clean up the AdvancedPhotoCapture object


When your app is done capturing, before disposing of the MediaCapture object, you should shut down the
AdvancedPhotoCapture object by calling FinishAsync and setting your member variable to null.

await _advancedCapture.FinishAsync();
_advancedCapture = null;

Low-light photo capture


When you use the low-light feature of the AdvancedPhotoCapture class, the system will evaluate the current
scene and, if needed, apply an algorithm to compensate for low-light conditions. If the system determines that the
algorithm is not needed, a regular capture is performed instead.
Before using low-light photo capture, determine if the device on which your app is currently running supports the
technique by getting the MediaCapture object's VideoDeviceController and then getting the
AdvancedPhotoControl property. Check the video device controller's SupportedModes collection to see if it
includes AdvancedPhotoMode.LowLight. If it does, low-light capture using AdvancedPhotoCapture is
supported.

bool _lowLightSupported;

_lowLightSupported =
    _mediaCapture.VideoDeviceController.AdvancedPhotoControl.SupportedModes.Contains(Windows.Media.Devices.AdvancedPhotoMode.LowLight);

Next, declare a member variable to store the AdvancedPhotoCapture object.

private AdvancedPhotoCapture _advancedCapture;

In your app, after you have initialized the MediaCapture object, create an AdvancedPhotoCaptureSettings
object and set the mode to AdvancedPhotoMode.LowLight. Call the AdvancedPhotoControl object's
Configure method, passing in the AdvancedPhotoCaptureSettings object you created.
Call the MediaCapture object's PrepareAdvancedPhotoCaptureAsync, passing in an
ImageEncodingProperties object specifying the type of encoding the capture should use.

if (_lowLightSupported == false) return;

// Choose LowLight mode
var settings = new AdvancedPhotoCaptureSettings { Mode = AdvancedPhotoMode.LowLight };
_mediaCapture.VideoDeviceController.AdvancedPhotoControl.Configure(settings);

// Prepare for an advanced capture
_advancedCapture =
    await _mediaCapture.PrepareAdvancedPhotoCaptureAsync(ImageEncodingProperties.CreateUncompressed(MediaPixelFormat.Nv12));

To capture a photo, call CaptureAsync.


AdvancedCapturedPhoto advancedCapturedPhoto = await _advancedCapture.CaptureAsync();
var photoOrientation = ConvertOrientationToPhotoOrientation(GetCameraOrientation());
var fileName = String.Format("SimplePhoto_{0}_LowLight.jpg", DateTime.Now.ToString("HHmmss"));
await SaveCapturedFrameAsync(advancedCapturedPhoto.Frame, fileName, photoOrientation);

Like the HDR example above, this example uses a helper class called CameraRotationHelper to determine the
rotation value that should be encoded into the image so that it can be displayed properly by other apps and
devices. This class is described and listed in full in the article Handle device orientation with MediaCapture.
The SaveCapturedFrameAsync helper method, which saves the image to disk, is discussed later in this article.
You can capture multiple low-light photos without reconfiguring the AdvancedPhotoCapture object, but when
you are done capturing, you should call FinishAsync to clean up the object and associated resources.

await _advancedCapture.FinishAsync();
_advancedCapture = null;

Working with AdvancedCapturedPhoto objects


AdvancedPhotoCapture.CaptureAsync returns an AdvancedCapturedPhoto object representing the captured
photo. This object exposes the Frame property which returns a CapturedFrame object representing the image.
The OptionalReferencePhotoCaptured event also provides a CapturedFrame object in its event args. After you
get an object of this type, there are a number of things you can do with it, including creating a SoftwareBitmap or
saving the image to a file.

Get a SoftwareBitmap from a CapturedFrame


You can get a SoftwareBitmap from a CapturedFrame object simply by accessing its SoftwareBitmap
property. However, most encoding formats do not support SoftwareBitmap with AdvancedPhotoCapture,
so you should check that the property is not null before using it.

SoftwareBitmap bitmap;
if (advancedCapturedPhoto.Frame.SoftwareBitmap != null)
{
bitmap = advancedCapturedPhoto.Frame.SoftwareBitmap;
}

In the current release, the only encoding format that supports SoftwareBitmap for AdvancedPhotoCapture is
uncompressed NV12. So, if you want to use this feature, you must specify that encoding when you call
PrepareAdvancedPhotoCaptureAsync.

_advancedCapture =
    await _mediaCapture.PrepareAdvancedPhotoCaptureAsync(ImageEncodingProperties.CreateUncompressed(MediaPixelFormat.Nv12));

Of course, you can always save the image to a file and then load the file into a SoftwareBitmap in a separate step.
For more information about working with SoftwareBitmap, see Create, edit, and save bitmap images.
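As a minimal sketch of that two-step approach, assuming file is the StorageFile returned by the SaveCapturedFrameAsync helper shown later in this article:

// Load the saved image back into a SoftwareBitmap.
// Requires Windows.Storage, Windows.Storage.Streams, and Windows.Graphics.Imaging.
using (IRandomAccessStream stream = await file.OpenAsync(FileAccessMode.Read))
{
    BitmapDecoder decoder = await BitmapDecoder.CreateAsync(stream);
    SoftwareBitmap bitmap = await decoder.GetSoftwareBitmapAsync();
}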

Save a CapturedFrame to a file


The CapturedFrame class implements the IInputStream interface, so it can be used as the input to a
BitmapDecoder, and then a BitmapEncoder can be used to write the image data to disk.
In the following example, a new folder in the user's pictures library is created, and a file is created within this folder.
Note that your app needs to include the Pictures Library capability in its app manifest file in order to access
this directory. A file stream is then opened to the specified file. Next, BitmapDecoder.CreateAsync is called to
create the decoder from the CapturedFrame. Then CreateForTranscodingAsync creates an encoder from the file
stream and the decoder.
The next steps encode the orientation of the photo into the image file by using the BitmapProperties of the
encoder. For more information about handling orientation when capturing images, see Handle device
orientation with MediaCapture.
Finally, the image is written to the file with a call to FlushAsync.
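For reference, the Pictures Library capability is declared in your Package.appxmanifest; a minimal sketch, assuming the default uap namespace prefix:

<Capabilities>
  <uap:Capability Name="picturesLibrary" />
</Capabilities>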

private static async Task<StorageFile> SaveCapturedFrameAsync(CapturedFrame frame, string fileName, PhotoOrientation photoOrientation)
{
    var folder = await KnownFolders.PicturesLibrary.CreateFolderAsync("MyApp", CreationCollisionOption.OpenIfExists);
    var file = await folder.CreateFileAsync(fileName, CreationCollisionOption.GenerateUniqueName);

    using (var inputStream = frame)
    {
        using (var fileStream = await file.OpenAsync(FileAccessMode.ReadWrite))
        {
            var decoder = await BitmapDecoder.CreateAsync(inputStream);
            var encoder = await BitmapEncoder.CreateForTranscodingAsync(fileStream, decoder);
            var properties = new BitmapPropertySet {
                { "System.Photo.Orientation", new BitmapTypedValue(photoOrientation, PropertyType.UInt16) } };
            await encoder.BitmapProperties.SetPropertiesAsync(properties);
            await encoder.FlushAsync();
        }
    }
    return file;
}

Related topics
Camera
Basic photo, video, and audio capture with MediaCapture
Manual camera controls for photo and video capture
This article shows you how to use manual device controls to enable enhanced photo and video capture scenarios
including optical image stabilization and smooth zoom.
The controls discussed in this article are all added to your app using the same pattern. First, check to see if the
control is supported on the current device on which your app is running. If the control is supported, set the desired
mode for the control. Typically, if a particular control is unsupported on the current device, you should disable or
hide the UI element that allows the user to enable the feature.
The code in this article was adapted from the Camera Manual Controls SDK sample. You can download the sample
to see the code used in context or to use the sample as a starting point for your own app.

NOTE
This article builds on concepts and code discussed in Basic photo, video, and audio capture with MediaCapture, which
describes the steps for implementing basic photo and video capture. We recommend that you familiarize yourself with the
basic media capture pattern in that article before moving on to more advanced capture scenarios. The code in this article
assumes that your app already has an instance of MediaCapture that has been properly initialized.

All of the device control APIs discussed in this article are members of the Windows.Media.Devices namespace.

using Windows.Media.Devices;

Exposure
The ExposureControl allows you to set the shutter speed used during photo or video capture.
This example uses a Slider control to adjust the current exposure value and a checkbox to toggle automatic
exposure adjustment.

<Slider Name="ExposureSlider" ValueChanged="ExposureSlider_ValueChanged"/>


<TextBlock Name="ExposureTextBlock" Text="{Binding ElementName=ExposureSlider,Path=Value}"/>
<CheckBox Name="ExposureAutoCheckBox" Content="Auto" Checked="ExposureCheckBox_CheckedChanged"
Unchecked="ExposureCheckBox_CheckedChanged"/>

Check to see if the current capture device supports the ExposureControl by checking the Supported property. If
the control is supported, you can show and enable the UI for this feature. Set the checked state of the checkbox to
the value of the Auto property to indicate whether automatic exposure adjustment is currently active.
The exposure value must be within the range supported by the device and must be an increment of the supported
step size. Get the supported values for the current device by checking the Min, Max, and Step properties, which are
used to set the corresponding properties of the slider control.
Set the slider control's value to the current value of the ExposureControl after unregistering the ValueChanged
event handler so that the event is not triggered when the value is set.
var exposureControl = _mediaCapture.VideoDeviceController.ExposureControl;

if (exposureControl.Supported)
{
ExposureAutoCheckBox.Visibility = Visibility.Visible;
ExposureSlider.Visibility = Visibility.Visible;

ExposureAutoCheckBox.IsChecked = exposureControl.Auto;

ExposureSlider.Minimum = exposureControl.Min.Ticks;
ExposureSlider.Maximum = exposureControl.Max.Ticks;
ExposureSlider.StepFrequency = exposureControl.Step.Ticks;

ExposureSlider.ValueChanged -= ExposureSlider_ValueChanged;
var value = exposureControl.Value;
ExposureSlider.Value = value.Ticks;
ExposureSlider.ValueChanged += ExposureSlider_ValueChanged;
}
else
{
ExposureAutoCheckBox.Visibility = Visibility.Collapsed;
ExposureSlider.Visibility = Visibility.Collapsed;
}

In the ValueChanged event handler, get the current value of the control and then set the exposure value by calling
SetValueAsync.

private async void ExposureSlider_ValueChanged(object sender, Windows.UI.Xaml.Controls.Primitives.RangeBaseValueChangedEventArgs e)
{
    var value = TimeSpan.FromTicks((long)(sender as Slider).Value);
    await _mediaCapture.VideoDeviceController.ExposureControl.SetValueAsync(value);
}

In the CheckedChanged event handler of the auto exposure checkbox, turn automatic exposure adjustment on or
off by calling SetAutoAsync and passing in a boolean value.

private async void ExposureCheckBox_CheckedChanged(object sender, RoutedEventArgs e)
{
    if (!_isPreviewing)
    {
        // Auto exposure only supported while preview stream is running.
        return;
    }

    var autoExposure = ((sender as CheckBox).IsChecked == true);
    await _mediaCapture.VideoDeviceController.ExposureControl.SetAutoAsync(autoExposure);
}

IMPORTANT
Automatic exposure mode is only supported while the preview stream is running. Check to make sure that the preview
stream is running before turning on automatic exposure.

Exposure compensation
The ExposureCompensationControl allows you to set the exposure compensation used during photo or video
capture.
This example uses a Slider control to adjust the current exposure compensation value.
<Slider Name="EvSlider" ValueChanged="EvSlider_ValueChanged"/>
<TextBlock Text="{Binding ElementName=EvSlider,Path=Value}" Name="EvTextBlock"/>

Check to see if the current capture device supports the ExposureCompensationControl by checking the
Supported property. If the control is supported, you can show and enable the UI for this feature.
The exposure compensation value must be within the range supported by the device and must be an increment of
the supported step size. Get the supported values for the current device by checking the Min, Max, and Step
properties, which are used to set the corresponding properties of the slider control.
Set the slider control's value to the current value of the ExposureCompensationControl after unregistering the
ValueChanged event handler so that the event is not triggered when the value is set.

var exposureCompensationControl = _mediaCapture.VideoDeviceController.ExposureCompensationControl;

if (exposureCompensationControl.Supported)
{
EvSlider.Visibility = Visibility.Visible;
EvSlider.Minimum = exposureCompensationControl.Min;
EvSlider.Maximum = exposureCompensationControl.Max;
EvSlider.StepFrequency = exposureCompensationControl.Step;

EvSlider.ValueChanged -= EvSlider_ValueChanged;
EvSlider.Value = exposureCompensationControl.Value;
EvSlider.ValueChanged += EvSlider_ValueChanged;
}
else
{
EvSlider.Visibility = Visibility.Collapsed;
}

In the ValueChanged event handler, get the current value of the control and then set the exposure compensation
value by calling SetValueAsync.

private async void EvSlider_ValueChanged(object sender, Windows.UI.Xaml.Controls.Primitives.RangeBaseValueChangedEventArgs e)
{
    var value = (sender as Slider).Value;
    await _mediaCapture.VideoDeviceController.ExposureCompensationControl.SetValueAsync((float)value);
}

Flash
The FlashControl allows you to enable or disable the flash or to enable automatic flash, where the system
dynamically determines whether to use the flash. This control also allows you to enable automatic red eye
reduction on devices that support it. These settings all apply to capturing photos. The TorchControl is a separate
control for turning the torch on or off for video capture.
This example uses a set of radio buttons to allow the user to switch between on, off, and auto flash settings. A
checkbox is also provided to allow toggling of red eye reduction and the video torch.

<RadioButton Name="FlashOnRadioButton" Content="On" Checked="FlashOnRadioButton_Checked"/>


<RadioButton Name="FlashAutoRadioButton" Content="Auto" Checked="FlashAutoRadioButton_Checked"/>
<RadioButton Name="FlashOffRadioButton" Content="Off" Checked="FlashOffRadioButton_Checked"/>
<CheckBox Name="RedEyeFlashCheckBox" Content="Red Eye" Visibility="Collapsed" Checked="RedEyeFlashCheckBox_CheckedChanged"
Unchecked="RedEyeFlashCheckBox_CheckedChanged"/>
<CheckBox Name="TorchCheckBox" Content="Video Light" Visibility="Collapsed" Checked="TorchCheckBox_CheckedChanged"
Unchecked="TorchCheckBox_CheckedChanged"/>
Check to see if the current capture device supports the FlashControl by checking the Supported property. If the
control is supported, you can show and enable the UI for this feature. If the FlashControl is supported, automatic
red eye reduction may or may not be supported, so check the RedEyeReductionSupported property before
enabling the UI. Because the TorchControl is separate from the flash control, you must also check its Supported
property before using it.
In the Checked event handler for each of the flash radio buttons, enable or disable the appropriate corresponding
flash setting. Note that to set the flash to always be used, you must set the Enabled property to true and the Auto
property to false.

var flashControl = _mediaCapture.VideoDeviceController.FlashControl;

if (flashControl.Supported)
{
FlashAutoRadioButton.Visibility = Visibility.Visible;
FlashOnRadioButton.Visibility = Visibility.Visible;
FlashOffRadioButton.Visibility = Visibility.Visible;

FlashAutoRadioButton.IsChecked = true;

if (flashControl.RedEyeReductionSupported)
{
RedEyeFlashCheckBox.Visibility = Visibility.Visible;
}

// Video light is not strictly part of flash, but users might expect to find it there
if (_mediaCapture.VideoDeviceController.TorchControl.Supported)
{
TorchCheckBox.Visibility = Visibility.Visible;
}
}
else
{
FlashAutoRadioButton.Visibility = Visibility.Collapsed;
FlashOnRadioButton.Visibility = Visibility.Collapsed;
FlashOffRadioButton.Visibility = Visibility.Collapsed;
}

private void FlashOnRadioButton_Checked(object sender, RoutedEventArgs e)
{
    _mediaCapture.VideoDeviceController.FlashControl.Enabled = true;
    _mediaCapture.VideoDeviceController.FlashControl.Auto = false;
}

private void FlashAutoRadioButton_Checked(object sender, RoutedEventArgs e)
{
    _mediaCapture.VideoDeviceController.FlashControl.Enabled = true;
    _mediaCapture.VideoDeviceController.FlashControl.Auto = true;
}

private void FlashOffRadioButton_Checked(object sender, RoutedEventArgs e)
{
    _mediaCapture.VideoDeviceController.FlashControl.Enabled = false;
}

In the handler for the red eye reduction checkbox, set the RedEyeReduction property to the appropriate value.
private void RedEyeFlashCheckBox_CheckedChanged(object sender, RoutedEventArgs e)
{
_mediaCapture.VideoDeviceController.FlashControl.RedEyeReduction = (RedEyeFlashCheckBox.IsChecked == true);
}

Finally, in the handler for the video torch checkbox, set the Enabled property to the appropriate value.

private void TorchCheckBox_CheckedChanged(object sender, RoutedEventArgs e)
{
    _mediaCapture.VideoDeviceController.TorchControl.Enabled = (TorchCheckBox.IsChecked == true);

    if (!(_isPreviewing && _isRecording))
    {
        System.Diagnostics.Debug.WriteLine("Torch may not emit light if preview and video capture are not running.");
    }
}

NOTE
On some devices the torch will not emit light, even if TorchControl.Enabled is set to true, unless the device has a preview
stream running and is actively capturing video. The recommended order of operations is to turn on the video preview, turn
on the torch by setting Enabled to true, and then initiate video capture. On some devices the torch will light up after the
preview is started. On other devices, the torch may not light up until video capture is started.
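A minimal sketch of that recommended order, assuming hypothetical StartPreviewAsync and StartRecordingAsync helpers that wrap the basic capture pattern from the article mentioned above:

await StartPreviewAsync();                                        // 1. Start the preview stream
_mediaCapture.VideoDeviceController.TorchControl.Enabled = true;  // 2. Turn on the torch
await StartRecordingAsync();                                      // 3. Begin video capture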

Focus
The FocusControl object supports three commonly used methods for adjusting the focus of the camera:
continuous autofocus, tap to focus, and manual focus. A camera app may support all three of these methods,
but for readability, this article discusses each technique separately. This section also discusses how to enable the
focus assist light.
Continuous autofocus
Enabling continuous autofocus instructs the camera to adjust the focus dynamically to try to keep the subject of the
photo or video in focus. This example uses a radio button to toggle continuous autofocus on and off.

<RadioButton Content="CAF" Name="CafFocusRadioButton" Checked="CafFocusRadioButton_Checked"/>

Check to see if the current capture device supports the FocusControl by checking the Supported property. Next,
determine if continuous autofocus is supported by checking the SupportedFocusModes list to see if it contains
the value FocusMode.Continuous, and if so, show the continuous autofocus radio button.

var focusControl = _mediaCapture.VideoDeviceController.FocusControl;

if (focusControl.Supported)
{
CafFocusRadioButton.Visibility = focusControl.SupportedFocusModes.Contains(FocusMode.Continuous)
? Visibility.Visible : Visibility.Collapsed;
}
else
{
CafFocusRadioButton.Visibility = Visibility.Collapsed;
}

In the Checked event handler for the continuous autofocus radio button, use the
VideoDeviceController.FocusControl property to get an instance of the control. Call UnlockAsync to unlock the
control in case your app has previously called LockAsync to enable one of the other focus modes.
Create a new FocusSettings object and set the Mode property to Continuous. Set the AutoFocusRange
property to a value appropriate for your app scenario or selected by the user from your UI. Pass your
FocusSettings object into the Configure method, and then call FocusAsync to initiate continuous autofocus.

private async void CafFocusRadioButton_Checked(object sender, RoutedEventArgs e)
{
    if (!_isPreviewing)
    {
        // Autofocus only supported while preview stream is running.
        return;
    }

    var focusControl = _mediaCapture.VideoDeviceController.FocusControl;
    await focusControl.UnlockAsync();
    var settings = new FocusSettings { Mode = FocusMode.Continuous, AutoFocusRange = AutoFocusRange.FullRange };
    focusControl.Configure(settings);
    await focusControl.FocusAsync();
}

IMPORTANT
Autofocus mode is only supported while the preview stream is running. Check to make sure that the preview stream is
running before turning on continuous autofocus.

Tap to focus
The tap-to-focus technique uses the FocusControl and the RegionsOfInterestControl to specify a subregion of
the capture frame where the capture device should focus. The region of focus is determined by the user tapping on
the screen displaying the preview stream.
This example uses a radio button to enable and disable tap-to-focus mode.

<RadioButton Content="Tap" Name="TapFocusRadioButton" Checked="TapFocusRadioButton_Checked"/>

Check to see if the current capture device supports the FocusControl by checking the Supported property. The
RegionsOfInterestControl must be supported, and must support at least one region, in order to use this
technique. Check the AutoFocusSupported and MaxRegions properties to determine whether to show or hide
the radio button for tap-to-focus.

var focusControl = _mediaCapture.VideoDeviceController.FocusControl;

if (focusControl.Supported)
{
TapFocusRadioButton.Visibility = (_mediaCapture.VideoDeviceController.RegionsOfInterestControl.AutoFocusSupported &&
_mediaCapture.VideoDeviceController.RegionsOfInterestControl.MaxRegions > 0)
? Visibility.Visible : Visibility.Collapsed;
}
else
{
TapFocusRadioButton.Visibility = Visibility.Collapsed;
}

In the Checked event handler for the tap-to-focus radio button, use the VideoDeviceController.FocusControl
property to get an instance of the control. Call LockAsync to lock the control in case your app has previously called
UnlockAsync to enable continuous autofocus, and then wait for the user to tap the screen to change the focus.
private async void TapFocusRadioButton_Checked(object sender, RoutedEventArgs e)
{
// Lock focus in case Continuous Autofocus was active when switching to Tap-to-focus
var focusControl = _mediaCapture.VideoDeviceController.FocusControl;
await focusControl.LockAsync();
// Wait for user tap
}

This example focuses on a region when the user taps the screen, and then removes the focus from that region
when the user taps again, like a toggle. Use a boolean variable to track the current toggled state.

bool _isFocused = false;

The next step is to listen for the event when the user taps the screen by handling the Tapped event of the
CaptureElement that is currently displaying the capture preview stream. If the camera isn't currently previewing,
or if tap-to-focus mode is disabled, return from the handler without doing anything.
If the tracking variable _isFocused is toggled to false, and if the camera isn't currently in the process of focus
(determined by the FocusState property of the FocusControl), begin the tap-to-focus process. Get the position of
the user's tap from the event args passed into the handler. This example also uses this opportunity to pick the size
of the region that will be focused upon. In this case, the size is 1/4 of the smallest dimension of the capture element.
Pass the tap position and the region size into the TapToFocus helper method that is defined in the next section.
If the _isFocused toggle is set to true, the user tap should clear the focus from the previous region. This is done in
the TapUnfocus helper method shown below.

private async void PreviewControl_Tapped(object sender, TappedRoutedEventArgs e)
{
    if (!_isPreviewing || (TapFocusRadioButton.IsChecked != true)) return;

    if (!_isFocused && _mediaCapture.VideoDeviceController.FocusControl.FocusState != MediaCaptureFocusState.Searching)
    {
        var smallEdge = Math.Min(Window.Current.Bounds.Width, Window.Current.Bounds.Height);

        // Choose to make the focus rectangle 1/4th the length of the shortest edge of the window
        var size = new Size(smallEdge / 4, smallEdge / 4);
        var position = e.GetPosition(sender as UIElement);

        // Note that at this point, a rect at "position" with size "size" could extend beyond
        // the preview area. The following method will reposition the rect if that is the case.
        await TapToFocus(position, size);
    }
    else
    {
        await TapUnfocus();
    }
}

In the TapToFocus helper method, first set the _isFocused toggle to true so that the next screen tap will release the
focus from the tapped region.
The next task in this helper method is to determine the rectangle within the preview stream that will be assigned to
the focus control. This requires two steps. The first step is to determine the rectangle that the preview stream takes
up within the CaptureElement control. This depends on the dimensions of the preview stream and the orientation
of the device. The helper method GetPreviewStreamRectInControl, shown at the end of this section, performs
this task and returns the rectangle containing the preview stream.
The next task in TapToFocus is to convert the tap location and desired focus rectangle size, which were determined
within the CaptureElement.Tapped event handler, into coordinates within the capture stream. The
ConvertUiTapToPreviewRect helper method, shown later in this section, performs this conversion and returns
the rectangle, in capture stream coordinates, where the focus will be requested.
Now that the target rectangle has been obtained, create a new RegionOfInterest object, setting the Bounds
property to the target rectangle obtained in the previous steps.
Get the capture device's FocusControl. Create a new FocusSettings object and set the Mode and
AutoFocusRange to your desired values, after checking to make sure that they are supported by the
FocusControl. Call Configure on the FocusControl to make your settings active and signal the device to begin
focusing on the specified region.
Next, get the capture device's RegionsOfInterestControl and call SetRegionsAsync to set the active region.
Multiple regions of interest can be set on devices that support it, but this example only sets a single region.
Finally, call FocusAsync on the FocusControl to initiate focusing.

IMPORTANT
When implementing tap to focus, the order of operations is important. You should call these APIs in the following order:
1. FocusControl.Configure
2. RegionsOfInterestControl.SetRegionsAsync
3. FocusControl.FocusAsync

public async Task TapToFocus(Point position, Size size)
{
    _isFocused = true;

    var previewRect = GetPreviewStreamRectInControl();
    var focusPreview = ConvertUiTapToPreviewRect(position, size, previewRect);

    // Note that this Region Of Interest could be configured to also calculate exposure
    // and white balance within the region
    var regionOfInterest = new RegionOfInterest
    {
        AutoFocusEnabled = true,
        BoundsNormalized = true,
        Bounds = focusPreview,
        Type = RegionOfInterestType.Unknown,
        Weight = 100,
    };

    var focusControl = _mediaCapture.VideoDeviceController.FocusControl;
    var focusRange = focusControl.SupportedFocusRanges.Contains(AutoFocusRange.FullRange) ? AutoFocusRange.FullRange :
        focusControl.SupportedFocusRanges.FirstOrDefault();
    var focusMode = focusControl.SupportedFocusModes.Contains(FocusMode.Single) ? FocusMode.Single :
        focusControl.SupportedFocusModes.FirstOrDefault();
    var settings = new FocusSettings { Mode = focusMode, AutoFocusRange = focusRange };
    focusControl.Configure(settings);

    var roiControl = _mediaCapture.VideoDeviceController.RegionsOfInterestControl;
    await roiControl.SetRegionsAsync(new[] { regionOfInterest }, true);

    await focusControl.FocusAsync();
}

In the TapUnfocus helper method, obtain the RegionsOfInterestControl and call ClearRegionsAsync to clear
the region that was registered with the control within the TapToFocus helper method. Then, get the FocusControl
and call FocusAsync to cause the device to refocus without a region of interest.
private async Task TapUnfocus()
{
    _isFocused = false;

    var roiControl = _mediaCapture.VideoDeviceController.RegionsOfInterestControl;
    await roiControl.ClearRegionsAsync();

    var focusControl = _mediaCapture.VideoDeviceController.FocusControl;
    await focusControl.FocusAsync();
}

The GetPreviewStreamRectInControl helper method uses the resolution of the preview stream and the
orientation of the device to determine the rectangle within the preview element that contains the preview stream,
trimming off any letterboxed padding that the control may provide to maintain the stream's aspect ratio. This
method uses class member variables defined in the basic media capture example code found in Basic photo, video,
and audio capture with MediaCapture.
public Rect GetPreviewStreamRectInControl()
{
    var result = new Rect();

    var previewResolution = _mediaCapture.VideoDeviceController.GetMediaStreamProperties(MediaStreamType.VideoPreview) as VideoEncodingProperties;

    // In case this function is called before everything is initialized correctly, return an empty result
    if (PreviewControl == null || PreviewControl.ActualHeight < 1 || PreviewControl.ActualWidth < 1 ||
        previewResolution == null || previewResolution.Height == 0 || previewResolution.Width == 0)
    {
        return result;
    }

    var streamWidth = previewResolution.Width;
    var streamHeight = previewResolution.Height;

    // For portrait orientations, the width and height need to be swapped
    if (_displayOrientation == DisplayOrientations.Portrait || _displayOrientation == DisplayOrientations.PortraitFlipped)
    {
        streamWidth = previewResolution.Height;
        streamHeight = previewResolution.Width;
    }

    // Start by assuming the preview display area in the control spans the entire width and height both
    // (this is corrected in the next if for the necessary dimension)
    result.Width = PreviewControl.ActualWidth;
    result.Height = PreviewControl.ActualHeight;

    // If UI is "wider" than preview, letterboxing will be on the sides
    if ((PreviewControl.ActualWidth / PreviewControl.ActualHeight > streamWidth / (double)streamHeight))
    {
        var scale = PreviewControl.ActualHeight / streamHeight;
        var scaledWidth = streamWidth * scale;

        result.X = (PreviewControl.ActualWidth - scaledWidth) / 2.0;
        result.Width = scaledWidth;
    }
    else // Preview stream is "wider" than UI, so letterboxing will be on the top+bottom
    {
        var scale = PreviewControl.ActualWidth / streamWidth;
        var scaledHeight = streamHeight * scale;

        result.Y = (PreviewControl.ActualHeight - scaledHeight) / 2.0;
        result.Height = scaledHeight;
    }

    return result;
}

The ConvertUiTapToPreviewRect helper method takes as arguments the location of the tap event, the desired
size of the focus region, and the rectangle containing the preview stream obtained from the
GetPreviewStreamRectInControl helper method. This method uses these values and the device's current
orientation to calculate the rectangle within the preview stream that contains the desired region. Once again, this
method uses class member variables defined in the basic media capture example code found in Capture Photos
and Video with MediaCapture.
private Rect ConvertUiTapToPreviewRect(Point tap, Size size, Rect previewRect)
{
    // Adjust for the resulting focus rectangle to be centered around the position
    double left = tap.X - size.Width / 2, top = tap.Y - size.Height / 2;

    // Get the information about the active preview area within the CaptureElement (in case it's letterboxed)
    double previewWidth = previewRect.Width, previewHeight = previewRect.Height;
    double previewLeft = previewRect.Left, previewTop = previewRect.Top;

    // Transform the left and top of the tap to account for rotation
    switch (_displayOrientation)
    {
        case DisplayOrientations.Portrait:
            var tempLeft = left;
            left = top;
            top = previewRect.Width - tempLeft;
            break;
        case DisplayOrientations.LandscapeFlipped:
            left = previewRect.Width - left;
            top = previewRect.Height - top;
            break;
        case DisplayOrientations.PortraitFlipped:
            var tempTop = top;
            top = left;
            left = previewRect.Width - tempTop;
            break;
    }

    // For portrait orientations, the information about the active preview area needs to be rotated
    if (_displayOrientation == DisplayOrientations.Portrait || _displayOrientation == DisplayOrientations.PortraitFlipped)
    {
        previewWidth = previewRect.Height;
        previewHeight = previewRect.Width;
        previewLeft = previewRect.Top;
        previewTop = previewRect.Left;
    }

    // Normalize width and height of the focus rectangle
    var width = size.Width / previewWidth;
    var height = size.Height / previewHeight;

    // Shift rect left and top to be relative to just the active preview area
    left -= previewLeft;
    top -= previewTop;

    // Normalize left and top
    left /= previewWidth;
    top /= previewHeight;

    // Ensure rectangle is fully contained within the active preview area horizontally
    left = Math.Max(left, 0);
    left = Math.Min(1 - width, left);

    // Ensure rectangle is fully contained within the active preview area vertically
    top = Math.Max(top, 0);
    top = Math.Min(1 - height, top);

    // Create and return resulting rectangle
    return new Rect(left, top, width, height);
}

Manual focus
The manual focus technique uses a Slider control to set the current focus depth of the capture device. A radio
button is used to toggle manual focus on and off.

<Slider Name="FocusSlider" IsEnabled="{Binding ElementName=ManualFocusRadioButton,Path=IsChecked}"


ValueChanged="FocusSlider_ValueChanged"/>
<TextBlock Text="{Binding ElementName=FocusSlider,Path=Value,FallbackValue='0'}"/>
<RadioButton Content="Manual" Name="ManualFocusRadioButton" Checked="ManualFocusRadioButton_Checked" IsChecked="False"/>

Check to see if the current capture device supports the FocusControl by checking the Supported property. If the
control is supported, you can show and enable the UI for this feature.
The focus value must be within the range supported by the device and must be an increment of the supported step
size. Get the supported values for the current device by checking the Min, Max, and Step properties, which are
used to set the corresponding properties of the slider control.
Set the slider control's value to the current value of the FocusControl after unregistering the ValueChanged
event handler so that the event is not triggered when the value is set.

var focusControl = _mediaCapture.VideoDeviceController.FocusControl;

if (focusControl.Supported)
{
FocusSlider.Visibility = Visibility.Visible;
ManualFocusRadioButton.Visibility = Visibility.Visible;

FocusSlider.Minimum = focusControl.Min;
FocusSlider.Maximum = focusControl.Max;
FocusSlider.StepFrequency = focusControl.Step;

FocusSlider.ValueChanged -= FocusSlider_ValueChanged;
FocusSlider.Value = focusControl.Value;
FocusSlider.ValueChanged += FocusSlider_ValueChanged;
}
else
{
FocusSlider.Visibility = Visibility.Collapsed;
ManualFocusRadioButton.Visibility = Visibility.Collapsed;
}

In the Checked event handler for the manual focus radio button, get the FocusControl object and call LockAsync
in case your app had previously unlocked the focus with a call to UnlockAsync.

private async void ManualFocusRadioButton_Checked(object sender, RoutedEventArgs e)
{
    var focusControl = _mediaCapture.VideoDeviceController.FocusControl;
    await focusControl.LockAsync();
}

In the ValueChanged event handler of the manual focus slider, get the current value of the control and then set the
focus value by calling SetValueAsync.

private async void FocusSlider_ValueChanged(object sender, Windows.UI.Xaml.Controls.Primitives.RangeBaseValueChangedEventArgs e)
{
    var value = (sender as Slider).Value;
    await _mediaCapture.VideoDeviceController.FocusControl.SetValueAsync((uint)value);
}

Enable the focus light


On devices that support it, you can enable a focus assist light to help the device focus. This example uses a
checkbox to enable or disable the focus assist light.

<CheckBox Content="Assist Light" Name="FocusLightCheckBox" IsEnabled="{Binding


ElementName=TapFocusRadioButton,Path=IsChecked}"
Checked="FocusLightCheckBox_CheckedChanged" Unchecked="FocusLightCheckBox_CheckedChanged"/>

Check to see if the current capture device supports the FlashControl by checking the Supported property. Also
check the AssistantLightSupported property to make sure the assist light is supported. If both are supported,
you can show and enable the UI for this feature.

var focusControl = _mediaCapture.VideoDeviceController.FocusControl;

if (focusControl.Supported)
{
    FocusLightCheckBox.Visibility = (_mediaCapture.VideoDeviceController.FlashControl.Supported &&
        _mediaCapture.VideoDeviceController.FlashControl.AssistantLightSupported) ? Visibility.Visible : Visibility.Collapsed;
}
else
{
    FocusLightCheckBox.Visibility = Visibility.Collapsed;
}

In the CheckedChanged event handler, get the capture device's FlashControl object. Set the
AssistantLightEnabled property to enable or disable the focus light.

private void FocusLightCheckBox_CheckedChanged(object sender, RoutedEventArgs e)
{
    var flashControl = _mediaCapture.VideoDeviceController.FlashControl;
    flashControl.AssistantLightEnabled = (FocusLightCheckBox.IsChecked == true);
}

ISO speed
The IsoSpeedControl allows you to set the ISO speed used during photo or video capture.
This example uses a Slider control to adjust the current ISO speed value and a checkbox to toggle
automatic ISO speed adjustment.

<Slider Name="IsoSlider" ValueChanged="IsoSlider_ValueChanged"/>


<TextBlock Text="{Binding ElementName=IsoSlider,Path=Value}" Visibility="{Binding ElementName=IsoSlider,Path=Visibility}"/>
<CheckBox Name="IsoAutoCheckBox" Content="Auto" Checked="IsoAutoCheckBox_CheckedChanged"
Unchecked="IsoAutoCheckBox_CheckedChanged"/>

Check to see if the current capture device supports the IsoSpeedControl by checking the Supported property. If
the control is supported, you can show and enable the UI for this feature. Set the checked state of the checkbox to
the value of the Auto property to indicate whether automatic ISO speed adjustment is currently active.
The ISO speed value must be within the range supported by the device and must be an increment of the supported
step size. Get the supported values for the current device by checking the Min, Max, and Step properties, which are
used to set the corresponding properties of the slider control.
Set the slider control's value to the current value of the IsoSpeedControl after unregistering the ValueChanged
event handler so that the event is not triggered when the value is set.
private void UpdateIsoControlCapabilities()
{
var isoSpeedControl = _mediaCapture.VideoDeviceController.IsoSpeedControl;

if (isoSpeedControl.Supported)
{
IsoAutoCheckBox.Visibility = Visibility.Visible;
IsoSlider.Visibility = Visibility.Visible;

IsoAutoCheckBox.IsChecked = isoSpeedControl.Auto;

IsoSlider.Minimum = isoSpeedControl.Min;
IsoSlider.Maximum = isoSpeedControl.Max;
IsoSlider.StepFrequency = isoSpeedControl.Step;

IsoSlider.ValueChanged -= IsoSlider_ValueChanged;
IsoSlider.Value = isoSpeedControl.Value;
IsoSlider.ValueChanged += IsoSlider_ValueChanged;
}
else
{
IsoAutoCheckBox.Visibility = Visibility.Collapsed;
IsoSlider.Visibility = Visibility.Collapsed;
}
}

In the ValueChanged event handler, get the current value of the control and then set the ISO speed value by calling
SetValueAsync.

private async void IsoSlider_ValueChanged(object sender, Windows.UI.Xaml.Controls.Primitives.RangeBaseValueChangedEventArgs e)
{
    var value = (sender as Slider).Value;
    await _mediaCapture.VideoDeviceController.IsoSpeedControl.SetValueAsync((uint)value);
}

In the CheckedChanged event handler of the auto ISO speed checkbox, turn on automatic ISO speed adjustment
by calling SetAutoAsync. Turn automatic ISO speed adjustment off by calling SetValueAsync and passing in the
current value of the slider control.

private async void IsoAutoCheckBox_CheckedChanged(object sender, RoutedEventArgs e)
{
    var autoIso = (sender as CheckBox).IsChecked == true;

    if (autoIso)
    {
        await _mediaCapture.VideoDeviceController.IsoSpeedControl.SetAutoAsync();
    }
    else
    {
        await _mediaCapture.VideoDeviceController.IsoSpeedControl.SetValueAsync((uint)IsoSlider.Value);
    }
}

Optical image stabilization


Optical image stabilization (OIS) stabilizes the captured video stream by mechanically manipulating the hardware
capture device, which can provide superior results compared to digital stabilization. On devices that don't support
OIS, you can use the VideoStabilizationEffect to perform digital stabilization on your captured video. For more
information, see Effects for video capture.
Determine if OIS is supported on the current device by checking the
OpticalImageStabilizationControl.Supported property.
The OIS control supports three modes: on, off, and automatic, which means that the device dynamically determines
if OIS would improve the media capture and, if so, enables OIS. To determine if a particular mode is supported on a
device, check to see if the OpticalImageStabilizationControl.SupportedModes collection contains the desired
mode.
Enable or disable OIS by setting the OpticalImageStabilizationControl.Mode to the desired mode.

private void SetOpticalImageStabilizationMode(OpticalImageStabilizationMode mode)
{
    if (!_mediaCapture.VideoDeviceController.OpticalImageStabilizationControl.Supported)
    {
        ShowMessageToUser("Optical image stabilization not available");
        return;
    }

    var stabilizationModes = _mediaCapture.VideoDeviceController.OpticalImageStabilizationControl.SupportedModes;

    if (!stabilizationModes.Contains(mode))
    {
        ShowMessageToUser("Optical image stabilization setting not supported");
        return;
    }

    _mediaCapture.VideoDeviceController.OpticalImageStabilizationControl.Mode = mode;
}

Powerline frequency
Some camera devices support anti-flicker processing that depends on knowing the AC frequency of the powerlines
in the current environment. Some devices support automatic determination of the powerline frequency, while
others require that the frequency be set manually. The following code example shows how to determine powerline
frequency support on the device and, if needed, how to set the frequency manually.
First, call the VideoDeviceController method TryGetPowerlineFrequency, passing in an output parameter of
type PowerlineFrequency; if this call fails, the powerline frequency control is not supported on the current device.
If the feature is supported, you can determine whether automatic mode is available by trying to set auto
mode. Do this by calling TrySetPowerlineFrequency and passing in the value Auto. If the call succeeds,
automatic powerline frequency detection is supported. If the powerline frequency control is supported on the
device but automatic frequency detection is not, you can still manually set the frequency by using
TrySetPowerlineFrequency. In this example, MyCustomFrequencyLookup is a custom method that you
implement to determine the correct frequency for the device's current location.
PowerlineFrequency getFrequency;

if (!_mediaCapture.VideoDeviceController.TryGetPowerlineFrequency(out getFrequency))
{
    // Powerline frequency is not supported on this device.
    return;
}

if (!_mediaCapture.VideoDeviceController.TrySetPowerlineFrequency(PowerlineFrequency.Auto))
{
    // Set the frequency manually
    PowerlineFrequency setFrequency = MyCustomFrequencyLookup();
    if (_mediaCapture.VideoDeviceController.TrySetPowerlineFrequency(setFrequency))
    {
        System.Diagnostics.Debug.WriteLine(String.Format("Powerline frequency manually set to {0}.", setFrequency));
    }
}

White balance
The WhiteBalanceControl allows you to set the white balance used during photo or video capture.
This example uses a ComboBox control to select from built-in color temperature presets and a Slider control for
manual white balance adjustment.

<Slider Name="WbSlider" ValueChanged="WbSlider_ValueChanged"/>


<TextBlock Name="WbTextBox" Text="{Binding ElementName=WbSlider,Path=Value}" Visibility="{Binding
ElementName=WbSlider,Path=Visibility}"/>
<ComboBox Name="WbComboBox" SelectionChanged="WbComboBox_SelectionChanged"/>

Check to see if the current capture device supports the WhiteBalanceControl by checking the Supported
property. If the control is supported, you can show and enable the UI for this feature. Set the items of the combo
box to the values of the ColorTemperaturePreset enumeration, and set the selected item to the current value of
the Preset property.
For manual control, the white balance value must be within the range supported by the device and must be an
increment of the supported step size. Get the supported values for the current device by checking the Min, Max,
and Step properties, which are used to set the corresponding properties of the slider control. Before enabling
manual control, check to make sure that the range between the minimum and maximum supported values is
greater than the step size. If it is not, manual control is not supported on the current device.
Set the slider control's value to the current value of the WhiteBalanceControl after unregistering the
ValueChanged event handler so that the event is not triggered when the value is set.
var whiteBalanceControl = _mediaCapture.VideoDeviceController.WhiteBalanceControl;

if (whiteBalanceControl.Supported)
{
    WbSlider.Visibility = Visibility.Visible;
    WbComboBox.Visibility = Visibility.Visible;

    if (WbComboBox.ItemsSource == null)
    {
        WbComboBox.ItemsSource = Enum.GetValues(typeof(ColorTemperaturePreset)).Cast<ColorTemperaturePreset>();
    }

    WbComboBox.SelectedItem = whiteBalanceControl.Preset;

    if (whiteBalanceControl.Max - whiteBalanceControl.Min > whiteBalanceControl.Step)
    {
        WbSlider.Minimum = whiteBalanceControl.Min;
        WbSlider.Maximum = whiteBalanceControl.Max;
        WbSlider.StepFrequency = whiteBalanceControl.Step;

        WbSlider.ValueChanged -= WbSlider_ValueChanged;
        WbSlider.Value = whiteBalanceControl.Value;
        WbSlider.ValueChanged += WbSlider_ValueChanged;
    }
    else
    {
        WbSlider.Visibility = Visibility.Collapsed;
    }
}
else
{
    WbSlider.Visibility = Visibility.Collapsed;
    WbComboBox.Visibility = Visibility.Collapsed;
}

In the SelectionChanged event handler of the color temperature preset combo box, get the currently selected
preset and set the value of the control by calling SetPresetAsync. If the selected preset value is not Manual,
disable the manual white balance slider.

private async void WbComboBox_SelectionChanged(object sender, SelectionChangedEventArgs e)
{
    if (!_isPreviewing)
    {
        // Do not set white balance values unless the preview stream is running.
        return;
    }

    var selected = (ColorTemperaturePreset)WbComboBox.SelectedItem;

    WbSlider.IsEnabled = (selected == ColorTemperaturePreset.Manual);

    await _mediaCapture.VideoDeviceController.WhiteBalanceControl.SetPresetAsync(selected);
}

In the ValueChanged event handler, get the current value of the control and the set the white balance value by
calling SetValueAsync.
private async void WbSlider_ValueChanged(object sender, Windows.UI.Xaml.Controls.Primitives.RangeBaseValueChangedEventArgs e)
{
    if (!_isPreviewing)
    {
        // Do not set white balance values unless the preview stream is running.
        return;
    }

    var value = (sender as Slider).Value;

    await _mediaCapture.VideoDeviceController.WhiteBalanceControl.SetValueAsync((uint)value);
}

IMPORTANT
Adjusting the white balance is only supported while the preview stream is running. Check to make sure that the preview
stream is running before setting the white balance value or preset.

IMPORTANT
The ColorTemperaturePreset.Auto preset value instructs the system to automatically adjust the white balance level. For
some scenarios, such as capturing a photo sequence where the white balance levels should be the same for each frame, you
may want to lock the control to the current automatic value. To do this, call SetPresetAsync and specify the Manual preset
and do not set a value on the control using SetValueAsync. This will cause the device to lock the current value. Do not
attempt to read the current control value and then pass the returned value into SetValueAsync because this value is not
guaranteed to be correct.
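
For example, a minimal sketch of locking the white balance at the value the automatic mode has converged on,
assuming an initialized _mediaCapture and a running preview stream, might look like the following.

// A sketch: lock the white balance at the current automatic value.
// Switching to the Manual preset without calling SetValueAsync keeps
// the device at the value it is currently using.
var wbControl = _mediaCapture.VideoDeviceController.WhiteBalanceControl;

if (wbControl.Supported)
{
    await wbControl.SetPresetAsync(ColorTemperaturePreset.Manual);
}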

Zoom
The ZoomControl allows you to set the zoom level used during photo or video capture.
This example uses a Slider control to adjust the current zoom level. The following section shows how to adjust
zoom based on a pinch gesture on the screen.

<Slider Name="ZoomSlider" Grid.Row="0" Orientation="Vertical" HorizontalAlignment="Center" VerticalAlignment="Stretch"


ValueChanged="ZoomSlider_ValueChanged"/>
<TextBlock Grid.Row="1" HorizontalAlignment="Center" Text="{Binding ElementName=ZoomSlider,Path=Value}"/>

Check to see if the current capture device supports the ZoomControl by checking the Supported property. If the
control is supported, you can show and enable the UI for this feature.
The zoom level value must be within the range supported by the device and must be an increment of the supported
step size. Get the supported values for the current device by checking the Min, Max, and Step properties, which are
used to set the corresponding properties of the slider control.
Set the slider control's value to the current value of the ZoomControl after unregistering the ValueChanged
event handler so that the event is not triggered when the value is set.
var zoomControl = _mediaCapture.VideoDeviceController.ZoomControl;

if (zoomControl.Supported)
{
    ZoomSlider.Visibility = Visibility.Visible;

    ZoomSlider.Minimum = zoomControl.Min;
    ZoomSlider.Maximum = zoomControl.Max;
    ZoomSlider.StepFrequency = zoomControl.Step;

    ZoomSlider.ValueChanged -= ZoomSlider_ValueChanged;
    ZoomSlider.Value = zoomControl.Value;
    ZoomSlider.ValueChanged += ZoomSlider_ValueChanged;
}
else
{
    ZoomSlider.Visibility = Visibility.Collapsed;
}

In the ValueChanged event handler, create a new instance of the ZoomSettings class, setting the Value property
to the current value of the zoom slider control. If the SupportedModes property of the ZoomControl contains
ZoomTransitionMode.Smooth, it means the device supports smooth transitions between zoom levels. Because this
mode provides a better user experience, you will typically want to use this value for the Mode property of the
ZoomSettings object.
Finally, change the current zoom settings by passing your ZoomSettings object into the Configure method of the
ZoomControl object.

private void ZoomSlider_ValueChanged(object sender, Windows.UI.Xaml.Controls.Primitives.RangeBaseValueChangedEventArgs e)
{
    var level = (float)ZoomSlider.Value;
    var settings = new ZoomSettings { Value = level };

    var zoomControl = _mediaCapture.VideoDeviceController.ZoomControl;

    if (zoomControl.SupportedModes.Contains(ZoomTransitionMode.Smooth))
    {
        settings.Mode = ZoomTransitionMode.Smooth;
    }
    else
    {
        settings.Mode = zoomControl.SupportedModes.First();
    }

    zoomControl.Configure(settings);
}

Smooth zoom using pinch gesture


As discussed in the previous section, on devices that support it, smooth zoom mode allows the capture device to
smoothly transition between digital zoom levels, allowing the user to dynamically adjust the zoom level during the
capture operation without discrete and jarring transitions. This section describes how to adjust the zoom level in
response to a pinch gesture.
First, determine if the digital zoom control is supported on the current device by checking the
ZoomControl.Supported property. Next, determine if smooth zoom mode is available by checking the
ZoomControl.SupportedModes to see if it contains the value ZoomTransitionMode.Smooth.
private bool IsSmoothZoomSupported()
{
    if (!_mediaCapture.VideoDeviceController.ZoomControl.Supported)
    {
        ShowMessageToUser("Digital zoom is not supported on this device.");
        return false;
    }

    var zoomModes = _mediaCapture.VideoDeviceController.ZoomControl.SupportedModes;

    if (!zoomModes.Contains(ZoomTransitionMode.Smooth))
    {
        ShowMessageToUser("Smooth zoom not supported");
        return false;
    }

    return true;
}

On a multi-touch enabled device, a typical scenario is to adjust the zoom factor based on a two-finger pinch
gesture. Set the ManipulationMode property of the CaptureElement control to ManipulationModes.Scale to
enable the pinch gesture. Then, register for the ManipulationDelta event which is raised when the pinch gesture
changes size.

private void RegisterPinchGestureHandler()
{
    if (!IsSmoothZoomSupported())
    {
        return;
    }

    // Enable pinch/zoom gesture for the preview control
    PreviewControl.ManipulationMode = ManipulationModes.Scale;
    PreviewControl.ManipulationDelta += PreviewControl_ManipulationDelta;
}

In the handler for the ManipulationDelta event, update the zoom factor based on the change in the user's pinch
gesture. The ManipulationDelta.Scale value represents the change in scale of the pinch gesture such that a small
increase in the size of the pinch is a number slightly larger than 1.0 and a small decrease in the pinch size is a
number slightly smaller than 1.0. In this example, the current value of the zoom control is multiplied by the scale
delta.
Before setting the zoom factor, you must make sure that the value is not less than the minimum value supported by
the device as indicated by the ZoomControl.Min property. Also, make sure that the value is less than or equal to
the ZoomControl.Max value. Finally, you must make sure that the zoom factor is a multiple of the zoom step
size supported by the device as indicated by the Step property. If your zoom factor does not meet these
requirements, an exception will be thrown when you attempt to set the zoom level on the capture device.
Set the zoom level on the capture device by creating a new ZoomSettings object. Set the Mode property to
ZoomTransitionMode.Smooth and then set the Value property to your desired zoom factor. Finally, call
ZoomControl.Configure to set the new zoom value on the device. The device will smoothly transition to the new
zoom value.
private void PreviewControl_ManipulationDelta(object sender, ManipulationDeltaRoutedEventArgs e)
{
    var zoomControl = _mediaCapture.VideoDeviceController.ZoomControl;

    // Example zoom factor calculation based on size of scale gesture
    var zoomFactor = zoomControl.Value * e.Delta.Scale;

    // Clamp the value to the supported range and step size.
    if (zoomFactor < zoomControl.Min) zoomFactor = zoomControl.Min;
    if (zoomFactor > zoomControl.Max) zoomFactor = zoomControl.Max;
    zoomFactor = zoomFactor - (zoomFactor % zoomControl.Step);

    var settings = new ZoomSettings();
    settings.Mode = ZoomTransitionMode.Smooth;
    settings.Value = zoomFactor;

    _mediaCapture.VideoDeviceController.ZoomControl.Configure(settings);
}

Related topics
Camera
Basic photo, video, and audio capture with MediaCapture
Manual camera controls for video capture

This article shows you how to use manual device controls to enable enhanced video capture scenarios, including
HDR video and exposure priority.
The video device controls discussed in this article are all added to your app by using the same pattern. First, check
to see if the control is supported on the device on which your app is running. If the control is supported, set
the desired mode for the control. Typically, if a particular control is unsupported on the current device, you should
disable or hide the UI element that allows the user to enable the feature.
All of the device control APIs discussed in this article are members of the Windows.Media.Devices namespace.

using Windows.Media.Devices;

NOTE
This article builds on concepts and code discussed in Basic photo, video, and audio capture with MediaCapture, which
describes the steps for implementing basic photo and video capture. We recommend that you familiarize yourself with the
basic media capture pattern in that article before moving on to more advanced capture scenarios. The code in this article
assumes that your app already has an instance of MediaCapture that has been properly initialized.

HDR video
The high dynamic range (HDR) video feature applies HDR processing to the video stream of the capture device.
Determine if HDR video is supported by checking the HdrVideoControl.Supported property.
The HDR video control supports three modes: on, off, and automatic, which means that the device dynamically
determines if HDR video processing would improve the media capture and, if so, enables HDR video. To determine
if a particular mode is supported on the current device, check to see if the HdrVideoControl.SupportedModes
collection contains the desired mode.
Enable or disable HDR video processing by setting the HdrVideoControl.Mode to the desired mode.
private void SetHdrVideoMode(HdrVideoMode mode)
{
if (!_mediaCapture.VideoDeviceController.HdrVideoControl.Supported)
{
ShowMessageToUser("HDR Video not available");
return;
}

var hdrVideoModes = _mediaCapture.VideoDeviceController.HdrVideoControl.SupportedModes;

if (!hdrVideoModes.Contains(mode))
{
ShowMessageToUser("HDR Video setting not supported");
return;
}

_mediaCapture.VideoDeviceController.HdrVideoControl.Mode = mode;
}

Exposure priority
The ExposurePriorityVideoControl, when enabled, evaluates the video frames from the capture device to
determine if the video is capturing a low-light scene. If so, the control lowers the frame rate of the captured video
in order to increase the exposure time for each frame and improve the visual quality of the captured video.
Determine if the exposure priority control is supported on the current device by checking the
ExposurePriorityVideoControl.Supported property.
Enable or disable the exposure priority control by setting the ExposurePriorityVideoControl.Enabled property to
the desired value.

if (!_mediaCapture.VideoDeviceController.ExposurePriorityVideoControl.Supported)
{
ShowMessageToUser("Exposure priority not available");
return;
}
_mediaCapture.VideoDeviceController.ExposurePriorityVideoControl.Enabled = true;

Related topics
Camera
Basic photo, video, and audio capture with MediaCapture
Effects for video capture

This topic shows you how to apply effects to the camera preview and recording video streams and shows you how
to use the video stabilization effect.

NOTE
This article builds on concepts and code discussed in Basic photo, video, and audio capture with MediaCapture, which
describes the steps for implementing basic photo and video capture. We recommend that you familiarize yourself with the
basic media capture pattern in that article before moving on to more advanced capture scenarios. The code in this article
assumes that your app already has an instance of MediaCapture that has been properly initialized.

Adding and removing effects from the camera video stream


To capture or preview video from the device's camera, you use the MediaCapture object as described in Basic
photo, video, and audio capture with MediaCapture. After you have initialized the MediaCapture object, you can
add one or more video effects to the preview or capture stream by calling AddVideoEffectAsync, passing in an
IVideoEffectDefinition object representing the effect to be added, and a member of the MediaStreamType
enumeration indicating whether the effect should be added to the camera's preview stream or the record stream.

NOTE
On some devices, the preview stream and the capture stream are the same, which means that if you specify
MediaStreamType.VideoPreview or MediaStreamType.VideoRecord when you call AddVideoEffectAsync, the effect
will be applied to both preview and record streams. You can determine whether the preview and record streams are the same
on the current device by checking the VideoDeviceCharacteristic property of the MediaCaptureSettings for the
MediaCapture object. If the value of this property is VideoDeviceCharacteristic.AllStreamsIdentical or
VideoDeviceCharacteristic.PreviewRecordStreamsIdentical, then the streams are the same and any effect you apply to
one will affect the other.

The following example adds an effect to both the camera preview and record streams. This example illustrates
checking to see if the record and preview streams are the same.

if (_mediaCapture.MediaCaptureSettings.VideoDeviceCharacteristic == VideoDeviceCharacteristic.AllStreamsIdentical ||
_mediaCapture.MediaCaptureSettings.VideoDeviceCharacteristic == VideoDeviceCharacteristic.PreviewRecordStreamsIdentical)
{
// This effect will modify both the preview and the record streams, because they are the same stream.
myRecordEffect = await _mediaCapture.AddVideoEffectAsync(myEffectDefinition, MediaStreamType.VideoRecord);
}
else
{
myRecordEffect = await _mediaCapture.AddVideoEffectAsync(myEffectDefinition, MediaStreamType.VideoRecord);
myPreviewEffect = await _mediaCapture.AddVideoEffectAsync(myEffectDefinition, MediaStreamType.VideoPreview);
}

Note that AddVideoEffectAsync returns an object that implements IMediaExtension, which represents the added
video effect. Some effects allow you to change the effect settings by passing a PropertySet into the SetProperties
method.
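As a rough, hypothetical sketch, configuring such an effect might look like the following; the key "TintColor" is
illustrative only, because the supported property names depend entirely on the individual effect.

// Hypothetical sketch: configure an effect through IMediaExtension.SetProperties.
// "TintColor" is an illustrative key, not a property of any particular effect.
var configuration = new Windows.Foundation.Collections.PropertySet();
configuration["TintColor"] = Windows.UI.Colors.Aqua;
myRecordEffect.SetProperties(configuration);
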
Starting with Windows 10, version 1607, you can also use the object returned by AddVideoEffectAsync to
remove the effect from the video pipeline by passing it into RemoveEffectAsync. RemoveEffectAsync
automatically determines whether the effect object parameter was added to the preview or record stream, so you
don't need to specify the stream type when making the call.

if (myRecordEffect != null)
{
await _mediaCapture.RemoveEffectAsync(myRecordEffect);
}
if(myPreviewEffect != null)
{
await _mediaCapture.RemoveEffectAsync(myPreviewEffect);
}

You can also remove all effects from the preview or capture stream by calling ClearEffectsAsync and specifying
the stream for which all effects should be removed.

await _mediaCapture.ClearEffectsAsync(MediaStreamType.VideoPreview);
await _mediaCapture.ClearEffectsAsync(MediaStreamType.VideoRecord);

Video stabilization effect


The video stabilization effect manipulates the frames of a video stream to minimize shaking caused by holding the
capture device in your hand. Because this technique causes the pixels to be shifted right, left, up, and down, and
because the effect can't know what the content just outside the video frame is, the stabilized video is cropped
slightly from the original video. A utility function is provided to allow you to adjust your video encoding settings to
optimally manage the cropping performed by the effect.
On devices that support it, Optical Image Stabilization (OIS) stabilizes video by mechanically manipulating the
capture device and, therefore, does not need to crop the edges of the video frames. For more information, see
Capture device controls for video capture.
Set up your app to use video stabilization
In addition to the namespaces required for basic media capture, using the video stabilization effect requires the
following namespace.

using Windows.Media.Core;
using Windows.Media.MediaProperties;
using Windows.Media.Effects;
using Windows.Media;

Declare a member variable to store the VideoStabilizationEffect object. As part of the effect implementation, you
will modify the encoding properties that you use to encode the captured video. Declare two variables to store a
backup copy of the initial input and output encoding properties so that you can restore them later when the effect
is disabled. Finally, declare a member variable of type MediaEncodingProfile because this object will be accessed
from multiple locations within your code.

private VideoStabilizationEffect _videoStabilizationEffect;
private VideoEncodingProperties _inputPropertiesBackup;
private VideoEncodingProperties _outputPropertiesBackup;
private MediaEncodingProfile _encodingProfile;

For this scenario, you should assign the media encoding profile object to a member variable so that you can access
it later.
_encodingProfile = MediaEncodingProfile.CreateMp4(VideoEncodingQuality.Auto);

Initialize the video stabilization effect


After your MediaCapture object has been initialized, create a new instance of the
VideoStabilizationEffectDefinition object. Call MediaCapture.AddVideoEffectAsync to add the effect to the
video pipeline and retrieve an instance of the VideoStabilizationEffect class. Specify
MediaStreamType.VideoRecord to indicate that the effect should be applied to the video record stream.
Register an event handler for the EnabledChanged event and call the helper method
SetUpVideoStabilizationRecommendationAsync, both of which are discussed later in this article. Finally, set
the Enabled property of the effect to true to enable the effect.

// Create the effect definition
VideoStabilizationEffectDefinition stabilizerDefinition = new VideoStabilizationEffectDefinition();

// Add the video stabilization effect to media capture
_videoStabilizationEffect =
    (VideoStabilizationEffect)await _mediaCapture.AddVideoEffectAsync(stabilizerDefinition, MediaStreamType.VideoRecord);

_videoStabilizationEffect.EnabledChanged += VideoStabilizationEffect_EnabledChanged;

await SetUpVideoStabilizationRecommendationAsync();

_videoStabilizationEffect.Enabled = true;

Use recommended encoding properties


As discussed earlier in this article, the technique that the video stabilization effect uses necessarily causes the
stabilized video to be cropped slightly from the source video. Define the following helper function in your code in
order to adjust the video encoding properties to optimally handle this limitation of the effect. This step is not
required in order to use the video stabilization effect, but if you don't perform this step, the resulting video will be
upscaled slightly and therefore have slightly lower visual fidelity.
Call GetRecommendedStreamConfiguration on your video stabilization effect instance, passing in the
VideoDeviceController object, which informs the effect about your current input stream encoding properties,
and your MediaEncodingProfile which lets the effect know your current output encoding properties. This
method returns a VideoStreamConfiguration object containing new recommended input and output stream
encoding properties.
If supported by the device, the recommended input encoding properties specify a higher resolution than the initial
settings you provided, so that there is minimal loss of resolution after the effect's cropping is applied.
Call VideoDeviceController.SetMediaStreamPropertiesAsync to set the new encoding properties. Before
setting the new properties, use the member variable to store the initial encoding properties so that you can change
the settings back when you disable the effect.
If the video stabilization effect must crop the output video, the recommended output encoding properties will be
the size of the cropped video. This means that the output resolution will match the cropped video size. If you do not
use the recommended output properties, the video will be scaled up to match the initial output size, which will
result in a loss of visual fidelity.
Set the Video property of the MediaEncodingProfile object. Before setting the new properties, use the member
variable to store the initial encoding properties so that you can change the settings back when you disable the
effect.
private async Task SetUpVideoStabilizationRecommendationAsync()
{
    // Get the recommendation from the effect based on our current input and output configuration
    var recommendation = _videoStabilizationEffect.GetRecommendedStreamConfiguration(_mediaCapture.VideoDeviceController, _encodingProfile.Video);

    // Handle the recommendation for the input into the effect, which can contain a larger
    // resolution than currently configured, so cropping is minimized
    if (recommendation.InputProperties != null)
    {
        // Back up the current input properties from before VS was activated
        _inputPropertiesBackup = _mediaCapture.VideoDeviceController.GetMediaStreamProperties(MediaStreamType.VideoRecord) as VideoEncodingProperties;

        // Set the recommendation from the effect (a resolution higher than the current one to allow for cropping) on the input
        await _mediaCapture.VideoDeviceController.SetMediaStreamPropertiesAsync(MediaStreamType.VideoRecord, recommendation.InputProperties);
        await _mediaCapture.VideoDeviceController.SetMediaStreamPropertiesAsync(MediaStreamType.VideoPreview, recommendation.InputProperties);
    }

    // Handle the recommendations for the output from the effect
    if (recommendation.OutputProperties != null)
    {
        // Back up the current output properties from before VS was activated
        _outputPropertiesBackup = _encodingProfile.Video;

        // Apply the recommended encoding profile for the output
        _encodingProfile.Video = recommendation.OutputProperties;
    }
}

Handle the video stabilization effect being disabled


The system may automatically disable the video stabilization effect if the pixel throughput is too high for the effect
to handle or if it detects that the effect is running slowly. If this occurs, the EnabledChanged event is raised. The
VideoStabilizationEffect instance in the sender parameter indicates the new state of the effect, enabled or
disabled. The VideoStabilizationEffectEnabledChangedEventArgs has a
VideoStabilizationEffectEnabledChangedReason value indicating why the effect was enabled or disabled.
Note that this event is also raised if you programmatically enable or disable the effect, in which case the reason will
be Programmatic.
Typically, you would use this event to adjust your app's UI to indicate the current status of video stabilization.

private async void VideoStabilizationEffect_EnabledChanged(VideoStabilizationEffect sender, VideoStabilizationEffectEnabledChangedEventArgs args)
{
    await Dispatcher.RunAsync(CoreDispatcherPriority.Normal, () =>
    {
        // Update your UI to reflect the change in status
        ShowMessageToUser("video stabilization status: " + sender.Enabled + ". Reason: " + args.Reason);
    });
}

Clean up the video stabilization effect


To clean up the video stabilization effect, call RemoveEffectAsync to remove the effect from the video pipeline. If
the member variables containing the initial encoding properties are not null, use them to restore the encoding
properties. Finally, remove the EnabledChanged event handler and set the effect to null.
// Remove the video stabilization effect from the pipeline
await _mediaCapture.RemoveEffectAsync(_videoStabilizationEffect);

// If backed up settings (stream properties and encoding profile) exist, restore them and clear the backups
if (_inputPropertiesBackup != null)
{
await _mediaCapture.VideoDeviceController.SetMediaStreamPropertiesAsync(MediaStreamType.VideoRecord, _inputPropertiesBackup);
_inputPropertiesBackup = null;
}

if (_outputPropertiesBackup != null)
{
_encodingProfile.Video = _outputPropertiesBackup;
_outputPropertiesBackup = null;
}

_videoStabilizationEffect.EnabledChanged -= VideoStabilizationEffect_EnabledChanged;

_videoStabilizationEffect = null;

Related topics
Camera
Basic photo, video, and audio capture with MediaCapture
Effects for analyzing camera frames

This article describes how to use the SceneAnalysisEffect and the FaceDetectionEffect to analyze the content of
the media capture preview stream.

Scene analysis effect


The SceneAnalysisEffect analyzes the video frames in the media capture preview stream and recommends
processing options to improve the capture result. Currently, the effect supports detecting whether the capture
would be improved by using High Dynamic Range (HDR) processing.
If the effect recommends using HDR, you can do this in the following ways:
Use the AdvancedPhotoCapture class to capture photos using the Windows built-in HDR processing
algorithm. For more information, see High Dynamic Range (HDR) photo capture.
Use the HdrVideoControl to capture video using the Windows built-in HDR processing algorithm. For
more information, see Capture device controls for video capture.
Use the VariablePhotoSequenceControl to capture a sequence of frames that you can then composite
using a custom HDR implementation. For more information, see Variable photo sequence.
Scene analysis namespaces
To use scene analysis, your app must include the following namespaces in addition to the namespaces required for
basic media capture.

using Windows.Media.Core;
using Windows.Media.Devices;

Initialize the scene analysis effect and add it to the preview stream
Video effects are implemented using two APIs: an effect definition, which provides settings that the capture device
needs to initialize the effect, and an effect instance, which can be used to control the effect. Since you may want to
access the effect instance from multiple places within your code, you should typically declare a member variable to
hold the object.

private SceneAnalysisEffect _sceneAnalysisEffect;

In your app, after you have initialized the MediaCapture object, create a new instance of
SceneAnalysisEffectDefinition.
Register the effect with the capture device by calling AddVideoEffectAsync on your MediaCapture object,
providing the SceneAnalysisEffectDefinition and specifying MediaStreamType.VideoPreview to indicate that
the effect should be applied to the video preview stream, as opposed to the capture stream.
AddVideoEffectAsync returns an instance of the added effect. Because this method can be used with multiple
effect types, you must cast the returned instance to a SceneAnalysisEffect object.
To receive the results of the scene analysis, you must register a handler for the SceneAnalyzed event.
Currently, the scene analysis effect only includes the high dynamic range analyzer. Enable HDR analysis by setting
the effect's HighDynamicRangeControl.Enabled to true.

// Create the definition
var definition = new SceneAnalysisEffectDefinition();

// Add the effect to the video preview stream
_sceneAnalysisEffect = (SceneAnalysisEffect)await _mediaCapture.AddVideoEffectAsync(definition, MediaStreamType.VideoPreview);

// Subscribe to notifications about scene information
_sceneAnalysisEffect.SceneAnalyzed += SceneAnalysisEffect_SceneAnalyzed;

// Enable HDR analysis
_sceneAnalysisEffect.HighDynamicRangeAnalyzer.Enabled = true;

Implement the SceneAnalyzed event handler


The results of the scene analysis are returned in the SceneAnalyzed event handler. The
SceneAnalyzedEventArgs object passed into the handler has a SceneAnalysisEffectFrame object which has a
HighDynamicRangeOutput object. The Certainty property of the high dynamic range output provides a value
between 0 and 1.0 where 0 indicates that HDR processing would not help improve the capture result and 1.0
indicates that HDR processing would help. You can decide the threshold point at which you want to use HDR or
show the results to the user and let the user decide.

private void SceneAnalysisEffect_SceneAnalyzed(SceneAnalysisEffect sender, SceneAnalyzedEventArgs args)
{
    double hdrCertainty = args.ResultFrame.HighDynamicRange.Certainty;

    // Certainty value is between 0.0 and 1.0
    if (hdrCertainty > MyCertaintyCap)
    {
        ShowMessageToUser("Enabling HDR capture is recommended.");
    }
}

The HighDynamicRangeOutput object passed into the handler also has a FrameControllers property which
contains suggested frame controllers for capturing a variable photo sequence for HDR processing. For more
information, see Variable photo sequence.
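As a rough sketch, assuming the current device supports variable photo sequences, the suggested controllers could
be copied into the variable photo sequence controller as follows, where hdrOutput stands for the
HighDynamicRangeOutput obtained in the SceneAnalyzed handler (args.ResultFrame.HighDynamicRange).

// A sketch: configure a variable photo sequence from the HDR analyzer's
// suggested frame controllers. hdrOutput is assumed to be the
// HighDynamicRangeOutput from the SceneAnalyzed event handler.
var varPhotoSeqController = _mediaCapture.VideoDeviceController.VariablePhotoSequenceController;

varPhotoSeqController.DesiredFrameControllers.Clear();
foreach (var frameController in hdrOutput.FrameControllers)
{
    varPhotoSeqController.DesiredFrameControllers.Add(frameController);
}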
Clean up the scene analysis effect
When your app is done capturing, before disposing of the MediaCapture object, you should disable the scene
analysis effect by setting the effect's HighDynamicRangeAnalyzer.Enabled property to false and unregister
your SceneAnalyzed event handler. Call MediaCapture.ClearEffectsAsync, specifying the video preview stream
since that was the stream to which the effect was added. Finally, set your member variable to null.

// Disable detection
_sceneAnalysisEffect.HighDynamicRangeAnalyzer.Enabled = false;

_sceneAnalysisEffect.SceneAnalyzed -= SceneAnalysisEffect_SceneAnalyzed;

// Remove the effect from the preview stream
await _mediaCapture.ClearEffectsAsync(MediaStreamType.VideoPreview);

// Clear the member variable that held the effect instance
_sceneAnalysisEffect = null;

Face detection effect


The FaceDetectionEffect identifies the location of faces within the media capture preview stream. The effect
allows you to receive a notification whenever a face is detected in the preview stream and provides the bounding
box for each detected face within the preview frame. On supported devices, the face detection effect also provides
enhanced exposure and focus on the most important face in the scene.
Face detection namespaces
To use face detection, your app must include the following namespaces in addition to the namespaces required for
basic media capture.

using Windows.Media.Core;

Initialize the face detection effect and add it to the preview stream
Video effects are implemented using two APIs: an effect definition, which provides settings that the capture device
needs to initialize the effect, and an effect instance, which can be used to control the effect. Since you may want to
access the effect instance from multiple places within your code, you should typically declare a member variable to
hold the object.

FaceDetectionEffect _faceDetectionEffect;

In your app, after you have initialized the MediaCapture object, create a new instance of
FaceDetectionEffectDefinition. Set the DetectionMode property to prioritize faster face detection or more
accurate face detection. Set SynchronousDetectionEnabled to false to specify that incoming frames are not delayed
waiting for face detection to complete, because waiting can result in a choppy preview experience.
Register the effect with the capture device by calling AddVideoEffectAsync on your MediaCapture object,
providing the FaceDetectionEffectDefinition and specifying MediaStreamType.VideoPreview to indicate that
the effect should be applied to the video preview stream, as opposed to the capture stream.
AddVideoEffectAsync returns an instance of the added effect. Because this method can be used with multiple
effect types, you must cast the returned instance to a FaceDetectionEffect object.
Enable or disable the effect by setting the FaceDetectionEffect.Enabled property. Adjust how often the effect
analyzes frames by setting the FaceDetectionEffect.DesiredDetectionInterval property. Both of these
properties can be adjusted while media capture is ongoing.

// Create the definition, which will contain some initialization settings
var definition = new FaceDetectionEffectDefinition();

// To ensure preview smoothness, do not delay incoming samples
definition.SynchronousDetectionEnabled = false;

// In this scenario, choose detection speed over accuracy
definition.DetectionMode = FaceDetectionMode.HighPerformance;

// Add the effect to the preview stream
_faceDetectionEffect = (FaceDetectionEffect)await _mediaCapture.AddVideoEffectAsync(definition, MediaStreamType.VideoPreview);

// Choose the shortest interval between detection events
_faceDetectionEffect.DesiredDetectionInterval = TimeSpan.FromMilliseconds(33);

// Start detecting faces
_faceDetectionEffect.Enabled = true;

Receive notifications when faces are detected


If you want to perform some action when faces are detected, such as drawing a box around detected faces in the
video preview, you can register for the FaceDetected event.
// Register for face detection events
_faceDetectionEffect.FaceDetected += FaceDetectionEffect_FaceDetected;

In the handler for the event, you can get a list of all faces detected in a frame by accessing the
FaceDetectionEffectFrame.DetectedFaces property of the FaceDetectedEventArgs. The FaceBox property is
a BitmapBounds structure that describes the rectangle containing the detected face in units relative to the
preview stream dimensions. To view sample code that transforms the preview stream coordinates into screen
coordinates, see the face detection UWP sample.

private void FaceDetectionEffect_FaceDetected(FaceDetectionEffect sender, FaceDetectedEventArgs args)
{
    foreach (Windows.Media.FaceAnalysis.DetectedFace face in args.ResultFrame.DetectedFaces)
    {
        BitmapBounds faceRect = face.FaceBox;

        // Draw a rectangle on the preview stream for each face
    }
}
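
As a rough illustration of that transformation, the following sketch scales a face rectangle from preview-stream
pixels into the coordinate space of the XAML element that displays the preview. The names previewWidth,
previewHeight, and PreviewControl are assumptions standing in for your preview format dimensions and preview
element, and the sketch ignores any letterboxing introduced by the element's Stretch mode.

// A sketch: map a FaceBox from preview-stream pixels to display coordinates.
// previewWidth/previewHeight are the dimensions of the current preview format
// and PreviewControl is the XAML element showing the preview (assumed names).
private Windows.Foundation.Rect MapFaceBoxToDisplay(BitmapBounds faceBox, double previewWidth, double previewHeight)
{
    double scaleX = PreviewControl.ActualWidth / previewWidth;
    double scaleY = PreviewControl.ActualHeight / previewHeight;

    return new Windows.Foundation.Rect(
        faceBox.X * scaleX,
        faceBox.Y * scaleY,
        faceBox.Width * scaleX,
        faceBox.Height * scaleY);
}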

Clean up the face detection effect


When your app is done capturing, before disposing of the MediaCapture object, you should disable the face
detection effect with FaceDetectionEffect.Enabled and unregister your FaceDetected event handler if you
previously registered one. Call MediaCapture.ClearEffectsAsync, specifying the video preview stream since that
was the stream to which the effect was added. Finally, set your member variable to null.

// Disable detection
_faceDetectionEffect.Enabled = false;

// Unregister the event handler
_faceDetectionEffect.FaceDetected -= FaceDetectionEffect_FaceDetected;

// Remove the effect from the preview stream
await _mediaCapture.ClearEffectsAsync(MediaStreamType.VideoPreview);

// Clear the member variable that held the effect instance
_faceDetectionEffect = null;

Check for focus and exposure support for detected faces


Not all devices have a capture device that can adjust its focus and exposure based on detected faces. Because face
detection consumes device resources, you may only want to enable face detection on devices that can use the
feature to enhance capture. To see if face-based capture optimization is available, get the VideoDeviceController
for your initialized MediaCapture and then get the video device controller's RegionsOfInterestControl. Check to
see if the MaxRegions supports at least one region. Then check to see if either AutoExposureSupported or
AutoFocusSupported are true. If these conditions are met, then the device can take advantage of face detection to
enhance capture.

var regionsControl = _mediaCapture.VideoDeviceController.RegionsOfInterestControl;

bool faceDetectionFocusAndExposureSupported =
    regionsControl.MaxRegions > 0 &&
    (regionsControl.AutoExposureSupported || regionsControl.AutoFocusSupported);

Related topics
Camera
Basic photo, video, and audio capture with MediaCapture
Variable photo sequence

This article shows you how to capture a variable photo sequence, which allows you to capture multiple frames of
images in rapid succession and configure each frame to use different focus, flash, ISO, exposure, and exposure
compensation settings. This feature enables scenarios like creating High Dynamic Range (HDR) images.
If you want to capture HDR images but don't want to implement your own processing algorithm, you can use the
AdvancedPhotoCapture API to use the HDR capabilities built-in to Windows. For more information, see High
Dynamic Range (HDR) photo capture.

NOTE
This article builds on concepts and code discussed in Basic photo, video, and audio capture with MediaCapture, which
describes the steps for implementing basic photo and video capture. It is recommended that you familiarize yourself with the
basic media capture pattern in that article before moving on to more advanced capture scenarios. The code in this article
assumes that your app already has an instance of MediaCapture that has been properly initialized.

Set up your app to use variable photo sequence capture


In addition to the namespaces required for basic media capture, implementing a variable photo sequence capture
requires the following namespaces.

using Windows.Media.Capture.Core;
using Windows.Media.Devices.Core;

Declare a member variable to store the VariablePhotoSequenceCapture object, which is used to initiate the
photo sequence capture. Declare an array of SoftwareBitmap objects to store each captured image in the
sequence. Also, declare an array to store the CapturedFrameControlValues object for each frame. This can be
used by your image processing algorithm to determine what settings were used to capture each frame. Finally,
declare an index that will be used to track which image in the sequence is currently being captured.

VariablePhotoSequenceCapture _photoSequenceCapture;
SoftwareBitmap[] _images;
CapturedFrameControlValues[] _frameControlValues;
int _photoIndex;

Prepare the variable photo sequence capture


After you have initialized your MediaCapture, make sure that variable photo sequences are supported on the
current device by getting an instance of the VariablePhotoSequenceController from the media capture's
VideoDeviceController and checking the Supported property.
var varPhotoSeqController = _mediaCapture.VideoDeviceController.VariablePhotoSequenceController;

if (!varPhotoSeqController.Supported)
{
ShowMessageToUser("Variable Photo Sequence is not supported");
return;
}

Get a FrameControlCapabilities object from the variable photo sequence controller. This object has a property
for every setting that can be configured per frame of a photo sequence. These include:
Exposure
ExposureCompensation
Flash
Focus
IsoSpeed
PhotoConfirmation
This example will set a different exposure compensation value for each frame. To verify that exposure
compensation is supported for photo sequences on the current device, check the Supported property of the
FrameExposureCompensationCapabilities object accessed through the ExposureCompensation property.

var frameCapabilities = varPhotoSeqController.FrameCapabilities;

if (!frameCapabilities.ExposureCompensation.Supported)
{
ShowMessageToUser("EVCompenstaion is not supported in FrameController");
return;
}

Create a new FrameController object for each frame you want to capture. This example captures three frames.
Set the values for the controls you want to vary for each frame. Then, clear the DesiredFrameControllers
collection of the VariablePhotoSequenceController and add each frame controller to the collection.

var frame0 = new FrameController();
var frame1 = new FrameController();
var frame2 = new FrameController();

frame0.ExposureCompensationControl.Value = -1.0f;
frame1.ExposureCompensationControl.Value = 0.0f;
frame2.ExposureCompensationControl.Value = 1.0f;

varPhotoSeqController.DesiredFrameControllers.Clear();
varPhotoSeqController.DesiredFrameControllers.Add(frame0);
varPhotoSeqController.DesiredFrameControllers.Add(frame1);
varPhotoSeqController.DesiredFrameControllers.Add(frame2);

Create an ImageEncodingProperties object to set the encoding you want to use for the captured images. Call
the static method MediaCapture.PrepareVariablePhotoSequenceCaptureAsync, passing in the encoding
properties. This method returns a VariablePhotoSequenceCapture object. Finally, register event handlers for
the PhotoCaptured and Stopped events.
try
{
var imageEncodingProperties = ImageEncodingProperties.CreateJpeg();

_photoSequenceCapture = await _mediaCapture.PrepareVariablePhotoSequenceCaptureAsync(imageEncodingProperties);

_photoSequenceCapture.PhotoCaptured += OnPhotoCaptured;
_photoSequenceCapture.Stopped += OnStopped;
}
catch (Exception ex)
{
ShowMessageToUser("Exception in PrepareVariablePhotoSequence: " + ex.Message);
}

Start the variable photo sequence capture


To start the capture of the variable photo sequence, call VariablePhotoSequenceCapture.StartAsync. Be sure to
initialize the arrays for storing the captured images and frame control values and set the current index to 0. Set
your app's recording state variable and update your UI to disable starting another capture while this capture is in
progress.

private async void StartPhotoCapture()
{
    _images = new SoftwareBitmap[3];
    _frameControlValues = new CapturedFrameControlValues[3];
    _photoIndex = 0;
    _isRecording = true;

    await _photoSequenceCapture.StartAsync();
}

Receive the captured frames


The PhotoCaptured event is raised for each captured frame. Save the frame control values and captured image
for the frame and then increment the current frame index. This example shows how to get a SoftwareBitmap
representation of each frame. For more information on using SoftwareBitmap, see Imaging.

void OnPhotoCaptured(VariablePhotoSequenceCapture s, VariablePhotoCapturedEventArgs args)
{
    _images[_photoIndex] = args.Frame.SoftwareBitmap;
    _frameControlValues[_photoIndex] = args.CapturedFrameControlValues;
    _photoIndex++;
}

Handle the completion of the variable photo sequence capture


The Stopped event is raised when all of the frames in the sequence have been captured. Update the recording
state of your app and update your UI to allow the user to initiate new captures. At this point, you can pass the
captured images and frame control values to your image processing code.

void OnStopped(object s, object e)
{
    _isRecording = false;
    MyPostProcessingFunction(_images, _frameControlValues, 3);
}

Update frame controllers
If you want to perform another variable photo sequence capture with different per frame settings, you don't need
to completely reinitialize the VariablePhotoSequenceCapture. You can either clear the
DesiredFrameControllers collection and add new frame controllers or you can modify the existing frame
controller values. The following example checks the FrameFlashCapabilities object to verify that the current
device supports flash and flash power for variable photo sequence frames. If so, each frame is updated to enable
the flash at 100% power. The exposure compensation values that were previously set for each frame are still
active.

var varPhotoSeqController = _mediaCapture.VideoDeviceController.VariablePhotoSequenceController;

if (varPhotoSeqController.FrameCapabilities.Flash.Supported &&
varPhotoSeqController.FrameCapabilities.Flash.PowerSupported)
{
for (int i = 0; i < varPhotoSeqController.DesiredFrameControllers.Count; i++)
{
varPhotoSeqController.DesiredFrameControllers[i].FlashControl.Mode = FrameFlashMode.Enable;
varPhotoSeqController.DesiredFrameControllers[i].FlashControl.PowerPercent = 100;
}
}

Clean up the variable photo sequence capture


When you are done capturing variable photo sequences or your app is suspending, clean up the variable photo
sequence object by calling FinishAsync. Unregister the object's event handlers and set it to null.

await _photoSequenceCapture.FinishAsync();
_photoSequenceCapture.PhotoCaptured -= OnPhotoCaptured;
_photoSequenceCapture.Stopped -= OnStopped;
_photoSequenceCapture = null;

Related topics
Camera
Basic photo, video, and audio capture with MediaCapture
Process media frames with MediaFrameReader

This article shows you how to use a MediaFrameReader with MediaCapture to get media frames from one or
more available sources, including color, depth, and infrared cameras, audio devices, or even custom frame sources
such as those that produce skeletal tracking frames. This feature is designed to be used by apps that perform real-
time processing of media frames, such as augmented reality and depth-aware camera apps.
If you are interested in simply capturing video or photos, such as a typical photography app, then you probably
want to use one of the other capture techniques supported by MediaCapture. For a list of available media capture
techniques and articles showing how to use them, see Camera.

NOTE
The features discussed in this article are only available starting with Windows 10, version 1607.

NOTE
There is a Universal Windows app sample that demonstrates using MediaFrameReader to display frames from different
frame sources, including color, depth, and infrared cameras. For more information, see Camera frames sample.

Setting up your project


As with any app that uses MediaCapture, you must declare that your app uses the webcam capability before
attempting to access any camera device. If your app will capture from an audio device, you should also declare the
microphone device capability.
Add capabilities to the app manifest
1. In Microsoft Visual Studio, in Solution Explorer, open the designer for the application manifest by double-
clicking the package.appxmanifest item.
2. Select the Capabilities tab.
3. Check the box for Webcam and the box for Microphone.
4. For access to the Pictures and Videos libraries, check the boxes for Pictures Library and Videos Library.
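Equivalently, if you edit the manifest XML directly, the capability declarations look roughly like the following
sketch; the exact namespace prefixes depend on your project.

<Capabilities>
  <uap:Capability Name="picturesLibrary" />
  <uap:Capability Name="videosLibrary" />
  <DeviceCapability Name="webcam" />
  <DeviceCapability Name="microphone" />
</Capabilities>
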
The example code in this article uses APIs from the following namespaces, in addition to those included by the
default project template.

using Windows.Media.Capture.Frames;
using Windows.Devices.Enumeration;
using Windows.Media.Capture;
using Windows.UI.Xaml.Media.Imaging;
using Windows.Media.MediaProperties;
using Windows.Graphics.Imaging;
using System.Threading;
using Windows.UI.Core;
using System.Threading.Tasks;
Select frame sources and frame source groups
Many apps that process media frames need to get frames from multiple sources at once, such as a device's color
and depth cameras. The MediaFrameSourceGroup object represents a set of media frame sources that can be
used simultaneously. Call the static method MediaFrameSourceGroup.FindAllAsync to get a list of all of the
groups of frame sources supported by the current device.

var frameSourceGroups = await MediaFrameSourceGroup.FindAllAsync();

You can also create a DeviceWatcher using DeviceInformation.CreateWatcher and the value returned from
MediaFrameSourceGroup.GetDeviceSelector to receive notifications when the available frame source groups
on the device change, such as when an external camera is plugged in. For more information, see Enumerate
devices.
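A minimal sketch of such a watcher, with the handler bodies left as placeholders, might look like this.

// A sketch: watch for frame source groups arriving or departing,
// such as an external camera being plugged in or unplugged.
var watcher = DeviceInformation.CreateWatcher(MediaFrameSourceGroup.GetDeviceSelector());

watcher.Added += (s, deviceInfo) => { /* a new frame source group is available */ };
watcher.Removed += (s, deviceUpdate) => { /* a frame source group was removed */ };

watcher.Start();
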
A MediaFrameSourceGroup has a collection of MediaFrameSourceInfo objects that describe the frame sources
included in the group. After retrieving the frame source groups available on the device, you can select the group
that exposes the frame sources you are interested in.
The following example shows the simplest way to select a frame source group. This code simply loops over all of
the available groups and then loops over each item in the SourceInfos collection. Each MediaFrameSourceInfo is
checked to see if it supports the features we are seeking. In this case, the MediaStreamType property is checked
for the value VideoPreview, meaning the device provides a video preview stream, and the SourceKind property is
checked for the value Color, indicating that the source provides color frames.

var frameSourceGroups = await MediaFrameSourceGroup.FindAllAsync();

MediaFrameSourceGroup selectedGroup = null;
MediaFrameSourceInfo colorSourceInfo = null;

foreach (var sourceGroup in frameSourceGroups)
{
    foreach (var sourceInfo in sourceGroup.SourceInfos)
    {
        if (sourceInfo.MediaStreamType == MediaStreamType.VideoPreview
            && sourceInfo.SourceKind == MediaFrameSourceKind.Color)
        {
            colorSourceInfo = sourceInfo;
            break;
        }
    }
    if (colorSourceInfo != null)
    {
        selectedGroup = sourceGroup;
        break;
    }
}

This method of identifying the desired frame source group and frame sources works for simple cases, but if you
want to select frame sources based on more complex criteria, it can quickly become cumbersome. Another method
is to use Linq syntax and anonymous objects to make the selection. The following example uses the Select
extension method to transform the MediaFrameSourceGroup objects in the frameSourceGroups list into an
anonymous object with two fields: sourceGroup, representing the group itself, and colorSourceInfo, which
represents the color frame source in the group. The colorSourceInfo field is set to the result of FirstOrDefault,
which selects the first object for which the provided predicate resolves to true. In this case, the predicate is true if
the stream type is VideoPreview, the source kind is Color, and if the camera is on the front panel of the device.
From the list of anonymous objects returned from the query described above, the Where extension method is used
to select only those objects where the colorSourceInfo field is not null. Finally, FirstOrDefault is called to select the
first item in the list.
Now you can use the fields of the selected object to get references to the selected MediaFrameSourceGroup and
the MediaFrameSourceInfo object representing the color camera. These will be used later to initialize the
MediaCapture object and create a MediaFrameReader for the selected source. Finally, you should test to see if
the source group is null, meaning the current device doesn't have your requested capture sources.

var selectedGroupObjects = frameSourceGroups.Select(group =>
    new
    {
        sourceGroup = group,
        colorSourceInfo = group.SourceInfos.FirstOrDefault((sourceInfo) =>
        {
            return sourceInfo.MediaStreamType == MediaStreamType.VideoPreview
                && sourceInfo.SourceKind == MediaFrameSourceKind.Color
                && sourceInfo.DeviceInformation?.EnclosureLocation.Panel == Windows.Devices.Enumeration.Panel.Front;
        })
    }).Where(t => t.colorSourceInfo != null)
    .FirstOrDefault();

MediaFrameSourceGroup selectedGroup = selectedGroupObjects?.sourceGroup;
MediaFrameSourceInfo colorSourceInfo = selectedGroupObjects?.colorSourceInfo;

if (selectedGroup == null)
{
    return;
}

The following example uses a similar technique as described above to select a source group that contains color,
depth, and infrared cameras.

var allGroups = await MediaFrameSourceGroup.FindAllAsync();
var eligibleGroups = allGroups.Select(g => new
{
    Group = g,

    // For each source kind, find the source which offers that kind of media frame,
    // or null if there is no such source.
    SourceInfos = new MediaFrameSourceInfo[]
    {
        g.SourceInfos.FirstOrDefault(info => info.SourceKind == MediaFrameSourceKind.Color),
        g.SourceInfos.FirstOrDefault(info => info.SourceKind == MediaFrameSourceKind.Depth),
        g.SourceInfos.FirstOrDefault(info => info.SourceKind == MediaFrameSourceKind.Infrared),
    }
}).Where(g => g.SourceInfos.Any(info => info != null)).ToList();

if (eligibleGroups.Count == 0)
{
    System.Diagnostics.Debug.WriteLine("No source group with color, depth or infrared found.");
    return;
}

var selectedGroupIndex = 0; // Select the first eligible group

MediaFrameSourceGroup selectedGroup = eligibleGroups[selectedGroupIndex].Group;
MediaFrameSourceInfo colorSourceInfo = eligibleGroups[selectedGroupIndex].SourceInfos[0];
MediaFrameSourceInfo depthSourceInfo = eligibleGroups[selectedGroupIndex].SourceInfos[1];
MediaFrameSourceInfo infraredSourceInfo = eligibleGroups[selectedGroupIndex].SourceInfos[2];

Initialize the MediaCapture object to use the selected frame source group

The next step is to initialize the MediaCapture object to use the frame source group you selected in the previous
step.
The MediaCapture object is typically used from multiple locations within your app, so you should declare a class
member variable to hold it.

MediaCapture _mediaCapture;

Create an instance of the MediaCapture object by calling the constructor. Next, create a MediaCaptureSettings
object that will be used to initialize the MediaCapture object. In this example, the following settings are used:
SourceGroup - This tells the system which source group you will be using to get frames. Remember that the
source group defines a set of media frame sources that can be used simultaneously.
SharingMode - This tells the system whether you need exclusive control over the capture source devices. If you
set this to ExclusiveControl, it means that you can change the settings of the capture device, such as the format
of the frames it produces, but this means that if another app already has exclusive control, your app will fail
when it tries to initialize the media capture device. If you set this to SharedReadOnly, you can receive frames
from the frame sources even if they are in use by another app, but you can't change the settings for the devices.
MemoryPreference - If you specify CPU, the system will use CPU memory which guarantees that when frames
arrive, they will be available as SoftwareBitmap objects. If you specify Auto, the system will dynamically
choose the optimal memory location to store frames. If the system chooses to use GPU memory, the media
frames will arrive as an IDirect3DSurface object and not as a SoftwareBitmap.
StreamingCaptureMode - Set this to Video to indicate that audio doesn't need to be streamed.
Call InitializeAsync to initialize the MediaCapture with your desired settings. Be sure to call this within a try block
in case initialization fails.

_mediaCapture = new MediaCapture();

var settings = new MediaCaptureInitializationSettings()
{
    SourceGroup = selectedGroup,
    SharingMode = MediaCaptureSharingMode.ExclusiveControl,
    MemoryPreference = MediaCaptureMemoryPreference.Cpu,
    StreamingCaptureMode = StreamingCaptureMode.Video
};

try
{
    await _mediaCapture.InitializeAsync(settings);
}
catch (Exception ex)
{
    System.Diagnostics.Debug.WriteLine("MediaCapture initialization failed: " + ex.Message);
    return;
}

Set the preferred format for the frame source


To set the preferred format for a frame source, you need to get a MediaFrameSource object representing the
source. You get this object by accessing the FrameSources dictionary of the initialized MediaCapture object, specifying
the identifier of the frame source you want to use. This is why we saved the MediaFrameSourceInfo object when
we were selecting a frame source group.
The MediaFrameSource.SupportedFormats property contains a list of MediaFrameFormat objects describing
the supported formats for the frame source. Use the Where Linq extension method to select a format based on
desired properties. In this example, a format is selected that has a width of at least 1080 pixels and can supply frames in
32-bit RGB format. The FirstOrDefault extension method selects the first entry in the list. If the selected format is
null, then the requested format is not supported by the frame source. If the format is supported, you can request
that the source use this format by calling SetFormatAsync.

var colorFrameSource = _mediaCapture.FrameSources[colorSourceInfo.Id];

var preferredFormat = colorFrameSource.SupportedFormats.Where(format =>
{
    return format.VideoFormat.Width >= 1080
        && format.Subtype == MediaEncodingSubtypes.Argb32;
}).FirstOrDefault();

if (preferredFormat == null)
{
// Our desired format is not supported
return;
}

await colorFrameSource.SetFormatAsync(preferredFormat);

Create a frame reader for the frame source


To receive frames for a media frame source, use a MediaFrameReader.

MediaFrameReader _mediaFrameReader;

Instantiate the frame reader by calling CreateFrameReaderAsync on your initialized MediaCapture object. The
first argument to this method is the frame source from which you want to receive frames. You can create a separate
frame reader for each frame source you want to use. The second argument tells the system the output format in
which you want frames to arrive. This can save you from having to do your own conversions of frames as they
arrive. Note that if you specify a format that is not supported by the frame source, an exception will be thrown, so
be sure that this value is in the SupportedFormats collection.
After creating the frame reader, register a handler for the FrameArrived event which is raised whenever a new
frame is available from the source.
Tell the system to start reading frames from the source by calling StartAsync.

_mediaFrameReader = await _mediaCapture.CreateFrameReaderAsync(colorFrameSource, MediaEncodingSubtypes.Argb32);

_mediaFrameReader.FrameArrived += ColorFrameReader_FrameArrived;
await _mediaFrameReader.StartAsync();
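
Because an unsupported output format causes CreateFrameReaderAsync to throw, a minimal defensive sketch,
following the same pattern as the initialization code shown earlier, might wrap the call in a try block:

try
{
    _mediaFrameReader = await _mediaCapture.CreateFrameReaderAsync(colorFrameSource, MediaEncodingSubtypes.Argb32);
}
catch (Exception ex)
{
    // The requested output format isn't supported by this frame source.
    System.Diagnostics.Debug.WriteLine("Frame reader creation failed: " + ex.Message);
    return;
}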

Handle the frame arrived event


The MediaFrameReader.FrameArrived event is raised whenever a new frame is available. You can choose to
process every frame that arrives or only use frames when you need them. Because the frame reader raises the
event on its own thread, you may need to implement some synchronization logic to make sure that you aren't
attempting to access the same data from multiple threads. This section shows you how to synchronize drawing
color frames to an image control in a XAML page. This scenario addresses the additional synchronization constraint
that requires all updates to XAML controls be performed on the UI thread.
The first step in displaying frames in XAML is to create an Image control.

<Image x:Name="_imageElement" Width="320" Height="240"/>


In your code behind page, declare a class member variable of type SoftwareBitmap which will be used as a back
buffer that all incoming images will be copied to. Note that the image data itself isn't copied, just the object
references. Also, declare a boolean to track whether our UI operation is currently running.

private SoftwareBitmap _backBuffer;
private bool _taskRunning = false;

Because the frames will arrive as SoftwareBitmap objects, you need to create a SoftwareBitmapSource object
which allows you to use a SoftwareBitmap as the source for a XAML Control. You should set the image source
somewhere in your code before you start the frame reader.

_imageElement.Source = new SoftwareBitmapSource();

Now it's time to implement the FrameArrived event handler. When the handler is called, the sender parameter
contains a reference to the MediaFrameReader object that raised the event. Call TryAcquireLatestFrame on
this object to attempt to get the latest frame. As the name implies, TryAcquireLatestFrame may not succeed in
returning a frame. So, when you access the VideoMediaFrame and then SoftwareBitmap properties, be sure to test
for null. In this example the null conditional operator ? is used to access the SoftwareBitmap and then the retrieved
object is checked for null.
The Image control can only display images in BGRA8 format with either pre-multiplied or no alpha. If the arriving
frame is not in that format, the static method Convert is used to convert the software bitmap to the correct format.
Next, the Interlocked.Exchange method is used to swap the reference to the arriving bitmap with the back buffer
bitmap. This method swaps the references in an atomic, thread-safe operation. After swapping, the old
back buffer image, now in the softwareBitmap variable, is disposed of to clean up its resources.
Next, the CoreDispatcher associated with the Image element is used to create a task that will run on the UI thread
by calling RunAsync. Because asynchronous calls will be made within the task, the lambda expression
passed to RunAsync is declared with the async keyword.
Within the task, the _taskRunning variable is checked to make sure that only one instance of the task is running at a
time. If the task isn't already running, _taskRunning is set to true to prevent the task from running again. In a while
loop, Interlocked.Exchange is called to copy from the back buffer into a temporary SoftwareBitmap until the
back buffer image is null. Each time the temporary bitmap is populated, the Source property of the Image is
cast to a SoftwareBitmapSource, and then SetBitmapAsync is called to set the source of the image.
Finally, the _taskRunning variable is set back to false so that the task can be run again the next time the handler is
called.

NOTE
If you access the SoftwareBitmap or Direct3DSurface objects provided by the VideoMediaFrame property of a
MediaFrameReference, the system creates a strong reference to these objects, which means that they will not be disposed
when you call Dispose on the containing MediaFrameReference. You must explicitly call the Dispose method of the
SoftwareBitmap or Direct3DSurface directly for the objects to be immediately disposed. Otherwise, the garbage collector
will eventually free the memory for these objects, but you can't know when this will occur, and if the number of allocated
bitmaps or surfaces exceeds the maximum amount allowed by the system, the flow of new frames will stop.
private void ColorFrameReader_FrameArrived(MediaFrameReader sender, MediaFrameArrivedEventArgs args)
{
    var mediaFrameReference = sender.TryAcquireLatestFrame();
    var videoMediaFrame = mediaFrameReference?.VideoMediaFrame;
    var softwareBitmap = videoMediaFrame?.SoftwareBitmap;

    if (softwareBitmap != null)
    {
        if (softwareBitmap.BitmapPixelFormat != Windows.Graphics.Imaging.BitmapPixelFormat.Bgra8 ||
            softwareBitmap.BitmapAlphaMode != Windows.Graphics.Imaging.BitmapAlphaMode.Premultiplied)
        {
            softwareBitmap = SoftwareBitmap.Convert(softwareBitmap, BitmapPixelFormat.Bgra8, BitmapAlphaMode.Premultiplied);
        }

        // Swap the processed frame to _backBuffer and dispose of the unused image.
        softwareBitmap = Interlocked.Exchange(ref _backBuffer, softwareBitmap);
        softwareBitmap?.Dispose();

        // Changes to the XAML Image element must happen on the UI thread through the Dispatcher.
        var task = _imageElement.Dispatcher.RunAsync(CoreDispatcherPriority.Normal,
            async () =>
            {
                // Don't let two copies of this task run at the same time.
                if (_taskRunning)
                {
                    return;
                }
                _taskRunning = true;

                // Keep draining frames from the back buffer until the back buffer is empty.
                SoftwareBitmap latestBitmap;
                while ((latestBitmap = Interlocked.Exchange(ref _backBuffer, null)) != null)
                {
                    var imageSource = (SoftwareBitmapSource)_imageElement.Source;
                    await imageSource.SetBitmapAsync(latestBitmap);
                    latestBitmap.Dispose();
                }

                _taskRunning = false;
            });
    }

    // TryAcquireLatestFrame may return null, so use the null-conditional operator before disposing.
    mediaFrameReference?.Dispose();
}

Clean up resources
When you are done reading frames, be sure to stop the media frame reader by calling StopAsync, unregistering
the FrameArrived handler, and disposing of the MediaCapture object.

await _mediaFrameReader.StopAsync();
_mediaFrameReader.FrameArrived -= ColorFrameReader_FrameArrived;
_mediaCapture.Dispose();
_mediaCapture = null;

For more information about cleaning up media capture objects when your application is suspended, see Display
the camera preview.

The FrameRenderer helper class


The Universal Windows Camera frames sample provides a helper class that makes it easy to display the frames
from color, infrared, and depth sources in your app. Typically, you will want to do something more with depth and
infrared data than just display it to the screen, but this helper class is a convenient tool for demonstrating the frame
reader feature and for debugging your own frame reader implementation.
The FrameRenderer helper class implements the following methods.
FrameRenderer constructor - The constructor initializes the helper class to use the XAML Image element you
pass in for displaying media frames.
ProcessFrame - This method displays a media frame, represented by a MediaFrameReference, in the Image
element you passed into the constructor. You should typically call this method from your FrameArrived event
handler, passing in the frame returned by TryAcquireLatestFrame.
ConvertToDisplayableImage - This method checks the format of the media frame and, if necessary, converts
it to a displayable format. For color images, this means making sure that the color format is BGRA8 and that the
bitmap alpha mode is premultiplied. For depth or infrared frames, each scanline is processed to convert the
depth or infrared values to a pseudo-color gradient, using the PseudoColorHelper class that is also included in
the sample and listed below.

NOTE
In order to do pixel manipulation on SoftwareBitmap images, you must access a native memory buffer. To do this, you must
use the IMemoryBufferByteAccess COM interface included in the code listing below and you must update your project
properties to allow compilation of unsafe code. For more information, see Create, edit, and save bitmap images.

[ComImport]
[Guid("5B0D3235-4DBA-4D44-865E-8F1D0E4FD04D")]
[InterfaceType(ComInterfaceType.InterfaceIsIUnknown)]
unsafe interface IMemoryBufferByteAccess
{
void GetBuffer(out byte* buffer, out uint capacity);
}

class FrameRenderer
{
private Image _imageElement;
private SoftwareBitmap _backBuffer;
private bool _taskRunning = false;

public FrameRenderer(Image imageElement)
{
_imageElement = imageElement;
_imageElement.Source = new SoftwareBitmapSource();
}

public void ProcessFrame(MediaFrameReference frame)
{
var softwareBitmap = FrameRenderer.ConvertToDisplayableImage(frame?.VideoMediaFrame);

if (softwareBitmap != null)
{
// Swap the processed frame to _backBuffer and trigger the UI thread to render it.
softwareBitmap = Interlocked.Exchange(ref _backBuffer, softwareBitmap);

// The UI thread always resets _backBuffer before using it, so dispose of the unused bitmap.
softwareBitmap?.Dispose();

// Changes to the XAML Image element must happen on the UI thread through the Dispatcher.
var task = _imageElement.Dispatcher.RunAsync(CoreDispatcherPriority.Normal,
async () =>
{
// Don't let two copies of this task run at the same time.
if (_taskRunning)
{
return;
}
_taskRunning = true;

// Keep draining frames from the backbuffer until the backbuffer is empty.
SoftwareBitmap latestBitmap;
while ((latestBitmap = Interlocked.Exchange(ref _backBuffer, null)) != null)
{
var imageSource = (SoftwareBitmapSource)_imageElement.Source;
await imageSource.SetBitmapAsync(latestBitmap);
latestBitmap.Dispose();
}

_taskRunning = false;
});
}

frame?.Dispose();
}

// Function delegate that transforms a scanline from an input image to an output image.
private unsafe delegate void TransformScanline(int pixelWidth, byte* inputRowBytes, byte* outputRowBytes);

/// <summary>
/// Converts a frame to a SoftwareBitmap of a valid format to display in an Image control.
/// </summary>
/// <param name="inputFrame">Frame to convert.</param>
public static unsafe SoftwareBitmap ConvertToDisplayableImage(VideoMediaFrame inputFrame)
{
SoftwareBitmap result = null;
using (var inputBitmap = inputFrame?.SoftwareBitmap)
{
if (inputBitmap != null)
{
if (inputBitmap.BitmapPixelFormat == BitmapPixelFormat.Bgra8 &&
inputBitmap.BitmapAlphaMode == BitmapAlphaMode.Premultiplied)
{
// SoftwareBitmap is already in the correct format for an Image control, so just return a copy.
result = SoftwareBitmap.Copy(inputBitmap);
}
else if (inputBitmap.BitmapPixelFormat == BitmapPixelFormat.Gray16)
{
string subtype = inputFrame.VideoFormat.MediaFrameFormat.Subtype;
if (string.Equals(subtype, "D16", StringComparison.OrdinalIgnoreCase))
{
// Use a special pseudo color to render 16 bits depth frame.
result = TransformBitmap(inputBitmap, PseudoColorHelper.PseudoColorForDepth);
}
else
{
// Use pseudo color to render 16 bits frames.
result = TransformBitmap(inputBitmap, PseudoColorHelper.PseudoColorFor16BitInfrared);
}
}
else if (inputBitmap.BitmapPixelFormat == BitmapPixelFormat.Gray8)
{
// Use pseudo color to render 8 bits frames.
result = TransformBitmap(inputBitmap, PseudoColorHelper.PseudoColorFor8BitInfrared);
}
else
{
try
{
// Convert to Bgra8 Premultiplied SoftwareBitmap, so xaml can display in UI.
result = SoftwareBitmap.Convert(inputBitmap, BitmapPixelFormat.Bgra8, BitmapAlphaMode.Premultiplied);
}
catch (ArgumentException exception)
{
// Conversion of software bitmap format is not supported. Drop this frame.
System.Diagnostics.Debug.WriteLine(exception.Message);
}

}
}
}
return result;
}

/// <summary>
/// Transform image into Bgra8 image using given transform method.
/// </summary>
/// <param name="softwareBitmap">Input image to transform.</param>
/// <param name="transformScanline">Method to map pixels in a scanline.</param>
private static unsafe SoftwareBitmap TransformBitmap(SoftwareBitmap softwareBitmap, TransformScanline transformScanline)
{
// XAML Image control only supports premultiplied Bgra8 format.
var outputBitmap = new SoftwareBitmap(BitmapPixelFormat.Bgra8,
softwareBitmap.PixelWidth, softwareBitmap.PixelHeight, BitmapAlphaMode.Premultiplied);

using (var input = softwareBitmap.LockBuffer(BitmapBufferAccessMode.Read))
using (var output = outputBitmap.LockBuffer(BitmapBufferAccessMode.Write))
{
// Get stride values to calculate buffer position for a given pixel x and y position.
int inputStride = input.GetPlaneDescription(0).Stride;
int outputStride = output.GetPlaneDescription(0).Stride;
int pixelWidth = softwareBitmap.PixelWidth;
int pixelHeight = softwareBitmap.PixelHeight;

using (var outputReference = output.CreateReference())
using (var inputReference = input.CreateReference())
{
// Get input and output byte access buffers.
byte* inputBytes;
uint inputCapacity;
((IMemoryBufferByteAccess)inputReference).GetBuffer(out inputBytes, out inputCapacity);
byte* outputBytes;
uint outputCapacity;
((IMemoryBufferByteAccess)outputReference).GetBuffer(out outputBytes, out outputCapacity);

// Iterate over all pixels and store converted value.
for (int y = 0; y < pixelHeight; y++)
{
byte* inputRowBytes = inputBytes + y * inputStride;
byte* outputRowBytes = outputBytes + y * outputStride;

transformScanline(pixelWidth, inputRowBytes, outputRowBytes);
}
}
}
return outputBitmap;
}

/// <summary>
/// A helper class to manage look-up-table for pseudo-colors.
/// </summary>
private static class PseudoColorHelper
{
#region Constructor, private members and methods

private const int TableSize = 1024; // Look up table size
private static readonly uint[] PseudoColorTable;
private static readonly uint[] InfraredRampTable;

// Color palette mapping value from 0 to 1 to blue to red colors.
private static readonly Color[] ColorRamp =
{
Color.FromArgb(a:0xFF, r:0x7F, g:0x00, b:0x00),
Color.FromArgb(a:0xFF, r:0xFF, g:0x00, b:0x00),
Color.FromArgb(a:0xFF, r:0xFF, g:0x7F, b:0x00),
Color.FromArgb(a:0xFF, r:0xFF, g:0xFF, b:0x00),
Color.FromArgb(a:0xFF, r:0x7F, g:0xFF, b:0x7F),
Color.FromArgb(a:0xFF, r:0x00, g:0xFF, b:0xFF),
Color.FromArgb(a:0xFF, r:0x00, g:0x7F, b:0xFF),
Color.FromArgb(a:0xFF, r:0x00, g:0x00, b:0xFF),
Color.FromArgb(a:0xFF, r:0x00, g:0x00, b:0x7F),
};

static PseudoColorHelper()
{
PseudoColorTable = InitializePseudoColorLut();
InfraredRampTable = InitializeInfraredRampLut();
}

/// <summary>
/// Maps an input infrared value between [0, 1] to a corrected value between [0, 1].
/// </summary>
/// <param name="value">Input value between [0, 1].</param>
[MethodImpl(MethodImplOptions.AggressiveInlining)] // Tell the compiler to inline this method to improve performance
private static uint InfraredColor(float value)
{
int index = (int)(value * TableSize);
index = index < 0 ? 0 : index > TableSize - 1 ? TableSize - 1 : index;
return InfraredRampTable[index];
}

/// <summary>
/// Initializes the pseudo-color look up table for infrared pixels
/// </summary>
private static uint[] InitializeInfraredRampLut()
{
uint[] lut = new uint[TableSize];
for (int i = 0; i < TableSize; i++)
{
var value = (float)i / TableSize;
// Adjust to increase color change between lower values in infrared images
var alpha = (float)Math.Pow(1 - value, 12);
lut[i] = ColorRampInterpolation(alpha);
}
return lut;
}

/// <summary>
/// Initializes pseudo-color look up table for depth pixels
/// </summary>
private static uint[] InitializePseudoColorLut()
{
uint[] lut = new uint[TableSize];
for (int i = 0; i < TableSize; i++)
{
lut[i] = ColorRampInterpolation((float)i / TableSize);
}
return lut;
}

/// <summary>
/// Maps a float value to a pseudo-color pixel
/// </summary>
private static uint ColorRampInterpolation(float value)
{
// Map value to surrounding indexes on the color ramp
int rampSteps = ColorRamp.Length - 1;
float scaled = value * rampSteps;
int integer = (int)scaled;
int index =
integer < 0 ? 0 :
integer >= rampSteps - 1 ? rampSteps - 1 :
integer;
Color prev = ColorRamp[index];
Color next = ColorRamp[index + 1];

// Set color based on ratio of closeness between the surrounding colors
uint alpha = (uint)((scaled - integer) * 255);
uint beta = 255 - alpha;
return
((prev.A * beta + next.A * alpha) / 255) << 24 | // Alpha
((prev.R * beta + next.R * alpha) / 255) << 16 | // Red
((prev.G * beta + next.G * alpha) / 255) << 8 | // Green
((prev.B * beta + next.B * alpha) / 255); // Blue
}

/// <summary>
/// Maps a value in [0, 1] to a pseudo RGBA color.
/// </summary>
/// <param name="value">Input value between [0, 1].</param>
[MethodImpl(MethodImplOptions.AggressiveInlining)]
private static uint PseudoColor(float value)
{
int index = (int)(value * TableSize);
index = index < 0 ? 0 : index > TableSize - 1 ? TableSize - 1 : index;
return PseudoColorTable[index];
}

#endregion

/// <summary>
/// Maps each pixel in a scanline from a 16-bit depth value to a pseudo-color pixel.
/// </summary>
/// <param name="pixelWidth">Width of the input scanline, in pixels.</param>
/// <param name="inputRowBytes">Pointer to the start of the input scanline.</param>
/// <param name="outputRowBytes">Pointer to the start of the output scanline.</param>
public static unsafe void PseudoColorForDepth(int pixelWidth, byte* inputRowBytes, byte* outputRowBytes)
{
// Visualize space in front of your desktop.
const ushort min = 500; // 0.5 meters
const ushort max = 4000; // 4 meters
const float one_min = 1.0f / min;
const float range = 1.0f / max - one_min;

ushort* inputRow = (ushort*)inputRowBytes;
uint* outputRow = (uint*)outputRowBytes;
for (int x = 0; x < pixelWidth; x++)
{
var value = inputRow[x];

if (value == 0)
{
// Map invalid depth values to transparent pixels.
// This happens when depth information cannot be calculated, e.g. when objects are too close.
outputRow[x] = 0;
}
else
{
var alpha = (1.0f / value - one_min) / range;
outputRow[x] = PseudoColor(alpha * alpha);
}
}
}

/// <summary>
/// Maps each pixel in a scanline from an 8-bit infrared value to a pseudo-color pixel.
/// </summary>
/// <param name="pixelWidth">Width of the input scanline, in pixels.</param>
/// <param name="inputRowBytes">Pointer to the start of the input scanline.</param>
/// <param name="outputRowBytes">Pointer to the start of the output scanline.</param>
public static unsafe void PseudoColorFor8BitInfrared(
int pixelWidth, byte* inputRowBytes, byte* outputRowBytes)
{
byte* inputRow = inputRowBytes;
uint* outputRow = (uint*)outputRowBytes;
for (int x = 0; x < pixelWidth; x++)
{
outputRow[x] = InfraredColor(inputRow[x] / (float)Byte.MaxValue);
}
}

/// <summary>
/// Maps each pixel in a scanline from a 16-bit infrared value to a pseudo-color pixel.
/// </summary>
/// <param name="pixelWidth">Width of the input scanline.</param>
/// <param name="inputRowBytes">Pointer to the start of the input scanline.</param>
/// <param name="outputRowBytes">Pointer to the start of the output scanline.</param>
public static unsafe void PseudoColorFor16BitInfrared(int pixelWidth, byte* inputRowBytes, byte* outputRowBytes)
{
ushort* inputRow = (ushort*)inputRowBytes;
uint* outputRow = (uint*)outputRowBytes;
for (int x = 0; x < pixelWidth; x++)
{
outputRow[x] = InfraredColor(inputRow[x] / (float)UInt16.MaxValue);
}
}
}
}

Related topics
Camera
Basic photo, video, and audio capture with MediaCapture
Camera frames sample
Get a preview frame

[ Updated for UWP apps on Windows 10. For Windows 8.x articles, see the archive ]
This topic shows you how to get a single preview frame from the media capture preview stream.

NOTE
This article builds on concepts and code discussed in Basic photo, video, and audio capture with MediaCapture, which
describes the steps for implementing basic photo and video capture. We recommend that you familiarize yourself with the
basic media capture pattern in that article before moving on to more advanced capture scenarios. The code in this article
assumes that your app already has an instance of MediaCapture that has been properly initialized, and that you have a
CaptureElement with an active video preview stream.

In addition to the namespaces required for basic media capture, capturing a preview frame requires the following
namespace.

using Windows.Media;

When you request a preview frame, you can specify the format in which you would like to receive the frame by
creating a VideoFrame object with the format you desire. This example creates a video frame that is the same
resolution as the preview stream by calling VideoDeviceController.GetMediaStreamProperties and specifying
MediaStreamType.VideoPreview to request the properties for the preview stream. The width and height of the
preview stream are used to create the new video frame.

// Get information about the preview.
var previewProperties = _mediaCapture.VideoDeviceController.GetMediaStreamProperties(MediaStreamType.VideoPreview)
    as VideoEncodingProperties;

// Create a video frame in the desired format for the preview frame.
VideoFrame videoFrame = new VideoFrame(BitmapPixelFormat.Bgra8, (int)previewProperties.Width, (int)previewProperties.Height);

If your MediaCapture object is initialized and you have an active preview stream, call GetPreviewFrameAsync to
get a preview frame. Pass in the video frame created in the last step to specify the format of the returned frame.

VideoFrame previewFrame = await _mediaCapture.GetPreviewFrameAsync(videoFrame);

Get a SoftwareBitmap representation of the preview frame by accessing the SoftwareBitmap property of the
VideoFrame object. For information about saving, loading, and modifying software bitmaps, see Imaging.

SoftwareBitmap previewBitmap = previewFrame.SoftwareBitmap;

You can also get an IDirect3DSurface representation of the preview frame if you want to use the image with
Direct3D APIs.

var previewSurface = previewFrame.Direct3DSurface;


IMPORTANT
Either the SoftwareBitmap property or the Direct3DSurface property of the returned VideoFrame may be null depending
on how you call GetPreviewFrameAsync and also depending on the device on which your app is running.
If you call the overload of GetPreviewFrameAsync that accepts a VideoFrame argument, the returned VideoFrame
will have a non-null SoftwareBitmap and the Direct3DSurface property will be null.
If you call the overload of GetPreviewFrameAsync that has no arguments on a device that uses a Direct3D surface to
represent the frame internally, the Direct3DSurface property will be non-null and the SoftwareBitmap property will be
null.
If you call the overload of GetPreviewFrameAsync that has no arguments on a device that does not use a Direct3D
surface to represent the frame internally, the SoftwareBitmap property will be non-null and the Direct3DSurface
property will be null.

Your app should always check for a null value before trying to operate on the objects returned by the
SoftwareBitmap or Direct3DSurface properties.
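
As a minimal sketch, a null check before using either representation might look like this, assuming the
previewFrame variable from the examples above:

if (previewFrame.SoftwareBitmap != null)
{
    // Operate on the SoftwareBitmap representation.
}
else if (previewFrame.Direct3DSurface != null)
{
    // Operate on the Direct3DSurface representation.
}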
When you are done using the preview frame, be sure to call its Close method (projected to Dispose in C#) to free
the resources used by the frame. Or, use the using pattern, which automatically disposes of the object.

previewFrame.Dispose();
previewFrame = null;
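
As a brief sketch of the using pattern, reusing the videoFrame variable from the earlier example, the frame is
disposed automatically when the block exits:

using (VideoFrame frame = await _mediaCapture.GetPreviewFrameAsync(videoFrame))
{
    // Work with frame.SoftwareBitmap here; the frame is disposed when the block exits.
    SoftwareBitmap previewBitmap = frame.SoftwareBitmap;
}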

Related topics
Camera
Basic photo, video, and audio capture with MediaCapture
Media playback

[ Updated for UWP apps on Windows 10. For Windows 8.x articles, see the archive ]
This section provides information on creating Universal Windows apps that play back audio and video.

Media playback developer features


The following table lists the how-to articles that provide detailed guidance for adding media playback features to
your app.


Play audio and video with MediaPlayer This article shows you how to take advantage of the new
features and improvements to the media playback system for
UWP apps. Starting with Windows 10, version 1607, the
recommended best practice for playing media is to use the
MediaPlayer class instead of MediaElement for media
playback. The lightweight XAML control,
MediaPlayerElement, has been introduced to allow you to
render media content in a XAML page. MediaPlayer
provides several advantages including automatic integration
with the System Media Transport Controls and a simpler,
one-process model for background audio. This article also
shows you how to render video to a
Windows.UI.Composition surface and how to use a
MediaTimelineController to synchronize multiple media
players.

Media items, playlists, and tracks This article shows you how to use the MediaSource class,
which provides a common way to reference and play back
media from different sources such as local or remote files and
exposes a common model for accessing media data,
regardless of the underlying media format. The
MediaPlaybackItem class extends the functionality of
MediaSource, allowing you to manage and select from
multiple audio, video, and metadata tracks contained in a
media item. MediaPlaybackList allows you to create
playback lists from one or more media playback items.

Integrate with the System Media Transport Controls This article shows you how to integrate your app with the
System Media Transport Controls (SMTC). Starting with
Windows 10, version 1607, every instance of MediaPlayer
that you create to play media is automatically displayed by
the SMTC. This article shows you how to provide the SMTC
with metadata about the content you are playing and how to
augment or completely override the default behavior of
SMTC controls.

Create, schedule, and manage media breaks This article shows you how to create, schedule, and manage
media breaks in your media playback app. Starting with
Windows 10, version 1607, you can use the
MediaBreakManager class to quickly and easily add media
breaks to any MediaPlaybackItem that you play with a
MediaPlayer. Media breaks are typically used to insert
audio or video ads into media content. Once you schedule
one or more media breaks, the system will automatically play
your media breaks at the specified time during playback.
The MediaBreakManager provides events so that your app
can react when media breaks start, end, or when they are
skipped by the user. You can also access a
MediaPlaybackSession for your media breaks to monitor
events like download and buffering progress updates.

Play media in the background This article shows you how to configure your app so that
media continues to play when your app moves from the
foreground to the background. This means that even after
the user has minimized your app, returned to the home
screen, or has navigated away from your app in some other
way, your app can continue to play audio. With Windows 10,
version 1607, a new single-process model for background
media playback has been introduced that is much quicker
and easier to implement than the legacy two-process model.
This article includes information on handling the new
application lifecycle events EnteredBackground and
LeavingBackground to manage your app's memory usage
while running in the background.

Adaptive Streaming This article describes how to add playback of adaptive
streaming multimedia content to Universal Windows
Platform (UWP) apps. This feature currently supports
playback of HTTP Live Streaming (HLS) and Dynamic
Adaptive Streaming over HTTP (DASH) content.

Media casting This article shows you how to cast media to remote devices
from a Universal Windows app.

PlayReady DRM This topic describes how to add PlayReady protected media
content to your Universal Windows Platform (UWP) app.

PlayReady Encrypted Media Extension This section describes how to modify your PlayReady Web
app to support the changes made from the previous
Windows 8.1 version to the Windows 10 version.

Media playback SDK samples


The following SDK samples demonstrate the media playback features available to UWP apps on Windows 10.
Use these samples to see the media playback APIs used in context or as a starting point for your own app.
Adaptive streaming sample
Background Audio sample
System Media Transport sample
Play audio and video with MediaPlayer

This article shows you how to play media in your Universal Windows app using the MediaPlayer class. With
Windows 10, version 1607, significant improvements were made to the media playback APIs, including a
simplified single-process design for background audio, automatic integration with the System Media Transport
Controls (SMTC), the ability to synchronize multiple media players, the ability to render video to a
Windows.UI.Composition surface, and an easy interface for creating and scheduling media breaks in your content.
To take advantage of these improvements, the recommended best practice for playing media is to use the
MediaPlayer class instead of MediaElement for media playback. The lightweight XAML control,
MediaPlayerElement, has been introduced to allow you to render media content in a XAML page. Many of the playback control and status APIs
provided by MediaElement are now available through the new MediaPlaybackSession object. MediaElement
continues to function to support backwards compatibility, but no additional features will be added to this class.
This article will walk you through the MediaPlayer features that a typical media playback app will use. Note that
MediaPlayer uses the MediaSource class as a container for all media items. This class allows you to load and
play media from many different sources, including local files, memory streams, and network sources, all using the
same interface. There are also higher-level classes that work with MediaSource, such as MediaPlaybackItem
and MediaPlaybackList, that provide more advanced features like playlists and the ability to manage media
sources with multiple audio, video, and metadata tracks. For more information on MediaSource and related APIs,
see Media items, playlists, and tracks.

Play a media file with MediaPlayer


Basic media playback with MediaPlayer is very simple to implement. First, create a new instance of the
MediaPlayer class. Your app can have multiple MediaPlayer instances active at once. Next, set the Source
property of the player to an object that implements the IMediaPlaybackSource interface, such as a MediaSource, a
MediaPlaybackItem, or a MediaPlaybackList. In this example, a MediaSource is created from a file in the
app's local storage, and then a MediaPlaybackItem is created from the source and then assigned to the player's
Source property.
Unlike MediaElement, MediaPlayer does not automatically begin playback by default. You can start playback by
calling Play, by setting the AutoPlay property to true, or waiting for the user to initiate playback with the built-in
media controls.

_mediaPlayer = new MediaPlayer();
_mediaPlayer.Source = MediaSource.CreateFromUri(new Uri("ms-appx:///Assets/example_video.mkv"));
_mediaPlayer.Play();

When your app is done using a MediaPlayer, you should call the Close method (projected to Dispose in C#) to
clean up the resources used by the player.

_mediaPlayer.Dispose();

Use MediaPlayerElement to render video in XAML


You can play media in a MediaPlayer without displaying it in XAML, but many media playback apps will want to
render the media in a XAML page. To do this, use the lightweight MediaPlayerElement control. Like
MediaElement, MediaPlayerElement allows you to specify whether the built-in transport controls should be
shown.

<MediaPlayerElement x:Name="_mediaPlayerElement" AreTransportControlsEnabled="False" HorizontalAlignment="Stretch" Grid.Row="0"/>

You can set the MediaPlayer instance that the element is bound to by calling SetMediaPlayer.

_mediaPlayerElement.SetMediaPlayer(_mediaPlayer);

You can also set the playback source on the MediaPlayerElement and the element will automatically create a
new MediaPlayer instance that you can access using the MediaPlayer property.

_mediaPlayerElement.Source = MediaSource.CreateFromUri(new Uri("ms-appx:///Assets/example_video.mkv"));
_mediaPlayer = _mediaPlayerElement.MediaPlayer;
_mediaPlayer.Play();

NOTE
If you disable the MediaPlaybackCommandManager of the MediaPlayer by setting IsEnabled to false, it will break the
link between the MediaPlayer and the TransportControls provided by the MediaPlayerElement, so the built-in transport
controls will no longer automatically control the playback of the player. Instead, you must implement your own controls to
control the MediaPlayer.

Common MediaPlayer tasks


This section shows you how to use some of the features of the MediaPlayer.
Set the audio category
Set the AudioCategory property of a MediaPlayer to one of the values of the MediaPlayerAudioCategory
enumeration to let the system know what kind of media you are playing. Games should categorize their music
streams as GameMedia so that game music mutes automatically if another application plays music in the
background. Music or video applications should categorize their streams as Media or Movie so they will take
priority over GameMedia streams.

_mediaPlayer.AudioCategory = MediaPlayerAudioCategory.Media;

Output to a specific audio endpoint


By default, the audio output from a MediaPlayer is routed to the default audio endpoint for the system, but you
can specify a specific audio endpoint that the MediaPlayer should use for output. In the example below,
MediaDevice.GetAudioRenderSelector returns a string that uniquely identifies the audio render category of
devices. Next, the DeviceInformation method FindAllAsync is called to get a list of all available devices of the
selected type. You may programmatically determine which device you want to use or add the returned devices to
a ComboBox to allow the user to select a device.

string audioSelector = MediaDevice.GetAudioRenderSelector();
var outputDevices = await DeviceInformation.FindAllAsync(audioSelector);
foreach (var device in outputDevices)
{
    var deviceItem = new ComboBoxItem();
    deviceItem.Content = device.Name;
    deviceItem.Tag = device;
    _audioDeviceComboBox.Items.Add(deviceItem);
}

In the SelectionChanged event for the devices combo box, the AudioDevice property of the MediaPlayer is set
to the selected device, which was stored in the Tag property of the ComboBoxItem.

private void _audioDeviceComboBox_SelectionChanged(object sender, SelectionChangedEventArgs e)
{
    DeviceInformation selectedDevice = (DeviceInformation)((ComboBoxItem)_audioDeviceComboBox.SelectedItem).Tag;
    if (selectedDevice != null)
    {
        _mediaPlayer.AudioDevice = selectedDevice;
    }
}

Playback session
As described previously in this article, many of the functions that are exposed by the MediaElement class have
been moved to the MediaPlaybackSession class. This includes information about the playback state of the
player, such as the current playback position, whether the player is paused or playing, and the current playback
speed. MediaPlaybackSession also provides several events to notify you when the state changes, including the
current buffering and download status of content being played and the natural size and aspect ratio of the
currently playing video content.
The following example shows you how to implement a button click handler that skips 10 seconds forward in the
content. First, the MediaPlaybackSession object for the player is retrieved with the PlaybackSession property.
Next the Position property is set to the current playback position plus 10 seconds.

private void _skipForwardButton_Click(object sender, RoutedEventArgs e)
{
    var session = _mediaPlayer.PlaybackSession;
    session.Position = session.Position + TimeSpan.FromSeconds(10);
}

The next example illustrates using a toggle button to toggle between normal playback speed and 2X speed by
setting the PlaybackRate property of the session.

private void _speedToggleButton_Checked(object sender, RoutedEventArgs e)
{
    _mediaPlayer.PlaybackSession.PlaybackRate = 2.0;
}

private void _speedToggleButton_Unchecked(object sender, RoutedEventArgs e)
{
    _mediaPlayer.PlaybackSession.PlaybackRate = 1.0;
}

Pinch and zoom video


MediaPlayer allows you to specify the source rectangle within video content that should be rendered, effectively
allowing you to zoom into video. The rectangle you specify is relative to a normalized rectangle (0,0,1,1) where 0,0
is the upper-left corner of the frame and 1,1 specifies the full width and height of the frame. So, for example, to set
the zoom rectangle so that the top-right quadrant of the video is rendered, you would specify the rectangle
(.5,0,.5,.5). It is important that you check your values to make sure that your source rectangle is within the (0,0,1,1)
normalized rectangle. Attempting to set a value outside of this range will cause an exception to be thrown.
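
As a brief sketch, rendering just the top-right quadrant described above might look like this, assuming an
initialized _mediaPlayer with an active playback session:

// Render only the top-right quadrant of the video frame.
_mediaPlayer.PlaybackSession.NormalizedSourceRect = new Rect(0.5, 0, 0.5, 0.5);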
To implement pinch and zoom using multi-touch gestures, you must first specify which gestures you want to
support. In this example, scale and translate gestures are requested. The ManipulationDelta event is raised when
one of the subscribed gestures occurs. The DoubleTapped event will be used to reset the zoom to the full frame.

_mediaPlayerElement.ManipulationMode = ManipulationModes.Scale | ManipulationModes.TranslateX | ManipulationModes.TranslateY;
_mediaPlayerElement.ManipulationDelta += _mediaPlayerElement_ManipulationDelta;
_mediaPlayerElement.DoubleTapped += _mediaPlayerElement_DoubleTapped;

Next, declare a Rect object that will store the current zoom source rectangle.

Rect _sourceRect = new Rect(0, 0, 1, 1);

The ManipulationDelta handler adjusts either the scale or the translation of the zoom rectangle. If the delta scale
value is not 1, it means that the user performed a pinch gesture. If the value is greater than 1, the source rectangle
should be made smaller to zoom into the content. If the value is less than 1, then the source rectangle should be
made bigger to zoom out. Before setting the new scale values, the resulting rectangle is checked to make sure it
lies entirely within the (0,0,1,1) limits.
If the scale value is 1, then the translation gesture is handled. The rectangle is simply translated by the number of
pixels in the gesture divided by the width and height of the control. Again, the resulting rectangle is checked to make
sure it lies within the (0,0,1,1) bounds.
Finally, the NormalizedSourceRect of the MediaPlaybackSession is set to the newly adjusted rectangle,
specifying the area within the video frame that should be rendered.
private void _mediaPlayerElement_ManipulationDelta(object sender, ManipulationDeltaRoutedEventArgs e)
{
    if (e.Delta.Scale != 1)
    {
        var halfWidth = _sourceRect.Width / 2;
        var halfHeight = _sourceRect.Height / 2;

        var centerX = _sourceRect.X + halfWidth;
        var centerY = _sourceRect.Y + halfHeight;

        var scale = e.Delta.Scale;

        var newHalfWidth = (_sourceRect.Width * e.Delta.Scale) / 2;
        var newHalfHeight = (_sourceRect.Height * e.Delta.Scale) / 2;

        if (centerX - newHalfWidth > 0 && centerX + newHalfWidth <= 1.0 &&
            centerY - newHalfHeight > 0 && centerY + newHalfHeight <= 1.0)
        {
            _sourceRect.X = centerX - newHalfWidth;
            _sourceRect.Y = centerY - newHalfHeight;
            _sourceRect.Width *= e.Delta.Scale;
            _sourceRect.Height *= e.Delta.Scale;
        }
    }
    else
    {
        var translateX = -1 * e.Delta.Translation.X / _mediaPlayerElement.ActualWidth;
        var translateY = -1 * e.Delta.Translation.Y / _mediaPlayerElement.ActualHeight;

        if (_sourceRect.X + translateX >= 0 && _sourceRect.X + _sourceRect.Width + translateX <= 1.0 &&
            _sourceRect.Y + translateY >= 0 && _sourceRect.Y + _sourceRect.Height + translateY <= 1.0)
        {
            _sourceRect.X += translateX;
            _sourceRect.Y += translateY;
        }
    }

    _mediaPlayer.PlaybackSession.NormalizedSourceRect = _sourceRect;
}

In the DoubleTapped event handler, the source rectangle is set back to (0,0,1,1) to cause the entire video frame
to be rendered.

private void _mediaPlayerElement_DoubleTapped(object sender, DoubleTappedRoutedEventArgs e)
{
    _sourceRect = new Rect(0, 0, 1, 1);
    _mediaPlayer.PlaybackSession.NormalizedSourceRect = _sourceRect;
}

Use MediaPlayerSurface to render video to a Windows.UI.Composition surface


Starting with Windows 10, version 1607, you can use MediaPlayer to render video to an
ICompositionSurface, which allows the player to interoperate with the APIs in the Windows.UI.Composition
namespace. The composition framework allows you to work with graphics in the visual layer between XAML and
the low-level DirectX graphics APIs. This enables scenarios like rendering video into any XAML control. For more
information on using the composition APIs, see Visual Layer.
The following example illustrates how to render video player content onto a Canvas control. The media player-
specific calls in this example are SetSurfaceSize and GetSurface. SetSurfaceSize tells the system the size of the
buffer that should be allocated for rendering content. GetSurface takes a Compositor as an argument and
retrieves an instance of the MediaPlayerSurface class. This class provides access to the MediaPlayer and
Compositor used to create the surface and exposes the surface itself through the CompositionSurface property.
The rest of the code in this example creates a SpriteVisual to which the video is rendered and sets the size to the
size of the canvas element that will display the visual. Next a CompositionBrush is created from the
MediaPlayerSurface and assigned to the Brush property of the visual. Next a ContainerVisual is created and
the SpriteVisual is inserted at the top of its visual tree. Finally, SetElementChildVisual is called to assign the
container visual to the Canvas.

_mediaPlayer.SetSurfaceSize(new Size(_compositionCanvas.ActualWidth, _compositionCanvas.ActualHeight));

var compositor = ElementCompositionPreview.GetElementVisual(this).Compositor;

MediaPlayerSurface surface = _mediaPlayer.GetSurface(compositor);

SpriteVisual spriteVisual = compositor.CreateSpriteVisual();
spriteVisual.Size =
    new System.Numerics.Vector2((float)_compositionCanvas.ActualWidth, (float)_compositionCanvas.ActualHeight);

CompositionBrush brush = compositor.CreateSurfaceBrush(surface.CompositionSurface);
spriteVisual.Brush = brush;

ContainerVisual container = compositor.CreateContainerVisual();
container.Children.InsertAtTop(spriteVisual);

ElementCompositionPreview.SetElementChildVisual(_compositionCanvas, container);

Use MediaTimelineController to synchronize content across multiple players


As discussed previously in this article, your app can have several MediaPlayer objects active at a time. By default,
each MediaPlayer you create operates independently. For some scenarios, such as synchronizing a commentary
track to a video, you may want to synchronize the player state, playback position, and playback speed of multiple
players. Starting with Windows 10, version 1607, you can implement this behavior by using the
MediaTimelineController class.
Implement playback controls
The following example shows how to use a MediaTimelineController to control two instances of MediaPlayer.
First, each instance of the MediaPlayer is instantiated and the Source is set to a media file. Next, a new
MediaTimelineController is created. For each MediaPlayer, the associated MediaPlaybackCommandManager
is disabled by setting the IsEnabled property to false, and then the
TimelineController property is set to the timeline controller object.

MediaTimelineController _mediaTimelineController;
_mediaPlayer = new MediaPlayer();
_mediaPlayer.Source = MediaSource.CreateFromUri(new Uri("ms-appx:///Assets/example_video.mkv"));
_mediaPlayerElement.SetMediaPlayer(_mediaPlayer);

_mediaPlayer2 = new MediaPlayer();
_mediaPlayer2.Source = MediaSource.CreateFromUri(new Uri("ms-appx:///Assets/example_video_2.mkv"));
_mediaPlayerElement2.SetMediaPlayer(_mediaPlayer2);

_mediaTimelineController = new MediaTimelineController();

_mediaPlayer.CommandManager.IsEnabled = false;
_mediaPlayer.TimelineController = _mediaTimelineController;

_mediaPlayer2.CommandManager.IsEnabled = false;
_mediaPlayer2.TimelineController = _mediaTimelineController;

Caution The MediaPlaybackCommandManager provides automatic integration between MediaPlayer and
the System Media Transport Controls (SMTC), but this automatic integration can't be used with media players that
are controlled with a MediaTimelineController. Therefore you must disable the command manager for the
media player before setting the player's timeline controller. Failure to do so will result in an exception being
thrown with the following message: "Attaching Media Timeline Controller is blocked because of the current state
of the object." For more information on media player integration with the SMTC, see Integrate with the System
Media Transport Controls. If you are using a MediaTimelineController you can still control the SMTC manually.
For more information, see Manual control of the System Media Transport Controls.
Once you have attached a MediaTimelineController to one or more media players, you can control the playback
state by using the methods exposed by the controller. The following example calls Start to begin playback of all
associated media players at the beginning of the media.

private void PlayButton_Click(object sender, RoutedEventArgs e)
{
    _mediaTimelineController.Start();
}

This example illustrates pausing and resuming all of the attached media players.

private void PauseButton_Click(object sender, RoutedEventArgs e)
{
    if (_mediaTimelineController.State == MediaTimelineControllerState.Running)
    {
        _mediaTimelineController.Pause();
        _pauseButton.Content = "Resume";
    }
    else
    {
        _mediaTimelineController.Resume();
        _pauseButton.Content = "Pause";
    }
}

To fast-forward all connected media players, set the playback speed to a value greater than 1.

private void FastForwardButton_Click(object sender, RoutedEventArgs e)
{
    _mediaTimelineController.ClockRate = 2.0;
}

The next example shows how to use a Slider control to show the current playback position of the timeline
controller relative to the duration of the content of one of the connected media players. First, a new MediaSource
is created and a handler for the OpenOperationCompleted event of the media source is registered.

var mediaSource = MediaSource.CreateFromUri(new Uri("ms-appx:///Assets/example_video.mkv"));
mediaSource.OpenOperationCompleted += MediaSource_OpenOperationCompleted;
_mediaPlayer.Source = mediaSource;
_mediaPlayerElement.SetMediaPlayer(_mediaPlayer);

The OpenOperationCompleted handler is used as an opportunity to discover the duration of the media source
content. Once the duration is determined, the maximum value of the Slider control is set to the total number of
seconds of the media item. The value is set inside a call to RunAsync to make sure it is run on the UI thread.

TimeSpan _duration;

private async void MediaSource_OpenOperationCompleted(MediaSource sender, MediaSourceOpenOperationCompletedEventArgs args)
{
    _duration = sender.Duration.GetValueOrDefault();

    await Dispatcher.RunAsync(Windows.UI.Core.CoreDispatcherPriority.Normal, () =>
    {
        _positionSlider.Minimum = 0;
        _positionSlider.Maximum = _duration.TotalSeconds;
        _positionSlider.StepFrequency = 1;
    });
}

Next, a handler for the timeline controller's PositionChanged event is registered. This is called periodically by the
system, approximately 4 times per second.

_mediaTimelineController.PositionChanged += _mediaTimelineController_PositionChanged;

In the handler for PositionChanged, the slider value is updated to reflect the current position of the timeline
controller.

private async void _mediaTimelineController_PositionChanged(MediaTimelineController sender, object args)
{
    if (_duration != TimeSpan.Zero)
    {
        await Dispatcher.RunAsync(Windows.UI.Core.CoreDispatcherPriority.Normal, () =>
        {
            // The slider's Maximum was set to the duration in seconds, so use the position in seconds directly.
            _positionSlider.Value = sender.Position.TotalSeconds;
        });
    }
}

Offset the playback position from the timeline position


In some cases you may want the playback position of one or more media players associated with a timeline
controller to be offset from the other players. You can do this by setting the TimelineControllerPositionOffset
property of the MediaPlayer object you want to be offset. The following example uses the durations of the
content of two media players to set the minimum and maximum values of two slider controls to plus and minus
the length of the item.

_timelineOffsetSlider1.Minimum = -1 * _duration.TotalSeconds;
_timelineOffsetSlider1.Maximum = _duration.TotalSeconds;
_timelineOffsetSlider1.StepFrequency = 1;

_timelineOffsetSlider2.Minimum = -1 * _duration2.TotalSeconds;
_timelineOffsetSlider2.Maximum = _duration2.TotalSeconds;
_timelineOffsetSlider2.StepFrequency = 1;

In the ValueChanged event for each slider, the TimelineControllerPositionOffset for each player is set to the
corresponding value.

private void _timelineOffsetSlider1_ValueChanged(object sender, RangeBaseValueChangedEventArgs e)
{
    _mediaPlayer.TimelineControllerPositionOffset = TimeSpan.FromSeconds(_timelineOffsetSlider1.Value);
}

private void _timelineOffsetSlider2_ValueChanged(object sender, RangeBaseValueChangedEventArgs e)
{
    _mediaPlayer2.TimelineControllerPositionOffset = TimeSpan.FromSeconds(_timelineOffsetSlider2.Value);
}

Note that if the offset value of a player maps to a negative playback position, the clip will remain paused until the
offset reaches zero and then playback will begin. Likewise, if the offset value maps to a playback position greater
than the duration of the media item, the final frame will be shown, just as it does when a single media player
reaches the end of its content.

Related topics
Media playback
Media items, playlists, and tracks
Integrate with the System Media Transport Controls
Create, schedule, and manage media breaks
Play media in the background
Media items, playlists, and tracks

[ Updated for UWP apps on Windows 10. For Windows 8.x articles, see the archive ]
This article shows you how to use the MediaSource class, which provides a common way to reference and play
back media from different sources such as local or remote files and exposes a common model for accessing media
data, regardless of the underlying media format. The MediaPlaybackItem class extends the functionality of
MediaSource, allowing you to manage and select from multiple audio, video, and metadata tracks contained in a
media item. MediaPlaybackList allows you to create playback lists from one or more media playback items.

Create and play a MediaSource


Create a new instance of MediaSource by calling one of the factory methods exposed by the class:
CreateFromAdaptiveMediaSource
CreateFromIMediaSource
CreateFromMediaStreamSource
CreateFromMseStreamSource
CreateFromStorageFile
CreateFromStream
CreateFromStreamReference
CreateFromUri
After creating a MediaSource you can play it with a MediaPlayer by setting the Source property. Starting with
Windows 10, version 1607, you can assign a MediaPlayer to a MediaPlayerElement by calling
SetMediaPlayer in order to render the media player content in a XAML page. This is the preferred method over
using MediaElement. For more information on using MediaPlayer, see Play audio and video with
MediaPlayer.
The following example shows how to play back a user-selected media file in a MediaPlayer using MediaSource.
You will need to include the Windows.Media.Core and Windows.Media.Playback namespaces in order to
complete this scenario.

using Windows.Media.Core;
using Windows.Media.Playback;

Declare a variable of type MediaSource. For the examples in this article, the media source is declared as a class
member so that it can be accessed from multiple locations.

MediaSource mediaSource;

Declare a variable to store the MediaPlayer object and, if you want to render the media content in XAML, add a
MediaPlayerElement control to your page.

MediaPlayer _mediaPlayer;

<MediaPlayerElement x:Name="mediaPlayerElement"/>

To allow the user to pick a media file to play, use a FileOpenPicker. With the StorageFile object returned from
the picker's PickSingleFileAsync method, initialize a new MediaSource by calling
MediaSource.CreateFromStorageFile. Finally, set the media source as the playback source for the
MediaElement by calling the SetPlaybackSource method.

// Create a new picker.
var filePicker = new Windows.Storage.Pickers.FileOpenPicker();

// Add file type filters. In this case wmv, mp4, and mkv.
filePicker.FileTypeFilter.Add(".wmv");
filePicker.FileTypeFilter.Add(".mp4");
filePicker.FileTypeFilter.Add(".mkv");

// Set the picker start location to the video library.
filePicker.SuggestedStartLocation = PickerLocationId.VideosLibrary;

// Retrieve the file from the picker.
StorageFile file = await filePicker.PickSingleFileAsync();

if (file != null)
{
    mediaSource = MediaSource.CreateFromStorageFile(file);
    mediaElement.SetPlaybackSource(mediaSource);
}

By default, the MediaPlayer does not begin playing automatically when the media source is set. You can
manually begin playback by calling Play.

_mediaPlayer.Play();

You can also set the AutoPlay property of the MediaPlayer to true to tell the player to begin playing as soon as
the media source is set.

_mediaPlayer.AutoPlay = true;

Handle multiple audio, video, and metadata tracks with MediaPlaybackItem


Using a MediaSource for playback is convenient because it provides a common way to play back media from
different kinds of sources, but more advanced behavior can be accessed by creating a MediaPlaybackItem from
the MediaSource. This includes the ability to access and manage multiple audio, video, and data tracks for a
media item.
Declare a variable to store your MediaPlaybackItem.

MediaPlaybackItem mediaPlaybackItem;

Create a MediaPlaybackItem by calling the constructor and passing in an initialized MediaSource object.
If your app supports multiple audio, video, or data tracks in a media playback item, register event handlers for the
AudioTracksChanged, VideoTracksChanged, or TimedMetadataTracksChanged events.
Finally, set the playback source of the MediaElement or MediaPlayer to your MediaPlaybackItem.

mediaSource = MediaSource.CreateFromStorageFile(file);
mediaPlaybackItem = new MediaPlaybackItem(mediaSource);

mediaPlaybackItem.AudioTracksChanged += PlaybackItem_AudioTracksChanged;
mediaPlaybackItem.VideoTracksChanged += MediaPlaybackItem_VideoTracksChanged;
mediaPlaybackItem.TimedMetadataTracksChanged += MediaPlaybackItem_TimedMetadataTracksChanged;

mediaElement.SetPlaybackSource(mediaPlaybackItem);

NOTE
A MediaSource can only be associated with a single MediaPlaybackItem. After creating a MediaPlaybackItem from a
source, attempting to create another playback item from the same source will result in an error. Also, after creating a
MediaPlaybackItem from a media source, you can't set the MediaSource object directly as the source for a MediaPlayer
but should instead use the MediaPlaybackItem.
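
As a two-line sketch of the pattern this note describes, assuming the mediaSource and mediaPlaybackItem
variables from the examples above and the _mediaPlayer declared earlier in this article:

// Correct: assign the MediaPlaybackItem that wraps the source.
_mediaPlayer.Source = mediaPlaybackItem;
// Assigning mediaSource directly would fail once a playback item has been created from it.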

The VideoTracksChanged event is raised after a MediaPlaybackItem containing multiple video tracks is
assigned as a playback source, and can be raised again if the list of video tracks for the item changes. The
handler for this event gives you the opportunity to update your UI to allow the user to switch between available
tracks. This example uses a ComboBox to display the available video tracks.

<ComboBox x:Name="videoTracksComboBox" SelectionChanged="videoTracksComboBox_SelectionChanged"/>

In the VideoTracksChanged handler, loop through all of the tracks in the playback item's VideoTracks list. For
each track, a new ComboBoxItem is created. If the track does not already have a label, a label is generated from
the track index. The Tag property of the combo box item is set to the track index so that it can be identified later.
Finally, the item is added to the combo box. Note that these operations are performed within a
CoreDispatcher.RunAsync call because all UI changes must be made on the UI thread and this event is raised on
a different thread.

private async void MediaPlaybackItem_VideoTracksChanged(MediaPlaybackItem sender, IVectorChangedEventArgs args)
{
    await Dispatcher.RunAsync(Windows.UI.Core.CoreDispatcherPriority.Normal, () =>
    {
        videoTracksComboBox.Items.Clear();
        for (int index = 0; index < sender.VideoTracks.Count; index++)
        {
            var videoTrack = sender.VideoTracks[index];
            ComboBoxItem item = new ComboBoxItem();
            item.Content = String.IsNullOrEmpty(videoTrack.Label) ? "Track " + index : videoTrack.Label;
            item.Tag = index;
            videoTracksComboBox.Items.Add(item);
        }
    });
}

In the SelectionChanged handler for the combo box, the track index is retrieved from the selected item's Tag
property. Setting the SelectedIndex property of the media playback item's VideoTracks list causes the
MediaElement or MediaPlayer to switch the active video track to the specified index.

private void videoTracksComboBox_SelectionChanged(object sender, SelectionChangedEventArgs e)
{
    int trackIndex = (int)((ComboBoxItem)((ComboBox)sender).SelectedItem).Tag;
    mediaPlaybackItem.VideoTracks.SelectedIndex = trackIndex;
}

Managing media items with multiple audio tracks works exactly the same as with video tracks. Handle the
AudioTracksChanged event to update your UI with the audio tracks found in the playback item's AudioTracks list.
When the user selects an audio track, set the SelectedIndex property of the AudioTracks list to cause the
MediaElement or MediaPlayer to switch the active audio track to the specified index.

<ComboBox x:Name="audioTracksComboBox" SelectionChanged="audioTracksComboBox_SelectionChanged"/>

private async void PlaybackItem_AudioTracksChanged(MediaPlaybackItem sender, IVectorChangedEventArgs args)
{
    await Dispatcher.RunAsync(Windows.UI.Core.CoreDispatcherPriority.Normal, () =>
    {
        audioTracksComboBox.Items.Clear();
        for (int index = 0; index < sender.AudioTracks.Count; index++)
        {
            var audioTrack = sender.AudioTracks[index];
            ComboBoxItem item = new ComboBoxItem();
            item.Content = String.IsNullOrEmpty(audioTrack.Label) ? "Track " + index : audioTrack.Label;
            item.Tag = index;
            audioTracksComboBox.Items.Add(item);
        }
    });
}

private void audioTracksComboBox_SelectionChanged(object sender, SelectionChangedEventArgs e)
{
    int trackIndex = (int)((ComboBoxItem)((ComboBox)sender).SelectedItem).Tag;
    mediaPlaybackItem.AudioTracks.SelectedIndex = trackIndex;
}

In addition to audio and video, a MediaPlaybackItem object may contain zero or more TimedMetadataTrack
objects. A timed metadata track can contain subtitle or caption text, or it may contain custom data that is
proprietary to your app. A timed metadata track contains a list of cues represented by objects that inherit from
IMediaCue, such as a DataCue or a TimedTextCue. Each cue has a start time and a duration that determines
when the cue is activated and for how long.
Similar to audio tracks and video tracks, the timed metadata tracks for a media item can be discovered by
handling the TimedMetadataTracksChanged event of a MediaPlaybackItem. With timed metadata tracks,
however, the user may want to enable more than one metadata track at a time. Also, depending on your app
scenario, you may want to enable or disable metadata tracks automatically, without user intervention. For
illustration purposes, this example adds a ToggleButton for each metadata track in a media item to allow the
user to enable and disable the track. The Tag property of each button is set to the index of the associated
metadata track so that it can be identified when the button is toggled.

<StackPanel x:Name="MetadataButtonPanel" Orientation="Horizontal"/>

private async void MediaPlaybackItem_TimedMetadataTracksChanged(MediaPlaybackItem sender, IVectorChangedEventArgs args)
{
    await Dispatcher.RunAsync(Windows.UI.Core.CoreDispatcherPriority.Normal, () =>
    {
        for (int index = 0; index < sender.TimedMetadataTracks.Count; index++)
        {
            var timedMetadataTrack = sender.TimedMetadataTracks[index];

            ToggleButton toggle = new ToggleButton()
            {
                Content = String.IsNullOrEmpty(timedMetadataTrack.Label) ? "Track " + index : timedMetadataTrack.Label,
                Tag = (uint)index
            };
            toggle.Checked += Toggle_Checked;
            toggle.Unchecked += Toggle_Unchecked;

            MetadataButtonPanel.Children.Add(toggle);
        }
    });
}

Because more than one metadata track can be active at a time, you don't simply set the active index for the
metadata track list. Instead, call the MediaPlaybackItem object's SetPresentationMode method, passing in the
index of the track you want to toggle and a value from the TimedMetadataTrackPresentationMode
enumeration. The presentation mode you choose depends on the implementation of your app. In this example,
the metadata track is set to PlatformPresented when enabled. For text-based tracks, this means that the system
will automatically display the text cues in the track. When the toggle button is toggled off, the presentation mode
is set to Disabled, which means that no text is displayed and no cue events are raised. Cue events are discussed
later in this article.

private void Toggle_Checked(object sender, RoutedEventArgs e)
{
    mediaPlaybackItem.TimedMetadataTracks.SetPresentationMode((uint)((ToggleButton)sender).Tag,
        TimedMetadataTrackPresentationMode.PlatformPresented);
}

private void Toggle_Unchecked(object sender, RoutedEventArgs e)
{
    mediaPlaybackItem.TimedMetadataTracks.SetPresentationMode((uint)((ToggleButton)sender).Tag,
        TimedMetadataTrackPresentationMode.Disabled);
}

As you are processing the metadata tracks, you can access the set of cues within the track by accessing the Cues
or ActiveCues properties. You can do this to update your UI to show the cue locations for a media item.
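
For example, a minimal sketch of enumerating the cues in each metadata track (perhaps to draw cue markers on a seek bar) might look like the following, assuming the mediaPlaybackItem variable used throughout this article.

// List the ID, start time, and duration of every cue in every metadata track.
foreach (TimedMetadataTrack track in mediaPlaybackItem.TimedMetadataTracks)
{
    foreach (IMediaCue cue in track.Cues)
    {
        System.Diagnostics.Debug.WriteLine(
            string.Format("{0}: starts at {1}, lasts {2}", cue.Id, cue.StartTime, cue.Duration));
    }
}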

Handle unsupported codecs and unknown errors when opening media items

Starting with Windows 10, version 1607, you can check whether the codec required to play back a media item is
supported or partially supported on the device on which your app is running. In the event handler for the
MediaPlaybackItem tracks-changed events, such as AudioTracksChanged, first check to see if the track change
is an insertion of a new track. If so, you can get a reference to the track being inserted by using the index passed in
the IVectorChangedEventArgs.Index parameter with the appropriate track collection of the
MediaPlaybackItem parameter, such as the AudioTracks collection.
Once you have a reference to the inserted track, check the DecoderStatus of the track's SupportInfo property. If
the value is FullySupported, then the appropriate codec needed to play back the track is present on the device. If
the value is Degraded, then the track can be played by the system, but the playback will be degraded in some
way. For example, a 5.1 audio track may be played back as 2-channel stereo instead. If this is the case, you may
want to update your UI to alert the user of the degradation. If the value is UnsupportedSubtype or
UnsupportedEncoderProperties, then the track can't be played back at all with the current codecs on the device.
You may wish to alert the user and skip playback of the item or implement UI to allow the user to download the
correct codec. The track's GetEncodingProperties method can be used to determine the required codec for
playback.
Finally, you can register for the track's OpenFailed event, which will be raised if the track is supported on the
device but failed to open due to an unknown error in the pipeline.

private async void AudioTracksChanged_CodecCheck(MediaPlaybackItem sender, IVectorChangedEventArgs args)
{
    if (args.CollectionChange == CollectionChange.ItemInserted)
    {
        var insertedTrack = sender.AudioTracks[(int)args.Index];

        var decoderStatus = insertedTrack.SupportInfo.DecoderStatus;

        if (decoderStatus != MediaDecoderStatus.FullySupported)
        {
            if (decoderStatus == MediaDecoderStatus.Degraded)
            {
                ShowMessageToUser(string.Format("Track {0} can play but playback will be degraded. {1}",
                    insertedTrack.Name, insertedTrack.SupportInfo.DegradationReason));
            }
            else
            {
                // status is MediaDecoderStatus.UnsupportedSubtype or MediaDecoderStatus.UnsupportedEncoderProperties
                ShowMessageToUser(string.Format("Track {0} uses an unsupported media format.", insertedTrack.Name));
            }

            Windows.Media.MediaProperties.AudioEncodingProperties props = insertedTrack.GetEncodingProperties();

            await HelpUserInstallCodec(props);
        }
        else
        {
            insertedTrack.OpenFailed += InsertedTrack_OpenFailed;
        }
    }
}
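
ShowMessageToUser and HelpUserInstallCodec in the example above are app-defined helpers. As one hypothetical sketch of the latter, you could point the user at a Store search for the missing format; the search query below is illustrative only, not a real codec listing.

private async System.Threading.Tasks.Task HelpUserInstallCodec(Windows.Media.MediaProperties.AudioEncodingProperties props)
{
    // The Subtype property identifies the format the missing decoder would handle.
    string query = Uri.EscapeDataString(props.Subtype + " codec");
    await Windows.System.Launcher.LaunchUriAsync(new Uri("ms-windows-store://search/?query=" + query));
}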

In the OpenFailed event handler, you can check to see if the MediaSource status is unknown, and if so, you can
programmatically select a different track to play, allow the user to choose a different track, or abandon playback.

private async void InsertedTrack_OpenFailed(AudioTrack sender, AudioTrackOpenFailedEventArgs args)
{
    LogError(args.ExtendedError.HResult);

    if (sender.SupportInfo.MediaSourceStatus == MediaSourceStatus.Unknown)
    {
        await SelectAnotherTrackOrSkipPlayback(sender.PlaybackItem);
    }
}

Set display properties used by the System Media Transport Controls


Starting with Windows 10, version 1607, media played in a MediaPlayer is automatically integrated into the
System Media Transport Controls (SMTC) by default. You can specify the metadata that will be displayed by the
SMTC by updating the display properties for a MediaPlaybackItem. Get an object representing the display
properties for an item by calling GetDisplayProperties. Set whether the playback item is music or video by
setting the Type property. Then, set the properties of the object's VideoProperties or MusicProperties. Call
ApplyDisplayProperties to update the item's properties to the values you provided. Typically, an app will
retrieve the display values dynamically from a web service, but the following example illustrates this process with
hardcoded values.

MediaItemDisplayProperties props = mediaPlaybackItem.GetDisplayProperties();
props.Type = Windows.Media.MediaPlaybackType.Video;
props.VideoProperties.Title = "Video title";
props.VideoProperties.Subtitle = "Video subtitle";
props.VideoProperties.Genres.Add("Documentary");
mediaPlaybackItem.ApplyDisplayProperties(props);

props = mediaPlaybackItem.GetDisplayProperties();
props.Type = Windows.Media.MediaPlaybackType.Music;
props.MusicProperties.Title = "Song title";
props.MusicProperties.Artist = "Song artist";
props.MusicProperties.Genres.Add("Polka");
mediaPlaybackItem.ApplyDisplayProperties(props);

Add external timed text with TimedTextSource


For some scenarios, you may have external files that contain timed text associated with a media item, such as
separate files that contain subtitles for different locales. Use the TimedTextSource class to load external timed
text files from a stream or URI.
This example uses a Dictionary collection to store the timed text sources for the media item, with the
TimedTextSource object as the key and the source URI as the value, so that the tracks can be identified after they
have been resolved.

Dictionary<TimedTextSource, Uri> timedTextSourceMap;

Create a new TimedTextSource for each external timed text file by calling CreateFromUri. Add an entry to the
Dictionary for the timed text source. Add a handler for the TimedTextSource.Resolved event to handle if the
item failed to load or to set additional properties after the item was loaded successfully.
Register all of your TimedTextSource objects with the MediaSource by adding them to the
ExternalTimedTextSources collection. Note that external timed text sources are added directly to the
MediaSource and not to the MediaPlaybackItem created from the source. To update your UI to reflect the
external text tracks, register and handle the TimedMetadataTracksChanged event as described previously in
this article.

// Create the TimedTextSource and add entry to URI map
var timedTextSourceUri_En = new Uri("http://contoso.com/MyClipTimedText_en.srt");
var timedTextSource_En = TimedTextSource.CreateFromUri(timedTextSourceUri_En);
timedTextSourceMap[timedTextSource_En] = timedTextSourceUri_En;
timedTextSource_En.Resolved += TimedTextSource_Resolved;

var timedTextSourceUri_Pt = new Uri("http://contoso.com/MyClipTimedText_pt.srt");
var timedTextSource_Pt = TimedTextSource.CreateFromUri(timedTextSourceUri_Pt);
timedTextSourceMap[timedTextSource_Pt] = timedTextSourceUri_Pt;
timedTextSource_Pt.Resolved += TimedTextSource_Resolved;

// Add the TimedTextSource to the MediaSource
mediaSource.ExternalTimedTextSources.Add(timedTextSource_En);
mediaSource.ExternalTimedTextSources.Add(timedTextSource_Pt);

mediaPlaybackItem = new MediaPlaybackItem(mediaSource);
mediaPlaybackItem.TimedMetadataTracksChanged += MediaPlaybackItem_TimedMetadataTracksChanged;

mediaElement.SetPlaybackSource(mediaPlaybackItem);

In the handler for the TimedTextSource.Resolved event, check the Error property of the
TimedTextSourceResolveResultEventArgs passed into the handler to determine if an error occurred while
trying to load the timed text data. If the item was resolved successfully, you can use this handler to update
additional properties of the resolved track. This example adds a label for each track based on the URI previously
stored in the Dictionary.

private void TimedTextSource_Resolved(TimedTextSource sender, TimedTextSourceResolveResultEventArgs args)
{
    var timedTextSourceUri = timedTextSourceMap[sender];

    if (args.Error != null)
    {
        // Show that there was an error in your UI
        ShowMessageToUser("There was an error resolving track: " + timedTextSourceUri);
        return;
    }

    // Add a label for each resolved track
    var timedTextSourceUriString = timedTextSourceUri.AbsoluteUri;
    if (timedTextSourceUriString.Contains("_en"))
    {
        args.Tracks[0].Label = "English";
    }
    else if (timedTextSourceUriString.Contains("_pt"))
    {
        args.Tracks[0].Label = "Portuguese";
    }
}

Add additional metadata tracks


You can dynamically create custom metadata tracks in code and associate them with a media source. The tracks
you create can contain subtitle or caption text, or they can contain your proprietary app data.
Create a new TimedMetadataTrack by calling the constructor and specifying an ID, the language identifier, and a
value from the TimedMetadataKind enumeration. Register handlers for the CueEntered and CueExited events.
These events are raised when the start time for a cue has been reached and when the duration for a cue has
expired, respectively.
Create a new cue object, appropriate for the type of metadata track you created, and set the ID, start time, and
duration for the track. This example creates a data track, so a set of DataCue objects are generated and a buffer
containing app-specific data is provided for each cue. To register the new track, add it to the
ExternalTimedMetadataTracks collection of the MediaSource object.

TimedMetadataTrack metadataTrack = new TimedMetadataTrack("ID_0", "en-us", TimedMetadataKind.Data);
metadataTrack.Label = "Custom data track";
metadataTrack.CueEntered += MetadataTrack_DataCueEntered;
metadataTrack.CueExited += MetadataTrack_CueExited;

// Example cue data
string data = "Cue data";
byte[] bytes = new byte[data.Length * sizeof(char)];
System.Buffer.BlockCopy(data.ToCharArray(), 0, bytes, 0, bytes.Length);
Windows.Storage.Streams.IBuffer buffer = bytes.AsBuffer();

for (int i = 0; i < 10; i++)
{
    DataCue cue = new DataCue()
    {
        Id = "ID_" + i,
        Data = buffer,
        StartTime = TimeSpan.FromSeconds(3 + i * 3),
        Duration = TimeSpan.FromSeconds(2)
    };

    metadataTrack.AddCue(cue);
}

mediaSource.ExternalTimedMetadataTracks.Add(metadataTrack);

The CueEntered event is raised when a cue's start time has been reached as long as the associated track has a
presentation mode of ApplicationPresented, Hidden, or PlatformPresented. Cue events are not raised for
metadata tracks while the presentation mode for the track is Disabled. This example simply outputs the custom
data associated with the cue to the debug window.

private void MetadataTrack_DataCueEntered(TimedMetadataTrack sender, MediaCueEventArgs args)
{
    DataCue cue = (DataCue)args.Cue;
    string data = System.Text.Encoding.Unicode.GetString(cue.Data.ToArray());
    System.Diagnostics.Debug.WriteLine("Cue entered: " + data);
}
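
The data track example above also registers for the CueExited event, which is raised when a cue's duration has elapsed, but the handler is not shown. A minimal sketch of that handler might look like the following.

private void MetadataTrack_CueExited(TimedMetadataTrack sender, MediaCueEventArgs args)
{
    // Log the ID of the cue whose duration has expired.
    System.Diagnostics.Debug.WriteLine("Cue exited: " + args.Cue.Id);
}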

This example adds a custom text track by specifying TimedMetadataKind.Caption when creating the track and
using TimedTextCue objects to add cues to the track.

TimedMetadataTrack metadataTrack = new TimedMetadataTrack("TrackID_0", "en-us", TimedMetadataKind.Caption);
metadataTrack.Label = "Custom text track";
metadataTrack.CueEntered += MetadataTrack_TextCueEntered;

for (int i = 0; i < 10; i++)
{
    TimedTextCue cue = new TimedTextCue()
    {
        Id = "TextCueID_" + i,
        StartTime = TimeSpan.FromSeconds(i * 3),
        Duration = TimeSpan.FromSeconds(2)
    };

    cue.Lines.Add(new TimedTextLine() { Text = "This is a custom timed text cue." });

    metadataTrack.AddCue(cue);
}

mediaSource.ExternalTimedMetadataTracks.Add(metadataTrack);

private void MetadataTrack_TextCueEntered(TimedMetadataTrack sender, MediaCueEventArgs args)
{
    TimedTextCue cue = (TimedTextCue)args.Cue;
    System.Diagnostics.Debug.WriteLine("Cue entered: " + cue.Id + " " + cue.Lines[0].Text);
}

Play a list of media items with MediaPlaybackList


The MediaPlaybackList allows you to create a playlist of media items, which are represented by
MediaPlaybackItem objects.

NOTE
Items in a MediaPlaybackList are rendered using gapless playback. The system will use provided metadata
in MP3 or AAC encoded files to determine the delay or padding compensation needed for gapless playback. If the
MP3 or AAC encoded files don't provide this metadata, then the system determines the delay or padding
heuristically. For lossless formats, such as PCM, FLAC, or ALAC, the system takes no action because these encoders
don't introduce delay or padding.
To get started, declare a variable to store your MediaPlaybackList.

MediaPlaybackList mediaPlaybackList;

Create a MediaPlaybackItem for each media item you want to add to your list using the same procedure
described previously in this article. Initialize your MediaPlaybackList object and add the media playback items to
it. Register a handler for the CurrentItemChanged event. This event allows you to update your UI to reflect the
currently playing media item. Finally, set the playback source of the MediaPlayer to your MediaPlaybackList.

StorageFile file = await filePicker.PickSingleFileAsync();
mediaSource = MediaSource.CreateFromStorageFile(file);
mediaSource.CustomProperties["Title"] = "Clip 1 title";
mediaPlaybackItem = new MediaPlaybackItem(mediaSource);

file = await filePicker.PickSingleFileAsync();
mediaSource2 = MediaSource.CreateFromStorageFile(file);
mediaSource2.CustomProperties["Title"] = "Clip 2 title";
mediaPlaybackItem2 = new MediaPlaybackItem(mediaSource2);

mediaPlaybackList = new MediaPlaybackList();
mediaPlaybackList.Items.Add(mediaPlaybackItem);
mediaPlaybackList.Items.Add(mediaPlaybackItem2);

mediaPlaybackList.CurrentItemChanged += MediaPlaybackList_CurrentItemChanged;

mediaElement.SetPlaybackSource(mediaPlaybackList);

In the CurrentItemChanged event handler, update your UI to reflect the currently playing item, which can be
retrieved using the NewItem property of the CurrentMediaPlaybackItemChangedEventArgs object passed
into the event. Remember that if you update the UI from this event, you should do so within a call to
CoreDispatcher.RunAsync so that the updates are made on the UI thread.

NOTE
The system does not automatically dispose of media items after they are played. This means that if the user navigates
backwards through the list, previously played songs can be played again gaplessly, but it also means that the memory
usage of your app will increase as more items in the list are played. You must be sure to free up the resources for
previously played media items periodically. This is especially important when your app is playing in the background,
where it is more tightly resource-constrained.

You can use the CurrentItemChanged event as an opportunity to release the resources from previously played
media items. To keep a reference to previously played items, create a Queue collection and set a variable that
determines the maximum number of media items to keep in memory. In the handler, add the previously played
item to the queue and then dequeue the oldest entry. Call Reset on the dequeued item to free up its resources,
but first check to make sure it's not still in the queue or currently being played, to handle cases where the item is
played multiple times. A sketch of this trimming logic follows the handler below.

Queue<MediaPlaybackItem> _playbackItemQueue = new Queue<MediaPlaybackItem>();
int maxCachedItems = 3;

private async void MediaPlaybackList_CurrentItemChanged(MediaPlaybackList sender, CurrentMediaPlaybackItemChangedEventArgs args)
{
    // args.NewItem is null when playback of the list completes.
    if (args.NewItem != null && args.NewItem.Source.CustomProperties.ContainsKey("Title"))
    {
        await Dispatcher.RunAsync(Windows.UI.Core.CoreDispatcherPriority.Normal, () =>
        {
            clipTitleTextBlock.Text = args.NewItem.Source.CustomProperties["Title"] as string;
        });
    }
}
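
The handler above only updates the UI. The following is a sketch of the trimming logic described earlier, assuming the _playbackItemQueue and maxCachedItems fields declared above; you could call it from the CurrentItemChanged handler, passing in args.OldItem.

private void TrimPlaybackItemCache(MediaPlaybackItem previousItem)
{
    if (previousItem == null)
    {
        return;
    }

    _playbackItemQueue.Enqueue(previousItem);

    if (_playbackItemQueue.Count > maxCachedItems)
    {
        MediaPlaybackItem oldestItem = _playbackItemQueue.Dequeue();

        // Don't reset an item that was re-queued or is currently playing.
        if (!_playbackItemQueue.Contains(oldestItem) && oldestItem != mediaPlaybackList.CurrentItem)
        {
            // Reset the underlying MediaSource to free its resources.
            oldestItem.Source.Reset();
        }
    }
}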

Call MovePrevious or MoveNext to cause the media player to play the previous or next item in your
MediaPlaybackList.

private void prevButton_Click(object sender, RoutedEventArgs e)
{
    mediaPlaybackList.MovePrevious();
}

private void nextButton_Click(object sender, RoutedEventArgs e)
{
    mediaPlaybackList.MoveNext();
}

Set the ShuffleEnabled property to specify whether the media player should play the items in your list in random
order.

private async void shuffleButton_Click(object sender, RoutedEventArgs e)
{
    mediaPlaybackList.ShuffleEnabled = !mediaPlaybackList.ShuffleEnabled;

    await Dispatcher.RunAsync(Windows.UI.Core.CoreDispatcherPriority.Normal, () =>
    {
        shuffleButton.FontWeight =
            mediaPlaybackList.ShuffleEnabled ? Windows.UI.Text.FontWeights.Bold : Windows.UI.Text.FontWeights.Light;
    });
}

Set the AutoRepeatEnabled property to specify whether the media player should loop playback of your list.

private async void autoRepeatButton_Click(object sender, RoutedEventArgs e)
{
    mediaPlaybackList.AutoRepeatEnabled = !mediaPlaybackList.AutoRepeatEnabled;

    await Dispatcher.RunAsync(Windows.UI.Core.CoreDispatcherPriority.Normal, () =>
    {
        autoRepeatButton.FontWeight =
            mediaPlaybackList.AutoRepeatEnabled ? Windows.UI.Text.FontWeights.Bold : Windows.UI.Text.FontWeights.Light;
    });
}

Handle the failure of media items in a playback list


The ItemFailed event is raised when an item in the list fails to open. The ErrorCode property of the
MediaPlaybackItemError object passed into the handler enumerates the specific cause of the failure when
possible, including network errors, decoding errors, or encryption errors.

private void MediaPlaybackList_ItemFailed(MediaPlaybackList sender, MediaPlaybackItemFailedEventArgs args)
{
    LogError(args.Error.ErrorCode.ToString());
    LogError(args.Error.ExtendedError.HResult);
}

Related topics
Media playback
Play audio and video with MediaPlayer
Integrate with the System Media Transport Controls
Play media in the background
Integrate with the System Media Transport Controls

This article shows you how to interact with the System Media Transport Controls (SMTC). The SMTC is a set of
controls that are common to all Windows 10 devices and that provide a consistent way for users to control media
playback for all running apps that use MediaPlayer for playback.
For a complete sample that demonstrates integration with the SMTC, see the System Media Transport Controls
sample on GitHub.

Automatic integration with SMTC


Starting with Windows 10, version 1607, UWP apps that use the MediaPlayer class to play media are
automatically integrated with the SMTC by default. Simply instantiate a new instance of MediaPlayer and assign
a MediaSource, MediaPlaybackItem, or MediaPlaybackList to the player's Source property and the user will
see your app name in the SMTC and can play, pause, and move through your playback lists by using the SMTC
controls.
Your app can create and use multiple MediaPlayer objects at once. For each active MediaPlayer instance in your
app, a separate tab is created in the SMTC, allowing the user to switch between your active media players and
those of other running apps. Whichever media player is currently selected in the SMTC is the one that the controls
will affect.
For more information on using MediaPlayer in your app, including binding it to a MediaPlayerElement in your
XAML page, see Play audio and video with MediaPlayer.
For more information on working with MediaSource, MediaPlaybackItem, and MediaPlaybackList, see Media
items, playlists, and tracks.

Add metadata to be displayed by the SMTC


If you want to add or modify the metadata that is displayed for your media items in the SMTC, such as a video or
song title, you need to update the display properties for the MediaPlaybackItem representing your media item.
First, get a reference to the MediaItemDisplayProperties object by calling GetDisplayProperties. Next, set the
type of media, music or video, for the item with the Type property. Then you can populate the fields of the
MusicProperties or VideoProperties, depending on which media type you specified. Finally, update the
metadata for the media item by calling ApplyDisplayProperties.

MediaItemDisplayProperties props = mediaPlaybackItem.GetDisplayProperties();
props.Type = Windows.Media.MediaPlaybackType.Video;
props.VideoProperties.Title = "Video title";
props.VideoProperties.Subtitle = "Video subtitle";
props.VideoProperties.Genres.Add("Documentary");
mediaPlaybackItem.ApplyDisplayProperties(props);

props = mediaPlaybackItem.GetDisplayProperties();
props.Type = Windows.Media.MediaPlaybackType.Music;
props.MusicProperties.Title = "Song title";
props.MusicProperties.Artist = "Song artist";
props.MusicProperties.Genres.Add("Polka");
mediaPlaybackItem.ApplyDisplayProperties(props);

Use CommandManager to modify or override the default SMTC commands

Your app can modify or completely override the behavior of the SMTC controls with the
MediaPlaybackCommandManager class. A command manager instance can be obtained for each instance of
the MediaPlayer class by accessing the CommandManager property.
For every command, such as the Next command which by default skips to the next item in a MediaPlaybackList,
the command manager exposes a received event, like NextReceived, and an object that manages the behavior of
the command, like NextBehavior.
The following example registers a handler for the NextReceived event and for the IsEnabledChanged event of
the NextBehavior.

_mediaPlayer.CommandManager.NextReceived += CommandManager_NextReceived;
_mediaPlayer.CommandManager.NextBehavior.IsEnabledChanged += NextBehavior_IsEnabledChanged;

The following example illustrates a scenario where the app wants to disable the Next command after the user has
clicked through five items in the playlist, perhaps requiring some user interaction before continuing to play
content. Each time the NextReceived event is raised, a counter is incremented. Once the counter reaches the
target number, the EnablingRule for the Next command is set to Never, which disables the command.

int _nextPressCount = 0;

private void CommandManager_NextReceived(MediaPlaybackCommandManager sender,
    MediaPlaybackCommandManagerNextReceivedEventArgs args)
{
    _nextPressCount++;
    if (_nextPressCount > 5)
    {
        sender.NextBehavior.EnablingRule = MediaCommandEnablingRule.Never;
        // Perform app tasks while the Next button is disabled
    }
}

You can also set the command's EnablingRule to Always, which means the command will always be enabled
even if, in the case of the Next command, there are no more items in the playlist. Or you can set the rule to Auto,
where the system determines whether the command should be enabled based on the current content being
played.
For the scenario described above, at some point the app will want to reenable the Next command, and it does so
by setting the EnablingRule to Auto.

_mediaPlayer.CommandManager.NextBehavior.EnablingRule = MediaCommandEnablingRule.Auto;
_nextPressCount = 0;

Because your app may have its own UI for controlling playback while it is in the foreground, you can use the
IsEnabledChanged events to update your own UI to match the SMTC as commands are enabled or disabled, by
accessing the IsEnabled property of the MediaPlaybackCommandManagerCommandBehavior passed into
the handler.

private void NextBehavior_IsEnabledChanged(MediaPlaybackCommandManagerCommandBehavior sender, object args)
{
    MyNextButton.IsEnabled = sender.IsEnabled;
}

In some cases, you may want to completely override the behavior of an SMTC command. The example below
illustrates a scenario where an app uses the Next and Previous commands to switch between internet radio
stations instead of skipping between tracks in the current playlist. As in the previous example, a handler is
registered for when a command is received; in this case it is the PreviousReceived event.

_mediaPlayer.CommandManager.PreviousReceived += CommandManager_PreviousReceived;

In the PreviousReceived handler, first a Deferral is obtained by calling the GetDeferral method of the
MediaPlaybackCommandManagerPreviousReceivedEventArgs passed into the handler. This tells the system
to wait until the deferral is complete before executing the command. This is extremely important if you are
going to make asynchronous calls in the handler. At this point, the example calls a custom method that returns a
MediaPlaybackItem representing the previous radio station.
Next, the Handled property is checked to make sure that the event wasn't already handled by another handler. If
not, the Handled property is set to true. This lets the SMTC, and any other subscribed handlers, know that they
should take no action to execute this command because it has already been handled. The code then sets the new
source for the media player and starts the player.
Finally, Complete is called on the deferral object to let the system know that you are done processing the
command.

private async void CommandManager_PreviousReceived(MediaPlaybackCommandManager sender,
    MediaPlaybackCommandManagerPreviousReceivedEventArgs args)
{
    var deferral = args.GetDeferral();
    MediaPlaybackItem mediaPlaybackItem = await GetPreviousStation();

    if (args.Handled != true)
    {
        args.Handled = true;
        sender.MediaPlayer.Source = mediaPlaybackItem;
        sender.MediaPlayer.Play();
    }
    deferral.Complete();
}

Manual control of the SMTC


As mentioned previously in this article, the SMTC will automatically detect and display information for every
instance of MediaPlayer that your app creates. If you want to use multiple instances of MediaPlayer but want
the SMTC to provide a single entry for your app, then you must manually control the behavior of the SMTC
instead of relying on automatic integration. You must also use manual SMTC integration if you are using
MediaTimelineController to control one or more media players, or if your app uses an API other than
MediaPlayer, such as the AudioGraph class, to play media. For information on how to manually control the
SMTC, see Manual control of the System Media Transport Controls.

Related topics
Media playback
Play audio and video with MediaPlayer
Manual control of the System Media Transport Controls
System Media Transport Controls sample on GitHub
Manual control of the System Media Transport Controls

[ Updated for UWP apps on Windows 10. For Windows 8.x articles, see the archive ]
Starting with Windows 10, version 1607, UWP apps that use the MediaPlayer class to play media are
automatically integrated with the System Media Transport Controls (SMTC) by default. This is the recommended
way of interacting with the SMTC for most scenarios. For more information on customizing the SMTC's default
integration with MediaPlayer, see Integrate with the System Media Transport Controls.
There are a few scenarios where you may need to implement manual control of the SMTC: if you are using a
MediaTimelineController to control playback of one or more media players, or if you are using multiple media
players and only want to have one instance of the SMTC for your app. You must also manually control the SMTC
if you are using MediaElement to play media.

Set up transport controls


If you are using MediaPlayer to play media, you can get an instance of the SystemMediaTransportControls
class by accessing the MediaPlayer.SystemMediaTransportControls property. If you are going to manually
control the SMTC, you should disable the automatic integration provided by MediaPlayer by setting the
CommandManager.IsEnabled property to false.

NOTE
If you disable the MediaPlaybackCommandManager of the MediaPlayer by setting IsEnabled to false, it will break the
link between the MediaPlayer and the TransportControls provided by the MediaPlayerElement, so the built-in transport
controls will no longer automatically control the playback of the player. Instead, you must implement your own controls to
control the MediaPlayer.

_mediaPlayer = new MediaPlayer();
_systemMediaTransportControls = _mediaPlayer.SystemMediaTransportControls;
_mediaPlayer.CommandManager.IsEnabled = false;
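
For example, a minimal sketch of app-provided play and pause controls might look like the following, assuming two hypothetical Buttons named playButton and pauseButton in your XAML.

private void playButton_Click(object sender, RoutedEventArgs e)
{
    // Start or resume playback of the current source.
    _mediaPlayer.Play();
}

private void pauseButton_Click(object sender, RoutedEventArgs e)
{
    // Pause playback, retaining the current position.
    _mediaPlayer.Pause();
}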

You can also get an instance of the SystemMediaTransportControls by calling GetForCurrentView. You must
get the object with this method if you are using MediaElement to play media.

_systemMediaTransportControls = SystemMediaTransportControls.GetForCurrentView();

Enable the buttons that your app will use by setting the corresponding "is enabled" property of the
SystemMediaTransportControls object, such as IsPlayEnabled, IsPauseEnabled, IsNextEnabled, and
IsPreviousEnabled. See the SystemMediaTransportControls reference documentation for a complete list of
available controls.

_systemMediaTransportControls.IsPlayEnabled = true;
_systemMediaTransportControls.IsPauseEnabled = true;

Register a handler for the ButtonPressed event to receive notifications when the user presses a button.

_systemMediaTransportControls.ButtonPressed += SystemControls_ButtonPressed;

Handle system media transport controls button presses


The ButtonPressed event is raised by the system transport controls when one of the enabled buttons is pressed.
The Button property of the SystemMediaTransportControlsButtonPressedEventArgs passed into the event
handler is a member of the SystemMediaTransportControlsButton enumeration that indicates which of the
enabled buttons was pressed.
In order to update objects on the UI thread from the ButtonPressed event handler, such as a MediaElement
object, you must marshal the calls through the CoreDispatcher. This is because the ButtonPressed event handler
is not called from the UI thread and therefore an exception will be thrown if you attempt to modify the UI directly.

async void SystemControls_ButtonPressed(SystemMediaTransportControls sender,
    SystemMediaTransportControlsButtonPressedEventArgs args)
{
    switch (args.Button)
    {
        case SystemMediaTransportControlsButton.Play:
            await Dispatcher.RunAsync(Windows.UI.Core.CoreDispatcherPriority.Normal, () =>
            {
                mediaElement.Play();
            });
            break;
        case SystemMediaTransportControlsButton.Pause:
            await Dispatcher.RunAsync(Windows.UI.Core.CoreDispatcherPriority.Normal, () =>
            {
                mediaElement.Pause();
            });
            break;
        default:
            break;
    }
}

Update the system media transport controls with the current media status

You should notify the SystemMediaTransportControls when the state of the media has changed so that the
system can update the controls to reflect the current state. To do this, set the PlaybackStatus property to the
appropriate MediaPlaybackStatus value from within the CurrentStateChanged event of the MediaElement,
which is raised when the media state changes.

void MediaElement_CurrentStateChanged(object sender, RoutedEventArgs e)
{
    switch (mediaElement.CurrentState)
    {
        case MediaElementState.Playing:
            _systemMediaTransportControls.PlaybackStatus = MediaPlaybackStatus.Playing;
            break;
        case MediaElementState.Paused:
            _systemMediaTransportControls.PlaybackStatus = MediaPlaybackStatus.Paused;
            break;
        case MediaElementState.Stopped:
            _systemMediaTransportControls.PlaybackStatus = MediaPlaybackStatus.Stopped;
            break;
        case MediaElementState.Closed:
            _systemMediaTransportControls.PlaybackStatus = MediaPlaybackStatus.Closed;
            break;
        default:
            break;
    }
}

Update the system media transport controls with media info and thumbnails

Use the SystemMediaTransportControlsDisplayUpdater class to update the media info that is displayed by the
transport controls, such as the song title or the album art for the currently playing media item. Get an instance of
this class with the SystemMediaTransportControls.DisplayUpdater property. For typical scenarios, the
recommended way to pass the metadata is to call CopyFromFileAsync, passing in the currently playing media
file. The display updater will automatically extract the metadata and thumbnail image from the file.
Call the Update method to cause the system media transport controls to update its UI with the new metadata and
thumbnail.

async void MediaElement_MediaOpened(object sender, RoutedEventArgs e)
{
    // Get the updater.
    SystemMediaTransportControlsDisplayUpdater updater = _systemMediaTransportControls.DisplayUpdater;

    await updater.CopyFromFileAsync(MediaPlaybackType.Music, currentMediaFile);

    // Update the system media transport controls
    updater.Update();
}

If your scenario requires it, you can update the metadata displayed by the system media transport controls
manually by setting the values of the MusicProperties, ImageProperties, or VideoProperties objects exposed
by the DisplayUpdater class.

// Get the updater.
SystemMediaTransportControlsDisplayUpdater updater = _systemMediaTransportControls.DisplayUpdater;

// Music metadata.
updater.MusicProperties.Artist = "artist";
updater.MusicProperties.AlbumArtist = "album artist";
updater.MusicProperties.Title = "song title";

// Set the album art thumbnail.
// RandomAccessStreamReference is defined in Windows.Storage.Streams
updater.Thumbnail =
    RandomAccessStreamReference.CreateFromUri(new Uri("ms-appx:///Music/music1_AlbumArt.jpg"));

// Update the system media transport controls.
updater.Update();

Update the system media transport controls timeline properties


The system transport controls display information about the timeline of the currently playing media item,
including the current playback position, the start time, and the end time of the media item. To update the system
transport controls timeline properties, create a new SystemMediaTransportControlsTimelineProperties object.
Set the properties of the object to reflect the current state of the playing media item. Call
SystemMediaTransportControls.UpdateTimelineProperties to cause the controls to update the timeline.

// Create our timeline properties object
var timelineProperties = new SystemMediaTransportControlsTimelineProperties();

// Fill in the data, using the media element's properties
timelineProperties.StartTime = TimeSpan.FromSeconds(0);
timelineProperties.MinSeekTime = TimeSpan.FromSeconds(0);
timelineProperties.Position = mediaElement.Position;
timelineProperties.MaxSeekTime = mediaElement.NaturalDuration.TimeSpan;
timelineProperties.EndTime = mediaElement.NaturalDuration.TimeSpan;

// Update the System Media Transport Controls
_systemMediaTransportControls.UpdateTimelineProperties(timelineProperties);

You must provide a value for the StartTime, EndTime, and Position properties in order for the system controls
to display a timeline for your playing item.
MinSeekTime and MaxSeekTime allow you to specify the range within the timeline that the user can
seek. A typical scenario for this is to allow content providers to include advertisement breaks in their media.
You must set MinSeekTime and MaxSeekTime in order for the PlaybackPositionChangeRequested event to
be raised.
It is recommended that you keep the system controls in sync with your media playback by updating these
properties approximately every 5 seconds during playback and again whenever the state of playback
changes, such as pausing or seeking to a new position. The sketch below shows one way to do this.
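
As a sketch of the periodic update just described, a DispatcherTimer could refresh the timeline properties every 5 seconds, assuming the mediaElement and _systemMediaTransportControls fields used elsewhere in this article.

// Refresh the SMTC timeline every 5 seconds while the page is active.
var timelineUpdateTimer = new DispatcherTimer { Interval = TimeSpan.FromSeconds(5) };
timelineUpdateTimer.Tick += (s, e) =>
{
    var timelineProperties = new SystemMediaTransportControlsTimelineProperties();
    timelineProperties.StartTime = TimeSpan.Zero;
    timelineProperties.MinSeekTime = TimeSpan.Zero;
    timelineProperties.Position = mediaElement.Position;
    timelineProperties.MaxSeekTime = mediaElement.NaturalDuration.TimeSpan;
    timelineProperties.EndTime = mediaElement.NaturalDuration.TimeSpan;

    _systemMediaTransportControls.UpdateTimelineProperties(timelineProperties);
};
timelineUpdateTimer.Start();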

Respond to player property changes


There is a set of system transport controls properties that relate to the current state of the media player itself,
rather than the state of the playing media item. Each of these properties is matched with an event that is raised
when the user adjusts the associated control. These properties and events include:
PROPERTY          EVENT
AutoRepeatMode    AutoRepeatModeChangeRequested
PlaybackRate      PlaybackRateChangeRequested
ShuffleEnabled    ShuffleEnabledChangeRequested

To handle user interaction with one of these controls, first register a handler for the associated event.

_systemMediaTransportControls.PlaybackRateChangeRequested += SystemControls_PlaybackRateChangeRequested;

In the handler for the event, first make sure that the requested value is within a valid and expected range. If it is, set
the corresponding property on MediaElement and then set the corresponding property on the
SystemMediaTransportControls object.

void SystemControls_PlaybackRateChangeRequested(SystemMediaTransportControls sender, PlaybackRateChangeRequestedEventArgs args)
{
    // Check the requested value to make sure it is within a valid and expected range
    if (args.RequestedPlaybackRate >= 0 && args.RequestedPlaybackRate <= 2)
    {
        // Set the requested value on the MediaElement
        mediaElement.PlaybackRate = args.RequestedPlaybackRate;

        // Update the system media controls to reflect the new value
        _systemMediaTransportControls.PlaybackRate = mediaElement.PlaybackRate;
    }
}

In order for one of these player property events to be raised, you must set an initial value for the property. For
example, PlaybackRateChangeRequested will not be raised until after you have set a value for the
PlaybackRate property at least one time.
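
For example, a minimal sketch that sets initial values for all three of the properties listed above might look like this.

// Set initial values so the corresponding change-requested events can be raised.
_systemMediaTransportControls.PlaybackRate = 1.0;
_systemMediaTransportControls.ShuffleEnabled = false;
_systemMediaTransportControls.AutoRepeatMode = MediaPlaybackAutoRepeatMode.None;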

Use the system media transport controls for background audio


If you are not using the automatic SMTC integration provided by MediaPlayer, you must manually integrate with
the SMTC to enable background audio. At a minimum, your app must enable the play and pause buttons by
setting IsPlayEnabled and IsPauseEnabled to true. Your app must also handle the ButtonPressed event. If your
app does not meet these requirements, audio playback will stop when your app moves to the background.
Apps that use the new one-process model for background audio should get an instance of the
SystemMediaTransportControls by calling GetForCurrentView. Apps that use the legacy two-process model
for background audio must use BackgroundMediaPlayer.Current.SystemMediaTransportControls to get
access to the SMTC from their background process.
For more information on playing audio in the background, see Play media in the background.

Related topics
Media playback
Integrate with the System Media Transport Controls
System Media Transport Controls sample
Create, schedule, and manage media breaks

This article shows you how to create, schedule, and manage media breaks in your media playback app. Media
breaks are typically used to insert audio or video ads into media content. Starting with Windows 10, version 1607,
you can use the MediaBreakManager class to quickly and easily add media breaks to any MediaPlaybackItem
that you play with a MediaPlayer.
After you schedule one or more media breaks, the system will automatically play your media content at the
specified time during playback. The MediaBreakManager provides events so that your app can react when media
breaks start, end, or when they are skipped by the user. You can also access a MediaPlaybackSession for your
media breaks to monitor events such as download and buffering progress updates.

Schedule media breaks


Every MediaPlaybackItem object has its own MediaBreakSchedule that you use to configure the media breaks
that will play when the item is played. The first step for using media breaks in your app is to create a
MediaPlaybackItem for your main playback content.

MediaPlaybackItem moviePlaybackItem =
    new MediaPlaybackItem(MediaSource.CreateFromUri(new Uri("http://www.fabrikam.com/movie.mkv")));

For more information about working with MediaPlaybackItem, MediaPlaybackList and other fundamental
media playback APIs, see Media items, playlists, and tracks.
The next example shows how to add a preroll break to the MediaPlaybackItem, which means that the system will
play the media break before playing the playback item to which the break belongs. First a new MediaBreak object
is instantiated. In this example, the constructor is called with MediaBreakInsertionMethod.Interrupt, meaning
that the main content will be paused while the break content is played.
Next, a new MediaPlaybackItem is created for the content that will be played during the break, such as an ad. The
CanSkip property of this playback item is set to false. This means that the user will not be able to skip the item
using the built-in media controls. Your app can still choose to skip the ad programmatically by calling
SkipCurrentBreak.
The media break's PlaybackList property is a MediaPlaybackList that allows you to play multiple media items as
a playlist. Add one or more MediaPlaybackItem objects from the list's Items collection to include them in the
media break's playlist.
Finally, schedule the media break by using the main content playback item's BreakSchedule property. Specify the
break to be a preroll break by assigning it to the PrerollBreak property of the schedule object.

MediaBreak preRollMediaBreak = new MediaBreak(MediaBreakInsertionMethod.Interrupt);

MediaPlaybackItem prerollAd =
    new MediaPlaybackItem(MediaSource.CreateFromUri(new Uri("http://www.fabrikam.com/preroll_ad.mp4")));
prerollAd.CanSkip = false;
preRollMediaBreak.PlaybackList.Items.Add(prerollAd);

moviePlaybackItem.BreakSchedule.PrerollBreak = preRollMediaBreak;

Now you can play back the main media item, and the media break that you created will play before the main
content. Create a new MediaPlayer object and optionally set the AutoPlay property to true to start playback
automatically. Set the Source property of the MediaPlayer to your main content playback item. It's not required,
but you can assign the MediaPlayer to a MediaPlayerElement to render the media in a XAML page. For more
information about using MediaPlayer, see Play audio and video with MediaPlayer.

_mediaPlayer = new MediaPlayer();
_mediaPlayer.AutoPlay = true;
_mediaPlayer.Source = moviePlaybackItem;
mediaPlayerElement.SetMediaPlayer(_mediaPlayer);

Add a postroll break that plays after the MediaPlaybackItem containing your main content finishes playing, by
using the same technique as a preroll break, except that you assign your MediaBreak object to the PostrollBreak
property.

MediaBreak postrollMediaBreak = new MediaBreak(MediaBreakInsertionMethod.Interrupt);

MediaPlaybackItem postRollAd =
    new MediaPlaybackItem(MediaSource.CreateFromUri(new Uri("http://www.fabrikam.com/postroll_ad.mp4")));
postrollMediaBreak.PlaybackList.Items.Add(postRollAd);

moviePlaybackItem.BreakSchedule.PostrollBreak = postrollMediaBreak;

You can also schedule one or more midroll breaks that play at a specified time within the playback of the main
content. In the following example, the MediaBreak is created with the constructor overload that accepts a
TimeSpan object, which specifies the time within the playback of the main media item when the break will be
played. Again, MediaBreakInsertionMethod.Interrupt is specified to indicate that the main content's playback
will be paused while the break plays. The midroll break is added to the schedule by calling InsertMidrollBreak.
You can get a read-only list of the current midroll breaks in the schedule by accessing the MidrollBreaks property.

MediaBreak midrollMediaBreak = new MediaBreak(MediaBreakInsertionMethod.Interrupt, TimeSpan.FromMinutes(10));

midrollMediaBreak.PlaybackList.Items.Add(
    new MediaPlaybackItem(MediaSource.CreateFromUri(new Uri("http://www.fabrikam.com/midroll_ad_1.mp4"))));
midrollMediaBreak.PlaybackList.Items.Add(
    new MediaPlaybackItem(MediaSource.CreateFromUri(new Uri("http://www.fabrikam.com/midroll_ad_2.mp4"))));
moviePlaybackItem.BreakSchedule.InsertMidrollBreak(midrollMediaBreak);

The next midroll break example shown uses the MediaBreakInsertionMethod.Replace insertion method, which
means that the system will continue processing the main content while the break is playing. This option is typically
used by live streaming media apps where you don't want the content to pause and fall behind the live stream while
the ad is played.
This example also uses an overload of the MediaPlaybackItem constructor that accepts two TimeSpan
parameters. The first parameter specifies the starting point within the media break item where playback will begin.
The second parameter specifies the duration for which the media break item will be played. So, in the following
example, the MediaBreak will begin playing at 20 minutes into the main content. When it plays, the media item
will start 30 seconds from the beginning of the break media item and will play for 15 seconds before the main
media content resumes playing.

midrollMediaBreak = new MediaBreak(MediaBreakInsertionMethod.Replace, TimeSpan.FromMinutes(20));

MediaPlaybackItem ad =
    new MediaPlaybackItem(MediaSource.CreateFromUri(new Uri("http://www.fabrikam.com/midroll_ad_3.mp4")),
        TimeSpan.FromSeconds(30),
        TimeSpan.FromSeconds(15));
ad.CanSkip = false;
midrollMediaBreak.PlaybackList.Items.Add(ad);

Skip media breaks

As mentioned previously in this article, the CanSkip property of a MediaPlaybackItem can be set to prevent the
user from skipping the content with the built-in controls. However, you can call SkipCurrentBreak from your code
at any time to skip the current break.

private void SkipButton_Click(object sender, RoutedEventArgs e)
{
    _mediaPlayer.BreakManager.SkipCurrentBreak();
}

Handle MediaBreak events


There are several events related to media breaks that you can register for in order to take action based on the
changing state of media breaks.

_mediaPlayer.BreakManager.BreakStarted += BreakManager_BreakStarted;
_mediaPlayer.BreakManager.BreakEnded += BreakManager_BreakEnded;
_mediaPlayer.BreakManager.BreakSkipped += BreakManager_BreakSkipped;
_mediaPlayer.BreakManager.BreaksSeekedOver += BreakManager_BreaksSeekedOver;

The BreakStarted event is raised when a media break starts. You may want to update your UI to let the user know
that media break content is playing. This example uses the MediaBreakStartedEventArgs passed into the handler
to get a reference to the media break that started. Then the CurrentItemIndex property is used to determine which
media item in the media break's playlist is being played. Then the UI is updated to show the user the current ad
index and the number of ads remaining in the break. Remember that updates to the UI must be made on the UI
thread, so the call should be made inside a call to RunAsync.

private async void BreakManager_BreakStarted(MediaBreakManager sender, MediaBreakStartedEventArgs args)
{
    MediaBreak currentBreak = sender.CurrentBreak;
    var currentIndex = currentBreak.PlaybackList.CurrentItemIndex;
    var itemCount = currentBreak.PlaybackList.Items.Count;

    await Dispatcher.RunAsync(Windows.UI.Core.CoreDispatcherPriority.Normal, () =>
    {
        statusTextBlock.Text = String.Format("Playing ad {0} of {1}", currentIndex + 1, itemCount);
    });
}

BreakEnded is raised when all of the media items in the break have finished playing or have been skipped over.
You can use the handler for this event to update the UI to indicate that media break content is no longer playing.

private async void BreakManager_BreakEnded(MediaBreakManager sender, MediaBreakEndedEventArgs args)
{
    // Update UI to show that the MediaBreak is no longer playing
    await Dispatcher.RunAsync(Windows.UI.Core.CoreDispatcherPriority.Normal, () =>
    {
        statusTextBlock.Text = "";
    });

    args.MediaBreak.CanStart = false;
}

The BreakSkipped event is raised when the user presses the Next button in the built-in UI during playback of an
item for which CanSkip is true, or when you skip a break in your code by calling SkipCurrentBreak.
The following example uses the Source property of the MediaPlayer to get a reference to the media item for the
main content. The skipped media break belongs to the break schedule of this item. Next, the code checks to see if
the media break that was skipped is the same as the media break set to the PrerollBreak property of the schedule.
If so, this means that the preroll break was the break that was skipped, and in this case, a new midroll break is
created and scheduled to play 10 minutes into the main content.

private async void BreakManager_BreakSkipped(MediaBreakManager sender, MediaBreakSkippedEventArgs args)
{
    // Update UI to show that the MediaBreak is no longer playing
    await Dispatcher.RunAsync(Windows.UI.Core.CoreDispatcherPriority.Normal, () =>
    {
        statusTextBlock.Text = "";
    });

    MediaPlaybackItem currentItem = _mediaPlayer.Source as MediaPlaybackItem;

    if (currentItem.BreakSchedule.PrerollBreak != null
        && currentItem.BreakSchedule.PrerollBreak == args.MediaBreak)
    {
        MediaBreak mediaBreak = new MediaBreak(MediaBreakInsertionMethod.Interrupt, TimeSpan.FromMinutes(10));
        mediaBreak.PlaybackList.Items.Add(await GetAdPlaybackItem());
        currentItem.BreakSchedule.InsertMidrollBreak(mediaBreak);
    }
}

BreaksSeekedOver is raised when the playback position of the main media item passes over the scheduled time
for one or more media breaks. The following example checks to see if more than one media break was seeked over,
if the playback position was moved forward, and if it was moved forward less than 10 minutes. If so, the first break
that was seeked over, obtained from the SeekedOverBreaks collection exposed by the event args, is played
immediately with a call to the PlayBreak method of the MediaPlayer.BreakManager.

private void BreakManager_BreaksSeekedOver(MediaBreakManager sender, MediaBreakSeekedOverEventArgs args)
{
    if (args.SeekedOverBreaks.Count > 1
        && args.NewPosition.TotalMinutes > args.OldPosition.TotalMinutes
        && args.NewPosition.TotalMinutes - args.OldPosition.TotalMinutes < 10.0)
    {
        _mediaPlayer.BreakManager.PlayBreak(args.SeekedOverBreaks[0]);
    }
}

Get information about the current media break


As mentioned previously in this article, the CurrentItemIndex property can be used to determine which media
item in a media break is currently playing. You may want to periodically check for the currently playing item in
order to update your UI. Be sure to check the CurrentBreak property for null first. If the property is null, no media
break is currently playing.

public int GetCurrentBreakItemIndex()
{
    MediaBreak mediaBreak = _mediaPlayer.BreakManager.CurrentBreak;
    if (mediaBreak != null)
    {
        return (int)mediaBreak.PlaybackList.CurrentItemIndex;
    }
    else
    {
        return -1;
    }
}
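
As a sketch of the periodic check described above, a DispatcherTimer could poll the helper once per second; the adIndexTextBlock control is a hypothetical TextBlock in your UI.

// Poll the current break item index and reflect it in the UI.
var breakStatusTimer = new DispatcherTimer { Interval = TimeSpan.FromSeconds(1) };
breakStatusTimer.Tick += (s, e) =>
{
    int index = GetCurrentBreakItemIndex();
    adIndexTextBlock.Text = index >= 0 ? "Playing ad " + (index + 1) : "";
};
breakStatusTimer.Start();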

Access the current playback session


The MediaPlayer class provides a MediaPlaybackSession object that exposes data and events related to the
currently playing media content. The MediaBreakManager also has a MediaPlaybackSession that you can
access to get data and events specifically related to the media break content that is being played. Information you
can get from the playback session includes the current playback state, playing or paused, and the current playback
position within the content. You can use the NaturalVideoWidth and NaturalVideoHeight properties and the
NaturalVideoSizeChanged event to adjust your video UI if the media break content has a different aspect ratio
than your main content. You can also receive events such as BufferingStarted, BufferingEnded, and
DownloadProgressChanged that can provide valuable telemetry about the performance of your app.
The following example registers a handler for the BufferingProgressChanged event; in the event handler, it
updates the UI to show the current buffering progress.

_mediaPlayer.BreakManager.PlaybackSession.BufferingProgressChanged += PlaybackSession_BufferingProgressChanged;

private async void PlaybackSession_BufferingProgressChanged(MediaPlaybackSession sender, object args)
{
    await Dispatcher.RunAsync(Windows.UI.Core.CoreDispatcherPriority.Normal, () =>
    {
        bufferingProgressBar.Value = sender.BufferingProgress;
    });
}

Related topics
Media playback
Play audio and video with MediaPlayer
Manual control of the System Media Transport Controls
Play media in the background

This article shows you how to configure your app so that media continues to play when your app moves from the
foreground to the background. This means that even after the user has minimized your app, returned to the home
screen, or has navigated away from your app in some other way, your app can continue to play audio.
Scenarios for background audio playback include:
Long-running playlists: The user briefly brings up a foreground app to select and start a playlist, after
which the user expects the playlist to continue playing in the background.
Using task switcher: The user briefly brings up a foreground app to start playing audio, then switches to
another open app using the task switcher. The user expects the audio to continue playing in the
background.
The background audio implementation described in this article will allow your app to run universally on all
Windows devices including Mobile, Desktop, and Xbox.

NOTE
The code in this article was adapted from the UWP Background Audio sample.

Explanation of one-process model


With Windows 10, version 1607, a new single-process model has been introduced that greatly simplifies the
process of enabling background audio. Previously, your app was required to manage a background process in
addition to your foreground app and then manually communicate state changes between the two processes.
Under the new model, you simply add the background audio capability to your app manifest, and your app will
automatically continue playing audio when it moves to the background. Two new application lifecycle events,
EnteredBackground and LeavingBackground, let your app know when it is entering and leaving the
background. When your app transitions to or from the background, the memory constraints
enforced by the system may change, so you can use these events to check your current memory consumption and
free up resources in order to stay below the limit.
By eliminating the complex cross-process communication and state management, the new model allows you to
implement background audio much more quickly with a significant reduction in code. However, the two-process
model is still supported in the current release for backwards compatibility. For more information, see Legacy
background audio model.

Requirements for background audio


Your app must meet the following requirements for audio playback while your app is in the background.
Add the Background Media Playback capability to your app manifest, as described later in this article.
If your app disables the automatic integration of MediaPlayer with the System Media Transport Controls
(SMTC), such as by setting the CommandManager.IsEnabled property to false, then you must implement
manual integration with the SMTC in order to enable background media playback. You must also manually
integrate with SMTC if you are using an API other than MediaPlayer, such as AudioGraph, to play audio if
you want to have the audio continue to play when your app moves to the background. The minimum SMTC
integration requirements are described in the "Use the system media transport controls for background audio"
section of Manual control of the System Media Transport Controls.
While your app is in the background, you must stay under the memory usage limits set by the system for
background apps. Guidance for managing memory while in the background is provided later in this article.

Background media playback manifest capability


To enable background audio, you must add the background media playback capability to the app manifest file,
Package.appxmanifest.
To add capabilities to the app manifest using the manifest designer
1. In Microsoft Visual Studio, in Solution Explorer, open the designer for the application manifest by double-
clicking the package.appxmanifest item.
2. Select the Capabilities tab.
3. Select the Background Media Playback check box.
To set the capability by manually editing the app manifest XML, first make sure that the uap3 namespace prefix is
defined in the Package element. If not, add it as shown below.

<Package
xmlns="http://schemas.microsoft.com/appx/manifest/foundation/windows10"
xmlns:mp="http://schemas.microsoft.com/appx/2014/phone/manifest"
xmlns:uap="http://schemas.microsoft.com/appx/manifest/uap/windows10"
xmlns:uap3="http://schemas.microsoft.com/appx/manifest/uap/windows10/3"
IgnorableNamespaces="uap uap3 mp">

Next, add the backgroundMediaPlayback capability to the Capabilities element:

<Capabilities>
<uap3:Capability Name="backgroundMediaPlayback"/>
</Capabilities>

Handle transitioning between foreground and background


When your app moves from the foreground to the background, the EnteredBackground event is raised. And
when your app returns to the foreground, the LeavingBackground event is raised. Because these are app
lifecycle events, you should register handlers for these events when your app is created. In the default project
template, this means adding it to the App class constructor in App.xaml.cs.

public App()
{
this.InitializeComponent();
this.Suspending += OnSuspending;

this.EnteredBackground += App_EnteredBackground;
this.LeavingBackground += App_LeavingBackground;
}

Create a variable to track whether you are currently running in the background.

bool _isInBackgroundMode = false;

When the EnteredBackground event is raised, set the tracking variable to indicate that you are currently running
in the background. You should not perform long-running tasks in the EnteredBackground event because this
may cause the transition to the background to appear slow to the user.

private void App_EnteredBackground(object sender, EnteredBackgroundEventArgs e)
{
_isInBackgroundMode = true;
}

In the LeavingBackground event handler, you should set the tracking variable to indicate that your app is no
longer running in the background.

private void App_LeavingBackground(object sender, LeavingBackgroundEventArgs e)
{
_isInBackgroundMode = false;
}

Memory management requirements


The most important part of handling the transition between foreground and background is managing the memory
that your app uses. Because running in the background will reduce the memory resources your app is allowed to
retain by the system, you should also register for the AppMemoryUsageIncreased and
AppMemoryUsageLimitChanging events. When these events are raised, you should check your app's current
memory usage and the current limit, and then reduce your memory usage if needed. For information about
reducing your memory usage while running in the background, see Free memory when your app moves to the
background.
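
The following is a minimal sketch of this pattern based on the Windows.System.MemoryManager API; ReleaseResources is a hypothetical helper that frees caches, UI elements, or other reclaimable memory.

using Windows.System;

// Register for the memory events when the app starts.
MemoryManager.AppMemoryUsageLimitChanging += MemoryManager_AppMemoryUsageLimitChanging;
MemoryManager.AppMemoryUsageIncreased += MemoryManager_AppMemoryUsageIncreased;

private void MemoryManager_AppMemoryUsageLimitChanging(object sender, AppMemoryUsageLimitChangingEventArgs e)
{
    // If usage is already above the new limit, free resources before the new limit is enforced.
    if (MemoryManager.AppMemoryUsage >= e.NewLimit)
    {
        ReleaseResources(); // hypothetical helper that frees reclaimable memory
    }
}

private void MemoryManager_AppMemoryUsageIncreased(object sender, object e)
{
    // Check the usage level and free resources when usage is high or over the limit.
    AppMemoryUsageLevel level = MemoryManager.AppMemoryUsageLevel;
    if (level == AppMemoryUsageLevel.OverLimit || level == AppMemoryUsageLevel.High)
    {
        ReleaseResources();
    }
}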

Network availability for background media apps


All network-aware media sources, meaning those that are not created from a stream or a file, keep the network
connection active while retrieving remote content and release it when they are not. MediaStreamSource,
specifically, relies on the application to report the correct buffered range to the platform using
SetBufferedRange. After the entire content is fully buffered, the network will no longer be reserved on the app's
behalf.
If you need to make network calls that occur in the background when media is not downloading, these must be
wrapped in an appropriate task like ApplicationTrigger, MaintenanceTrigger, or TimeTrigger. For more
information, see Support your app with background tasks.
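
As a rough sketch of that approach (the task name and entry point below are hypothetical), you could register an ApplicationTrigger-backed background task and then request it whenever the app needs to make the network call from the background:

using Windows.ApplicationModel.Background;

// Register a background task driven by an ApplicationTrigger.
// "NetworkCallTask" and the entry point class are hypothetical names.
var appTrigger = new ApplicationTrigger();
var builder = new BackgroundTaskBuilder
{
    Name = "NetworkCallTask",
    TaskEntryPoint = "MyApp.Tasks.NetworkCallBackgroundTask"
};
builder.SetTrigger(appTrigger);
BackgroundTaskRegistration registration = builder.Register();

// Later, when the background network call is needed, fire the trigger.
ApplicationTriggerResult result = await appTrigger.RequestAsync();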

Related topics
Media playback
Play audio and video with MediaPlayer
Integrate with the System Media Transport Controls
Background Audio sample
Legacy background media playback

[ Updated for UWP apps on Windows 10. For Windows 8.x articles, see the archive ]
This article describes the legacy, two-process model for adding background audio support to your UWP app.
Starting with Windows 10, version 1607, a single-process model for background audio is available that is much
simpler to implement. For more information on the current recommendations for background audio, see Play media
in the background. This article is intended to provide support for apps that have already been developed using the
legacy two-process model.

Background audio architecture


An app performing background playback consists of two processes. The first process is the main app, which
contains the app UI and client logic, running in the foreground. The second process is the background playback
task, which implements IBackgroundTask like all UWP app background tasks. The background task contains the
audio playback logic and background services. The background task communicates with the system through the
System Media Transport Controls.
The following diagram is an overview of how the system is designed.

MediaPlayer
The Windows.Media.Playback namespace contains APIs used to play audio in the background. There is a single
instance of MediaPlayer per app through which playback occurs. Your background audio app calls methods and
sets properties on the MediaPlayer class to set the current track, start playback, pause, fast forward, rewind, and so
on. The media player object instance is always accessed through the BackgroundMediaPlayer.Current property.

MediaPlayer Proxy and Stub


When BackgroundMediaPlayer.Current is accessed from your app's background process, the MediaPlayer
instance is activated in the background task host and can be manipulated directly.
When BackgroundMediaPlayer.Current is accessed from the foreground application, the MediaPlayer instance
that is returned is actually a proxy that communicates with a stub in the background process. This stub
communicates with the actual MediaPlayer instance, which is also hosted in the background process.
Both the foreground and background process can access most of the properties of the MediaPlayer instance, with
the exception of MediaPlayer.Source and MediaPlayer.SystemMediaTransportControls which can only be
accessed from the background process. The foreground app and the background process can both receive
notifications of media-specific events like MediaOpened, MediaEnded, and MediaFailed.

Playback Lists
A common scenario for background audio applications is to play multiple items in a row. This is most easily
accomplished in your background process by using a MediaPlaybackList object, which can be set as a source on
the MediaPlayer by assigning it to the MediaPlayer.Source property.
It is not possible to access a MediaPlaybackList from the foreground process that was set in the background
process.
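
As a minimal sketch of this pattern (the track URIs here are hypothetical in-package assets), the background process could build and assign a list like this:

// Build a playlist from two hypothetical in-package tracks.
var playbackList = new MediaPlaybackList();
playbackList.Items.Add(new MediaPlaybackItem(MediaSource.CreateFromUri(new Uri("ms-appx:///Assets/track1.mp3"))));
playbackList.Items.Add(new MediaPlaybackItem(MediaSource.CreateFromUri(new Uri("ms-appx:///Assets/track2.mp3"))));

// Assign the list as the playback source in the background process.
BackgroundMediaPlayer.Current.Source = playbackList;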

System Media Transport Controls


A user may control audio playback without directly using your app's UI through means such as Bluetooth devices,
SmartGlass, and the System Media Transport Controls. Your background task uses the
SystemMediaTransportControls class to subscribe to these user-initiated system events.
To get a SystemMediaTransportControls instance from within the background process, use the
MediaPlayer.SystemMediaTransportControls property. Foreground apps get an instance of the class by calling
SystemMediaTransportControls.GetForCurrentView, but the instance returned is a foreground-only instance
that does not relate to the background task.
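
The following is a minimal sketch of this pattern from within the background process; enabling and handling only the Play and Pause buttons here is an illustrative assumption:

// Get the transport controls associated with the background media player.
SystemMediaTransportControls smtc = BackgroundMediaPlayer.Current.SystemMediaTransportControls;
smtc.IsEnabled = true;
smtc.IsPlayEnabled = true;
smtc.IsPauseEnabled = true;

smtc.ButtonPressed += (sender, args) =>
{
    // React to the user-initiated system transport commands.
    switch (args.Button)
    {
        case SystemMediaTransportControlsButton.Play:
            BackgroundMediaPlayer.Current.Play();
            break;
        case SystemMediaTransportControlsButton.Pause:
            BackgroundMediaPlayer.Current.Pause();
            break;
    }
};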

Sending Messages Between Tasks


There are times when you will want to communicate between the two processes of a background audio app. For
example, you might want the background task to notify the foreground task when a new track starts playing, and
then send the new song title to the foreground task to display on the screen.
A simple communication mechanism raises events in both the foreground and background processes. The
SendMessageToForeground and SendMessageToBackground methods each invoke events in the
corresponding process. Messages can be received by subscribing to the MessageReceivedFromBackground and
MessageReceivedFromForeground events.
Data can be passed as an argument to the send message methods that are then passed into the message received
event handlers. Pass data using the ValueSet class. This class is a dictionary that contains a string as a key and
other value types as values. You can pass simple value types such as integers, strings, and booleans.
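
For example, the background task could post a message that the foreground app then reads; the "trackchanged" key and the title value in this sketch are hypothetical:

// In the background task: notify the foreground that a new track is playing.
var message = new ValueSet();
message.Add("trackchanged", "New Song Title");
BackgroundMediaPlayer.SendMessageToForeground(message);

// In the foreground app: subscribe to the corresponding event and read the posted values.
BackgroundMediaPlayer.MessageReceivedFromBackground += (sender, args) =>
{
    object title;
    if (args.Data.TryGetValue("trackchanged", out title))
    {
        // Update the UI with (string)title, dispatching to the UI thread as needed.
    }
};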

Background Task Life Cycle


The lifetime of a background task is closely tied to your app's current playback status. For example, when the user
pauses audio playback, the system may terminate or cancel your app depending on the circumstances. After a
period of time without audio playback, the system may automatically shut down the background task.
The IBackgroundTask.Run method is called the first time your app accesses either
BackgroundMediaPlayer.Current from code running in the foreground app or when you register a handler for
the MessageReceivedFromBackground event, whichever occurs first. It is recommended that you register for the
message received handler before calling BackgroundMediaPlayer.Current for the first time so that the
foreground app doesn't miss any messages sent from the background process.
To keep the background task alive, your app must request a BackgroundTaskDeferral from within the Run
method and call BackgroundTaskDeferral.Complete when the task instance receives the Canceled or
Completed events. Do not loop or wait in the Run method because this consumes resources and may cause your
app's background task to be terminated by the system.
Your background task gets the Completed event when the Run method is completed and deferral is not
requested. In some cases, when your app gets the Canceled event, it can also be followed by the Completed
event. Your task may receive a Canceled event while Run is executing, so be sure to manage this potential
concurrency.
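
The following sketch shows the general shape of this pattern; MyBackgroundAudioTask is a hypothetical class name, and the exchange guard is one way to avoid completing the deferral twice when Canceled is followed by Completed:

using Windows.ApplicationModel.Background;

public sealed class MyBackgroundAudioTask : IBackgroundTask
{
    private BackgroundTaskDeferral _deferral;

    public void Run(IBackgroundTaskInstance taskInstance)
    {
        // Take a deferral so the task stays alive after Run returns.
        _deferral = taskInstance.GetDeferral();

        taskInstance.Canceled += (sender, reason) => CompleteDeferral();
        taskInstance.Task.Completed += (task, args) => CompleteDeferral();
    }

    private void CompleteDeferral()
    {
        // Complete the deferral at most once; a Canceled event may be followed by Completed.
        BackgroundTaskDeferral deferral = System.Threading.Interlocked.Exchange(ref _deferral, null);
        deferral?.Complete();
    }
}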
Situations in which the background task can be cancelled include:
A new app with audio playback capabilities starts on systems that enforce the exclusivity sub-policy. See the
System policies for background audio task lifetime section below.
A background task has been launched but music is not yet playing, and then the foreground app is
suspended.
Other media interruptions, such as incoming phone calls or VoIP calls.
Situations in which the background task can be terminated without notice include:
A VoIP call comes in and there is not enough available memory on the system to keep the background task
alive.
A resource policy is violated.
Task cancellation or completion does not end gracefully.

System policies for background audio task lifetime


The following policies help determine how the system manages the lifetime of background audio tasks.
Exclusivity
If enabled, this sub-policy limits the number of background audio tasks to be at most 1 at any given time. It is
enabled on Mobile and other non-Desktop SKUs.
Inactivity Timeout
Due to resource constraints, the system may terminate your background task after a period of inactivity.
A background task is considered inactive if both of the following conditions are met:
The foreground app is not visible (it is suspended or terminated).
The background media player is not in the playing state.
If both of these conditions are satisfied, the background media system policy will start a timer. If neither condition
has changed when the timer expires, the background media system policy will terminate the background task.
Shared Lifetime
If enabled, this sub-policy forces the background task to be dependent on the lifetime of the foreground task. If the
foreground task is shut down, either by the user or the system, the background task will also shut down.
However, note that this does not mean that the foreground is dependent on the background. If the background task
is shut down, this does not force the foreground task to shut down.
The following table lists which policies are enforced on which device types.

SUB-POLICY            DESKTOP     MOBILE      OTHER
Exclusivity           Disabled    Enabled     Enabled
Inactivity Timeout    Disabled    Enabled     Disabled
Shared Lifetime       Enabled     Disabled    Disabled


Adaptive streaming

[ Updated for UWP apps on Windows 10. For Windows 8.x articles, see the archive ]
This article describes how to add playback of adaptive streaming multimedia content to a Universal Windows
Platform (UWP) app. This feature currently supports playback of HTTP Live Streaming (HLS) and Dynamic
Adaptive Streaming over HTTP (DASH) content.
For a list of supported HLS protocol tags, see HLS tag support.

NOTE
The code in this article was adapted from the UWP Adaptive streaming sample.

Simple adaptive streaming with MediaPlayer and MediaPlayerElement


To play adaptive streaming media in a UWP app, create a Uri object pointing to a DASH or HLS manifest file.
Create an instance of the MediaPlayer class. Call MediaSource.CreateFromUri to create a new MediaSource
object and then set that to the Source property of the MediaPlayer. Call Play to start playback of the media
content.

MediaPlayer _mediaPlayer;

System.Uri manifestUri = new Uri("http://amssamples.streaming.mediaservices.windows.net/49b57c87-f5f3-48b3-ba22-c55cfdffa9cb/Sintel.ism/manifest(format=m3u8-aapl)");
_mediaPlayer = new MediaPlayer();
_mediaPlayer.Source = MediaSource.CreateFromUri(manifestUri);
_mediaPlayer.Play();

The above example will play the audio of the media content but it doesn't automatically render the content in your
UI. Most apps that play video content will want to render the content in a XAML page. To do this, add a
MediaPlayerElement control to your XAML page.

<MediaPlayerElement x:Name="mediaPlayerElement" HorizontalAlignment="Stretch" AreTransportControlsEnabled="True"/>

Call MediaSource.CreateFromUri to create a MediaSource from the URI of a DASH or HLS manifest file. Then
set the Source property of the MediaPlayerElement. The MediaPlayerElement will automatically create a new
MediaPlayer object for the content. You can call Play on the MediaPlayer to start playback of the content.

System.Uri manifestUri = new Uri("http://amssamples.streaming.mediaservices.windows.net/49b57c87-f5f3-48b3-ba22-c55cfdffa9cb/Sintel.ism/manifest(format=m3u8-aapl)");
mediaPlayerElement.Source = MediaSource.CreateFromUri(manifestUri);
mediaPlayerElement.MediaPlayer.Play();

NOTE
Starting with Windows 10, version 1607, it is recommended that you use the MediaPlayer class to play media items. The
MediaPlayerElement is a lightweight XAML control that is used to render the content of a MediaPlayer in a XAML page.
The MediaElement control continues to be supported for backwards compatibility. For more information about using
MediaPlayer and MediaPlayerElement to play media content, see Play audio and video with MediaPlayer. For information
about using MediaSource and related APIs to work with media content, see Media items, playlists, and tracks.

Adaptive streaming with AdaptiveMediaSource


If your app requires more advanced adaptive streaming features, such as providing custom HTTP headers,
monitoring the current download and playback bitrates, or adjusting the ratios that determine when the system
switches bitrates of the adaptive stream, use the AdaptiveMediaSource object.
The adaptive streaming APIs are found in the Windows.Media.Streaming.Adaptive namespace.

using Windows.Media.Streaming.Adaptive;
using System.Threading.Tasks;
using Windows.Storage.Streams;
using Windows.Media.Playback;
using Windows.Media.Core;

Initialize the AdaptiveMediaSource with the URI of an adaptive streaming manifest file by calling
CreateFromUriAsync. The AdaptiveMediaSourceCreationStatus value returned from this method lets you
know if the media source was created successfully. If so, you can set the object as the stream source for your
MediaPlayer by calling SetMediaSource. In this example, the AvailableBitrates property is queried to
determine the maximum supported bitrate for this stream, and then that value is set as the initial bitrate. This
example also registers handlers for the DownloadRequested, DownloadBitrateChanged, and
PlaybackBitrateChanged events which are discussed later in this article.

async private void InitializeAdaptiveMediaSource(System.Uri uri)
{
    AdaptiveMediaSourceCreationResult result = await AdaptiveMediaSource.CreateFromUriAsync(uri);

    if (result.Status == AdaptiveMediaSourceCreationStatus.Success)
    {
        // ams is a class member field of type AdaptiveMediaSource
        ams = result.MediaSource;
        mediaPlayerElement.SetMediaPlayer(new MediaPlayer());
        mediaPlayerElement.MediaPlayer.SetMediaSource(ams);
        mediaPlayerElement.MediaPlayer.Play();

        ams.InitialBitrate = ams.AvailableBitrates.Max<uint>();

        //Register for download requests
        ams.DownloadRequested += DownloadRequested;

        //Register for bitrate change events
        ams.DownloadBitrateChanged += DownloadBitrateChanged;
        ams.PlaybackBitrateChanged += PlaybackBitrateChanged;
    }
    else
    {
        // Handle failure to create the adaptive media source
    }
}

If you need to set custom HTTP headers for getting the manifest file, you can create an HttpClient object, set the
desired headers, and then pass the object into the overload of CreateFromUriAsync.

httpClient = new Windows.Web.Http.HttpClient();
httpClient.DefaultRequestHeaders.TryAppendWithoutValidation("X-CustomHeader", "This is a custom header");
AdaptiveMediaSourceCreationResult result = await AdaptiveMediaSource.CreateFromUriAsync(manifestUri, httpClient);

The DownloadRequested event is raised when the system is about to retrieve a resource from the server. The
AdaptiveMediaSourceDownloadRequestedEventArgs passed into the event handler exposes properties that
provide information about the resource being requested such as the type and URI of the resource.
You can use the DownloadRequested event handler to modify the resource request by updating the properties of
the AdaptiveMediaSourceDownloadResult object provided by the event args. In the example below, the URI
from which the resource will be retrieved is modified by updating the ResourceUri properties of the result object.
You can override the content of the requested resource by setting the Buffer or InputStream properties of the
result object. In the example below, the contents of the manifest resource are replaced by setting the Buffer
property. Note that if you are updating the resource request with data that is obtained asynchronously, such as
retrieving data from a remote server or asynchronous user authentication, you must call
AdaptiveMediaSourceDownloadRequestedEventArgs.GetDeferral to get a deferral and then call Complete
when the operation is complete to signal the system that the download request operation can continue.

private async void DownloadRequested(AdaptiveMediaSource sender, AdaptiveMediaSourceDownloadRequestedEventArgs args)
{
    // rewrite key URIs to replace http:// with https://
    if (args.ResourceType == AdaptiveMediaSourceResourceType.Key)
    {
        string originalUri = args.ResourceUri.ToString();
        string secureUri = originalUri.Replace("http:", "https:");

        // override the URI by setting property on the result sub object
        args.Result.ResourceUri = new Uri(secureUri);
    }

    if (args.ResourceType == AdaptiveMediaSourceResourceType.Manifest)
    {
        AdaptiveMediaSourceDownloadRequestedDeferral deferral = args.GetDeferral();
        args.Result.Buffer = await CreateMyCustomManifest(args.ResourceUri);
        deferral.Complete();
    }
}

The AdaptiveMediaSource object provides events that allow you to react when the download or playback
bitrates change. In this example, the current bitrates are simply updated in the UI. Note that you can modify the
ratios that determine when the system switches bitrates of the adaptive stream. For more information, see the
AdvancedSettings property.
private async void DownloadBitrateChanged(AdaptiveMediaSource sender, AdaptiveMediaSourceDownloadBitrateChangedEventArgs args)
{
await this.Dispatcher.RunAsync(CoreDispatcherPriority.Normal, new DispatchedHandler(() =>
{
txtDownloadBitrate.Text = args.NewValue.ToString();
}));
}

private async void PlaybackBitrateChanged(AdaptiveMediaSource sender, AdaptiveMediaSourcePlaybackBitrateChangedEventArgs args)
{
await this.Dispatcher.RunAsync(CoreDispatcherPriority.Normal, new DispatchedHandler(() =>
{
txtPlaybackBitrate.Text = args.NewValue.ToString();
}));
}
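
The following is a minimal sketch of tuning that behavior through AdvancedSettings on the AdaptiveMediaSource created earlier in this article; the specific property values shown are illustrative assumptions, not recommendations.

AdaptiveMediaSourceAdvancedSettings advancedSettings = ams.AdvancedSettings;

// Treat every segment as independently decodable (an assumption about the content).
advancedSettings.AllSegmentsIndependent = true;

// Target a fraction of the measured bandwidth when selecting a download bitrate.
advancedSettings.DesiredBitrateHeadroomRatio = 0.85;

// Buffer ratio below which the source switches down to a lower bitrate.
advancedSettings.BitrateDowngradeTriggerRatio = 0.75;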

Related topics
Media playback
HLS tag support
Play audio and video with MediaPlayer
Play media in the background
HTTP Live Streaming (HLS) tag support

The following table lists the HLS tags that are supported for UWP apps.

NOTE
Custom tags that start with "X-" can be accessed as timed metadata as described in the article Media items, playlists, and
tracks.

PROTOCOL TAG                           INTRODUCED IN     HLS DOCUMENT    REQUIRED      JULY RELEASE     WINDOWS 10,      WINDOWS 10,
                                       PROTOCOL VERSION  DRAFT VERSION   ON CLIENT     OF WINDOWS 10    VERSION 1511     VERSION 1607

4.3.1. Basic Tags
4.3.1.1. EXTM3U                        1                 0               REQUIRED      Supported        Supported        Supported
4.3.1.2. EXT-X-VERSION                 2                 3               REQUIRED      Supported        Supported        Supported

4.3.2. Media Segment Tags
4.3.2.1. EXTINF                        1                 0               REQUIRED      Supported        Supported        Supported
4.3.2.2. EXT-X-BYTERANGE               4                 7               OPTIONAL      Supported        Supported        Supported
4.3.2.3. EXT-X-DISCONTINUITY           1                 2               OPTIONAL      Supported        Supported        Supported
4.3.2.4. EXT-X-KEY                     1                 0               OPTIONAL      Supported        Supported        Supported
    METHOD                             1                 0               Attribute     "NONE, AES-128"  "NONE, AES-128"  "NONE, AES-128, SAMPLE-AES"
    URI                                1                 0               Attribute     Supported        Supported        Supported
    IV                                 2                 3               Attribute     Supported        Supported        Supported
    KEYFORMAT                          5                 9               Attribute     Not Supported    Not Supported    Not Supported
    KEYFORMATVERSIONS                  5                 9               Attribute     Not Supported    Not Supported    Not Supported
4.3.2.5. EXT-X-MAP                     5                 9               OPTIONAL      Not Supported    Not Supported    Not Supported
    URI                                5                 9               Attribute     Not Supported    Not Supported    Not Supported
    BYTERANGE                          5                 9               Attribute     Not Supported    Not Supported    Not Supported
4.3.2.6. EXT-X-PROGRAM-DATE-TIME       1                 0               OPTIONAL      Not Supported    Not Supported    Not Supported

4.3.3. Media Playlist Tags
4.3.3.1. EXT-X-TARGETDURATION          1                 0               REQUIRED      Supported        Supported        Supported
4.3.3.2. EXT-X-MEDIA-SEQUENCE          1                 0               OPTIONAL      Supported        Supported        Supported
4.3.3.3. EXT-X-DISCONTINUITY-SEQUENCE  6                 12              OPTIONAL      Not Supported    Not Supported    Not Supported
4.3.3.4. EXT-X-ENDLIST                 1                 0               OPTIONAL      Supported        Supported        Supported
4.3.3.5. EXT-X-PLAYLIST-TYPE           3                 6               OPTIONAL      Supported        Supported        Supported
4.3.3.6. EXT-X-I-FRAMES-ONLY           4                 7               OPTIONAL      Not Supported    Not Supported    Not Supported

4.3.4. Master Playlist Tags
4.3.4.1. EXT-X-MEDIA                   4                 7               OPTIONAL      Supported        Supported        Supported
    TYPE                               4                 7               Attribute     "AUDIO, VIDEO"   "AUDIO, VIDEO"   "AUDIO, VIDEO, SUBTITLES"
    URI                                4                 7               Attribute     Supported        Supported        Supported
    GROUP-ID                           4                 7               Attribute     Supported        Supported        Supported
    LANGUAGE                           4                 7               Attribute     Supported        Supported        Supported
    ASSOC-LANGUAGE                     6                 13              Attribute     Not Supported    Not Supported    Not Supported
    NAME                               4                 7               Attribute     Not Supported    Not Supported    Supported
    DEFAULT                            4                 7               Attribute     Not Supported    Not Supported    Not Supported
    AUTOSELECT                         4                 7               Attribute     Not Supported    Not Supported    Not Supported
    FORCED                             5                 9               Attribute     Not Supported    Not Supported    Not Supported
    INSTREAM-ID                        6                 12              Attribute     Not Supported    Not Supported    Not Supported
    CHARACTERISTICS                    5                 9               Attribute     Not Supported    Not Supported    Not Supported
4.3.4.2. EXT-X-STREAM-INF              1                 0               REQUIRED      Supported        Supported        Supported
    BANDWIDTH                          1                 0               Attribute     Supported        Supported        Supported
    PROGRAM-ID                         1                 0               Attribute     NA               NA               NA
    AVERAGE-BANDWIDTH                  7                 14              Attribute     Not Supported    Not Supported    Not Supported
    CODECS                             1                 0               Attribute     Supported        Supported        Supported
    RESOLUTION                         2                 3               Attribute     Supported        Supported        Supported
    FRAME-RATE                         7                 15              Attribute     NA               NA               NA
    AUDIO                              4                 7               Attribute     Supported        Supported        Supported
    VIDEO                              4                 7               Attribute     Supported        Supported        Supported
    SUBTITLES                          5                 9               Attribute     Not Supported    Not Supported    Supported
    CLOSED-CAPTIONS                    6                 12              Attribute     Not Supported    Not Supported    Not Supported
4.3.4.3. EXT-X-I-FRAME-STREAM-INF      4                 7               OPTIONAL      Not Supported    Not Supported    Not Supported
4.3.4.4. EXT-X-SESSION-DATA            7                 14              OPTIONAL      Not Supported    Not Supported    Not Supported
4.3.4.5. EXT-X-SESSION-KEY             7                 17              OPTIONAL      Not Supported    Not Supported    Not Supported

4.3.5. Media or Master Playlist Tags
4.3.5.1. EXT-X-INDEPENDENT-SEGMENTS    6                 13              OPTIONAL      Not Supported    Supported        Supported
4.3.5.2. EXT-X-START                   6                 12              OPTIONAL      Not Supported    Partially Supported     Partially Supported
    TIME-OFFSET                        6                 12              Attribute     Not Supported    Supported        Supported
    PRECISE                            6                 12              Attribute     Not Supported    Default "NO" supported  Default "NO" supported

Related topics
Media playback
Adaptive streaming
Dynamic Adaptive Streaming over HTTP (DASH) profile support

The following table lists the DASH profiles that are supported for UWP apps.

MANIFEST TAG                            TYPE      NOTES                          JULY RELEASE    WINDOWS 10,     WINDOWS 10,     WINDOWS 10,     WINDOWS 10,
                                                                                 OF WINDOWS 10   VERSION 1511    VERSION 1607    VERSION 1607    VERSION 1703

urn:mpeg:dash:profile:isoff-live:2011   Static                                   Supported       Supported       Supported       Supported       Supported

urn:mpeg:dash:profile:isoff-main:2011             Best effort                    Supported       Supported       Supported       Supported       Supported

urn:mpeg:dash:profile:isoff-live:2011   Dynamic   $Time$ is supported but        Not Supported   Not Supported   Not Supported   Not Supported   Supported
                                                  $Number$ is unsupported in
                                                  segment templates

Related topics
Media playback
Adaptive streaming
Media casting

[ Updated for UWP apps on Windows 10. For Windows 8.x articles, see the archive ]
This article shows you how to cast media to remote devices from a Universal Windows app.

Built-in media casting with MediaPlayerElement


The simplest way to cast media from a Universal Windows app is to use the built-in casting capability of the
MediaPlayerElement control.
To allow the user to open a video file to be played in the MediaPlayerElement control, add the following
namespaces to your project.

using Windows.Storage;
using Windows.Storage.Pickers;
using Windows.Storage.Streams;
using Windows.Media.Core;

In your app's XAML file, add a MediaPlayerElement and set AreTransportControlsEnabled to true.

<MediaPlayerElement Name="mediaPlayerElement" MinHeight="100" MaxWidth="600" HorizontalAlignment="Stretch" AreTransportControlsEnabled="True"/>

Add a button to let the user initiate picking a file.

<Button x:Name="openButton" Click="openButton_Click" Content="Open"/>

In the Click event handler for the button, create a new instance of the FileOpenPicker, add video file types to the
FileTypeFilter collection, and set the starting location to the user's videos library.
Call PickSingleFileAsync to launch the file picker dialog. When this method returns, the result is a StorageFile
object representing the video file. Check to make sure the file isn't null, which it will be if the user cancels the
picking operation. Call the file's OpenAsync method to get an IRandomAccessStream for the file. Finally, create a
new MediaSource object from the selected file by calling CreateFromStorageFile and assign it to the
MediaPlayerElement object's Source property to make the video file the video source for the control.
private async void openButton_Click(object sender, RoutedEventArgs e)
{
    //Create a new picker
    FileOpenPicker filePicker = new FileOpenPicker();

    //Add filetype filters. In this case wmv and mp4.
    filePicker.FileTypeFilter.Add(".wmv");
    filePicker.FileTypeFilter.Add(".mp4");

    //Set picker start location to the video library
    filePicker.SuggestedStartLocation = PickerLocationId.VideosLibrary;

    //Retrieve file from picker
    StorageFile file = await filePicker.PickSingleFileAsync();

    //If we got a file, load it into the media element
    if (file != null)
    {
        mediaPlayerElement.Source = MediaSource.CreateFromStorageFile(file);
        mediaPlayerElement.MediaPlayer.Play();
    }
}

Once the video is loaded in the MediaPlayerElement, the user can simply press the casting button on the
transport controls to launch a built-in dialog that allows them to choose a device to which the loaded media will be
cast.

NOTE
Starting with Windows 10, version 1607, it is recommended that you use the MediaPlayer class to play media items. The
MediaPlayerElement is a lightweight XAML control that is used to render the content of a MediaPlayer in a XAML page.
The MediaElement control continues to be supported for backwards compatibility. For more information on using
MediaPlayer and MediaPlayerElement to play media content, see Play audio and video with MediaPlayer. For information
on using MediaSource and related APIs to work with media content, see Media items, playlists, and tracks.

Media casting with the CastingDevicePicker


A second way to cast media to a device is to use the CastingDevicePicker. To use this class, include the
Windows.Media.Casting namespace in your project.

using Windows.Media.Casting;

Declare a member variable for the CastingDevicePicker object.

CastingDevicePicker castingPicker;

When your page is initialized, create a new instance of the casting picker and set its Filter.SupportsVideo
property to true to indicate that the casting devices listed by the picker should support video. Register a handler for the
CastingDeviceSelected event, which is raised when the user picks a device for casting.
//Initialize our picker object
castingPicker = new CastingDevicePicker();

//Set the picker to filter to video capable casting devices
castingPicker.Filter.SupportsVideo = true;

//Hook up device selected event
castingPicker.CastingDeviceSelected += CastingPicker_CastingDeviceSelected;

In your XAML file, add a button to allow the user to launch the picker.

<Button x:Name="castPickerButton" Content="Cast Button" Click="castPickerButton_Click"/>

In the Click event handler for the button, call TransformToVisual to get the transform of a UI element relative to
another element. In this example, the transform is the position of the cast picker button relative to the visual root of
the application window. Call the Show method of the CastingDevicePicker object to launch the casting picker
dialog. Specify the location and dimensions of the cast picker button so that the system can make the dialog fly out
from the button that the user pressed.

private void castPickerButton_Click(object sender, RoutedEventArgs e)
{
    //Retrieve the location of the casting button
    GeneralTransform transform = castPickerButton.TransformToVisual(Window.Current.Content as UIElement);
    Point pt = transform.TransformPoint(new Point(0, 0));

    //Show the picker above our casting button
    castingPicker.Show(new Rect(pt.X, pt.Y, castPickerButton.ActualWidth, castPickerButton.ActualHeight),
        Windows.UI.Popups.Placement.Above);
}

In the CastingDeviceSelected event handler, call the CreateCastingConnection method of the
SelectedCastingDevice property of the event args, which represents the casting device selected by the user.
Register handlers for the ErrorOccurred and StateChanged events. Finally, call RequestStartCastingAsync to
begin casting, passing in the result of a call to the GetAsCastingSource method of the MediaPlayerElement
control's MediaPlayer object to specify that the media to be cast is the content of the MediaPlayer associated
with the MediaPlayerElement.

NOTE
The casting connection must be initiated on the UI thread. Because the CastingDeviceSelected event is not raised on the UI thread,
you must place these calls inside a call to CoreDispatcher.RunAsync, which causes them to be run on the UI thread.
private async void CastingPicker_CastingDeviceSelected(CastingDevicePicker sender, CastingDeviceSelectedEventArgs args)
{
    //Casting must occur from the UI thread. This dispatches the casting calls to the UI thread.
    await Dispatcher.RunAsync(Windows.UI.Core.CoreDispatcherPriority.Normal, async () =>
    {
        //Create a casting connection from our selected casting device
        CastingConnection connection = args.SelectedCastingDevice.CreateCastingConnection();

        //Hook up the casting events
        connection.ErrorOccurred += Connection_ErrorOccurred;
        connection.StateChanged += Connection_StateChanged;

        //Cast the content loaded in the media element to the selected casting device
        await connection.RequestStartCastingAsync(mediaPlayerElement.MediaPlayer.GetAsCastingSource());
    });
}

In the ErrorOccurred and StateChanged event handlers, you should update your UI to inform the user of the
current casting status. These events are discussed in detail in the following section on creating a custom casting
device picker.

private async void Connection_StateChanged(CastingConnection sender, object args)
{
    await Dispatcher.RunAsync(Windows.UI.Core.CoreDispatcherPriority.Normal, () =>
    {
        ShowMessageToUser("Casting Connection State Changed: " + sender.State);
    });
}

private async void Connection_ErrorOccurred(CastingConnection sender, CastingConnectionErrorOccurredEventArgs args)
{
    await Dispatcher.RunAsync(Windows.UI.Core.CoreDispatcherPriority.Normal, () =>
    {
        //Report the error message rather than the connection state
        ShowMessageToUser("Casting Error: " + args.Message);
    });
}

Media casting with a custom device picker


The following section describes how to create your own casting device picker UI by enumerating the casting
devices and initiating the connection from your code.
To enumerate the available casting devices, include the Windows.Devices.Enumeration namespace in your
project.

using Windows.Devices.Enumeration;

Add the following controls to your XAML page to implement the rudimentary UI for this example:
A button to start the device watcher that looks for available casting devices.
A ProgressRing control to provide feedback to the user that casting enumeration is ongoing.
A ListBox to list the discovered casting devices. Define an ItemTemplate for the control so that we can assign
the casting device objects directly to the control and still display the FriendlyName property.
A button to allow the user to disconnect the casting device.
<Button x:Name="startWatcherButton" Content="Watcher Button" Click="startWatcherButton_Click"/>
<ProgressRing x:Name="watcherProgressRing" IsActive="False"/>
<ListBox x:Name="castingDevicesListBox" MaxWidth="300" HorizontalAlignment="Left"
SelectionChanged="castingDevicesListBox_SelectionChanged">
<!--Listbox content is bound to the FriendlyName field of our casting devices-->
<ListBox.ItemTemplate>
<DataTemplate>
<TextBlock Text="{Binding Path=FriendlyName}"/>
</DataTemplate>
</ListBox.ItemTemplate>
</ListBox>
<Button x:Name="disconnectButton" Content="Disconnect" Click="disconnectButton_Click" Visibility="Collapsed"/>

In your code behind, declare member variables for the DeviceWatcher and the CastingConnection.

DeviceWatcher deviceWatcher;
CastingConnection castingConnection;

In the Click handler for the startWatcherButton, first update the UI by disabling the button and making the
progress ring active while device enumeration is ongoing. Clear the list box of casting devices.
Next, create a device watcher by calling DeviceInformation.CreateWatcher. This method can be used to watch
for many different types of devices. Specify that you want to watch for devices that support video casting by using
the device selector string returned by CastingDevice.GetDeviceSelector.
Finally, register event handlers for the Added, Removed, EnumerationCompleted, and Stopped events.

private void startWatcherButton_Click(object sender, RoutedEventArgs e)
{
    startWatcherButton.IsEnabled = false;
    watcherProgressRing.IsActive = true;

    castingDevicesListBox.Items.Clear();

    //Create our watcher and have it find casting devices capable of video casting
    deviceWatcher = DeviceInformation.CreateWatcher(CastingDevice.GetDeviceSelector(CastingPlaybackTypes.Video));

    //Register for watcher events
    deviceWatcher.Added += DeviceWatcher_Added;
    deviceWatcher.Removed += DeviceWatcher_Removed;
    deviceWatcher.EnumerationCompleted += DeviceWatcher_EnumerationCompleted;
    deviceWatcher.Stopped += DeviceWatcher_Stopped;
}

The Added event is raised when a new device is discovered by the watcher. In the handler for this event, create a
new CastingDevice object by calling CastingDevice.FromIdAsync and passing in the ID of the discovered
casting device, which is contained in the DeviceInformation object passed into the handler.
Add the CastingDevice to the casting device ListBox so that the user can select it. Because of the ItemTemplate
defined in the XAML, the FriendlyName property will be used as the item text in the list box. Because this event
handler is not called on the UI thread, you must update the UI from within a call to CoreDispatcher.RunAsync.
private async void DeviceWatcher_Added(DeviceWatcher sender, DeviceInformation args)
{
await Dispatcher.RunAsync(Windows.UI.Core.CoreDispatcherPriority.Normal, async () =>
{
//Add each discovered device to our listbox
CastingDevice addedDevice = await CastingDevice.FromIdAsync(args.Id);
castingDevicesListBox.Items.Add(addedDevice);
});
}

The Removed event is raised when the watcher detects that a casting device is no longer present. Compare the Id
property of the DeviceInformationUpdate object passed into the handler to the Id of each CastingDevice in the list
box's Items collection. If the Id matches, remove that object from the collection. Again, because the UI is being
updated, this call must be made from within a RunAsync call.

private async void DeviceWatcher_Removed(DeviceWatcher sender, DeviceInformationUpdate args)
{
    await Dispatcher.RunAsync(Windows.UI.Core.CoreDispatcherPriority.Normal, () =>
    {
        foreach (CastingDevice currentDevice in castingDevicesListBox.Items)
        {
            if (currentDevice.Id == args.Id)
            {
                castingDevicesListBox.Items.Remove(currentDevice);
                // Stop iterating after removing; modifying the collection
                // invalidates the enumerator on the next iteration.
                break;
            }
        }
    });
}

The EnumerationCompleted event is raised when the watcher has finished detecting devices. In the handler for
this event, update the UI to let the user know that device enumeration has completed and stop the device watcher
by calling Stop.

private async void DeviceWatcher_EnumerationCompleted(DeviceWatcher sender, object args)
{
await Dispatcher.RunAsync(Windows.UI.Core.CoreDispatcherPriority.Normal, () =>
{
//If enumeration completes, update UI and transition watcher to the stopped state
ShowMessageToUser("Watcher completed enumeration of devices");
deviceWatcher.Stop();
});
}

The Stopped event is raised when the device watcher has finished stopping. In the handler for this event, stop the
ProgressRing control and reenable the startWatcherButton so that the user can restart the device enumeration
process.

private async void DeviceWatcher_Stopped(DeviceWatcher sender, object args)
{
await Dispatcher.RunAsync(Windows.UI.Core.CoreDispatcherPriority.Normal, () =>
{
//Update UX when the watcher stops
startWatcherButton.IsEnabled = true;
watcherProgressRing.IsActive = false;
});
}

When the user selects one of the casting devices from the list box, the SelectionChanged event is raised. It is
within this handler that the casting connection will be created and casting will be started.
First, make sure the device watcher is stopped so that device enumeration doesn't interfere with media casting.
Create a casting connection by calling CreateCastingConnection on the CastingDevice object selected by the
user. Add event handlers for the StateChanged and ErrorOccurred events.
Start media casting by calling RequestStartCastingAsync, passing in the casting source returned by calling the
MediaPlayer method GetAsCastingSource. Finally, make the disconnect button visible to allow the user to stop
media casting.

private async void castingDevicesListBox_SelectionChanged(object sender, SelectionChangedEventArgs e)
{
    if (castingDevicesListBox.SelectedItem != null)
    {
        //When a device is selected, first stop the watcher so its search doesn't conflict with streaming
        if (deviceWatcher.Status != DeviceWatcherStatus.Stopped)
        {
            deviceWatcher.Stop();
        }

        //Create a new casting connection to the device that's been selected
        castingConnection = ((CastingDevice)castingDevicesListBox.SelectedItem).CreateCastingConnection();

        //Register for events
        castingConnection.ErrorOccurred += Connection_ErrorOccurred;
        castingConnection.StateChanged += Connection_StateChanged;

        //Cast the loaded video to the selected casting device.
        await castingConnection.RequestStartCastingAsync(mediaPlayerElement.MediaPlayer.GetAsCastingSource());
        disconnectButton.Visibility = Visibility.Visible;
    }
}

In the state changed handler, the action you take depends on the new state of the casting connection:
If the state is Connected or Rendering, make sure the ProgressRing control is inactive and the disconnect
button is visible.
If the state is Disconnected, unselect the current casting device in the list box, make the ProgressRing control
inactive, and hide the disconnect button.
If the state is Connecting, make the ProgressRing control active and hide the disconnect button.
If the state is Disconnecting, make the ProgressRing control active and hide the disconnect button.
private async void Connection_StateChanged(CastingConnection sender, object args)
{
await Dispatcher.RunAsync(Windows.UI.Core.CoreDispatcherPriority.Normal, () =>
{
//Update the UX based on the casting state
if (sender.State == CastingConnectionState.Connected || sender.State == CastingConnectionState.Rendering)
{
disconnectButton.Visibility = Visibility.Visible;
watcherProgressRing.IsActive = false;
}
else if (sender.State == CastingConnectionState.Disconnected)
{
disconnectButton.Visibility = Visibility.Collapsed;
castingDevicesListBox.SelectedItem = null;
watcherProgressRing.IsActive = false;
}
else if (sender.State == CastingConnectionState.Connecting)
{
disconnectButton.Visibility = Visibility.Collapsed;
ShowMessageToUser("Connecting");
watcherProgressRing.IsActive = true;
}
else
{
//Disconnecting is the remaining state
disconnectButton.Visibility = Visibility.Collapsed;
watcherProgressRing.IsActive = true;
}
});
}

In the handler for the ErrorOccurred event, update your UI to let the user know that a casting error occurred and
unselect the current CastingDevice object in the list box.

private async void Connection_ErrorOccurred(CastingConnection sender, CastingConnectionErrorOccurredEventArgs args)
{
await Dispatcher.RunAsync(Windows.UI.Core.CoreDispatcherPriority.Normal, () =>
{
//Clear the selection in the listbox on an error
ShowMessageToUser("Casting Error: " + args.Message);
castingDevicesListBox.SelectedItem = null;
});
}

Finally, implement the handler for the disconnect button. Stop media casting and disconnect from the casting
device by calling the CastingConnection object's DisconnectAsync method. Like the other casting calls, this must
occur on the UI thread; because a button's Click handler already runs there, no explicit CoreDispatcher.RunAsync
call is needed in this example.

private async void disconnectButton_Click(object sender, RoutedEventArgs e)
{
    if (castingConnection != null)
    {
        //When disconnect is clicked, the casting connection is disconnected. The video should return locally to the media element.
        await castingConnection.DisconnectAsync();
    }
}
PlayReady DRM
PlayReady DRM

[ Updated for UWP apps on Windows 10. For Windows 8.x articles, see the archive ]
This topic describes how to add PlayReady protected media content to your Universal Windows Platform (UWP)
app.
PlayReady DRM enables developers to create UWP apps capable of providing PlayReady content to the user while
enforcing the access rules defined by the content provider. This section describes changes made to Microsoft
PlayReady DRM for Windows 10 and how to modify your PlayReady UWP app to support the changes made from
the previous Windows 8.1 version to the Windows 10 version.

TOPIC                                  DESCRIPTION

Hardware DRM                           This topic provides an overview of how to add PlayReady
                                       hardware-based digital rights management (DRM) to your
                                       UWP app.

Adaptive streaming with PlayReady      This article describes how to add adaptive streaming of
                                       multimedia content with Microsoft PlayReady content
                                       protection to a Universal Windows Platform (UWP) app. This
                                       feature currently supports playback of HTTP Live Streaming
                                       (HLS) and Dynamic Adaptive Streaming over HTTP (DASH) content.

What's new in PlayReady DRM


The following list describes the new features and changes made to PlayReady DRM for Windows 10.
Added hardware digital rights management (HWDRM).
Hardware-based content protection support enables secure playback of high definition (HD) and ultra-high
definition (UHD) content on multiple device platforms. Key material (including private keys, content keys,
and any other key material used to derive or unlock said keys), and decrypted compressed and
uncompressed video samples are protected by leveraging hardware security. When Hardware DRM is being
used, neither unknown enabler (play to unknown / play to unknown with downres) has meaning as the
HWDRM pipeline always knows the output being used. For more information, see Hardware DRM.
PlayReady is no longer an appX framework component, but instead is an in-box operating system
component. The namespace was changed from Microsoft.Media.PlayReadyClient to
Windows.Media.Protection.PlayReady.
The following headers defining the PlayReady error codes are now part of the Windows Software Development
Kit (SDK): Windows.Media.Protection.PlayReadyErrors.h and Windows.Media.Protection.PlayReadyResults.h.
Provides proactive acquisition of non-persistent licenses.
Previous versions of PlayReady DRM did not support proactive acquisition of non-persistent licenses. This
capability has been added to this version. This can decrease the time to first frame. For more information,
see Proactively Acquire a Non-Persistent License Before Playback.
Provides acquisition of multiple licenses in one message.
Allows the client app to acquire multiple non-persistent licenses in one license acquisition message. This can
decrease the time to first frame by acquiring licenses for multiple pieces of content while the user is still
browsing your content library; this prevents a delay for license acquisition when the user selects the content
to play. In addition, it allows audio and video streams to be encrypted to separate keys by enabling a content
header that includes multiple key identifiers (KIDs); this enables a single license acquisition to acquire all
licenses for all streams within a content file instead of having to use custom logic and multiple license
acquisition requests to achieve the same result.
Added real time expiration support, or limited duration license (LDL).
Provides the ability to set real-time expiration on licenses and smoothly transition from an expiring license
to another (valid) license in the middle of playback. When combined with acquisition of multiple licenses in
one message, this allows an app to acquire several LDLs asynchronously while the user is still browsing the
content library and only acquire a longer duration license once the user has selected content to playback.
Playback will then start more quickly (because a license is already available) and, since the app will have
acquired a longer duration license by the time the LDL expires, smoothly continue playback to the end of the
content without interruption.
Added non-persistent license chains.
Added support for time-based restrictions (including expiration, expire after first play, and real time expiration)
on non-persistent licenses.
Added HDCP Type 1 (version 2.2 on Windows 10) policy support.
See Things to Consider for more information.
Miracast is now implicit as an output.
Added secure stop.
Secure stop provides the means for a PlayReady device to confidently assert to a media streaming service
that media playback has stopped for any given piece of content. This capability ensures your media
streaming services provide accurate enforcement and reporting of usage limitations on different devices for
a given account.
Added audio and video license separation.
Separate tracks prevent video from being decoded as audio; enabling more robust content protection.
Emerging standards are requiring separate keys for audio and visual tracks.
Added MaxResDecode.
This feature was added to limit playback of content to a maximum resolution even when in possession of a
more capable key (but not a license). It supports cases where multiple stream sizes are encoded with a
single key.
The following new interfaces, classes, and enumerations were added to PlayReady DRM:
IPlayReadyLicenseAcquisitionServiceRequest interface
IPlayReadyLicenseSession interface
IPlayReadySecureStopServiceRequest interface
PlayReadyLicenseSession class
PlayReadySecureStopIterable class
PlayReadySecureStopIterator class
PlayReadyHardwareDRMFeatures enumerator
A new sample has been created to demonstrate how to use the new features of PlayReady DRM. The sample can
be downloaded from http://go.microsoft.com/fwlink/p/?linkid=331670&clcid=0x409.
Things to consider
PlayReady DRM now supports HDCP Type 1 (supported in HDCP version 2.1 or later). PlayReady carries an
HDCP Type Restriction policy in the license for the device to enforce. On Windows 10, this policy will enforce
that HDCP 2.2 or later is engaged. This feature can be enabled in your PlayReady Server v3.0 SDK license (the
server controls this policy in the license using the HDCP Type Restriction GUID). For more information, see the
PlayReady Compliance and Robustness Rules.
Windows Media Video (also known as VC-1) is not supported in hardware DRM (see Override Hardware DRM).
PlayReady DRM now supports the High Efficiency Video Coding (HEVC /H.265) video compression standard. To
support HEVC, your app must use Common Encryption Scheme (CENC) version 2 content which includes
leaving the content's slice headers in the clear. For more information, refer to ISO/IEC 23001-7 Information
technology -- MPEG systems technologies -- Part 7: Common encryption in ISO base media file format files
(ISO/IEC 23001-7:2015 or later is required). Microsoft also recommends using CENC version 2
for all HWDRM content. In addition, some hardware DRM will support HEVC and some will not (see Override
Hardware DRM).
To take advantage of certain new PlayReady 3.0 features (including, but not limited to, SL3000 for hardware-
based clients, acquiring multiple non-persistent licenses in one license acquisition message, and time-based
restrictions on non-persistent licenses), the PlayReady server is required to be the Microsoft PlayReady Server
Software Development Kit v3.0.2769 Release version or later.
Depending on the Output Protection Policy specified in the content license, media playback may fail for end
users if their connected output does not support those requirements. The following table lists the set of
common errors that occur as a result. For more information, see the PlayReady Compliance and Robustness
Rules.

ERROR                                               VALUE        DESCRIPTION

ERROR_GRAPHICS_OPM_OUTPUT_DOES_NOT_SUPPORT_HDCP     0xC0262513   The license's Output Protection Policy requires the
                                                                 monitor to engage HDCP, but HDCP was unable to be
                                                                 engaged.

MF_E_POLICY_UNSUPPORTED                             0xC00D7159   The license's Output Protection Policy requires the
                                                                 monitor to engage HDCP Type 1, but HDCP Type 1 was
                                                                 unable to be engaged.

DRM_E_TEE_OUTPUT_PROTECTION_REQUIREMENTS_NOT_MET    0x8004CD22   This error code only occurs when running under
                                                                 hardware DRM. The license's Output Protection Policy
                                                                 requires the monitor to engage HDCP or to reduce the
                                                                 content's effective resolution, but HDCP was unable
                                                                 to be engaged and the content's effective resolution
                                                                 could not be reduced because hardware DRM does not
                                                                 support reducing the content's resolution. Under
                                                                 software DRM, the content plays. See Considerations
                                                                 for Using Hardware DRM.

ERROR_GRAPHICS_OPM_NOT_SUPPORTED                    0xC0262500   The graphics driver does not support Output
                                                                 Protection. For example, the monitor is connected
                                                                 through VGA or an appropriate graphics driver for the
                                                                 digital output is not installed. In the latter case,
                                                                 the typical driver that is installed is the Microsoft
                                                                 Basic Display Adapter and installing an appropriate
                                                                 graphics driver will resolve the issue.

Output protection
The following section describes the behavior when using PlayReady DRM for Windows 10 with output protection
policies in a PlayReady license.
PlayReady DRM supports output protection levels contained in the Microsoft PlayReady Extensible Media
Rights Specification. This document can be found in the documentation pack that comes with PlayReady licensed
products.

NOTE
The allowed values for output protection levels that can be set by a licensing server are governed by the PlayReady
Compliance Rules.

PlayReady DRM allows playback of content with output protection policies only on output connectors as specified
in the PlayReady Compliance Rules. For more information about output connector terms specified in the
PlayReady Compliance Rules, see Defined Terms for PlayReady Compliance and Robustness Rules.
This section focuses on output protection scenarios with PlayReady DRM for Windows 10 and PlayReady
Hardware DRM for Windows 10, which is also available on some Windows clients. With PlayReady HWDRM, all
output protections are enforced from within the Windows TEE implementation (see Hardware DRM). As a result,
some behaviors differ from when using PlayReady SWDRM (software DRM):
Support for Output Protection Level (OPL) for Uncompressed Digital Video 270: PlayReady HWDRM for
Windows 10 doesn't support down-resolution and will enforce that HDCP (High-bandwidth Digital Content
Protection) is engaged. It is recommended that high definition content for HWDRM have an OPL greater than
270 (although it is not required). Additionally, you should set HDCP type restriction in the license (HDCP version
2.2 or later).
Unlike SWDRM, with HWDRM, output protections are enforced on all monitors based on the least capable
monitor. For example, if the user has two monitors connected where one supports HDCP and the other doesn't,
playback will fail if the license requires HDCP even if the content is only being rendered on the monitor that
supports HDCP. In SWDRM, content will play back as long as it's only being rendered on the monitor that
supports HDCP.
HWDRM is not guaranteed to be used by the client and secure unless the following conditions are met by the
content keys and licenses:
The license used for the video content key must have a minimum security level of 3000.
Audio must be encrypted to a different content key than video, and the license used for audio must have
a minimum security level of 2000. Alternatively, audio could be left in the clear.
All SWDRM scenarios require that the minimum security level of the PlayReady license used for the audio
and/or video content key is lower than or equal to 2000.
Output protection levels
The following table outlines the mappings between various OPLs in the PlayReady license and how PlayReady
DRM for Windows 10 enforces them.
Video

OPL | COMPRESSED DIGITAL VIDEO (ANY) | UNCOMPRESSED DIGITAL VIDEO (HDMI, DVI, DISPLAYPORT, MHL) | ANALOG TV (COMPONENT, COMPOSITE)
100 | N/A* | Passes content | Passes content
150 | N/A* | Passes content | Passes content when CGMS-A CopyNever is engaged or if CGMS-A can't be engaged
200 | N/A* | Passes content | Passes content when CGMS-A CopyNever is engaged
250 | N/A* | Attempts to engage HDCP, but passes content regardless of result | N/A*
270 | N/A* | SWDRM: Attempts to engage HDCP. If HDCP fails to engage, the PC will constrain the effective resolution to 520,000 epx per frame and pass the content. HWDRM: Passes content with HDCP. If HDCP fails to engage, playback to HDMI/DVI ports is blocked | N/A*
300 | N/A* | When HDCP type restriction is NOT defined: Passes content with HDCP. If HDCP fails to engage, playback to HDMI/DVI ports is blocked. When HDCP type restriction IS defined: Passes content with HDCP 2.2 and content stream type set to 1. If HDCP fails to engage or content stream type can't be set to 1, playback to HDMI/DVI ports is blocked | N/A*
400, 500 | Windows 10 never passes compressed digital video content to outputs, regardless of the OPL value. For more information about compressed digital video content, see the Compliance Rules for PlayReady Products | N/A* | N/A*

* Not all values for output protection levels can be set by a licensing server. For more information, see the
PlayReady Compliance Rules.
Audio

OPL | COMPRESSED DIGITAL AUDIO (HDMI, DISPLAYPORT, MHL) | UNCOMPRESSED DIGITAL AUDIO (HDMI, DISPLAYPORT, MHL) | ANALOG OR USB AUDIO (ANY)
100 | Passes content | Passes content | Passes content
150 | Does NOT pass content | Passes content | Passes content
200 | Does NOT pass content | Passes content | Passes content
250 | Passes content when HDCP is engaged on HDMI, DisplayPort, or MHL, or when SCMS is engaged and set to CopyNever | Passes content | Passes content
300 | Passes content when HDCP is engaged on HDMI, DisplayPort, or MHL | Passes content | Passes content

Miracast
PlayReady DRM allows you to play content over Miracast output as soon as HDCP 2.0 or later is engaged. On
Windows 10, however, Miracast is considered a digital output. For more information about Miracast scenarios, see
the PlayReady Compliance Rules. The following table outlines the mappings between various OPLs in the
PlayReady license and how PlayReady DRM enforces them on Miracast outputs.

OPL | COMPRESSED DIGITAL AUDIO | UNCOMPRESSED DIGITAL AUDIO | COMPRESSED DIGITAL VIDEO | UNCOMPRESSED DIGITAL VIDEO
100 | Passes content when HDCP 2.0 or later is engaged; if it fails to engage, does NOT pass content | Passes content when HDCP 2.0 or later is engaged; if it fails to engage, does NOT pass content | N/A* | Passes content when HDCP 2.0 or later is engaged; if it fails to engage, does NOT pass content
150 | Does NOT pass content | Passes content when HDCP 2.0 or later is engaged; if it fails to engage, does NOT pass content | N/A* | Passes content when HDCP 2.0 or later is engaged; if it fails to engage, does NOT pass content
200 | Does NOT pass content | Passes content when HDCP 2.0 or later is engaged; if it fails to engage, does NOT pass content | N/A* | Passes content when HDCP 2.0 or later is engaged; if it fails to engage, does NOT pass content
250 | Passes content when HDCP 2.0 or later is engaged; if it fails to engage, does NOT pass content | Passes content when HDCP 2.0 or later is engaged; if it fails to engage, does NOT pass content | N/A* | Passes content when HDCP 2.0 or later is engaged; if it fails to engage, does NOT pass content
270 | N/A* | Passes content when HDCP 2.0 or later is engaged; if it fails to engage, does NOT pass content | N/A* | Passes content when HDCP 2.0 or later is engaged; if it fails to engage, does NOT pass content
300 | Passes content when HDCP 2.0 or later is engaged; if it fails to engage, does NOT pass content | Does NOT pass content | N/A* | When HDCP type restriction is NOT defined: Passes content when HDCP 2.0 or later is engaged; if it fails to engage, does NOT pass content. When HDCP type restriction IS defined: Passes content with HDCP 2.2 and content stream type set to 1; if HDCP fails to engage or content stream type can't be set to 1, does NOT pass content
400, 500 | N/A* | N/A* | Windows 10 never passes compressed digital video content to outputs, regardless of the OPL value. For more information about compressed digital video content, see the Compliance Rules for PlayReady Products | N/A*

* Not all values for output protection levels can be set by a licensing server. For more information, see the
PlayReady Compliance Rules.
Additional explicit output restrictions
The following table describes the PlayReady DRM for Windows 10 implementation of explicit digital video output
protection restrictions.

SCENARIO | GUID | IF... | THEN...
Maximum effective resolution decode size | 9645E831-E01D-4FFF-8342-0A720E3E028F | Connected output is: digital video output, Miracast, HDMI, DVI, etc. | Passes content when constrained to: (a) the width of the frame is less than or equal to the maximum frame width in pixels and the height of the frame is less than or equal to the maximum frame height in pixels, or (b) the height of the frame is less than or equal to the maximum frame width in pixels and the width of the frame is less than or equal to the maximum frame height in pixels
HDCP type restriction | ABB2C6F1-E663-4625-A945-972D17B231E7 | Connected output is: digital video output, Miracast, HDMI, DVI, etc. | Passes content with HDCP 2.2 and the content stream type set to 1. If HDCP 2.2 fails to engage or the content stream type can't be set to 1, it does NOT pass content. An uncompressed digital video output protection level of a value greater than or equal to 271 must also be specified

The following table describes the PlayReady DRM for Windows 10 implementation of explicit analog video output
protection restrictions.

SCENARIO | GUID | IF... | THEN...
Analog computer monitor | D783A191-E083-4BAF-B2DA-E69F910B3772 | Connected output is: VGA, DVI analog, etc. | SWDRM: PC will constrain effective resolution to 520,000 epx per frame and pass content. HWDRM: Does NOT pass content
Analog component | 811C5110-46C8-4C6E-8163-C0482A15D47E | Connected output is: component | SWDRM: PC will constrain effective resolution to 520,000 epx per frame and pass content. HWDRM: Does NOT pass content
Analog TV outputs | 2098DE8D-7DDD-4BAB-96C6-32EBB6FABEA3 | Analog TV OPL is less than 151 | CGMS-A must be engaged
Analog TV outputs | 225CD36F-F132-49EF-BA8C-C91EA28E4369 | Analog TV OPL is less than 101 and the license doesn't contain 2098DE8D-7DDD-4BAB-96C6-32EBB6FABEA3 | CGMS-A engagement must be attempted, but content may play regardless of result
Automatic gain control and color stripe | C3FD11C6-F8B7-4D20-B008-1DB17D61F2DA | Passing content with resolution less than or equal to 520,000 px to analog TV output | Sets AGC only for component video and PAL mode when resolution is less than 520,000 px, and sets AGC and color stripe information for NTSC when resolution is less than 520,000 px, according to table 3.5.7.3 in the Compliance Rules
Digital-only output | 760AE755-682A-41E0-B1B3-DCDF836A7306 | Connected output is analog | Does NOT pass content

NOTE
When using an adapter dongle such as "Mini DisplayPort to VGA" for playback, Windows 10 sees the output as digital video
output, and can't enforce analog video policies.

The following table describes the PlayReady DRM for Windows 10 implementation that enables playing in other
circumstances.

SCENARIO | GUID | IF... | THEN...
Unknown output | 786627D8-C2A6-44BE-8F88-08AE255B01A7 | Output can't reasonably be determined, or OPM can't be established with the graphics driver | SWDRM: Passes content. HWDRM: Does NOT pass content
Unknown output with constriction | B621D91F-EDCC-4035-8D4B-DC71760D43E9 | Output can't reasonably be determined, or OPM can't be established with the graphics driver | SWDRM: PC will constrain effective resolution to 520,000 epx per frame and pass content. HWDRM: Does NOT pass content

Prerequisites
Before you begin creating your PlayReady-protected UWP app, the following software needs to be installed on
your system:
Windows 10.
If you are compiling any of the samples for PlayReady DRM for UWP apps, you must use Microsoft Visual
Studio 2015 or later to compile the samples. You can still use Microsoft Visual Studio 2013 to compile any of
the samples from PlayReady DRM for Windows 8.1 Store Apps.

PlayReady Windows Store app migration guide


This section includes information on how to migrate your existing PlayReady Windows 8.x Store apps to Windows
10.
The namespace for PlayReady UWP apps on Windows 10 was changed from Microsoft.Media.PlayReadyClient
to Windows.Media.Protection.PlayReady. This means that you will need to search your code for the old
namespace and replace it with the new one. You will still be referencing a winmd file: it is part of
windows.media.winmd on the Windows 10 operating system, it is in windows.winmd as part of the Windows 10
SDK, and for UWP it is referenced in windows.foundation.universalappcontract.winmd.
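For example, the migration in a C# source file is usually just the using directive:

// Windows 8.x Store app:
// using Microsoft.Media.PlayReadyClient;

// Windows 10 UWP app:
using Windows.Media.Protection.PlayReady;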
To play back PlayReady-protected high definition (HD) content (1080p) and ultra-high definition (UHD) content,
you will need to implement PlayReady hardware DRM. For information on how to implement PlayReady hardware
DRM, see Hardware DRM.
Some content is not supported in hardware DRM. For information on disabling hardware DRM and enabling
software DRM, see Override Hardware DRM.
Regarding the media protection manager, make sure your code has the following settings if it doesn't already:

var mediaProtectionManager = new Windows.Media.Protection.MediaProtectionManager();

mediaProtectionManager.properties["Windows.Media.Protection.MediaProtectionSystemId"] =
'{F4637010-03C3-42CD-B932-B48ADF3A6A54}'
var cpsystems = new Windows.Foundation.Collections.PropertySet();
cpsystems["{F4637010-03C3-42CD-B932-B48ADF3A6A54}"] =
"Windows.Media.Protection.PlayReady.PlayReadyWinRTTrustedInput";
mediaProtectionManager.properties["Windows.Media.Protection.MediaProtectionSystemIdMapping"] = cpsystems;

mediaProtectionManager.properties["Windows.Media.Protection.MediaProtectionContainerGuid"] =
"{9A04F079-9840-4286-AB92-E65BE0885F95}";

Proactively acquire a non-persistent license before playback


This section describes how to acquire non-persistent licenses proactively before playback begins.
In previous versions of PlayReady DRM, non-persistent licenses could only be acquired reactively during playback.
In this version, you can acquire non-persistent licenses proactively before playback begins.
1. Proactively create a playback session where the non-persistent license can be stored. For example:

var cpsystems = new Windows.Foundation.Collections.PropertySet();


cpsystems["{F4637010-03C3-42CD-B932-B48ADF3A6A54}"] =
"Windows.Media.Protection.PlayReady.PlayReadyWinRTTrustedInput"; // PlayReady

var pmpSystemInfo = new Windows.Foundation.Collections.PropertySet();


pmpSystemInfo["Windows.Media.Protection.MediaProtectionSystemId"] = "{F4637010-03C3-42CD-B932-B48ADF3A6A54}";
pmpSystemInfo["Windows.Media.Protection.MediaProtectionSystemIdMapping"] = cpsystems;
var pmpServer = new Windows.Media.Protection.MediaProtectionPMPServer( pmpSystemInfo );

2. Tie that playback session to the license acquisition class. For example:

var licenseSessionProperties = new Windows.Foundation.Collections.PropertySet();


licenseSessionProperties["Windows.Media.Protection.MediaProtectionPMPServer"] = pmpServer;
var licenseSession = new Windows.Media.Protection.PlayReady.PlayReadyLicenseSession( licenseSessionProperties );

3. Create a license service request. For example:

var laSR = licenseSession.CreateLAServiceRequest();


4. Perform the license acquisition using the service request created from step 3. The license will be stored in
the playback session.
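
A minimal hedged sketch of this step, shown in C# (the surrounding snippets use the JavaScript projection, where the call is analogous); laSR is the service request from step 3:

// Performing the service request acquires the license and stores it in the
// playback session created in step 1 (call from an async method).
await laSR.BeginServiceRequest();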
5. Tie the playback session to the media source for playback. For example:

licenseSession.configureMediaProtectionManager( mediaProtectionManager );
videoPlayer.msSetMediaProtectionManager( mediaProtectionManager );

Add secure stop


This section describes how to add secure stop to your UWP app.
Secure stop provides the means for a PlayReady device to confidently assert to a media streaming service that
media playback has stopped for any given piece of content. This capability ensures your media streaming services
provide accurate enforcement and reporting of usage limitations on different devices for a given account.
There are two primary scenarios for sending a secure stop challenge:
When the media presentation stops because end of content was reached or when the user stopped the media
presentation somewhere in the middle.
When the previous session ends unexpectedly (for example, due to a system or app crash). The app will need to
query, either at startup or shutdown, for any outstanding secure stop sessions and send challenge(s) separate
from any other media playback.
For a sample implementation of secure stop, see the securestop.cs file in the PlayReady sample located at
http://go.microsoft.com/fwlink/p/?linkid=331670&clcid=0x409.
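
As a rough orientation only (the sample is the authoritative reference), a hedged C# sketch of querying outstanding secure stop sessions at startup might look like this; publisherCertBytes is an assumed byte array containing your publisher certificate:

// Hedged sketch: enumerate outstanding secure stop sessions recorded for this
// publisher certificate and send a challenge for each one (from an async method).
var secureStopRequests = new Windows.Media.Protection.PlayReady.PlayReadySecureStopIterable(publisherCertBytes);
foreach (Windows.Media.Protection.PlayReady.IPlayReadyServiceRequest request in secureStopRequests)
{
    await request.BeginServiceRequest(); // sends the secure stop challenge
}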

Use PlayReady DRM on Xbox One


To use PlayReady DRM in a UWP app on Xbox One, you will first need to register your Dev Center account that
you're using to publish the app for authorization to use PlayReady. You can do this in one of two ways:
Have your contact at Microsoft request permission.
Apply for authorization by sending your Dev Center account and company name to pronxbox@microsoft.com.
Once you receive authorization, you'll need to add an additional <DeviceCapability> to the app manifest. You'll have
to add this manually because there is currently no setting available in the App Manifest Designer. Follow these
steps to configure it:
1. With the project open in Visual Studio, open the Solution Explorer and right-click Package.appxmanifest.
2. Select Open With..., choose XML (Text) Editor, and click OK.
3. Between the <Capabilities> tags, add the following <DeviceCapability> :

<DeviceCapability Name="6a7e5907-885c-4bcb-b40a-073c067bd3d5" />

4. Save the file.


Finally, there is one last consideration when using PlayReady on Xbox One: on development kits, there is an SL150
limit (that is, they can't play SL2000 or SL3000 content). Retail devices are able to play content with higher security
levels, but to test your app on a dev kit, you'll need to use SL150 content. You can test this content in one of the
following ways:
Use curated test content that requires SL150 licenses.
Implement logic so that only certain authenticated test accounts are able to acquire SL150 licenses for certain
content.
Use the approach that makes the most sense for your company and your product.

See also
Media playback
Hardware DRM

[ Updated for UWP apps on Windows 10. For Windows 8.x articles, see the archive ]
This topic provides an overview of how to add PlayReady hardware-based digital rights management (DRM) to
your Universal Windows Platform (UWP) app.

NOTE
Hardware-based PlayReady DRM is supported on a multitude of devices, including both Windows and non-Windows devices
such as TV sets, phones, and tablets. For a Windows device to support PlayReady Hardware DRM, it must be running
Windows 10 and have a supported hardware configuration.

Increasingly, content providers are moving towards hardware-based protections for granting permission to play
back full high value content in apps. Robust support for a hardware implementation of the cryptographic core has
been added to PlayReady to meet this need. This support enables secure playback of high definition (1080p) and
ultra-high definition (UHD) content on multiple device platforms. Key material (including private keys, content
keys, and any other key material used to derive or unlock said keys), and decrypted compressed and
uncompressed video samples are protected by leveraging hardware security.

Windows TEE implementation


This topic provides a brief overview of how Windows 10 implements the trusted execution environment (TEE).
The details of the Windows TEE implementation are out of scope for this document. However, a brief discussion of
the difference between the standard porting kit TEE port and the Windows port will be beneficial. Windows
implements the OEM proxy layer and transfers the serialized PRITEE function calls to a user-mode driver in the
Windows Media Foundation subsystem. These calls will eventually get routed to either the Windows TrEE (Trusted
Execution Environment) driver or the OEM's graphics driver. The details of either of these approaches are out of
scope for this document. The following diagram shows the general component interaction for the Windows port. If
you want to develop a Windows PlayReady TEE implementation, you can contact WMLA@Microsoft.com.

Considerations for using hardware DRM


This topic provides a brief list of items that should be considered when developing apps designed to use hardware
DRM. As explained in PlayReady DRM, with PlayReady HWDRM for Windows 10, all output protections are
enforced from within the Windows TEE implementation, which has some consequences on output protection
behaviors:
Support for output protection level (OPL) for uncompressed digital video 270: PlayReady HWDRM for
Windows 10 doesn't support down-resolution and will enforce that HDCP is engaged. We recommend that
high definition content for HWDRM have an OPL greater than 270 (although it is not required). Additionally, we
recommend that you set HDCP type restriction in the license (HDCP version 2.2 on Windows 10).
Unlike software DRM (SWDRM), output protections are enforced on all monitors based on the least
capable monitor. For example, if the user has two monitors connected where one of the monitors supports
HDCP and the other does not, playback will fail if the license requires HDCP even if the content is only being
rendered on the monitor that supports HDCP. In software DRM, content would play back as long as it is only
being rendered on the monitor that supports HDCP.
HWDRM is not guaranteed to be used by the client and secure unless the following conditions are
met by the content keys and licenses:
The license used for the video content key must have a Minimum Security level property of 3000.
Audio must be encrypted to a different content key than video, and the license used for the audio must
have a Minimum Security level property of 2000. Alternatively, audio could be left in the clear.
Additionally, you should take the following items into consideration when using HWDRM:
Protected Media Process (PMP) is not supported.
Windows Media Video (also known as VC-1) is not supported (see Override hardware DRM).
Multiple graphics processing units (GPUs) are not supported for persistent licenses.
To handle persistent licenses on machines with multiple GPUs, consider the following scenario:
1. A customer buys a new machine with an integrated graphics card.
2. The customer uses an app that acquires persistent licenses while using hardware DRM.
3. The persistent license is now bound to that graphics card's hardware keys.
4. The customer then installs a new graphics card.
5. All licenses in the hashed data store (HDS) are bound to the integrated video card, but the customer now wants
to play back protected content using the newly-installed graphics card.
To prevent playback from failing because the licenses can't be decrypted by the hardware, PlayReady uses a
separate HDS for each graphics card that it encounters. This will cause PlayReady to attempt license acquisition for
a piece of content where PlayReady would normally already have a license (that is, in the software DRM case, or
any case without a hardware change, PlayReady wouldn't need to reacquire a license). Therefore, if the app
acquires a persistent license while using hardware DRM, your app needs to be able to handle the case where that
license is effectively lost if the end user installs (or uninstalls) a graphics card. Because this is not a common
scenario, you may decide to handle the support calls when content no longer plays after a hardware change,
rather than figure out how to deal with a hardware change in the client/server code.

Override hardware DRM


This section describes how to override hardware DRM (HWDRM) if the content to be played back does not support
hardware DRM.
By default, hardware DRM is used if the system supports it. However, some content is not supported in hardware
DRM. One example of this is Cocktail content. Another example is any content that uses a video codec other than
H.264 and HEVC. Another example is HEVC content, as some hardware DRM will support HEVC and some will not.
Therefore, if you want to play a piece of content and hardware DRM doesn't support it on the system in question,
you may want to opt out of hardware DRM.
The following example shows how to opt out of hardware DRM. You only need to do this before you switch. Also,
make sure you don't have any PlayReady object in memory; otherwise, behavior is undefined.

var applicationData = Windows.Storage.ApplicationData.current;
var localSettings = applicationData.localSettings.createContainer("PlayReady",
    Windows.Storage.ApplicationDataCreateDisposition.always);
localSettings.values["SoftwareOverride"] = 1;

To switch back to hardware DRM, set the SoftwareOverride value to 0.


For every media playback, you need to set MediaProtectionManager to:

mediaProtectionManager.properties["Windows.Media.Protection.UseSoftwareProtectionLayer"] = true;
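
For a C# UWP app, a hedged equivalent of the two snippets above (mediaProtectionManager is assumed to be your MediaProtectionManager instance):

// Opt out of hardware DRM via the app's local settings container...
var localSettings = Windows.Storage.ApplicationData.Current.LocalSettings
    .CreateContainer("PlayReady", Windows.Storage.ApplicationDataCreateDisposition.Always);
localSettings.Values["SoftwareOverride"] = 1;

// ...and request the software protection layer for each media playback.
mediaProtectionManager.Properties["Windows.Media.Protection.UseSoftwareProtectionLayer"] = true;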

The best way to tell whether you are in hardware DRM or software DRM is to look at
C:\Users\<username>\AppData\Local\Packages\<application name>\LocalState\PlayReady\*
If there is an mspr.hds file, you are in software DRM.
If you have another *.hds file, you are in hardware DRM.
You can delete the entire PlayReady folder and retry your test as well.
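
If you prefer to check from code rather than browsing the file system, a hedged sketch that applies the same heuristic (LocalFolder maps to the package's LocalState folder; assumes protected playback has already occurred so the folder exists):

var playReadyFolder = await Windows.Storage.ApplicationData.Current.LocalFolder
    .GetFolderAsync("PlayReady");
foreach (var file in await playReadyFolder.GetFilesAsync())
{
    // mspr.hds indicates the software DRM store; any other *.hds file
    // indicates a hardware DRM store.
    bool softwareDrm = file.Name.Equals("mspr.hds", StringComparison.OrdinalIgnoreCase);
    System.Diagnostics.Debug.WriteLine(file.Name + ": " + (softwareDrm ? "software" : "hardware") + " DRM store");
}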

Detect the type of hardware DRM


This section describes how to detect what type of hardware DRM is supported on the system.
You can use the PlayReadyStatics.CheckSupportedHardware method to determine whether the system
supports a specific hardware DRM feature. For example:

bool isSupported = PlayReadyStatics.CheckSupportedHardware(PlayReadyHardwareDRMFeatures.HardwareDRM);

The PlayReadyHardwareDRMFeatures enumeration contains the valid list of hardware DRM feature values that
can be queried. To determine if hardware DRM is supported, use the HardwareDRM member in the query. To
determine if the hardware supports the High Efficiency Video Coding (HEVC)/H.265 codec, use the HEVC member
in the query.
You can also use the PlayReadyStatics.PlayReadyCertificateSecurityLevel property to get the security level of
the client certificate to determine if hardware DRM is supported. Unless the returned certificate security level is
greater than or equal to 3000, either the client is not individualized or provisioned (in which case this property
returns 0) or hardware DRM is not in use (in which case this property returns a value that is less than 3000).

See also
PlayReady DRM
Adaptive streaming with PlayReady

[ Updated for UWP apps on Windows 10. For Windows 8.x articles, see the archive ]
This article describes how to add adaptive streaming of multimedia content with Microsoft PlayReady content
protection to a Universal Windows Platform (UWP) app.
This feature currently supports playback of Dynamic Adaptive Streaming over HTTP (DASH) content.
HLS (Apple's HTTP Live Streaming) is not supported with PlayReady.
Smooth streaming is also currently not supported natively; however, PlayReady is extensible and by using
additional code or libraries, PlayReady-protected Smooth streaming can be supported, leveraging software or even
hardware DRM (digital rights management).
This article only deals with the aspects of adaptive streaming specific to PlayReady. For information about
implementing adaptive streaming in general, see Adaptive streaming.
This article uses code from the Adaptive streaming sample in Microsoft's Windows-universal-samples repository
on GitHub. Scenario 4 deals with using adaptive streaming with PlayReady. You can download the repo in a ZIP file
by navigating to the root level of the repository and selecting the Download ZIP button.
You will need the following using statements:

using LicenseRequest;
using System;
using System.Net.Http;
using System.Net.Http.Headers;
using System.Runtime.InteropServices;
using System.Threading.Tasks;
using Windows.Foundation.Collections;
using Windows.Media.Protection;
using Windows.Media.Protection.PlayReady;
using Windows.Media.Streaming.Adaptive;
using Windows.UI.Xaml.Controls;

The LicenseRequest namespace is from CommonLicenseRequest.cs, a PlayReady file provided by Microsoft to


licensees.
You will need to declare a few global variables:

private AdaptiveMediaSource ams = null;


private MediaProtectionManager protectionManager = null;
private string playReadyLicenseUrl = "";
private string playReadyChallengeCustomData = "";

You will also want to declare the following constant:

private const uint MSPR_E_CONTENT_ENABLING_ACTION_REQUIRED = 0x8004B895;

Setting up the MediaProtectionManager


To add PlayReady content protection to your UWP app, you will need to set up a MediaProtectionManager object.
You do this when initializing your AdaptiveMediaSource object.
The following code sets up a MediaProtectionManager:

private void SetUpProtectionManager(MediaElement mediaElement)


{
protectionManager = new MediaProtectionManager();

protectionManager.ComponentLoadFailed +=
new ComponentLoadFailedEventHandler(ProtectionManager_ComponentLoadFailed);

protectionManager.ServiceRequested +=
new ServiceRequestedEventHandler(ProtectionManager_ServiceRequested);

PropertySet cpSystems = new PropertySet();

cpSystems.Add(
"{F4637010-03C3-42CD-B932-B48ADF3A6A54}",
"Windows.Media.Protection.PlayReady.PlayReadyWinRTTrustedInput");

protectionManager.Properties.Add("Windows.Media.Protection.MediaProtectionSystemIdMapping", cpSystems);

protectionManager.Properties.Add(
"Windows.Media.Protection.MediaProtectionSystemId",
"{F4637010-03C3-42CD-B932-B48ADF3A6A54}");

protectionManager.Properties.Add(
"Windows.Media.Protection.MediaProtectionContainerGuid",
"{9A04F079-9840-4286-AB92-E65BE0885F95}");

mediaElement.ProtectionManager = protectionManager;
}

This code can simply be copied to your app, since it is mandatory for setting up content protection.
The ComponentLoadFailed event is fired when the load of binary data fails. We need to add an event handler to
handle this, signaling that the load did not complete:

private void ProtectionManager_ComponentLoadFailed(


MediaProtectionManager sender,
ComponentLoadFailedEventArgs e)
{
e.Completion.Complete(false);
}

Similarly, we need to add an event handler for the ServiceRequested event, which fires when a service is requested.
This code checks what kind of request it is, and responds appropriately:
private async void ProtectionManager_ServiceRequested(
MediaProtectionManager sender,
ServiceRequestedEventArgs e)
{
if (e.Request is PlayReadyIndividualizationServiceRequest)
{
PlayReadyIndividualizationServiceRequest IndivRequest =
e.Request as PlayReadyIndividualizationServiceRequest;

bool bResultIndiv = await ReactiveIndivRequest(IndivRequest, e.Completion);


}
else if (e.Request is PlayReadyLicenseAcquisitionServiceRequest)
{
PlayReadyLicenseAcquisitionServiceRequest licenseRequest =
e.Request as PlayReadyLicenseAcquisitionServiceRequest;

LicenseAcquisitionRequest(
licenseRequest,
e.Completion,
playReadyLicenseUrl,
playReadyChallengeCustomData);
}
}

Individualization service requests


The following code reactively makes a PlayReady individualization service request. We pass in the request as a
parameter to the function. We surround the call in a try/catch block, and if there are no exceptions, we say the
request completed successfully:
async Task<bool> ReactiveIndivRequest(
PlayReadyIndividualizationServiceRequest IndivRequest,
MediaProtectionServiceCompletion CompletionNotifier)
{
bool bResult = false;
Exception exception = null;

try
{
await IndivRequest.BeginServiceRequest();
}
catch (Exception ex)
{
exception = ex;
}
finally
{
if (exception == null)
{
bResult = true;
}
else
{
COMException comException = exception as COMException;
if (comException != null && comException.HResult == MSPR_E_CONTENT_ENABLING_ACTION_REQUIRED)
{
IndivRequest.NextServiceRequest();
}
}
}

if (CompletionNotifier != null) CompletionNotifier.Complete(bResult);


return bResult;
}

Alternatively, we may want to proactively make an individualization service request, in which case we call the
following function in place of the code calling ReactiveIndivRequest in ProtectionManager_ServiceRequested :

async void ProActiveIndivRequest()


{
PlayReadyIndividualizationServiceRequest indivRequest = new PlayReadyIndividualizationServiceRequest();
bool bResultIndiv = await ReactiveIndivRequest(indivRequest, null);
}

License acquisition service requests


If instead the request was a PlayReadyLicenseAcquisitionServiceRequest, we call the following function to request
and acquire the PlayReady license. We tell the MediaProtectionServiceCompletion object that we passed in
whether the request was successful or not, and we complete the request:

async void LicenseAcquisitionRequest(


PlayReadyLicenseAcquisitionServiceRequest licenseRequest,
MediaProtectionServiceCompletion CompletionNotifier,
string Url,
string ChallengeCustomData)
{
bool bResult = false;
string ExceptionMessage = string.Empty;

try
{
if (!string.IsNullOrEmpty(Url))
{
if (!string.IsNullOrEmpty(ChallengeCustomData))
{
System.Text.UTF8Encoding encoding = new System.Text.UTF8Encoding();
byte[] b = encoding.GetBytes(ChallengeCustomData);
licenseRequest.ChallengeCustomData = Convert.ToBase64String(b, 0, b.Length);
}

PlayReadySoapMessage soapMessage = licenseRequest.GenerateManualEnablingChallenge();

byte[] messageBytes = soapMessage.GetMessageBody();


HttpContent httpContent = new ByteArrayContent(messageBytes);

IPropertySet propertySetHeaders = soapMessage.MessageHeaders;

foreach (string strHeaderName in propertySetHeaders.Keys)


{
string strHeaderValue = propertySetHeaders[strHeaderName].ToString();

if (strHeaderName.Equals("Content-Type", StringComparison.OrdinalIgnoreCase))
{
httpContent.Headers.ContentType = MediaTypeHeaderValue.Parse(strHeaderValue);
}
else
{
httpContent.Headers.Add(strHeaderName.ToString(), strHeaderValue);
}
}

CommonLicenseRequest licenseAcquisition = new CommonLicenseRequest();

HttpContent responseHttpContent =
    await licenseAcquisition.AcquireLicense(new Uri(Url), httpContent);

if (responseHttpContent != null)
{
Exception exResult = licenseRequest.ProcessManualEnablingResponse(
await responseHttpContent.ReadAsByteArrayAsync());

if (exResult != null)
{
throw exResult;
}
bResult = true;
}
else
{
ExceptionMessage = licenseAcquisition.GetLastErrorMessage();
}
}
else
{
await licenseRequest.BeginServiceRequest();
bResult = true;
}
}
catch (Exception e)
{
ExceptionMessage = e.Message;
}

CompletionNotifier.Complete(bResult);
}

Initializing the AdaptiveMediaSource


Finally, you will need a function to initialize the AdaptiveMediaSource, created from a given Uri and MediaElement.
The Uri should be the link to the media manifest (DASH, since HLS is not supported with PlayReady); the MediaElement should be defined in your XAML.

private async void InitializeAdaptiveMediaSource(System.Uri uri, MediaElement m)


{
AdaptiveMediaSourceCreationResult result = await AdaptiveMediaSource.CreateFromUriAsync(uri);
if (result.Status == AdaptiveMediaSourceCreationStatus.Success)
{
ams = result.MediaSource;
SetUpProtectionManager(m);
m.SetMediaStreamSource(ams);
}
else
{
// Error handling
}
}

You can call this function in whichever event handles the start of adaptive streaming; for example, in a button click
event.
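
For example, a hedged sketch of wiring it to a button click (the URIs and the MyMediaElement element name are hypothetical):

private void PlayButton_Click(object sender, Windows.UI.Xaml.RoutedEventArgs e)
{
    // Hypothetical license server and DASH manifest URLs.
    playReadyLicenseUrl = "https://contoso.com/rightsmanager.asmx";
    InitializeAdaptiveMediaSource(new System.Uri("https://contoso.com/content/video.mpd"), MyMediaElement);
}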

See also
PlayReady DRM
PlayReady Encrypted Media Extension

[ Updated for UWP apps on Windows 10. For Windows 8.x articles, see the archive ]
This section describes how to modify your PlayReady web app to support the changes made from the previous
Windows 8.1 version to the Windows 10 version.
Using PlayReady media elements in Internet Explorer enables developers to create web apps capable of providing
PlayReady content to the user while enforcing the access rules defined by the content provider. This section
describes how to add PlayReady media elements to your existing web apps by using only HTML5 and JavaScript.

What's new in PlayReady Encrypted Media Extension


This section provides a list of changes made to the PlayReady Encrypted Media Extension (EME) to enable
PlayReady content protection on Windows 10.
The following list describes the new features and changes made to PlayReady Encrypted Media Extension for
Windows 10:
Added hardware digital rights management (DRM).
Hardware-based content protection support enables secure playback of high definition (HD) and ultra-high
definition (UHD) content on multiple device platforms. Key material (including private keys, content keys,
and any other key material used to derive or unlock said keys), and decrypted compressed and
uncompressed video samples are protected by leveraging hardware security.
Provides proactive acquisition of non-persistent licenses.
Provides acquisition of multiple licenses in one message.
You can either use a PlayReady object with multiple key identifiers (KeyIDs) as in Windows 8.1, or use
content decryption model data (CDMData) with multiple KeyIDs.

NOTE
In Windows 10, multiple key identifiers are supported under <KeyID> in CDMData.

Added real time expiration support, or limited duration license (LDL).


Provides the ability to set real-time expiration on licenses.
Added HDCP Type 1 (version 2.2) policy support.
Miracast is now implicit as an output.
Added secure stop.
Secure stop provides the means for a PlayReady device to confidently assert to a media streaming service
that media playback has stopped for any given piece of content.
Added audio and video license separation.
Separate tracks prevent video from being decoded as audio; enabling more robust content protection.
Emerging standards are requiring separate keys for audio and visual tracks.
Added MaxResDecode.
This feature was added to limit playback of content to a maximum resolution even when in possession of a
more capable key (but not a license). It supports cases where multiple stream sizes are encoded with a single
key.

Encrypted Media Extension support in PlayReady


This section describes the version of the W3C Encrypted Media Extension supported by PlayReady.
PlayReady for Web Apps is currently bound to the W3C Encrypted Media Extension (EME) draft of May 10, 2013.
This support will be changed to the updated EME specification in future versions of Windows.

Use hardware DRM


This section describes how your web app can use PlayReady hardware DRM, and how to disable hardware DRM if
the protected content does not support it.
To use PlayReady hardware DRM, your JavaScript web app should use the isTypeSupported EME method with a
key system identifier of com.microsoft.playready.hardware to query for PlayReady hardware DRM support from the
browser.
Occasionally, some content is not supported in hardware DRM. Cocktail content is never supported in hardware
DRM; if you want to play Cocktail content, you must opt out of hardware DRM. Some hardware DRM will support
HEVC and some will not; if you want to play HEVC content and the hardware DRM on the system doesn't support
it, you will want to opt out as well.

NOTE
To determine whether HEVC content is supported, after instantiating com.microsoft.playready, use the
PlayReadyStatics.CheckSupportedHardware method.

Add secure stop to your web app


This section describes how to add secure stop to your web app.
Secure stop provides the means for a PlayReady device to confidently assert to a media streaming service that
media playback has stopped for any given piece of content. This capability ensures your media streaming services
provide accurate enforcement and reporting of usage limitations on different devices for a given account.
There are two primary scenarios for sending a secure stop challenge:
When the media presentation stops because end of content was reached or when the user stopped the media
presentation somewhere in the middle.
When the previous session ends unexpectedly (for example, due to a system or app crash). The app will need to
query, either at startup or shutdown, for any outstanding secure stop sessions and send challenge(s) separate
from any other media playback.
The following procedures describe how to set up secure stop for various scenarios.
To set up secure stop for a normal end of a presentation:
1. Register the onEnded event before playback starts.
2. The onEnded event handler needs to call removeAttribute("src") from the video/audio element object to set the
source to null, which will trigger Media Foundation to tear down the topology, destroy the decryptor(s),
and set the stop state.
3. You can start the secure stop CDM session inside the handler to send the secure stop challenge to the server to
notify the playback has stopped at this time, but it can be done later as well.
To set up secure stop if the user navigates away from the page or closes down the tab or browser:
No app action is required to record the stop state; it will be recorded for you.
To set up secure stop for custom page controls or user actions (such as custom navigation buttons or starting a
new presentation before the current presentation completed):
When custom user action occurs, the app needs to set the source to NULL which will trigger the media
foundation to tear down the topology, destroy the decryptor(s), and set the stop state.
The following example demonstrates how to use secure stop in your web app:

// JavaScript source code

var g_prkey = null;


var g_keySession = null;
var g_fUseSpecificSecureStopSessionID = false;
var g_encodedMeteringCert = 'Base64 encoded of your metering cert (aka publisher cert)';

// Note: g_encodedLASessionId is the CDM session ID of the proactive or reactive license acquisition
// that we want to initiate the secure stop process.
var g_encodedLASessionId = null;

function main()
{
...

g_prkey = new MSMediaKeys("com.microsoft.playready");

...

// add 'onended' event handler to the video element


// Assume 'myvideo' is the ID of the video element
var videoElement = document.getElementById("myvideo");
videoElement.onended = function (e) {

//
// Calling removeAttribute("src") will set the source to null
// which will trigger the MF to tear down the topology, destroy the
// decryptor(s) and set the stop state. This is required in order
// to set the stop state.
//
videoElement.removeAttribute("src");
videoElement.load();

onEndOfStream();
};
}

function onEndOfStream()
{
...

createSecureStopCDMSession();

...
}

function createSecureStopCDMSession()
{
try{
var targetMediaCodec = "video/mp4";
var customData = "my custom data";
var encodedSessionId = g_encodedLASessionId;
if( !g_fUseSpecificSecureStopSessionID )
{
// Use "*" (wildcard) as the session ID to include all secure stop sessions
// TODO: base64 encode "*" and place encoded result to encodedSessionId
}

var int8ArrayCDMdata = formatSecureStopCDMData( encodedSessionId, customData, g_encodedMeteringCert );


var emptyArrayofInitData = new Uint8Array();

g_keySession = g_prkey.createSession(targetMediaCodec, emptyArrayofInitData, int8ArrayCDMdata);

addPlayreadyKeyEventHandler();

} catch( e )
{
// TODO: Handle exception
}
}

function addPlayreadyKeyEventHandler()
{
// add 'keymessage' eventhandler
g_keySession.addEventListener('mskeymessage', function (event) {

// TODO: Get the keyMessage from event.message.buffer which contains the secure stop challenge
// The keyMessage format for the secure stop is similar to LA as below:
//
// <PlayReadyKeyMessage type="SecureStop" >
// <SecureStop version="1.0" >
// <Challenge encoding="base64encoded">
// secure stop challenge
// </Challenge>
// <HttpHeaders>
// <HttpHeader>
// <name>Content-Type</name>
// <value>"content type data"</value>
// </HttpHeader>
// <HttpHeader>
// <name>SOAPAction</name>
// <value>soap action</value>
// </HttpHeader>
// ....
// </HttpHeaders>
// </SecureStop>
// </PlayReadyKeyMessage>

// TODO: send the secure stop challenge to a server that handles the secure stop challenge

// TODO: Receive the response and call event.target.update() to process the response
});

// add 'keyerror' eventhandler


g_keySession.addEventListener('mskeyerror', function (event) {
var session = event.target;

...

session.close();
});

// add 'keyadded' eventhandler


g_keySession.addEventListener('mskeyadded', function (event) {

var session = event.target;

...
session.close();
});
}

/**
* desc@ formatSecureStopCDMData
* generate playready CDMData
* CDMData is in xml format:
* <PlayReadyCDMData type="SecureStop">
* <SecureStop version="1.0">
* <SessionID>B64 encoded session ID</SessionID>
* <CustomData>B64 encoded custom data</CustomData>
* <ServerCert>B64 encoded server cert</ServerCert>
* </SecureStop>
* </PlayReadyCDMData>
*/
function formatSecureStopCDMData(encodedSessionId, customData, encodedPublisherCert)
{
var encodedCustomData = null;

// TODO: base64 encode the custom data and place the encoded result to encodedCustomData

var CDMDataStr = "<PlayReadyCDMData type=\"SecureStop\">" +


"<SecureStop version=\"1.0\" >" +
"<SessionID>" + encodedSessionId + "</SessionID>" +
"<CustomData>" + encodedCustomData + "</CustomData>" +
"<ServerCert>" + encodedPublisherCert + "</ServerCert>" +
"</SecureStop></PlayReadyCDMData>";

var int8ArrayCDMdata = null;

// TODO: Convert CDMDataStr to a Uint8 byte array and place the converted result in int8ArrayCDMdata

return int8ArrayCDMdata;
}

NOTE
The secure stop data's <SessionID>B64 encoded session ID</SessionID> in the sample above can be an asterisk (*), which
is a wildcard for all the secure stop sessions recorded. That is, the SessionID tag can be a specific session, or a wildcard (*) to
select all the secure stop sessions.

Programming considerations for Encrypted Media Extension


This section lists the programming considerations that you should take into account when creating your
PlayReady-enabled web app for Windows 10.
The MSMediaKeys and MSMediaKeySession objects created by your app must be kept alive until your app
closes. One way of ensuring these objects stay alive is to assign them as global variables (the variables would
become out of scope and subject to garbage collection if declared as a local variable inside of a function). For
example, the following sample assigns the variables g_msMediaKeys and g_mediaKeySession as global variables,
which are then assigned to the MSMediaKeys and MSMediaKeySession objects in the function.
var g_msMediaKeys;
var g_mediaKeySession;

function foo() {
...
g_msMediaKeys = new MSMediaKeys("com.microsoft.playready");
...
g_mediaKeySession = g_msMediaKeys.createSession("video/mp4", initData, null);
g_mediaKeySession.addEventListener(this.KEYMESSAGE_EVENT, function (e)
{
...
downloadPlayReadyKey(url, keyMessage, function (data)
{
g_mediaKeySession.update(data);
});
});
g_mediaKeySession.addEventListener(this.KEYADDED_EVENT, function ()
{
...
g_mediaKeySession.close();
g_mediaKeySession = null;
});
}

For more information, see the sample applications.

See also
PlayReady DRM
Detect faces in images or videos

[ Updated for UWP apps on Windows 10. For Windows 8.x articles, see the archive ]
[Some information relates to pre-released product which may be substantially modified before it's commercially
released. Microsoft makes no warranties, express or implied, with respect to the information provided here.]
This topic shows how to use the FaceDetector to detect faces in an image. The FaceTracker is optimized for
tracking faces over time in a sequence of video frames.
For an alternative method of tracking faces using the FaceDetectionEffect, see Scene analysis for media capture.
The code in this article was adapted from the Basic Face Detection and Basic Face Tracking samples. You can
download these samples to see the code used in context or to use the sample as a starting point for your own app.

Detect faces in a single image


The FaceDetector class allows you to detect one or more faces in a still image.
This example uses APIs from the following namespaces.

using Windows.Storage;
using Windows.Storage.Pickers;
using Windows.Storage.Streams;
using Windows.Graphics.Imaging;
using Windows.Media.FaceAnalysis;
using Windows.UI.Xaml.Media.Imaging;
using Windows.UI.Xaml.Shapes;

Declare a class member variable for the FaceDetector object and for the list of DetectedFace objects that will be
detected in the image.

FaceDetector faceDetector;
IList<DetectedFace> detectedFaces;

Face detection operates on a SoftwareBitmap object which can be created in a variety of ways. In this example a
FileOpenPicker is used to allow the user to pick an image file in which faces will be detected. For more
information about working with software bitmaps, see Imaging.

FileOpenPicker photoPicker = new FileOpenPicker();


photoPicker.ViewMode = PickerViewMode.Thumbnail;
photoPicker.SuggestedStartLocation = PickerLocationId.PicturesLibrary;
photoPicker.FileTypeFilter.Add(".jpg");
photoPicker.FileTypeFilter.Add(".jpeg");
photoPicker.FileTypeFilter.Add(".png");
photoPicker.FileTypeFilter.Add(".bmp");

StorageFile photoFile = await photoPicker.PickSingleFileAsync();


if (photoFile == null)
{
return;
}
Use the BitmapDecoder class to decode the image file into a SoftwareBitmap. The face detection process is
quicker with a smaller image and so you may want to scale the source image down to a smaller size. This can be
performed during decoding by creating a BitmapTransform object, setting the ScaledWidth and ScaledHeight
properties and passing it into the call to GetSoftwareBitmapAsync, which returns the decoded and scaled
SoftwareBitmap.

IRandomAccessStream fileStream = await photoFile.OpenAsync(FileAccessMode.Read);


BitmapDecoder decoder = await BitmapDecoder.CreateAsync(fileStream);

BitmapTransform transform = new BitmapTransform();


const float sourceImageHeightLimit = 1280;

if (decoder.PixelHeight > sourceImageHeightLimit)


{
float scalingFactor = (float)sourceImageHeightLimit / (float)decoder.PixelHeight;
transform.ScaledWidth = (uint)Math.Floor(decoder.PixelWidth * scalingFactor);
transform.ScaledHeight = (uint)Math.Floor(decoder.PixelHeight * scalingFactor);
}

SoftwareBitmap sourceBitmap = await decoder.GetSoftwareBitmapAsync(decoder.BitmapPixelFormat, BitmapAlphaMode.Premultiplied, transform,


ExifOrientationMode.IgnoreExifOrientation, ColorManagementMode.DoNotColorManage);

In the current version, the FaceDetector class only supports images in Gray8 or Nv12. The SoftwareBitmap class
provides the Convert method, which converts a bitmap from one format to another. This example converts the
source image into the Gray8 pixel format if it is not already in that format. If you want, you can use the
GetSupportedBitmapPixelFormats and IsBitmapPixelFormatSupported methods to determine at runtime if a
pixel format is supported, in case the set of supported formats is expanded in future versions.

// Use FaceDetector.GetSupportedBitmapPixelFormats and IsBitmapPixelFormatSupported to dynamically


// determine supported formats
const BitmapPixelFormat faceDetectionPixelFormat = BitmapPixelFormat.Gray8;

SoftwareBitmap convertedBitmap;

if (sourceBitmap.BitmapPixelFormat != faceDetectionPixelFormat)
{
convertedBitmap = SoftwareBitmap.Convert(sourceBitmap, faceDetectionPixelFormat);
}
else
{
convertedBitmap = sourceBitmap;
}

Instantiate the FaceDetector object by calling CreateAsync and then calling DetectFacesAsync, passing in the
bitmap that has been scaled to a reasonable size and converted to a supported pixel format. This method returns a
list of DetectedFace objects. ShowDetectedFaces is a helper method, shown below, that draws squares around
the faces in the image.

if (faceDetector == null)
{
faceDetector = await FaceDetector.CreateAsync();
}

detectedFaces = await faceDetector.DetectFacesAsync(convertedBitmap);


ShowDetectedFaces(sourceBitmap, detectedFaces);

Be sure to dispose of the objects that were created during the face detection process.
sourceBitmap.Dispose();
fileStream.Dispose();
convertedBitmap.Dispose();

To display the image and draw boxes around the detected faces, add a Canvas element to your XAML page.

<Canvas x:Name="VisualizationCanvas" Visibility="Visible" Grid.Row="0" HorizontalAlignment="Stretch" VerticalAlignment="Stretch"/>

Define some member variables to style the squares that will be drawn.

private readonly SolidColorBrush lineBrush = new SolidColorBrush(Windows.UI.Colors.Yellow);


private readonly double lineThickness = 2.0;
private readonly SolidColorBrush fillBrush = new SolidColorBrush(Windows.UI.Colors.Transparent);

In the ShowDetectedFaces helper method, a new ImageBrush is created and the source is set to a
SoftwareBitmapSource created from the SoftwareBitmap representing the source image. The background of
the XAML Canvas control is set to the image brush.
If the list of faces passed into the helper method isn't empty, loop through each face in the list and use the
FaceBox property of the DetectedFace class to determine the position and size of the rectangle within the image
that contains the face. Because the Canvas control is very likely to be a different size than the source image, you
should divide both the X and Y coordinates and the width and height of the FaceBox by a scaling value that is
the ratio of the source image size to the actual size of the Canvas control.

private async void ShowDetectedFaces(SoftwareBitmap sourceBitmap, IList<DetectedFace> faces)


{
ImageBrush brush = new ImageBrush();
SoftwareBitmapSource bitmapSource = new SoftwareBitmapSource();
await bitmapSource.SetBitmapAsync(sourceBitmap);
brush.ImageSource = bitmapSource;
brush.Stretch = Stretch.Fill;
this.VisualizationCanvas.Background = brush;

if (detectedFaces != null)
{
double widthScale = sourceBitmap.PixelWidth / this.VisualizationCanvas.ActualWidth;
double heightScale = sourceBitmap.PixelHeight / this.VisualizationCanvas.ActualHeight;

foreach (DetectedFace face in detectedFaces)


{
// Create a rectangle element for displaying the face box but since we're using a Canvas
// we must scale the rectangles according to the image's actual size.
// The original FaceBox values are saved in the Rectangle's Tag field so we can update the
// boxes when the Canvas is resized.
Rectangle box = new Rectangle();
box.Tag = face.FaceBox;
box.Width = (uint)(face.FaceBox.Width / widthScale);
box.Height = (uint)(face.FaceBox.Height / heightScale);
box.Fill = this.fillBrush;
box.Stroke = this.lineBrush;
box.StrokeThickness = this.lineThickness;
box.Margin = new Thickness((uint)(face.FaceBox.X / widthScale), (uint)(face.FaceBox.Y / heightScale), 0, 0);

this.VisualizationCanvas.Children.Add(box);
}
}
}
Track faces in a sequence of frames
If you want to detect faces in video, it is more efficient to use the FaceTracker class rather than the FaceDetector
class, although the implementation steps are very similar. The FaceTracker uses information about previously
processed frames to optimize the detection process.

using Windows.Media;
using System.Threading;
using Windows.System.Threading;

Declare a class variable for the FaceTracker object. This example uses a ThreadPoolTimer to initiate face tracking
on a defined interval. A SemaphoreSlim is used to make sure that only one face tracking operation is running at a
time.

private FaceTracker faceTracker;


private ThreadPoolTimer frameProcessingTimer;
private SemaphoreSlim frameProcessingSemaphore = new SemaphoreSlim(1);

To initialize the face tracking operation, create a new FaceTracker object by calling CreateAsync. Initialize the
desired timer interval and then create the timer. The ProcessCurrentVideoFrame helper method will be called
every time the specified interval elapses.

this.faceTracker = await FaceTracker.CreateAsync();


TimeSpan timerInterval = TimeSpan.FromMilliseconds(66); // 15 fps
this.frameProcessingTimer = Windows.System.Threading.ThreadPoolTimer.CreatePeriodicTimer(new
Windows.System.Threading.TimerElapsedHandler(ProcessCurrentVideoFrame), timerInterval);

The ProcessCurrentVideoFrame helper is called asynchronously by the timer, so the method first calls the
semaphore's Wait method to see if a tracking operation is ongoing, and if it is the method returns without trying
to detect faces. At the end of this method, the semaphore's Release method is called, which allows the subsequent
call to ProcessCurrentVideoFrame to continue.
The FaceTracker class operates on VideoFrame objects. There are multiple ways you can obtain a VideoFrame
including capturing a preview frame from a running MediaCapture object or by implementing the ProcessFrame
method of the IBasicVideoEffect. This example uses an undefined helper method that returns a video frame,
GetLatestFrame, as a placeholder for this operation. For information about getting video frames from the preview
stream of a running media capture device, see Get a preview frame.
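As one illustration only (not the sample's implementation), a hedged sketch of a GetLatestFrame built on a running MediaCapture preview; _mediaCapture is assumed to be initialized elsewhere with its preview started, and the usual Windows.Media.Capture and Windows.Media.MediaProperties using directives are assumed:

private MediaCapture _mediaCapture; // assumed: initialized and preview running

private async Task<VideoFrame> GetLatestFrame()
{
    // Create an Nv12 destination frame sized to the preview stream, then ask
    // the capture pipeline to fill it with the latest preview frame.
    var props = (VideoEncodingProperties)_mediaCapture.VideoDeviceController
        .GetMediaStreamProperties(MediaStreamType.VideoPreview);
    var frame = new VideoFrame(BitmapPixelFormat.Nv12, (int)props.Width, (int)props.Height);
    return await _mediaCapture.GetPreviewFrameAsync(frame);
}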
As with FaceDetector, the FaceTracker supports a limited set of pixel formats. This example abandons face
detection if the supplied frame is not in the Nv12 format.
Call ProcessNextFrameAsync to retrieve a list of DetectedFace objects representing the faces in the frame. After
you have the list of faces, you can display them in the same manner described above for face detection. Note that,
because the face tracking helper method is not called on the UI thread, you must make any UI updates within a
call to CoreDispatcher.RunAsync.
public async void ProcessCurrentVideoFrame(ThreadPoolTimer timer)
{
if (!frameProcessingSemaphore.Wait(0))
{
return;
}

VideoFrame currentFrame = await GetLatestFrame();

// Use FaceDetector.GetSupportedBitmapPixelFormats and IsBitmapPixelFormatSupported to dynamically


// determine supported formats
const BitmapPixelFormat faceDetectionPixelFormat = BitmapPixelFormat.Nv12;

if (currentFrame.SoftwareBitmap.BitmapPixelFormat != faceDetectionPixelFormat)
{
    // Release the frame and the semaphore before bailing out so that the
    // next timer tick can attempt face tracking again.
    currentFrame.Dispose();
    frameProcessingSemaphore.Release();
    return;
}

try
{
IList<DetectedFace> detectedFaces = await faceTracker.ProcessNextFrameAsync(currentFrame);

var previewFrameSize = new Windows.Foundation.Size(currentFrame.SoftwareBitmap.PixelWidth, currentFrame.SoftwareBitmap.PixelHeight);


var ignored = this.Dispatcher.RunAsync(Windows.UI.Core.CoreDispatcherPriority.Normal, () =>
{
this.SetupVisualization(previewFrameSize, detectedFaces);
});
}
catch (Exception e)
{
// Face tracking failed
}
finally
{
frameProcessingSemaphore.Release();
}

currentFrame.Dispose();
}

Related topics
Scene analysis for media capture
Basic Face Detection sample
Basic Face Tracking sample
Camera
Basic photo, video, and audio capture with MediaCapture
Media playback
Custom video effects

[ Updated for UWP apps on Windows 10. For Windows 8.x articles, see the archive ]
This article describes how to create a Windows Runtime component that implements the IBasicVideoEffect
interface to create custom effects for video streams. Custom effects can be used with several different Windows
Runtime APIs including MediaCapture, which provides access to a device's camera, and MediaComposition,
which allows you to create complex compositions out of media clips.

Add a custom effect to your app


A custom video effect is defined in a class that implements the IBasicVideoEffect interface. This class can't be
included directly in your app's project. Instead, you must use a Windows Runtime component to host your video
effect class.
Add a Windows Runtime component for your video effect
1. In Microsoft Visual Studio, with your solution open, go to the File menu and select Add->New Project.
2. Select the Windows Runtime Component (Universal Windows) project type.
3. For this example, name the project VideoEffectComponent. This name will be referenced in code later.
4. Click OK.
5. The project template creates a class called Class1.cs. In Solution Explorer, right-click the icon for Class1.cs and
select Rename.
6. Rename the file to ExampleVideoEffect.cs. Visual Studio will show a prompt asking if you want to update all
references to the new name. Click Yes.
7. Open ExampleVideoEffect.cs and update the class definition to implement the IBasicVideoEffect interface.

public sealed class ExampleVideoEffect : IBasicVideoEffect

You need to include the following namespaces in your effect class file in order to access all of the types used in the
examples in this article.

using Windows.Media.Effects;
using Windows.Media.MediaProperties;
using Windows.Foundation.Collections;
using Windows.Graphics.DirectX.Direct3D11;
using Windows.Graphics.Imaging;

Implement the IBasicVideoEffect interface using software processing


Your video effect must implement all of the methods and properties of the IBasicVideoEffect interface. This
section walks you through a simple implementation of this interface that uses software processing.
Close method
The system will call the Close method on your class when the effect should shut down. You should use this method
to dispose of any resources you have created. The argument to the method is a MediaEffectClosedReason that
lets you know whether the effect was closed normally, if an error occurred, or if the effect does not support the
required encoding format.
public void Close(MediaEffectClosedReason reason)
{
    // Dispose of effect resources
}

DiscardQueuedFrames method
The DiscardQueuedFrames method is called when your effect should reset. A typical scenario for this is if your
effect stores previously processed frames to use in processing the current frame. When this method is called, you
should dispose of the set of previous frames you saved. This method can be used to reset any state related to
previous frames, not only accumulated video frames.

private int frameCount;

public void DiscardQueuedFrames()
{
    frameCount = 0;
}

IsReadOnly property
The IsReadOnly property lets the system know if your effect will write to the output of the effect. If your app does
not modify the video frames (for example, an effect that only performs analysis of the video frames), you should
set this property to true, which will cause the system to efficiently copy the frame input to the frame output for you.

TIP
When the IsReadOnly property is set to true, the system copies the input frame to the output frame before ProcessFrame
is called. Setting the IsReadOnly property to true does not restrict you from writing to the effect's output frames in
ProcessFrame.

public bool IsReadOnly { get { return false; } }

SetEncodingProperties method
The system calls SetEncodingProperties on your effect to let you know the encoding properties for the video
stream upon which the effect is operating. This method also provides a reference to the Direct3D device used for
hardware rendering. The usage of this device is shown in the hardware processing example later in this article.

private VideoEncodingProperties encodingProperties;

public void SetEncodingProperties(VideoEncodingProperties encodingProperties, IDirect3DDevice device)
{
    this.encodingProperties = encodingProperties;
}

SupportedEncodingProperties property
The system checks the SupportedEncodingProperties property to determine which encoding properties are
supported by your effect. Note that if the consumer of your effect can't encode video using the properties you
specify, it will call Close on your effect and will remove your effect from the video pipeline.
public IReadOnlyList<VideoEncodingProperties> SupportedEncodingProperties
{
    get
    {
        var encodingProperties = new VideoEncodingProperties();
        encodingProperties.Subtype = "ARGB32";
        return new List<VideoEncodingProperties>() { encodingProperties };

        // If the returned list is empty, the encoding type defaults to ARGB32:
        // return new List<VideoEncodingProperties>();
    }
}

NOTE
If you return an empty list of VideoEncodingProperties objects from SupportedEncodingProperties, the system will
default to ARGB32 encoding.

SupportedMemoryTypes property
The system checks the SupportedMemoryTypes property to determine whether your effect will access video
frames in software memory or in hardware (GPU) memory. If you return MediaMemoryTypes.Cpu, your effect
will be passed input and output frames that contain image data in SoftwareBitmap objects. If you return
MediaMemoryTypes.Gpu, your effect will be passed input and output frames that contain image data in
IDirect3DSurface objects.

public MediaMemoryTypes SupportedMemoryTypes { get { return MediaMemoryTypes.Cpu; } }

NOTE
If you specify MediaMemoryTypes.GpuAndCpu, the system will use either GPU or system memory, whichever is more
efficient for the pipeline. When using this value, you must check in the ProcessFrame method to see whether the
SoftwareBitmap or IDirect3DSurface passed into the method contains data, and then process the frame accordingly.
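For example, a GpuAndCpu effect's ProcessFrame implementation might branch like this minimal sketch (not part of the original sample):

public void ProcessFrame(ProcessVideoFrameContext context)
{
    if (context.InputFrame.SoftwareBitmap != null)
    {
        // The frame arrived in CPU memory; process the SoftwareBitmap.
    }
    else if (context.InputFrame.Direct3DSurface != null)
    {
        // The frame arrived in GPU memory; process the IDirect3DSurface.
    }
}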

TimeIndependent property
The TimeIndependent property lets the system know if your effect does not require uniform timing. When set to
true, the system can use optimizations that enhance effect performance.

public bool TimeIndependent { get { return true; } }

SetProperties method
The SetProperties method allows the app that is using your effect to adjust effect parameters. Properties are
passed as an IPropertySet map of property names and values.

private IPropertySet configuration;

public void SetProperties(IPropertySet configuration)
{
    this.configuration = configuration;
}

This simple example will dim the pixels in each video frame according to a specified value. A property is declared
and TryGetValue is used to get the value set by the calling app. If no value was set, a default value of .5 is used.
public double FadeValue
{
    get
    {
        object val;
        if (configuration != null && configuration.TryGetValue("FadeValue", out val))
        {
            return (double)val;
        }
        return .5;
    }
}

ProcessFrame method
The ProcessFrame method is where your effect modifies the image data of the video. The method is called once
per frame and is passed a ProcessVideoFrameContext object. This object contains an input VideoFrame object
that contains the incoming frame to be processed and an output VideoFrame object to which you write image data that will be passed on to the rest of the video pipeline. Each of these VideoFrame objects has a SoftwareBitmap
property and a Direct3DSurface property, but which of these can be used is determined by the value you returned
from the SupportedMemoryTypes property.
This example shows a simple implementation of the ProcessFrame method using software processing. For more
information about working with SoftwareBitmap objects, see Imaging. An example ProcessFrame
implementation using hardware processing is shown later in this article.
Accessing the data buffer of a SoftwareBitmap requires COM interop, so you should include the
System.Runtime.InteropServices namespace in your effect class file.

using System.Runtime.InteropServices;

Add the following code inside the namespace for your effect to import the interface for accessing the image buffer.

[ComImport]
[Guid("5B0D3235-4DBA-4D44-865E-8F1D0E4FD04D")]
[InterfaceType(ComInterfaceType.InterfaceIsIUnknown)]
unsafe interface IMemoryBufferByteAccess
{
    void GetBuffer(out byte* buffer, out uint capacity);
}

NOTE
Because this technique accesses a native, unmanaged image buffer, you will need to configure your project to allow unsafe
code.
1. In Solution Explorer, right-click the VideoEffectComponent project and select Properties.
2. Select the Build tab.
3. Select the Allow unsafe code check box.

Now you can add the ProcessFrame method implementation. First, this method obtains a BitmapBuffer object
from both the input and output software bitmaps. Note that the output frame is opened for writing and the input
for reading. Next, an IMemoryBufferReference is obtained for each buffer by calling CreateReference. Then, the actual data buffer is obtained by casting the IMemoryBufferReference objects to the COM interop interface defined above, IMemoryBufferByteAccess, and then calling GetBuffer.
Now that the data buffers have been obtained, you can read from the input buffer and write to the output buffer.
The layout of the buffer is obtained by calling GetPlaneDescription, which provides information on the width, stride, and initial offset of the buffer. The bits per pixel are determined by the encoding properties set previously with the SetEncodingProperties method. The buffer format information is used to find the index into the buffer for each pixel. The pixel value from the source buffer is copied into the target buffer, with the color values multiplied by the FadeValue property defined for this effect to dim them by the specified amount.

public unsafe void ProcessFrame(ProcessVideoFrameContext context)
{
    using (BitmapBuffer buffer = context.InputFrame.SoftwareBitmap.LockBuffer(BitmapBufferAccessMode.Read))
    using (BitmapBuffer targetBuffer = context.OutputFrame.SoftwareBitmap.LockBuffer(BitmapBufferAccessMode.Write))
    {
        using (var reference = buffer.CreateReference())
        using (var targetReference = targetBuffer.CreateReference())
        {
            byte* dataInBytes;
            uint capacity;
            ((IMemoryBufferByteAccess)reference).GetBuffer(out dataInBytes, out capacity);

            byte* targetDataInBytes;
            uint targetCapacity;
            ((IMemoryBufferByteAccess)targetReference).GetBuffer(out targetDataInBytes, out targetCapacity);

            var fadeValue = FadeValue;

            // Fill in the BGRA plane.
            BitmapPlaneDescription bufferLayout = buffer.GetPlaneDescription(0);
            for (int i = 0; i < bufferLayout.Height; i++)
            {
                for (int j = 0; j < bufferLayout.Width; j++)
                {
                    int bytesPerPixel = 4;
                    if (encodingProperties.Subtype != "ARGB32")
                    {
                        // If you support other encodings, adjust the index into the buffer accordingly.
                    }

                    int idx = bufferLayout.StartIndex + bufferLayout.Stride * i + bytesPerPixel * j;

                    targetDataInBytes[idx + 0] = (byte)(fadeValue * (float)dataInBytes[idx + 0]);
                    targetDataInBytes[idx + 1] = (byte)(fadeValue * (float)dataInBytes[idx + 1]);
                    targetDataInBytes[idx + 2] = (byte)(fadeValue * (float)dataInBytes[idx + 2]);
                    targetDataInBytes[idx + 3] = dataInBytes[idx + 3];
                }
            }
        }
    }
}

Implement the IBasicVideoEffect interface using hardware processing


Creating a custom video effect by using hardware (GPU) processing is almost identical to using software
processing as described above. This section will show the few differences in an effect that uses hardware
processing. This example uses the Win2D Windows Runtime API. For more information about using Win2D, see the
Win2D documentation.
Use the following steps to add the Win2D NuGet package to the project you created as described in the Add a
custom effect to your app section at the beginning of this article.
To add the Win2D NuGet package to your effect project
1. In Solution Explorer, right-click the VideoEffectComponent project and select Manage NuGet Packages.
2. At the top of the window, select the Browse tab.
3. In the search box, enter Win2D.
4. Select Win2D.uwp, and then select Install in the right pane.
5. The Review Changes dialog shows you the package to be installed. Click OK.
6. Accept the package license.
In addition to the namespaces included in the basic project setup, you will need to include the following
namespaces provided by Win2D.

using Microsoft.Graphics.Canvas.Effects;
using Microsoft.Graphics.Canvas;

Because this effect will use GPU memory for operating on the image data, you should return
MediaMemoryTypes.Gpu from the SupportedMemoryTypes property.

public MediaMemoryTypes SupportedMemoryTypes { get { return MediaMemoryTypes.Gpu; } }

Set the encoding properties that your effect will support with the SupportedEncodingProperties property. When
working with Win2D, you must use ARGB32 encoding.

public IReadOnlyList<VideoEncodingProperties> SupportedEncodingProperties
{
    get
    {
        var encodingProperties = new VideoEncodingProperties();
        encodingProperties.Subtype = "ARGB32";
        return new List<VideoEncodingProperties>() { encodingProperties };
    }
}

Use the SetEncodingProperties method to create a new Win2D CanvasDevice object from the
IDirect3DDevice passed into the method.

private CanvasDevice canvasDevice;

public void SetEncodingProperties(VideoEncodingProperties encodingProperties, IDirect3DDevice device)
{
    canvasDevice = CanvasDevice.CreateFromDirect3D11Device(device);
}

The SetProperties implementation is identical to the previous software processing example. This example uses a
BlurAmount property to configure a Win2D blur effect.

private IPropertySet configuration;

public void SetProperties(IPropertySet configuration)
{
    this.configuration = configuration;
}

public double BlurAmount
{
    get
    {
        object val;
        if (configuration != null && configuration.TryGetValue("BlurAmount", out val))
        {
            return (double)val;
        }
        return 3;
    }
}

The last step is to implement the ProcessFrame method that actually processes the image data.
Using Win2D APIs, a CanvasBitmap is created from the input frame's Direct3DSurface property. A
CanvasRenderTarget is created from the output frame's Direct3DSurface and a CanvasDrawingSession is
created from this render target. A new Win2D GaussianBlurEffect is initialized, using the BlurAmount property
our effect exposes via SetProperties. Finally, the CanvasDrawingSession.DrawImage method is called to draw
the input bitmap to the render target using the blur effect.

public void ProcessFrame(ProcessVideoFrameContext context)
{
    using (CanvasBitmap inputBitmap = CanvasBitmap.CreateFromDirect3D11Surface(canvasDevice, context.InputFrame.Direct3DSurface))
    using (CanvasRenderTarget renderTarget = CanvasRenderTarget.CreateFromDirect3D11Surface(canvasDevice, context.OutputFrame.Direct3DSurface))
    using (CanvasDrawingSession ds = renderTarget.CreateDrawingSession())
    {
        var gaussianBlurEffect = new GaussianBlurEffect
        {
            Source = inputBitmap,
            BlurAmount = (float)BlurAmount,
            Optimization = EffectOptimization.Speed
        };

        ds.DrawImage(gaussianBlurEffect);
    }
}

Adding your custom effect to your app


To use your video effect from your app, you must add a reference to the effect project to your app.
1. In Solution Explorer, under your app project, right-click References and select Add reference.
2. Expand the Projects tab, select Solution, and then select the check box for your effect project name. For this
example, the name is VideoEffectComponent.
3. Click OK.
Add your custom effect to a camera video stream
You can set up a simple preview stream from the camera by following the steps in the article Simple camera
preview access. Following those steps will provide you with an initialized MediaCapture object that is used to
access the camera's video stream.
To add your custom video effect to a camera stream, first create a new VideoEffectDefinition object, passing in the namespace and class name for your effect. Next, call the MediaCapture object's AddVideoEffectAsync method to add your effect to the specified stream. This example uses the MediaStreamType.VideoPreview value to specify that the effect should be added to the preview stream. If your app supports video capture, you could also use MediaStreamType.VideoRecord to add the effect to the capture stream. AddVideoEffectAsync returns an IMediaExtension object representing your custom effect. You can use the SetProperties method to set the configuration for your effect.
After the effect has been added, StartPreviewAsync is called to start the preview stream.

var videoEffectDefinition = new VideoEffectDefinition("VideoEffectComponent.ExampleVideoEffect");

IMediaExtension videoEffect =
await mediaCapture.AddVideoEffectAsync(videoEffectDefinition, MediaStreamType.VideoPreview);

videoEffect.SetProperties(new PropertySet() { { "FadeValue", .25 } });

await mediaCapture.StartPreviewAsync();

Add your custom effect to a clip in a MediaComposition


For general guidance for creating media compositions from video clips, see Media compositions and editing. The
following code snippet shows the creation of a simple media composition that uses a custom video effect. A
MediaClip object is created by calling CreateFromFileAsync, passing in a video file that was selected by the user
with a FileOpenPicker, and the clip is added to a new MediaComposition. Next, a new VideoEffectDefinition object is created, passing the namespace and class name for your effect, along with a PropertySet of effect properties, to the constructor. Finally, the effect definition is added to the VideoEffectDefinitions collection of the MediaClip object.

MediaComposition composition = new MediaComposition();

var clip = await MediaClip.CreateFromFileAsync(pickedFile);
composition.Clips.Add(clip);

var videoEffectDefinition = new VideoEffectDefinition("VideoEffectComponent.ExampleVideoEffect", new PropertySet() { { "FadeValue", .5 } });

clip.VideoEffectDefinitions.Add(videoEffectDefinition);

Related topics
Simple camera preview access
Media compositions and editing
Win2D documentation
Media playback
Custom audio effects

This article describes how to create a Windows Runtime component that implements the IBasicAudioEffect
interface to create custom effects for audio streams. Custom effects can be used with several different Windows
Runtime APIs including MediaCapture, which provides access to a device's camera, MediaComposition, which allows you to create complex compositions out of media clips, and AudioGraph, which allows you to quickly assemble a graph of various audio input, output, and submix nodes.

Add a custom effect to your app


A custom audio effect is defined in a class that implements the IBasicAudioEffect interface. This class can't be
included directly in your app's project. Instead, you must use a Windows Runtime component to host your audio
effect class.
Add a Windows Runtime component for your audio effect
1. In Microsoft Visual Studio, with your solution open, go to the File menu and select Add->New Project.
2. Select the Windows Runtime Component (Universal Windows) project type.
3. For this example, name the project AudioEffectComponent. This name will be referenced in code later.
4. Click OK.
5. The project template creates a class called Class1.cs. In Solution Explorer, right-click the icon for Class1.cs and
select Rename.
6. Rename the file to ExampleAudioEffect.cs. Visual Studio will show a prompt asking if you want to update all
references to the new name. Click Yes.
7. Open ExampleAudioEffect.cs and update the class definition to implement the IBasicAudioEffect interface.

public sealed class ExampleAudioEffect : IBasicAudioEffect

You need to include the following namespaces in your effect class file in order to access all of the types used in the
examples in this article.

using Windows.Media.Effects;
using Windows.Media.MediaProperties;
using Windows.Foundation.Collections;
using System.Runtime.InteropServices;
using Windows.Media;
using Windows.Foundation;

Implement the IBasicAudioEffect interface


Your audio effect must implement all of the methods and properties of the IBasicAudioEffect interface. This
section walks you through a simple implementation of this interface to create a basic echo effect.
SupportedEncodingProperties property
The system checks the SupportedEncodingProperties property to determine which encoding properties are
supported by your effect. Note that if the consumer of your effect can't encode audio using the properties you
specify, the system will call Close on your effect and will remove your effect from the audio pipeline. In this
example, AudioEncodingProperties objects are created and added to the returned list to support 44.1 kHz and
48 kHz, 32-bit float, mono encoding.

public IReadOnlyList<AudioEncodingProperties> SupportedEncodingProperties
{
    get
    {
        var supportedEncodingProperties = new List<AudioEncodingProperties>();
        AudioEncodingProperties encodingProps1 = AudioEncodingProperties.CreatePcm(44100, 1, 32);
        encodingProps1.Subtype = MediaEncodingSubtypes.Float;
        AudioEncodingProperties encodingProps2 = AudioEncodingProperties.CreatePcm(48000, 1, 32);
        encodingProps2.Subtype = MediaEncodingSubtypes.Float;

        supportedEncodingProperties.Add(encodingProps1);
        supportedEncodingProperties.Add(encodingProps2);

        return supportedEncodingProperties;
    }
}

SetEncodingProperties method
The system calls SetEncodingProperties on your effect to let you know the encoding properties for the audio
stream upon which the effect is operating. In order to implement an echo effect, this example uses a buffer to store one second of audio data. This method provides the opportunity to initialize the size of the buffer to the number of samples in one second of audio, based on the sample rate at which the audio is encoded. The echo effect also uses an integer counter to keep track of the current position in the delay buffer. Since SetEncodingProperties is called whenever the effect is added to the audio pipeline, this is a good time to initialize that value to 0. You may also want to capture the AudioEncodingProperties object passed into this method to use elsewhere in your effect.

private float[] echoBuffer;
private int currentActiveSampleIndex;
private AudioEncodingProperties currentEncodingProperties;

public void SetEncodingProperties(AudioEncodingProperties encodingProperties)
{
    currentEncodingProperties = encodingProperties;
    echoBuffer = new float[encodingProperties.SampleRate]; // exactly one second delay
    currentActiveSampleIndex = 0;
}

SetProperties method
The SetProperties method allows the app that is using your effect to adjust effect parameters. Properties are
passed as an IPropertySet map of property names and values.

IPropertySet configuration;

public void SetProperties(IPropertySet configuration)
{
    this.configuration = configuration;
}

This simple example will mix the current audio sample with a value from the delay buffer according to the value of the Mix property. A property is declared and TryGetValue is used to get the value set by the calling app. If no value was set, a default value of .5 is used. Note that this property is read-only. The property value must be set using SetProperties.
public float Mix
{
    get
    {
        object val;
        if (configuration != null && configuration.TryGetValue("Mix", out val))
        {
            return (float)val;
        }
        return .5f;
    }
}

ProcessFrame method
The ProcessFrame method is where your effect modifies the audio data of the stream. The method is called once per frame and is passed a ProcessAudioFrameContext object. This object contains an input AudioFrame object that contains the incoming frame to be processed and an output AudioFrame object to which you write audio data that will be passed on to the rest of the audio pipeline. An audio frame is a buffer of audio samples representing a short slice of audio data.
Accessing the data buffer of an AudioFrame requires COM interop, so you should include the System.Runtime.InteropServices namespace in your effect class file and then add the following code inside the namespace for your effect to import the interface for accessing the audio buffer.

[ComImport]
[Guid("5B0D3235-4DBA-4D44-865E-8F1D0E4FD04D")]
[InterfaceType(ComInterfaceType.InterfaceIsIUnknown)]
unsafe interface IMemoryBufferByteAccess
{
    void GetBuffer(out byte* buffer, out uint capacity);
}

NOTE
Because this technique accesses a native, unmanaged audio buffer, you will need to configure your project to allow unsafe code.
1. In Solution Explorer, right-click the AudioEffectComponent project and select Properties.
2. Select the Build tab.
3. Select the Allow unsafe code check box.

Now you can add the ProcessFrame method implementation to your effect. First, this method obtains an AudioBuffer object from both the input and output audio frames. Note that the output frame is opened for writing and the input for reading. Next, an IMemoryBufferReference is obtained for each buffer by calling CreateReference. Then, the actual data buffer is obtained by casting the IMemoryBufferReference objects to the COM interop interface defined above, IMemoryBufferByteAccess, and then calling GetBuffer.
Now that the data buffers have been obtained, you can read from the input buffer and write to the output buffer. For each sample in the input buffer, the value is obtained and multiplied by 1 - Mix to set the dry signal value of the effect. Next, a sample is retrieved from the current position in the echo buffer and multiplied by Mix to set the wet value of the effect. The output sample is set to the sum of the dry and wet values. Finally, each input sample is stored in the echo buffer and the current sample index is incremented.
unsafe public void ProcessFrame(ProcessAudioFrameContext context)
{
    AudioFrame inputFrame = context.InputFrame;
    AudioFrame outputFrame = context.OutputFrame;

    using (AudioBuffer inputBuffer = inputFrame.LockBuffer(AudioBufferAccessMode.Read),
                       outputBuffer = outputFrame.LockBuffer(AudioBufferAccessMode.Write))
    using (IMemoryBufferReference inputReference = inputBuffer.CreateReference(),
                                  outputReference = outputBuffer.CreateReference())
    {
        byte* inputDataInBytes;
        byte* outputDataInBytes;
        uint inputCapacity;
        uint outputCapacity;

        ((IMemoryBufferByteAccess)inputReference).GetBuffer(out inputDataInBytes, out inputCapacity);
        ((IMemoryBufferByteAccess)outputReference).GetBuffer(out outputDataInBytes, out outputCapacity);

        float* inputDataInFloat = (float*)inputDataInBytes;
        float* outputDataInFloat = (float*)outputDataInBytes;

        float inputData;
        float echoData;

        // Process audio data
        int dataInFloatLength = (int)inputBuffer.Length / sizeof(float);

        for (int i = 0; i < dataInFloatLength; i++)
        {
            inputData = inputDataInFloat[i] * (1.0f - this.Mix);
            echoData = echoBuffer[currentActiveSampleIndex] * this.Mix;
            outputDataInFloat[i] = inputData + echoData;
            echoBuffer[currentActiveSampleIndex] = inputDataInFloat[i];
            currentActiveSampleIndex++;

            if (currentActiveSampleIndex == echoBuffer.Length)
            {
                // Wrap around (after one second of samples)
                currentActiveSampleIndex = 0;
            }
        }
    }
}

Close method
The system will call the Close method on your class when the effect should shut down. You should use this method to dispose of any resources you have created. The argument to the method is a MediaEffectClosedReason that lets you know whether the effect was closed normally, if an error occurred, or if the effect does not support the required encoding format.

public void Close(MediaEffectClosedReason reason)
{
    // Dispose of effect resources
    echoBuffer = null;
}

DiscardQueuedFrames method
The DiscardQueuedFrames method is called when your effect should reset. A typical scenario for this is if your
effect stores previously processed frames to use in processing the current frame. When this method is called, you
should dispose of the set of previous frames you saved. This method can be used to reset any state related to
previous frames, not only accumulated audio frames.
public void DiscardQueuedFrames()
{
    // Reset the entire contents of the samples buffer
    Array.Clear(echoBuffer, 0, echoBuffer.Length);
    currentActiveSampleIndex = 0;
}

TimeIndependent property
The TimeIndependent property lets the system know if your effect does not require uniform timing. When set to true, the system can use optimizations that enhance effect performance.

public bool TimeIndependent { get { return true; } }

UseInputFrameForOutput property
Set the UseInputFrameForOutput property to true to tell the system that your effect will write its output to the
audio buffer of the InputFrame of the ProcessAudioFrameContext passed into ProcessFrame instead of writing
to the OutputFrame.

public bool UseInputFrameForOutput { get { return false; } }

Adding your custom effect to your app


To use your audio effect from your app, you must add a reference to the effect project to your app.
1. In Solution Explorer, under your app project, right-click References and select Add reference.
2. Expand the Projects tab, select Solution, and then select the check box for your effect project name. For this
example, the name is AudioEffectComponent.
3. Click OK.
If your audio effect class is declared in a different namespace, be sure to include that namespace in your code file.

using AudioEffectComponent;

Add your custom effect to an AudioGraph node


For general information about using audio graphs, see Audio graphs. The following code snippet shows you how to add the example echo effect shown in this article to an audio graph node. First, a PropertySet is created and a
value for the Mix property, defined by the effect, is set. Next, the AudioEffectDefinition constructor is called,
passing in the full class name of the custom effect type and the property set. Finally, the effect definition is added to
the EffectDefinitions property of an existing FileInputNode, causing the audio emitted to be processed by the
custom effect.

// Create a property set and add a property/value pair
PropertySet echoProperties = new PropertySet();
echoProperties.Add("Mix", 0.5f);

// Instantiate the custom effect defined in the 'AudioEffectComponent' project
AudioEffectDefinition echoEffectDefinition = new AudioEffectDefinition(typeof(ExampleAudioEffect).FullName, echoProperties);
fileInputNode.EffectDefinitions.Add(echoEffectDefinition);

After it has been added to a node, the custom effect can be disabled by calling DisableEffectsByDefinition and
passing in the AudioEffectDefinition object. For more information about using audio graphs in your app, see
AudioGraph.
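For example, a minimal sketch that reuses the fileInputNode and echoEffectDefinition variables from the snippet above:

// Temporarily bypass the custom effect without removing its definition.
fileInputNode.DisableEffectsByDefinition(echoEffectDefinition);

// Re-enable the effect later.
fileInputNode.EnableEffectsByDefinition(echoEffectDefinition);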
Add your custom effect to a clip in a MediaComposition
The following code snippet demonstrates adding the custom audio effect to a video clip and a background audio
track in a media composition. For general guidance for creating media compositions from video clips and adding
background audio tracks, see Media compositions and editing.

// Create a property set and add a property/value pair
PropertySet echoProperties = new PropertySet();
echoProperties.Add("Mix", 0.5f);

// Instantiate the custom effect defined in the 'AudioEffectComponent' project
AudioEffectDefinition echoEffectDefinition = new AudioEffectDefinition(typeof(ExampleAudioEffect).FullName, echoProperties);

// Add custom audio effect to the current clip in the timeline
var currentClip = composition.Clips.FirstOrDefault(
    mc => mc.StartTimeInComposition <= mediaPlayerElement.MediaPlayer.PlaybackSession.Position &&
          mc.EndTimeInComposition >= mediaPlayerElement.MediaPlayer.PlaybackSession.Position);
currentClip.AudioEffectDefinitions.Add(echoEffectDefinition);

// Add custom audio effect to the first background audio track
if (composition.BackgroundAudioTracks.Count > 0)
{
    composition.BackgroundAudioTracks[0].AudioEffectDefinitions.Add(echoEffectDefinition);
}

Related topics
Simple camera preview access
Media compositions and editing
Win2D documentation
Media playback
Media compositions and editing

[ Updated for UWP apps on Windows 10. For Windows 8.x articles, see the archive ]
This article shows you how to use the APIs in the Windows.Media.Editing namespace to quickly develop apps that enable users to create media compositions from audio and video source files. Features of the framework include the ability to programmatically append multiple video clips together, add video and image overlays, add background audio, and apply both audio and video effects. Once created, media compositions can be rendered
into a flat media file for playback or sharing, but compositions can also be serialized to and deserialized from disk,
allowing the user to load and modify compositions that they have previously created. All of this functionality is
provided in an easy-to-use Windows Runtime interface that dramatically reduces the amount and complexity of
code required to perform these tasks when compared to the low-level Microsoft Media Foundation API.

Create a new media composition


The MediaComposition class is the container for all of the media clips that make up the composition and is responsible for rendering the final composition, loading and saving compositions to disk, and providing a preview stream of the composition so that the user can view it in the UI. To use MediaComposition in your app, include
the Windows.Media.Editing namespace as well as the Windows.Media.Core namespace that provides related
APIs that you will need.

using Windows.Media.Editing;
using Windows.Media.Core;
using Windows.Media.Playback;
using System.Threading.Tasks;

The MediaComposition object will be accessed from multiple points in your code, so typically you will declare a
member variable in which to store it.

private MediaComposition composition;

The constructor for MediaComposition takes no arguments.

composition = new MediaComposition();

Add media clips to a composition


Media compositions typically contain one or more video clips. You can use a FileOpenPicker to allow the user to
select a video file. Once the file has been selected, create a new MediaClip object to contain the video clip by
calling MediaClip.CreateFromFileAsync. Then you add the clip to the MediaComposition object's Clips list.
private async Task PickFileAndAddClip()
{
    var picker = new Windows.Storage.Pickers.FileOpenPicker();
    picker.SuggestedStartLocation = Windows.Storage.Pickers.PickerLocationId.VideosLibrary;
    picker.FileTypeFilter.Add(".mp4");
    Windows.Storage.StorageFile pickedFile = await picker.PickSingleFileAsync();
    if (pickedFile == null)
    {
        ShowErrorMessage("File picking cancelled");
        return;
    }

    // These files could be picked from a location that we won't have access to later
    var storageItemAccessList = Windows.Storage.AccessCache.StorageApplicationPermissions.FutureAccessList;
    storageItemAccessList.Add(pickedFile);

    var clip = await MediaClip.CreateFromFileAsync(pickedFile);
    composition.Clips.Add(clip);
}

Media clips appear in the MediaComposition in the same order as they appear in the Clips list.
A MediaClip can only be included in a composition once. Attempting to add a MediaClip that is already being used by the composition will result in an error. To reuse a video clip multiple times in a composition, call Clone to create new MediaClip objects which can then be added to the composition.
Universal Windows apps do not have permission to access the entire file system. The FutureAccessList property of the StorageApplicationPermissions class allows your app to store a record of a file that has been selected by the user so that you can retain permissions to access the file. The FutureAccessList has a maximum of 1000 entries, so your app needs to manage the list to make sure it does not become full. This is especially important if you plan to support loading and modifying previously created compositions.
A MediaComposition supports video clips in MP4 format.
If a video file contains multiple embedded audio tracks, you can select which audio track is used in the composition by setting the SelectedEmbeddedAudioTrackIndex property.
Create a MediaClip with a single color filling the entire frame by calling CreateFromColor and specifying a color and a duration for the clip.
Create a MediaClip from an image file by calling CreateFromImageFileAsync and specifying an image file and a duration for the clip.
Create a MediaClip from an IDirect3DSurface by calling CreateFromSurface and specifying a surface and a duration for the clip.
Several of these factory methods are sketched in the snippet below.
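A minimal sketch of these alternatives, reusing the composition and clip variables from the earlier examples and assuming a hypothetical pickedImageFile selected by the user:

// Reuse an existing clip by adding a clone rather than the same instance twice.
composition.Clips.Add(clip.Clone());

// A two-second clip of solid black.
var colorClip = MediaClip.CreateFromColor(Windows.UI.Colors.Black, TimeSpan.FromSeconds(2));
composition.Clips.Add(colorClip);

// A five-second clip created from a still image file.
var imageClip = await MediaClip.CreateFromImageFileAsync(pickedImageFile, TimeSpan.FromSeconds(5));
composition.Clips.Add(imageClip);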

Preview the composition in a MediaElement


To enable the user to view the media composition, add a MediaPlayerElement to the XAML file that defines
your UI.

<MediaPlayerElement x:Name="mediaPlayerElement" AutoPlay="False" Margin="5" HorizontalAlignment="Stretch"
                    AreTransportControlsEnabled="True" />

Declare a member variable of type MediaStreamSource.

private MediaStreamSource mediaStreamSource;


Call the MediaComposition object's GeneratePreviewMediaStreamSource method to create a
MediaStreamSource for the composition. Create a MediaSource object by calling the factory method
CreateFromMediaStreamSource and assign it to the Source property of the MediaPlayerElement. Now the
composition can be viewed in the UI.

public void UpdateMediaElementSource()
{
    mediaStreamSource = composition.GeneratePreviewMediaStreamSource(
        (int)mediaPlayerElement.ActualWidth,
        (int)mediaPlayerElement.ActualHeight);

    mediaPlayerElement.Source = MediaSource.CreateFromMediaStreamSource(mediaStreamSource);
}

The MediaComposition must contain at least one media clip before calling
GeneratePreviewMediaStreamSource, or the returned object will be null.
The MediaPlayerElement timeline is not automatically updated to reflect changes in the composition. It is recommended that you both call GeneratePreviewMediaStreamSource and set the MediaPlayerElement Source property every time you make a set of changes to the composition and want to update the UI.
It is recommended that you set the MediaStreamSource object and the Source property of the
MediaPlayerElement to null when the user navigates away from the page in order to release associated
resources.

protected override void OnNavigatedFrom(NavigationEventArgs e)
{
    mediaPlayerElement.Source = null;
    mediaStreamSource = null;
    base.OnNavigatedFrom(e);
}

Render the composition to a video file


To render a media composition to a flat video file so that it can be shared and viewed on other devices, you will
need to use APIs from the Windows.Media.Transcoding namespace. To update the UI on the progress of the
async operation, you will also need APIs from the Windows.UI.Core namespace.

using Windows.Media.Transcoding;
using Windows.UI.Core;

After allowing the user to select an output file with a FileSavePicker, render the composition to the selected file
by calling the MediaComposition object's RenderToFileAsync. The rest of the code in the following example
simply follows the pattern of handling an AsyncOperationWithProgress.
private async Task RenderCompositionToFile()
{
    var picker = new Windows.Storage.Pickers.FileSavePicker();
    picker.SuggestedStartLocation = Windows.Storage.Pickers.PickerLocationId.VideosLibrary;
    picker.FileTypeChoices.Add("MP4 files", new List<string>() { ".mp4" });
    picker.SuggestedFileName = "RenderedComposition.mp4";

    Windows.Storage.StorageFile file = await picker.PickSaveFileAsync();

    if (file != null)
    {
        // Call RenderToFileAsync
        var saveOperation = composition.RenderToFileAsync(file, MediaTrimmingPreference.Precise);

        saveOperation.Progress = new AsyncOperationProgressHandler<TranscodeFailureReason, double>(async (info, progress) =>
        {
            await this.Dispatcher.RunAsync(CoreDispatcherPriority.Normal, new DispatchedHandler(() =>
            {
                ShowErrorMessage(string.Format("Saving file... Progress: {0:F0}%", progress));
            }));
        });
        saveOperation.Completed = new AsyncOperationWithProgressCompletedHandler<TranscodeFailureReason, double>(async (info, status) =>
        {
            await this.Dispatcher.RunAsync(CoreDispatcherPriority.Normal, new DispatchedHandler(() =>
            {
                try
                {
                    var results = info.GetResults();
                    if (results != TranscodeFailureReason.None || status != AsyncStatus.Completed)
                    {
                        ShowErrorMessage("Saving was unsuccessful");
                    }
                    else
                    {
                        ShowErrorMessage("Trimmed clip saved to file");
                    }
                }
                finally
                {
                    // Update UI whether the operation succeeded or not
                }
            }));
        });
    }
    else
    {
        ShowErrorMessage("User cancelled the file selection");
    }
}

The MediaTrimmingPreference allows you to prioritize speed of the transcoding operation versus the
precision of trimming of adjacent media clips. Fast causes transcoding to be faster with lower-precision
trimming, Precise causes transcoding to be slower but with more precise trimming.

Trim a video clip


Trim the duration of a video clip in a composition by setting the MediaClip object's TrimTimeFromStart property, the TrimTimeFromEnd property, or both.
private void TrimClipBeforeCurrentPosition()
{
    var currentClip = composition.Clips.FirstOrDefault(
        mc => mc.StartTimeInComposition <= mediaPlayerElement.MediaPlayer.PlaybackSession.Position &&
              mc.EndTimeInComposition >= mediaPlayerElement.MediaPlayer.PlaybackSession.Position);

    TimeSpan positionFromStart = mediaPlayerElement.MediaPlayer.PlaybackSession.Position - currentClip.StartTimeInComposition;

    currentClip.TrimTimeFromStart = positionFromStart;
}

You can use any UI that you want to let the user specify the start and end trim values. The example above uses the Position property of the MediaPlaybackSession associated with the MediaPlayerElement to first determine which MediaClip is playing back at the current position in the composition by checking the StartTimeInComposition and EndTimeInComposition properties. Then the Position and StartTimeInComposition properties are used again to calculate the amount of time to trim from the beginning of the clip. The FirstOrDefault method is an extension method from the System.Linq namespace that simplifies the code for selecting items from a list.
The OriginalDuration property of the MediaClip object lets you know the duration of the media clip without any clipping applied. The TrimmedDuration property lets you know the duration of the media clip after trimming is applied. A short example of both follows.
Specifying a trimming value that is larger than the original duration of the clip does not throw an error. However, if a composition contains only a single clip and that clip is trimmed to zero length by specifying a large trimming value, a subsequent call to GeneratePreviewMediaStreamSource will return null, as if the composition has no clips.
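A minimal sketch, reusing the currentClip variable from the example above:

// Trim two seconds from the end of the clip, then inspect the durations.
currentClip.TrimTimeFromEnd = TimeSpan.FromSeconds(2);
TimeSpan original = currentClip.OriginalDuration;
TimeSpan trimmed = currentClip.TrimmedDuration;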

Add a background audio track to a composition


To add a background track to a composition, load an audio file and then create a BackgroundAudioTrack object
by calling the factory method BackgroundAudioTrack.CreateFromFileAsync. Then, add the
BackgroundAudioTrack to the composition's BackgroundAudioTracks property.

private async Task AddBackgroundAudioTrack()
{
    // Add background audio
    var picker = new Windows.Storage.Pickers.FileOpenPicker();
    picker.SuggestedStartLocation = Windows.Storage.Pickers.PickerLocationId.MusicLibrary;
    picker.FileTypeFilter.Add(".mp3");
    picker.FileTypeFilter.Add(".wav");
    picker.FileTypeFilter.Add(".flac");
    Windows.Storage.StorageFile audioFile = await picker.PickSingleFileAsync();
    if (audioFile == null)
    {
        ShowErrorMessage("File picking cancelled");
        return;
    }

    // These files could be picked from a location that we won't have access to later
    var storageItemAccessList = Windows.Storage.AccessCache.StorageApplicationPermissions.FutureAccessList;
    storageItemAccessList.Add(audioFile);

    var backgroundTrack = await BackgroundAudioTrack.CreateFromFileAsync(audioFile);

    composition.BackgroundAudioTracks.Add(backgroundTrack);
}

A MediaComposition supports background audio tracks in the following formats: MP3, WAV, and FLAC.
As with video files, you should use the StorageApplicationPermissions class to preserve access to files in the composition.
As with MediaClip, a BackgroundAudioTrack can only be included in a composition once. Attempting to add a BackgroundAudioTrack that is already being used by the composition will result in an error. To reuse an audio track multiple times in a composition, call Clone to create new BackgroundAudioTrack objects which can then be added to the composition.
By default, background audio tracks begin playing at the start of the composition. If multiple background tracks are present, all of the tracks will begin playing at the start of the composition. To cause a background audio track to begin playback at another time, set the Delay property to the desired time offset, as shown below.
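A minimal sketch, reusing the backgroundTrack variable from the example above:

// Start the background track five seconds into the composition.
backgroundTrack.Delay = TimeSpan.FromSeconds(5);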

Add an overlay to a composition


Overlays allow you to stack multiple layers of video on top of each other in a composition. A composition can
contain multiple overlay layers, each of which can include multiple overlays. Create a MediaOverlay object by
passing a MediaClip into its constructor. Set the position and opacity of the overlay, then create a new
MediaOverlayLayer and add the MediaOverlay to its Overlays list. Finally, add the MediaOverlayLayer to the
composition's OverlayLayers list.

private void AddOverlay(MediaClip overlayMediaClip, double scale, double left, double top, double opacity)
{
    Windows.Media.MediaProperties.VideoEncodingProperties encodingProperties =
        overlayMediaClip.GetVideoEncodingProperties();

    Rect overlayPosition;
    overlayPosition.Width = (double)encodingProperties.Width * scale;
    overlayPosition.Height = (double)encodingProperties.Height * scale;
    overlayPosition.X = left;
    overlayPosition.Y = top;

    MediaOverlay mediaOverlay = new MediaOverlay(overlayMediaClip);
    mediaOverlay.Position = overlayPosition;
    mediaOverlay.Opacity = opacity;

    MediaOverlayLayer mediaOverlayLayer = new MediaOverlayLayer();
    mediaOverlayLayer.Overlays.Add(mediaOverlay);

    composition.OverlayLayers.Add(mediaOverlayLayer);
}

Overlays within a layer are z-ordered based on their order in their containing layer's Overlays list. Higher indices within the list are rendered on top of lower indices. The same is true of overlay layers within a composition. A layer with a higher index in the composition's OverlayLayers list will be rendered on top of lower indices.
Because overlays are stacked on top of each other instead of being played sequentially, all overlays start playback at the beginning of the composition by default. To cause an overlay to begin playback at another time, set the Delay property to the desired time offset, as in the sketch below.
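A minimal sketch, reusing the mediaOverlay variable from the AddOverlay example:

// Start the overlay three seconds into the composition.
mediaOverlay.Delay = TimeSpan.FromSeconds(3);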

Add effects to a media clip


Each MediaClip in a composition has a list of audio and video effects to which multiple effects can be added. The
effects must implement IAudioEffectDefinition and IVideoEffectDefinition respectively. The following
example uses the current MediaPlayerElement position to choose the currently viewed MediaClip and then
creates a new instance of the VideoStabilizationEffectDefinition and appends it to the media clip's
VideoEffectDefinitions list.

private void AddVideoEffect()
{
    var currentClip = composition.Clips.FirstOrDefault(
        mc => mc.StartTimeInComposition <= mediaPlayerElement.MediaPlayer.PlaybackSession.Position &&
              mc.EndTimeInComposition >= mediaPlayerElement.MediaPlayer.PlaybackSession.Position);

    VideoStabilizationEffectDefinition videoEffect = new VideoStabilizationEffectDefinition();
    currentClip.VideoEffectDefinitions.Add(videoEffect);
}

Save a composition to a file


Media compositions can be serialized to a file to be modified at a later time. Pick an output file and then call the
MediaComposition method SaveAsync to save the composition.

private async Task SaveComposition()
{
    var picker = new Windows.Storage.Pickers.FileSavePicker();
    picker.SuggestedStartLocation = Windows.Storage.Pickers.PickerLocationId.VideosLibrary;
    picker.FileTypeChoices.Add("Composition files", new List<string>() { ".cmp" });
    picker.SuggestedFileName = "SavedComposition";

    Windows.Storage.StorageFile compositionFile = await picker.PickSaveFileAsync();

    if (compositionFile == null)
    {
        ShowErrorMessage("User cancelled the file selection");
    }
    else
    {
        var action = composition.SaveAsync(compositionFile);
        action.Completed = (info, status) =>
        {
            if (status != AsyncStatus.Completed)
            {
                ShowErrorMessage("Error saving composition");
            }
        };
    }
}

Load a composition from a file


Media compositions can be deserialized from a file to allow the user to view and modify the composition. Pick a
composition file and then call the MediaComposition method LoadAsync to load the composition.
private async Task OpenComposition()
{
    var picker = new Windows.Storage.Pickers.FileOpenPicker();
    picker.SuggestedStartLocation = Windows.Storage.Pickers.PickerLocationId.VideosLibrary;
    picker.FileTypeFilter.Add(".cmp");

    Windows.Storage.StorageFile compositionFile = await picker.PickSingleFileAsync();

    if (compositionFile == null)
    {
        ShowErrorMessage("File picking cancelled");
    }
    else
    {
        composition = await MediaComposition.LoadAsync(compositionFile);

        if (composition != null)
        {
            UpdateMediaElementSource();
        }
        else
        {
            ShowErrorMessage("Unable to open composition");
        }
    }
}

If a media file in the composition is not in a location that can be accessed by your app and is not in the
FutureAccessList property of the StorageApplicationPermissions class for your app, an error will be
thrown when loading the composition.
Audio device information properties

This article lists the device information properties related to audio devices. On Windows, each hardware device has
associated DeviceInformation properties providing detailed information about a device that you can use when
you need specific information about the device or when you are building a device selector. For general information
about enumerating devices on Windows, see Enumerate devices and Device information properties.

NAME (TYPE): DESCRIPTION

System.Devices.AudioDevice.Microphone.SensitivityInDbfs (Double): Specifies the microphone sensitivity in decibels relative to full scale (dBFS) units.

System.Devices.AudioDevice.Microphone.SignalToNoiseRatioInDb (Double): Specifies the microphone signal to noise ratio (SNR) measured in decibel (dB) units.

System.Devices.AudioDevice.SpeechProcessingSupported (Boolean): Indicates whether the audio device supports speech processing.

System.Devices.AudioDevice.RawProcessingSupported (Boolean): Indicates whether the audio device supports raw processing.

System.Devices.MicrophoneArray.Geometry (unsigned char[]): Geometry data for a microphone array.
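For example, the following minimal sketch (assuming the Windows.Devices.Enumeration and Windows.Media.Devices namespaces) requests one of these properties as an additional property while enumerating audio capture devices:

string[] additionalProperties = { "System.Devices.AudioDevice.Microphone.SensitivityInDbfs" };

var devices = await DeviceInformation.FindAllAsync(
    MediaDevice.GetAudioCaptureSelector(), additionalProperties);

foreach (var device in devices)
{
    // TryGetValue returns false if the driver does not report this property.
    object sensitivity;
    device.Properties.TryGetValue("System.Devices.AudioDevice.Microphone.SensitivityInDbfs", out sensitivity);
}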

Related topics
Enumerate devices
Device information properties
Build a device selector
Media playback
Create, edit, and save bitmap images

[ Updated for UWP apps on Windows 10. For Windows 8.x articles, see the archive ]
This article explains how to load and save image files using BitmapDecoder and BitmapEncoder and how to
use the SoftwareBitmap object to represent bitmap images.
The SoftwareBitmap class is a versatile representation of a bitmap that can be created from multiple sources, including image files, WriteableBitmap objects, Direct3D surfaces, and code. SoftwareBitmap allows you to easily convert between
different pixel formats and alpha modes, and allows low-level access to pixel data. Also, SoftwareBitmap is a
common interface used by multiple features of Windows, including:
CapturedFrame allows you to get frames captured by the camera as a SoftwareBitmap.
VideoFrame allows you to get a SoftwareBitmap representation of a VideoFrame.
FaceDetector allows you to detect faces in a SoftwareBitmap.
The sample code in this article uses APIs from the following namespaces.

using Windows.Storage;
using Windows.Storage.Pickers;
using Windows.Storage.Streams;
using Windows.Graphics.Imaging;
using Windows.UI.Xaml.Media.Imaging;

Create a SoftwareBitmap from an image file with BitmapDecoder


To create a SoftwareBitmap from a file, get an instance of StorageFile containing the image data. This example
uses a FileOpenPicker to allow the user to select an image file.

FileOpenPicker fileOpenPicker = new FileOpenPicker();
fileOpenPicker.SuggestedStartLocation = PickerLocationId.PicturesLibrary;
fileOpenPicker.FileTypeFilter.Add(".jpg");
fileOpenPicker.ViewMode = PickerViewMode.Thumbnail;

var inputFile = await fileOpenPicker.PickSingleFileAsync();

if (inputFile == null)
{
    // The user cancelled the picking operation
    return;
}

Call the OpenAsync method of the StorageFile object to get a random access stream containing the image
data. Call the static method BitmapDecoder.CreateAsync to get an instance of the BitmapDecoder class for
the specified stream. Call GetSoftwareBitmapAsync to get a SoftwareBitmap object containing the image.
SoftwareBitmap softwareBitmap;

using (IRandomAccessStream stream = await inputFile.OpenAsync(FileAccessMode.Read))
{
    // Create the decoder from the stream
    BitmapDecoder decoder = await BitmapDecoder.CreateAsync(stream);

    // Get the SoftwareBitmap representation of the file
    softwareBitmap = await decoder.GetSoftwareBitmapAsync();
}

Save a SoftwareBitmap to a file with BitmapEncoder


To save a SoftwareBitmap to a file, get an instance of StorageFile to which the image will be saved. This
example uses a FileSavePicker to allow the user to select an output file.

FileSavePicker fileSavePicker = new FileSavePicker();
fileSavePicker.SuggestedStartLocation = PickerLocationId.PicturesLibrary;
fileSavePicker.FileTypeChoices.Add("JPEG files", new List<string>() { ".jpg" });
fileSavePicker.SuggestedFileName = "image";

var outputFile = await fileSavePicker.PickSaveFileAsync();

if (outputFile == null)
{
    // The user cancelled the picking operation
    return;
}

Call the OpenAsync method of the StorageFile object to get a random access stream to which the image will
be written. Call the static method BitmapEncoder.CreateAsync to get an instance of the BitmapEncoder class
for the specified stream. The first parameter to CreateAsync is a GUID representing the codec that should be
used to encode the image. The BitmapEncoder class exposes a property containing the ID for each codec supported by the encoder, such as JpegEncoderId.
Use the SetSoftwareBitmap method to set the image that will be encoded. You can set values of the
BitmapTransform property to apply basic transforms to the image while it is being encoded. The
IsThumbnailGenerated property determines whether a thumbnail is generated by the encoder. Note that not
all file formats support thumbnails, so if you use this feature, you should catch the unsupported operation error
that will be thrown if thumbnails are not supported.
Call FlushAsync to cause the encoder to write the image data to the specified file.
private async void SaveSoftwareBitmapToFile(SoftwareBitmap softwareBitmap, StorageFile outputFile)
{
    using (IRandomAccessStream stream = await outputFile.OpenAsync(FileAccessMode.ReadWrite))
    {
        // Create an encoder with the desired format
        BitmapEncoder encoder = await BitmapEncoder.CreateAsync(BitmapEncoder.JpegEncoderId, stream);

        // Set the software bitmap
        encoder.SetSoftwareBitmap(softwareBitmap);

        // Set additional encoding parameters, if needed
        encoder.BitmapTransform.ScaledWidth = 320;
        encoder.BitmapTransform.ScaledHeight = 240;
        encoder.BitmapTransform.Rotation = Windows.Graphics.Imaging.BitmapRotation.Clockwise90Degrees;
        encoder.BitmapTransform.InterpolationMode = BitmapInterpolationMode.Fant;
        encoder.IsThumbnailGenerated = true;

        try
        {
            await encoder.FlushAsync();
        }
        catch (Exception err)
        {
            switch (err.HResult)
            {
                case unchecked((int)0x88982F81): // WINCODEC_ERR_UNSUPPORTEDOPERATION
                    // If the encoder does not support writing a thumbnail, then try again
                    // but disable thumbnail generation.
                    encoder.IsThumbnailGenerated = false;
                    break;
                default:
                    throw err;
            }
        }

        if (encoder.IsThumbnailGenerated == false)
        {
            await encoder.FlushAsync();
        }
    }
}

You can specify additional encoding options when you create the BitmapEncoder by creating a new
BitmapPropertySet object and populating it with one or more BitmapTypedValue objects representing the
encoder settings. For a list of supported encoder options, see BitmapEncoder options reference.

var propertySet = new Windows.Graphics.Imaging.BitmapPropertySet();
var qualityValue = new Windows.Graphics.Imaging.BitmapTypedValue(
    1.0, // Maximum quality
    Windows.Foundation.PropertyType.Single
);

propertySet.Add("ImageQuality", qualityValue);

await Windows.Graphics.Imaging.BitmapEncoder.CreateAsync(
    Windows.Graphics.Imaging.BitmapEncoder.JpegEncoderId,
    stream,
    propertySet
);

Use SoftwareBitmap with a XAML Image control


To display an image within a XAML page using the Image control, first define an Image control in your XAML
page.

<Image x:Name="imageControl"/>

Currently, the Image control only supports images that use BGRA8 encoding and pre-multiplied or no alpha
channel. Before attempting to display an image, test to make sure it has the correct format, and if not, use the
SoftwareBitmap static Convert method to convert the image to the supported format.
Create a new SoftwareBitmapSource object. Set the contents of the source object by calling SetBitmapAsync,
passing in a SoftwareBitmap. Then you can set the Source property of the Image control to the newly created
SoftwareBitmapSource.

if (softwareBitmap.BitmapPixelFormat != BitmapPixelFormat.Bgra8 ||
softwareBitmap.BitmapAlphaMode == BitmapAlphaMode.Straight)
{
softwareBitmap = SoftwareBitmap.Convert(softwareBitmap, BitmapPixelFormat.Bgra8, BitmapAlphaMode.Premultiplied);
}

var source = new SoftwareBitmapSource();
await source.SetBitmapAsync(softwareBitmap);

// Set the source of the Image control
imageControl.Source = source;

You can also use SoftwareBitmapSource to set a SoftwareBitmap as the ImageSource for an ImageBrush.
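A minimal sketch of that variation, assuming the XAML page defines an ImageBrush named imageBrush (for
example, as the Fill of a Rectangle):

// Hedged sketch: the same SoftwareBitmapSource type also works as an ImageBrush source.
// "imageBrush" is a hypothetical ImageBrush declared in XAML, not part of the article's sample.
var brushSource = new SoftwareBitmapSource();
await brushSource.SetBitmapAsync(softwareBitmap);
imageBrush.ImageSource = brushSource;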

Create a SoftwareBitmap from a WriteableBitmap


You can create a SoftwareBitmap from an existing WriteableBitmap by calling
SoftwareBitmap.CreateCopyFromBuffer and supplying the PixelBuffer property of the WriteableBitmap
to set the pixel data. The second argument allows you to request a pixel format for the newly created
SoftwareBitmap. You can use the PixelWidth and PixelHeight properties of the WriteableBitmap to
specify the dimensions of the new image.

SoftwareBitmap outputBitmap = SoftwareBitmap.CreateCopyFromBuffer(
    writeableBitmap.PixelBuffer,
    BitmapPixelFormat.Bgra8,
    writeableBitmap.PixelWidth,
    writeableBitmap.PixelHeight
);

Create or edit a SoftwareBitmap programmatically


So far this topic has addressed working with image files. You can also create a new SoftwareBitmap
programmatically in code and use the same technique to access and modify the SoftwareBitmap's pixel data.
SoftwareBitmap uses COM interop to expose the raw buffer containing the pixel data.
To use COM interop, you must include a reference to the System.Runtime.InteropServices namespace in your
project.

using System.Runtime.InteropServices;

Initialize the IMemoryBufferByteAccess COM interface by adding the following code within your namespace.
[ComImport]
[Guid("5B0D3235-4DBA-4D44-865E-8F1D0E4FD04D")]
[InterfaceType(ComInterfaceType.InterfaceIsIUnknown)]
unsafe interface IMemoryBufferByteAccess
{
void GetBuffer(out byte* buffer, out uint capacity);
}

Create a new SoftwareBitmap with the pixel format and size you want. Or, use an existing SoftwareBitmap for
which you want to edit the pixel data. Call SoftwareBitmap.LockBuffer to obtain an instance of the
BitmapBuffer class representing the pixel data buffer. Cast the BitmapBuffer to the
IMemoryBufferByteAccess COM interface and then call IMemoryBufferByteAccess.GetBuffer to populate a
byte array with data. Use the BitmapBuffer.GetPlaneDescription method to get a BitmapPlaneDescription
object that will help you calculate the offset into the buffer for each pixel.

softwareBitmap = new SoftwareBitmap(BitmapPixelFormat.Bgra8, 100, 100);

using (BitmapBuffer buffer = softwareBitmap.LockBuffer(BitmapBufferAccessMode.Write))
{
    using (var reference = buffer.CreateReference())
    {
        byte* dataInBytes;
        uint capacity;
        ((IMemoryBufferByteAccess)reference).GetBuffer(out dataInBytes, out capacity);

        // Fill-in the BGRA plane
        BitmapPlaneDescription bufferLayout = buffer.GetPlaneDescription(0);
        for (int i = 0; i < bufferLayout.Height; i++)
        {
            for (int j = 0; j < bufferLayout.Width; j++)
            {
                byte value = (byte)((float)j / bufferLayout.Width * 255);
                dataInBytes[bufferLayout.StartIndex + bufferLayout.Stride * i + 4 * j + 0] = value;
                dataInBytes[bufferLayout.StartIndex + bufferLayout.Stride * i + 4 * j + 1] = value;
                dataInBytes[bufferLayout.StartIndex + bufferLayout.Stride * i + 4 * j + 2] = value;
                dataInBytes[bufferLayout.StartIndex + bufferLayout.Stride * i + 4 * j + 3] = (byte)255;
            }
        }
    }
}

Because this method accesses the raw buffer underlying the Windows Runtime types, it must be declared using
the unsafe keyword. You must also configure your project in Microsoft Visual Studio to allow the compilation of
unsafe code by opening the project's Properties page, clicking the Build property page, and selecting the Allow
Unsafe Code checkbox.

Create a SoftwareBitmap from a Direct3D surface


To create a SoftwareBitmap object from a Direct3D surface, you must include the
Windows.Graphics.DirectX.Direct3D11 namespace in your project.

using Windows.Graphics.DirectX.Direct3D11;

Call CreateCopyFromSurfaceAsync to create a new SoftwareBitmap from the surface. As the name indicates,
the new SoftwareBitmap has a separate copy of the image data. Modifications to the SoftwareBitmap will not
have any effect on the Direct3D surface.
private async void CreateSoftwareBitmapFromSurface(IDirect3DSurface surface)
{
    softwareBitmap = await SoftwareBitmap.CreateCopyFromSurfaceAsync(surface);
}

Convert a SoftwareBitmap to a different pixel format


The SoftwareBitmap class provides the static Convert method, which allows you to easily create a new
SoftwareBitmap that uses the pixel format and alpha mode you specify from an existing SoftwareBitmap.
Note that the newly created bitmap has a separate copy of the image data. Modifications to the new bitmap will
not affect the source bitmap.

SoftwareBitmap bitmapBGRA8 = SoftwareBitmap.Convert(softwareBitmap, BitmapPixelFormat.Bgra8, BitmapAlphaMode.Premultiplied);

Transcode an image file


You can transcode an image file directly from a BitmapDecoder to a BitmapEncoder. Create an
IRandomAccessStream from the file to be transcoded. Create a new BitmapDecoder from the input stream.
Create a new InMemoryRandomAccessStream for the encoder to write to and call
BitmapEncoder.CreateForTranscodingAsync, passing in the in-memory stream and the decoder object. Set
the encoding properties you want. Any properties in the input image file that you do not specifically set on the
encoder will be written to the output file unchanged. Call FlushAsync to cause the encoder to encode to the in-
memory stream. Finally, seek the file stream and the in-memory stream to the beginning and call CopyAsync to
write the in-memory stream out to the file stream.

private async void TranscodeImageFile(StorageFile imageFile)
{
    using (IRandomAccessStream fileStream = await imageFile.OpenAsync(FileAccessMode.ReadWrite))
    {
        BitmapDecoder decoder = await BitmapDecoder.CreateAsync(fileStream);

        var memStream = new Windows.Storage.Streams.InMemoryRandomAccessStream();

        BitmapEncoder encoder = await BitmapEncoder.CreateForTranscodingAsync(memStream, decoder);

        encoder.BitmapTransform.ScaledWidth = 320;
        encoder.BitmapTransform.ScaledHeight = 240;

        await encoder.FlushAsync();

        memStream.Seek(0);
        fileStream.Seek(0);
        fileStream.Size = 0;
        await RandomAccessStream.CopyAsync(memStream, fileStream);

        memStream.Dispose();
    }
}

Related topics
BitmapEncoder options reference
Image Metadata
BitmapEncoder options reference
3/6/2017 1 min to read

[ Updated for UWP apps on Windows 10. For Windows 8.x articles, see the archive ]
This article lists the encoding options that can be used with BitmapEncoder. An encoding option is defined by its
name, which is a string, and a value in a particular data type (Windows.Foundation.PropertyType). For
information about working with images, see Create, edit, and save bitmap images.

NAME                     PROPERTYTYPE  USAGE NOTES                                  VALID FORMATS

ImageQuality             single        Valid values from 0 to 1.0. Higher values    JPEG, JPEG-XR
                                       indicate higher quality.

CompressionQuality       single        Valid values from 0 to 1.0. Higher values    TIFF
                                       indicate a more efficient and slower
                                       compression scheme.

Lossless                 boolean       If this is set to true, the ImageQuality     JPEG-XR
                                       option is ignored.

InterlaceOption          boolean       Whether to interlace the image.              PNG

FilterOption             uint8         Use the PngFilterMode enumeration.           PNG

TiffCompressionMethod    uint8         Use the TiffCompressionMode enumeration.     TIFF

Luminance                uint32Array   An array of 64 elements containing           JPEG
                                       luminance quantization constants.

Chrominance              uint32Array   An array of 64 elements containing           JPEG
                                       chrominance quantization constants.

JpegYCrCbSubsampling     uint8         Use the JpegSubsamplingMode enumeration.     JPEG

SuppressApp0             boolean       Whether to suppress the creation of an       JPEG
                                       App0 metadata block.

EnableV5Header32bppBGRA  boolean       Whether to encode to a version 5 BMP,        BMP
                                       which supports alpha.
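For example, a hedged sketch of passing one of these options when creating an encoder; stream here is
assumed to be an IRandomAccessStream opened for writing, as in Create, edit, and save bitmap images:

// Request an interlaced PNG by supplying the InterlaceOption encoding option
var pngProperties = new Windows.Graphics.Imaging.BitmapPropertySet();
var interlaceValue = new Windows.Graphics.Imaging.BitmapTypedValue(
    true,
    Windows.Foundation.PropertyType.Boolean
    );

pngProperties.Add("InterlaceOption", interlaceValue);

Windows.Graphics.Imaging.BitmapEncoder pngEncoder = await Windows.Graphics.Imaging.BitmapEncoder.CreateAsync(
    Windows.Graphics.Imaging.BitmapEncoder.PngEncoderId,
    stream,
    pngProperties);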
Related topics
Create, edit, and save bitmap images
Supported codecs
Image Metadata
3/6/2017 5 min to read

[ Updated for UWP apps on Windows 10. For Windows 8.x articles, see the archive ]
This article shows how to read and write image metadata properties and how to geotag files using the
GeotagHelper utility class.

Image properties
The StorageFile.Properties property returns a StorageItemContentProperties object that provides access to
content-related information about the file. Get the image-specific properties by calling
GetImagePropertiesAsync. The returned ImageProperties object exposes members that contain basic image
metadata fields, like the title of the image and the capture date.

private async void GetImageProperties(StorageFile imageFile)
{
    ImageProperties props = await imageFile.Properties.GetImagePropertiesAsync();

    string title = props.Title;
    if (title == null)
    {
        // The file format does not support the Title property, or the image does not contain it
    }

    DateTimeOffset date = props.DateTaken;
    if (date == default(DateTimeOffset))
    {
        // DateTaken is a value type, so a default value means the format does not support
        // the property or the image does not contain it
    }
}

To access a larger set of file metadata, use the Windows Property System, a set of file metadata properties that can
be retrieved with a unique string identifier. Create a list of strings and add the identifier for each property you want
to retrieve. The ImageProperties.RetrievePropertiesAsync method takes this list of strings and returns a
dictionary of key/value pairs where the key is the property identifier and the value is the property value.

ImageProperties props = await imageFile.Properties.GetImagePropertiesAsync();

var requests = new System.Collections.Generic.List<string>();
requests.Add("System.Photo.Orientation");
requests.Add("System.Photo.Aperture");

IDictionary<string, object> retrievedProps = await props.RetrievePropertiesAsync(requests);

ushort orientation;
if (retrievedProps.ContainsKey("System.Photo.Orientation"))
{
    orientation = (ushort)retrievedProps["System.Photo.Orientation"];
}

double aperture;
if (retrievedProps.ContainsKey("System.Photo.Aperture"))
{
    aperture = (double)retrievedProps["System.Photo.Aperture"];
}

For a complete list of Windows Properties, including the identifiers and type for each property, see Windows
Properties.
Some properties are only supported for certain file containers and image codecs. For a listing of the image
metadata supported for each image type, see Photo Metadata Policies.
Because properties that are unsupported may return a null value when retrieved, always check for null
before using a returned metadata value.

Geotag helper
GeotagHelper is a utility class that makes it easy to tag images with geographic data using the
Windows.Devices.Geolocation APIs directly, without having to manually parse or construct the metadata
format.
If you already have a Geopoint object representing the location you want to tag in the image, either from a
previous use of the geolocation APIs or some other source, you can set the geotag data by calling
GeotagHelper.SetGeotagAsync and passing in a StorageFile and the Geopoint.

var point = new Geopoint(
    new BasicGeoposition
    {
        Latitude = 48.8567,
        Longitude = 2.3508,
    });

await GeotagHelper.SetGeotagAsync(imageFile, point);

To set the geotag data using the device's current location, create a new Geolocator object and call
GeotagHelper.SetGeotagFromGeolocatorAsync passing in the Geolocator and the file to be tagged.

var locator = new Geolocator();

// Shows the user consent UI if needed
var accessStatus = await Geolocator.RequestAccessAsync();
if (accessStatus == GeolocationAccessStatus.Allowed)
{
    await GeotagHelper.SetGeotagFromGeolocatorAsync(imageFile, locator);
}

You must include the location device capability in your app manifest in order to use the
SetGeotagFromGeolocatorAsync API.
You must call RequestAccessAsync before calling SetGeotagFromGeolocatorAsync to ensure the user
has granted your app permission to use their location.
For more information on the geolocation APIs, see Maps and location.
To get a Geopoint representing the geotagged location of an image file, call GetGeotagAsync.

Geopoint geoPoint = await GeotagHelper.GetGeotagAsync(imageFile);

Decode and encode image metadata


The most advanced way of working with image data is to read and write the properties on the stream level using a
BitmapDecoder or a BitmapEncoder. For these operations you can use Windows Properties to specify the data
you are reading or writing, but you can also use the metadata query language provided by the Windows Imaging
Component (WIC) to specify the path to a requested property.
Reading image metadata using this technique requires you to have a BitmapDecoder that was created with the
source image file stream. For information on how to do this, see Imaging.
Once you have the decoder, create a list of strings and add a new entry for each metadata property you want to
retrieve, using either the Windows Property identifier string or a WIC metadata query. Call the
BitmapPropertiesView.GetPropertiesAsync method on the decoder's BitmapProperties member to request
the specified properties. The properties are returned in a dictionary of key/value pairs containing the property
name or path and the property value.

private async void ReadImageMetadata(BitmapDecoder bitmapDecoder)
{
    var requests = new System.Collections.Generic.List<string>();
    requests.Add("System.Photo.Orientation"); // Windows property key for EXIF orientation
    requests.Add("/xmp/dc:creator");          // WIC metadata query for Dublin Core creator

    try
    {
        var retrievedProps = await bitmapDecoder.BitmapProperties.GetPropertiesAsync(requests);

        ushort orientation;
        if (retrievedProps.ContainsKey("System.Photo.Orientation"))
        {
            orientation = (ushort)retrievedProps["System.Photo.Orientation"].Value;
        }

        string creator;
        if (retrievedProps.ContainsKey("/xmp/dc:creator"))
        {
            creator = (string)retrievedProps["/xmp/dc:creator"].Value;
        }
    }
    catch (Exception err)
    {
        switch (err.HResult)
        {
            case unchecked((int)0x88982F41): // WINCODEC_ERR_PROPERTYNOTSUPPORTED
                // The file format does not support the requested metadata.
                break;
            case unchecked((int)0x88982F81): // WINCODEC_ERR_UNSUPPORTEDOPERATION
                // The file format does not support any metadata.
                break;
            default:
                throw; // rethrow without resetting the stack trace
        }
    }
}

For information on the WIC metadata query language and the properties supported, see WIC image format
native metadata queries.
Many metadata properties are only supported by a subset of image types. GetPropertiesAsync will fail
with the error code 0x88982F41 if one of the requested properties is not supported by the image associated
with the decoder and 0x88982F81 if the image does not support metadata at all. The constants associated
with these error codes are WINCODEC_ERR_PROPERTYNOTSUPPORTED and
WINCODEC_ERR_UNSUPPORTEDOPERATION and are defined in the winerror.h header file.
Because an image may or may not contain a value for a particular property, use the IDictionary.ContainsKey
to verify that a property is present in the results before attempting to access it.
Writing image metadata to the stream requires a BitmapEncoder associated with the image output file.
Create a BitmapPropertySet object to contain the property values you want to set. Create a BitmapTypedValue
object to represent each property value; it pairs the value, typed as object, with a member of the PropertyType
enumeration that defines the type of the value. Add the BitmapTypedValue to the BitmapPropertySet and then
call BitmapProperties.SetPropertiesAsync to cause the encoder to write the properties to the stream.

private async void WriteImageMetadata(BitmapEncoder bitmapEncoder)
{
    var propertySet = new Windows.Graphics.Imaging.BitmapPropertySet();
    var orientationValue = new Windows.Graphics.Imaging.BitmapTypedValue(
        1, // Defined as EXIF orientation = "normal"
        Windows.Foundation.PropertyType.UInt16
        );

    propertySet.Add("System.Photo.Orientation", orientationValue);

    try
    {
        await bitmapEncoder.BitmapProperties.SetPropertiesAsync(propertySet);
    }
    catch (Exception err)
    {
        switch (err.HResult)
        {
            case unchecked((int)0x88982F41): // WINCODEC_ERR_PROPERTYNOTSUPPORTED
                // The file format does not support this property.
                break;
            default:
                throw; // rethrow without resetting the stack trace
        }
    }
}

For details on which properties are supported for which image file types, see Windows Properties, Photo
Metadata Policies, and WIC image format native metadata queries.
SetPropertiesAsync will fail with the error code 0x88982F41 if one of the requested properties is not
supported by the image associated with the encoder.

Related topics
Imaging
Transcode media files
3/6/2017 3 min to read

[ Updated for UWP apps on Windows 10. For Windows 8.x articles, see the archive ]
You can use the Windows.Media.Transcoding APIs to transcode video files from one format to another.
Transcoding is the conversion of a digital media file, such as a video or audio file, from one format to another. This
is usually done by decoding and then re-encoding the file. For example, you might convert a Windows Media file to
MP4 so that it can be played on a portable device that supports MP4 format. Or, you might convert a high-
definition video file to a lower resolution. In that case, the re-encoded file might use the same codec as the original
file, but it would have a different encoding profile.

Set up your project for transcoding


In addition to the namespaces referenced by the default project template, you will need to reference these
namespaces in order to transcode media files using the code in this article.

using Windows.Storage;
using Windows.Media.MediaProperties;
using Windows.Media.Transcoding;

Select source and destination files


The way that your app determines the source and destination files for transcoding depends on your
implementation. This example uses a FileOpenPicker and a FileSavePicker to allow the user to pick a source and
a destination file.

var openPicker = new Windows.Storage.Pickers.FileOpenPicker();

openPicker.SuggestedStartLocation = Windows.Storage.Pickers.PickerLocationId.VideosLibrary;
openPicker.FileTypeFilter.Add(".wmv");
openPicker.FileTypeFilter.Add(".mp4");

StorageFile source = await openPicker.PickSingleFileAsync();

var savePicker = new Windows.Storage.Pickers.FileSavePicker();

savePicker.SuggestedStartLocation =
Windows.Storage.Pickers.PickerLocationId.VideosLibrary;

savePicker.DefaultFileExtension = ".mp4";
savePicker.SuggestedFileName = "New Video";

savePicker.FileTypeChoices.Add("MPEG4", new string[] { ".mp4" });

StorageFile destination = await savePicker.PickSaveFileAsync();

Create a media encoding profile


The encoding profile contains the settings that determine how the destination file will be encoded. This is where
you have the greatest number of options when you transcode a file.
The MediaEncodingProfile class provides static methods for creating predefined encoding profiles:
WAV audio
AAC audio (M4A)
MP3 audio
Windows Media Audio (WMA)
AVI
MP4 video (H.264 video plus AAC audio)
Windows Media Video (WMV)
The first four profiles in this list contain audio only. The other three contain video and audio.
The following code creates a profile for MP4 video.

MediaEncodingProfile profile =
MediaEncodingProfile.CreateMp4(VideoEncodingQuality.HD720p);

The static CreateMp4 method creates an MP4 encoding profile. The parameter for this method gives the target
resolution for the video. In this case, VideoEncodingQuality.HD720p means 1280 x 720 pixels at 30 frames per
second. ("720p" stands for 720 progressive scan lines per frame.) The other methods for creating predefined
profiles all follow this pattern.
Alternatively, you can create a profile that matches an existing media file by using the
MediaEncodingProfile.CreateFromFileAsync method. Or, if you know the exact encoding settings that you
want, you can create a new MediaEncodingProfile object and fill in the profile details.
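For instance, a hedged sketch of starting from a predefined profile and overriding individual settings; the
specific width, height, and bitrates here are illustrative values, not recommendations:

// Start from a predefined MP4 profile, then override individual video and audio properties
MediaEncodingProfile customProfile = MediaEncodingProfile.CreateMp4(VideoEncodingQuality.HD720p);
customProfile.Video.Width = 854;
customProfile.Video.Height = 480;
customProfile.Video.Bitrate = 1500000; // 1.5 Mbps video
customProfile.Audio.Bitrate = 128000;  // 128 kbps audio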

Transcode the file


To transcode the file, create a new MediaTranscoder object and call the
MediaTranscoder.PrepareFileTranscodeAsync method, passing in the source file, the destination file, and the
encoding profile. Then call the TranscodeAsync method on the PrepareTranscodeResult object returned by
PrepareFileTranscodeAsync.
MediaTranscoder transcoder = new MediaTranscoder();

PrepareTranscodeResult prepareOp = await
    transcoder.PrepareFileTranscodeAsync(source, destination, profile);

if (prepareOp.CanTranscode)
{
    var transcodeOp = prepareOp.TranscodeAsync();

    transcodeOp.Progress +=
        new AsyncActionProgressHandler<double>(TranscodeProgress);
    transcodeOp.Completed +=
        new AsyncActionWithProgressCompletedHandler<double>(TranscodeComplete);
}
else
{
    switch (prepareOp.FailureReason)
    {
        case TranscodeFailureReason.CodecNotFound:
            System.Diagnostics.Debug.WriteLine("Codec not found.");
            break;
        case TranscodeFailureReason.InvalidProfile:
            System.Diagnostics.Debug.WriteLine("Invalid profile.");
            break;
        default:
            System.Diagnostics.Debug.WriteLine("Unknown failure.");
            break;
    }
}

Respond to transcoding progress


You can register event handlers to respond as the progress of the asynchronous TranscodeAsync operation
changes. These events are part of the async programming framework for Universal Windows Platform (UWP)
apps and are not specific to the transcoding API.

void TranscodeProgress(IAsyncActionWithProgress<double> asyncInfo, double percent)
{
    // Display or handle progress info.
}

void TranscodeComplete(IAsyncActionWithProgress<double> asyncInfo, AsyncStatus status)
{
    asyncInfo.GetResults();
    if (asyncInfo.Status == AsyncStatus.Completed)
    {
        // Display or handle complete info.
    }
    else if (asyncInfo.Status == AsyncStatus.Canceled)
    {
        // Display or handle cancel info.
    }
    else
    {
        // Display or handle error info.
    }
}
Process media files in the background
3/6/2017 8 min to read

[ Updated for UWP apps on Windows 10. For Windows 8.x articles, see the archive ]
This article shows you how to use the MediaProcessingTrigger and a background task to process media files in
the background.
The example app described in this article allows the user to select an input media file to transcode and specify an
output file for the transcoding result. Then, a background task is launched to perform the transcoding operation.
The MediaProcessingTrigger is intended to support many different media processing scenarios besides
transcoding, including rendering media compositions to disk and uploading processed media files after processing
is complete.
For more detailed information on the different Universal Windows app features utilized in this sample, see:
Transcode media files
Launching, resuming, and background tasks
Tiles, badges, and notifications

Create a media processing background task


To add a background task to your existing solution in Microsoft Visual Studio:
1. From the File menu, select Add and then New Project....
2. Select the project type Windows Runtime Component (Universal Windows).
3. Enter a name for your new component project. This example uses the project name
MediaProcessingBackgroundTask.
4. Click OK.
In Solution Explorer, right-click the icon for the "Class1.cs" file that is created by default and select Rename.
Rename the file to "MediaProcessingTask.cs". When Visual Studio asks if you want to rename all of the references
to this class, click Yes.
In the renamed class file, add the following using directives to include these namespaces in your project.

using Windows.ApplicationModel.Background;
using Windows.Storage;
using Windows.UI.Notifications;
using Windows.Data.Xml.Dom;
using Windows.Media.MediaProperties;
using Windows.Media.Transcoding;
using System.Threading;
using System.Diagnostics;

Update your class declaration to make your class inherit from IBackgroundTask.

public sealed class MediaProcessingTask : IBackgroundTask
{

Add the following member variables to your class:


An IBackgroundTaskInstance that will be used to update the foreground app with the progress of the
background task.
A BackgroundTaskDeferral that keeps the system from shutting down your background task while media
transcoding is being performed asynchronously.
A CancellationTokenSource object that can be used to cancel the asynchronous transcoding operation.
The MediaTranscoder object that will be used to transcode media files.

IBackgroundTaskInstance backgroundTaskInstance;
BackgroundTaskDeferral deferral;
CancellationTokenSource cancelTokenSource = new CancellationTokenSource();
MediaTranscoder transcoder;

The system calls the Run method of a background task when the task is launched. Assign the
IBackgroundTaskInstance object passed into the method to the corresponding member variable. Register a
handler for the Canceled event, which will be raised if the system needs to shut down the background task.
Then, set the Progress property to zero.
Next, call the background task object's GetDeferral method to obtain a deferral. This tells the system not to shut
down your task because you are performing asynchronous operations.
Next, call the helper method TranscodeFileAsync, which is defined in the next section. If that completes
successfully, a helper method is called to launch a toast notification to alert the user that transcoding is complete.
At the end of the Run method, call Complete on the deferral object to let the system know that your background
task is complete and can be terminated.

public async void Run(IBackgroundTaskInstance taskInstance)
{
    Debug.WriteLine("In background task Run method");

    backgroundTaskInstance = taskInstance;
    taskInstance.Canceled += new BackgroundTaskCanceledEventHandler(OnCanceled);
    taskInstance.Progress = 0;

    deferral = taskInstance.GetDeferral();
    Debug.WriteLine("Background " + taskInstance.Task.Name + " is called @ " + (DateTime.Now).ToString());

    try
    {
        await TranscodeFileAsync();
        ApplicationData.Current.LocalSettings.Values["TranscodingStatus"] = "Completed Successfully";
        SendToastNotification("File transcoding complete.");
    }
    catch (Exception e)
    {
        Debug.WriteLine("Exception type: {0}", e.ToString());
        ApplicationData.Current.LocalSettings.Values["TranscodingStatus"] = "Error occurred: " + e.ToString();
    }

    deferral.Complete();
}

In the TranscodeFileAsync helper method, the file names for the input and output files for the transcoding
operations are retrieved from the LocalSettings for your app. These values will be set by your foreground app.
Create a StorageFile object for the input and output files and then create an encoding profile to use for
transcoding.
Call PrepareFileTranscodeAsync, passing in the input file, output file, and encoding profile. The
PrepareTranscodeResult object returned from this call lets you know if transcoding can be performed. If the
CanTranscode property is true, call TranscodeAsync to perform the transcoding operation.
The AsTask method enables you to track the progress of the asynchronous operation or cancel it. Create a new
Progress object, specifying the units of progress you desire and the name of the method that will be called to
notify you of the current progress of the task. Pass the Progress object into the AsTask method along with the
cancellation token that allows you to cancel the task.

private async Task TranscodeFileAsync()
{
    transcoder = new MediaTranscoder();

    try
    {
        var settings = ApplicationData.Current.LocalSettings;
        settings.Values["TranscodingStatus"] = "Started";

        var inputFileName = ApplicationData.Current.LocalSettings.Values["InputFileName"] as string;
        var outputFileName = ApplicationData.Current.LocalSettings.Values["OutputFileName"] as string;

        if (inputFileName == null || outputFileName == null)
        {
            return;
        }

        // Retrieve the transcoding information
        var inputFile = await Windows.Storage.StorageFile.GetFileFromPathAsync(inputFileName);
        var outputFile = await Windows.Storage.StorageFile.GetFileFromPathAsync(outputFileName);

        // Create a video encoding profile
        MediaEncodingProfile encodingProfile = MediaEncodingProfile.CreateMp4(VideoEncodingQuality.HD720p);

        Debug.WriteLine("PrepareFileTranscodeAsync");
        settings.Values["TranscodingStatus"] = "Preparing to transcode ";
        PrepareTranscodeResult preparedTranscodeResult = await transcoder.PrepareFileTranscodeAsync(
            inputFile,
            outputFile,
            encodingProfile);

        if (preparedTranscodeResult.CanTranscode)
        {
            var startTime = TimeSpan.FromMilliseconds(DateTime.Now.Millisecond);
            Debug.WriteLine("Starting transcoding @" + startTime);

            var progress = new Progress<double>(TranscodeProgress);
            settings.Values["TranscodingStatus"] = "Transcoding ";
            settings.Values["ProcessingFileName"] = inputFileName;
            await preparedTranscodeResult.TranscodeAsync().AsTask(cancelTokenSource.Token, progress);
        }
        else
        {
            Debug.WriteLine("Source content could not be transcoded.");
            Debug.WriteLine("Transcode status: " + preparedTranscodeResult.FailureReason.ToString());
            var endTime = TimeSpan.FromMilliseconds(DateTime.Now.Millisecond);
            Debug.WriteLine("End time = " + endTime);
        }
    }
    catch (Exception e)
    {
        Debug.WriteLine("Exception type: {0}", e.ToString());
        throw;
    }
}
In TranscodeProgress, the method you passed to the Progress object in the previous step, set the Progress
property of the background task instance. This passes the progress to the foreground app, if it is running.

void TranscodeProgress(double percent)
{
    Debug.WriteLine("Transcoding progress: " + percent.ToString().Split('.')[0] + "%");
    backgroundTaskInstance.Progress = (uint)percent;
}

The SendToastNotification helper method creates a new toast notification by getting a template XML document
for a toast that only has text content. The text element of the toast XML is set and then a new ToastNotification
object is created from the XML document. Finally, the toast is shown to the user by calling ToastNotifier.Show.

private void SendToastNotification(string toastMessage)
{
    ToastTemplateType toastTemplate = ToastTemplateType.ToastText01;
    XmlDocument toastXml = ToastNotificationManager.GetTemplateContent(toastTemplate);

    // Supply text content for your notification
    XmlNodeList toastTextElements = toastXml.GetElementsByTagName("text");
    toastTextElements[0].AppendChild(toastXml.CreateTextNode(toastMessage));

    // Create the toast notification based on the XML content you've specified
    ToastNotification toast = new ToastNotification(toastXml);

    // Send your toast notification
    ToastNotificationManager.CreateToastNotifier().Show(toast);
}

In the handler for the Canceled event, which is called when the system cancels your background task, you can log
the error for telemetry purposes.

private void OnCanceled(IBackgroundTaskInstance sender, BackgroundTaskCancellationReason reason)
{
    Debug.WriteLine("Background " + sender.Task.Name + " Cancel Requested..." + reason.ToString());
}
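Because the Run method passed cancelTokenSource.Token into AsTask, the handler is also a natural place to
stop the in-flight transcode. A minimal sketch of that extension (not part of the original sample):

// Cancel the transcode operation that was started with
// AsTask(cancelTokenSource.Token, progress) so the awaited task in Run ends promptly.
cancelTokenSource.Cancel();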

Register and launch the background task


Before you can launch the background task from your foreground app, you must update your foreground app's
Package.appxmanifest file to let the system know that your app uses a background task.
1. In Solution Explorer, double-click the Package.appxmanifest file icon to open the manifest editor.
2. Select the Declarations tab.
3. From Available Declarations, select Background Tasks and click Add.
4. Under Supported Declarations make sure that the Background Tasks item is selected. Under Properties,
select the checkbox for Media processing.
5. In the Entry Point text box, specify the namespace and class name for your background task, separated by a
period. For this example, the entry is MediaProcessingBackgroundTask.MediaProcessingTask.
Next, you need to add a reference to your background task to your foreground app.
6. In Solution Explorer, under your foreground app project, right-click the References folder and select Add
Reference....
7. Expand the Projects node and select Solution.
8. Check the box next to your background task project and click OK.
The rest of the code in this example should be added to your foreground app. First, you will need to add the
following namespaces to your project.

using Windows.ApplicationModel.Background;
using Windows.Storage;

Next, add the following member variables that are needed to register the background task.

MediaProcessingTrigger mediaProcessingTrigger;
string backgroundTaskBuilderName = "TranscodingBackgroundTask";
BackgroundTaskRegistration taskRegistration;

The PickFilesToTranscode helper method uses a FileOpenPicker and a FileSavePicker to open the input and
output files for transcoding. The user may select files in a location that your app does not have access to. To make
sure your background task can open the files, add them to the FutureAccessList for your app.
Finally, set entries for the input and output file names in the LocalSettings for your app. The background task
retrieves the file names from this location.

private async void PickFilesToTranscode()
{
    var openPicker = new Windows.Storage.Pickers.FileOpenPicker();

    openPicker.SuggestedStartLocation = Windows.Storage.Pickers.PickerLocationId.VideosLibrary;
    openPicker.FileTypeFilter.Add(".wmv");
    openPicker.FileTypeFilter.Add(".mp4");

    StorageFile source = await openPicker.PickSingleFileAsync();

    var savePicker = new Windows.Storage.Pickers.FileSavePicker();

    savePicker.SuggestedStartLocation =
        Windows.Storage.Pickers.PickerLocationId.VideosLibrary;

    savePicker.DefaultFileExtension = ".mp4";
    savePicker.SuggestedFileName = "New Video";

    savePicker.FileTypeChoices.Add("MPEG4", new string[] { ".mp4" });

    StorageFile destination = await savePicker.PickSaveFileAsync();

    if (source == null || destination == null)
    {
        return;
    }

    var storageItemAccessList = Windows.Storage.AccessCache.StorageApplicationPermissions.FutureAccessList;
    storageItemAccessList.Add(source);
    storageItemAccessList.Add(destination);

    ApplicationData.Current.LocalSettings.Values["InputFileName"] = source.Path;
    ApplicationData.Current.LocalSettings.Values["OutputFileName"] = destination.Path;
}

To register the background task, create a new MediaProcessingTrigger and a new BackgroundTaskBuilder. Set
the name of the background task builder so that you can identify it later. Set the TaskEntryPoint to the same
namespace and class name string you used in the manifest file. Set the Trigger property to the
MediaProcessingTrigger instance.
Before registering the task, make sure you unregister any previously registered tasks by looping through the
AllTasks collection and calling Unregister on any tasks that have the name you specified in the
BackgroundTaskBuilder.Name property.
Register the background task by calling Register. Register handlers for the Completed and Progress events.

private void RegisterBackgroundTask()
{
    // Create a new MediaProcessingTrigger
    mediaProcessingTrigger = new MediaProcessingTrigger();

    var builder = new BackgroundTaskBuilder();

    builder.Name = backgroundTaskBuilderName;
    builder.TaskEntryPoint = "MediaProcessingBackgroundTask.MediaProcessingTask";
    builder.SetTrigger(mediaProcessingTrigger);

    // Unregister any previously registered tasks with the same name
    foreach (var cur in BackgroundTaskRegistration.AllTasks)
    {
        if (cur.Value.Name == backgroundTaskBuilderName)
        {
            cur.Value.Unregister(true);
        }
    }

    taskRegistration = builder.Register();
    taskRegistration.Progress += new BackgroundTaskProgressEventHandler(OnProgress);
    taskRegistration.Completed += new BackgroundTaskCompletedEventHandler(OnCompleted);

    return;
}

Launch the background task by calling the MediaProcessingTrigger object's RequestAsync method. The
MediaProcessingTriggerResult object returned by this method lets you know whether the background task was
started successfully, and if not, lets you know why the background task wasn't launched.
private async void LaunchBackgroundTask()
{
    var success = true;

    if (mediaProcessingTrigger != null)
    {
        MediaProcessingTriggerResult activationResult;
        activationResult = await mediaProcessingTrigger.RequestAsync();

        switch (activationResult)
        {
            case MediaProcessingTriggerResult.Allowed:
                // Task starting successfully
                break;

            case MediaProcessingTriggerResult.CurrentlyRunning:
            // Already triggered
            case MediaProcessingTriggerResult.DisabledByPolicy:
            // Disabled by system policy
            case MediaProcessingTriggerResult.UnknownError:
                // All other failures
                success = false;
                break;
        }

        if (!success)
        {
            // Unregister the media processing trigger background task
            taskRegistration.Unregister(true);
        }
    }
}

The OnProgress event handler is called when the background task updates the progress of the operation. You can
use this opportunity to update your UI with progress information.

private void OnProgress(IBackgroundTaskRegistration task, BackgroundTaskProgressEventArgs args)
{
    string progress = "Progress: " + args.Progress + "%";
    Debug.WriteLine(progress);
}

The OnCompleted event handler is called when the background task has finished running. This is another
opportunity to update your UI to give status information to the user.

private void OnCompleted(IBackgroundTaskRegistration task, BackgroundTaskCompletedEventArgs args)
{
    Debug.WriteLine(" background task complete");
}
Audio graphs
3/6/2017 20 min to read

[ Updated for UWP apps on Windows 10. For Windows 8.x articles, see the archive ]
This article shows how to use the APIs in the Windows.Media.Audio namespace to create audio graphs for audio
routing, mixing, and processing scenarios.
An audio graph is a set of interconnected audio nodes through which audio data flows.
Audio input nodes supply audio data to the graph from audio input devices, audio files, or from custom
code.
Audio output nodes are the destination for audio processed by the graph. Audio can be routed out of the
graph to audio output devices, audio files, or custom code.
Submix nodes take audio from one or more nodes and combine them into a single output that can be
routed to other nodes in the graph.
After all of the nodes have been created and the connections between them set up, you simply start the audio
graph and the audio data flows from the input nodes, through any submix nodes, to the output nodes. This model
makes scenarios such as recording from a device's microphone to an audio file, playing audio from a file to a
device's speaker, or mixing audio from multiple sources quick and easy to implement.
Additional scenarios are enabled with the addition of audio effects to the audio graph. Every node in an audio
graph can be populated with zero or more audio effects that perform audio processing on the audio passing
through the node. There are several built-in effects such as echo, equalizer, limiting, and reverb that can be
attached to an audio node with just a few lines of code. You can also create your own custom audio effects that
work exactly the same as the built-in effects.
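As a hedged illustration of how little code an effect takes (this snippet is not part of the sample and assumes
an audioGraph and deviceOutputNode created as shown later in this article):

// Attach a built-in reverb effect to an output node, then enable it
ReverbEffectDefinition reverb = new ReverbEffectDefinition(audioGraph);
reverb.WetDryMix = 50; // blend of processed and unprocessed signal

deviceOutputNode.EffectDefinitions.Add(reverb);
deviceOutputNode.EnableEffectsByDefinition(reverb);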

NOTE
The AudioGraph UWP sample implements the code discussed in this overview. You can download the sample to see the
code in context or to use as a starting point for your own app.

Choosing Windows Runtime AudioGraph or XAudio2


The Windows Runtime audio graph APIs offer functionality that can also be implemented by using the COM-based
XAudio2 APIs. The following are features of the Windows Runtime audio graph framework that differ from
XAudio2.
The Windows Runtime audio graph APIs:
Are significantly easier to use than XAudio2.
Can be used from C# in addition to being supported for C++.
Can use audio files, including compressed file formats, directly. XAudio2 only operates on audio buffers and
does not provide any file I/O capabilities.
Can use the low-latency audio pipeline in Windows 10.
Support automatic endpoint switching when default endpoint parameters are used. For example, if the user
switches from a device's speaker to a headset, the audio is automatically redirected to the new input.
AudioGraph class
The AudioGraph class is the parent of all nodes that make up the graph. Use this object to create instances of all
of the audio node types. Create an instance of the AudioGraph class by initializing an AudioGraphSettings
object containing configuration settings for the graph, and then calling AudioGraph.CreateAsync. The returned
CreateAudioGraphResult gives access to the created audio graph or provides an error value if audio graph
creation fails.

AudioGraph audioGraph;

private async Task InitAudioGraph()
{
    AudioGraphSettings settings = new AudioGraphSettings(Windows.Media.Render.AudioRenderCategory.Media);

    CreateAudioGraphResult result = await AudioGraph.CreateAsync(settings);
    if (result.Status != AudioGraphCreationStatus.Success)
    {
        ShowErrorMessage("AudioGraph creation error: " + result.Status.ToString());
        return;
    }

    audioGraph = result.Graph;
}

All audio node types are created by using the Create* methods of the AudioGraph class.
The AudioGraph.Start method causes the audio graph to start processing audio data. The AudioGraph.Stop
method stops audio processing. Each node in the graph can be started and stopped independently while the
graph is running, but no nodes are active when the graph is stopped. ResetAllNodes causes all nodes in the
graph to discard any data currently in their audio buffers.
The QuantumStarted event occurs when the graph is starting the processing of a new quantum of audio
data. The QuantumProcessed event occurs when the processing of a quantum is completed.
The only AudioGraphSettings property that is required is AudioRenderCategory. Specifying this value
allows the system to optimize the audio pipeline for the specified category.
The quantum size of the audio graph determines the number of samples that are processed at one time. By
default, the quantum size is 10 ms at the default sample rate. If you specify a custom quantum size by
setting the DesiredSamplesPerQuantum property, you must also set the QuantumSizeSelectionMode
property to ClosestToDesired or the supplied value is ignored. If this value is used, the system will choose a
quantum size as close as possible to the one you specify. To determine the actual quantum size, check the
SamplesPerQuantum of the AudioGraph after it has been created. (A combined sketch of these settings
appears after the device-selection example below.)
If you only plan to use the audio graph with files and don't plan to output to an audio device, it is
recommended that you use the default quantum size by not setting the DesiredSamplesPerQuantum
property.
The DesiredRenderDeviceAudioProcessing property determines the amount of processing the primary
render device performs on the output of the audio graph. The Default setting allows the system to use the
default audio processing for the specified audio render category. This processing can significantly improve the
sound of audio on some devices, particularly mobile devices with small speakers. The Raw setting can improve
performance by minimizing the amount of signal processing performed, but can result in inferior sound quality
on some devices.
If the QuantumSizeSelectionMode is set to LowestLatency, the audio graph will automatically use Raw for
DesiredRenderDeviceAudioProcessing.
The EncodingProperties determines the audio format used by the graph. Only 32-bit float formats are
supported.
The PrimaryRenderDevice sets the primary render device for the audio graph. If you don't set this, the default
system device is used. The primary render device is used to calculate the quantum sizes for other nodes in the
graph. If there are no audio render devices present on the system, audio graph creation will fail.
You can let the audio graph use the default audio render device or use the
Windows.Devices.Enumeration.DeviceInformation class to get a list of the system's available audio render
devices by calling FindAllAsync and passing in the audio render device selector returned by
Windows.Media.Devices.MediaDevice.GetAudioRenderSelector. You can choose one of the returned
DeviceInformation objects programmatically or show UI to allow the user to select a device and then use it to set
the PrimaryRenderDevice property.

Windows.Devices.Enumeration.DeviceInformationCollection devices =
    await Windows.Devices.Enumeration.DeviceInformation.FindAllAsync(Windows.Media.Devices.MediaDevice.GetAudioRenderSelector());

// Show UI to allow the user to select a device
Windows.Devices.Enumeration.DeviceInformation selectedDevice = ShowMyDeviceSelectionUI(devices);

settings.PrimaryRenderDevice = selectedDevice;
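As noted above, a minimal sketch combining several of these settings; the 480-sample quantum is an
illustrative value, not a recommendation:

// Hedged sketch: request a custom quantum size and raw render processing,
// then check what the system actually chose.
AudioGraphSettings settings = new AudioGraphSettings(Windows.Media.Render.AudioRenderCategory.Media);
settings.DesiredSamplesPerQuantum = 480; // illustrative: 10 ms at 48 kHz
settings.QuantumSizeSelectionMode = QuantumSizeSelectionMode.ClosestToDesired;
settings.DesiredRenderDeviceAudioProcessing = Windows.Media.AudioProcessing.Raw;

CreateAudioGraphResult result = await AudioGraph.CreateAsync(settings);
if (result.Status == AudioGraphCreationStatus.Success)
{
    int actualQuantum = result.Graph.SamplesPerQuantum; // may differ from the requested value
}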

Device input node


A device input node feeds audio into the graph from an audio capture device connected to the system, such as a
microphone. Create a DeviceInputNode object that uses the system's default audio capture device by calling
CreateDeviceInputNodeAsync. Provide an AudioRenderCategory to allow the system to optimize the audio
pipeline for the specified category.

AudioDeviceInputNode deviceInputNode;

private async Task CreateDeviceInputNode()
{
    // Create a device input node
    CreateAudioDeviceInputNodeResult result = await
        audioGraph.CreateDeviceInputNodeAsync(Windows.Media.Capture.MediaCategory.Media);

    if (result.Status != AudioDeviceNodeCreationStatus.Success)
    {
        // Cannot create device input node
        ShowErrorMessage(result.Status.ToString());
        return;
    }

    deviceInputNode = result.DeviceInputNode;
}

If you want to specify a specific audio capture device for the device input node, you can use the
Windows.Devices.Enumeration.DeviceInformation class to get a list of the system's available audio capture
devices by calling FindAllAsync and passing in the audio capture device selector returned by
Windows.Media.Devices.MediaDevice.GetAudioCaptureSelector. You can choose one of the returned
DeviceInformation objects programmatically or show UI to allow the user to select a device and then pass it into
CreateDeviceInputNodeAsync.
Windows.Devices.Enumeration.DeviceInformationCollection devices =
    await Windows.Devices.Enumeration.DeviceInformation.FindAllAsync(Windows.Media.Devices.MediaDevice.GetAudioCaptureSelector());

// Show UI to allow the user to select a device
Windows.Devices.Enumeration.DeviceInformation selectedDevice = ShowMyDeviceSelectionUI(devices);

CreateAudioDeviceInputNodeResult result =
    await audioGraph.CreateDeviceInputNodeAsync(Windows.Media.Capture.MediaCategory.Media, audioGraph.EncodingProperties,
    selectedDevice);

Device output node


A device output node pushes audio from the graph to an audio render device, such as speakers or a headset.
Create a DeviceOutputNode by calling CreateDeviceOutputNodeAsync. The output node uses the
PrimaryRenderDevice of the audio graph.

AudioDeviceOutputNode deviceOutputNode;

private async Task CreateDeviceOutputNode()
{
    // Create a device output node
    CreateAudioDeviceOutputNodeResult result = await audioGraph.CreateDeviceOutputNodeAsync();

    if (result.Status != AudioDeviceNodeCreationStatus.Success)
    {
        // Cannot create device output node
        ShowErrorMessage(result.Status.ToString());
        return;
    }

    deviceOutputNode = result.DeviceOutputNode;
}

File input node


A file input node allows you to feed data from an audio file into the graph. Create an AudioFileInputNode by
calling CreateFileInputNodeAsync.

AudioFileInputNode fileInputNode;
private async Task CreateFileInputNode()
{
    if (audioGraph == null)
        return;

    FileOpenPicker filePicker = new FileOpenPicker();
    filePicker.SuggestedStartLocation = PickerLocationId.MusicLibrary;
    filePicker.FileTypeFilter.Add(".mp3");
    filePicker.FileTypeFilter.Add(".wav");
    filePicker.FileTypeFilter.Add(".wma");
    filePicker.FileTypeFilter.Add(".m4a");
    filePicker.ViewMode = PickerViewMode.Thumbnail;
    StorageFile file = await filePicker.PickSingleFileAsync();

    // File can be null if cancel is hit in the file picker
    if (file == null)
    {
        return;
    }

    CreateAudioFileInputNodeResult result = await audioGraph.CreateFileInputNodeAsync(file);

    if (result.Status != AudioFileNodeCreationStatus.Success)
    {
        ShowErrorMessage(result.Status.ToString());
        return;
    }

    fileInputNode = result.FileInputNode;
}

File input nodes support the following file formats: mp3, wav, wma, m4a.
Set the StartTime property to specify the time offset into the file where playback should begin. If this property
is null, the beginning of the file is used. Set the EndTime property to specify the time offset into the file where
playback should end. If this property is null, the end of the file is used. The start time value must be lower than
the end time value, and the end time value must be less than or equal to the duration of the audio file, which
can be determined by checking the Duration property value.
Seek to a position in the audio file by calling Seek and specifying the time offset into the file to which the
playback position should be moved. The specified value must be within the StartTime and EndTime range.
Get the current playback position of the node with the read-only Position property.
Enable looping of the audio file by setting the LoopCount property. When non-null, this value indicates the
number of times the file will be played after the initial playback. So, for example, setting LoopCount to 1 will
cause the file to be played 2 times in total, and setting it to 5 will cause the file to be played 6 times in total.
Setting LoopCount to null causes the file to be looped indefinitely. To stop looping, set the value to 0.
Adjust the speed at which the audio file is played back by setting the PlaybackSpeedFactor. A value of 1
indicates the original speed of the file, .5 is half-speed, and 2 is double speed.
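A minimal sketch exercising these properties, assuming fileInputNode was created as shown above:

// Play a 30-second window of the file twice, at half speed
fileInputNode.StartTime = TimeSpan.FromSeconds(10);
fileInputNode.EndTime = TimeSpan.FromSeconds(40);
fileInputNode.LoopCount = 1;             // one extra play after the initial playback
fileInputNode.PlaybackSpeedFactor = 0.5; // half speed

// Jump to a position inside the StartTime/EndTime range
fileInputNode.Seek(TimeSpan.FromSeconds(15));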

File output node


A file output node lets you direct audio data from the graph into an audio file. Create an AudioFileOutputNode
by calling CreateFileOutputNodeAsync.

AudioFileOutputNode fileOutputNode;
private async Task CreateFileOutputNode()
{
    FileSavePicker saveFilePicker = new FileSavePicker();
    saveFilePicker.FileTypeChoices.Add("Pulse Code Modulation", new List<string>() { ".wav" });
    saveFilePicker.FileTypeChoices.Add("Windows Media Audio", new List<string>() { ".wma" });
    saveFilePicker.FileTypeChoices.Add("MPEG Audio Layer-3", new List<string>() { ".mp3" });
    saveFilePicker.SuggestedFileName = "New Audio Track";
    StorageFile file = await saveFilePicker.PickSaveFileAsync();

    // File can be null if cancel is hit in the file picker
    if (file == null)
    {
        return;
    }

    Windows.Media.MediaProperties.MediaEncodingProfile mediaEncodingProfile;
    switch (file.FileType.ToString().ToLowerInvariant())
    {
        case ".wma":
            mediaEncodingProfile = MediaEncodingProfile.CreateWma(AudioEncodingQuality.High);
            break;
        case ".mp3":
            mediaEncodingProfile = MediaEncodingProfile.CreateMp3(AudioEncodingQuality.High);
            break;
        case ".wav":
            mediaEncodingProfile = MediaEncodingProfile.CreateWav(AudioEncodingQuality.High);
            break;
        default:
            throw new ArgumentException();
    }

    // Operate the node at the graph format, but save the file at the specified format
    CreateAudioFileOutputNodeResult result = await audioGraph.CreateFileOutputNodeAsync(file, mediaEncodingProfile);

    if (result.Status != AudioFileNodeCreationStatus.Success)
    {
        // FileOutputNode creation failed
        ShowErrorMessage(result.Status.ToString());
        return;
    }

    fileOutputNode = result.FileOutputNode;
}

File output nodes support the following file formats: mp3, wav, wma, m4a.
You must call AudioFileOutputNode.Stop to stop the node's processing before calling
AudioFileOutputNode.FinalizeAsync or an exception will be thrown.
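A minimal sketch of that shutdown sequence, assuming fileOutputNode was created as shown above:

// Stop the node before finalizing, then check the result of writing the file
fileOutputNode.Stop();
TranscodeFailureReason finalizeResult = await fileOutputNode.FinalizeAsync();
if (finalizeResult != TranscodeFailureReason.None)
{
    // Handle the failure, for example by showing an error message
}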

Audio frame input node


An audio frame input node allows you to push audio data that you generate in your own code into the audio
graph. This enables scenarios like creating a custom software synthesizer. Create an AudioFrameInputNode by
calling CreateFrameInputNode.

AudioFrameInputNode frameInputNode;
private void CreateFrameInputNode()
{
    // Create the FrameInputNode at the same format as the graph, except explicitly set mono.
    AudioEncodingProperties nodeEncodingProperties = audioGraph.EncodingProperties;
    nodeEncodingProperties.ChannelCount = 1;
    frameInputNode = audioGraph.CreateFrameInputNode(nodeEncodingProperties);

    // Initialize the Frame Input Node in the stopped state
    frameInputNode.Stop();

    // Hook up an event handler so we can start generating samples when needed.
    // This event is triggered when the node is required to provide data.
    frameInputNode.QuantumStarted += node_QuantumStarted;
}

The FrameInputNode.QuantumStarted event is raised when the audio graph is ready to begin processing the
next quantum of audio data. You supply your custom generated audio data from within the handler to this event.

private void node_QuantumStarted(AudioFrameInputNode sender, FrameInputNodeQuantumStartedEventArgs args)
{
    // GenerateAudioData can provide PCM audio data by directly synthesizing it or reading from a file.
    // Need to know how many samples are required. In this case, the node is running at the same rate as the rest of the graph.
    // For minimum latency, only provide the required amount of samples. Extra samples will introduce additional latency.
    uint numSamplesNeeded = (uint)args.RequiredSamples;

    if (numSamplesNeeded != 0)
    {
        AudioFrame audioData = GenerateAudioData(numSamplesNeeded);
        frameInputNode.AddFrame(audioData);
    }
}

The FrameInputNodeQuantumStartedEventArgs object passed into the QuantumStarted event handler
exposes the RequiredSamples property that indicates how many samples the audio graph needs to fill up the
quantum to be processed.
Call AudioFrameInputNode.AddFrame to pass an AudioFrame object filled with audio data into the graph.
An example implementation of the GenerateAudioData helper method is shown below.
To populate an AudioFrame with audio data, you must get access to the underlying memory buffer of the audio
frame. To do this you must initialize the IMemoryBufferByteAccess COM interface by adding the following code
within your namespace.

[ComImport]
[Guid("5B0D3235-4DBA-4D44-865E-8F1D0E4FD04D")]
[InterfaceType(ComInterfaceType.InterfaceIsIUnknown)]
unsafe interface IMemoryBufferByteAccess
{
void GetBuffer(out byte* buffer, out uint capacity);
}

The following code shows an example implementation of a GenerateAudioData helper method that creates an
AudioFrame and populates it with audio data.
// The sine wave's phase is kept in a class field so the signal is continuous
// from one quantum to the next.
private double theta = 0;

unsafe private AudioFrame GenerateAudioData(uint samples)
{
    // Buffer size is (number of samples) * (size of each sample)
    // We choose to generate single channel (mono) audio. For multi-channel, multiply by number of channels
    uint bufferSize = samples * sizeof(float);
    AudioFrame frame = new Windows.Media.AudioFrame(bufferSize);

    using (AudioBuffer buffer = frame.LockBuffer(AudioBufferAccessMode.Write))
    using (IMemoryBufferReference reference = buffer.CreateReference())
    {
        byte* dataInBytes;
        uint capacityInBytes;
        float* dataInFloat;

        // Get the buffer from the AudioFrame
        ((IMemoryBufferByteAccess)reference).GetBuffer(out dataInBytes, out capacityInBytes);

        // Cast to float since the data we are generating is float
        dataInFloat = (float*)dataInBytes;

        float freq = 1000; // choosing to generate frequency of 1kHz
        float amplitude = 0.3f;
        int sampleRate = (int)audioGraph.EncodingProperties.SampleRate;
        double sampleIncrement = (freq * (Math.PI * 2)) / sampleRate;

        // Generate a 1kHz sine wave and populate the values in the memory buffer
        for (int i = 0; i < samples; i++)
        {
            double sinValue = amplitude * Math.Sin(theta);
            dataInFloat[i] = (float)sinValue;
            theta += sampleIncrement;
        }
    }

    return frame;
}

Because this method accesses the raw buffer underlying the Windows Runtime types, it must be declared using
the unsafe keyword. You must also configure your project in Microsoft Visual Studio to allow the compilation
of unsafe code by opening the project's Properties page, clicking the Build property page, and selecting the
Allow Unsafe Code checkbox.
Initialize a new instance of AudioFrame, in the Windows.Media namespace, by passing in the desired buffer
size to the constructor. The buffer size is the number of samples multiplied by the size of each sample.
Get the AudioBuffer of the audio frame by calling LockBuffer.
Get an instance of the IMemoryBufferByteAccess COM interface from the audio buffer by calling
CreateReference.
Get a pointer to raw audio buffer data by calling IMemoryBufferByteAccess.GetBuffer and cast it to the
sample data type of the audio data.
Fill the buffer with data and return the AudioFrame for submission into the audio graph.

Audio frame output node


An audio frame output node allows you to receive and process audio data output from the audio graph with
custom code that you create. An example scenario for this is performing signal analysis on the audio output.
Create an AudioFrameOutputNode by calling CreateFrameOutputNode.

AudioFrameOutputNode frameOutputNode;
private void CreateFrameOutputNode()
{
frameOutputNode = audioGraph.CreateFrameOutputNode();
audioGraph.QuantumProcessed += AudioGraph_QuantumProcessed;
}

The AudioGraph.QuantumProcessed event is raised when the audio graph has completed processing a
quantum of audio data. You can access the audio data from within the handler for this event.

private void AudioGraph_QuantumProcessed(AudioGraph sender, object args)
{
    AudioFrame frame = frameOutputNode.GetFrame();
    ProcessFrameOutput(frame);
}
Call GetFrame to get an AudioFrame object filled with audio data from the graph.
An example implementation of the ProcessFrameOutput helper method is shown below.

unsafe private void ProcessFrameOutput(AudioFrame frame)
{
    using (AudioBuffer buffer = frame.LockBuffer(AudioBufferAccessMode.Write))
    using (IMemoryBufferReference reference = buffer.CreateReference())
    {
        byte* dataInBytes;
        uint capacityInBytes;
        float* dataInFloat;

        // Get the buffer from the AudioFrame
        ((IMemoryBufferByteAccess)reference).GetBuffer(out dataInBytes, out capacityInBytes);

        dataInFloat = (float*)dataInBytes;
    }
}

Like the audio frame input node example above, you will need to declare the IMemoryBufferByteAccess
COM interface and configure your project to allow unsafe code in order to access the underlying audio buffer.
Get the AudioBuffer of the audio frame by calling LockBuffer.
Get an instance of the IMemoryBufferByteAccess COM interface from the audio buffer by calling
CreateReference.
Get a pointer to raw audio buffer data by calling IMemoryBufferByteAccess.GetBuffer and cast it to the
sample data type of the audio data.
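For example, a minimal signal-analysis sketch, placed where the float pointer is available inside ProcessFrameOutput above, might compute the peak amplitude of the quantum; dataInFloat and capacityInBytes are the variables from that example.

// Inside the using blocks of ProcessFrameOutput, after the cast to float*
int sampleCount = (int)(capacityInBytes / sizeof(float));
float peak = 0f;

for (int i = 0; i < sampleCount; i++)
{
    peak = Math.Max(peak, Math.Abs(dataInFloat[i]));
}

System.Diagnostics.Debug.WriteLine(String.Format("Peak amplitude this quantum: {0}", peak));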

Node connections and submix nodes


All input node types expose the AddOutgoingConnection method, which routes the audio produced by the node
to the node that is passed into the method. The following example connects an AudioFileInputNode to an
AudioDeviceOutputNode, which is a simple setup for playing an audio file on the device's speaker.

fileInputNode.AddOutgoingConnection(deviceOutputNode);

You can create more than one connection from an input node to other nodes. The following example adds another
connection from the AudioFileInputNode to an AudioFileOutputNode. Now, the audio from the audio file is
played to the device's speaker and is also written out to an audio file.
fileInputNode.AddOutgoingConnection(fileOutputNode);

Output nodes can also receive more than one connection from other nodes. In the following example, a connection
is made from an AudioDeviceInputNode to the AudioDeviceOutputNode. Because the output node has
connections from the file input node and the device input node, the output will contain a mix of audio from both
sources. AddOutgoingConnection provides an overload that lets you specify a gain value for the signal passing
through the connection.

deviceInputNode.AddOutgoingConnection(deviceOutputNode, .5);

Although output nodes can accept connections from multiple nodes, you may want to create an intermediate mix
of signals from one or more nodes before passing the mix to an output. For example, you may want to set the level
or apply effects to a subset of the audio signals in a graph. To do this, use the AudioSubmixNode. You can
connect to a submix node from one or more input nodes or other submix nodes. In the following example, a new
submix node is created with AudioGraph.CreateSubmixNode. Then, connections are added from a file input
node and a frame input node to the submix node. Finally, the submix node is connected to a file output node.

private void CreateSubmixNode()
{
    AudioSubmixNode submixNode = audioGraph.CreateSubmixNode();
    fileInputNode.AddOutgoingConnection(submixNode);
    frameInputNode.AddOutgoingConnection(submixNode);
    submixNode.AddOutgoingConnection(fileOutputNode);
}

Starting and stopping audio graph nodes


When AudioGraph.Start is called, the audio graph begins processing audio data. Every node type provides Start
and Stop methods that cause the individual node to start or stop processing data. When AudioGraph.Stop is
called, all audio processing in all nodes is stopped regardless of the state of individual nodes, but the state of
each node can be set while the audio graph is stopped. For example, you could call Stop on an individual node
while the graph is stopped and then call AudioGraph.Start, and the individual node will remain in the stopped
state.
All node types expose the ConsumeInput property that, when set to false, allows the node to continue audio
processing but stops it from consuming any audio data being input from other nodes.
All node types expose the Reset method that causes the node to discard any audio data currently in its buffer.
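A minimal sketch of these members, assuming the audioGraph, fileInputNode, and submixNode variables from the earlier examples:

// Stop an individual node while the graph is stopped; the node stays
// stopped when the graph is started again
audioGraph.Stop();
fileInputNode.Stop();
audioGraph.Start();

// The submix node keeps processing but ignores incoming audio data
submixNode.ConsumeInput = false;

// Discard any audio data currently in the submix node's buffer
submixNode.Reset();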

Adding audio effects


The audio graph API allows you to add audio effects to every type of node in a graph. Output nodes, input nodes,
and submix nodes can each have any number of audio effects, limited only by the capabilities of the
hardware. The following example demonstrates adding the built-in echo effect to a submix node.

EchoEffectDefinition echoEffect = new EchoEffectDefinition(audioGraph);
echoEffect.Delay = 1000.0;
echoEffect.Feedback = .2;
echoEffect.WetDryMix = .5;

submixNode.EffectDefinitions.Add(echoEffect);

All audio effects implement IAudioEffectDefinition. Every node exposes an EffectDefinitions property
representing the list of effects applied to that node. Add an effect by adding its definition object to the list.
There are several effect definition classes that are provided in the Windows.Media.Audio namespace. These
include:
EchoEffectDefinition
EqualizerEffectDefinition
LimiterEffectDefinition
ReverbEffectDefinition
You can create your own audio effects that implement IAudioEffectDefinition and apply them to any node in
an audio graph.
Every node type exposes a DisableEffectsByDefinition method that disables all effects in the node's
EffectDefinitions list that were added using the specified definition. EnableEffectsByDefinition enables the
effects with the specified definition.
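For example, the echo effect added above could be toggled with a sketch like this, assuming the echoEffect and submixNode variables from the previous example:

// Temporarily bypass the echo without removing it from EffectDefinitions
submixNode.DisableEffectsByDefinition(echoEffect);

// Re-enable the effect later
submixNode.EnableEffectsByDefinition(echoEffect);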

Spatial audio
Starting with Windows 10, version 1607, AudioGraph supports spatial audio, which allows you to specify the
location in 3D space from which audio from any input or submix node is emitted. You can also specify a shape and
direction in which audio is emitted, a velocity that will be used to Doppler shift the node's audio, and define a decay
model that describes how the audio is attenuated with distance.
To create an emitter, you can first create a shape in which the sound is projected from the emitter, which can be a
cone or omnidirectional. The AudioNodeEmitterShape class provides static methods for creating each of these
shapes. Next, create a decay model. This defines how the volume of the audio from the emitter decreases as the
distance from the listener increases. The CreateNatural method creates a decay model that emulates the natural
decay of sound using a distance squared falloff model. Finally, create an AudioNodeEmitterSettings object.
Currently, this object is only used to enable and disable velocity-based Doppler attenuation of the emitter's audio.
Call the AudioNodeEmitter constructor, passing in the initialization objects you just created. By default, the
emitter is placed at the origin, but you can set the position of the emitter with the Position property.

NOTE
Audio node emitters can only process audio that is formatted in mono with a sample rate of 48kHz. Attempting to use
stereo audio or audio with a different sample rate will result in an exception.

You assign the emitter to an audio node when you create it by using the overloaded creation method for the type
of node you want. In this example, CreateFileInputNodeAsync is used to create a file input node from a specified
file and the AudioNodeEmitter object you want to associate with the node.

var emitterShape = AudioNodeEmitterShape.CreateOmnidirectional();
var decayModel = AudioNodeEmitterDecayModel.CreateNatural(.1, 1, 10, 100);
var settings = AudioNodeEmitterSettings.None;

var emitter = new AudioNodeEmitter(emitterShape, decayModel, settings);
emitter.Position = new System.Numerics.Vector3(10, 0, 5);

CreateAudioFileInputNodeResult result = await audioGraph.CreateFileInputNodeAsync(file, emitter);

if (result.Status != AudioFileNodeCreationStatus.Success)
{
    ShowErrorMessage(result.Status.ToString());
}

fileInputNode = result.FileInputNode;

The AudioDeviceOutputNode that outputs audio from the graph to the user has a listener object, accessed with
the Listener property, which represents the location, orientation, and velocity of the user in the 3D space. The
positions of all of the emitters in the graph are relative to the position and orientation of the listener object. By
default, the listener is located at the origin (0,0,0) facing forward along the Z axis, but you can set its position and
orientation with the Position and Orientation properties.

deviceOutputNode.Listener.Position = new System.Numerics.Vector3(100, 0, 0);
deviceOutputNode.Listener.Orientation = System.Numerics.Quaternion.CreateFromYawPitchRoll(0, (float)Math.PI, 0);

You can update the location, velocity, and direction of emitters at runtime to simulate the movement of an audio
source through 3D space.

var emitter = fileInputNode.Emitter;
emitter.Position = newObjectPosition;
emitter.DopplerVelocity = newObjectPosition - oldObjectPosition;

You can also update the location, velocity, and orientation of the listener object at runtime to simulate the
movement of the user through 3D space.

deviceOutputNode.Listener.Position = newUserPosition;

By default, spatial audio is calculated using Microsoft's head-relative transfer function (HRTF) algorithm to
attenuate the audio based on its shape, velocity, and position relative to the listener. You can set the
SpatialAudioModel property to FoldDown to use a simple stereo mix method of simulating spatial audio that is
less accurate but requires fewer CPU and memory resources.
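As a minimal sketch, and assuming the SpatialAudioModel property is exposed on the node's AudioNodeEmitter (with fileInputNode from the example above), the fold-down mode might be selected like this:

// Assumption: SpatialAudioModel is a property of the node's AudioNodeEmitter
fileInputNode.Emitter.SpatialAudioModel = SpatialAudioModel.FoldDown;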

See also
Media playback
MIDI

This article shows you how to enumerate MIDI (Musical Instrument Digital Interface) devices and send and receive
MIDI messages from a Universal Windows app.

Enumerate MIDI devices


Before enumerating and using MIDI devices, add the following namespaces to your project.

using Windows.Devices.Enumeration;
using Windows.Devices.Midi;

Add a ListBox control to your XAML page that will allow the user to select one of the MIDI input devices attached
to the system. Add another one to list the MIDI output devices.

<ListBox x:Name="midiInPortListBox" SelectionChanged="midiInPortListBox_SelectionChanged"/>


<ListBox x:Name="midiOutPortListBox" SelectionChanged="midiOutPortListBox_SelectionChanged"/>

The FindAllAsync method of the DeviceInformation class is used to enumerate many different types of devices that
are recognized by Windows. To specify that you only want the method to find MIDI input devices, use the selector
string returned by MidiInPort.GetDeviceSelector. FindAllAsync returns a DeviceInformationCollection that
contains a DeviceInformation for each MIDI input device registered with the system. If the returned collection
contains no items, then there are no available MIDI input devices. If there are items in the collection, loop through
the DeviceInformation objects and add the name of each device to the MIDI input device ListBox.

private async Task EnumerateMidiInputDevices()
{
    // Find all input MIDI devices
    string midiInputQueryString = MidiInPort.GetDeviceSelector();
    DeviceInformationCollection midiInputDevices = await DeviceInformation.FindAllAsync(midiInputQueryString);

    midiInPortListBox.Items.Clear();

    // Return if no external devices are connected
    if (midiInputDevices.Count == 0)
    {
        this.midiInPortListBox.Items.Add("No MIDI input devices found!");
        this.midiInPortListBox.IsEnabled = false;
        return;
    }

    // Else, add each connected input device to the list
    foreach (DeviceInformation deviceInfo in midiInputDevices)
    {
        this.midiInPortListBox.Items.Add(deviceInfo.Name);
        this.midiInPortListBox.IsEnabled = true;
    }
}

Enumerating MIDI output devices works exactly the same way as enumerating input devices, except that you should
specify the selector string returned by MidiOutPort.GetDeviceSelector when calling FindAllAsync.
private async Task EnumerateMidiOutputDevices()
{
    // Find all output MIDI devices
    string midiOutportQueryString = MidiOutPort.GetDeviceSelector();
    DeviceInformationCollection midiOutputDevices = await DeviceInformation.FindAllAsync(midiOutportQueryString);

    midiOutPortListBox.Items.Clear();

    // Return if no external devices are connected
    if (midiOutputDevices.Count == 0)
    {
        this.midiOutPortListBox.Items.Add("No MIDI output devices found!");
        this.midiOutPortListBox.IsEnabled = false;
        return;
    }

    // Else, add each connected output device to the list
    foreach (DeviceInformation deviceInfo in midiOutputDevices)
    {
        this.midiOutPortListBox.Items.Add(deviceInfo.Name);
        this.midiOutPortListBox.IsEnabled = true;
    }
}

Create a device watcher helper class


The Windows.Devices.Enumeration namespace provides the DeviceWatcher class, which can notify your app
when devices are added to or removed from the system, or when the information for a device is updated. Because
MIDI-enabled apps are typically interested in both input and output devices, this example creates a helper class that
implements the DeviceWatcher pattern, so that the same code can be used for both MIDI input and MIDI output
devices without duplication.
Add a new class to your project to serve as your device watcher. In this example the class is named
MyMidiDeviceWatcher. The rest of the code in this section is used to implement the helper class.
Add some member variables to the class:
A DeviceWatcher object that will monitor for device changes.
A device selector string that will contain the MIDI in port selector string for one instance and the MIDI out port
selector string for another instance.
A ListBox control that will be populated with the names of the available devices.
A CoreDispatcher that is required to update the UI from a thread other than the UI thread.

DeviceWatcher deviceWatcher;
string deviceSelectorString;
ListBox deviceListBox;
CoreDispatcher coreDispatcher;

Add a DeviceInformationCollection property that is used to access the current list of devices from outside the
helper class.

public DeviceInformationCollection DeviceInformationCollection { get; set; }

In the class constructor, the caller passes in the MIDI device selector string, the ListBox for listing the devices, and
the CoreDispatcher needed to update the UI.
Call DeviceInformation.CreateWatcher to create a new instance of the DeviceWatcher class, passing in the
MIDI device selector string.
Register handlers for the watcher's event handlers.

public MyMidiDeviceWatcher(string midiDeviceSelectorString, ListBox midiDeviceListBox, CoreDispatcher dispatcher)
{
    deviceListBox = midiDeviceListBox;
    coreDispatcher = dispatcher;

    deviceSelectorString = midiDeviceSelectorString;

    deviceWatcher = DeviceInformation.CreateWatcher(deviceSelectorString);
    deviceWatcher.Added += DeviceWatcher_Added;
    deviceWatcher.Removed += DeviceWatcher_Removed;
    deviceWatcher.Updated += DeviceWatcher_Updated;
    deviceWatcher.EnumerationCompleted += DeviceWatcher_EnumerationCompleted;
}

The DeviceWatcher has the following events:


Added - Raised when a new device is added to the system.
Removed - Raised when a device is removed from the system.
Updated - Raised when the information associated with an existing device is updated.
EnumerationCompleted - Raised when the watcher has completed its enumeration of the requested device
type.
In the event handler for each of these events, a helper method, UpdateDevices, is called to update the ListBox
with the current list of devices. Because UpdateDevices updates UI elements and these event handlers are not
called on the UI thread, each call must be wrapped in a call to RunAsync, which causes the specified code to be run
on the UI thread.
private async void DeviceWatcher_Removed(DeviceWatcher sender, DeviceInformationUpdate args)
{
    await coreDispatcher.RunAsync(CoreDispatcherPriority.High, () =>
    {
        // Update the device list
        UpdateDevices();
    });
}

private async void DeviceWatcher_Added(DeviceWatcher sender, DeviceInformation args)
{
    await coreDispatcher.RunAsync(CoreDispatcherPriority.High, () =>
    {
        // Update the device list
        UpdateDevices();
    });
}

private async void DeviceWatcher_EnumerationCompleted(DeviceWatcher sender, object args)
{
    await coreDispatcher.RunAsync(CoreDispatcherPriority.High, () =>
    {
        // Update the device list
        UpdateDevices();
    });
}

private async void DeviceWatcher_Updated(DeviceWatcher sender, DeviceInformationUpdate args)
{
    await coreDispatcher.RunAsync(CoreDispatcherPriority.High, () =>
    {
        // Update the device list
        UpdateDevices();
    });
}

The UpdateDevices helper method calls DeviceInformation.FindAllAsync and updates the ListBox with the
names of the returned devices as described previously in this article.

private async void UpdateDevices()
{
    // Get a list of all MIDI devices
    this.DeviceInformationCollection = await DeviceInformation.FindAllAsync(deviceSelectorString);

    deviceListBox.Items.Clear();

    if (!this.DeviceInformationCollection.Any())
    {
        deviceListBox.Items.Add("No MIDI devices found!");
    }

    foreach (var deviceInformation in this.DeviceInformationCollection)
    {
        deviceListBox.Items.Add(deviceInformation.Name);
    }
}

Add methods to start the watcher, using the DeviceWatcher object's Start method, and to stop the watcher, using
the Stop method.
public void StartWatcher()
{
    deviceWatcher.Start();
}

public void StopWatcher()
{
    deviceWatcher.Stop();
}

Provide a destructor to unregister the watcher event handlers and set the device watcher to null.

~MyMidiDeviceWatcher()
{
    deviceWatcher.Added -= DeviceWatcher_Added;
    deviceWatcher.Removed -= DeviceWatcher_Removed;
    deviceWatcher.Updated -= DeviceWatcher_Updated;
    deviceWatcher.EnumerationCompleted -= DeviceWatcher_EnumerationCompleted;

    deviceWatcher = null;
}

Create MIDI ports to send and receive messages


In the code behind for your page, declare member variables to hold two instances of the MyMidiDeviceWatcher
helper class, one for input devices and one for output devices.

MyMidiDeviceWatcher inputDeviceWatcher;
MyMidiDeviceWatcher outputDeviceWatcher;

Create a new instance of the watcher helper classes, passing in the device selector string, the ListBox to be
populated, and the CoreDispatcher object that can be accessed through the page's Dispatcher property. Then,
call the method to start each object's DeviceWatcher.

inputDeviceWatcher =
new MyMidiDeviceWatcher(MidiInPort.GetDeviceSelector(), midiInPortListBox, Dispatcher);

inputDeviceWatcher.StartWatcher();

outputDeviceWatcher =
new MyMidiDeviceWatcher(MidiOutPort.GetDeviceSelector(), midiOutPortListBox, Dispatcher);

outputDeviceWatcher.StartWatcher();

Shortly after each DeviceWatcher is started, it will finish enumerating the current devices connected to the system
and raise its EnumerationCompleted event, which will cause each ListBox to be updated with the current MIDI
devices.
When the user selects an item in the MIDI input ListBox, the SelectionChanged event is raised. In the handler for
this event, access the DeviceInformationCollection property of the helper class to get the current list of devices.
If there are entries in the list, select the DeviceInformation object with the index corresponding to the ListBox
control's SelectedIndex.
Create the MidiInPort object representing the selected input device by calling MidiInPort.FromIdAsync, passing
in the Id property of the selected device.
Register a handler for the MessageReceived event, which is raised whenever a MIDI message is received through
the specified device.

// Member variables for the ports; MidiOutPort.FromIdAsync returns an IMidiOutPort
MidiInPort midiInPort;
IMidiOutPort midiOutPort;

private async void midiInPortListBox_SelectionChanged(object sender, SelectionChangedEventArgs e)
{
    var deviceInformationCollection = inputDeviceWatcher.DeviceInformationCollection;

    if (deviceInformationCollection == null)
    {
        return;
    }

    DeviceInformation devInfo = deviceInformationCollection[midiInPortListBox.SelectedIndex];

    if (devInfo == null)
    {
        return;
    }

    midiInPort = await MidiInPort.FromIdAsync(devInfo.Id);

    if (midiInPort == null)
    {
        System.Diagnostics.Debug.WriteLine("Unable to create MidiInPort from input device");
        return;
    }

    midiInPort.MessageReceived += MidiInPort_MessageReceived;
}

When the MessageReceived handler is called, the message is contained in the Message property of the
MidiMessageReceivedEventArgs. The Type of the message object is a value from the MidiMessageType
enumeration indicating the type of message that was received. The data of the message depends on the type of the
message. This example checks to see if the message is a note-on message and, if so, outputs the MIDI channel, note,
and velocity of the message.

private void MidiInPort_MessageReceived(MidiInPort sender, MidiMessageReceivedEventArgs args)
{
    IMidiMessage receivedMidiMessage = args.Message;

    System.Diagnostics.Debug.WriteLine(receivedMidiMessage.Timestamp.ToString());

    if (receivedMidiMessage.Type == MidiMessageType.NoteOn)
    {
        System.Diagnostics.Debug.WriteLine(((MidiNoteOnMessage)receivedMidiMessage).Channel);
        System.Diagnostics.Debug.WriteLine(((MidiNoteOnMessage)receivedMidiMessage).Note);
        System.Diagnostics.Debug.WriteLine(((MidiNoteOnMessage)receivedMidiMessage).Velocity);
    }
}

The SelectionChanged handler for the output device ListBox works the same as the handler for input devices,
except no event handler is registered.
private async void midiOutPortListBox_SelectionChanged(object sender, SelectionChangedEventArgs e)
{
    var deviceInformationCollection = outputDeviceWatcher.DeviceInformationCollection;

    if (deviceInformationCollection == null)
    {
        return;
    }

    DeviceInformation devInfo = deviceInformationCollection[midiOutPortListBox.SelectedIndex];

    if (devInfo == null)
    {
        return;
    }

    midiOutPort = await MidiOutPort.FromIdAsync(devInfo.Id);

    if (midiOutPort == null)
    {
        System.Diagnostics.Debug.WriteLine("Unable to create MidiOutPort from output device");
        return;
    }
}

Once the output device is created, you can send a message by creating a new IMidiMessage for the type of
message you want to send. In this example, the message is a MidiNoteOnMessage. The SendMessage method of the
IMidiOutPort object is called to send the message.

byte channel = 0;
byte note = 60;
byte velocity = 127;
IMidiMessage midiMessageToSend = new MidiNoteOnMessage(channel, note, velocity);

midiOutPort.SendMessage(midiMessageToSend);

When your app is deactivated, be sure to clean up your app's resources. Unregister your event handlers and set the
MIDI in port and out port objects to null. Stop the device watchers and set them to null.

inputDeviceWatcher.StopWatcher();
inputDeviceWatcher = null;

outputDeviceWatcher.StopWatcher();
outputDeviceWatcher = null;

midiInPort.MessageReceived -= MidiInPort_MessageReceived;
midiInPort.Dispose();
midiInPort = null;

midiOutPort.Dispose();
midiOutPort = null;

Using the built-in Windows General MIDI synth


When you enumerate output MIDI devices using the technique described above, your app will discover a MIDI
device called "Microsoft GS Wavetable Synth". This is a built-in General MIDI synthesizer that you can play from
your app. However, attempting to create a MidiOutPort for this device will fail unless you have included the SDK
extension for the built-in synth in your project.
To include the General MIDI Synth SDK extension in your app project
1. In Solution Explorer, under your project, right-click References and select Add reference...
2. Expand the Universal Windows node.
3. Select Extensions.
4. From the list of extensions, select Microsoft General MIDI DLS for Universal Windows Apps.

NOTE
If there are multiple versions of the extension, be sure to select the version that matches the target for your app.
You can see which SDK version your app is targeting on the Application tab of the project Properties.
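Once the extension is referenced, a sketch of opening the built-in synth might look like the following; it reuses the enumeration pattern shown earlier in this article and matches the device by its display name (midiOutPort is the member variable declared earlier).

using System.Linq;

string midiOutportQueryString = MidiOutPort.GetDeviceSelector();
DeviceInformationCollection midiOutputDevices = await DeviceInformation.FindAllAsync(midiOutportQueryString);

// "Microsoft GS Wavetable Synth" is the display name of the built-in General MIDI synth
DeviceInformation synthDevice = midiOutputDevices.FirstOrDefault(d => d.Name.Contains("Microsoft GS Wavetable Synth"));

if (synthDevice != null)
{
    midiOutPort = await MidiOutPort.FromIdAsync(synthDevice.Id);
}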
Import media from a device

This article describes how to import media from a device, including searching for available media sources,
importing files such as videos, photos, and sidecar files, and deleting the imported files from the source device.

NOTE
The code in this article was adapted from the MediaImport UWP app sample. You can clone or download this sample
from the Universal Windows app samples Git repo to see the code in context, or to use it as a starting point for your own
app.

Create a simple media import UI


The example in this article uses a minimal UI to enable the core media import scenarios. To see how to create a
more robust UI for a media import app, see the MediaImport sample. The following XAML creates a stack panel
with the following controls:
A Button to initiate searching for sources from which media can be imported.
A ComboBox to list and select from the media import sources that are found.
A ListView control to display and select from the media items from the selected import source.
A Button to initiate importing media items from the selected source.
A Button to initiate deleting the items that have been imported from the selected source.
A Button to cancel an asynchronous media import operation.
<StackPanel Orientation="Vertical">
<Button x:Name="findSourcesButton" Click="findSourcesButton_Click" Content="Find sources"/>
<ComboBox x:Name="sourcesComboBox" SelectionChanged="sourcesComboBox_SelectionChanged"/>
<ListView x:Name="fileListView"
HorizontalAlignment="Left" Margin="182,260,0,171"
Width="715"
SelectionMode="None"
BorderBrush="#FF858585"
BorderThickness="1"
ScrollViewer.VerticalScrollBarVisibility="Visible">
<ListView.ItemTemplate>
<DataTemplate>
<Grid>
<Grid.ColumnDefinitions>
<ColumnDefinition Width="0.05*"/>
<ColumnDefinition Width="0.20*"/>
<ColumnDefinition Width="0.75*"/>
</Grid.ColumnDefinitions>
<CheckBox Grid.Column="0" IsChecked="{Binding ImportableItem.IsSelected, Mode=TwoWay}" />
<!-- Click="CheckBox_Click"/>-->
<Image Grid.Column="1" Source="{Binding Thumbnail}" Width="120" Height="120" Stretch="Uniform"/>
<TextBlock Grid.Column="2" Text="{Binding ImportableItem.Name}" VerticalAlignment="Center" Margin="10,0"/>
</Grid>
</DataTemplate>
</ListView.ItemTemplate>
</ListView>

<Button x:Name="importButton" Click="importButton_Click" Content="Import"/>


<Button x:Name="deleteButton" Click="deleteButton_Click" Content="Delete"/>
<Button x:Name="cancelButton" Click="cancelButton_Click" Content="Cancel"/>
<ProgressBar x:Name="progressBar" SmallChange="0.01" LargeChange="0.1" Maximum="1"/>

</StackPanel>

Set up your code-behind file


Add using directives to include the namespaces used by this example that are not already included in the default
project template.

using Windows.Media.Import;
using System.Threading;
using Windows.UI.Core;
using System.Text;

Set up task cancellation for media import operations


Because media import operations can take a long time, they are performed asynchronously using
IAsyncOperationWithProgress. Declare a class member variable of type CancellationTokenSource that will be
used to cancel an in-progress operation if the user clicks the cancel button.

CancellationTokenSource cts;

Implement a handler for the cancel button. The examples shown later in this article will initialize the
CancellationTokenSource when an operation begins and set it to null when the operation completes. In the
cancel button handler, check to see if the token is null, and if not, call Cancel to cancel the operation.
private void cancelButton_Click(object sender, RoutedEventArgs e)
{
    if (cts != null)
    {
        cts.Cancel();
        System.Diagnostics.Debug.WriteLine("Operation canceled by the Cancel button.");
    }
}

Data binding helper classes


In a typical media import scenario, you show the user a list of available media items to import. There can be a large
number of media files to choose from and, typically, you want to show a thumbnail for each media item. For this
reason, this example uses three helper classes to incrementally load entries into the ListView control as the user
scrolls down through the list.
IncrementalLoadingBase class - Implements the IList, ISupportIncrementalLoading, and
INotifyCollectionChanged to provide the base incremental loading behavior.
GeneratorIncrementalLoadingClass class - Provides an implementation of the incremental loading base
class.
ImportableItemWrapper class - A thin wrapper around the PhotoImportItem class to add a bindable
BitmapImage property for the thumbnail image for each imported item.
These classes are provided in the MediaImport sample and can be added to your project without modifications.
After adding the helper classes to your project, declare a class member variable of type
GeneratorIncrementalLoadingClass that will be used later in this example.

GeneratorIncrementalLoadingClass<ImportableItemWrapper> itemsToImport = null;
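For illustration only, a minimal sketch of the shape such a wrapper might have follows; the actual ImportableItemWrapper, including the thumbnail-loading logic, ships with the MediaImport sample.

// Requires Windows.UI.Xaml.Media.Imaging for BitmapImage
public class ImportableItemWrapper
{
    // The XAML item template binds to ImportableItem.IsSelected, ImportableItem.Name, and Thumbnail
    public PhotoImportItem ImportableItem { get; private set; }
    public BitmapImage Thumbnail { get; private set; }

    public ImportableItemWrapper(PhotoImportItem item)
    {
        ImportableItem = item;
        Thumbnail = new BitmapImage();
        // The sample populates Thumbnail from the item's thumbnail stream (omitted in this sketch).
    }
}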

Find available sources from which media can be imported


In the click handler for the find sources button, call the static method
PhotoImportManager.FindAllSourcesAsync to start the system searching for devices from which media can be
imported. After awaiting the completion of the operation, loop through each PhotoImportSource object in the
returned list and add an entry to the ComboBox, setting the Tag property to the source object itself so it can be
easily retrieved when the user makes a selection.

private async void findSourcesButton_Click(object sender, RoutedEventArgs e)
{
    var sources = await PhotoImportManager.FindAllSourcesAsync();
    foreach (PhotoImportSource source in sources)
    {
        ComboBoxItem item = new ComboBoxItem();
        item.Content = source.DisplayName;
        item.Tag = source;
        sourcesComboBox.Items.Add(item);
    }
}

Declare a class member variable to store the user's selected import source.

PhotoImportSource importSource;

In the SelectionChanged handler for the import source ComboBox, set the class member variable to the selected
source and then call the FindItems helper method which will be shown later in this article.

private void sourcesComboBox_SelectionChanged(object sender, SelectionChangedEventArgs e)
{
    this.importSource = (PhotoImportSource)((ComboBoxItem)sourcesComboBox.SelectedItem).Tag;
    FindItems();
}

Find items to import


Add class member variables of type PhotoImportSession and PhotoImportFindItemsResult which will be used
in the following steps.

PhotoImportSession importSession;
PhotoImportFindItemsResult itemsResult;

In the FindItems method, initialize the CancellationTokenSource variable so it can be used to cancel the find
operation if necessary. Within a try block, create a new import session by calling CreateImportSession on the
PhotoImportSource object selected by the user. Create a new Progress object to provide a callback to display the
progress of the find operation. Next, call FindItemsAsync to start the find operation. Provide a
PhotoImportContentTypeFilter value to specify whether photos, videos, or both should be returned. Provide a
PhotoImportItemSelectionMode value to specify whether all, none, or only the new media items are returned
with their IsSelected property set to true. This property is bound to a checkbox for each media item in the ListView
item template.
FindItemsAsync returns an IAsyncOperationWithProgress. The extension method AsTask is used to create a
task that can be awaited, can be cancelled with the cancellation token, and that reports progress using the supplied
Progress object.
Next, the data binding helper class, GeneratorIncrementalLoadingClass, is initialized. FindItemsAsync, when it
returns from being awaited, returns a PhotoImportFindItemsResult object. This object contains status
information about the find operation, including the success of the operation and the count of different types of
media items that were found. The FoundItems property contains a list of PhotoImportItem objects representing
the found media items. The GeneratorIncrementalLoadingClass constructor takes as arguments the total count
of items that will be loaded incrementally, and a function that generates new items to be loaded as needed. In this
case, the provided lambda expression creates a new instance of the ImportableItemWrapper which wraps
PhotoImportItem and includes a thumbnail for each item. Once the incremental loading class has been initialized,
set it to the ItemsSource property of the ListView control in the UI. Now, the found media items will be loaded
incrementally and displayed in the list.
Next, the status information for the find operation is output. A typical app will display this information to the user
in the UI, but this example simply outputs the information to the debug console. Finally, set the cancellation token
to null because the operation is complete.
private async void FindItems()
{
    this.cts = new CancellationTokenSource();

    try
    {
        this.importSession = this.importSource.CreateImportSession();

        // Progress handler for FindItemsAsync
        var progress = new Progress<uint>((result) =>
        {
            System.Diagnostics.Debug.WriteLine(String.Format("Found {0} Files", result.ToString()));
        });

        this.itemsResult =
            await this.importSession.FindItemsAsync(PhotoImportContentTypeFilter.ImagesAndVideos, PhotoImportItemSelectionMode.SelectAll)
            .AsTask(this.cts.Token, progress);

        // GeneratorIncrementalLoadingClass is used to incrementally load items in the ListView, including thumbnails
        this.itemsToImport = new GeneratorIncrementalLoadingClass<ImportableItemWrapper>(this.itemsResult.TotalCount,
            (int index) =>
            {
                return new ImportableItemWrapper(this.itemsResult.FoundItems[index]);
            });

        // Set the items source for the ListView control
        this.fileListView.ItemsSource = this.itemsToImport;

        // Log the find results
        if (this.itemsResult != null)
        {
            var findResultProperties = new System.Text.StringBuilder();
            findResultProperties.AppendLine(String.Format("Photos\t\t\t : {0} \t\t Selected Photos\t\t: {1}", itemsResult.PhotosCount, itemsResult.SelectedPhotosCount));
            findResultProperties.AppendLine(String.Format("Videos\t\t\t : {0} \t\t Selected Videos\t\t: {1}", itemsResult.VideosCount, itemsResult.SelectedVideosCount));
            findResultProperties.AppendLine(String.Format("Sidecars\t\t : {0} \t\t Selected Sidecars\t: {1}", itemsResult.SidecarsCount, itemsResult.SelectedSidecarsCount));
            findResultProperties.AppendLine(String.Format("Siblings\t\t\t : {0} \t\t Selected Siblings\t: {1} ", itemsResult.SiblingsCount, itemsResult.SelectedSiblingsCount));
            findResultProperties.AppendLine(String.Format("Total Items\t : {0} \t\t Selected TotalCount \t: {1}", itemsResult.TotalCount, itemsResult.SelectedTotalCount));
            System.Diagnostics.Debug.WriteLine(findResultProperties.ToString());
        }

        if (this.itemsResult.HasSucceeded)
        {
            // Update UI to indicate success
            System.Diagnostics.Debug.WriteLine("FindItemsAsync succeeded.");
        }
        else
        {
            // Update UI to indicate that the operation did not complete
            System.Diagnostics.Debug.WriteLine("FindItemsAsync did not succeed or was not completed.");
        }
    }
    catch (Exception ex)
    {
        System.Diagnostics.Debug.WriteLine("Photo import find items operation failed. " + ex.Message);
    }

    this.cts = null;
}
Import media items

Before implementing the import operation, declare a PhotoImportImportItemsResult object to store the results
of the import operation. This will be used later to delete media items that were successfully imported from the
source.

private PhotoImportImportItemsResult importedResult;

Before starting the media import operation, initialize the CancellationTokenSource variable and set the
value of the ProgressBar control to 0.
If there are no selected items in the ListView control, then there is nothing to import. Otherwise, initialize a
Progress object to provide a progress callback which updates the value of the progress bar control. Register a
handler for the ItemImported event of the PhotoImportFindItemsResult returned by the find operation. This
event will be raised whenever an item is imported and, in this example, outputs the name of each imported file to
the debug console.
Call ImportItemsAsync to begin the import operation. Just as with the find operation, the AsTask extension
method is used to convert the returned operation to a task that can be awaited, reports progress, and can be
cancelled.
After the import operation is complete, the operation status can be obtained from the
PhotoImportImportItemsResult object returned by ImportItemsAsync. This example outputs the status
information to the debug console and then, finally, sets the cancellation token to null.
private async void importButton_Click(object sender, RoutedEventArgs e)
{
    cts = new CancellationTokenSource();
    progressBar.Value = 0;

    try
    {
        if (itemsResult.SelectedTotalCount <= 0)
        {
            System.Diagnostics.Debug.WriteLine("Nothing Selected for Import.");
        }
        else
        {
            var progress = new Progress<PhotoImportProgress>((result) =>
            {
                progressBar.Value = result.ImportProgress;
            });

            this.itemsResult.ItemImported += async (s, a) =>
            {
                await Dispatcher.RunAsync(CoreDispatcherPriority.Normal, () =>
                {
                    System.Diagnostics.Debug.WriteLine(String.Format("Imported: {0}", a.ImportedItem.Name));
                });
            };

            // Import items from our list of selected items
            this.importedResult = await this.itemsResult.ImportItemsAsync().AsTask(cts.Token, progress);

            if (importedResult != null)
            {
                StringBuilder importedSummary = new StringBuilder();
                importedSummary.AppendLine(String.Format("Photos Imported \t: {0} ", importedResult.PhotosCount));
                importedSummary.AppendLine(String.Format("Videos Imported \t: {0} ", importedResult.VideosCount));
                importedSummary.AppendLine(String.Format("Sidecars Imported \t: {0} ", importedResult.SidecarsCount));
                importedSummary.AppendLine(String.Format("Siblings Imported \t: {0} ", importedResult.SiblingsCount));
                importedSummary.AppendLine(String.Format("Total Items Imported \t: {0} ", importedResult.TotalCount));
                importedSummary.AppendLine(String.Format("Total Bytes Imported \t: {0} ", importedResult.TotalSizeInBytes));

                System.Diagnostics.Debug.WriteLine(importedSummary.ToString());
            }

            if (!this.importedResult.HasSucceeded)
            {
                System.Diagnostics.Debug.WriteLine("ImportItemsAsync did not succeed or was not completed");
            }
        }
    }
    catch (Exception ex)
    {
        System.Diagnostics.Debug.WriteLine("Files could not be imported. " + "Exception: " + ex.ToString());
    }

    cts = null;
}

Delete imported items


To delete the successfully imported items from the source from which they were imported, first initialize the
cancellation token so that the delete operation can be cancelled, and set the progress bar value to 0. Make sure that
the PhotoImportImportItemsResult returned from ImportItemsAsync is not null. If it isn't, once again create a
Progress object to provide a progress callback for the delete operation. Call
DeleteImportedItemsFromSourceAsync to start deleting the imported items. Use AsTask to convert the result to
an awaitable task with progress and cancellation capabilities. After awaiting, the returned
PhotoImportDeleteImportedItemsFromSourceResult object can be used to get and display status information
about the delete operation.

private async void deleteButton_Click(object sender, RoutedEventArgs e)
{
    cts = new CancellationTokenSource();
    progressBar.Value = 0;

    try
    {
        if (importedResult == null)
        {
            System.Diagnostics.Debug.WriteLine("Nothing was imported for deletion.");
        }
        else
        {
            var progress = new Progress<double>((result) =>
            {
                this.progressBar.Value = result;
            });

            PhotoImportDeleteImportedItemsFromSourceResult deleteResult = await
                this.importedResult.DeleteImportedItemsFromSourceAsync().AsTask(cts.Token, progress);

            if (deleteResult != null)
            {
                StringBuilder deletedResults = new StringBuilder();
                deletedResults.AppendLine(String.Format("Total Photos Deleted:\t{0} ", deleteResult.PhotosCount));
                deletedResults.AppendLine(String.Format("Total Videos Deleted:\t{0} ", deleteResult.VideosCount));
                deletedResults.AppendLine(String.Format("Total Sidecars Deleted:\t{0} ", deleteResult.SidecarsCount));
                deletedResults.AppendLine(String.Format("Total Siblings Deleted:\t{0} ", deleteResult.SiblingsCount));
                deletedResults.AppendLine(String.Format("Total Files Deleted:\t{0} ", deleteResult.TotalCount));
                deletedResults.AppendLine(String.Format("Total Bytes Deleted:\t{0} ", deleteResult.TotalSizeInBytes));
                System.Diagnostics.Debug.WriteLine(deletedResults.ToString());
            }

            if (!deleteResult.HasSucceeded)
            {
                System.Diagnostics.Debug.WriteLine("Delete operation did not succeed or was not completed");
            }
        }
    }
    catch (Exception ex)
    {
        System.Diagnostics.Debug.WriteLine("Files could not be deleted. " + "Exception: " + ex.ToString());
    }

    // Set the CancellationTokenSource to null when the work is complete.
    cts = null;
}
Camera-independent Flashlight

This article shows how to access and use a device's lamp, if one is present. Lamp functionality is managed
separately from the device's camera and camera flash functionality. In addition to acquiring a reference to the lamp
and adjusting its settings, this article also shows you how to properly free up the lamp resource when it's not in
use, and how to detect when the lamp's availability changes in case it is being used by another app.

Get the device's default lamp


To get a device's default lamp device, call Lamp.GetDefaultAsync. The lamp APIs are found in the
Windows.Devices.Lights namespace. Be sure to add a using directive for this namespace before attempting to
access these APIs.

using Windows.Devices.Lights;

Lamp lamp;

lamp = await Lamp.GetDefaultAsync();

if (lamp == null)
{
ShowErrorMessage("No Lamp device found");
return;
}

If the returned object is null, the Lamp API is unsupported on the device. Some devices may not support the Lamp
API even if there is a lamp physically present on the device.

Get a specific lamp using the lamp selector string


Some devices may have more than one lamp. To obtain a list of lamps available on the device, get the device
selector string by calling GetDeviceSelector. This selector string can then be passed into
DeviceInformation.FindAllAsync. This method is used to enumerate many different kinds of devices, and the
selector string lets the method know to return only lamp devices. The DeviceInformationCollection object
returned from FindAllAsync is a collection of DeviceInformation objects representing the lamps available on the
device. Select one of the objects in the list and then pass the Id property to Lamp.FromIdAsync to get a reference
to the requested lamp. This example uses the FirstOrDefault extension method from the System.Linq
namespace to select the DeviceInformation object where the EnclosureLocation.Panel property has a value of
Back, which selects a lamp that is on the back of the device's enclosure, if one exists.
Note that the DeviceInformation APIs are found in the Windows.Devices.Enumeration namespace.

using Windows.Devices.Enumeration;
using System.Linq;

string selectorString = Lamp.GetDeviceSelector();

DeviceInformationCollection devices = await DeviceInformation.FindAllAsync(selectorString);

DeviceInformation deviceInfo =
    devices.FirstOrDefault(di => di.EnclosureLocation != null &&
                                 di.EnclosureLocation.Panel == Windows.Devices.Enumeration.Panel.Back);

if (deviceInfo == null)
{
    ShowErrorMessage("No Lamp device found");
    return;
}

lamp = await Lamp.FromIdAsync(deviceInfo.Id);

Adjust lamp settings


After you have an instance of the Lamp class, turn the lamp on by setting the IsEnabled property to true.

lamp.IsEnabled = true;

Turn the lamp off by setting the IsEnabled property to false.

lamp.IsEnabled = false;

Some devices have lamps that support color values. Check if a lamp supports color by checking the
IsColorSettable property. If this value is true, you can set the color of the lamp with the Color property.

if (lamp.IsColorSettable)
{
lamp.Color = Windows.UI.Colors.Blue;
}

Register to be notified if the lamp availability changes


Lamp access is granted to the most recent app to request access. So, if another app is launched and requests a
lamp resource that your app is currently using, your app will no longer be able to control the lamp until the other
app has released the resource. To receive a notification when the availability of the lamp changes, register a
handler for the Lamp.AvailabilityChanged event.

lamp = await Lamp.GetDefaultAsync();

if (lamp == null)
{
ShowErrorMessage("No Lamp device found");
return;
}

lamp.AvailabilityChanged += Lamp_AvailabilityChanged;

In the handler for the event, check the LampAvailabilityChanged.IsAvailable property to determine if the lamp
is available. In this example, a toggle switch for turning the lamp on and off is enabled or disabled based on the
lamp availability.
private void Lamp_AvailabilityChanged(Lamp sender, LampAvailabilityChangedEventArgs args)
{
lampToggleSwitch.IsEnabled = args.IsAvailable;
}

Properly dispose of the lamp resource when not in use


When you are no longer using the lamp, you should disable it and call Lamp.Close to release the resource and
allow other apps to access the lamp. This method is mapped to the Dispose method if you are using C#. If you
registered for the AvailabilityChanged event, you should unregister the handler when you dispose of the lamp
resource. The right place in your code to dispose of the lamp resource depends on your app. To scope lamp access
to a single page, release the resource in the OnNavigatingFrom method.

protected override void OnNavigatingFrom(NavigatingCancelEventArgs e)
{
    lamp.AvailabilityChanged -= Lamp_AvailabilityChanged;
    lamp.IsEnabled = false;
    lamp.Dispose();
    lamp = null;
}

Related topics
Media playback
Supported codecs

This article lists the audio, video, and image codec and format support for UWP apps.
In the tables below "D" indicates decoder support and "E" indicates encoder support.

Audio codec & format support


The following tables show the audio codec support for each device family. Containers in which these codecs can appear include MPEG-4, MPEG-3, MPEG-2, ADTS, ASF, RIFF, AVI, AC-3, AMR, 3GP, FLAC, and WAV.

NOTE
Where AMR-NB support is indicated, this codec is not supported on Server SKUs.

Desktop

CODEC                   SUPPORT
HE-AAC v1 / AAC+        D
HE-AAC v2 / eAAC+       D
AAC-LC                  D/E
AC3                     D/E
EAC3 / EC3              D
ALAC                    D/E
AMR-NB                  D/E
FLAC                    D/E
G.711 (A-Law, µ-Law)    D
GSM 6.10                D
IMA ADPCM               D
LPCM                    D/E
MP3                     D/E
MPEG-1/2                D
MS ADPCM                D
WMA 1/2/3               D/E
WMA Pro                 D/E
WMA Voice               D/E

Mobile

CODEC                   SUPPORT
HE-AAC v1 / AAC+        D
HE-AAC v2 / eAAC+       D
AAC-LC                  D/E
AC3                     D (only on Lumia Icon, 830, 930, and 1520)
EAC3 / EC3              No
ALAC                    D
AMR-NB                  D/E
FLAC                    D
G.711 (A-Law, µ-Law)    D
GSM 6.10                D
IMA ADPCM               D
LPCM                    D/E
MP3                     D
MPEG-1/2                No
MS ADPCM                D
WMA 1/2/3               D
WMA Pro                 D
WMA Voice               No

IoT Core (x86)

CODEC                   SUPPORT
HE-AAC v1 / AAC+        D
HE-AAC v2 / eAAC+       D
AAC-LC                  D/E
AC3                     D
EAC3 / EC3              D
ALAC                    D
AMR-NB                  D/E
FLAC                    D
G.711 (A-Law, µ-Law)    D
GSM 6.10                D
IMA ADPCM               D
LPCM                    D/E
MP3                     D
MPEG-1/2                D
MS ADPCM                D
WMA 1/2/3               D
WMA Pro                 D
WMA Voice               D

IoT Core (ARM)

CODEC                   SUPPORT
HE-AAC v1 / AAC+        D
HE-AAC v2 / eAAC+       D
AAC-LC                  D/E
AC3                     No
EAC3 / EC3              No
ALAC                    D
AMR-NB                  D/E
FLAC                    D
G.711 (A-Law, µ-Law)    D
GSM 6.10                D
IMA ADPCM               D
LPCM                    D/E
MP3                     D
MPEG-1/2                D
MS ADPCM                D
WMA 1/2/3               D
WMA Pro                 D
WMA Voice               D

Xbox

CODEC                   SUPPORT
HE-AAC v1 / AAC+        D
HE-AAC v2 / eAAC+       D
AAC-LC                  D/E
AC3                     D
EAC3 / EC3              D
ALAC                    D
AMR-NB                  D/E
FLAC                    D
G.711 (A-Law, µ-Law)    D
GSM 6.10                D
IMA ADPCM               D
LPCM                    D/E
MP3                     D
MPEG-1/2                D
MS ADPCM                D
WMA 1/2/3               D
WMA Pro                 D
WMA Voice               No

Video codec & format support


The following tables show the video codec support for each device family. Containers in which these codecs can appear include FMP4, MPEG-4, MPEG-2 PS, MPEG-2 TS, MPEG-1, 3GPP, 3GPP2, AVCHD, ASF, AVI, MKV, and DV.

NOTE
Where H.265 support is indicated, it is not necessarily supported by all devices within the device family. Where MPEG-
2/MPEG-1 support is indicated, it is only supported with the installation of the optional Microsoft DVD Universal Windows
app.

Desktop

CODEC               SUPPORT
MPEG-1              D/E
MPEG-2              D/E
MPEG-4 (Part 2)     D
H.265               D/E
H.264               D/E
H.263               D/E
VC-1                D
WMV 7/8/9           D/E
WMV9 Screen         D
DV                  D
Motion JPEG         D

Mobile

CODEC               SUPPORT
MPEG-1              No
MPEG-2              No
MPEG-4 (Part 2)     D
H.265               D/E
H.264               D/E
H.263               D/E
VC-1                D
WMV 7/8/9           No
WMV9 Screen         No
DV                  No
Motion JPEG         D

IoT Core (x86)

CODEC               SUPPORT
MPEG-1              No
MPEG-2              No
MPEG-4 (Part 2)     D
H.265               D/E
H.264               D/E
H.263               D/E
VC-1                D
WMV 7/8/9           D
WMV9 Screen         D
DV                  No
Motion JPEG         D

IoT Core (ARM)

CODEC               SUPPORT
MPEG-1              No
MPEG-2              No
MPEG-4 (Part 2)     No
H.265               D/E
H.264               D/E
H.263               No
VC-1                D
WMV 7/8/9           D
WMV9 Screen         No
DV                  No
Motion JPEG         D

Xbox

CODEC               SUPPORT
MPEG-1              D/E
MPEG-2              D/E
MPEG-4 (Part 2)     D
H.265               D/E
H.264               D/E
H.263               D/E
VC-1                D
WMV 7/8/9           No
WMV9 Screen         No
DV                  No
Motion JPEG         D

Image codec & format support


CODEC           DESKTOP     OTHER DEVICE FAMILIES

BMP             D/E         D/E
DDS             D/E (1)     D/E (1)
DNG             D (2)       D (2)
GIF             D/E         D/E
ICO             D           D
JPEG            D/E         D/E
JPEG-XR         D/E         D/E
PNG             D/E         D/E
TIFF            D/E         D/E
Camera RAW      D (3)       No

(1) DDS images using BC1 through BC5 compression are supported.
(2) DNG images with a non-RAW embedded preview are supported.
(3) Only certain camera RAW formats are supported.

For more information on image codecs, see Native WIC Codecs.
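As an illustrative sketch (not part of the original article's samples), an app can probe decode support for a particular image file by attempting to create a BitmapDecoder for it; CreateAsync fails when no installed codec can decode the format. The file variable is assumed to be a StorageFile obtained elsewhere, for example from a FileOpenPicker.

using Windows.Graphics.Imaging;
using Windows.Storage;
using Windows.Storage.Streams;

// file is a StorageFile obtained elsewhere, e.g. from a FileOpenPicker (assumed)
try
{
    using (IRandomAccessStream stream = await file.OpenAsync(FileAccessMode.Read))
    {
        BitmapDecoder decoder = await BitmapDecoder.CreateAsync(stream);
        System.Diagnostics.Debug.WriteLine(String.Format("Decoded {0}x{1} image", decoder.PixelWidth, decoder.PixelHeight));
    }
}
catch (Exception ex)
{
    // No installed codec could decode this file
    System.Diagnostics.Debug.WriteLine("Decode failed: " + ex.Message);
}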


Contacts and calendar

You can let your users access their contacts and appointments so they can share content, email, calendar info, or
messages with each other, or whatever functionality you design.
To see a few different ways in which your app can access contacts and appointments, see these topics:

Select contacts
Through the Windows.ApplicationModel.Contacts namespace, you have several options for selecting contacts. Here, we'll show you how to select a single contact or multiple contacts, and we'll show you how to configure the contact picker to retrieve only the contact information that your app needs.

Send email
Shows how to launch the compose email dialog to allow the user to send an email message. You can pre-populate the fields of the email with data before showing the dialog. The message will not be sent until the user taps the send button.

Send an SMS message
This topic shows you how to launch the compose SMS dialog to allow the user to send an SMS message. You can pre-populate the fields of the SMS with data before showing the dialog. The message will not be sent until the user taps the send button.

Manage appointments
Through the Windows.ApplicationModel.Appointments namespace, you can create and manage appointments in a user's calendar app. Here, we'll show you how to create an appointment, add it to a calendar app, replace it in the calendar app, and remove it from the calendar app. We'll also show how to display a time span for a calendar app and create an appointment-recurrence object.

Connect your app to actions on a contact card
Shows how to make your app appear next to actions on a contact card or mini contact card. Users can choose your app to perform an action such as open a profile page, place a call, or send a message.

Related topics
Appointments API sample
Contact manager API sample
Contact Picker app sample
Handling Contact Actions sample
Contact Card Integration Sample
Select contacts

Through the Windows.ApplicationModel.Contacts namespace, you have several options for selecting contacts.
Here, we'll show you how to select a single contact or multiple contacts, and we'll show you how to configure the
contact picker to retrieve only the contact information that your app needs.

Set up the contact picker


Create an instance of Windows.ApplicationModel.Contacts.ContactPicker and assign it to a variable.

var contactPicker = new Windows.ApplicationModel.Contacts.ContactPicker();

Set the selection mode (optional)


By default, the contact picker retrieves all of the available data for the contacts that the user selects. The
SelectionMode property lets you configure the contact picker to retrieve only the data fields that your app needs.
This is a more efficient way to use the contact picker if you only need a subset of the available contact data.
First, set the SelectionMode property to Fields:

contactPicker.SelectionMode = Windows.ApplicationModel.Contacts.ContactSelectionMode.Fields;

Then, use the DesiredFieldsWithContactFieldType property to specify the fields that you want the contact
picker to retrieve. This example configures the contact picker to retrieve email addresses:

contactPicker.DesiredFieldsWithContactFieldType.Add(Windows.ApplicationModel.Contacts.ContactFieldType.Email);

Launch the picker


Call PickContactAsync to launch the contact picker and let the user select a single contact.

Contact contact = await contactPicker.PickContactAsync();

Use PickContactsAsync if you want the user to select one or more contacts.

public IList<Contact> contacts;


contacts = await contactPicker.PickContactsAsync();

Process the contacts


When the picker returns, check whether the user has selected any contacts. If so, process the contact information.
This example shows how to process a single contact. Here we retrieve the contact's name and copy it into a
TextBlock control called OutputName.
if (contact != null)
{
OutputName.Text = contact.DisplayName;
}
else
{
rootPage.NotifyUser("No contact was selected.", NotifyType.ErrorMessage);
}

This example shows how to process multiple contacts.

if (contacts != null && contacts.Count > 0)


{
foreach (Contact contact in contacts)
{
// Do something with the contact information.
}
}

Complete example (single contact)


This example uses the contact picker to retrieve a single contact's name along with an email address, location or
phone number.

// ...
using Windows.ApplicationModel.Contacts;
// ...

private async void PickAContactButton_Click(object sender, RoutedEventArgs e)


{
ContactPicker contactPicker = new ContactPicker();

contactPicker.DesiredFieldsWithContactFieldType.Add(ContactFieldType.Email);
contactPicker.DesiredFieldsWithContactFieldType.Add(ContactFieldType.Address);
contactPicker.DesiredFieldsWithContactFieldType.Add(ContactFieldType.PhoneNumber);

Contact contact = await contactPicker.PickContactAsync();

if (contact != null)
{
OutputFields.Visibility = Visibility.Visible;
OutputEmpty.Visibility = Visibility.Collapsed;

OutputName.Text = contact.DisplayName;

AppendContactFieldValues(OutputEmails, contact.Emails);
AppendContactFieldValues(OutputPhoneNumbers, contact.Phones);
AppendContactFieldValues(OutputAddresses, contact.Addresses);
}
else
{
OutputEmpty.Visibility = Visibility.Visible;
OutputFields.Visibility = Visibility.Collapsed;
}
}

private void AppendContactFieldValues<T>(TextBlock content, IList<T> fields)
{
if (fields.Count > 0)
{
StringBuilder output = new StringBuilder();

if (fields[0].GetType() == typeof(ContactEmail))
{
foreach (ContactEmail email in fields as IList<ContactEmail>)
{
output.AppendFormat("Email: {0} ({1})\n", email.Address, email.Kind);
}
}
else if (fields[0].GetType() == typeof(ContactPhone))
{
foreach (ContactPhone phone in fields as IList<ContactPhone>)
{
output.AppendFormat("Phone: {0} ({1})\n", phone.Number, phone.Kind);
}
}
else if (fields[0].GetType() == typeof(ContactAddress))
{
List<String> addressParts = null;
string unstructuredAddress = "";

foreach (ContactAddress address in fields as IList<ContactAddress>)


{
addressParts = (new List<string> { address.StreetAddress, address.Locality, address.Region, address.PostalCode });
unstructuredAddress = string.Join(", ", addressParts.FindAll(s => !string.IsNullOrEmpty(s)));
output.AppendFormat("Address: {0} ({1})\n", unstructuredAddress, address.Kind);
}
}

content.Visibility = Visibility.Visible;
content.Text = output.ToString();
}
else
{
content.Visibility = Visibility.Collapsed;
}
}

Complete example (multiple contacts)


This example uses the contact picker to retrieve multiple contacts and then adds the contacts to a ListView control
called OutputContacts.
MainPage rootPage = MainPage.Current;
public IList<Contact> contacts;

private async void PickContactsButton_Click(object sender, RoutedEventArgs e)
{
var contactPicker = new Windows.ApplicationModel.Contacts.ContactPicker();
contactPicker.CommitButtonText = "Select";
contacts = await contactPicker.PickContactsAsync();

// Clear the ListView.


OutputContacts.Items.Clear();

if (contacts != null && contacts.Count > 0)


{
OutputContacts.Visibility = Windows.UI.Xaml.Visibility.Visible;
OutputEmpty.Visibility = Visibility.Collapsed;

foreach (Contact contact in contacts)


{
// Add the contacts to the ListView.
OutputContacts.Items.Add(new ContactItemAdapter(contact));
}
}
else
{
OutputEmpty.Visibility = Visibility.Visible;
}
}

public class ContactItemAdapter


{
public string Name { get; private set; }
public string SecondaryText { get; private set; }

public ContactItemAdapter(Contact contact)


{
Name = contact.DisplayName;
if (contact.Emails.Count > 0)
{
SecondaryText = contact.Emails[0].Address;
}
else if (contact.Phones.Count > 0)
{
SecondaryText = contact.Phones[0].Number;
}
else if (contact.Addresses.Count > 0)
{
List<string> addressParts = (new List<string> { contact.Addresses[0].StreetAddress,
contact.Addresses[0].Locality, contact.Addresses[0].Region, contact.Addresses[0].PostalCode });
string unstructuredAddress = string.Join(", ", addressParts.FindAll(s => !string.IsNullOrEmpty(s)));
SecondaryText = unstructuredAddress;
}
}
}

Summary and next steps


Now you have a basic understanding of how to use the contact picker to retrieve contact information. Download
the Universal Windows app samples from GitHub to see more examples of how to use contacts and the contact
picker.
Send email

[ Updated for UWP apps on Windows 10. For Windows 8.x articles, see the archive ]
This topic shows how to launch the compose email dialog to allow the user to send an email message. You can pre-populate
the fields of the email with data before showing the dialog. The message will not be sent until the user taps the
send button.

Launch the compose email dialog


Create a new EmailMessage object and set the data that you want to be pre-populated in the compose email
dialog. Call ShowComposeNewEmailAsync to show the dialog.

private async Task ComposeEmail(Windows.ApplicationModel.Contacts.Contact recipient,
    string messageBody,
    StorageFile attachmentFile)
{
var emailMessage = new Windows.ApplicationModel.Email.EmailMessage();
emailMessage.Body = messageBody;

if (attachmentFile != null)
{
var stream = Windows.Storage.Streams.RandomAccessStreamReference.CreateFromFile(attachmentFile);

var attachment = new Windows.ApplicationModel.Email.EmailAttachment(
    attachmentFile.Name,
    stream);

emailMessage.Attachments.Add(attachment);
}

var email = recipient.Emails.FirstOrDefault<Windows.ApplicationModel.Contacts.ContactEmail>();


if (email != null)
{
var emailRecipient = new Windows.ApplicationModel.Email.EmailRecipient(email.Address);
emailMessage.To.Add(emailRecipient);
}

await Windows.ApplicationModel.Email.EmailManager.ShowComposeNewEmailAsync(emailMessage);
}
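
For illustration, here's a minimal sketch of a call site for this helper. It assumes you obtain the recipient from the
contact picker (see Select contacts); the handler name is hypothetical, and null is passed to skip the attachment.

private async void SendEmailButton_Click(object sender, RoutedEventArgs e)
{
    // Let the user pick the contact to use as the recipient.
    var picker = new Windows.ApplicationModel.Contacts.ContactPicker();
    Windows.ApplicationModel.Contacts.Contact recipient = await picker.PickContactAsync();

    if (recipient != null)
    {
        await ComposeEmail(recipient, "Hello from my app!", null);
    }
}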

Summary and next steps


This topic has shown you how to launch the compose email dialog. For information on selecting contacts to use as
recipients for an email message, see Select contacts. See PickSingleFileAsync to select a file to use as an email
attachment.
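
For example, a minimal sketch of selecting an attachment with the file picker might look like this (the file-type
filter shown is illustrative):

var openPicker = new Windows.Storage.Pickers.FileOpenPicker();
openPicker.SuggestedStartLocation = Windows.Storage.Pickers.PickerLocationId.DocumentsLibrary;
openPicker.FileTypeFilter.Add("*"); // Accept any file type as an attachment.

// Returns null if the user cancels the picker.
Windows.Storage.StorageFile attachmentFile = await openPicker.PickSingleFileAsync();
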
Related topics
Selecting contacts
How to continue your Windows Phone app after calling a file picker
Send an SMS message

[ Updated for UWP apps on Windows 10. For Windows 8.x articles, see the archive ]
This topic shows you how to launch the compose SMS dialog to allow the user to send an SMS message. You can
pre-populate the fields of the SMS with data before showing the dialog. The message will not be sent until the user
taps the send button.

Launch the compose SMS dialog


Create a new ChatMessage object and set the data that you want to be pre-populated in the compose SMS
dialog. Call ShowComposeSmsMessageAsync to show the dialog.

private async void ComposeSms(Windows.ApplicationModel.Contacts.Contact recipient,
string messageBody,
StorageFile attachmentFile,
string mimeType)
{
var chatMessage = new Windows.ApplicationModel.Chat.ChatMessage();
chatMessage.Body = messageBody;

if (attachmentFile != null)
{
var stream = Windows.Storage.Streams.RandomAccessStreamReference.CreateFromFile(attachmentFile);

var attachment = new Windows.ApplicationModel.Chat.ChatMessageAttachment(
    mimeType,
    stream);

chatMessage.Attachments.Add(attachment);
}

var phone = recipient.Phones.FirstOrDefault<Windows.ApplicationModel.Contacts.ContactPhone>();


if (phone != null)
{
chatMessage.Recipients.Add(phone.Number);
}
await Windows.ApplicationModel.Chat.ChatMessageManager.ShowComposeSmsMessageAsync(chatMessage);
}
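
As with the email example, here's a minimal sketch of a call site. The handler name and message text are
hypothetical; null is passed for the attachment and MIME type to send a plain text message.

private async void SendSmsButton_Click(object sender, RoutedEventArgs e)
{
    // Let the user pick the contact to use as the recipient.
    var picker = new Windows.ApplicationModel.Contacts.ContactPicker();
    Windows.ApplicationModel.Contacts.Contact recipient = await picker.PickContactAsync();

    if (recipient != null)
    {
        ComposeSms(recipient, "Hello from my app!", null, null);
    }
}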

Summary and next steps


This topic has shown you how to launch the compose SMS dialog. For information on selecting contacts to use as
recipients for an SMS message, see Select contacts. Download the Universal Windows app samples from GitHub to
see more examples of how to send and receive SMS messages by using a background task.

Related topics
Select contacts
Manage appointments

[ Updated for UWP apps on Windows 10. For Windows 8.x articles, see the archive ]
Through the Windows.ApplicationModel.Appointments namespace, you can create and manage appointments
in a user's calendar app. Here, we'll show you how to create an appointment, add it to a calendar app, replace it in
the calendar app, and remove it from the calendar app. We'll also show how to display a time span for a calendar
app and create an appointment-recurrence object.

Create an appointment and apply data to it


Create a Windows.ApplicationModel.Appointments.Appointment object and assign it to a variable. Then,
apply to the Appointment the appointment properties that were supplied through the UI by a user.

private void Create_Click(object sender, RoutedEventArgs e)
{
bool isAppointmentValid = true;
var appointment = new Windows.ApplicationModel.Appointments.Appointment();

// StartTime
var date = StartTimeDatePicker.Date;
var time = StartTimeTimePicker.Time;
var timeZoneOffset = TimeZoneInfo.Local.GetUtcOffset(DateTime.Now);
var startTime = new DateTimeOffset(date.Year, date.Month, date.Day, time.Hours, time.Minutes, 0, timeZoneOffset);
appointment.StartTime = startTime;

// Subject
appointment.Subject = SubjectTextBox.Text;

if (appointment.Subject.Length > 255)


{
isAppointmentValid = false;
ResultTextBlock.Text = "The subject cannot be greater than 255 characters.";
}

// Location
appointment.Location = LocationTextBox.Text;

if (appointment.Location.Length > 32768)


{
isAppointmentValid = false;
ResultTextBlock.Text = "The location cannot be greater than 32,768 characters.";
}

// Details
appointment.Details = DetailsTextBox.Text;

if (appointment.Details.Length > 1073741823)


{
isAppointmentValid = false;
ResultTextBlock.Text = "The details cannot be greater than 1,073,741,823 characters.";
}

// Duration
if (DurationComboBox.SelectedIndex == 0)
{
// 30 minute duration is selected
appointment.Duration = TimeSpan.FromMinutes(30);
}
else
{
// 1 hour duration is selected
appointment.Duration = TimeSpan.FromHours(1);
}

// All Day
appointment.AllDay = AllDayCheckBox.IsChecked.Value;

// Reminder
if (ReminderCheckBox.IsChecked.Value)
{
switch (ReminderComboBox.SelectedIndex)
{
case 0:
appointment.Reminder = TimeSpan.FromMinutes(15);
break;
case 1:
appointment.Reminder = TimeSpan.FromHours(1);
break;
case 2:
appointment.Reminder = TimeSpan.FromDays(1);
break;
}
}

//Busy Status
switch (BusyStatusComboBox.SelectedIndex)
{
case 0:
appointment.BusyStatus = Windows.ApplicationModel.Appointments.AppointmentBusyStatus.Busy;
break;
case 1:
appointment.BusyStatus = Windows.ApplicationModel.Appointments.AppointmentBusyStatus.Tentative;
break;
case 2:
appointment.BusyStatus = Windows.ApplicationModel.Appointments.AppointmentBusyStatus.Free;
break;
case 3:
appointment.BusyStatus = Windows.ApplicationModel.Appointments.AppointmentBusyStatus.OutOfOffice;
break;
case 4:
appointment.BusyStatus = Windows.ApplicationModel.Appointments.AppointmentBusyStatus.WorkingElsewhere;
break;
}

// Sensitivity
switch (SensitivityComboBox.SelectedIndex)
{
case 0:
appointment.Sensitivity = Windows.ApplicationModel.Appointments.AppointmentSensitivity.Public;
break;
case 1:
appointment.Sensitivity = Windows.ApplicationModel.Appointments.AppointmentSensitivity.Private;
break;
}

// Uri
if (UriTextBox.Text.Length > 0)
{
try
{
appointment.Uri = new System.Uri(UriTextBox.Text);
}
catch (Exception)
{
isAppointmentValid = false;
ResultTextBlock.Text = "The Uri provided is invalid.";
}
}

// Organizer
// Note: Organizer can only be set if there are no invitees added to this appointment.
if (OrganizerRadioButton.IsChecked.Value)
{
var organizer = new Windows.ApplicationModel.Appointments.AppointmentOrganizer();

// Organizer Display Name


organizer.DisplayName = OrganizerDisplayNameTextBox.Text;

if (organizer.DisplayName.Length > 256)


{
isAppointmentValid = false;
ResultTextBlock.Text = "The organizer display name cannot be greater than 256 characters.";
}
else
{
// Organizer Address (e.g. Email Address)
organizer.Address = OrganizerAddressTextBox.Text;

if (organizer.Address.Length > 321)


{
isAppointmentValid = false;
ResultTextBlock.Text = "The organizer address cannot be greater than 321 characters.";
}
else if (organizer.Address.Length == 0)
{
isAppointmentValid = false;
ResultTextBlock.Text = "The organizer address must be greater than 0 characters.";
}
else
{
appointment.Organizer = organizer;
}
}
}

// Invitees
// Note: If the size of the Invitees list is not zero, then an Organizer cannot be set.
if (InviteeRadioButton.IsChecked.Value)
{
var invitee = new Windows.ApplicationModel.Appointments.AppointmentInvitee();

// Invitee Display Name


invitee.DisplayName = InviteeDisplayNameTextBox.Text;

if (invitee.DisplayName.Length > 256)


{
isAppointmentValid = false;
ResultTextBlock.Text = "The invitee display name cannot be greater than 256 characters.";
}
else
{
// Invitee Address (e.g. Email Address)
invitee.Address = InviteeAddressTextBox.Text;

if (invitee.Address.Length > 321)


{
isAppointmentValid = false;
ResultTextBlock.Text = "The invitee address cannot be greater than 321 characters.";
}
else if (invitee.Address.Length == 0)
{
isAppointmentValid = false;
ResultTextBlock.Text = "The invitee address must be greater than 0 characters.";
}
else
{
// Invitee Role
switch (RoleComboBox.SelectedIndex)
{
case 0:
invitee.Role = Windows.ApplicationModel.Appointments.AppointmentParticipantRole.RequiredAttendee;
break;
case 1:
invitee.Role = Windows.ApplicationModel.Appointments.AppointmentParticipantRole.OptionalAttendee;
break;
case 2:
invitee.Role = Windows.ApplicationModel.Appointments.AppointmentParticipantRole.Resource;
break;
}

// Invitee Response
switch (ResponseComboBox.SelectedIndex)
{
case 0:
invitee.Response = Windows.ApplicationModel.Appointments.AppointmentParticipantResponse.None;
break;
case 1:
invitee.Response = Windows.ApplicationModel.Appointments.AppointmentParticipantResponse.Tentative;
break;
case 2:
invitee.Response = Windows.ApplicationModel.Appointments.AppointmentParticipantResponse.Accepted;
break;
case 3:
invitee.Response = Windows.ApplicationModel.Appointments.AppointmentParticipantResponse.Declined;
break;
case 4:
invitee.Response = Windows.ApplicationModel.Appointments.AppointmentParticipantResponse.Unknown;
break;
}

appointment.Invitees.Add(invitee);
}
}
}

if (isAppointmentValid)
{
ResultTextBlock.Text = "The appointment was created successfully and is valid.";
}
}

Add an appointment to the user's calendar


Create a Windows.ApplicationModel.Appointments.Appointment object and assign it to a variable. Then, call
the AppointmentManager.ShowAddAppointmentAsync(Appointment, Rect, Placement) method to show
the default appointments provider add-appointment UI, to enable the user to add an appointment. If the user
clicked Add, the sample prints the appointment identifier that ShowAddAppointmentAsync returned.
private async void Add_Click(object sender, RoutedEventArgs e)
{
// Create an Appointment that should be added to the user's appointments provider app.
var appointment = new Windows.ApplicationModel.Appointments.Appointment();

// Get the selection rect of the button pressed to add this appointment
var rect = GetElementRect(sender as FrameworkElement);

// ShowAddAppointmentAsync returns an appointment id if the appointment given was added to the user's calendar.
// This value should be stored in app data and roamed so that the appointment can be replaced or removed in the future.
// An empty string return value indicates that the user canceled the operation before the appointment was added.
String appointmentId = await Windows.ApplicationModel.Appointments.AppointmentManager.ShowAddAppointmentAsync(
appointment, rect, Windows.UI.Popups.Placement.Default);
if (appointmentId != String.Empty)
{
ResultTextBlock.Text = "Appointment Id: " + appointmentId;
}
else
{
ResultTextBlock.Text = "Appointment not added.";
}
}
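
The handlers in this topic call a GetElementRect helper that isn't shown above. A typical implementation, matching
the one used in the SDK samples, computes the bounding rectangle of the pressed button so the appointments
provider UI can be positioned near it:

private Windows.Foundation.Rect GetElementRect(FrameworkElement element)
{
    // Transform the element's origin into window coordinates and combine it with the element's size.
    Windows.UI.Xaml.Media.GeneralTransform transform = element.TransformToVisual(null);
    Windows.Foundation.Point point = transform.TransformPoint(new Windows.Foundation.Point());
    return new Windows.Foundation.Rect(point, new Windows.Foundation.Size(element.ActualWidth, element.ActualHeight));
}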

Note For Windows Phone Store apps, ShowAddAppointment functions just like ShowEditNewAppointment
in that the dialog displayed for adding the appointment is editable.

Replace an appointment in the user's calendar


Create a Windows.ApplicationModel.Appointments.Appointment object and assign it to a variable. Then, call
the appropriate AppointmentManager.ShowReplaceAppointmentAsync method to show the default
appointments provider replace-appointment UI to enable the user to replace an appointment. The user also
provides the appointment identifier that they want to replace. This identifier was returned from
AppointmentManager.ShowAddAppointmentAsync. If the user clicked Replace, the sample prints that it
updated that appointment identifier.
private async void Replace_Click(object sender, RoutedEventArgs e)
{
// The appointment id argument for ReplaceAppointmentAsync is typically retrieved from AddAppointmentAsync and stored in app data.
String appointmentIdOfAppointmentToReplace = AppointmentIdTextBox.Text;

if (String.IsNullOrEmpty(appointmentIdOfAppointmentToReplace))
{
ResultTextBlock.Text = "The appointment id cannot be empty";
}
else
{
// The Appointment argument for ReplaceAppointmentAsync should contain all of the Appointment's properties, including those that may have changed.
var appointment = new Windows.ApplicationModel.Appointments.Appointment();

// Get the selection rect of the button pressed to replace this appointment
var rect = GetElementRect(sender as FrameworkElement);

// ReplaceAppointmentAsync returns an updated appointment id when the appointment was successfully replaced.
// The updated id may or may not be the same as the original one retrieved from AddAppointmentAsync.
// An optional instance start time can be provided to indicate that a specific instance on that date should be replaced
// in the case of a recurring appointment.
// If the appointment id returned is an empty string, that indicates that the appointment was not replaced.
String updatedAppointmentId;
if (InstanceStartDateCheckBox.IsChecked.Value)
{
// Replace a specific instance starting on the date provided.
var instanceStartDate = InstanceStartDateDatePicker.Date;
updatedAppointmentId = await Windows.ApplicationModel.Appointments.AppointmentManager.ShowReplaceAppointmentAsync(
appointmentIdOfAppointmentToReplace, appointment, rect, Windows.UI.Popups.Placement.Default, instanceStartDate);
}
else
{
// Replace an appointment that occurs only once or, in the case of a recurring appointment, replace the entire series.
updatedAppointmentId = await Windows.ApplicationModel.Appointments.AppointmentManager.ShowReplaceAppointmentAsync(
appointmentIdOfAppointmentToReplace, appointment, rect, Windows.UI.Popups.Placement.Default);
}

if (updatedAppointmentId != String.Empty)
{
ResultTextBlock.Text = "Updated Appointment Id: " + updatedAppointmentId;
}
else
{
ResultTextBlock.Text = "Appointment not replaced.";
}
}
}

Remove an appointment from the user's calendar


Call the appropriate AppointmentManager.ShowRemoveAppointmentAsync method to show the default
appointments provider remove-appointment UI, to enable the user to remove an appointment. The user also
provides the appointment identifier that they want to remove. This identifier was returned from
AppointmentManager.ShowAddAppointmentAsync. If the user clicked Delete, the sample prints that it
removed the appointment specified by that appointment identifier.
private async void Remove_Click(object sender, RoutedEventArgs e)
{
// The appointment id argument for ShowRemoveAppointmentAsync is typically retrieved from AddAppointmentAsync and stored in app data.
String appointmentId = AppointmentIdTextBox.Text;

// The appointment id cannot be null or empty.


if (String.IsNullOrEmpty(appointmentId))
{
ResultTextBlock.Text = "The appointment id cannot be empty";
}
else
{
// Get the selection rect of the button pressed to remove this appointment
var rect = GetElementRect(sender as FrameworkElement);

// ShowRemoveAppointmentAsync returns a boolean indicating whether or not the appointment related to the appointment id given was removed.
// An optional instance start time can be provided to indicate that a specific instance on that date should be removed
// in the case of a recurring appointment.
bool removed;
if (InstanceStartDateCheckBox.IsChecked.Value)
{
// Remove a specific instance starting on the date provided.
var instanceStartDate = InstanceStartDateDatePicker.Date;
removed = await Windows.ApplicationModel.Appointments.AppointmentManager.ShowRemoveAppointmentAsync(
appointmentId, rect, Windows.UI.Popups.Placement.Default, instanceStartDate);
}
else
{
// Remove an appointment that occurs only once or, in the case of a recurring appointment, remove the entire series.
removed = await Windows.ApplicationModel.Appointments.AppointmentManager.ShowRemoveAppointmentAsync(
appointmentId, rect, Windows.UI.Popups.Placement.Default);
}

if (removed)
{
ResultTextBlock.Text = "Appointment removed";
}
else
{
ResultTextBlock.Text = "Appointment not removed";
}
}
}

Show a time span for the appointments provider


Call the AppointmentManager.ShowTimeFrameAsync method to show a specific time span for the default
appointments provider's primary UI if the user clicked Show. The sample prints that the default appointments
provider appeared on screen.

private async void Show_Click(object sender, RoutedEventArgs e)
{
var dateToShow = new DateTimeOffset(2015, 6, 12, 18, 32, 0, 0, TimeSpan.FromHours(-8));
var duration = TimeSpan.FromHours(1);
await Windows.ApplicationModel.Appointments.AppointmentManager.ShowTimeFrameAsync(dateToShow, duration);
ResultTextBlock.Text = "The default appointments provider should have appeared on screen.";
}

Create an appointment-recurrence object and apply data to it


Create a Windows.ApplicationModel.Appointments.AppointmentRecurrence object and assign it to a
variable. Then, apply to the AppointmentRecurrence the recurrence properties that were supplied through the UI
by a user.

private void Create_Click(object sender, RoutedEventArgs e)
{
bool isRecurrenceValid = true;
var recurrence = new Windows.ApplicationModel.Appointments.AppointmentRecurrence();

// Unit
switch (UnitComboBox.SelectedIndex)
{
case 0:
recurrence.Unit = Windows.ApplicationModel.Appointments.AppointmentRecurrenceUnit.Daily;
break;
case 1:
recurrence.Unit = Windows.ApplicationModel.Appointments.AppointmentRecurrenceUnit.Weekly;
break;
case 2:
recurrence.Unit = Windows.ApplicationModel.Appointments.AppointmentRecurrenceUnit.Monthly;
break;
case 3:
recurrence.Unit = Windows.ApplicationModel.Appointments.AppointmentRecurrenceUnit.MonthlyOnDay;
break;
case 4:
recurrence.Unit = Windows.ApplicationModel.Appointments.AppointmentRecurrenceUnit.Yearly;
break;
case 5:
recurrence.Unit = Windows.ApplicationModel.Appointments.AppointmentRecurrenceUnit.YearlyOnDay;
break;
}

// Occurrences
// Note: Occurrences and Until properties are mutually exclusive.
if (OccurrencesRadioButton.IsChecked.Value)
{
recurrence.Occurrences = (uint)OccurrencesSlider.Value;
}

// Until
// Note: Until and Occurrences properties are mutually exclusive.
if (UntilRadioButton.IsChecked.Value)
{
recurrence.Until = UntilDatePicker.Date;
}

// Interval
recurrence.Interval = (uint)IntervalSlider.Value;

// Week of the month


switch (WeekOfMonthComboBox.SelectedIndex)
{
case 0:
recurrence.WeekOfMonth = Windows.ApplicationModel.Appointments.AppointmentWeekOfMonth.First;
break;
case 1:
recurrence.WeekOfMonth = Windows.ApplicationModel.Appointments.AppointmentWeekOfMonth.Second;
break;
case 2:
recurrence.WeekOfMonth = Windows.ApplicationModel.Appointments.AppointmentWeekOfMonth.Third;
break;
case 3:
recurrence.WeekOfMonth = Windows.ApplicationModel.Appointments.AppointmentWeekOfMonth.Fourth;
break;
case 4:
recurrence.WeekOfMonth = Windows.ApplicationModel.Appointments.AppointmentWeekOfMonth.Last;
break;
}

// Days of the Week


// Note: For Weekly, MonthlyOnDay or YearlyOnDay recurrence unit values, at least one day must be specified.
if (SundayCheckBox.IsChecked.Value) { recurrence.DaysOfWeek |=
Windows.ApplicationModel.Appointments.AppointmentDaysOfWeek.Sunday; }
if (MondayCheckBox.IsChecked.Value) { recurrence.DaysOfWeek |=
Windows.ApplicationModel.Appointments.AppointmentDaysOfWeek.Monday; }
if (TuesdayCheckBox.IsChecked.Value) { recurrence.DaysOfWeek |=
Windows.ApplicationModel.Appointments.AppointmentDaysOfWeek.Tuesday; }
if (WednesdayCheckBox.IsChecked.Value) { recurrence.DaysOfWeek |=
Windows.ApplicationModel.Appointments.AppointmentDaysOfWeek.Wednesday; }
if (ThursdayCheckBox.IsChecked.Value) { recurrence.DaysOfWeek |=
Windows.ApplicationModel.Appointments.AppointmentDaysOfWeek.Thursday; }
if (FridayCheckBox.IsChecked.Value) { recurrence.DaysOfWeek |=
Windows.ApplicationModel.Appointments.AppointmentDaysOfWeek.Friday; }
if (SaturdayCheckBox.IsChecked.Value) { recurrence.DaysOfWeek |=
Windows.ApplicationModel.Appointments.AppointmentDaysOfWeek.Saturday; }

if (((recurrence.Unit == Windows.ApplicationModel.Appointments.AppointmentRecurrenceUnit.Weekly) ||
(recurrence.Unit == Windows.ApplicationModel.Appointments.AppointmentRecurrenceUnit.MonthlyOnDay) ||
(recurrence.Unit == Windows.ApplicationModel.Appointments.AppointmentRecurrenceUnit.YearlyOnDay)) &&
(recurrence.DaysOfWeek == Windows.ApplicationModel.Appointments.AppointmentDaysOfWeek.None))
{
isRecurrenceValid = false;
ResultTextBlock.Text = "The recurrence specified is invalid. For Weekly, MonthlyOnDay or YearlyOnDay recurrence unit values, " +
"at least one day must be specified.";
}

// Month of the year


recurrence.Month = (uint)MonthSlider.Value;

// Day of the month


recurrence.Day = (uint)DaySlider.Value;

if (isRecurrenceValid)
{
ResultTextBlock.Text = "The recurrence specified was created successfully and is valid.";
}
}
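
A recurrence takes effect only once you assign it to an appointment's Recurrence property. For example, this short
sketch (assuming an Appointment created as shown earlier) makes the appointment repeat every Monday for ten
weeks:

var recurrence = new Windows.ApplicationModel.Appointments.AppointmentRecurrence();
recurrence.Unit = Windows.ApplicationModel.Appointments.AppointmentRecurrenceUnit.Weekly;
recurrence.DaysOfWeek = Windows.ApplicationModel.Appointments.AppointmentDaysOfWeek.Monday;
recurrence.Occurrences = 10; // Stop after ten instances.

appointment.Recurrence = recurrence;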

Add a new editable appointment


ShowEditNewAppointmentAsync works just like ShowAddAppointmentAsync except that the dialog for
adding the appointment is editable so that the user can modify the appointment data before saving it.
private async void AddAndEdit_Click(object sender, RoutedEventArgs e)
{
// Create an Appointment that should be added to the user's appointments provider app.
var appointment = new Windows.ApplicationModel.Appointments.Appointment();

appointment.StartTime = DateTime.Now + TimeSpan.FromDays(1);
appointment.Duration = TimeSpan.FromHours(1);
appointment.Location = "Meeting location";
appointment.Subject = "Meeting subject";
appointment.Details = "Meeting description";
appointment.Reminder = TimeSpan.FromMinutes(15); // Remind me 15 minutes prior

// ShowAddAppointmentAsync returns an appointment id if the appointment given was added to the user's calendar.
// This value should be stored in app data and roamed so that the appointment can be replaced or removed in the future.
// An empty string return value indicates that the user canceled the operation before the appointment was added.
String appointmentId =
await Windows.ApplicationModel.Appointments.AppointmentManager.ShowEditNewAppointmentAsync(appointment);

if (appointmentId != String.Empty)
{
ResultTextBlock.Text = "Appointment Id: " + appointmentId;
}
else
{
ResultTextBlock.Text = "Appointment not added.";
}
}

Show appointment details


ShowAppointmentDetailsAsync causes the system to show details for the specified appointment. An app that
implements app calendars may choose to be activated to show details for appointments in calendars it owns.
Otherwise, the system will show the appointment details. An overload of the method that accepts a start date
argument is provided to show details for an instance of a recurring appointment.

private async void ShowAppointmentDetails_Click(object sender, RoutedEventArgs e)
{
// currentAppointment and instanceStartTime are fields maintained elsewhere on this page (not shown here).
if (instanceStartTime == null)
{
await Windows.ApplicationModel.Appointments.AppointmentManager.ShowAppointmentDetailsAsync(
currentAppointment.LocalId);
}
else
{
// Specify a start time to show an instance of a recurring appointment
await Windows.ApplicationModel.Appointments.AppointmentManager.ShowAppointmentDetailsAsync(
currentAppointment.LocalId, instanceStartTime);
}
}

Summary and next steps


Now you have a basic understanding of how to manage appointments. Download the Universal Windows app
samples from GitHub to see more examples of how to manage appointments.

Related topics
Appointments API sample
Connect your app to actions on a contact card

Your app can appear next to actions on a contact card or mini contact card. Users can choose your app to perform
an action such as open a profile page, place a call, or send a message.

To get started, find existing contacts or create new ones. Next, create an annotation and a few package manifest
entries to describe which actions your app supports. Then, write code that performs the actions.
For a more complete sample, see Contact Card Integration Sample.

Find or create a contact


If your app helps people connect with others, search Windows for contacts and then annotate them. If your app
manages contacts, you can add them to a Windows contact list and then annotate them.
Find a contact
Find contacts by using a name, email address, or phone number.

ContactStore contactStore = await ContactManager.RequestStoreAsync();

// Search by a name, email address, or phone number; here, emailAddress holds the address to find.
IReadOnlyList<Contact> contacts = await contactStore.FindContactsAsync(emailAddress);

Contact contact = contacts[0];

Create a contact
If your app is more like an address book, create contacts and then add them to a contact list.
Contact contact = new Contact();
contact.FirstName = "TestContact";

ContactEmail email = new ContactEmail();


email.Address = "TestContact@contoso.com";
email.Kind = ContactEmailKind.Other;
contact.Emails.Add(email);

ContactPhone phone = new ContactPhone();


phone.Number = "4255550101";
phone.Kind = ContactPhoneKind.Mobile;
contact.Phones.Add(phone);

ContactStore store = await ContactManager.RequestStoreAsync(ContactStoreAccessType.AppContactsReadWrite);

ContactList contactList;

IReadOnlyList<ContactList> contactLists = await store.FindContactListsAsync();

if (0 == contactLists.Count)
contactList = await store.CreateContactListAsync("TestContactList");
else
contactList = contactLists[0];

await contactList.SaveContactAsync(contact);

Tag each contact with an annotation


Tag each contact with a list of actions (operations) that your app can perform (for example: video calls and
messaging).
Then, associate the ID of a contact to an ID that your app uses internally to identify that user.

ContactAnnotationStore annotationStore = await ContactManager.RequestAnnotationStoreAsync(ContactAnnotationStoreAccessType.AppAnnotationsReadWrite);

ContactAnnotationList annotationList;

IReadOnlyList<ContactAnnotationList> annotationLists = await annotationStore.FindAnnotationListsAsync();


if (0 == annotationLists.Count)
annotationList = await annotationStore.CreateAnnotationListAsync();
else
annotationList = annotationLists[0];

ContactAnnotation annotation = new ContactAnnotation();


annotation.ContactId = contact.Id;
annotation.RemoteId = "user22";

annotation.SupportedOperations = ContactAnnotationOperations.Message |
ContactAnnotationOperations.AudioCall |
ContactAnnotationOperations.VideoCall |
ContactAnnotationOperations.ContactProfile;

await annotationList.TrySaveAnnotationAsync(annotation);

Register for each operation


In your package manifest, register for each operation that you listed in your annotation.
Register by adding protocol handlers to the Extensions element of the manifest.
<Extensions>
<uap:Extension Category="windows.protocol">
<uap:Protocol Name="ms-contact-profile">
<uap:DisplayName>TestProfileApp</uap:DisplayName>
</uap:Protocol>
</uap:Extension>
<uap:Extension Category="windows.protocol">
<uap:Protocol Name="ms-ipmessaging">
<uap:DisplayName>TestMsgApp</uap:DisplayName>
</uap:Protocol>
</uap:Extension>
<uap:Extension Category="windows.protocol">
<uap:Protocol Name="ms-voip-video">
<uap:DisplayName>TestVideoApp</uap:DisplayName>
</uap:Protocol>
</uap:Extension>
<uap:Extension Category="windows.protocol">
<uap:Protocol Name="ms-voip-call">
<uap:DisplayName>TestCallApp</uap:DisplayName>
</uap:Protocol>
</uap:Extension>
</Extensions>

You can also add these in the Declarations tab of the manifest designer in Visual Studio.

Find your app next to actions in a contact card


Open the People app. Your app appears next to each action (operation) that you specified in your annotation and
package manifest.
If users choose your app for an action, it appears as the default app for that action the next time users open a
contact card.

Find your app next to actions in a mini contact card


In mini contact cards, your app appears in tabs that represent actions.

Apps such as the Mail app open mini contact cards. Your app can open them too. This code shows you how to do
that.

public async void OpenContactCard(object sender, RoutedEventArgs e)
{
// Get the selection rect of the button pressed to show contact card.
FrameworkElement element = (FrameworkElement)sender;

Windows.UI.Xaml.Media.GeneralTransform buttonTransform = element.TransformToVisual(null);


Windows.Foundation.Point point = buttonTransform.TransformPoint(new Windows.Foundation.Point());
Windows.Foundation.Rect rect =
new Windows.Foundation.Rect(point, new Windows.Foundation.Size(element.ActualWidth, element.ActualHeight));

// Helper method to find a contact, just for illustrative purposes.
Contact contact = await findContact("contoso@contoso.com");

ContactManager.ShowContactCard(contact, rect, Windows.UI.Popups.Placement.Default);
}

To see more examples with mini contact cards, see Contact cards sample.
Just like the contact card, each tab remembers the app that the user last used so it's easy for them to return to your
app.
Perform operations when users select your app in a contact card
Override the Application.OnActivated method in your App.cs file and navigate users to a page in your app. The
Contact Card Integration Sample shows one way to do that.
In the code behind file of the page, override the Page.OnNavigatedTo method. The contact card passes this method
the name of the operation and the ID of the user.
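
For example, a minimal OnActivated override might look like this sketch. It assumes your app's start page is named
MainPage and simply forwards the activation arguments so that OnNavigatedTo (shown below) can inspect them.

protected override void OnActivated(IActivatedEventArgs e)
{
    if (e.Kind == ActivationKind.Protocol)
    {
        var rootFrame = Window.Current.Content as Frame;
        if (rootFrame == null)
        {
            rootFrame = new Frame();
            Window.Current.Content = rootFrame;
        }

        // Pass the protocol arguments through to the page.
        rootFrame.Navigate(typeof(MainPage), e);
        Window.Current.Activate();
    }
}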
To start a video or audio call, see this sample: VoIP sample. You'll find the complete API in the
Windows.ApplicationModel.Calls namespace.
To facilitate messaging, see the Windows.ApplicationModel.Chat namespace.
You can also start another app. That's what this code does.

protected override async void OnNavigatedTo(NavigationEventArgs e)
{
base.OnNavigatedTo(e);

var args = e.Parameter as ProtocolActivatedEventArgs;


// Display the result of the protocol activation if we got here as a result of being activated for a protocol.

if (args != null)
{
var options = new Windows.System.LauncherOptions();
options.DisplayApplicationPicker = true;

options.TargetApplicationPackageFamilyName = ContosoApp; // ContosoApp is a string field holding the package family name of the app to launch.

string launchString = args.Uri.Scheme + ":" + args.Uri.Query;


var launchUri = new Uri(launchString);
await Windows.System.Launcher.LaunchUriAsync(launchUri, options);
}
}

The args.Uri.Scheme property contains the name of the operation, and the args.Uri.Query property contains the ID
of the user.
Data access

[ Updated for UWP apps on Windows 10. For Windows 8.x articles, see the archive ]
This section discusses storing data on the device in a private database and using object relational mapping in
Universal Windows Platform (UWP) apps.
SQLite is included in the UWP SDK. Entity Framework Core works with SQLite in UWP apps. Use these technologies
to develop for offline / intermittent connectivity scenarios, and to persist data across app sessions.

Entity Framework Core with SQLite for C# apps: Entity Framework (EF) is an object-relational mapper that enables
you to work with relational data using domain-specific objects. This article explains how you can use Entity
Framework Core with a SQLite database in a Universal Windows app.

SQLite databases: SQLite is a server-less, embedded database engine. This article explains how to use the SQLite
library included in the SDK, package your own SQLite library in a Universal Windows app, or build it from the
source.
Entity Framework Core with SQLite for C# apps

[ Updated for UWP apps on Windows 10. For Windows 8.x articles, see the archive ]
Entity Framework (EF) is an object-relational mapper that enables you to work with relational data using domain-
specific objects. This article explains how you can use Entity Framework Core with a SQLite database in a Universal
Windows app.
Originally built for .NET developers, Entity Framework Core can be used with SQLite on the Universal Windows
Platform (UWP) to store and manipulate relational data using domain-specific objects. You can migrate EF code
from a .NET app to a UWP app and expect it to work with appropriate changes to the connection string.
Currently, EF supports only SQLite on UWP. A detailed walkthrough on installing Entity Framework Core and
creating models is available at the Getting Started on Universal Windows Platform page. It covers the following
topics:
Prerequisites
Create a new project
Install Entity Framework
Create your model
Create your database
Use your model
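
To give a flavor of what the walkthrough builds, here's a minimal sketch of a model and context. The Blog class and
database file name are illustrative; UseSqlite comes from the Microsoft.EntityFrameworkCore.Sqlite package.

using Microsoft.EntityFrameworkCore;

public class Blog
{
    public int BlogId { get; set; }
    public string Url { get; set; }
}

public class BloggingContext : DbContext
{
    public DbSet<Blog> Blogs { get; set; }

    protected override void OnConfiguring(DbContextOptionsBuilder optionsBuilder)
    {
        // On UWP, the database file is created in the app's local data folder.
        optionsBuilder.UseSqlite("Filename=Blogging.db");
    }
}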
SQLite databases

[ Updated for UWP apps on Windows 10. For Windows 8.x articles, see the archive ]
SQLite is a server-less, embedded database engine. This article explains how to use the SQLite library included in
the SDK, package your own SQLite library in a Universal Windows app, or build it from the source.

What SQLite is and when to use it


SQLite is an open source, embedded, server-less database. Over the years it has emerged as the dominant device
side technology for data storage on many platforms and devices. Universal Windows Platform (UWP) supports and
recommends SQLite for local storage across all Windows 10 device families.
SQLite is best suited for phone apps, embedded applications for Windows 10 IoT Core (IoT Core), and as a cache
for enterprise relational database server (RDBMS) data. It will satisfy most local data access needs unless they entail
heavy concurrent writes or big-data scale, scenarios that are unlikely for most apps.
In media playback and gaming applications, SQLite can also be used as a file format to store catalogues or other
assets, such as levels of a game, that can be downloaded as-is from a web server.

Adding SQLite to a UWP app project


There are three ways of adding SQLite to a UWP project.
1. Using the SDK SQLite
2. Including SQLite in the App Package
3. Building SQLite from source in Visual Studio
Using the SDK SQLite
You may wish to use the SQLite library included in the UWP SDK to reduce the size of your application package,
and rely on the platform to update the library periodically. Using the SDK SQLite might also lead to performance
advantages, such as faster launch times, given that the SQLite library is highly likely to already be loaded in
memory for use by system components.
To reference the SDK SQLite, include the following header in your project. The header also contains the version of
SQLite supported in the platform.
#include <winsqlite/winsqlite3.h>

Configure the project to link to winsqlite3.lib. In Solution Explorer, right-click your project and select Properties
> Linker > Input, then add winsqlite3.lib to Additional Dependencies.
Including SQLite in the App Package
Sometimes, you might wish to package your own library instead of using the SDK version, for example, you might
wish to use a particular version of it in your cross-platform clients that is different from the version of SQLite
included in the SDK.
Install the SQLite for Universal Windows Platform Visual Studio extension, available from SQLite.org or through
the Extensions and Updates tool.
Once the extension is installed, reference the following header file in your code.
#include <sqlite3.h>

Building SQLite from source in Visual Studio


Sometimes you might wish to compile your own SQLite binary to use various compiler options to reduce the file
size, performance tune the library, or tailor the feature set to your application. SQLite provides options for platform
configuration, setting default parameter values, setting size limits, controlling operating characteristics, enabling
features normally turned off, disabling features normally turned on, omitting features, enabling analysis and
debugging, and managing memory allocation behavior on Windows.
Adding source to a Visual Studio project
The SQLite source code is available for download at the SQLite.org download page. Add this file to the Visual
Studio project of the application you wish to use SQLite in.
Configure Preprocessors
Always define SQLITE_OS_WINRT and SQLITE_API=__declspec(dllexport) in addition to any other compile-time
options.

Managing a SQLite Database


SQLite databases can be created, updated, and deleted with the SQLite C APIs. Details of the SQLite C API can be
found at the SQLite.org Introduction To The SQLite C/C++ Interface page.
To gain a sound understanding of how SQLite works, work backward from the main task of an SQL database, which
is to evaluate SQL statements. There are two objects to keep in mind:
The database connection handle
The prepared statement object
There are six interfaces to perform database operations on these objects:
sqlite3_open()
sqlite3_prepare()
sqlite3_step()
sqlite3_column()
sqlite3_finalize()
sqlite3_close()
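
If you'd rather not call the C API directly from C#, the Microsoft.Data.Sqlite NuGet package (a managed wrapper,
not covered by this article) exposes the same open/prepare/step/column flow through ADO.NET-style types. A
minimal sketch:

using Microsoft.Data.Sqlite;

using (var connection = new SqliteConnection("Filename=app.db")) // sqlite3_open()
{
    connection.Open();

    var command = connection.CreateCommand(); // sqlite3_prepare()
    command.CommandText = "SELECT Text FROM Messages";

    using (SqliteDataReader reader = command.ExecuteReader())
    {
        while (reader.Read()) // sqlite3_step()
        {
            string text = reader.GetString(0); // sqlite3_column()
        }
    } // Disposing the reader and connection corresponds to sqlite3_finalize() and sqlite3_close().
}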
Data binding

[ Updated for UWP apps on Windows 10. For Windows 8.x articles, see the archive ]
Data binding is a way for your app's UI to display data, and optionally to stay in sync with that data. Data binding
allows you to separate the concern of data from the concern of UI, and that results in a simpler conceptual model
as well as better readability, testability, and maintainability of your app. In markup, you can choose to use either the
{x:Bind} markup extension or the {Binding} markup extension. And you can even use a mixture of the two in the
same app, even on the same UI element. {x:Bind} is new for Windows 10 and it has better performance.

Data binding overview: This topic shows you how to bind a control (or other UI element) to a single item or bind
an items control to a collection of items in a Universal Windows Platform (UWP) app. In addition, we show how to
control the rendering of items, implement a details view based on a selection, and convert data for display. For
more detailed info, see Data binding in depth.

Data binding in depth: This topic describes data binding features in detail.

Sample data on the design surface, and for prototyping: In order to have your controls populated with data in the
Visual Studio designer (so that you can work on your app's layout, templates, and other visual properties), there
are various ways in which you can use design-time sample data. Sample data can also be really useful and
time-saving if you're building a sketch (or prototype) app. You can use sample data in your sketch or prototype at
run-time to illustrate your ideas without going as far as connecting to real, live data.

Bind hierarchical data and create a master/details view: You can make a multi-level master/details (also known as
list-details) view of hierarchical data by binding items controls to CollectionViewSource instances that are bound
together in a chain.
Data binding overview

[ Updated for UWP apps on Windows 10. For Windows 8.x articles, see the archive ]
This topic shows you how to bind a control (or other UI element) to a single item or bind an items control to a
collection of items in a Universal Windows Platform (UWP) app. In addition, we show how to control the rendering
of items, implement a details view based on a selection, and convert data for display. For more detailed info, see
Data binding in depth.

Prerequisites
This topic assumes that you know how to create a basic UWP app. For instructions on creating your first UWP app,
see Get started with Windows apps.

Create the project


Create a new Blank Application (Windows Universal) project. Name it "Quickstart".

Binding to a single item


Every binding consists of a binding target and a binding source. Typically, the target is a property of a control or
other UI element, and the source is a property of a class instance (a data model, or a view model). This example
shows how to bind a control to a single item. The target is the Text property of a TextBlock. The source is an
instance of a simple class named Recording that represents an audio recording. Let's look at the class first.
Add a new class to your project, name it Recording.cs (if you're using C#; the equivalent C++/CX snippets are
provided below as well), and add this code to it.
namespace Quickstart
{
public class Recording
{
public string ArtistName { get; set; }
public string CompositionName { get; set; }
public DateTime ReleaseDateTime { get; set; }
public Recording()
{
this.ArtistName = "Wolfgang Amadeus Mozart";
this.CompositionName = "Andante in C for Piano";
this.ReleaseDateTime = new DateTime(1761, 1, 1);
}
public string OneLineSummary
{
get
{
return $"{this.CompositionName} by {this.ArtistName}, released: "
+ this.ReleaseDateTime.ToString("d");
}
}
}
public class RecordingViewModel
{
private Recording defaultRecording = new Recording();
public Recording DefaultRecording { get { return this.defaultRecording; } }
}
}
#include <sstream>
namespace Quickstart
{
public ref class Recording sealed
{
private:
Platform::String^ artistName;
Platform::String^ compositionName;
Windows::Globalization::Calendar^ releaseDateTime;
public:
Recording(Platform::String^ artistName, Platform::String^ compositionName,
Windows::Globalization::Calendar^ releaseDateTime) :
artistName{ artistName },
compositionName{ compositionName },
releaseDateTime{ releaseDateTime } {}
property Platform::String^ ArtistName
{
Platform::String^ get() { return this->artistName; }
}
property Platform::String^ CompositionName
{
Platform::String^ get() { return this->compositionName; }
}
property Windows::Globalization::Calendar^ ReleaseDateTime
{
Windows::Globalization::Calendar^ get() { return this->releaseDateTime; }
}
property Platform::String^ OneLineSummary
{
Platform::String^ get()
{
std::wstringstream wstringstream;
wstringstream << this->CompositionName->Data();
wstringstream << L" by " << this->ArtistName->Data();
wstringstream << L", released: " << this->ReleaseDateTime->MonthAsNumericString()->Data();
wstringstream << L"/" << this->ReleaseDateTime->DayAsString()->Data();
wstringstream << L"/" << this->ReleaseDateTime->YearAsString()->Data();
return ref new Platform::String(wstringstream.str().c_str());
}
}
};
public ref class RecordingViewModel sealed
{
private:
Recording^ defaultRecording;
public:
RecordingViewModel()
{
Windows::Globalization::Calendar^ releaseDateTime = ref new Windows::Globalization::Calendar();
releaseDateTime->Month = 1;
releaseDateTime->Day = 1;
releaseDateTime->Year = 1761;
this->defaultRecording = ref new Recording{ L"Wolfgang Amadeus Mozart", L"Andante in C for Piano", releaseDateTime };
}
property Recording^ DefaultRecording
{
Recording^ get() { return this->defaultRecording; };
}
};
}

Next, expose the binding source class from the class that represents your page of markup. We do that by adding a
property of type RecordingViewModel to MainPage.
namespace Quickstart
{
public sealed partial class MainPage : Page
{
public MainPage()
{
this.InitializeComponent();
this.ViewModel = new RecordingViewModel();
}
public RecordingViewModel ViewModel { get; set; }
}
}

namespace Quickstart
{
public ref class MainPage sealed
{
private:
RecordingViewModel^ viewModel;
public:
MainPage()
{
InitializeComponent();
this->viewModel = ref new RecordingViewModel();
}
property RecordingViewModel^ ViewModel
{
RecordingViewModel^ get() { return this->viewModel; };
}
};
}

The last piece is to bind a TextBlock to the ViewModel.DefaultRecording.OneLineSummary property.

<Page x:Class="Quickstart.MainPage" ... >


<Grid Background="{ThemeResource ApplicationPageBackgroundThemeBrush}">
<TextBlock Text="{x:Bind ViewModel.DefaultRecording.OneLineSummary}"
HorizontalAlignment="Center"
VerticalAlignment="Center"/>
</Grid>
</Page>

Here's the result.


Binding to a collection of items
A common scenario is to bind to a collection of business objects. In C# and Visual Basic, the generic
ObservableCollection<T> class is a good collection choice for data binding, because it implements the
INotifyPropertyChanged and INotifyCollectionChanged interfaces. These interfaces provide change
notification to bindings when items are added or removed or a property of the list itself changes. If you want your
bound controls to update with changes to properties of objects in the collection, the business object should also
implement INotifyPropertyChanged. For more info, see Data binding in depth.
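
For reference, here's a minimal sketch of what change notification looks like on a business object. (The Recording
class in this topic doesn't raise notifications, so edits to its properties won't be reflected in bound UI at run-time.)

using System.ComponentModel;

public class ObservableRecording : INotifyPropertyChanged
{
    private string artistName;

    public event PropertyChangedEventHandler PropertyChanged;

    public string ArtistName
    {
        get { return this.artistName; }
        set
        {
            if (this.artistName != value)
            {
                this.artistName = value;
                // Tell any bound UI elements that the value changed.
                this.PropertyChanged?.Invoke(this, new PropertyChangedEventArgs(nameof(ArtistName)));
            }
        }
    }
}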
This next example binds a ListView to a collection of Recording objects. Let's start by adding the collection to our
view model. Just add these new members to the RecordingViewModel class.

public class RecordingViewModel


{
...
private ObservableCollection<Recording> recordings = new ObservableCollection<Recording>();
public ObservableCollection<Recording> Recordings { get { return this.recordings; } }
public RecordingViewModel()
{
this.recordings.Add(new Recording() { ArtistName = "Johann Sebastian Bach",
CompositionName = "Mass in B minor", ReleaseDateTime = new DateTime(1748, 7, 8) });
this.recordings.Add(new Recording() { ArtistName = "Ludwig van Beethoven",
CompositionName = "Third Symphony", ReleaseDateTime = new DateTime(1805, 2, 11) });
this.recordings.Add(new Recording() { ArtistName = "George Frideric Handel",
CompositionName = "Serse", ReleaseDateTime = new DateTime(1737, 12, 3) });
}
}
public ref class RecordingViewModel sealed
{
private:
...
Windows::Foundation::Collections::IVector<Recording^>^ recordings;
public:
RecordingViewModel()
{
...
releaseDateTime = ref new Windows::Globalization::Calendar();
releaseDateTime->Month = 7;
releaseDateTime->Day = 8;
releaseDateTime->Year = 1748;
Recording^ recording = ref new Recording{ L"Johann Sebastian Bach", L"Mass in B minor", releaseDateTime };
this->Recordings->Append(recording);
releaseDateTime = ref new Windows::Globalization::Calendar();
releaseDateTime->Month = 2;
releaseDateTime->Day = 11;
releaseDateTime->Year = 1805;
recording = ref new Recording{ L"Ludwig van Beethoven", L"Third Symphony", releaseDateTime };
this->Recordings->Append(recording);
releaseDateTime = ref new Windows::Globalization::Calendar();
releaseDateTime->Month = 12;
releaseDateTime->Day = 3;
releaseDateTime->Year = 1737;
recording = ref new Recording{ L"George Frideric Handel", L"Serse", releaseDateTime };
this->Recordings->Append(recording);
}
...
property Windows::Foundation::Collections::IVector<Recording^>^ Recordings
{
Windows::Foundation::Collections::IVector<Recording^>^ get()
{
if (this->recordings == nullptr)
{
this->recordings = ref new Platform::Collections::Vector<Recording^>();
}
return this->recordings;
};
}
};

And then bind a ListView to the ViewModel.Recordings property.

<Page x:Class="Quickstart.MainPage" ... >


<Grid Background="{ThemeResource ApplicationPageBackgroundThemeBrush}">
<ListView ItemsSource="{x:Bind ViewModel.Recordings}"
HorizontalAlignment="Center" VerticalAlignment="Center"/>
</Grid>
</Page>

We haven't yet provided a data template for the Recording class, so the best the UI framework can do is to call
ToString for each item in the ListView. The default implementation of ToString is to return the type name.
To remedy this we can either override ToString to return the value of OneLineSummary, or we can provide a
data template. The data template option is more common and arguably more flexible. You specify a data template
by using the ContentTemplate property of a content control or the ItemTemplate property of an items control.
Here are two ways we could design a data template for Recording together with an illustration of the result.

<ListView ItemsSource="{x:Bind ViewModel.Recordings}"


HorizontalAlignment="Center" VerticalAlignment="Center">
<ListView.ItemTemplate>
<DataTemplate x:DataType="local:Recording">
<TextBlock Text="{x:Bind OneLineSummary}"/>
</DataTemplate>
</ListView.ItemTemplate>
</ListView>
<ListView ItemsSource="{x:Bind ViewModel.Recordings}"
HorizontalAlignment="Center" VerticalAlignment="Center">
<ListView.ItemTemplate>
<DataTemplate x:DataType="local:Recording">
<StackPanel Orientation="Horizontal" Margin="6">
<SymbolIcon Symbol="Audio" Margin="0,0,12,0"/>
<StackPanel>
<TextBlock Text="{x:Bind ArtistName}" FontWeight="Bold"/>
<TextBlock Text="{x:Bind CompositionName}"/>
</StackPanel>
</StackPanel>
</DataTemplate>
</ListView.ItemTemplate>
</ListView>

For more information about XAML syntax, see Create a UI with XAML. For more information about control layout,
see Define layouts with XAML.

Adding a details view


You can choose to display all the details of Recording objects in ListView items. But that takes up a lot of space.
Instead, you can show just enough data in the item to identify it and then, when the user makes a selection, you
can display all the details of the selected item in a separate piece of UI known as the details view. This arrangement
is also known as a master/details view, or a list/details view.
There are two ways to go about this. You can bind the details view to the SelectedItem property of the ListView.
Or you can use a CollectionViewSource: bind both the ListView and the details view to the
CollectionViewSource (which will take care of the currently-selected item for you). Both techniques are shown
below, and they both give the same results shown in the illustration.

NOTE
So far in this topic we've only used the {x:Bind} markup extension, but both of the techniques we'll show below require the
more flexible (but less performant) {Binding} markup extension.

First, here's the SelectedItem technique. If you're using Visual C++ component extensions (C++/CX) then,
because we'll be using {Binding}, you'll need to add the BindableAttribute attribute to the Recording class.
[Windows::UI::Xaml::Data::Bindable]
public ref class Recording sealed
{
...
};

The only other change necessary is to the markup.

<Page x:Class="Quickstart.MainPage" ... >


<Grid Background="{ThemeResource ApplicationPageBackgroundThemeBrush}">
<StackPanel HorizontalAlignment="Center" VerticalAlignment="Center">
<ListView x:Name="recordingsListView" ItemsSource="{x:Bind ViewModel.Recordings}">
<ListView.ItemTemplate>
<DataTemplate x:DataType="local:Recording">
<StackPanel Orientation="Horizontal" Margin="6">
<SymbolIcon Symbol="Audio" Margin="0,0,12,0"/>
<StackPanel>
<TextBlock Text="{x:Bind CompositionName}"/>
</StackPanel>
</StackPanel>
</DataTemplate>
</ListView.ItemTemplate>
</ListView>
<StackPanel DataContext="{Binding SelectedItem, ElementName=recordingsListView}"
Margin="0,24,0,0">
<TextBlock Text="{Binding ArtistName}"/>
<TextBlock Text="{Binding CompositionName}"/>
<TextBlock Text="{Binding ReleaseDateTime}"/>
</StackPanel>
</StackPanel>
</Grid>
</Page>

For the CollectionViewSource technique, first add a CollectionViewSource as a page resource.

<Page.Resources>
<CollectionViewSource x:Name="RecordingsCollection" Source="{x:Bind ViewModel.Recordings}"/>
</Page.Resources>

And then adjust the bindings on the ListView (which no longer needs to be named) and on the details view to use
the CollectionViewSource. Note that by binding the details view directly to the CollectionViewSource, you're
implying that you want to bind to the current item in bindings where the path cannot be found on the collection
itself. There's no need to specify the CurrentItem property as the path for the binding, although you can do that if
there's any ambiguity.

...

<ListView ItemsSource="{Binding Source={StaticResource RecordingsCollection}}">

...

<StackPanel DataContext="{Binding Source={StaticResource RecordingsCollection}}" ...>


...

And here's the identical result in each case.


Formatting or converting data values for display
There is one small issue with the rendering above. The ReleaseDateTime property is not just a date, it's a
DateTime, so it's being displayed with more precision than we need. One solution is to add a string property to
the Recording class that returns this.ReleaseDateTime.ToString("d"). Naming that property ReleaseDate would
indicate that it returns a date, not a date-and-time. Naming it ReleaseDateAsString would further indicate that it
returns a string.
A more flexible solution is to use something known as a value converter. Here's an example of how to author your
own value converter. Add this code to your Recording.cs source code file.

public class StringFormatter : Windows.UI.Xaml.Data.IValueConverter


{
// This converts the value object to the string to display.
// This will work with most simple types.
public object Convert(object value, Type targetType,
object parameter, string language)
{
// Retrieve the format string and use it to format the value.
string formatString = parameter as string;
if (!string.IsNullOrEmpty(formatString))
{
return string.Format(formatString, value);
}

// If the format string is null or empty, simply


// call ToString() on the value.
return value.ToString();
}

// No need to implement converting back on a one-way binding


public object ConvertBack(object value, Type targetType,
object parameter, string language)
{
throw new NotImplementedException();
}
}

Now we can add an instance of StringFormatter as a page resource and use it in our binding. We pass the format
string into the converter from markup for ultimate formatting flexibility.
<Page.Resources>
<local:StringFormatter x:Key="StringFormatterValueConverter"/>
</Page.Resources>
...

<TextBlock Text="{Binding ReleaseDateTime,
Converter={StaticResource StringFormatterValueConverter},
ConverterParameter=Released: \{0:d\}}"/>

...

Here's the result.

NOTE
Starting in Windows 10, version 1607, the XAML framework provides a built-in Boolean to Visibility converter. The converter
maps true to the Visible enumeration value and false to Collapsed, so you can bind a Visibility property to a Boolean
without creating a converter. To use the built-in converter, your app's minimum target SDK version must be 14393 or later.
You can't use it when your app targets earlier versions of Windows 10. For more info about target versions, see Version
adaptive code.

See also
Data binding
Data binding in depth

[ Updated for UWP apps on Windows 10. For Windows 8.x articles, see the archive ]
Important APIs
Binding class
DataContext
INotifyPropertyChanged

Note This topic describes data binding features in detail. For a short, practical introduction, see Data binding
overview.

Data binding is a way for your app's UI to display data, and optionally to stay in sync with that data. Data binding
allows you to separate the concern of data from the concern of UI, and that results in a simpler conceptual model
as well as better readability, testability, and maintainability of your app.
You can use data binding to simply display values from a data source when the UI is first shown, but not to
respond to changes in those values. This is called one-time binding, and it works well for data whose values don't
change during run-time. Additionally, you can choose to "observe" the values and to update the UI when they
change. This is called one-way binding, and it works well for read-only data. Ultimately, you can choose to both
observe and update, so that changes that the user makes to values in the UI are automatically pushed back into
the data source. This is called two-way binding, and it works well for read-write data. Here are some examples.
You could use one-time binding to bind an Image to the current user's photo.
You could use one-way binding to bind a ListView to a collection of real-time news articles grouped by
newspaper section.
You could use two-way binding to bind a TextBox to a customer's name in a form.
There are two kinds of binding, and they're both typically declared in UI markup. You can choose to use either the
{x:Bind} markup extension or the {Binding} markup extension. And you can even use a mixture of the two in the
same app, even on the same UI element. {x:Bind} is new for Windows 10 and it has better performance. All the
details described in this topic apply to both kinds of binding unless we explicitly say otherwise.
Sample apps that demonstrate {x:Bind}
{x:Bind} sample.
QuizGame.
XAML UI Basics sample.
Sample apps that demonstrate {Binding}
Download the Bookstore1 app.
Download the Bookstore2 app.

Every binding involves these pieces


A binding source. This is the source of the data for the binding, and it can be an instance of any class that has
members whose values you want to display in your UI.
A binding target. This is a DependencyProperty of the FrameworkElement in your UI that displays the data.
A binding object. This is the piece that transfers data values from the source to the target, and optionally from
the target back to the source. The binding object is created at XAML load time from your {x:Bind} or {Binding}
markup extension.
In the following sections, we'll take a closer look at the binding source, the binding target, and the binding object.
And we'll link the sections together with the example of binding a button's content to a string property named
NextButtonText, which belongs to a class named HostViewModel.
Binding source
Here's a very rudimentary implementation of a class that we could use as a binding source.
Note If you're using {Binding} with Visual C++ component extensions (C++/CX) then you'll need to add the
BindableAttribute attribute to your binding source class. If you're using {x:Bind} then you don't need that
attribute. See Adding a details view for a code snippet.

public class HostViewModel
{
    public HostViewModel()
    {
        this.NextButtonText = "Next";
    }

    public string NextButtonText { get; set; }
}

That implementation of HostViewModel, and its property NextButtonText, are only appropriate for one-time
binding. But one-way and two-way bindings are extremely common, and in those kinds of binding the UI
automatically updates in response to changes in the data values of the binding source. In order for those kinds of
binding to work correctly, you need to make your binding source "observable" to the binding object. So in our
example, if we want to one-way or two-way bind to the NextButtonText property, then any changes that happen
at run-time to the value of that property need to be made observable to the binding object.
One way of doing that is to derive the class that represents your binding source from DependencyObject, and
expose a data value through a DependencyProperty. That's how a FrameworkElement becomes observable.
FrameworkElements are good binding sources right out of the box.
A more lightweight way of making a class observable, and a necessary one for classes that already have a base
class, is to implement System.ComponentModel.INotifyPropertyChanged. This really just involves
implementing a single event named PropertyChanged. An example using HostViewModel is below.
Note For C++/CX, you implement Windows::UI::Xaml::Data::INotifyPropertyChanged, and the binding
source class must either have the BindableAttribute or implement ICustomPropertyProvider.
public class HostViewModel : INotifyPropertyChanged
{
    private string nextButtonText;

    public event PropertyChangedEventHandler PropertyChanged = delegate { };

    public HostViewModel()
    {
        this.NextButtonText = "Next";
    }

    public string NextButtonText
    {
        get { return this.nextButtonText; }
        set
        {
            this.nextButtonText = value;
            this.OnPropertyChanged();
        }
    }

    public void OnPropertyChanged([CallerMemberName] string propertyName = null)
    {
        // Raise the PropertyChanged event, passing the name of the property whose value has changed.
        this.PropertyChanged(this, new PropertyChangedEventArgs(propertyName));
    }
}

Now the NextButtonText property is observable. When you author a one-way or a two-way binding to that
property (we'll show how later), the resulting binding object subscribes to the PropertyChanged event. When
that event is raised, the binding object's handler receives an argument containing the name of the property that
has changed. That's how the binding object knows which property's value to go and read again.
So that you don't have to implement the pattern shown above multiple times, you can just derive from the
BindableBase base class that you'll find in the QuizGame sample (in the "Common" folder). Here's an example of
how that looks.

public class HostViewModel : BindableBase
{
    private string nextButtonText;

    public HostViewModel()
    {
        this.NextButtonText = "Next";
    }

    public string NextButtonText
    {
        get { return this.nextButtonText; }
        set { this.SetProperty(ref this.nextButtonText, value); }
    }
}
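
The QuizGame sample is the definitive source for BindableBase; as a rough sketch, a helper like that generally looks like the following (the equality check and member names here are an assumption, so details may differ from the sample).

using System.Collections.Generic;
using System.ComponentModel;
using System.Runtime.CompilerServices;

public abstract class BindableBase : INotifyPropertyChanged
{
    public event PropertyChangedEventHandler PropertyChanged = delegate { };

    // Assigns the value and raises PropertyChanged only if the value changed.
    protected bool SetProperty<T>(ref T storage, T value, [CallerMemberName] string propertyName = null)
    {
        if (EqualityComparer<T>.Default.Equals(storage, value)) return false;
        storage = value;
        this.OnPropertyChanged(propertyName);
        return true;
    }

    protected void OnPropertyChanged([CallerMemberName] string propertyName = null)
    {
        this.PropertyChanged(this, new PropertyChangedEventArgs(propertyName));
    }
}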

Raising the PropertyChanged event with an argument of String.Empty or null indicates that all non-indexer
properties on the object should be re-read. You can raise the event to indicate that indexer properties on the object
have changed by using an argument of "Item[indexer]" for specific indexers (where indexer is the index value), or a
value of "Item[]" for all indexers.
A binding source can be treated either as a single object whose properties contain data, or as a collection of
objects. In C# and Visual Basic code, you can one-time bind to an object that implements List(Of T) to display a
collection that does not change at run-time. For an observable collection (observing when items are added to and
removed from the collection), one-way bind to ObservableCollection(Of T) instead. In C++ code, you can bind
to Vector<T> for both observable and non-observable collections. To bind to your own collection classes, use the
guidance in the following table.

SCENARIO: Bind to an object.
    C# and VB (CLR): Can be any object.
    C++/CX: Object must have BindableAttribute or implement ICustomPropertyProvider.

SCENARIO: Get property change updates from a bound object.
    C# and VB (CLR): Object must implement System.ComponentModel.INotifyPropertyChanged.
    C++/CX: Object must implement Windows.UI.Xaml.Data.INotifyPropertyChanged.

SCENARIO: Bind to a collection.
    C# and VB (CLR): List(Of T)
    C++/CX: Platform::Collections::Vector<T>

SCENARIO: Get collection change updates from a bound collection.
    C# and VB (CLR): ObservableCollection(Of T)
    C++/CX: Windows::Foundation::Collections::IObservableVector<T>

SCENARIO: Implement a collection that supports binding.
    C# and VB (CLR): Extend List(Of T) or implement IList, IList(Of Object), IEnumerable, or IEnumerable(Of Object). Binding to generic IList(Of T) and IEnumerable(Of T) is not supported.
    C++/CX: Implement IBindableVector, IBindableIterable, IVector<Object^>, IIterable<Object^>, IVector<IInspectable*>, or IIterable<IInspectable*>. Binding to generic IVector<T> and IIterable<T> is not supported.

SCENARIO: Implement a collection that supports collection change updates.
    C# and VB (CLR): Extend ObservableCollection(Of T) or implement (non-generic) IList and INotifyCollectionChanged.
    C++/CX: Implement IBindableVector and IBindableObservableVector.

SCENARIO: Implement a collection that supports incremental loading.
    C# and VB (CLR): Extend ObservableCollection(Of T) or implement (non-generic) IList and INotifyCollectionChanged. Additionally, implement ISupportIncrementalLoading.
    C++/CX: Implement IBindableVector, IBindableObservableVector, and ISupportIncrementalLoading.

You can bind list controls to arbitrarily large data sources, and still achieve high performance, by using
incremental loading. For example, you can bind list controls to Bing image query results without having to load all
the results at once. Instead, you load only some results immediately, and load additional results as needed. To
support incremental loading, you must implement ISupportIncrementalLoading on a data source that supports
collection change notification. When the data binding engine requests more data, your data source must make the
appropriate requests, integrate the results, and then send the appropriate notifications in order to update the UI.
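
Here's a rough sketch of that combination for C#: an observable collection that also implements ISupportIncrementalLoading. The ImageResult type and the QueryNextPageAsync method are placeholders for your own item type and service call, not APIs from this topic.

using System.Collections.Generic;
using System.Collections.ObjectModel;
using System.Runtime.InteropServices.WindowsRuntime;
using System.Threading.Tasks;
using Windows.Foundation;
using Windows.UI.Xaml.Data;

public class ImageResult { /* Placeholder item type. */ }

public class IncrementalResults : ObservableCollection<ImageResult>, ISupportIncrementalLoading
{
    private bool hasMoreItems = true;

    // The bound list control checks this before asking for more items.
    public bool HasMoreItems
    {
        get { return this.hasMoreItems; }
    }

    public IAsyncOperation<LoadMoreItemsResult> LoadMoreItemsAsync(uint count)
    {
        return AsyncInfo.Run(async cancellationToken =>
        {
            IList<ImageResult> page = await this.QueryNextPageAsync(count);
            foreach (ImageResult item in page)
            {
                this.Add(item); // Raises the collection change notification.
            }
            this.hasMoreItems = page.Count > 0;
            return new LoadMoreItemsResult { Count = (uint)page.Count };
        });
    }

    private Task<IList<ImageResult>> QueryNextPageAsync(uint count)
    {
        // Placeholder: request up to 'count' more results from your data source.
        return Task.FromResult<IList<ImageResult>>(new List<ImageResult>());
    }
}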
Binding target
In the two examples below, the Button.Content property is the binding target, and its value is set to a markup
extension which declares the binding object. First {x:Bind} is shown, and then {Binding}. Declaring bindings in
markup is the common case (it's convenient, readable, and toolable). But you can avoid markup and imperatively
(programmatically) create an instance of the Binding class instead if you need to.

<Button Content="{x:Bind ...}" ... />

<Button Content="{Binding ...}" ... />


Binding object declared using {x:Bind}
There's one step we need to do before we author our {x:Bind} markup. We need to expose our binding source class
from the class that represents our page of markup. We do that by adding a property (of type HostViewModel in
this case) to our HostView page class.

namespace QuizGame.View
{
    public sealed partial class HostView : Page
    {
        public HostView()
        {
            this.InitializeComponent();
            this.ViewModel = new HostViewModel();
        }

        public HostViewModel ViewModel { get; set; }
    }
}

That done, we can now take a closer look at the markup that declares the binding object. The example below uses
the same Button.Content binding target we used in the "Binding target" section earlier, and shows it being
bound to the HostViewModel.NextButtonText property.

<Page x:Class="QuizGame.View.HostView" ... >
    <Button Content="{x:Bind Path=ViewModel.NextButtonText, Mode=OneWay}" ... />
</Page>

Notice the value that we specify for Path. This value is interpreted in the context of the page itself, and in this case
the path begins by referencing the ViewModel property that we just added to the HostView page. That property
returns a HostViewModel instance, and so we can dot into that object to access the
HostViewModel.NextButtonText property. And we specify Mode, to override the {x:Bind} default of one-time.
The Path property supports a variety of syntax options for binding to nested properties, attached properties, and
integer and string indexers. For more info, see Property-path syntax. Binding to string indexers gives you the effect
of binding to dynamic properties without having to implement ICustomPropertyProvider. For other settings, see
{x:Bind} markup extension.
Note Changes to TextBox.Text are sent to a two-way bound source when the TextBox loses focus, and not after
every user keystroke.
DataTemplate and x:DataType
Inside a DataTemplate (whether used as an item template, a content template, or a header template), the value of
Path is not interpreted in the context of the page, but in the context of the data object being templated. When
using {x:Bind} in a data template, so that its bindings can be validated (and efficient code generated for them) at
compile-time, the DataTemplate needs to declare the type of its data object using x:DataType. The example
given below could be used as the ItemTemplate of an items control bound to a collection of SampleDataGroup
objects.

<DataTemplate x:Key="SimpleItemTemplate" x:DataType="data:SampleDataGroup">
    <StackPanel Orientation="Vertical" Height="50">
        <TextBlock Text="{x:Bind Title}"/>
        <TextBlock Text="{x:Bind Description}"/>
    </StackPanel>
</DataTemplate>

Weakly-typed objects in your Path


Consider, for example, that you have a type named SampleDataGroup, which implements a string property named
Title. And you have a property MainPage.SampleDataGroupAsObject, which is of type object but which actually
returns an instance of SampleDataGroup. The binding <TextBlock Text="{x:Bind SampleDataGroupAsObject.Title}"/> will
result in a compile error because the Title property is not found on the type object. The remedy for this is to add a
cast to your Path syntax like this: <TextBlock Text="{x:Bind ((data:SampleDataGroup)SampleDataGroupAsObject).Title}"/> . Here's
another example where Element is declared as object but is actually a TextBlock:
<TextBlock Text="{x:Bind Element.Text}"/> . And a cast remedies the issue: <TextBlock Text="{x:Bind ((TextBlock)Element).Text}"/> .

If your data loads asynchronously


Code to support {x:Bind} is generated at compile-time in the partial classes for your pages. These files can be
found in your obj folder, with names like (for C#) <view name>.g.cs . The generated code includes a handler for your
page's Loading event, and that handler calls the Initialize method on a generated class that represents your
page's bindings. Initialize in turn calls Update to begin moving data between the binding source and the target.
Loading is raised just before the first measure pass of the page or user control. So if your data is loaded
asynchronously, it may not be ready by the time Initialize is called. After you've loaded data, you can force
one-time bindings to be initialized by calling this.Bindings.Update(). If you only need one-time bindings for
asynchronously-loaded data, then it's much cheaper to initialize them this way than it is to have one-way bindings
and to listen for changes. If your data does not undergo fine-grained changes, and if it's likely to be updated as
part of a specific action, then you can make your bindings one-time, and force a manual update at any time with a
call to Update.
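
For example, in a page's code-behind (a sketch; LoadRecordingsAsync stands in for your own asynchronous loading code, and the page is assumed to expose a settable ViewModel property as in the earlier examples):

protected override async void OnNavigatedTo(NavigationEventArgs e)
{
    base.OnNavigatedTo(e);

    // LoadRecordingsAsync is a placeholder for your data-loading code.
    this.ViewModel = await this.LoadRecordingsAsync();

    // Re-evaluate the page's compiled bindings now that the data is ready.
    this.Bindings.Update();
}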
Limitations
{x:Bind} is not suited to late-bound scenarios, such as navigating the dictionary structure of a JSON object, nor to
duck typing, which is a weak form of typing based on lexical matches on property names ("if it walks, swims, and
quacks like a duck then it's a duck"). With duck typing, a binding to the Age property would be equally satisfied
with a Person or a Wine object. For these scenarios, use {Binding}.
Binding object declared using {Binding}
{Binding} assumes, by default, that you're binding to the DataContext of your markup page. So we'll set the
DataContext of our page to be an instance of our binding source class (of type HostViewModel in this case).
The example below shows the markup that declares the binding object. We use the same Button.Content binding
target we used in the "Binding target" section earlier, and we bind to the HostViewModel.NextButtonText
property.

<Page xmlns:viewmodel="using:QuizGame.ViewModel" ... >
    <Page.DataContext>
        <viewmodel:HostViewModel/>
    </Page.DataContext>
    ...
    <Button Content="{Binding Path=NextButtonText}" ... />
</Page>

Notice the value that we specify for Path. This value is interpreted in the context of the page's DataContext,
which in this example is set to an instance of HostViewModel. The path references the
HostViewModel.NextButtonText property. We can omit Mode, because the {Binding} default of one-way works
here.
The default value of DataContext for a UI element is the inherited value of its parent. You can of course override
that default by setting DataContext explicitly, which is in turn inherited by children by default. Setting
DataContext explicitly on an element is useful when you want to have multiple bindings that use the same
source.
A binding object has a Source property, which defaults to the DataContext of the UI element on which the
binding is declared. You can override this default by setting Source, RelativeSource, or ElementName explicitly
on the binding (see {Binding} for details).
Inside a DataTemplate, the DataContext is set to the data object being templated. The example given below
could be used as the ItemTemplate of an items control bound to a collection of any type that has string
properties named Title and Description.

<DataTemplate x:Key="SimpleItemTemplate">
    <StackPanel Orientation="Vertical" Height="50">
        <TextBlock Text="{Binding Title}"/>
        <TextBlock Text="{Binding Description}"/>
    </StackPanel>
</DataTemplate>

Note By default, changes to TextBox.Text are sent to a two-way bound source when the TextBox loses focus. To
cause changes to be sent after every user keystroke, set UpdateSourceTrigger to PropertyChanged on the
binding in markup. You can also completely take control of when changes are sent to the source by setting
UpdateSourceTrigger to Explicit. You then handle events on the text box (typically TextBox.TextChanged), call
GetBindingExpression on the target to get a BindingExpression object, and finally call
BindingExpression.UpdateSource to programmatically update the data source.
The Path property supports a variety of syntax options for binding to nested properties, attached properties, and
integer and string indexers. For more info, see Property-path syntax. Binding to string indexers gives you the effect
of binding to dynamic properties without having to implement ICustomPropertyProvider. The ElementName
property is useful for element-to-element binding. The RelativeSource property has several uses, one of which is
as a more powerful alternative to template binding inside a ControlTemplate. For other settings, see {Binding}
markup extension and the Binding class.

What if the source and the target are not the same type?
If you want to control the visibility of a UI element based on the value of a boolean property, or if you want to
render a UI element with a color that's a function of a numeric value's range or trend, or if you want to display a
date and/or time value in a UI element property that expects a string, then you'll need to convert values from one
type to another. There will be cases where the right solution is to expose another property of the right type from
your binding source class, and keep the conversion logic encapsulated and testable there. But that isn't flexible or
scalable when you have large numbers, or large combinations, of source and target properties. In that case you
have a couple of options:
If using {x:Bind}, then you can bind directly to a function to do that conversion.
Or you can specify a value converter, which is an object designed to perform the conversion.

Value Converters
Here's a value converter, suitable for a one-time or a one-way binding, that converts a DateTime value to a string
value containing the month. The class implements IValueConverter.
public class DateToStringConverter : IValueConverter
{
    // Define the Convert method to convert a DateTime value to
    // a month string.
    public object Convert(object value, Type targetType,
        object parameter, string language)
    {
        // value is the data from the source object.
        DateTime thisdate = (DateTime)value;
        int monthnum = thisdate.Month;
        string month;
        switch (monthnum)
        {
            case 1:
                month = "January";
                break;
            case 2:
                month = "February";
                break;
            default:
                month = "Month not found";
                break;
        }
        // Return the value to pass to the target.
        return month;
    }

    // ConvertBack is not implemented for a OneWay binding.
    public object ConvertBack(object value, Type targetType,
        object parameter, string language)
    {
        throw new NotImplementedException();
    }
}
Public Class DateToStringConverter
    Implements IValueConverter

    ' Define the Convert method to change a DateTime object to
    ' a month string.
    Public Function Convert(ByVal value As Object, _
        ByVal targetType As Type, ByVal parameter As Object, _
        ByVal language As String) As Object _
        Implements IValueConverter.Convert

        ' value is the data from the source object.
        Dim thisdate As DateTime = CType(value, DateTime)
        Dim monthnum As Integer = thisdate.Month
        Dim month As String
        Select Case (monthnum)
            Case 1
                month = "January"
            Case 2
                month = "February"
            Case Else
                month = "Month not found"
        End Select

        ' Return the value to pass to the target.
        Return month

    End Function

    ' ConvertBack is not implemented for a OneWay binding.
    Public Function ConvertBack(ByVal value As Object, _
        ByVal targetType As Type, ByVal parameter As Object, _
        ByVal language As String) As Object _
        Implements IValueConverter.ConvertBack

        Throw New NotImplementedException

    End Function
End Class

And here's how you consume that value converter in your binding object markup.

<UserControl.Resources>
    <local:DateToStringConverter x:Key="Converter1"/>
</UserControl.Resources>

...

<TextBlock Grid.Column="0"
Text="{x:Bind ViewModel.Month, Converter={StaticResource Converter1}}"/>

<TextBlock Grid.Column="0"
Text="{Binding Month, Converter={StaticResource Converter1}}"/>

The binding engine calls the Convert and ConvertBack methods if the Converter parameter is defined for the
binding. When data is passed from the source, the binding engine calls Convert and passes the returned data to
the target. When data is passed from the target (for a two-way binding), the binding engine calls ConvertBack
and passes the returned data to the source.
The converter also has optional parameters: ConverterLanguage, which allows specifying the language to be
used in the conversion, and ConverterParameter, which allows passing a parameter for the conversion logic. For
an example that uses a converter parameter, see IValueConverter.
Note If there is an error in the conversion, do not throw an exception. Instead, return
DependencyProperty.UnsetValue, which will stop the data transfer.
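
For example, a Convert method can guard its cast and return DependencyProperty.UnsetValue on failure; here's a sketch of the pattern that note describes:

public object Convert(object value, Type targetType, object parameter, string language)
{
    try
    {
        DateTime thisdate = (DateTime)value;
        return thisdate.ToString("d");
    }
    catch (Exception)
    {
        // Don't throw from a converter; UnsetValue stops the data transfer.
        return Windows.UI.Xaml.DependencyProperty.UnsetValue;
    }
}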
To display a default value to use whenever the binding source cannot be resolved, set the FallbackValue property
on the binding object in markup. This is useful to handle conversion and formatting errors. It is also useful to bind
to source properties that might not exist on all objects in a bound collection of heterogeneous types.
If you bind a text control to a value that is not a string, the data binding engine will convert the value to a string. If
the value is a reference type, the data binding engine will retrieve the string value by calling
ICustomPropertyProvider.GetStringRepresentation or IStringable.ToString if available, and will otherwise
call Object.ToString. Note, however, that the binding engine will ignore any ToString implementation that hides
the base-class implementation. Subclass implementations should override the base class ToString method
instead. Similarly, in native languages, all managed objects appear to implement ICustomPropertyProvider and
IStringable. However, all calls to GetStringRepresentation and IStringable.ToString are routed to
Object.ToString or an override of that method, and never to a new ToString implementation that hides the base-
class implementation.

NOTE
Starting in Windows 10, version 1607, the XAML framework provides a built-in Boolean to Visibility converter. The converter
maps true to the Visible enumeration value and false to Collapsed, so you can bind a Visibility property to a Boolean
without creating a converter. To use the built-in converter, your app's minimum target SDK version must be 14393 or later.
You can't use it when your app targets earlier versions of Windows 10. For more info about target versions, see Version
adaptive code.

Function binding in {x:Bind}


{x:Bind} enables the final step in a binding path to be a function. This can be used to perform conversions, and to
perform bindings that depend on more than one property. See {x:Bind} markup extension for details.

Resource dictionaries with {x:Bind}


The {x:Bind} markup extension depends on code generation, so it needs a code-behind file containing a constructor
that calls InitializeComponent (to initialize the generated code). You re-use the resource dictionary by
instantiating its type (so that InitializeComponent is called) instead of referencing its filename. Here's an
example of what to do if you have an existing resource dictionary and you want to use {x:Bind} in it.
TemplatesResourceDictionary.xaml

<ResourceDictionary
    x:Class="ExampleNamespace.TemplatesResourceDictionary"
    .....
    xmlns:examplenamespace="using:ExampleNamespace">

    <DataTemplate x:Key="EmployeeTemplate" x:DataType="examplenamespace:IEmployee">
        <Grid>
            <TextBlock Text="{x:Bind Name}"/>
        </Grid>
    </DataTemplate>
</ResourceDictionary>

TemplatesResourceDictionary.xaml.cs
using Windows.UI.Xaml.Data;

namespace ExampleNamespace
{
    public partial class TemplatesResourceDictionary
    {
        public TemplatesResourceDictionary()
        {
            InitializeComponent();
        }
    }
}

MainPage.xaml

<Page x:Class="ExampleNamespace.MainPage"
    ....
    xmlns:examplenamespace="using:ExampleNamespace">

    <Page.Resources>
        <ResourceDictionary>
            ....
            <ResourceDictionary.MergedDictionaries>
                <examplenamespace:TemplatesResourceDictionary/>
            </ResourceDictionary.MergedDictionaries>
        </ResourceDictionary>
    </Page.Resources>
</Page>

Event binding and ICommand


{x:Bind} supports a feature called event binding. With this feature, you can specify the handler for an event using a
binding, which is an additional option on top of handling events with a method on the code-behind file. Let's say
you have a RootFrame property on your MainPage class.

public sealed partial class MainPage : Page
{
    ....
    public Frame RootFrame { get { return Window.Current.Content as Frame; } }
}

You can then bind a button's Click event to a method on the Frame object returned by the RootFrame property
like this. Note that we also bind the button's IsEnabled property to another member of the same Frame.

<AppBarButton Icon="Forward" IsCompact="True"
    IsEnabled="{x:Bind RootFrame.CanGoForward, Mode=OneWay}"
    Click="{x:Bind RootFrame.GoForward}"/>

Overloaded methods cannot be used to handle an event with this technique. Also, if the method that handles the
event has parameters then they must all be assignable from the types of all of the event's parameters, respectively.
In this case, Frame.GoForward is not overloaded and it has no parameters (but it would still be valid even if it
took two object parameters). Frame.GoBack is overloaded, though, so we can't use that method with this
technique.
The event binding technique is similar to implementing and consuming commands (a command is a property that
returns an object that implements the ICommand interface). Both {x:Bind} and {Binding} work with commands. So
that you don't have to implement the command pattern multiple times, you can use the DelegateCommand
helper class that you'll find in the QuizGame sample (in the "Common" folder).
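
The QuizGame sample is the place to look for DelegateCommand itself; the general shape of such a helper is sketched below (member names are an assumption and may differ from the sample).

using System;
using System.Windows.Input;

public class DelegateCommand : ICommand
{
    private readonly Action execute;
    private readonly Func<bool> canExecute;

    public event EventHandler CanExecuteChanged = delegate { };

    public DelegateCommand(Action execute, Func<bool> canExecute = null)
    {
        if (execute == null) throw new ArgumentNullException("execute");
        this.execute = execute;
        this.canExecute = canExecute;
    }

    public bool CanExecute(object parameter)
    {
        return this.canExecute == null ? true : this.canExecute();
    }

    public void Execute(object parameter)
    {
        this.execute();
    }

    // Call this when the result of CanExecute may have changed.
    public void RaiseCanExecuteChanged()
    {
        this.CanExecuteChanged(this, EventArgs.Empty);
    }
}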

Binding to a collection of folders or files


You can use the APIs in the Windows.Storage namespace to retrieve folder and file data. However, the various
GetFilesAsync, GetFoldersAsync, and GetItemsAsync methods do not return values that are suitable for
binding to list controls. Instead, you must bind to the return values of the GetVirtualizedFilesVector,
GetVirtualizedFoldersVector, and GetVirtualizedItemsVector methods of the FileInformationFactory class.
The following code example from the StorageDataSource and GetVirtualizedFilesVector sample shows the typical
usage pattern. Remember to declare the picturesLibrary capability in your app package manifest, and confirm
that there are pictures in your Pictures library folder.

protected override void OnNavigatedTo(NavigationEventArgs e)
{
    var library = Windows.Storage.KnownFolders.PicturesLibrary;
    var queryOptions = new Windows.Storage.Search.QueryOptions();
    queryOptions.FolderDepth = Windows.Storage.Search.FolderDepth.Deep;
    queryOptions.IndexerOption = Windows.Storage.Search.IndexerOption.UseIndexerWhenAvailable;

    var fileQuery = library.CreateFileQueryWithOptions(queryOptions);

    var fif = new Windows.Storage.BulkAccess.FileInformationFactory(
        fileQuery,
        Windows.Storage.FileProperties.ThumbnailMode.PicturesView,
        190,
        Windows.Storage.FileProperties.ThumbnailOptions.UseCurrentScale,
        false
    );

    var dataSource = fif.GetVirtualizedFilesVector();
    this.PicturesListView.ItemsSource = dataSource;
}

You will typically use this approach to create a read-only view of file and folder info. You can create two-way
bindings to the file and folder properties, for example to let users rate a song in a music view. However, any
changes are not persisted until you call the appropriate SavePropertiesAsync method (for example,
MusicProperties.SavePropertiesAsync). You should commit changes when the item loses focus because this
triggers a selection reset.
Note that two-way binding using this technique works only with indexed locations, such as Music. You can
determine whether a location is indexed by calling the FolderInformation.GetIndexedStateAsync method.
Note also that a virtualized vector can return null for some items before it populates their value. For example, you
should check for null before you use the SelectedItem value of a list control bound to a virtualized vector, or use
SelectedIndex instead.
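
For example (a sketch, using the PicturesListView from the code above):

// Items in a virtualized vector can be null until they're populated,
// so check before using the selection.
var selected = this.PicturesListView.SelectedItem as Windows.Storage.BulkAccess.FileInformation;
if (selected != null)
{
    // It's now safe to read the item's properties.
}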

Binding to data grouped by a key


If you take a flat collection of items (books, for example, represented by a BookSku class) and you group the
items by using a common property as a key (the BookSku.AuthorName property, for example), then the result
is called grouped data. When you group data, it is no longer a flat collection. Grouped data is a collection of group
objects, where each group object has a) a key and b) a collection of items whose property matches that key. To
take the books example again, the result of grouping the books by author name is a collection of author
name groups where each group has a) a key, which is an author name, and b) a collection of the BookSkus whose
AuthorName property matches the group's key.
In general, to display a collection, you bind the ItemsSource of an items control (such as ListView or GridView)
directly to a property that returns a collection. If that's a flat collection of items then you don't need to do anything
special. But if it's a collection of group objects (as it is when binding to grouped data) then you need the services of
an intermediary object called a CollectionViewSource which sits between the items control and the binding
source. You bind the CollectionViewSource to the property that returns grouped data, and you bind the items
control to the CollectionViewSource. An extra value-add of a CollectionViewSource is that it keeps track of the
current item, so you can keep more than one items control in sync by binding them all to the same
CollectionViewSource. You can also access the current item programmatically through the
ICollectionView.CurrentItem property of the object returned by the CollectionViewSource.View property.
To activate the grouping facility of a CollectionViewSource, set IsSourceGrouped to true. Whether you also
need to set the ItemsPath property depends on exactly how you author your group objects. There are two ways
to author a group object: the "is-a-group" pattern, and the "has-a-group" pattern. In the "is-a-group" pattern, the
group object derives from a collection type (for example, List<T>), so the group object actually is itself the group
of items. With this pattern you do not need to set ItemsPath. In the "has-a-group" pattern, the group object has
one or more properties of a collection type (such as List<T>), so the group "has a" group of items in the form of a
property (or several groups of items in the form of several properties). With this pattern you need to set
ItemsPath to the name of the property that contains the group of items.
The example below illustrates the "has-a-group" pattern. The page class has a property named ViewModel, which
returns an instance of our view model. The CollectionViewSource binds to the Authors property of the view
model (Authors is the collection of group objects) and also specifies that it's the Author.BookSkus property that
contains the grouped items. Finally, the GridView is bound to the CollectionViewSource, and has its group style
defined so that it can render the items in groups.

<Page.Resources>
<CollectionViewSource
x:Name="AuthorHasACollectionOfBookSku"
Source="{x:Bind ViewModel.Authors}"
IsSourceGrouped="true"
ItemsPath="BookSkus"/>
</Page.Resources>
...

<GridView
ItemsSource="{Binding Source={StaticResource AuthorHasACollectionOfBookSku}}" ...>
<GridView.GroupStyle>
<GroupStyle
HeaderTemplate="{StaticResource AuthorGroupHeaderTemplateWide}" ... />
</GridView.GroupStyle>
</GridView>

Note that the ItemsSource must use {Binding} (and not {x:Bind}) because it needs to set the Source property to a
resource. To see the above example in the context of the complete app, download the Bookstore2 sample app.
Unlike the markup shown above, Bookstore2 uses {Binding} exclusively.
You can implement the "is-a-group" pattern in one of two ways. One way is to author your own group class.
Derive the class from List<T> (where T is the type of the items). For example, public class Author : List<BookSku>. The
second way is to use a LINQ expression to dynamically create group objects (and a group class) from like property
values of the BookSku items. This approach (maintaining only a flat list of items and grouping them together on
the fly) is typical of an app that accesses data from a cloud service. You get the flexibility to group books by
author or by genre (for example) without needing special group classes such as Author and Genre.
The example below illustrates the "is-a-group" pattern using LINQ. This time we group books by genre, displayed
with the genre name in the group headers. This is indicated by the "Key" property path in reference to the group
Key value.
using System.Linq;

...

private IOrderedEnumerable<IGrouping<string, BookSku>> genres;

public IOrderedEnumerable<IGrouping<string, BookSku>> Genres
{
    get
    {
        if (this.genres == null)
        {
            this.genres = from book in this.bookSkus
                          group book by book.genre into grp
                          orderby grp.Key
                          select grp;
        }
        return this.genres;
    }
}

Remember that when using {x:Bind} with data templates we need to indicate the type being bound to by setting an
x:DataType value. If the type is generic then we can't express that in markup so we need to use {Binding} instead
in the group style header template.

<Grid.Resources>
    <CollectionViewSource x:Name="GenreIsACollectionOfBookSku"
        Source="{Binding Genres}"
        IsSourceGrouped="true"/>
</Grid.Resources>
<GridView ItemsSource="{Binding Source={StaticResource GenreIsACollectionOfBookSku}}">
    <GridView.ItemTemplate>
        <DataTemplate x:DataType="local:BookSku">
            <TextBlock Text="{x:Bind Title}"/>
        </DataTemplate>
    </GridView.ItemTemplate>
    <GridView.GroupStyle>
        <GroupStyle>
            <GroupStyle.HeaderTemplate>
                <DataTemplate>
                    <TextBlock Text="{Binding Key}"/>
                </DataTemplate>
            </GroupStyle.HeaderTemplate>
        </GroupStyle>
    </GridView.GroupStyle>
</GridView>

A SemanticZoom control is a great way for your users to view and navigate grouped data. The Bookstore2
sample app illustrates how to use the SemanticZoom. In that app, you can view a list of books grouped by author
(the zoomed-in view) or you can zoom out to see a jump list of authors (the zoomed-out view). The jump list
affords much quicker navigation than scrolling through the list of books. The zoomed-in and zoomed-out views
are actually ListView or GridView controls bound to the same CollectionViewSource.
When you bind to hierarchical data, such as subcategories within categories, you can choose to display the
hierarchical levels in your UI with a series of items controls. A selection in one items control determines the
contents of subsequent items controls. You can keep the lists synchronized by binding each list to its own
CollectionViewSource and binding the CollectionViewSource instances together in a chain. This is called a
master/details (or list/details) view. For more info, see How to bind to hierarchical data and create a master/details
view.

Diagnosing and debugging data binding problems


Your binding markup contains the names of properties (and, for C#, sometimes fields and methods). So when you
rename a property, you'll also need to change any binding that references it. Forgetting to do that leads to a
typical example of a data binding bug, and your app either won't compile or won't run correctly.
The binding objects created by {x:Bind} and {Binding} are largely functionally equivalent. But {x:Bind} has type
information for the binding source, and it generates source code at compile-time. With {x:Bind} you get the same
kind of problem detection that you get with the rest of your code. That includes compile-time validation of your
binding expressions, and debugging by setting breakpoints in the source code generated as the partial class for
your page. These classes can be found in the files in your obj folder, with names like (for C#) <view name>.g.cs. If
you have a problem with a binding then turn on Break On Unhandled Exceptions in the Microsoft Visual Studio
debugger. The debugger will break execution at that point, and you can then debug what has gone wrong. The
code generated by {x:Bind} follows the same pattern for each part of the graph of binding source nodes, and you
can use the info in the Call Stack window to help determine the sequence of calls that led up to the problem.
{Binding} does not have type information for the binding source. But when you run your app with the debugger
attached, any binding errors appear in the Output window in Visual Studio.

Creating bindings in code


Note This section only applies to {Binding}, because you can't create {x:Bind} bindings in code. However, some of
the same benefits of {x:Bind} can be achieved with DependencyObject.RegisterPropertyChangedCallback,
which enables you to register for change notifications on any dependency property.
You can also connect UI elements to data using procedural code instead of XAML. To do this, create a new
Binding object, set the appropriate properties, then call FrameworkElement.SetBinding or
BindingOperations.SetBinding. Creating bindings programmatically is useful when you want to choose the
binding property values at run-time or share a single binding among multiple controls. Note, however, that you
cannot change the binding property values after you call SetBinding.
The following example shows how to implement a binding in code.

<TextBox x:Name="MyTextBox" Text="Text"/>

// Create an instance of the MyColors class
// that implements INotifyPropertyChanged.
MyColors textcolor = new MyColors();

// Brush1 is set to be a SolidColorBrush with the value Red.
textcolor.Brush1 = new SolidColorBrush(Colors.Red);

// Set the DataContext of the TextBox MyTextBox.
MyTextBox.DataContext = textcolor;

// Create the binding and associate it with the text box.
Binding binding = new Binding() { Path = new PropertyPath("Brush1") };
MyTextBox.SetBinding(TextBox.ForegroundProperty, binding);

' Create an instance of the MyColors class
' that implements INotifyPropertyChanged.
Dim textcolor As New MyColors()

' Brush1 is set to be a SolidColorBrush with the value Red.
textcolor.Brush1 = New SolidColorBrush(Colors.Red)

' Set the DataContext of the TextBox MyTextBox.
MyTextBox.DataContext = textcolor

' Create the binding and associate it with the text box.
Dim binding As New Binding() With {.Path = New PropertyPath("Brush1")}
MyTextBox.SetBinding(TextBox.ForegroundProperty, binding)

{x:Bind} and {Binding} feature comparison


FEATURE: Path is the default property
    {x:Bind}: {x:Bind a.b.c}
    {Binding}: {Binding a.b.c}

FEATURE: Path property
    {x:Bind}: {x:Bind Path=a.b.c}
    {Binding}: {Binding Path=a.b.c}
    Notes: In {x:Bind}, Path is rooted at the Page by default, not the DataContext.

FEATURE: Indexer
    {x:Bind}: {x:Bind Groups[2].Title}
    {Binding}: {Binding Groups[2].Title}
    Notes: Binds to the specified item in the collection. Only integer-based indexes are supported.

FEATURE: Attached properties
    {x:Bind}: {x:Bind Button22.(Grid.Row)}
    {Binding}: {Binding Button22.(Grid.Row)}
    Notes: Attached properties are specified using parentheses. If the property is not declared in a XAML namespace, then prefix it with an xml namespace, which should be mapped to a code namespace at the head of the document.

FEATURE: Casting
    {x:Bind}: {x:Bind groups[0].(data:SampleDataGroup.Title)}
    {Binding}: Not needed.
    Notes: Casts are specified using parentheses. If the property is not declared in a XAML namespace, then prefix it with an xml namespace, which should be mapped to a code namespace at the head of the document.

FEATURE: Converter
    {x:Bind}: {x:Bind IsShown, Converter={StaticResource BoolToVisibility}}
    {Binding}: {Binding IsShown, Converter={StaticResource BoolToVisibility}}
    Notes: Converters must be declared at the root of the Page/ResourceDictionary, or in App.xaml.

FEATURE: ConverterParameter, ConverterLanguage
    {x:Bind}: {x:Bind IsShown, Converter={StaticResource BoolToVisibility}, ConverterParameter=One, ConverterLanguage=fr-fr}
    {Binding}: {Binding IsShown, Converter={StaticResource BoolToVisibility}, ConverterParameter=One, ConverterLanguage=fr-fr}
    Notes: Converters must be declared at the root of the Page/ResourceDictionary, or in App.xaml.

FEATURE: TargetNullValue
    {x:Bind}: {x:Bind Name, TargetNullValue=0}
    {Binding}: {Binding Name, TargetNullValue=0}
    Notes: Used when the leaf of the binding expression is null. Use single quotes for a string value.

FEATURE: FallbackValue
    {x:Bind}: {x:Bind Name, FallbackValue='empty'}
    {Binding}: {Binding Name, FallbackValue='empty'}
    Notes: Used when any part of the path for the binding (except for the leaf) is null.

FEATURE: ElementName
    {x:Bind}: {x:Bind slider1.Value}
    {Binding}: {Binding Value, ElementName=slider1}
    Notes: With {x:Bind} you're binding to a field; Path is rooted at the Page by default, so any named element can be accessed via its field.

FEATURE: RelativeSource: Self
    {x:Bind}: <Rectangle x:Name="rect1" Width="200" Height="{x:Bind rect1.Width}" ... />
    {Binding}: <Rectangle Width="200" Height="{Binding Width, RelativeSource={RelativeSource Self}}" ... />
    Notes: With {x:Bind}, name the element and use its name in Path.

FEATURE: RelativeSource: TemplatedParent
    {x:Bind}: Not supported.
    {Binding}: {Binding <path>, RelativeSource={RelativeSource TemplatedParent}}
    Notes: Regular template binding can be used in control templates for most uses. But use TemplatedParent where you need to use a converter, or a two-way binding.

FEATURE: Source
    {x:Bind}: Not supported.
    {Binding}: <ListView ItemsSource="{Binding Orders, Source={StaticResource MyData}}"/>
    Notes: For {x:Bind}, use a property or a static path instead.

FEATURE: Mode
    {x:Bind}: {x:Bind Name, Mode=OneWay}
    {Binding}: {Binding Name, Mode=TwoWay}
    Notes: Mode can be OneTime, OneWay, or TwoWay. {x:Bind} defaults to OneTime; {Binding} defaults to OneWay.

FEATURE: UpdateSourceTrigger
    {x:Bind}: Not supported.
    {Binding}: <Binding UpdateSourceTrigger="Default [or] PropertyChanged [or] Explicit"/>
    Notes: {x:Bind} uses PropertyChanged behavior for all cases except TextBox.Text, where it waits for lost focus to update the source.
Sample data on the design surface, and for
prototyping

[ Updated for UWP apps on Windows 10. For Windows 8.x articles, see the archive ]
Note The degree to which you need sample data (and how much it will help you) depends on whether your
bindings use the {Binding} markup extension or the {x:Bind} markup extension. The techniques described in this
topic are based on the use of a DataContext, so they're only appropriate for {Binding}. But if you're using
{x:Bind}, then your bindings at least show placeholder values on the design surface (even for items controls), so
you don't have quite the same need for sample data.
It may be impossible or undesirable (perhaps for reasons of privacy or performance) for your app to display live
data on the design surface in Microsoft Visual Studio or Blend for Visual Studio. In order to have your controls
populated with data (so that you can work on your app's layout, templates, and other visual properties), there are
various ways in which you can use design-time sample data. Sample data can also be really useful and time-saving
if you're building a sketch (or prototype) app. You can use sample data in your sketch or prototype at run-time to
illustrate your ideas without going as far as connecting to real, live data.

Setting DataContext in markup


It's a fairly common developer practice to use imperative code (in code-behind) to set a page or user control's
DataContext to a view model instance.

public MainPage()
{
InitializeComponent();
this.DataContext = new BookstoreViewModel();
}

But if you do that then your page isn't as "designable" as it could be. The reason is that when your XAML page is
opened in Visual Studio or Blend for Visual Studio, the imperative code that assigns the DataContext value is
never run (in fact, none of your code-behind is executed). The XAML tools do of course parse your markup and
instantiate any objects declared in it, but they don't actually instantiate your page's type itself. The result is that you
won't see any data in your controls or in the Create Data Binding dialog, and your page will be more challenging
to style and to lay out.

The first remedy to try is to comment out that DataContext assignment and set the DataContext in your page
markup instead. That way, your live data shows up at design-time as well as at run-time. To do this, first open your
XAML page. Then, in the Document Outline window, click the root designable element (usually with the label
[Page]) to select it. In the Properties window, find the DataContext property (inside the Common category), and
then click New. Click your view model type from the Select Object dialog box, and then click OK.

Here's what the resulting markup looks like.

<Page ... >
    <Page.DataContext>
        <local:BookstoreViewModel/>
    </Page.DataContext>

And here's what the design surface looks like now that your bindings can resolve. Notice that the Path picker in the
Create Data Binding dialog is now populated, based on the DataContext type and the properties that you can
bind to.

The Create Data Binding dialog only needs a type to work from, but the bindings need the properties to be
initialized with values. If you don't want to reach out to your cloud service at design-time (due to performance,
paying for data transfer, privacy issues, that kind of thing) then your initialization code can check to see whether
your app is running in a design tool (such as Visual Studio or Blend for Visual Studio) and in that case load sample
data for use at design-time only.
if (Windows.ApplicationModel.DesignMode.DesignModeEnabled)
{
    // Load design-time books.
}
else
{
    // Load books from a cloud service.
}

You could use a view model locator if you need to pass parameters to your initialization code. A view model locator
is a class that you can put into app resources. It has a property that exposes the view model, and your page's
DataContext binds to that property. Another pattern that the locator or the view model can use is dependency
injection, which can construct a design-time or a run-time data provider (each of which implements a common
interface), as applicable.
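
Here's one possible shape for such a locator (a sketch; BookstoreViewModel and its factory methods are illustrative names, not APIs from this topic):

public class ViewModelLocator
{
    public BookstoreViewModel Bookstore
    {
        get
        {
            if (Windows.ApplicationModel.DesignMode.DesignModeEnabled)
            {
                return BookstoreViewModel.WithSampleData(); // Illustrative factory method.
            }
            return BookstoreViewModel.FromCloudService(); // Illustrative factory method.
        }
    }
}

You would declare an instance of the locator in app resources, and bind the page's DataContext to its Bookstore property.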

"Sample data from class", and design-time attributes


If for whatever reason none of the options in the previous section work for you then you still have plenty of design-
time data options available via XAML tools features and design-time attributes. One good option is the Create
Sample Data from Class feature in Blend for Visual Studio. You can find that command on one of the buttons at
the top of the Data panel.
All you need to do is to specify a class for the command to use. The command then does two important things for
you. First, it generates a XAML file that contains sample data suitable for hydrating an instance of your chosen class
and all of its members, recursively (in fact, the tooling works equally well with XAML or JSON files). Second, it
populates the Data panel with the schema of your chosen class. You can then drag members from the Data panel
onto the design surface to perform various tasks. Depending on what you drag and where you drop it, you can add
bindings to existing controls (using {Binding}), or create new controls and bind them at the same time. In either
case, the operation also sets a design-time data context (d:DataContext) for you (if one is not already set) on the
layout root of your page. That design-time data context uses the d:DesignData attribute to get its sample data
from the XAML file that was generated (which, by the way, you are free to find in your project and edit so that it
contains the sample data you want).

<Page ...
    xmlns:d="http://schemas.microsoft.com/expression/blend/2008"
    xmlns:mc="http://schemas.openxmlformats.org/markup-compatibility/2006"
    mc:Ignorable="d">
    <Grid ... d:DataContext="{d:DesignData /SampleData/RecordingViewModelSampleData.xaml}">
        <ListView ItemsSource="{Binding Recordings}" ... />
        ...
    </Grid>
</Page>

The various xmlns declarations mean that attributes with the d: prefix are interpreted only at design-time and are
ignored at run-time. So the d:DataContext attribute only affects the value of the DataContext property at design-
time; it has no effect at run-time. You can even set both d:DataContext and DataContext in markup if you like.
d:DataContext will override at design-time, and DataContext will override at run-time. These same override
rules apply to all design-time and run-time attributes.
The d:DataContext attribute, and all other design-time attributes, are documented in the Design-Time Attributes
topic, which is still valid for Universal Windows Platform (UWP) apps.
CollectionViewSource doesn't have a DataContext property, but it does have a Source property. Consequently,
there's a d:Source property that you can use to set design-time-only sample data on a CollectionViewSource.
<Page.Resources>
<CollectionViewSource x:Name="RecordingsCollection" Source="{Binding Recordings}"
d:Source="{d:DesignData /SampleData/RecordingsSampleData.xaml}"/>
</Page.Resources>

...

<ListView ItemsSource="{Binding Source={StaticResource RecordingsCollection}}" ... />


...

For this to work, you would have a class named Recordings : ObservableCollection<Recording> , and you would edit the
sample data XAML file so that it contains only a Recordings object (with Recording objects inside that), as shown
here.

<Quickstart:Recordings xmlns:Quickstart="using:Quickstart">
<Quickstart:Recording ArtistName="Mollis massa" CompositionName="Cubilia metus"
OneLineSummary="Morbi adipiscing sed" ReleaseDateTime="01/01/1800 15:53:17"/>
<Quickstart:Recording ArtistName="Vulputate nunc" CompositionName="Parturient vestibulum"
OneLineSummary="Dapibus praesent netus amet vestibulum" ReleaseDateTime="01/01/1800 15:53:17"/>
<Quickstart:Recording ArtistName="Phasellus accumsan" CompositionName="Sit bibendum"
OneLineSummary="Vestibulum egestas montes dictumst" ReleaseDateTime="01/01/1800 15:53:17"/>
</Quickstart:Recordings>

If you use a JSON sample data file instead of XAML, you must set the Type property.

d:Source="{d:DesignData /SampleData/RecordingsSampleData.json, Type=local:Recordings}"

So far, we've been using d:DesignData to load design-time sample data from a XAML or JSON file. An alternative
to that is the d:DesignInstance markup extension, which indicates that the design-time source is based on the
class specified by the Type property. Here's an example.

<CollectionViewSource x:Name="RecordingsCollection" Source="{Binding Recordings}"
    d:Source="{d:DesignInstance Type=local:Recordings, IsDesignTimeCreatable=True}"/>

The IsDesignTimeCreatable property indicates that the design tool should actually create an instance of the class,
which implies that the class has a public default constructor, and that it populates itself with data (either real or
sample). If you don't set IsDesignTimeCreatable (or if you set it to False) then you won't get sample data
displayed on the design surface. All the design tool does in that case is to parse the class for its bindable properties
and display these in the Data panel and in the Create Data Binding dialog.

Sample data for prototyping


For prototyping, you want sample data at both design-time and at run-time. For that use case, Blend for Visual
Studio has the New Sample Data feature. You can find that command on one of the buttons at the top of the
Data panel.
Instead of specifying a class, you can actually design the schema of your sample data source right in the Data
panel. You can also edit sample data values in the Data panel: there's no need to open and edit a file (although you
can still do that if you prefer).
The New Sample Data feature uses DataContext, and not d:DataContext, so that the sample data is available
when you run your sketch or prototype as well as while you're designing it. And the Data panel really speeds up
your designing and binding tasks. For example, simply dragging a collection property from the Data panel onto
the design surface generates a data-bound items control and the necessary templates, all ready to build and run.
Bind hierarchical data and create a master/details
view

[ Updated for UWP apps on Windows 10. For Windows 8.x articles, see the archive ]

Note Also see the Master/detail sample.

You can make a multi-level master/details (also known as list-details) view of hierarchical data by binding items
controls to CollectionViewSource instances that are bound together in a chain. In this topic we use the {x:Bind}
markup extension where possible, and the more flexible (but less performant) {Binding} markup extension where
necessary.
One common structure for Universal Windows Platform (UWP) apps is to navigate to different details pages when
a user makes a selection in a master list. This is useful when you want to provide a rich visual representation of
each item at every level in a hierarchy. Another option is to display multiple levels of data on a single page. This is
useful when you want to display a few simple lists that let the user quickly drill down to an item of interest. This
topic describes how to implement this interaction. The CollectionViewSource instances keep track of the current
selection at each hierarchical level.
We'll create a view of a sports team hierarchy that is organized into lists for leagues, divisions, and teams, and
includes a team details view. When you select an item from any list, the subsequent views update automatically.

Prerequisites
This topic assumes that you know how to create a basic UWP app. For instructions on creating your first UWP app,
see Create your first UWP app using C# or Visual Basic.

Create the project


Create a new Blank Application (Windows Universal) project. Name it "MasterDetailsBinding".

Create the data model


Add a new class to your project, name it ViewModel.cs, and add this code to it. This will be your binding source
class.
using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.Threading.Tasks;

namespace MasterDetailsBinding
{
    public class Team
    {
        public string Name { get; set; }
        public int Wins { get; set; }
        public int Losses { get; set; }
    }

    public class Division
    {
        public string Name { get; set; }
        public IEnumerable<Team> Teams { get; set; }
    }

    public class League
    {
        public string Name { get; set; }
        public IEnumerable<Division> Divisions { get; set; }
    }

    public class LeagueList : List<League>
    {
        public LeagueList()
        {
            this.AddRange(GetLeague().ToList());
        }

        public IEnumerable<League> GetLeague()
        {
            return from x in Enumerable.Range(1, 2)
                   select new League
                   {
                       Name = "League " + x,
                       Divisions = GetDivisions(x).ToList()
                   };
        }

        public IEnumerable<Division> GetDivisions(int x)
        {
            return from y in Enumerable.Range(1, 3)
                   select new Division
                   {
                       Name = String.Format("Division {0}-{1}", x, y),
                       Teams = GetTeams(x, y).ToList()
                   };
        }

        public IEnumerable<Team> GetTeams(int x, int y)
        {
            return from z in Enumerable.Range(1, 4)
                   select new Team
                   {
                       Name = String.Format("Team {0}-{1}-{2}", x, y, z),
                       Wins = 25 - (x * y * z),
                       Losses = x * y * z
                   };
        }
    }
}
Create the view
Next, expose the binding source class from the class that represents your page of markup. We do that by adding a
property of type LeagueList to MainPage.

namespace MasterDetailsBinding
{
    /// <summary>
    /// An empty page that can be used on its own or navigated to within a Frame.
    /// </summary>
    public sealed partial class MainPage : Page
    {
        public MainPage()
        {
            this.InitializeComponent();
            this.ViewModel = new LeagueList();
        }

        public LeagueList ViewModel { get; set; }
    }
}

Finally, replace the contents of the MainPage.xaml file with the following markup, which declares three
CollectionViewSource instances and binds them together in a chain. The subsequent controls can then bind to
the appropriate CollectionViewSource, depending on its level in the hierarchy.

<Page
    x:Class="MasterDetailsBinding.MainPage"
    xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
    xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
    xmlns:local="using:MasterDetailsBinding"
    xmlns:d="http://schemas.microsoft.com/expression/blend/2008"
    xmlns:mc="http://schemas.openxmlformats.org/markup-compatibility/2006"
    mc:Ignorable="d">

    <Page.Resources>
        <CollectionViewSource x:Name="Leagues"
                              Source="{x:Bind ViewModel}"/>
        <CollectionViewSource x:Name="Divisions"
                              Source="{Binding Divisions, Source={StaticResource Leagues}}"/>
        <CollectionViewSource x:Name="Teams"
                              Source="{Binding Teams, Source={StaticResource Divisions}}"/>

        <Style TargetType="TextBlock">
            <Setter Property="FontSize" Value="15"/>
            <Setter Property="FontWeight" Value="Bold"/>
        </Style>

        <Style TargetType="ListBox">
            <Setter Property="FontSize" Value="15"/>
        </Style>

        <Style TargetType="ContentControl">
            <Setter Property="FontSize" Value="15"/>
        </Style>

    </Page.Resources>

    <Grid Background="{ThemeResource ApplicationPageBackgroundThemeBrush}">

        <StackPanel Orientation="Horizontal">

            <!-- All Leagues view -->

            <StackPanel Margin="5">
                <TextBlock Text="All Leagues"/>
                <ListBox ItemsSource="{Binding Source={StaticResource Leagues}}"
                         DisplayMemberPath="Name"/>
            </StackPanel>

            <!-- League/Divisions view -->

            <StackPanel Margin="5">
                <TextBlock Text="{Binding Name, Source={StaticResource Leagues}}"/>
                <ListBox ItemsSource="{Binding Source={StaticResource Divisions}}"
                         DisplayMemberPath="Name"/>
            </StackPanel>

            <!-- Division/Teams view -->

            <StackPanel Margin="5">
                <TextBlock Text="{Binding Name, Source={StaticResource Divisions}}"/>
                <ListBox ItemsSource="{Binding Source={StaticResource Teams}}"
                         DisplayMemberPath="Name"/>
            </StackPanel>

            <!-- Team view -->

            <ContentControl Content="{Binding Source={StaticResource Teams}}">
                <ContentControl.ContentTemplate>
                    <DataTemplate>
                        <StackPanel Margin="5">
                            <TextBlock Text="{Binding Name}"
                                       FontSize="15" FontWeight="Bold"/>
                            <StackPanel Orientation="Horizontal" Margin="10,10">
                                <TextBlock Text="Wins:" Margin="0,0,5,0"/>
                                <TextBlock Text="{Binding Wins}"/>
                            </StackPanel>
                            <StackPanel Orientation="Horizontal" Margin="10,0">
                                <TextBlock Text="Losses:" Margin="0,0,5,0"/>
                                <TextBlock Text="{Binding Losses}"/>
                            </StackPanel>
                        </StackPanel>
                    </DataTemplate>
                </ContentControl.ContentTemplate>
            </ContentControl>

        </StackPanel>

    </Grid>
</Page>

Note that by binding directly to the CollectionViewSource, you're implying that you want to bind to the current
item in bindings where the path cannot be found on the collection itself. There's no need to specify the
CurrentItem property as the path for the binding, although you can do that if there's any ambiguity. For example,
the ContentControl representing the team view has its Content property bound to the Teams
CollectionViewSource. However, the controls in the DataTemplate bind to properties of the Team class
because the CollectionViewSource automatically supplies the currently selected team from the teams list when
necessary.
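For example, since the topic notes that you can name CurrentItem explicitly when there is ambiguity, the team view's Content binding could equivalently be written like this (a sketch of the explicit form, not required by the markup above):

<!-- Equivalent to the team view's Content binding above, with CurrentItem made explicit -->
<ContentControl Content="{Binding CurrentItem, Source={StaticResource Teams}}"/>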
Debugging, testing, and performance

[ Updated for UWP apps on Windows 10. For Windows 8.x articles, see the archive ]
Use Microsoft Visual Studio to debug and test your app. To prepare your app for the Windows Store certification
process, use the Windows App Certification Kit.

TOPIC | DESCRIPTION

Deploying and debugging UWP apps | This article guides you through the steps to target various deployment and debugging targets.

Testing and debugging tools for Process Lifetime Management (PLM) | Tools and techniques for debugging and testing how your app works with Process Lifetime Management.

Test with the Microsoft Emulator for Windows 10 Mobile | Simulate real-world interaction with a device and test the features of your app by using the tools included with Microsoft Emulator for Windows 10 Mobile. The emulator is a desktop application that emulates a mobile device running Windows 10. It provides a virtualized environment in which you can debug and test Windows apps without a physical device. It also provides an isolated environment for your application prototypes.

Test Surface Hub apps using Visual Studio | The Visual Studio simulator provides an environment where you can design, develop, debug, and test Universal Windows Platform (UWP) apps, including apps that you have built for Microsoft Surface Hub. The simulator does not use the same user interface as Surface Hub, but it is useful for testing how your app looks and behaves at the Surface Hub's screen size and resolution.

Beta testing | Beta testing gives you the chance to improve your app based on feedback from individuals outside of your app-development team who try your unreleased app on their own devices.

Windows Device Portal | The Windows Device Portal lets you configure and manage your device remotely over a network or USB connection.

Windows App Certification Kit | Use the Windows App Certification Kit to validate and test your app locally, to give it the best chance of passing the Windows Store certification process.

Performance | Users expect their apps to remain responsive, to feel natural, and not to drain their battery. Technically, performance is a non-functional requirement, but treating performance as a feature will help you deliver on your users' expectations. Specifying goals, and measuring, are key factors. Determine what your performance-critical scenarios are; define what good performance means. Then measure early and often enough throughout the lifecycle of your project to be confident you'll hit your goals.
Deploying and debugging UWP apps

[ Updated for UWP apps on Windows 10. For Windows 8.x articles, see the archive ]
This article guides you through the steps to target various deployment and debugging targets.
Microsoft Visual Studio allows you to deploy and debug your Universal Windows Platform (UWP) apps on a
variety of Windows 10 devices. Visual Studio will handle the process of building and registering the app on the
target device.

Picking a deployment target


To pick a target, go to the debug target drop-down next to the Start Debugging button and choose which target
you want to deploy your app to. After the target is selected, select Start Debugging (F5) to deploy and debug on
that target, or select Ctrl+F5 to just deploy to that target.

Simulator will deploy the app to a simulated environment on your current development machine. This option
is only available if your app's Target Platform Min. Version is less than or equal to the operating system on
your development machine.
Local Machine will deploy the app to your current development machine. This option is only available if your
app's Target Platform Min. Version is less than or equal to the operating system on your development
machine.
Remote Machine will let you specify a remote target to deploy the app. More information about deploying to
a remote machine can be found in Specifying a remote device.
Device will deploy the app to a USB connected device. The device must be developer unlocked and have the
screen unlocked.
An Emulator target will boot up and deploy the app to an emulator with the configuration specified in the
name. Emulators are only available on Hyper-V enabled machines running Windows 8.1 or beyond.

Debugging deployed apps


Visual Studio can also attach to any running UWP app process by selecting Debug, and then Attach to Process.
Attaching to a running process doesn't require the original Visual Studio project, but loading the process's symbols
will help significantly when debugging a process that you don't have the original code for.
In addition, any installed app package can be attached and debugged by selecting Debug, Other, and then Debug
Installed App Packages.
Selecting Do not launch, but debug my code when it starts will cause the Visual Studio debugger to attach to
your UWP app when you launch it at a custom time. This is an effective way to debug control paths from different
launch methods, such as protocol activation with custom parameters.
UWP apps can be developed and compiled on Windows 8.1 or later, but require Windows 10 to run. If you are
developing a UWP app on a Windows 8.1 PC, you can remotely debug a UWP app running on another Windows
10 device, provided that both the host and target computer are on the same LAN. To do this, download and install
the Remote Tools for Visual Studio on both machines. The installed version must match the existing version of
Visual Studio that you have installed, and the architecture you select (x86, x64) must also match that of your target
app.

Package layout
With Visual Studio 2015 Update 3, we have added the option for developers to specify the layout path for their
UWP apps. This determines where the package layout is copied to on disk when you build your app. By default, this
property is set relative to the project's root directory. If you do not modify this property, the behavior will remain
the same as it has for previous versions of Visual Studio.
This property can be modified in the project's Debug properties.
If you want to include all layout files in your package when you create a package for your app, you must add the
project property <IncludeLayoutFilesInPackage>true</IncludeLayoutFilesInPackage> .
To add this property:
1. Right-click the project, and then select Unload Project.
2. Right-click the project, and then select Edit [projectname].xxproj (.xxproj will change depending on project
language).
3. Add the property, and then reload the project.
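For reference, the property might look like this in the project file (a minimal sketch; the surrounding PropertyGroup contents will vary by project):

<PropertyGroup>
  <!-- Include all layout files when creating a package for the app. -->
  <IncludeLayoutFilesInPackage>true</IncludeLayoutFilesInPackage>
</PropertyGroup>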
Specifying a remote device
C# and Microsoft Visual Basic
To specify a remote machine for C# or Microsoft Visual Basic apps, select Remote Machine in the debug target
drop-down. The Remote Connections dialog will appear, which will let you specify an IP address or select a
discovered device. By default, the Universal authentication mode is selected. To determine which authentication
mode to use, see Authentication modes.

To return to this dialog, you can open project properties and go to the Debug tab. From there, select Find next to
Remote machine:

To deploy an app to a remote PC, you will also need to download and install the Visual Studio Remote Tools on the
target PC. For full instructions, see Remote PC instructions.
C++ and JavaScript
To specify a remote machine target for a C++ or JavaScript UWP app:
1. In the Solution Explorer, right-click the project, and then click Properties.
2. Go to Debugging settings, and under Debugger to launch, select Remote Machine.
3. Enter the Machine Name (or click Locate to find one), and then set the Authentication Type property.
After the machine is specified, you can select Remote Machine in the debug target drop-down to return to that
specified machine. Only one remote machine can be selected at a time.
Remote PC instructions
To deploy to a remote PC, the target PC must have the Visual Studio Remote Tools installed. The remote PC must
also be running a version of Windows that is greater than or equal to your app's Target Platform Min. Version
property. After you have installed the remote tools, you must launch the remote debugger on the target PC.
To do this, search for Remote Debugger in the Start menu, open it, and if prompted, allow the debugger to
configure your firewall settings. By default, the debugger launches with Windows authentication. This will require
user credentials if the signed-in user is not the same on both PCs.
To change it to no authentication, in the Remote Debugger, go to Tools -> Options, and then set it to No
Authentication. After the remote debugger is set up, you must also ensure that you have set the host device to
Developer Mode. After that, you can deploy from your development machine.
For more information, see the Visual Studio Download Center page.

Authentication modes
There are three authentication modes for remote machine deployment:
Universal (Unencrypted Protocol): Use this authentication mode whenever you are deploying to a remote
device that is not a Windows PC (desktop or laptop). Currently, this is for IoT devices, Xbox devices, and
HoloLens devices. Universal (Unencrypted Protocol) should only be used on trusted networks. The debugging
connection is vulnerable to malicious users who could intercept and change data being passed between the
development and remote machine.
Windows: This authentication mode is only intended to be used for remote PC deployment (desktop or laptop).
Use this authentication mode when you have access to the credentials of the signed-in user of the target
machine. This is the most secure channel for remote deployment.
None: This authentication mode is only intended to be used for remote PC deployment (desktop or laptop). Use
this authentication mode when you have a test machine set up in an environment that has a test account signed
in and you cannot enter the credentials. Ensure that the remote debugger settings are set to accept no
authentication.
Advanced remote deployment options
With the release of Visual Studio 2015 Update 3, and the Windows 10 Anniversary Update, there are new
advanced remote deployment options for certain Windows 10 devices. The advanced remote deployment options
can be found on the Debug menu for project properties.
The new properties include:
Deployment type
Package registration path
Keep all files on device even those that are no longer a part of your layout
Requirements
To utilize the advanced remote deployment options, you must satisfy the following requirements:
Have Visual Studio 2015 Update 3 installed with Windows 10 Tools 1.4.1 (which includes the Windows 10
Anniversary Update SDK)
Target a Windows 10 Anniversary Update Xbox remote device
Use Universal Authentication mode
Properties pages
For a C# or Visual Basic UWP app, the properties page will look like the following.

For a C++ UWP app, the properties page will look like the following.
Copy files to device
Copy files to device will physically transfer the files over the network to the remote device. It will copy and
register the package layout that is built to the Layout folder path. Visual Studio will keep the files that are copied
to the device in sync with the files in your Visual Studio project; however, there is an option to keep all files on
device even those that are no longer a part of your layout. Selecting this option means that any files that
were previously copied to the remote device, but are no longer a part of your project, will remain on the remote
device.
The package registration path specified when you copy files to device is the physical location on the remote
device where the files are copied. This path can be specified as any relative path. The location where the files are
deployed will be relative to a development files root that will vary depending on the target device. Specifying this
path is useful for multiple developers sharing the same device and working on packages with some build variance.

NOTE
Copy files to device is currently supported on Xbox running Windows 10 Anniversary Update.

On the remote device, the layout gets copied to the following default location depending on the device family:
Xbox: \\MY-DEVKIT\DevelopmentFiles\PACKAGE-REGISTRATION-PATH

Register layout from network


When you choose to register the layout from the network, you can build your package layout to a network share
and then register the layout on the remote device directly from the network. This requires that you specify a layout
folder path (a network share) that is accessible from the remote device. The Layout folder path property is the
path set relative to the PC running Visual Studio, while the Package registration path property is the same path,
but specified relative to the remote device.
To successfully register the layout from the network, you must first make Layout folder path a shared network
folder. To do this, right-click the folder in File Explorer, select Share with > Specific people, and then choose the
users you would like to share the folder with. When you try to register the layout from the network, you will be
prompted for credentials to ensure that you are registering as a user with access to the share.
For help with this, see the following examples:
Example 1 (local layout folder, accessible as a network share):
Layout folder path = D:\Layouts\App1
Package registration path = \\NETWORK-SHARE\Layouts\App1
Example 2 (network layout folder):
Layout folder path = \\NETWORK-SHARE\Layouts\App1
Package registration path = \\NETWORK-SHARE\Layouts\App1

When you first register the layout from the network, your credentials will be cached on the target device so you do
not need to repeatedly sign in. To remove cached credentials, you can use the WinAppDeployCmd.exe tool from
the Windows 10 SDK with the deletecreds command.
You cannot select keep all files on device when you register the layout from the network because no files are
physically copied to the remote device.

NOTE
Register layout from network is currently supported on Xbox running Windows 10 Anniversary Update.

On the remote device, the layout gets registered to the following default location depending on the device family:
Xbox: \\MY-DEVKIT\DevelopmentFiles\XrfsFiles

Debugging options
On Windows 10, the startup performance of UWP apps is improved by proactively launching and then suspending
apps in a technique called prelaunch. Many apps will not need to do anything special to work in this mode, but
some apps may need to adjust their behavior. To help debug any issues in these code paths, you can start
debugging the app from Visual Studio in prelaunch mode.
Debugging is supported both from a Visual Studio project (Debug -> Other Debug Targets -> Debug
Universal Windows App Prelaunch), and for apps already installed on the machine (Debug -> Other Debug
Targets -> Debug Installed App Package by selecting the Activate app with Prelaunch check box). For more
information, see Debug UWP Prelaunch.
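If your app does need to adjust its behavior, the activation arguments expose a flag for this; a minimal sketch of OnLaunched in App.xaml.cs (simplified, the real template also creates the root frame on prelaunch):

protected override void OnLaunched(LaunchActivatedEventArgs e)
{
    if (e.PrelaunchActivated)
    {
        // The system launched the app proactively and will suspend it shortly;
        // skip work that should only happen on a user-visible launch.
        return;
    }

    // Normal activation path: create the root frame, navigate, and activate the window.
}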
You can set the following deployment options on the Debug property page of the startup project:
Allow local network loopback
For security reasons, a UWP app that is installed in the standard manner is not allowed to make network
calls to the device it is installed on. By default, Visual Studio deployment creates an exemption from this rule
for the deployed app. This exemption allows you to test communication procedures on a single machine.
Before submitting your app to the Windows Store, you should test your app without the exemption (see the command sketch after this list).
To remove the network loopback exemption from the app:
On the C# and Visual Basic Debug property page, clear the Allow local network loopback check box.
On the JavaScript and C++ Debugging property page, set the Allow Local Network Loopback value
to No.
Do not launch, but debug my code when it starts / Launch Application
To configure the deployment to automatically start a debugging session when the app is launched:
On the C# and Visual Basic Debug property page, select the Do not launch, but debug my code
when it starts check box.
On the JavaScript and C++ Debugging property page, set the Launch Application value to Yes.
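Outside Visual Studio, loopback exemptions can also be listed and removed with the CheckNetIsolation tool that ships with Windows; a minimal sketch (the package family name shown is a hypothetical placeholder):

:: List the packages that currently have a loopback exemption
CheckNetIsolation.exe LoopbackExempt -s

:: Remove the exemption for a specific package family name (hypothetical)
CheckNetIsolation.exe LoopbackExempt -d -n=Contoso.MyApp_8wekyb3d8bbwe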

Symbols
Symbol files contain a variety of very useful data when debugging code, such as variables, function names, and
entry point addresses, allowing you to better understand exceptions and callstack execution order. Symbols for
most variants of Windows are available through the Microsoft Symbol Server or can be downloaded for faster,
offline lookups at Download Windows Symbol Packages.
To set symbol options for Visual Studio, select Tools > Options, and then go to Debugging > Symbols in the
dialog window.

To load symbols in a debugging session with WinDbg, set the sympath variable to the symbol package location.
For example, running the following command will load symbols from the Microsoft Symbol Server, and then cache
them in the C:\Symbols directory:

.sympath SRV*C:\Symbols*http://msdl.microsoft.com/download/symbols
.reload

You can add more paths by using the ; delimiter, or use the .sympath+ command. For more advanced symbol
operations that use WinDbg, see Public and Private Symbols.

WinDbg
WinDbg is a powerful debugger that is shipped as part of the Debugging Tools for Windows suite, which is
included in the Windows SDK. The Windows SDK installation allows you to install Debugging Tools for Windows as
a standalone product. While highly useful for debugging native code, we don't recommend WinDbg for apps
written in managed code or HTML5.
To use WinDbg with UWP apps, you will need to first disable Process Lifetime Management (PLM) for your app
package by using PLMDebug, as described in Testing and debugging tools for Process Lifetime Management
(PLM).
plmdebug /enableDebug [PackageFullName] "\"C:\Program Files\Debugging Tools for Windows (x64)\WinDbg.exe\" -server npipe:pipe=test"

In contrast to Visual Studio, most of the core functionality of WinDbg relies on providing commands to the
command window. The provided commands allow you to view execution state, investigate user mode crash
dumps, and debug in a variety of modes.
One of the most popular commands in WinDbg is !analyze -v , which is used to retrieve a verbose amount of
information about the current exception, including:
FAULTING_IP: instruction pointer at the time of fault
EXCEPTION_RECORD: address, code, and flags of the current exception
STACK_TEXT: stack trace prior to exception
For a complete list of all WinDbg commands, see Debugger Commands.

Related topics
Testing and debugging tools for Process Lifetime Management (PLM)
Debugging, testing, and performance
Testing and debugging tools for Process Lifetime Management (PLM)

One of the key differences between UWP apps and traditional desktop applications is that UWP titles reside in an
app container subject to Process Lifetime Management (PLM). UWP apps can be suspended, resumed, or
terminated across all platforms by the Runtime Broker service, and there are dedicated tools for you to use to
force those transitions when you are testing or debugging the code that handles them.
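For context, those transitions exercise the suspension handlers in your app; a minimal sketch of the relevant parts of App.xaml.cs:

using Windows.ApplicationModel;

public App()
{
    this.InitializeComponent();
    this.Suspending += OnSuspending;
    this.Resuming += OnResuming;
}

private void OnSuspending(object sender, SuspendingEventArgs e)
{
    // Take a deferral and save state: after this handler completes, the app
    // may be terminated without further notice.
    var deferral = e.SuspendingOperation.GetDeferral();
    // ... save application state here ...
    deferral.Complete();
}

private void OnResuming(object sender, object e)
{
    // Refresh any content or state that may have gone stale while suspended.
}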

Features in Visual Studio 2015


The built-in debugger in Visual Studio 2015 can help you investigate potential issues when using UWP-exclusive
features. You can force your application into different PLM states by using the Lifecycle Events toolbar, which
becomes visible when you run and debug your title.

The PLMDebug tool


PLMDebug.exe is a command-line tool that allows you to control the PLM state of an application package, and is
shipped as part of the Windows SDK. After it is installed, the tool resides in C:\Program Files (x86)\Windows
Kits\10\Debuggers\x64 by default.
PLMDebug also allows you to disable PLM for any installed app package, which is necessary for some debuggers.
Disabling PLM prevents the Runtime Broker service from terminating your app before you have a chance to
debug. To disable PLM, use the /enableDebug switch, followed by the full package name of your UWP app (the
short name, package family name, or AUMID of a package will not work):

plmdebug /enableDebug [PackageFullName]

After deploying your UWP app from Visual Studio, the full package name is displayed in the output window.
Alternatively, you can also retrieve the full package name by running Get-AppxPackage in a PowerShell console.
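For example (the name filter is a hypothetical placeholder):

(Get-AppxPackage -Name "*MyGame*").PackageFullName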

Optionally, you can specify an absolute path to a debugger that will automatically launch when your app package
is activated. If you wish to do this using Visual Studio, you'll need to specify VSJITDebugger.exe as the debugger.
However, VSJITDebugger.exe requires that you specify the -p switch, along with the process ID (PID) of the UWP
app. Because it's not possible to know the PID of your UWP app beforehand, this scenario is not possible out of the
box.
You can work around this limitation by writing a script or tool that identifies your game's process and then shell-
runs VSJITDebugger.exe, passing in the PID of your UWP app. The following C# code sample illustrates a
straightforward approach to accomplish this.
straightforward approach to accomplish this.

using System.Diagnostics;

namespace VSJITLauncher
{
    class Program
    {
        static void Main(string[] args)
        {
            // Name of UWP process, which can be retrieved via Task Manager.
            Process[] processes = Process.GetProcessesByName(args[0]);

            // Get PID of most recent instance.
            // Note the highest PID is arbitrary. Windows may recycle or wrap the PID at any time.
            int highestId = 0;
            foreach (Process detectedProcess in processes)
            {
                if (detectedProcess.Id > highestId)
                    highestId = detectedProcess.Id;
            }

            // Launch VSJITDebugger.exe, which resides in C:\Windows\System32
            ProcessStartInfo startInfo = new ProcessStartInfo("vsjitdebugger.exe", "-p " + highestId);
            startInfo.UseShellExecute = true;

            Process process = new Process();
            process.StartInfo = startInfo;
            process.Start();
        }
    }
}

Example usage of this in conjunction with PLMDebug:

plmdebug /enableDebug 279f7062-ce35-40e8-a69f-cc22c08e0bb8_1.0.0.0_x86__c6sq6kwgxxfcg "\"C:\VSJITLauncher.exe\" Game"

where Game is the process name, and 279f7062-ce35-40e8-a69f-cc22c08e0bb8_1.0.0.0_x86__c6sq6kwgxxfcg is the full package
name of the example UWP app package.
Note that every call to /enableDebug must be later coupled to another PLMDebug call with the /disableDebug
switch. Furthermore, the path to a debugger must be absolute (relative paths are not supported).

Related topics
Deploying and debugging UWP apps
Debugging, testing, and performance
Test with the Microsoft Emulator for Windows 10 Mobile

[ Updated for UWP apps on Windows 10. For Windows 8.x articles, see the archive ]
Simulate real-world interaction with a device and test the features of your app by using the tools included with
Microsoft Emulator for Windows 10 Mobile. The emulator is a desktop application that emulates a mobile device
running Windows 10. It provides a virtualized environment in which you can debug and test Windows apps
without a physical device. It also provides an isolated environment for your application prototypes.
The emulator is designed to provide comparable performance to an actual device. Before you publish your app to
the Windows Store, however, we recommend that you test your app on a physical device.
You can test your universal app using a unique Windows 10 Mobile emulator image for various screen resolution
and screen size configurations. You can simulate real-world interaction with a device and test various features of
your app by using the tools included in the Microsoft Emulator.

System requirements
Your computer must meet the following requirements:
BIOS
Hardware-assisted virtualization.
Second Level Address Translation (SLAT).
Hardware-based Data Execution Prevention (DEP).
RAM
4 GB or more.
Operating system
Windows 8 or higher (Windows 10 strongly recommended)
64-bit
Pro edition or higher
To check the BIOS requirements, see How to enable Hyper-V for the emulator for Windows Phone 8.
To check requirements for RAM and operating system, in Control Panel, click System and Security, and then click
System.
Microsoft Emulator for Windows 10 Mobile requires Visual Studio 2015; it is not backward compatible with earlier
versions of Visual Studio.
Microsoft Emulator for Windows 10 Mobile cannot load apps that target the Windows Phone OS version earlier
than Windows Phone OS 7.1.

Installing and uninstalling


Installing.
Microsoft Emulator for Windows 10 Mobile ships as part of the Windows 10 SDK. The Windows 10 SDK and
emulator can be installed as part of the Visual Studio 2015 install. See the Visual Studio download page.
You can also install the Microsoft Emulator for Windows 10 Mobile using the Microsoft Emulator setup. See
the Windows 10 Tools download page.
Uninstalling.
You can uninstall the Microsoft Emulator for Windows 10 Mobile using Visual Studio setup/repair. Or you
can use Programs and Features under Control Panel to remove the emulator.
When you uninstall the Microsoft Emulator for Windows 10 Mobile, the Hyper-V Virtual Ethernet Adapter
that was created for the emulator to use is not automatically removed. You can manually remove this virtual
adapter from Network Connections in Control Panel.

What's new in Microsoft Emulator for Windows 10 Mobile


In addition to providing support for Universal Windows Platform (UWP), the emulator has added the following
functionality:
Mouse input mode support to differentiate between mouse and single touch input.
NFC Support. The emulator allows you to simulate NFC and makes it possible to test and develop
NFC/Proximity-enabled universal apps.
Native hardware acceleration improves graphics performance in the emulator by using the local graphics card.
You must have a supported graphics card installed, and enable acceleration on the Sensors tab of the
emulator's Additional Tools settings user interface in order to use acceleration.

Features that you can test in the emulator


In addition to the new features mentioned in the previous section, you can test the following commonly used
features in the Microsoft Emulator for Windows 10 Mobile.
Screen resolution, screen size, and memory. Reach a broad market for your app by testing it on various
emulator images to simulate various screen resolutions, physical sizes, and memory constraints.

Screen configuration. Change the emulator from portrait to landscape mode. Change the zoom setting to
fit the emulator to your desktop screen.
Networking. Networking support is integrated with Windows Phone Emulator. Networking is enabled by
default. You do not have to install network drivers for Windows Phone Emulator or configure networking
options manually in most environments.
The emulator uses the network connection of the host computer. It does not appear as a separate device on
the network. This eliminates some of the configuration issues that users encountered with the Windows
Phone SDK 8.0 emulator.
Language and region settings. Prepare your app for an international market by changing the display
language and region settings in Windows Phone Emulator.
On the running emulator, go to the Settings app, then select the system settings, then select language or
region. Change the settings that you want to test. If you're prompted, click restart phone to apply the new
settings and restart the emulator.
Application lifecycle and tombstoning. Test the behavior of your app when it's deactivated or
tombstoned by changing the value of the option Tombstone upon deactivation while debugging on
the Debug page of project properties.
Local folder storage (previously known as isolated storage). Data in isolated storage persists while the
emulator is running, but is lost once the emulator closes.
Microphone. Requires and uses the microphone on the host computer.
Phone keyboard. The emulator supports mapping of the hardware keyboard on your development
computer to the keyboard on a Windows Phone. The behavior of the keys is the same as on a Windows
Phone device.
Lock screen. With the emulator open, press F12 on your computer keyboard twice. The F12 key emulates
the power button on the phone. The first key press turns off the display. The second key press turns the
display on again with the lock screen engaged. Unlock the screen by using the mouse to slide the lock
screen upward.

Features that you can't test in the emulator


Test the following features only on a physical device.
Compass
Gyroscope
Vibration controller
Brightness. The brightness level of the emulator is always High.
High-resolution video. Videos with a resolution higher than VGA resolution (640 x 480) cannot be displayed
reliably, especially on emulator images with only 512MB of memory.

Mouse input
Simulate mouse input using the physical mouse or trackpad on your Windows PC and the mouse input button on
the emulator toolbar. This feature is useful if your app lets the user provide input with a mouse paired to
their Windows 10 device.
Tap the mouse input button on the emulator toolbar to enable mouse input. Any click events within the emulator
chrome will now be sent to the Windows 10 Mobile OS running inside the emulator VM as mouse events.
The emulator screen with the mouse input enabled.

The mouse input button on the emulator toolbar.

Keyboard input
The emulator supports mapping of the hardware keyboard on your development computer to the keyboard on a
Windows Phone. The behavior of the keys is the same as on a Windows Phone device.
By default, the hardware keyboard is not enabled. This implementation is equivalent to a sliding keyboard that
must be deployed before you can use it. Before you enable the hardware keyboard, the emulator accepts key input
only from the control keys.
Special characters on the keyboard of a localized version of a Windows development computer are not supported
by the emulator. To enter special characters that are present on a localized keyboard, use the Software Input Panel
(SIP) instead.
To use your computer's keyboard in the emulator, press F4.
To stop using your computer's keyboard in the emulator, press F4 again.
The following table lists the keys on a hardware keyboard that you can use to emulate the buttons and other
controls on a Windows Phone.
Note that in Emulator Build 10.0.14332 the computer hardware key mapping was changed. Values in the second
column of the table below represent these new keys.

COMPUTER HARDWARE KEYS (EMULATOR BUILD 10.0.14295 AND EARLIER) | COMPUTER HARDWARE KEYS (EMULATOR BUILD 10.0.14332 AND NEWER) | WINDOWS PHONE HARDWARE BUTTON | NOTES
F1 | WIN + ESC | BACK | Long presses work as expected.
F2 | WIN + F2 | START | Long presses work as expected.
F3 | WIN + F3 | SEARCH |
F4 | F4 (no change) | | Toggles between using the local computer's keyboard and not using the local computer's keyboard.
F6 | WIN + F6 | CAMERA HALF | A dedicated camera button that is pressed halfway.
F7 | WIN + F7 | CAMERA FULL | A dedicated camera button.
F9 | WIN + F9 | VOLUME UP |
F10 | WIN + F10 | VOLUME DOWN |
F12 | WIN + F12 | POWER | Press the F12 key twice to enable the lock screen. Long presses work as expected.
ESC | WIN + ESC | BACK | Long presses work as expected.

Near Field Communications (NFC)


Build and test apps that use Near Field Communication (NFC) enabled features on Windows 10 Mobile by using
the NFC tab of the emulator's Additional Tools menu. NFC is useful for a number of scenarios ranging from
Proximity scenarios (such as tap to share) to card emulation (such as tap to pay).
You can test your app by simulating a pair of phones tapping together by using a pair of emulators, or you can test
your app by simulating a tap to a tag. Also in Windows 10, mobile devices are enabled with HCE (Host Card
Emulation) feature and by using the phone emulator you can simulate tapping your device to a payment terminal
for APDU command-response traffic.
The NFC tab supports three modes:
Proximity Mode
HCE (Host Card Emulation) Mode
Smart Card Reader Mode
In all modes, the emulator window has three areas of interest.
The top left section is specific to the mode selected. The features of this section depend on the mode, and are
detailed in the mode-specific sections below.
The top right section lists the logs. When you tap a pair of devices together (or tap to the POS terminal) the tap
event is logged and when the devices are untapped the untap event is logged. This section also records if your
app responded before the connection is broken or any other action you have taken in the emulator UI with time
stamps. Logs are persistent between mode switches, and you can clear the logs at any point by hitting the Clear
button above the Logs screen.
The bottom half of the screen is the message log and shows the transcript of all messages sent or received over
the currently selected connection, depending on the mode selected.

Important When you first launch the tapper tool, you will get a Windows Firewall prompt. You MUST select
ALL 3 check boxes and allow the tool through the firewall, or the tool will silently fail to work.

After launching the quick start installer, make sure you follow the above instruction to select all 3 check boxes on
the firewall prompt. Also, the tapper tool must be installed and used on the same physical host machine as the
Microsoft Emulator.
Proximity mode
To simulate a pair of phones tapping together you'll need to launch a pair of Windows Phone 8 emulators. Since
Visual Studio doesn't support running two identical emulators at the same time, you'll need to select different
resolutions for each of the emulators to work around it.
When you check the Enable discovery of peer devices checkbox, the Peer device dropdown box shows
Microsoft Emulators (running on the same physical host machine or in the local network) as well as the Windows
machines running the simulator driver (running on the same machine or in the local network).
Once both emulators are running:
Select the emulator you would like to target in the Peer device list.
Select the Send to peer device radio button.
Click the Tap button. This simulates the two devices tapping together, and you should hear the NFC tap
notification sound.
To disconnect the two devices, simply hit the Untap button.
Alternatively, you can enable the Automatically untap in (seconds) check box and specify the number of
seconds you want the devices to remain tapped; they will be untapped automatically after the specified number of
seconds (simulating what would be expected of a user in real life, who would only hold the phones together for a
short time). Note, however, that currently the message log isn't available after the connection has been untapped.
To simulate reading messages from a tag or receiving messages from another device:
Select the Send to self radio button to test scenarios that require only one NFC-enabled device.
Click the Tap button. This simulates tapping a device to a tag, and you should hear the NFC tap
notification sound.
To disconnect, simply hit the Untap button.
Using the proximity mode you can inject messages as if they came from a tag or another peer device. The
tool allows you to send messages of the following types.
WindowsURI
WindowsMime
WritableTag
Pairing:Bluetooth
NDEF
NDEF:MIME
NDEF:URI
NDEF:wkt.U
You can either create these messages by editing the Payload window or provide them in a file. For more
information about these types and how to use them, please refer to the Remarks section of
the ProximityDevice.PublishBinaryMessage reference page.
The Windows 8 Driver Kit (WDK) includes a driver sample that exposes the same protocol as the Windows Phone 8
emulator. You'll need to download the WDK, build that sample driver, install it on a Windows 8 device, then add the
Windows 8 device's IP address or hostname to the devices list and tap it either with another Windows 8 device or
with a Windows Phone 8 emulator.
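On the app side, injected messages arrive through the normal proximity API; a minimal sketch (the message type string "Windows.SampleMessage" is a hypothetical app-defined type, and the app must declare the Proximity capability):

using Windows.Networking.Proximity;
using Windows.Security.Cryptography;

// Sketch: subscribe for a message published by a peer (or injected by the NFC tool).
ProximityDevice device = ProximityDevice.GetDefault();
if (device != null)
{
    long subscriptionId = device.SubscribeForMessage("Windows.SampleMessage",
        (sender, message) =>
        {
            System.Diagnostics.Debug.WriteLine("Received: " + message.DataAsString);
        });

    // Publish the same message type to a tapped peer.
    device.PublishBinaryMessage("Windows.SampleMessage",
        CryptographicBuffer.ConvertStringToBinary("Hello", BinaryStringEncoding.Utf8));
}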
Host Card Emulation (HCE) Mode
In Host Card Emulation (HCE) mode you can test your HCE-based card emulation application by writing your own
custom scripts to simulate a smart card reader terminal, such as a Point of Sale (POS) terminal. This tool assumes
that you are familiar with the command response pairs (compliant with ISO-7816-4) that are sent between a
reader terminal (such as POS, badge reader or transit card reader) and the smart card (that you are emulating in
your application).

Create a new script by clicking the Add button in the script editor section. You can provide a name for your
script and after you are done with editing, you can save your script using the Save button.
Your saved scripts will be available the next time you launch the emulator.
Run your scripts by hitting the Play button in the scripts editor window. This action simulates tapping your
phone to the terminal and sends the commands written in your script. Alternatively, you can hit the
Tap button and then the Play button; the script will not run until you hit Play.
Stop sending commands by hitting the Stop button, which stops sending the commands to your application, but
the devices remain tapped until you hit the Untap button.
Delete your scripts by selecting the script in the dropdown menu and hitting the Delete button.
The emulator tool does not check the syntax of your scripts until you run the script using the Play button.
The messages sent by your script are dependent on your implementation of your card emulation app.
You can also use the terminal simulator tool from MasterCard (https://www.terminalsimulator.com/) for payments
app testing.
Check the Enable MasterCard listener checkbox below the script editor windows and launch the simulator
from MasterCard.
Using the tool, you can generate commands that are relayed to your application running on the emulator
through the NFC tool.
To learn more about HCE support and how to develop HCE apps in Windows 10 Mobile, please refer to the
Microsoft NFC Team Blog.
How to Create Scripts for HCE Testing
The scripts are written as C# code, and your script's Run method is called when you click the Play button. This
method takes an IScriptProcessor interface, which is used to send and receive APDU commands, write output to the
log window, and control the timeout for waiting on an APDU response from the phone.
Below is a reference on what functionality is available:

public interface IScriptProcessor
{
    // Sends an APDU command given as a hex-encoded string, and returns the APDU response
    string Send(string s);

    // Sends an APDU command given as a byte array, and returns the APDU response
    byte[] Send(byte[] s);

    // Logs a string to the log window
    void Log(string s);

    // Logs a byte array to the log window
    void Log(byte[] s);

    // Sets the amount of time the Send functions will wait for an APDU response, after which
    // the function will fail
    void SetResponseTimeout(double seconds);
}
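As an illustration only (the exact script shape the tool expects is not documented here), a Run method might exercise a payment card emulation app like this; the SELECT PPSE APDU shown is a standard ISO 7816-4 command:

public void Run(IScriptProcessor processor)
{
    // Allow up to five seconds for the emulated card to respond.
    processor.SetResponseTimeout(5.0);

    // SELECT PPSE ("2PAY.SYS.DDF01"), typically the first command a payment terminal sends.
    string response = processor.Send("00A404000E325041592E5359532E444446303100");
    processor.Log("SELECT PPSE response: " + response);
}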

Smart Card Reader Mode


The emulator can be connected to a smart card reader device on your host computer, so that smart cards that are
inserted or tapped show up to your phone application, which can communicate with them via APDUs by using the
Windows.Devices.SmartCards.SmartCardConnection class. For this to work, you will need a compatible smart
card reader device attached to your computer; USB smart card readers (both NFC/contactless and insert/contact)
are widely available. To enable the emulator to work with an attached smart card reader, first choose the Card
Reader mode, which shows a dropdown box listing all the compatible smart card readers attached to the
host system, then choose the smart card reader device you'd like to be connected from the dropdown.
Note that not all NFC-capable smart card readers support some types of NFC cards, and some do not support the
standard PC/SC storage card APDU commands.
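A minimal sketch of the app side, assuming the first attached reader and card, inside an async method and with error handling omitted:

using System.Linq;
using Windows.Devices.Enumeration;
using Windows.Devices.SmartCards;
using Windows.Security.Cryptography;

// Sketch: find a reader, connect to the first card, and send a SELECT APDU.
string selector = SmartCardReader.GetDeviceSelector();
var readerDevices = await DeviceInformation.FindAllAsync(selector);
SmartCardReader reader = await SmartCardReader.FromIdAsync(readerDevices.First().Id);

var cards = await reader.FindAllCardsAsync();
using (SmartCardConnection connection = await cards.First().ConnectAsync())
{
    var command = CryptographicBuffer.DecodeFromHexString("00A4040000");
    var response = await connection.TransmitAsync(command);
}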

Multi-point input
Simulate multi-touch input for pinching and zooming, rotating, and panning objects by using the Multi-touch
Input button on the emulator toolbar. This feature is useful if your app displays photos, maps, or other visual
elements that users can pinch and zoom, rotate, or pan.
1. Tap the Multi-touch Input button on the emulator toolbar to enable multi-point input. Two touch points
appear on the emulator screen around a center point.
2. Right-click and drag one of the touch points to position them without touching the screen.
3. Left-click and drag one of the touch points to simulate pinching and zooming, rotating, or panning.
4. Tap the Single Point Input button on the emulator toolbar to restore normal input.
The following screenshot shows multi-touch input.
1. The small left image shows the Multi-touch Input button on the emulator toolbar.
2. The middle image shows the emulator screen after tapping the Multi-touch Input button to display the touch
points.
3. The right image shows the emulator screen after dragging the touch points to zoom the image.

Accelerometer
Test apps that track the movement of the phone by using the Accelerometer tab of the emulator's Additional
Tools.
You can test the accelerometer sensor with live input or pre-recorded input. The only type of recorded data that's
available simulates shaking the phone. You can't record or save your own simulations for the accelerometer.
1. Select the desired starting orientation in the Orientation drop-down list.
2. Select the type of input.
To run the simulation with live input
In the middle of the accelerometer simulator, drag the colored dot to simulate movement of the
device in a 3D plane.
Moving the dot on the horizontal axis rotates the simulator from side to side. Moving the dot on
the vertical axis rotates the simulator back and forth, rotating around the x-axis. As you drag the
dot, the X, Y, and Z coordinates update based on the rotation calculations. You cannot move the dot
outside the bounding circle in the touch pad area.
Optionally, click Reset to restore the starting orientation.
To run the simulation with recorded input
In the Recorded Data section, click the Play button to start playback of the simulated data. The only
option available in the Recorded Data list is shake. The simulator does not move on the screen when
it plays back the data.
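On the app side, the simulated readings arrive through the standard sensor API; a minimal sketch:

using Windows.Devices.Sensors;

// Sketch: simulated accelerometer data reaches the app like data from a physical sensor.
Accelerometer accelerometer = Accelerometer.GetDefault();
if (accelerometer != null)
{
    accelerometer.ReadingChanged += (s, e) =>
    {
        AccelerometerReading reading = e.Reading;
        System.Diagnostics.Debug.WriteLine(
            $"X: {reading.AccelerationX:F2}, Y: {reading.AccelerationY:F2}, Z: {reading.AccelerationZ:F2}");
    };
}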

Location and driving


Test apps that use navigation or geofencing by using the Location tab of the emulator's Additional Tools. This
feature is useful for simulating driving, biking, or walking in conditions similar to the real world.
You can test your app while you simulate moving from one location to another at different speeds and with
different accuracy profiles. The location simulator can help you to identify changes in your usage of the location
APIs that improve the user experience. For example, the tool can help you identify that you have to tune
geofence parameters, such as size or dwell time, to detect the geofences successfully in different scenarios.
The Location tab supports three modes. In all modes, when the emulator receives a new position, that position is
available to trigger the PositionChanged event or to respond to a GetGeopositionAsync call in your location-
aware app (see the sketch after this list).
In Pin mode, you place pushpins on the map. When you click Play all points, the location simulator sends
the location of each pin to the emulator one after another, at the interval specified in the Seconds per pin
text box.
In Live mode, you place pushpins on the map. The location simulator sends the location of each pin to the
emulator immediately as you place them on the map.
In Route mode, you place pushpins on the map to indicate waypoints, and the location simulator
automatically calculates a route. The route includes invisible pins at one-second intervals along the route.
For example, if you have selected the Walking speed profile, which assumes a speed of 5 kilometers per hour,
then invisible pins are generated at intervals of 1.39 meters. When you click Play all points, the location
simulator sends the location of each pin to the emulator one after another, at the interval determined by the
speed profile selected in the drop-down list.
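A minimal sketch of the app side, assuming the standard geolocation API (inside an async method, with the Location capability declared in the manifest):

using Windows.Devices.Geolocation;

// Sketch: positions fed by the location simulator arrive through the same
// Geolocator APIs an app uses on a physical device.
var locator = new Geolocator { DesiredAccuracy = PositionAccuracy.Default };
locator.PositionChanged += (s, e) =>
{
    BasicGeoposition p = e.Position.Coordinate.Point.Position;
    System.Diagnostics.Debug.WriteLine($"Lat {p.Latitude}, Lon {p.Longitude}");
};
Geoposition current = await locator.GetGeopositionAsync();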
In all modes of the location simulator, you can do the following things.
You can search for a location by using the Search box.
You can Zoom in and Zoom out on the map.
You can save the current set of data points to an XML file, and reload the file later to reuse the same data
points.
You can Toggle pushpin mode on or off and Clear all points.
In Pin and Route mode, you can also do the following things.
Save a route you created for later use.
Load a route previously created. You can even load route files created in previous versions of the tool.
Modify a route by deleting pushpins (in Pin mode) or waypoints (in Route mode).
Accuracy profiles
In all modes of the location simulator, you can select one of the following accuracy profiles in the Accuracy profile
drop-down list.

PROFILE | DESCRIPTION
Pinpoint | Assumes perfectly accurate location readings. This setting is not realistic, but it's useful for testing the logic of your app.
Urban | Assumes that buildings are restricting the number of satellites in view, but there is often a high density of cell towers and Wi-Fi access points that can be used for positioning.
Suburban | Assumes that satellite positioning is relatively good and there is good density of cell towers, but the density of Wi-Fi access points is not high.
Rural | Assumes that satellite positioning is good, but there is low density of cell towers and almost no Wi-Fi access points that can be used for positioning.

Speed profiles
In Route mode, you can select one of the following speed profiles in the drop-down list.

PROFILE | SPEED PER HOUR | SPEED PER SECOND | DESCRIPTION
Speed Limit | Speed limit of the route | Not applicable | Traverse the route at the posted speed limit.
Walking | 5 km/h | 1.39 m | Traverse the route at a natural walking pace of 5 km/h.
Biking | 25 km/h | 6.94 m | Traverse the route at a natural biking pace of 25 km/h.
Fast | | | Traverse the route faster than the posted speed limit.
Route mode
Route mode has the following features and limitations.
Route mode requires an Internet connection.
When the Urban, Suburban, or Rural accuracy profile is selected, the location simulator calculates a
simulated satellite-based position, a simulated Wi-Fi position, and a simulated cellular position for each pin.
Your app receives only one of these positions. The three sets of coordinates for the current location are
displayed in different colors on the map and in the Current location list.
The accuracy of the pins along the route is not uniform. Some of the pins use satellite accuracy, some
use Wi-Fi accuracy, and some use cellular accuracy.
You cannot select more than 20 waypoints for the route.
Positions for the visible and invisible pins on the map are generated only once when you select a new
accuracy profile. When you play the route more than once with the same accuracy profile during the same
emulator session, the previously generated positions are reused.
The following screenshot shows Route mode. The orange line indicates the route. The blue dot indicates the
accurate location of the car determined by satellite-based positioning. The red and green dots indicate less accurate
locations calculated by using Wi-Fi and cellular positioning and the Suburban accuracy profile. The three calculated
locations are also displayed in the Current location list.

More info about the location simulator


You can request a position with the accuracy set to Default. A limitation that existed in the Windows Phone 8
version of the location simulator, and required you to request a position with the accuracy set to High, has
been fixed.
When you test geofencing in the emulator, create a simulation that gives the geofencing engine a warm-
up period to learn and adjust to the movement patterns.
The only position properties that are simulated are the Latitude, Longitude, Accuracy, and PositionSource.
The location simulator does not simulate other properties such as Speed, Heading, and so forth.
Network
Test your app with different network speeds and different signal strengths by using the Network tab of the
emulator's Additional Tools. This feature is useful if your app calls web services or transfers data.
The network simulation feature helps you to make sure that your app runs well in the real world. The Windows
Phone Emulator runs on a computer that usually has a fast WiFi or Ethernet connection. Your app, however, runs
on phones that are typically connected over a slower cellular connection.
1. Check Enable network simulation to test your app with different network speeds and different signal
strengths.
2. In the Network speed dropdown list, select one of the following options:
No network
2G
3G
4G
3. In the Signal strength dropdown list, select one of the following options:
Good
Average
Poor
4. Clear Enable network simulation to restore the default behavior, which uses the network settings of your
development computer.
You can also review the current network settings on the Network tab.

SD card
Test your app with a simulated removable SD card by using the SD Card tab of the emulator's Additional Tools.
This feature is useful if your app reads or writes files.
The SD Card tab uses a folder on the development computer to simulate a removable SD card in the phone.
1. Select a folder.
Click Browse to pick a folder on the development computer to hold the contents of the simulated SD card.
2. Insert the SD card.
After selecting a folder, click Insert SD card. When you insert the SD card, the following things happen:
If you didn't specify a folder, or the folder's not valid, an error occurs.
The files in the specified folder on the development computer are copied to the root folder of the
simulated SD card on the emulator. A progress bar indicates the progress of the sync operation.
The Insert the SD card button changes to Eject SD card.
If you click Eject SD card while the sync operation is in progress, the operation is canceled.
3. Optionally, select or clear Sync updated files back to the local folder when I eject the SD card.
This option is enabled by default. When this option is enabled, files are synced from the emulator back to the
folder on the development computer when you eject the SD card.
4. Eject the SD card.
Click Eject SD card. When you eject the SD card, the following things happen:
if you have selected Sync updated files back to the local folder when I eject the SD card, the
following things happen:
The files on the simulated SD card on the emulator are copied to the specified folder on the
development computer. A progress bar indicates the progress of the sync operation.
The Eject SD card button changes to Cancel sync.
If you click Cancel sync while the sync operation is in progress, the card is ejected and the results
of the sync operation are incomplete.
The Eject SD card button changes back to Insert SD card.

Note Since an SD card used by the phone is formatted with the FAT32 file system, 32GB is the maximum size.

The speed of reading from and writing to the simulated SD card is throttled to imitate real-world speeds. Accessing
an SD card is slower than accessing the computer's hard drive.
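On the app side, the simulated card appears through the regular removable-storage APIs; a minimal sketch (inside an async method, and assuming the app declares the Removable Storage capability and the relevant file-type associations in its manifest):

using System.Linq;
using Windows.Storage;

// Sketch: write a file to the root of the first removable storage device,
// which is the simulated SD card when running in the emulator.
var devices = await KnownFolders.RemovableDevices.GetFoldersAsync();
StorageFolder sdCard = devices.FirstOrDefault();
if (sdCard != null)
{
    StorageFile file = await sdCard.CreateFileAsync("test.txt", CreationCollisionOption.ReplaceExisting);
    await FileIO.WriteTextAsync(file, "Hello from the simulated SD card");
}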

Notifications
Send push notifications to your app by using the Notifications tab of the emulator's Additional Tools. This
feature is useful if your app receives push notifications.
You can easily test push notifications without creating the working cloud service that's required after you publish
your app.
1. Enable simulation.
After you select Enabled, all apps deployed on the emulator use the simulation engine instead of the WNS
or MPN service until you disable simulation.
2. Select an app to receive notifications.
The AppId list is automatically populated with all apps deployed to the emulator that are enabled for push
notifications. Select an app in the drop-down list.
If you deploy another push-enabled app after enabling simulation, click Refresh to add the app to the list.
3. Select a notification channel.
After you select an app in the AppId list, the URI list is automatically populated with all the notification
channels registered for the selected app. Select a notification channel in the drop-down list.
4. Select a notification type.
After you select a notification channel in the URI list, the Notification Type list is automatically populated
with all the types available for the notification service. Select a notification type in the drop-down list.
The simulator uses the Uri format of the notification channel to determine whether the app is using WNS or
MPN push notifications.
Simulation supports all notification types. The default notification type is Tile.
The following WNS notification types are supported.
Raw
Toast
When your app uses WNS notifications and you select the Toast notification type, the
simulation tab displays the Tag and Group fields. You can select these options and enter Tag
and Group values to manage toast notifications in the Notification Center.
Tile
Badge
The following MPN notification types are supported.
Raw
Toast
Tile
5. Select a notification template.
After you select a notification type in the Notification Type list, the Templates list is automatically
populated with all the templates available for the notification type. Select a template in the drop-down list.
Simulation supports all template types.
6. Optionally, change the notification payload.
After you select a template in the Templates list, the Notification Payload text box is automatically
populated with a sample payload for the template. Review the sample payload in the Notification Payload
text box.
You can send the sample payload without changing it.
You can edit the sample payload in the text box.
You can click Load to load a payload from a text or XML file.
You can click Save to save the XML text of the payload to use again later.
The simulator does not validate the XML text of the payload. A sample toast payload is shown after these steps.
7. Send the push notification.
Click Send to deliver the push notification to the selected app.
The screen displays a message to indicate success or failure.
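
For reference, here is a minimal example of the kind of XML that appears in the Notification Payload
text box for a WNS toast. It uses the ToastText01 template; the text value is an arbitrary placeholder.

<toast>
  <visual>
    <binding template="ToastText01">
      <text id="1">Hello from the simulator</text>
    </binding>
  </visual>
</toast>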

Sensors
Test how your app works on low-cost phones that don't have all the optional sensors or camera features by using
the Sensors tab of the emulator's Additional Tools. This feature is useful if your app uses the camera or some of
the phone's sensors, and you want your app to reach the largest possible market.
By default, all sensors are enabled in the Optional sensors list. Select or clear individual check boxes to enable
or disable individual sensors.
After you change your selections, click Apply, and then restart the emulator for the changes to take effect.
If you make changes, and then you switch tabs or close the Additional Tools window without clicking Apply,
your changes are discarded.
Your settings are persisted between emulator sessions until you change them or reset them. If you
capture a checkpoint, the settings are saved with the checkpoint. The settings are persisted only for the specific
emulator that you're using, for example, Emulator 8.1 WVGA 4" 512MB.
Sensor options
You can enable or disable the following optional hardware sensors:
Ambient light sensor
Front-facing camera
Gyroscope
Compass (magnetometer)
NFC
Software buttons (only on some high-resolution emulator images)
Camera options
You can enable or disable the optional front-facing camera by selecting or clearing the check box in the Optional
sensors list.
You can also select one of the following camera profiles in the Camera dropdown list.
Windows Phone 8.0 camera.
Windows Phone 8.1 camera.
Here is the list of camera features supported by each of the profiles.

FEATURE              WINDOWS PHONE 8.0 CAMERA    WINDOWS PHONE 8.1 CAMERA

Resolution           640 x 480 (VGA)             640 x 480 (VGA) or better
Autofocus            Yes                         Yes
Flash                No                          Yes
Zoom                 2x (digital or optical)     2x (digital or optical)
Video resolution     640 x 480 (VGA)             640 x 480 (VGA) or better
Preview resolution   640 x 480 (VGA)             640 x 480 (VGA)

Frame rate counters


Use the frame rate counters in Windows Phone emulator to monitor the performance of your running app.
Descriptions of the frame rate counters
The following table describes each frame rate counter.

FRAME RATE COUNTER                              DESCRIPTION

Composition (Render) Thread Frame Rate (FPS)    The rate at which the screen is updated.
User Interface Thread Frame Rate (FPS)          The rate at which the UI thread is running.
Texture Memory Usage                            The video memory and system memory copies of textures being used in the app.
Surface Counter                                 The number of explicit surfaces being passed to the GPU for processing.
Intermediate Surface Counter                    The number of implicit surfaces generated as a result of cached surfaces.
Screen Fill Rate Counter                        The number of pixels being painted per frame in terms of screens. A value of 1 represents the number of pixels in the current screen resolution, for example, 480 x 800 pixels.

Enabling and disabling the frame rate counters


You can enable or disable the display of the frame rate counters in your code. When you create a Windows Phone
app project in Visual Studio, the following code to enable the frame rate counters is added by default in the file
App.xaml.cs. To disable the frame rate counters, set EnableFrameRateCounter to false or comment out the line
of code.
// Show graphics profiling information while debugging.
if (System.Diagnostics.Debugger.IsAttached)
{
// Display the current frame rate counters.
Application.Current.Host.Settings.EnableFrameRateCounter = true;

// other code
}

' Show graphics profiling information while debugging.
If System.Diagnostics.Debugger.IsAttached Then

    ' Display the current frame rate counters.
    Application.Current.Host.Settings.EnableFrameRateCounter = True

    ' other code...

End If

Known Issues
The following are known issues with the emulator, with suggested ways to work around problems if you encounter
them.
Error message: Failed while removing virtual Ethernet switch
In certain situations, including after you update to a new Windows 10 flight, a virtual network switch associated
with the emulator can get into a state where it can't be deleted through the user interface.
To recover from this situation, run netcfg -d from an administrator command prompt. When the command is
finished running, reboot your computer to complete the recovery process.
Note This command will delete all networking devices, not just those associated with the emulator. When your
computer starts again, all hardware networking devices will be discovered automatically.
Unable to launch the emulators
Microsoft Emulator includes XDECleanup.exe, a tool that deletes all VMs, diff disks, and emulator-specific network
switches; it ships with the emulator (XDE) binaries. You should use this tool to clean up emulator VMs
if they get into a bad state. Run the tool from an administrator command prompt:
C:\Program Files (x86)\Microsoft XDE\<version>\XdeCleanup.exe

Note XDECleanup.exe deletes all emulator-specific Hyper-V VMs, and it also deletes any VM checkpoints or
saved states.

Uninstall Windows 10 for Mobile Image


When you install the emulator, a Windows 10 for Mobile VHD image is installed, which gets its own entry in the
Programs and Features list in the Control Panel. If you wish to uninstall the image, find Windows 10 for Mobile
Image in the list of installed programs, right-click on it, and choose Uninstall.
In the current release, you must then manually delete the VHD file for the emulator. If you installed the emulator to
the default path, the VHD file is at C:\Program Files (x86)\Windows Kits\10\Emulation\Mobile\\flash.vhd.
How to disable hardware accelerated graphics
By default, Windows 10 Mobile Emulator uses hardware accelerated graphics. If you are having trouble launching
the emulator with hardware acceleration enabled, you can turn it off by setting a registry value.
To disable hardware acceleration:
1. Start Registry Editor.
2. Create the following registry subkey if it doesn't exist:
HKEY_LOCAL_MACHINE\SOFTWARE\Wow6432Node\Microsoft\Xde\10.0
3. Right-click the 10.0 folder, point to New, and then click DWORD Value.
4. Type DisableRemoteFx, and then press Enter.
5. Double-click DisableRemoteFx, enter 1 in the Value data box, select the Decimal option, and then click OK.
6. Close Registry Editor.
Note: After setting this registry value, you must delete the virtual machine in Hyper-V manager for the
configuration that you launched in Visual Studio, and then relaunch the emulator with software-rendered graphics.
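
If you prefer to script this change, the same value can be set with a few lines of C#. This is a
minimal sketch using the Microsoft.Win32 registry API; it must run elevated, and it writes the same
key and value described in the steps above.

using Microsoft.Win32;

class DisableRemoteFx
{
    static void Main()
    {
        // Creates the key if it doesn't exist, then writes DisableRemoteFx = 1 (DWORD).
        using (RegistryKey key = Registry.LocalMachine.CreateSubKey(
            @"SOFTWARE\Wow6432Node\Microsoft\Xde\10.0"))
        {
            key.SetValue("DisableRemoteFx", 1, RegistryValueKind.DWord);
        }
    }
}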

Support Resources
To find answers and solve problems as you start working with the Windows 10 tools, please visit Windows 10
Tools forum. To see all the forums for Windows 10 development, visit this link.

Related topics
Run Windows Phone apps in the emulator
Windows and Windows Phone SDK archive
Test Surface Hub apps using Visual Studio

The Visual Studio simulator provides an environment where you can design, develop, debug, and test Universal
Windows Platform (UWP) apps, including apps that you have built for Microsoft Surface Hub. The simulator does
not use the same user interface as Surface Hub, but it is useful for testing how your app looks and behaves at the
Surface Hub's screen size and resolution.
For more information, see Run Windows Store apps in the simulator.

Add Surface Hub resolutions to the simulator


To add Surface Hub resolutions to the simulator:
1. Create a configuration for the 55" Surface Hub by saving the following XML into a file named
HardwareConfigurations-SurfaceHub55.xml.

<?xml version="1.0" encoding="UTF-8"?>
<ArrayOfHardwareConfiguration xmlns:xsd="http://www.w3.org/2001/XMLSchema"
                              xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
  <HardwareConfiguration>
    <Name>SurfaceHub55</Name>
    <DisplayName>Surface Hub 55"</DisplayName>
    <Resolution>
      <Height>1080</Height>
      <Width>1920</Width>
    </Resolution>
    <DeviceSize>55</DeviceSize>
    <DeviceScaleFactor>100</DeviceScaleFactor>
  </HardwareConfiguration>
</ArrayOfHardwareConfiguration>

2. Create a configuration for the 84" Surface Hub by saving the following XML into a file named
HardwareConfigurations-SurfaceHub84.xml.

<?xml version="1.0" encoding="UTF-8"?>
<ArrayOfHardwareConfiguration xmlns:xsd="http://www.w3.org/2001/XMLSchema"
                              xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
  <HardwareConfiguration>
    <Name>SurfaceHub84</Name>
    <DisplayName>Surface Hub 84"</DisplayName>
    <Resolution>
      <Height>2160</Height>
      <Width>3840</Width>
    </Resolution>
    <DeviceSize>84</DeviceSize>
    <DeviceScaleFactor>150</DeviceScaleFactor>
  </HardwareConfiguration>
</ArrayOfHardwareConfiguration>

3. Copy the two XML files into C:\Program Files (x86)\Common Files\Microsoft Shared\Windows
Simulator\<version number>\HardwareConfigurations.

Note Administrative privileges are required to save files into this folder.
4. Run your app in the Visual Studio simulator. Click the Change Resolution button on the palette and select
a Surface Hub configuration from the list.

Tip Turn on Tablet mode to better simulate the experience on a Surface Hub.

Deploy apps to a Surface Hub from Visual Studio


Manually deploying an app is a simple process.
Enable developer mode
By default, Surface Hub only installs apps from the Windows Store. To install apps signed by other sources, you
must enable developer mode.

Note After developer mode has been enabled, you will need to reset the Surface Hub to disable it again.
Resetting the device removes all local user files and configurations and then reinstalls Windows.

1. From the Surface Hub's Start menu, open the Settings app.

Note Administrative privileges are required to access the Settings app.

2. Navigate to Update & security > For developers.


3. Choose Developer mode and accept the warning prompt.
Deploy your app from Visual Studio
For more information, see Deploying and debugging Universal Windows Platform (UWP) apps.

Note This feature requires at least Visual Studio 2015 Update 1.

1. Navigate to the debug target dropdown next to the Start Debugging button and select Remote Machine.
2. Enter the Surface Hub's IP address. Ensure that the Universal authentication mode is selected.

Tip After you have enabled developer mode, you can find the Surface Hub's IP address on the welcome
screen.

3. Choose Start Debugging (F5) to deploy and debug your app on the Surface Hub, or press Ctrl+F5 to just
deploy your app.

Tip If the Surface Hub is on the welcome screen, dismiss it by choosing any button.
Beta testing

[ Updated for UWP apps on Windows 10. For Windows 8.x articles, see the archive ]
Beta testing gives you the chance to improve your app based on feedback from individuals outside of your app-
development team who try your unreleased app on their own devices.
This section describes your options for beta testing Universal Windows apps.

Why beta test?


To thoroughly test an app, you need to try it against as many device configurations and user interactions as
possible. Doing all of that testing in-house is difficult, if not impossible.
With beta testing, users try your app on their own devices. And it's unmoderated: instead of performing specified
tasks, users have complete freedom in how they use an app, so they can find issues that you might never have
expected.
With beta testing, you can:
Test your app on a variety of devices.
Identify performance issues and other bugs that you might not have found otherwise.
Get real-world usage info that can be used to improve the user experience.
Receive feedback without affecting public ratings in the Windows Store.

When to beta test


It's best to conduct beta testing as the final stage of testing before you release your app. At that point, you have
tested the app as thoroughly as you can yourself, and you've covered all explicit use cases. Beta testing is not a
substitute for other testing methods. Since beta testing is unmoderated, participants may not catch all bugs in your
code because every tester's experience is self-directed and it's unlikely that they'll explore all features of the app.
But beta-testing feedback can give you a final wave of real-world feedback that reveals issues that you might never
have expected before you go live.

Next steps
In the unified Windows Dev Center dashboard, you can limit distribution of your apps to only your testers,
regardless of which operating systems your app targets. There's no need to create a separate version of your app
with a separate name and package identity; you can do your testing, then create a new submission when you're
ready to make the app available to everyone. (Of course, you can create a separate app for testing only if you
prefer. If you do, make sure to give it a different name from what you intend as the final, public app name.)
See Beta testing and targeted distribution to learn how to submit your app to the Store for beta testing.
Windows Device Portal overview

The Windows Device Portal lets you configure and manage your device remotely over a network or USB
connection. It also provides advanced diagnostic tools to help you troubleshoot and view the real time
performance of your Windows device.
The Device Portal is a web server on your device that you can connect to from a web browser on your PC. If your
device has a web browser, you can also connect locally with the browser on your device.
Windows Device Portal is available on each device family, but features and setup vary based on the device's
requirements. This article provides a general description of Device Portal and links to articles with more specific
information for each device family.
Everything in the Windows Device Portal is built on top of REST APIs that you can use to access the data and
control your device programmatically.

Setup
Each device has specific instructions for connecting to Device Portal, but each requires these general steps:
1. Enable Developer Mode and Device Portal on your device.
2. Connect your device and PC via local network or USB.
3. Navigate to the Device Portal page in your browser. This table shows the ports and protocols used by each
device family.

DEVICE FAMILY   ON BY DEFAULT?           HTTP           HTTPS               USB

HoloLens        Yes, in Dev Mode         80 (default)   443 (default)       http://127.0.0.1:10080
IoT             Yes, in Dev Mode         8080           Enable via regkey   N/A
Xbox            Enable inside Dev Mode   Disabled       11443               N/A
Desktop         Enable inside Dev Mode   50080*         50043*              N/A
Phone           Enable inside Dev Mode   80             443                 http://127.0.0.1:10080

* This is not always the case, as Device Portal on desktop claims ports in the ephemeral range (>50,000) to
prevent collisions with existing port claims on the device. To learn more, see the Port Settings section for desktop.
For device-specific setup instructions, see:
Device Portal for HoloLens
Device Portal for IoT
Device Portal for Mobile
Device Portal for Xbox
Device Portal for Desktop
Features
Toolbar and navigation
The toolbar at the top of the page provides access to commonly used status and features.
Shutdown: Turns off the device.
Restart: Cycles power on the device.
Help: Opens the help page.
Use the links in the navigation pane along the left side of the page to navigate to the available management and
monitoring tools for your device.
Tools that are common across devices are described here. Other options might be available depending on the
device. For more info, see the specific page for your device.
Home
Your Device Portal session starts at the home page. The home page typically has information about the device,
such as name and OS version, and preferences that you can set for the device.
Apps
Provides install/uninstall and management functionality for AppX packages and bundles on your device.

Installed apps: Remove and start apps.


Running apps: Lists apps that are running currently.
Install app: Select app packages for installation from a folder on your computer or network.
Dependency: Add dependencies for the app you are going to install.
Deploy: Deploy the selected app and dependencies to your device.
To install an app
1. Create an app package by building your app in Visual Studio; the generated output folder contains the
package that you can remotely install onto your device.

2. Click browse and find your app package (.appx).


3. Click browse and find the certificate file (.cer). (Not required on all devices.)
4. Add dependencies. If you have more than one, add each one individually.
5. Under Deploy, click Go.
6. To install another app, click the Reset button to clear the fields.
To uninstall an app
1. Ensure that your app is not running. If it is, go to Running apps and close it. Attempting to uninstall
while the app is running can cause issues when you try to reinstall the app.
2. Once you're ready, click Uninstall.
Processes
Shows details about currently running processes. This includes both apps and system processes.
Much like the Task Manager on your PC, this page lets you see which processes are currently running as well as
their memory usage. On some platforms (Desktop, IoT, and HoloLens) you can terminate processes.
Performance
Shows real-time graphs of system diagnostic info, like power usage, frame rate, and CPU load.
These are the available metrics:
CPU: Percent of total available
Memory: Total, in use, available, committed, paged, and non-paged
GPU: GPU engine utilization, percent of total available
I/O: Reads and writes
Network: Received and sent
Event Tracing for Windows (ETW)
Manages realtime Event Tracing for Windows (ETW) on the device.

Check Hide providers to show the Events list only.


Registered providers: Select the ETW provider and the tracing level. Tracing level is one of these values:
1. Abnormal exit or termination
2. Severe errors
3. Warnings
4. Non-error warnings
5. Detailed trace (*)
Click or tap Enable to start tracing. The provider is added to the Enabled Providers dropdown.
Custom providers: Select a custom ETW provider and the tracing level. Identify the provider by its GUID. Don't
include brackets in the GUID.
Enabled providers: Lists the enabled providers. Select a provider from the dropdown and click or tap Disable
to stop tracing. Click or tap Stop all to suspend all tracing.
Providers history: Shows the ETW providers that were enabled during the current session. Click or tap Enable
to activate a provider that was disabled. Click or tap Clear to clear the history.
Events: Lists ETW events from the selected providers in table format. This table is updated in real time. Beneath
the table, click the Clear button to delete all ETW events from the table. This does not disable any providers.
You can click Save to file to export the currently collected ETW events to a CSV file locally.
For more details on using ETW tracing, see the blog post about using it to collect real-time logs from your app.
Performance tracing
Capture Windows Performance Recorder (WPR) traces from your device.

Available profiles: Select the WPR profile from the dropdown, and click or tap Start to start tracing.
Custom profiles: Click or tap Browse to choose a WPR profile from your PC. Click or tap Upload and start to
start tracing.
To stop the trace, click Stop. Stay on this page until the trace file (.ETL) has completed downloading.
Captured ETL files can be opened for analysis in Windows Performance Analyzer.
Devices
Enumerates all peripherals attached to your device.

Networking
Manages network connections on the device. Unless you are connected to Device Portal via USB, changing these
settings will likely disconnect you from Device Portal.
Profiles: Select a different WiFi profile to use.
Available networks: The WiFi networks available to the device. Clicking or tapping on a network will allow
you to connect to it and supply a passkey if needed. Note: Device Portal does not yet support Enterprise
Authentication.
App File Explorer
Allows you to view and manipulate files stored by your sideloaded apps. This is a new, cross-platform version of
the Isolated Storage Explorer from Windows Phone 8.1. See this blog post to learn more about the App File
Explorer and how to use it.

Service Features and Notes


DNS-SD
Device Portal advertises its presence on the local network using DNS-SD. All Device Portal instances, regardless of
their device type, advertise under "WDP._wdp._tcp.local". The TXT records for the service instance provide the
following:

KEY   TYPE                                       DESCRIPTION

S     int                                        Secure port for Device Portal. If 0 (zero), Device Portal is not listening for HTTPS connections.
D     string                                     Type of device. This will be in the format "Windows.*", e.g. Windows.Xbox or Windows.Desktop.
A     string                                     Device architecture. This will be ARM, x86, or AMD64.
T     null-character delimited list of strings   User-applied tags for the device. See the Tags REST API for how to use this. List is double-null terminated.

Connecting on the HTTPS port is suggested, as not all devices are listening on the HTTP port advertised by the
DNS-SD record.
CSRF Protection and Scripting
In order to protect against CSRF attacks, a unique token is required on all non-GET requests. This token, the
X-CSRF-Token request header, is derived from a session cookie, CSRF-Token. In the Device Portal web UI, the
CSRF-Token cookie is copied into the X-CSRF-Token header on each request.
Important This protection prevents use of the REST APIs from a standalone client (e.g. command-line utilities).
This can be solved in 3 ways:
1. Use of the "auto-" username. Clients that prepend "auto-" to their username will bypass CSRF protection. It
is important that this username not be used to log in to Device Portal through the browser, as it will open
up the service to CSRF attacks. Example: If Device Portal's username is "admin",
curl -u auto-admin:password <args> should be used to bypass CSRF protection.

2. Implement the cookie-to-header scheme in the client. This requires a GET request to establish the session
cookie, and then the inclusion of both the header and the cookie on all subsequent requests.
3. Disable authentication and use HTTP. CSRF protection only applies to HTTPS endpoints, so connections on
HTTP endpoints will not need to do either of the above.
Note: a username that begins with "auto-" will not be able to log into Device Portal via the browser.
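
As an illustration of option 2, the following C# sketch implements the cookie-to-header scheme. It
assumes Device Portal is reachable at http://127.0.0.1:50080 with authentication disabled; with
authentication enabled you would also supply credentials.

using System;
using System.Net;
using System.Net.Http;
using System.Threading.Tasks;

class CsrfExample
{
    static async Task Main()
    {
        var baseUri = new Uri("http://127.0.0.1:50080");
        var cookies = new CookieContainer();
        var handler = new HttpClientHandler { CookieContainer = cookies };
        using (var client = new HttpClient(handler) { BaseAddress = baseUri })
        {
            // 1. Any GET request establishes the CSRF-Token session cookie.
            await client.GetAsync("/api/os/machinename");

            // 2. Copy the cookie value into the X-CSRF-Token header.
            string token = cookies.GetCookies(baseUri)["CSRF-Token"]?.Value;
            client.DefaultRequestHeaders.Add("X-CSRF-Token", token);

            // 3. Non-GET requests now pass CSRF validation.
            HttpResponseMessage response =
                await client.PostAsync("/api/control/restart", null);
            Console.WriteLine(response.StatusCode);
        }
    }
}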
Cross-Site WebSocket Hijacking (CSWSH) protection
To protect against CSWSH attacks, all clients opening a WebSocket connection to Device Portal must also provide
an Origin header that matches the Host header. This proves to Device Portal that the request comes either from
the Device Portal UI or a valid client application. Without the Origin header your request will be rejected.
Device Portal for Desktop

Starting in Windows 10, Version 1607, additional developer features are available for desktop. These features are
available only when Developer Mode is enabled.
For information about how to enable Developer Mode, see Enable your device for development.
Device Portal lets you view diagnostic information and interact with your desktop over HTTP from your browser.
You can use Device Portal to do the following:
See and manipulate a list of running processes
Install, delete, launch, and terminate apps
Change Wi-Fi profiles, view signal strength, and see ipconfig
View live graphs of CPU, memory, I/O, network, and GPU usage
Collect process dumps
Collect ETW traces
Manipulate the isolated storage of sideloaded apps

Set up device portal on Windows Desktop


Turn on device portal
In the Developer Settings menu, with Developer Mode enabled, you can enable Device Portal.
When you enable Device Portal, you must also create a username and password for Device Portal. Do not use your
Microsoft account or other Windows credentials.
After Device Portal is enabled, you will see links to it at the bottom of the Settings section. Take note of the port
number applied to the end of the URL: this port number is randomly generated when Device Portal is enabled, but
should remain consistent between reboots of the desktop. If you'd like to set the port numbers manually so they
remain permanent, see Setting port numbers.
You can choose from two ways to connect to Device Portal: local host and over the local network (including VPN).
To connect to Device Portal
1. In your browser, enter the address shown here for the connection type you're using.
Localhost: http://127.0.0.1:PORT or http://localhost:PORT

Use this address to view Device Portal locally.


Local Network: https://<The IP address of the desktop>:PORT

Use this address to connect over a local network.


HTTPS is required for authentication and secure communication.
If you are using Device Portal in a protected environment, like a test lab, where you trust everyone on your local
network, have no personal information on the device, and have unique requirements, you can disable
authentication. This enables unencrypted communication, and allows anyone with the IP address of your computer
to control it.
Device Portal pages
Device Portal on desktop provides the standard set of pages. For detailed descriptions, see Windows Device Portal
overview.
Apps
Processes
Performance
Debugging
Event Tracing for Windows (ETW)
Performance tracing
Devices
Networking
App File Explorer

Setting port numbers


If you would like to select port numbers for Device Portal (such as 80 and 443), you can set the following regkeys:
Under HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows\CurrentVersion\WebManagement\Service
UseDynamicPorts: A required DWORD. Set this to 0 in order to retain the port numbers you've chosen.
HttpPort: A required DWORD. Contains the port number that Device Portal will listen for HTTP
connections on.
HttpsPort: A required DWORD. Contains the port number that Device Portal will listen for HTTPS
connections on.
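
As a sketch, the three values can be written from an elevated C# program; the port numbers below are
examples only, so pick ports that are free on your machine.

using Microsoft.Win32;

class SetDevicePortalPorts
{
    const string KeyPath =
        @"HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows\CurrentVersion\WebManagement\Service";

    static void Main()
    {
        // Keep the chosen ports across reboots instead of using dynamic ones.
        Registry.SetValue(KeyPath, "UseDynamicPorts", 0, RegistryValueKind.DWord);
        Registry.SetValue(KeyPath, "HttpPort", 50080, RegistryValueKind.DWord);
        Registry.SetValue(KeyPath, "HttpsPort", 50043, RegistryValueKind.DWord);
    }
}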

Failure to install Developer Mode package or launch Device Portal


Sometimes, due to network or compatibility issues, Developer Mode won't install correctly. The Developer Mode
package is required for remote deployment (Device Portal and SSH), but not for local development. Even if you
encounter these issues, you can still deploy your app locally using Visual Studio.
See the Known Issues forum to find workarounds to these issues and more.
Failed to locate the package
"Developer Mode package couldnt be located in Windows Update. Error Code 0x001234 Learn more"
This error may occur due to a network connectivity problem, Enterprise settings, or the package may be missing.
To fix this issue:
1. Ensure your computer is connected to the Internet.
2. If you are on a domain-joined computer, speak to your network administrator.
3. Check for Windows updates in the Settings > Updates and Security > Windows Updates.
4. Verify that the Windows Developer Mode package is present in Settings > System > Apps & Features > Manage
optional features > Add a feature. If it is missing, Windows cannot find the correct package for your computer.
After doing any of the above steps, disable and then re-enable Developer Mode to verify the fix.
Failed to install the package
"Developer Mode package failed to install. Error code 0x001234 Learn more"
This error may occur due to incompatibilities between your build of Windows and the Developer Mode package.
To fix this issue:
1. Check for Windows updates in the Settings > Updates and Security > Windows Updates.
2. Reboot your computer to ensure all updates are applied.
Device Portal for Mobile

Starting in Windows 10, Version 1511, additional developer features are available for the mobile device family.
These features are available only when Developer mode is enabled on the device.
For info about how to enable Developer mode, see Enable your device for development.

Set up Device Portal on Windows Phone


Turn on device discovery and pairing
To connect to Device Portal, you must enable Device discovery and Device Portal in your phone's settings. This lets
you pair your phone with a PC or other Windows 10 device. Both devices must be connected to the same subnet of
the network by a wired or wireless connection, or they must be connected by USB.
The first time you connect to Device Portal, you are asked for a case-sensitive, 6-character security code. This
ensures that you have access to the phone, and keeps you safe from attackers. Press the Pair button on your phone
to generate and display the code, then enter the 6 characters into the text box in the browser.
You can choose from 3 ways to connect to Device Portal: USB, local host, and over the local network (including VPN
and tethering).
To connect to Device Portal
1. In your browser, enter the address shown here for the connection type you're using.
USB: http://127.0.0.1:10080

Use this address when the phone is connected to a PC via a USB connection. Both devices must have
Windows 10, Version 1511 or later.
Localhost: http://127.0.0.1

Use this address to view Device Portal locally on the phone in Microsoft Edge for Windows 10 Mobile.
Local Network: https://<The IP address or hostname of the phone>

Use this address to connect over a local network.


The IP address of the phone is shown in the Device Portal settings on the phone. HTTPS is required
for authentication and secure communication. The hostname (editable in Settings > System > About)
can also be used to access Device Portal on the local network (e.g. http://Phone360), which is useful
for devices that may change networks or IP addresses frequently, or need to be shared.
2. Press the Pair button on your phone to generate and display the required security code.
3. Enter the 6-character security code into the Device Portal password box in your browser.
4. (Optional) Check the Remember my computer box in your browser to remember this pairing in the future.
Here's the Device Portal section of the developer settings page on Windows Phone.
If you are using Device Portal in a protected environment, like a test lab, where you trust everyone on your local
network, have no personal information on the device, and have unique requirements, you can disable
authentication. This enables unencrypted communication, and allows anyone with the IP address of your phone to
control it.

Tool notes
Processes: The ability to terminate arbitrary processes is not included in the Windows Mobile Device Portal.

Device Portal pages
Device Portal on mobile devices provides the standard set of pages. For detailed descriptions, see Windows Device
Portal overview.
App Manager
App File Explorer (Isolated Storage Explorer)
Processes
Performance charts
Event Tracing for Windows (ETW)
Performance tracing (WPR)
Devices
Networking
Device Portal for Xbox

Set up Device Portal on Xbox


Enable Device Portal
To enable Device Portal
1. Select the Dev Home tile on the home screen.

2. Within Dev Home, navigate to the Remote Management tool

3. Select Manage Windows Device Portal and press A


4. Check the Enable Windows Device Portal setting
5. Enter a Username and Password to use to authenticate access to your devkit from a browser, and save them.
6. Close the settings page and note the URL listed on the Remote Management tool to connect.
7. Enter the URL in your browser, and then sign in with the credentials you configured.
8. You will receive a warning about the certificate that was provided. Click Continue to this website to access
Windows Device Portal in the preview.
Device Portal pages
Device Portal on Xbox provides a set of standard pages. For detailed descriptions, see Windows Device Portal
overview.
Apps
Performance
Networking
Device Portal core API reference

Everything in the Windows Device Portal is built on top of REST APIs that you can use to access the data and control
your device programmatically.

App deployment
Install an app
Request
You can install an app by using the following request format.

METHOD REQUEST URI

POST /api/app/packagemanager/package

URI parameters
You can specify the following additional parameters on the request URI:

URI PARAMETER DESCRIPTION

package (required) The file name of the package to be installed.

Request headers
None
Request body
The .appx or .appxbundle file, as well as any dependencies the app requires.
The certificate used to sign the app, if the device is IoT or Windows Desktop. Other platforms do not require the
certificate.
Response
Status code
This API has the following expected status codes.

HTTP STATUS CODE DESCRIPTION

200 Deploy request accepted and being processed

4XX Error codes

5XX Error codes


Available device families
Windows Mobile
Windows Desktop
Xbox
HoloLens
IoT

Get app installation status


Request
You can get the status of an app installation that is currently in progress by using the following request format.

METHOD REQUEST URI

GET /api/app/packagemanager/state

URI parameters
None
Request headers
None
Request body
None
Response
Status code
This API has the following expected status codes.

HTTP STATUS CODE DESCRIPTION

200 The result of the last deployment

204 The installation is running

404 No installation action was found

Available device families


Windows Mobile
Windows Desktop
Xbox
HoloLens
IoT

Uninstall an app
Request
You can uninstall an app by using the following request format.

METHOD REQUEST URI

DELETE /api/app/packagemanager/package

URI parameters

URI PARAMETER DESCRIPTION

package (required) The PackageFullName (from GET


/api/app/packagemanager/packages) of the target app

Request headers
None
Request body
None
Response
Status code
This API has the following expected status codes.

HTTP STATUS CODE DESCRIPTION

200 OK

4XX Error codes

5XX Error codes

Available device families


Windows Mobile
Windows Desktop
Xbox
HoloLens
IoT

Get installed apps


Request
You can get a list of apps installed on the system by using the following request format.

METHOD REQUEST URI

GET /api/app/packagemanager/packages
URI parameters
None
Request headers
None
Request body
None
Response
The response includes a list of installed packages with associated details. The template for this response is as
follows.

{"InstalledPackages": [
{
"Name": string,
"PackageFamilyName": string,
"PackageFullName": string,
"PackageOrigin": int, (https://msdn.microsoft.com/en-us/library/windows/desktop/dn313167(v=vs.85).aspx)
"PackageRelativeId": string,
"Publisher": string,
"Version": {
"Build": int,
"Major": int,
"Minor": int,
"Revision": int
},
"RegisteredUsers": [
{
"UserDisplayName": string,
"UserSID": string
},...
]
},...
]}

Status code
This API has the following expected status codes.

HTTP STATUS CODE DESCRIPTION

200 OK

4XX Error codes

5XX Error codes

Available device families


Windows Mobile
Windows Desktop
Xbox
HoloLens
IoT
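
As an example of calling this endpoint, here is a minimal C# sketch that fetches and prints the
InstalledPackages JSON shown above. It assumes Device Portal is reachable over HTTP at 127.0.0.1:50080
with authentication disabled; otherwise supply credentials and the CSRF handling described in the
Device Portal overview.

using System;
using System.Net.Http;
using System.Threading.Tasks;

class ListPackages
{
    static async Task Main()
    {
        using (var client = new HttpClient())
        {
            // Returns the JSON template shown above.
            string json = await client.GetStringAsync(
                "http://127.0.0.1:50080/api/app/packagemanager/packages");
            Console.WriteLine(json);
        }
    }
}
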
Device manager
Get the installed devices on the machine
Request
You can get a list of devices that are installed on the machine by using the following request format.

METHOD REQUEST URI

GET /api/devicemanager/devices

URI parameters
None
Request headers
None
Request body
None
Response
The response includes a JSON array of devices attached to the device.

{"DeviceList": [
{
"Class": string,
"Description": string,
"ID": string,
"Manufacturer": string,
"ParentID": string,
"ProblemCode": int,
"StatusCode": int
},...
]}

Status code
This API has the following expected status codes.

HTTP STATUS CODE DESCRIPTION

200 OK

4XX Error codes

5XX Error codes

Available device families


Windows Mobile
Windows Desktop
IoT

Dump collection
Get the list of all crash dumps for apps
Request
You can get the list of all the available crash dumps for all sideloaded apps by using the following request format.

METHOD REQUEST URI

GET /api/debug/dump/usermode/dumps

URI parameters
None
Request headers
None
Request body
None
Response
The response includes a list of crash dumps for each sideloaded application.
Status code
This API has the following expected status codes.

HTTP STATUS CODE DESCRIPTION

200 OK

4XX Error codes

5XX Error codes

Available device families


Windows Mobile (in Windows Insider Program)
Windows Desktop
HoloLens
IoT

Get the crash dump collection settings for an app


Request
You can get the crash dump collection settings for a sideloaded app by using the following request format.
METHOD REQUEST URI

GET /api/debug/dump/usermode/crashcontrol

URI parameters
You can specify the following additional parameters on the request URI:

URI PARAMETER DESCRIPTION

packageFullname (required) The full name of the package for the sideloaded
app.

Request headers
None
Request body
None
Response
The response has the following format.

{"CrashDumpEnabled": bool}

Status code
This API has the following expected status codes.

HTTP STATUS CODE DESCRIPTION

200 OK

4XX Error codes

5XX Error codes

Available device families


Windows Mobile (in Windows Insider Program)
Windows Desktop
HoloLens
IoT

Delete a crash dump for a sideloaded app


Request
You can delete a sideloaded app's crash dump by using the following request format.
METHOD REQUEST URI

DELETE /api/debug/dump/usermode/crashdump

URI parameters
You can specify the following additional parameters on the request URI:

URI PARAMETER DESCRIPTION

packageFullname (required) The full name of the package for the sideloaded
app.

fileName (required) The name of the dump file that should be deleted.

Request headers
None
Request body
None
Response
Status code
This API has the following expected status codes.

HTTP STATUS CODE DESCRIPTION

200 OK

4XX Error codes

5XX Error codes

Available device families


Windows Mobile (in Windows Insider Program)
Windows Desktop
HoloLens
IoT

Disable crash dumps for a sideloaded app


Request
You can disable crash dumps for a sideloaded app by using the following request format.

METHOD REQUEST URI

DELETE /api/debug/dump/usermode/crashcontrol
URI parameters
You can specify the following additional parameters on the request URI:

URI PARAMETER DESCRIPTION

packageFullname (required) The full name of the package for the sideloaded
app.

Request headers
None
Request body
None
Response
Status code
This API has the following expected status codes.

HTTP STATUS CODE DESCRIPTION

200 OK

4XX Error codes

5XX Error codes

Available device families


Windows Mobile (in Windows Insider Program)
Windows Desktop
HoloLens
IoT

Download the crash dump for a sideloaded app


Request
You can download a sideloaded app's crash dump by using the following request format.

METHOD REQUEST URI

GET /api/debug/dump/usermode/crashdump

URI parameters
You can specify the following additional parameters on the request URI:
URI PARAMETER DESCRIPTION

packageFullname (required) The full name of the package for the sideloaded
app.

fileName (required) The name of the dump file that you want to
download.

Request headers
None
Request body
None
Response
The response includes a dump file. You can use WinDbg or Visual Studio to examine the dump file.
Status code
This API has the following expected status codes.

HTTP STATUS CODE DESCRIPTION

200 OK

4XX Error codes

5XX Error codes

Available device families


Windows Mobile (in Windows Insider Program)
Windows Desktop
HoloLens
IoT
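
A minimal C# sketch of calling this endpoint and saving the result to disk follows. The package and
file names are hypothetical placeholders; real values come from the crash dump list API above, and the
address assumes an unauthenticated local connection.

using System;
using System.IO;
using System.Net.Http;
using System.Threading.Tasks;

class DownloadCrashDump
{
    static async Task Main()
    {
        // Placeholder values for illustration only.
        string package = Uri.EscapeDataString("MyApp_1.0.0.0_x64__abc123");
        string file = Uri.EscapeDataString("crash.dmp");

        using (var client = new HttpClient())
        {
            byte[] dump = await client.GetByteArrayAsync(
                "http://127.0.0.1:50080/api/debug/dump/usermode/crashdump" +
                $"?packageFullname={package}&fileName={file}");
            File.WriteAllBytes("crash.dmp", dump);
            Console.WriteLine($"Saved {dump.Length} bytes.");
        }
    }
}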

Enable crash dumps for a sideloaded app


Request
You can enable crash dumps for a sideloaded app by using the following request format.

METHOD REQUEST URI

POST /api/debug/dump/usermode/crashcontrol

URI parameters
You can specify the following additional parameters on the request URI:
URI PARAMETER DESCRIPTION

packageFullname (required) The full name of the package for the sideloaded
app.

Request headers
None
Request body
None
Response
Status code
This API has the following expected status codes.

HTTP STATUS CODE DESCRIPTION

200 OK

Available device families


Windows Mobile (in Windows Insider Program)
Windows Desktop
HoloLens
IoT

Get the list of bugcheck files


Request
You can get the list of bugcheck minidump files by using the following request format.

METHOD REQUEST URI

GET /api/debug/dump/kernel/dumplist

URI parameters
None
Request headers
None
Request body
None
Response
The response includes a list of dump file names and the sizes of these files. This list will be in the following format.
{"DumpFiles": [
{
"FileName": string,
"FileSize": int
},...
]}

Status code
This API has the following expected status codes.

HTTP STATUS CODE DESCRIPTION

200 OK

Available device families


Windows Desktop
IoT

Download a bugcheck dump file


Request
You can download a bugcheck dump file by using the following request format.

METHOD REQUEST URI

GET /api/debug/dump/kernel/dump

URI parameters
You can specify the following additional parameters on the request URI:

URI PARAMETER DESCRIPTION

filename (required) The file name of the dump file. You can find this by
using the API to get the dump list.

Request headers
None
Request body
None
Response
The response includes the dump file. You can inspect this file using WinDbg.
Status code
This API has the following expected status codes.
HTTP STATUS CODE DESCRIPTION

200 OK

4XX Error codes

5XX Error codes

Available device families


Windows Desktop
IoT

Get the bugcheck crash control settings


Request
You can get the bugcheck crash control settings by using the following request format.

METHOD REQUEST URI

GET /api/debug/dump/kernel/crashcontrol

URI parameters
None
Request headers
None
Request body
None
Response
The response includes the crash control settings. For more information about CrashControl, see the CrashControl
article. The template for the response is as follows.

{
"autoreboot": bool (0 or 1),
"dumptype": int (0 to 4),
"maxdumpcount": int,
"overwrite": bool (0 or 1)
}

Dump types
0: Disabled
1: Complete memory dump (collects all in-use memory)
2: Kernel memory dump (ignores user mode memory)
3: Limited kernel minidump
Status code
This API has the following expected status codes.

HTTP STATUS CODE DESCRIPTION

200 OK

4XX Error codes

5XX Error codes

Available device families


Windows Desktop
IoT

Get a live kernel dump


Request
You can get a live kernel dump by using the following request format.

METHOD REQUEST URI

GET /api/debug/dump/livekernel

URI parameters
None
Request headers
None
Request body
None
Response
The response includes the full kernel mode dump. You can inspect this file using WinDbg.
Status code
This API has the following expected status codes.

HTTP STATUS CODE DESCRIPTION

200 OK

4XX Error codes

5XX Error codes


Available device families
Windows Desktop
IoT

Get a dump from a live user process


Request
You can get the dump for a live user process by using the following request format.

METHOD REQUEST URI

GET /api/debug/dump/usermode/live

URI parameters
You can specify the following additional parameters on the request URI:

URI PARAMETER DESCRIPTION

pid (required) The unique process id for the process you are
interested in.

Request headers
None
Request body
None
Response
The response includes the process dump. You can inspect this file using WinDbg or Visual Studio.
Status code
This API has the following expected status codes.

HTTP STATUS CODE DESCRIPTION

200 OK

4XX Error codes

5XX Error codes

Available device families


Windows Desktop
IoT

Set the bugcheck crash control settings


Request
You can set the settings for collecting bugcheck data by using the following request format.

METHOD REQUEST URI

POST /api/debug/dump/kernel/crashcontrol

URI parameters
You can specify the following additional parameters on the request URI:

URI PARAMETER             DESCRIPTION

autoreboot (optional)     True or false. This indicates whether the system restarts automatically after it fails or locks.

dumptype (optional)       The dump type. For the supported values, see the CrashDumpType Enumeration.

maxdumpcount (optional)   The maximum number of dumps to save.

overwrite (optional)      True or false. This indicates whether or not to overwrite old dumps when the dump counter limit specified by maxdumpcount has been reached.

Request headers
None
Request body
None
Response
Status code
This API has the following expected status codes.

HTTP STATUS CODE DESCRIPTION

200 OK

4XX Error codes

5XX Error codes

Available device families


Windows Desktop
IoT
ETW
Create a realtime ETW session over a websocket
Request
You can create a realtime ETW session by using the following request format. This will be managed over a
websocket. ETW events are batched on the server and sent to the client once per second.

METHOD REQUEST URI

GET/WebSocket /api/etw/session/realtime

URI parameters
None
Request headers
None
Request body
None
Response
The response includes the ETW events from the enabled providers. See ETW WebSocket commands below.
Status code
This API has the following expected status codes.

HTTP STATUS CODE DESCRIPTION

200 OK

4XX Error codes

5XX Error codes

Available device families


Windows Mobile
Windows Desktop
HoloLens
IoT
ETW WebSocket commands
These commands are sent from the client to the server.

COMMAND                          DESCRIPTION

provider {guid} enable {level}   Enable the provider marked by {guid} (without brackets) at the specified level. {level} is an int from 1 (least detail) to 5 (verbose).
provider {guid} disable          Disable the provider marked by {guid} (without brackets).

This response is sent from the server to the client. It is sent as text, and parsing the JSON yields the
following format.

{
"Events":[
{
"Timestamp": int,
"ProviderName": string,
"ID": int,
"TaskName": string,
"Keyword": int,
"Level": int,
payload objects...
},...
],
"Frequency": int
}

Payload objects are extra key-value pairs (string:string) that are provided in the original ETW event.
Example:

{
"ID" : 42,
"Keyword" : 9223372036854775824,
"Level" : 4,
"Message" : "UDPv4: 412 bytes transmitted from 10.81.128.148:510 to 132.215.243.34:510. ",
"PID" : "1218",
"ProviderName" : "Microsoft-Windows-Kernel-Network",
"TaskName" : "KERNEL_NETWORK_TASK_UDPIP",
"Timestamp" : 131039401761757686,
"connid" : "0",
"daddr" : "132.245.243.34",
"dport" : "500",
"saddr" : "10.82.128.118",
"seqnum" : "0",
"size" : "412",
"sport" : "500"
}
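
The following C# sketch drives this WebSocket endpoint end to end: it connects, enables a provider, and
prints each JSON batch as it arrives. It assumes an unauthenticated connection at 127.0.0.1:50080 and
uses the GUID of the Microsoft-Windows-Kernel-Network provider that produced the sample event above;
the sketch also assumes each batch fits in a single WebSocket frame.

using System;
using System.Net.WebSockets;
using System.Text;
using System.Threading;
using System.Threading.Tasks;

class EtwRealtimeSession
{
    static async Task Main()
    {
        using (var ws = new ClientWebSocket())
        {
            await ws.ConnectAsync(
                new Uri("ws://127.0.0.1:50080/api/etw/session/realtime"),
                CancellationToken.None);

            // Enable the provider at level 4 (no brackets around the GUID).
            byte[] cmd = Encoding.UTF8.GetBytes(
                "provider 7dd42a49-5329-4832-8dfd-43d979153a88 enable 4");
            await ws.SendAsync(new ArraySegment<byte>(cmd),
                WebSocketMessageType.Text, true, CancellationToken.None);

            // Batches of events arrive as JSON text, once per second.
            var buffer = new byte[64 * 1024];
            while (ws.State == WebSocketState.Open)
            {
                WebSocketReceiveResult result = await ws.ReceiveAsync(
                    new ArraySegment<byte>(buffer), CancellationToken.None);
                Console.WriteLine(Encoding.UTF8.GetString(buffer, 0, result.Count));
            }
        }
    }
}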

Enumerate the registered ETW providers


Request
You can enumerate through the registered providers by using the following request format.

METHOD REQUEST URI

GET /api/etw/providers
URI parameters
None
Request headers
None
Request body
None
Response
The response includes the list of ETW providers. The list will include the friendly name and GUID for each provider
in the following format.

{"Providers": [
{
"GUID": string, (GUID)
"Name": string
},...
]}

Status code
This API has the following expected status codes.

HTTP STATUS CODE DESCRIPTION

200 OK

Available device families


Windows Mobile
Windows Desktop
HoloLens
IoT

Enumerate the custom ETW providers exposed by the platform.


Request
You can enumerate the custom ETW providers exposed by the platform by using the following request format.

METHOD REQUEST URI

GET /api/etw/customproviders

URI parameters
None
Request headers
None
Request body
None
Response
200 OK. The response includes the list of ETW providers. The list will include the friendly name and GUID for each
provider.

{"Providers": [
{
"GUID": string, (GUID)
"Name": string
},...
]}

Status code
Standard status codes.
Available device families
Windows Mobile
Windows Desktop
HoloLens
IoT

OS information
Get the machine name
Request
You can get the name of a machine by using the following request format.

METHOD REQUEST URI

GET /api/os/machinename

URI parameters
None
Request headers
None
Request body
None
Response
The response includes the computer name in the following format.

{"ComputerName": string}
Status code
This API has the following expected status codes.

HTTP STATUS CODE DESCRIPTION

200 OK

4XX Error codes

5XX Error codes

Available device families


Windows Mobile
Windows Desktop
Xbox
HoloLens
IoT

Get the operating system information


Request
You can get the OS information for a machine by using the following request format.

METHOD REQUEST URI

GET /api/os/info

URI parameters
None
Request headers
None
Request body
None
Response
The response includes the OS information in the following format.

{
"ComputerName": string,
"OsEdition": string,
"OsEditionId": int,
"OsVersion": string,
"Platform": string
}

Status code
This API has the following expected status codes.

HTTP STATUS CODE DESCRIPTION

200 OK

4XX Error codes

5XX Error codes

Available device families


Windows Mobile
Windows Desktop
Xbox
HoloLens
IoT

Get the device family


Request
You can get the device family (Xbox, phone, desktop, etc) using the following request format.

METHOD REQUEST URI

GET /api/os/devicefamily

URI parameters
None
Request headers
None
Request body
None
Response
The response includes the device family (SKU - Desktop, Xbox, etc).

{
"DeviceType" : string
}

DeviceType will look like "Windows.Xbox", "Windows.Desktop", etc.


Status code
This API has the following expected status codes.
HTTP STATUS CODE DESCRIPTION

200 OK

4XX Error codes

5XX Error codes

Available device families


Windows Mobile
Windows Desktop
Xbox
HoloLens
IoT

Set the machine name


Request
You can set the name of a machine by using the following request format.

METHOD REQUEST URI

POST /api/os/machinename

URI parameters
You can specify the following additional parameters on the request URI:

URI PARAMETER DESCRIPTION

name (required) The new name for the machine.

Request headers
None
Request body
None
Response
Status code
This API has the following expected status codes.

HTTP STATUS CODE DESCRIPTION

200 OK

Available device families


Windows Mobile
Windows Desktop
Xbox
HoloLens
IoT

Performance data
Get the list of running processes
Request
You can get the list of currently running processes by using the following request format. This can be upgraded to a
WebSocket connection as well, with the same JSON data being pushed to the client once per second.

METHOD REQUEST URI

GET /api/resourcemanager/processes

GET/WebSocket /api/resourcemanager/processes

URI parameters
None
Request headers
None
Request body
None
Response
The response includes a list of processes with details for each process. The information is in JSON format and has
the following template.

{"Processes": [
{
"CPUUsage": int,
"ImageName": string,
"PageFileUsage": int,
"PrivateWorkingSet": int,
"ProcessId": int,
"SessionId": int,
"UserName": string,
"VirtualSize": int,
"WorkingSetSize": int
},...
]}

Status code
This API has the following expected status codes.
HTTP STATUS CODE DESCRIPTION

200 OK

4XX Error codes

5XX Error codes

Available device families


Windows Mobile
Windows Desktop
HoloLens
IoT

Get the system performance statistics


Request
You can get the system performance statistics by using the following request format. This includes information
such as read and write cycles and how much memory has been used.

METHOD REQUEST URI

GET /api/resourcemanager/systemperf

GET/WebSocket /api/resourcemanager/systemperf

This can also be upgraded to a WebSocket connection. It provides the same JSON data below once every second.
URI parameters
None
Request headers
None
Request body
None
Response
The response includes the performance statistics for the system such as CPU and GPU usage, memory access, and
network access. This information is in JSON format and has the following template.
{
"AvailablePages": int,
"CommitLimit": int,
"CommittedPages": int,
"CpuLoad": int,
"IOOtherSpeed": int,
"IOReadSpeed": int,
"IOWriteSpeed": int,
"NonPagedPoolPages": int,
"PageSize": int,
"PagedPoolPages": int,
"TotalInstalledInKb": int,
"TotalPages": int,
"GPUData":
{
"AvailableAdapters": [{ (One per detected adapter)
"DedicatedMemory": int,
"DedicatedMemoryUsed": int,
"Description": string,
"SystemMemory": int,
"SystemMemoryUsed": int,
"EnginesUtilization": [ float,... (One per detected engine)]
},...
]},
"NetworkingData": {
"NetworkInBytes": int,
"NetworkOutBytes": int
}
}

Status code
This API has the following expected status codes.

HTTP STATUS CODE DESCRIPTION

200 OK

4XX Error codes

5XX Error codes

Available device families


Windows Mobile
Windows Desktop
Xbox
HoloLens
IoT

Power
Get the current battery state
Request
You can get the current state of the battery by using the following request format.
METHOD REQUEST URI

GET /api/power/battery

URI parameters
None
Request headers
None
Request body
None
Response
The current battery state information is returned using the following format.

{
"AcOnline": int (0 | 1),
"BatteryPresent": int (0 | 1),
"Charging": int (0 | 1),
"DefaultAlert1": int,
"DefaultAlert2": int,
"EstimatedTime": int,
"MaximumCapacity": int,
"RemainingCapacity": int
}

Status code
This API has the following expected status codes.

HTTP STATUS CODE DESCRIPTION

200 OK

4XX Error codes

5XX Error codes

Available device families


Windows Mobile
Windows Desktop
HoloLens
IoT

Get the active power scheme


Request
You can get the active power scheme by using the following request format.
METHOD REQUEST URI

GET /api/power/activecfg

URI parameters
None
Request headers
None
Request body
None
Response
The active power scheme has the following format.

{"ActivePowerScheme": string (guid of scheme)}

Status code
This API has the following expected status codes.

HTTP STATUS CODE DESCRIPTION

200 OK

4XX Error codes

5XX Error codes

Available device families


Windows Desktop
IoT

Get the sub-value for a power scheme


Request
You can get the sub-value for a power scheme by using the following request format.

METHOD REQUEST URI

GET /api/power/cfg/

Options:
SCHEME_CURRENT
URI parameters
None
Request headers
None
Request body
A full listing of the power states available on a per-application basis, and the settings for flagging various
power states like low and critical battery.
Response
Status code
This API has the following expected status codes.

HTTP STATUS CODE DESCRIPTION

200 OK

4XX Error codes

5XX Error codes

Available device families


Windows Desktop
IoT

Get the power state of the system


Request
You can check the power state of the system by using the following request format. This will let you check to see if
it is in a low power state.

METHOD REQUEST URI

GET /api/power/state

URI parameters
None
Request headers
None
Request body
None
Response
The power state information has the following template.
{"LowPowerStateAvailable": bool}

Status code
This API has the following expected status codes.

HTTP STATUS CODE DESCRIPTION

200 OK

4XX Error codes

5XX Error codes

Available device families


Windows Desktop
HoloLens
IoT

Set the active power scheme


Request
You can set the active power scheme by using the following request format.

METHOD REQUEST URI

POST /api/power/activecfg

URI parameters
You can specify the following additional parameters on the request URI:

URI PARAMETER DESCRIPTION

scheme (required) The GUID of the scheme you want to set as the
active power scheme for the system.

Request headers
None
Request body
None
Response
Status code
This API has the following expected status codes.
HTTP STATUS CODE DESCRIPTION

200 OK

4XX Error codes

5XX Error codes

Available device families


Windows Desktop
IoT

Set the sub-value for a power scheme


Request
You can set the sub-value for a power scheme by using the following request format.

METHOD REQUEST URI

POST /api/power/cfg/

URI parameters
You can specify the following additional parameters on the request URI:

URI PARAMETER DESCRIPTION

valueAC (required) The value to use for A/C power.

valueDC (required) The value to use for battery power.

Request headers
None
Request body
None
Response
Status code
This API has the following expected status codes.

HTTP STATUS CODE DESCRIPTION

200 OK

Available device families


Windows Desktop
IoT

Get a sleep study report


Request

You can get a sleep study report by using the following request format.

METHOD REQUEST URI

GET /api/power/sleepstudy/report
URI parameters

URI PARAMETER DESCRIPTION

FileName (required) The full name for the file you want to download.
This value should be hex64 encoded.

Request headers
None
Request body
None
Response
The response is a file containing the sleep study.
Status code
This API has the following expected status codes.

HTTP STATUS CODE DESCRIPTION

200 OK

4XX Error codes

5XX Error codes

Available device families


Windows Desktop
IoT

Enumerate the available sleep study reports


Request
You can enumerate the available sleep study reports by using the following request format.
METHOD REQUEST URI

GET /api/power/sleepstudy/reports

URI parameters
None
Request headers
None
Request body
None
Response
The list of available reports has the following template.

{"Reports": [
{
"FileName": string
},...
]}

Status code
This API has the following expected status codes.

HTTP STATUS CODE DESCRIPTION

200 OK

4XX Error codes

5XX Error codes

Available device families


Windows Desktop
IoT

Get the sleep study transform


Request
You can get the sleep study transform by using the following request format. This transform is an XSLT that
converts the sleep study report into a human-readable XML format.

METHOD REQUEST URI

GET /api/power/sleepstudy/transform
URI parameters
None
Request headers
None
Request body
None
Response
The response contains the sleep study transform.
Status code
This API has the following expected status codes.

HTTP STATUS CODE DESCRIPTION

200 OK

4XX Error codes

5XX Error codes

Available device families


Windows Desktop
IoT

Remote control
Restart the target computer
Request
You can restart the target computer by using the following request format.

METHOD REQUEST URI

POST /api/control/restart

URI parameters
None
Request headers
None
Request body
None
Response
Status code
This API has the following expected status codes.

HTTP STATUS CODE DESCRIPTION

200 OK

Available device families


Windows Mobile
Windows Desktop
Xbox
HoloLens
IoT
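
A minimal sketch of the restart call follows; the address is a placeholder and authentication handling is omitted.

using System;
using System.Net.Http;
using System.Threading.Tasks;

class RestartSample
{
    static async Task Main()
    {
        using (var client = new HttpClient())
        {
            // POST with no body; success means the restart was requested.
            HttpResponseMessage response = await client.PostAsync(
                "http://localhost:50080/api/control/restart", null);
            Console.WriteLine(response.IsSuccessStatusCode
                ? "Restart requested."
                : "Restart failed: " + (int)response.StatusCode);
        }
    }
}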

Shut down the target computer


Request
You can shut down the target computer by using the following request format.

METHOD REQUEST URI

POST /api/control/shutdown

URI parameters
None
Request headers
None
Request body
None
Response
Status code
This API has the following expected status codes.

HTTP STATUS CODE DESCRIPTION

200 OK

4XX Error codes

5XX Error codes


Available device families
Windows Mobile
Windows Desktop
Xbox
HoloLens
IoT

Task manager
Start a modern app
Request
You can start a modern app by using the following request format.

METHOD REQUEST URI

POST /api/taskmanager/app

URI parameters
You can specify the following additional parameters on the request URI:

URI PARAMETER DESCRIPTION

appid (required) The PRAID for the app you want to start. This
value should be hex64 encoded.

package (required) The full name for the app package you want to
start. This value should be hex64 encoded.

Request headers
None
Request body
None
Response
Status code
This API has the following expected status codes.

HTTP STATUS CODE DESCRIPTION

200 OK

4XX Error codes

5XX Error codes


Available device families
Windows Mobile
Windows Desktop
Xbox
HoloLens
IoT
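
A hedged sketch of starting an app follows. The PRAID and package full name are hypothetical; it also assumes that the "hex64" encoding called for above is standard base64 over the UTF-8 bytes of the value, URL-escaped for the query string. Adjust if your device expects a different encoding.

using System;
using System.Net.Http;
using System.Text;
using System.Threading.Tasks;

class StartAppSample
{
    // Assumption: "hex64" means base64 of the UTF-8 string, URL-escaped.
    static string Encode(string value) =>
        Uri.EscapeDataString(Convert.ToBase64String(Encoding.UTF8.GetBytes(value)));

    static async Task Main()
    {
        string appid = Encode("App"); // hypothetical PRAID
        string package = Encode("Contoso.SampleApp_1.0.0.0_x64__8wekyb3d8bbwe");
        using (var client = new HttpClient())
        {
            var response = await client.PostAsync(
                "http://localhost:50080/api/taskmanager/app?appid=" + appid +
                "&package=" + package, null);
            Console.WriteLine((int)response.StatusCode);
        }
    }
}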

Stop a modern app


Request
You can stop a modern app by using the following request format.

METHOD REQUEST URI

DELETE /api/taskmanager/app

URI parameters
You can specify the following additional parameters on the request URI:

URI PARAMETER DESCRIPTION

package (required) The full name of the app packages that you want
to stop. This value should be hex64 encoded.

forcestop (optional) A value of yes indicates that the system should force all processes to stop.

Request headers
None
Request body
None
Response
Status code
This API has the following expected status codes.

HTTP STATUS CODE DESCRIPTION

200 OK

4XX Error codes

5XX Error codes


Available device families
Windows Mobile
Windows Desktop
Xbox
HoloLens
IoT

Networking
Get the current IP configuration
Request
You can get the current IP configuration by using the following request format.

METHOD REQUEST URI

GET /api/networking/ipconfig

URI parameters
None
Request headers
None
Request body
None
Response
The response includes the IP configuration in the following template.
{"Adapters": [
{
"Description": string,
"HardwareAddress": string,
"Index": int,
"Name": string,
"Type": string,
"DHCP": {
"LeaseExpires": int, (timestamp)
"LeaseObtained": int, (timestamp)
"Address": {
"IpAddress": string,
"Mask": string
}
},
"WINS": {(WINS is optional)
"Primary": {
"IpAddress": string,
"Mask": string
},
"Secondary": {
"IpAddress": string,
"Mask": string
}
},
"Gateways": [{ (always 1+)
"IpAddress": "10.82.128.1",
"Mask": "255.255.255.255"
},...
],
"IpAddresses": [{ (always 1+)
"IpAddress": "10.82.128.148",
"Mask": "255.255.255.0"
},...
]
},...
]}
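
A minimal sketch of reading this template follows, using System.Text.Json (assumed available in the runtime) to print each adapter's name and first IP address; the address is a placeholder.

using System;
using System.Net.Http;
using System.Text.Json;
using System.Threading.Tasks;

class IpConfigSample
{
    static async Task Main()
    {
        using (var client = new HttpClient())
        {
            string json = await client.GetStringAsync(
                "http://localhost:50080/api/networking/ipconfig");
            using (JsonDocument doc = JsonDocument.Parse(json))
            {
                // Walk the Adapters array from the template above.
                foreach (JsonElement adapter in
                         doc.RootElement.GetProperty("Adapters").EnumerateArray())
                {
                    string name = adapter.GetProperty("Name").GetString();
                    string ip = adapter.GetProperty("IpAddresses")[0]
                                       .GetProperty("IpAddress").GetString();
                    Console.WriteLine(name + ": " + ip);
                }
            }
        }
    }
}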

Status code
This API has the following expected status codes.

HTTP STATUS CODE DESCRIPTION

200 OK

4XX Error codes

5XX Error codes

Available device families


Windows Mobile
Windows Desktop
Xbox
HoloLens
IoT
Enumerate wireless network interfaces
Request
You can enumerate the available wireless network interfaces by using the following request format.

METHOD REQUEST URI

GET /api/wifi/interfaces

URI parameters
None
Request headers
None
Request body
None
Response
A list of the available wireless interfaces with details in the following format.

{"Interfaces": [{
"Description": string,
"GUID": string (guid with curly brackets),
"Index": int,
"ProfilesList": [
{
"GroupPolicyProfile": bool,
"Name": string, (Network currently connected to)
"PerUserProfile": bool
},...
]
}
]}

Status code
This API has the following expected status codes.

HTTP STATUS CODE DESCRIPTION

200 OK

4XX Error codes

5XX Error codes

Available device families


Windows Mobile
Windows Desktop
Xbox
HoloLens
IoT

Enumerate wireless networks


Request
You can enumerate the list of wireless networks on the specified interface by using the following request format.

METHOD REQUEST URI

GET /api/wifi/networks

URI parameters
You can specify the following additional parameters on the request URI:

URI PARAMETER DESCRIPTION

interface (required) The GUID for the network interface to use to search for wireless networks, without brackets.

Request headers
None
Request body
None
Response
The list of wireless networks found on the provided interface. This includes details for the networks in the following
format.

{"AvailableNetworks": [
{
"AlreadyConnected": bool,
"AuthenticationAlgorithm": string, (WPA2, etc)
"Channel": int,
"CipherAlgorithm": string, (e.g. AES)
"Connectable": int, (0 | 1)
"InfrastructureType": string,
"ProfileAvailable": bool,
"ProfileName": string,
"SSID": string,
"SecurityEnabled": int, (0 | 1)
"SignalQuality": int,
"BSSID": [int,...],
"PhysicalTypes": [string,...]
},...
]}

Status code
This API has the following expected status codes.
HTTP STATUS CODE DESCRIPTION

200 OK

4XX Error codes

5XX Error codes

Available device families


Windows Mobile
Windows Desktop
Xbox
HoloLens
IoT

Connect to or disconnect from a Wi-Fi network


Request
You can connect to or disconnect from a Wi-Fi network by using the following request format.

METHOD REQUEST URI

POST /api/wifi/network

URI parameters
You can specify the following additional parameters on the request URI:

URI PARAMETER DESCRIPTION

interface (required) The GUID for the network interface you use to
connect to the network.

op (required) Indicates the action to take. Possible values are connect or disconnect.

ssid (required if op == connect) The SSID to connect to.

key (required if op == connect and the network requires authentication) The shared key.

createprofile (required) Create a profile for the network on the device. This
will cause the device to auto-connect to the network in the
future. This can be yes or no.

Request headers
None
Request body
None
Response
Status code
This API has the following expected status codes.

HTTP STATUS CODE DESCRIPTION

200 OK

Available device families


Windows Mobile
Windows Desktop
Xbox
HoloLens
IoT
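
A sketch of building the connect request follows. The interface GUID, SSID, and key are hypothetical placeholders; Uri.EscapeDataString guards against special characters in the SSID and key.

using System;
using System.Net.Http;
using System.Threading.Tasks;

class WifiConnectSample
{
    static async Task Main()
    {
        // Hypothetical interface GUID (no brackets), SSID, and key.
        string query = "?interface=0b287b10-1c67-4e76-a8a8-f1b4f3bb7a66" +
                       "&op=connect" +
                       "&ssid=" + Uri.EscapeDataString("MyNetwork") +
                       "&key=" + Uri.EscapeDataString("MyPassphrase") +
                       "&createprofile=yes";
        using (var client = new HttpClient())
        {
            var response = await client.PostAsync(
                "http://localhost:50080/api/wifi/network" + query, null);
            Console.WriteLine((int)response.StatusCode);
        }
    }
}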

Delete a Wi-Fi profile


Request
You can delete a profile associated with a network on a specific interface by using the following request format.

METHOD REQUEST URI

DELETE /api/wifi/network

URI parameters
You can specify the following additional parameters on the request URI:

URI PARAMETER DESCRIPTION

interface (required) The GUID for the network interface associated with
the profile to delete.

profile (required) The name of the profile to delete.

Request headers
None
Request body
None
Response
Status code
This API has the following expected status codes.
HTTP STATUS CODE DESCRIPTION

200 OK

Available device families


Windows Mobile
Windows Desktop
Xbox
HoloLens
IoT

Windows Error Reporting (WER)


Download a Windows error reporting (WER) file
Request
You can download a WER-related file by using the following request format.

METHOD REQUEST URI

GET /api/wer/report/file

URI parameters
You can specify the following additional parameters on the request URI:

URI PARAMETER DESCRIPTION

user (required) The user name associated with the report.

type (required) The type of report. This can be either queried or archived.

name (required) The name of the report. This should be base64 encoded.

file (required) The name of the file to download from the report.
This should be base64 encoded.

Request headers
None
Request body
None
Response
Response contains the requested file.
Status code
This API has the following expected status codes.

HTTP STATUS CODE DESCRIPTION

200 OK

4XX Error codes

5XX Error codes

Available device families


Windows Desktop
HoloLens
IoT
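
A hedged sketch of downloading a report file follows. The user, report name, and file name are hypothetical (enumerate /api/wer/reports first); per the table above, name and file are base64 encoded, and the result is URL-escaped since base64 can contain reserved characters.

using System;
using System.IO;
using System.Net.Http;
using System.Text;
using System.Threading.Tasks;

class WerDownloadSample
{
    static string B64(string s) =>
        Uri.EscapeDataString(Convert.ToBase64String(Encoding.UTF8.GetBytes(s)));

    static async Task Main()
    {
        string url = "http://localhost:50080/api/wer/report/file" +
                     "?user=" + Uri.EscapeDataString("User") +
                     "&type=queried" +
                     "&name=" + B64("Report0") +      // hypothetical report name
                     "&file=" + B64("memory.hdmp");   // hypothetical file name
        using (var client = new HttpClient())
        {
            byte[] bytes = await client.GetByteArrayAsync(url);
            File.WriteAllBytes("memory.hdmp", bytes);
        }
    }
}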

Enumerate files in a Windows error reporting (WER) report


Request
You can enumerate the files in a WER report by using the following request format.

METHOD REQUEST URI

GET /api/wer/report/files

URI parameters
You can specify the following additional parameters on the request URI:

URI PARAMETER DESCRIPTION

user (required) The user associated with the report.

type (required) The type of report. This can be either queried or archived.

name (required) The name of the report. This should be base64 encoded.

Request headers
None
Request body
None
Response
The list of files in the report, in the following format.

{"Files": [
    {
        "Name": string, (Filename, not base64 encoded)
        "Size": int (bytes)
    },...
]}
Status code
This API has the following expected status codes.

HTTP STATUS CODE DESCRIPTION

200 OK

4XX Error codes

5XX Error codes

Available device families


Windows Desktop
HoloLens
IoT

List the Windows error reporting (WER) reports


Request
You can get the WER reports by using the following request format.

METHOD REQUEST URI

GET /api/wer/reports

URI parameters
None
Request headers
None
Request body
None
Response
The WER reports in the following format.
{"WerReports": [
{
"User": string,
"Reports": [
{
"CreationTime": int,
"Name": string, (not base64 encoded)
"Type": string ("Queue" or "Archive")
},
},...
]}

Status code
This API has the following expected status codes.

HTTP STATUS CODE DESCRIPTION

200 OK

4XX Error codes

5XX Error codes

Available device families


Windows Desktop
HoloLens
IoT

Windows Performance Recorder (WPR)


Start tracing with a custom profile
Request
You can upload a WPR profile and start tracing using that profile by using the following request format. Only one
trace can run at a time. The profile will not remain on the device.

METHOD REQUEST URI

POST /api/wpr/customtrace

URI parameters
None
Request headers
None
Request body
A multipart-conforming HTTP body that contains the custom WPR profile.
Response
The WPR session status in the following format.

{
"SessionType": string, (Running or Idle)
"State": string (normal or boot)
}

Status code
This API has the following expected status codes.

HTTP STATUS CODE DESCRIPTION

200 OK

4XX Error codes

5XX Error codes

Available device families


Windows Mobile
Windows Desktop
HoloLens
IoT
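
A sketch of uploading a profile follows. It assumes the profile travels as a multipart form body; the form field name "file" and the profile file name are illustrative, not confirmed by this reference.

using System;
using System.IO;
using System.Net.Http;
using System.Threading.Tasks;

class CustomTraceSample
{
    static async Task Main()
    {
        using (var client = new HttpClient())
        using (var content = new MultipartFormDataContent())
        {
            // Attach the custom WPR profile as the multipart body.
            content.Add(new ByteArrayContent(File.ReadAllBytes("MyProfile.wprp")),
                        "file", "MyProfile.wprp");
            var response = await client.PostAsync(
                "http://localhost:50080/api/wpr/customtrace", content);
            // Prints the session status JSON shown above.
            Console.WriteLine(await response.Content.ReadAsStringAsync());
        }
    }
}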

Start a boot performance tracing session


Request
You can start a boot WPR tracing session by using the following request format. This is also known as a
performance tracing session.

METHOD REQUEST URI

POST /api/wpr/boottrace

URI parameters
You can specify the following additional parameters on the request URI:

URI PARAMETER DESCRIPTION

profile (required) This parameter is required on start. The name of the profile that should start a performance tracing session. The possible profiles are stored in perfprofiles/profiles.json.

Request headers
None
Request body
None
Response
On start, this API returns the WPR session status in the following format.

{
"SessionType": string, (Running or Idle)
"State": string (boot)
}

Status code
This API has the following expected status codes.

HTTP STATUS CODE DESCRIPTION

200 OK

4XX Error codes

5XX Error codes

Available device families


Windows Mobile
Windows Desktop
HoloLens
IoT

Stop a boot performance tracing session


Request
You can stop a boot WPR tracing session by using the following request format. This is also known as a
performance tracing session.

METHOD REQUEST URI

GET /api/wpr/boottrace

URI parameters
None
Request headers
None
Request body
None
Response
None. Note: This is a long-running operation. It will return when the ETL is finished writing to disk.
Status code
This API has the following expected status codes.

HTTP STATUS CODE DESCRIPTION

200 OK

4XX Error codes

5XX Error codes

Available device families


Windows Mobile
Windows Desktop
HoloLens
IoT

Start a performance tracing session


Request
You can start a WPR tracing session by using the following request format. This is also known as a performance
tracing session. Only one trace can run at a time.

METHOD REQUEST URI

POST /api/wpr/trace

URI parameters
You can specify the following additional parameters on the request URI:

URI PARAMETER DESCRIPTION

profile (required) The name of the profile that should start a performance tracing session. The possible profiles are stored in perfprofiles/profiles.json.

Request headers
None
Request body
None
Response
On start, this API returns the WPR session status in the following format.

{
"SessionType": string, (Running or Idle)
"State": string (normal)
}

Status code
This API has the following expected status codes.

HTTP STATUS CODE DESCRIPTION

200 OK

4XX Error codes

5XX Error codes

Available device families


Windows Mobile
Windows Desktop
HoloLens
IoT

Stop a performance tracing session


Request
You can stop a WPR tracing session by using the following request format. This is also known as a performance
tracing session.

METHOD REQUEST URI

GET /api/wpr/trace

URI parameters
None
Request headers
None
Request body
None
Response
None. Note: This is a long-running operation. It will return when the ETL is finished writing to disk.
Status code
This API has the following expected status codes.

HTTP STATUS CODE DESCRIPTION

200 OK

4XX Error codes

5XX Error codes

Available device families


Windows Mobile
Windows Desktop
HoloLens
IoT
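
A sketch that pairs the start and stop calls above follows. The profile name is hypothetical; check perfprofiles/profiles.json on your device for valid names.

using System;
using System.Net.Http;
using System.Threading.Tasks;

class TraceSample
{
    static async Task Main()
    {
        using (var client = new HttpClient())
        {
            // Start tracing with a profile from perfprofiles/profiles.json
            // (the name below is a placeholder).
            await client.PostAsync(
                "http://localhost:50080/api/wpr/trace?profile=MyProfile", null);

            // ... exercise the scenario you want to capture ...

            // Stopping is a long-running GET; it returns once the ETL
            // has been written to disk.
            var stop = await client.GetAsync("http://localhost:50080/api/wpr/trace");
            Console.WriteLine((int)stop.StatusCode);
        }
    }
}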

Retrieve the status of a tracing session


Request
You can retrieve the status of the current WPR session by using the following request format.

METHOD REQUEST URI

GET /api/wpr/status

URI parameters
None
Request headers
None
Request body
None
Response
The status of the WPR tracing session in the following format.

{
"SessionType": string, (Running or Idle)
"State": string (normal or boot)
}

Status code
This API has the following expected status codes.
HTTP STATUS CODE DESCRIPTION

200 OK

4XX Error codes

5XX Error codes

Available device families


Windows Mobile
Windows Desktop
HoloLens
IoT

List completed tracing sessions (ETLs)


Request
You can get a listing of ETL traces on the device using the following request format.

METHOD REQUEST URI

GET /api/wpr/tracefiles

URI parameters
None
Request headers
None
Request body
None
Response
The listing of completed tracing sessions is provided in the following format.

{"Items": [{
"CurrentDir": string (filepath),
"DateCreated": int (File CreationTime),
"FileSize": int (bytes),
"Id": string (filename),
"Name": string (filename),
"SubPath": string (filepath),
"Type": int
}]}

Status code
This API has the following expected status codes.
HTTP STATUS CODE DESCRIPTION

200 OK

4XX Error codes

5XX Error codes

Available device families


Windows Mobile
Windows Desktop
HoloLens
IoT

Download a tracing session (ETL)


Request
You can download a tracefile (boot trace or user-mode trace) using the following request format.

METHOD REQUEST URI

GET /api/wpr/tracefile

URI parameters
You can specify the following additional parameter on the request URI:

URI PARAMETER DESCRIPTION

filename (required) The name of the ETL trace to download. These can
be found in /api/wpr/tracefiles

Request headers
None
Request body
None
Response
Returns the trace ETL file.
Status code
This API has the following expected status codes.

HTTP STATUS CODE DESCRIPTION

200 OK

4XX Error codes

5XX Error codes

Available device families


Windows Mobile
Windows Desktop
HoloLens
IoT

Delete a tracing session (ETL)


Request
You can delete a tracefile (boot trace or user-mode trace) using the following request format.

METHOD REQUEST URI

DELETE /api/wpr/tracefile

URI parameters
You can specify the following additional parameter on the request URI:

URI PARAMETER DESCRIPTION

filename (required) The name of the ETL trace to delete. These can be
found in /api/wpr/tracefiles

Request headers
None
Request body
None
Response
None
Status code
This API has the following expected status codes.

HTTP STATUS CODE DESCRIPTION

200 OK

4XX Error codes



5XX Error codes

Available device families


Windows Mobile
Windows Desktop
HoloLens
IoT

DNS-SD Tags
View Tags
Request
View the currently applied tags for the device. These are advertised via DNS-SD TXT records in the T key.

METHOD REQUEST URI

GET /api/dns-sd/tags

URI parameters
None
Request headers
None
Request body
None
Response
The currently applied tags in the following format.

{
"tags": [
"tag1",
"tag2",
...
]
}

Status code
This API has the following expected status codes.

HTTP STATUS CODE DESCRIPTION

200 OK

5XX Server Error

Available device families


Windows Mobile
Windows Desktop
Xbox
HoloLens
IoT

Delete Tags
Request
Delete all tags currently advertised by DNS-SD.

METHOD REQUEST URI

DELETE /api/dns-sd/tags

URI parameters
None
Request headers
None
Request body
None
Response
None
Status code
This API has the following expected status codes.

HTTP STATUS CODE DESCRIPTION

200 OK

5XX Server Error

Available device families


Windows Mobile
Windows Desktop
Xbox
HoloLens
IoT

Delete Tag
Request
Delete a tag currently advertised by DNS-SD.

METHOD REQUEST URI

DELETE /api/dns-sd/tag

URI parameters

URI PARAMETER DESCRIPTION

tagValue (required) The tag to be removed.

Request headers
None
Request body
None
Response
None
Status code
This API has the following expected status codes.

HTTP STATUS CODE DESCRIPTION

200 OK

Available device families


Windows Mobile
Windows Desktop
Xbox
HoloLens
IoT

Add a Tag
Request
Add a tag to the DNS-SD advertisement.
METHOD REQUEST URI

POST /api/dns-sd/tag

URI parameters

URI PARAMETER DESCRIPTION

tagValue (required) The tag to be added.

Request headers
None
Request body
None
Response
None
Status code
This API has the following expected status codes.

HTTP STATUS CODE DESCRIPTION

200 OK

401 Tag space Overflow. Results when the proposed tag is too
long for the resulting DNS-SD service record.

Available device families


Windows Mobile
Windows Desktop
Xbox
HoloLens
IoT
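
A short sketch that adds a tag and reads the list back follows; the tag value and address are placeholders.

using System;
using System.Net.Http;
using System.Threading.Tasks;

class DnsSdTagSample
{
    static async Task Main()
    {
        using (var client = new HttpClient())
        {
            // Add a tag, then read back the full tag list.
            await client.PostAsync(
                "http://localhost:50080/api/dns-sd/tag?tagValue=lab-device-01", null);
            string tags = await client.GetStringAsync(
                "http://localhost:50080/api/dns-sd/tags");
            Console.WriteLine(tags);
        }
    }
}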

App File Explorer


Get known folders
Request
Obtain a list of accessible top-level folders.

METHOD REQUEST URI

GET /api/filesystem/apps/knownfolders
URI parameters
None
Request headers
None
Request body
None
Response
The available folders in the following format.

{"KnownFolders": [
"folder0",
"folder1",...
]}

Status code
This API has the following expected status codes.

HTTP STATUS CODE DESCRIPTION

200 OK

4XX Error codes

5XX Error codes

Available device families


Windows Mobile
Windows Desktop
HoloLens
Xbox
IoT

Get files
Request
Obtain a list of files in a folder.

METHOD REQUEST URI

GET /api/filesystem/apps/files

URI parameters
URI PARAMETER DESCRIPTION

knownfolderid (required) The top-level directory where you want the list of files. Use LocalAppData for access to sideloaded apps.

packagefullname (required if knownfolderid == LocalAppData) The package full name of the app you are interested in.

path (optional) The sub-directory within the folder or package specified above.

Request headers
None
Request body
None
Response
The available files and folders in the following format.

{"Items": [
{
"CurrentDir": string (folder under the requested known folder),
"DateCreated": int,
"FileSize": int (bytes),
"Id": string,
"Name": string,
"SubPath": string (present if this item is a folder, this is the name of the folder),
"Type": int
},...
]}

Status code
This API has the following expected status codes.

HTTP STATUS CODE DESCRIPTION

200 OK

4XX Error codes

5XX Error codes

Available device families


Windows Mobile
Windows Desktop
HoloLens
Xbox
IoT

Download a file
Request
Obtain a file from a known folder or appLocalData.

METHOD REQUEST URI

GET /api/filesystem/apps/file

URI parameters

URI PARAMETER DESCRIPTION

knownfolderid (required) The top-level directory where you want to download files. Use LocalAppData for access to sideloaded apps.

filename (required) The name of the file being downloaded.

packagefullname (required if knownfolderid == LocalAppData) The package full name you are interested in.

path (optional) The sub-directory within the folder or package specified above.

Request headers
None
Request body
None
Response
The requested file, if present.
Status code
This API has the following expected status codes.

HTTP STATUS CODE DESCRIPTION

200 The requested file

404 File not found

5XX Error codes

Available device families


Windows Mobile
Windows Desktop
HoloLens
Xbox
IoT
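
A sketch of the download request follows. The package full name and LocalState path are hypothetical; list the known folders and files first with the requests above to find real values.

using System;
using System.IO;
using System.Net.Http;
using System.Threading.Tasks;

class AppFileDownloadSample
{
    static async Task Main()
    {
        string url = "http://localhost:50080/api/filesystem/apps/file" +
                     "?knownfolderid=LocalAppData" +
                     "&packagefullname=" + Uri.EscapeDataString(
                         "Contoso.SampleApp_1.0.0.0_x64__8wekyb3d8bbwe") +
                     "&filename=settings.dat" +           // hypothetical file
                     "&path=" + Uri.EscapeDataString("\\LocalState");
        using (var client = new HttpClient())
        {
            byte[] bytes = await client.GetByteArrayAsync(url);
            File.WriteAllBytes("settings.dat", bytes);
        }
    }
}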

Rename a file
Request
Rename a file in a folder.

METHOD REQUEST URI

POST /api/filesystem/apps/rename

URI parameters

URI PARAMETER DESCRIPTION

knownfolderid (required) The top-level directory where the file is located. Use LocalAppData for access to sideloaded apps.

filename (required) The original name of the file being renamed.

newfilename (required) The new name of the file.

packagefullname (required if knownfolderid == LocalAppData) The package full name of the app you are interested in.

path (optional) The sub-directory within the folder or package specified above.

Request headers
None
Request body
None
Response
None
Status code
This API has the following expected status codes.

HTTP STATUS CODE DESCRIPTION

200 OK. The file is renamed

404 File not found

5XX Error codes

Available device families


Windows Mobile
Windows Desktop
HoloLens
Xbox
IoT
Delete a file
Request
Delete a file in a folder.

METHOD REQUEST URI

DELETE /api/filesystem/apps/file

URI parameters

URI PARAMETER DESCRIPTION

knownfolderid (required) The top-level directory where you want to delete files. Use LocalAppData for access to sideloaded apps.

filename (required) The name of the file being deleted.

packagefullname (required if knownfolderid == LocalAppData) The package full name of the app you are interested in.

path (optional) The sub-directory within the folder or package specified above.

Request headers
None
Request body
None
Response
None
Status code
This API has the following expected status codes.

HTTP STATUS CODE DESCRIPTION

200 OK. The file is deleted

404 File not found

5XX Error codes

Available device families


Windows Mobile
Windows Desktop
HoloLens
Xbox
IoT
Upload a file
Request
Upload a file to a folder. This will overwrite an existing file with the same name, but will not create new folders.

METHOD REQUEST URI

POST /api/filesystem/apps/file

URI parameters

URI PARAMETER DESCRIPTION

knownfolderid (required) The top-level directory where you want to upload files. Use LocalAppData for access to sideloaded apps.

packagefullname (required if knownfolderid == LocalAppData) The package full name of the app you are interested in.

path (optional) The sub-directory within the folder or package specified above.

Request headers
None
Request body
None
Response
Status code
This API has the following expected status codes.

HTTP STATUS CODE DESCRIPTION

200 OK. The file is uploaded

4XX Error codes

5XX Error codes

Available device families


Windows Mobile
Windows Desktop
HoloLens
Xbox
IoT
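
A hedged sketch of the upload follows. It assumes the file travels as a multipart form body, mirroring the pattern used for the WPR custom trace endpoint earlier; the form field name, file name, and package full name are illustrative.

using System;
using System.IO;
using System.Net.Http;
using System.Threading.Tasks;

class AppFileUploadSample
{
    static async Task Main()
    {
        string url = "http://localhost:50080/api/filesystem/apps/file" +
                     "?knownfolderid=LocalAppData" +
                     "&packagefullname=" + Uri.EscapeDataString(
                         "Contoso.SampleApp_1.0.0.0_x64__8wekyb3d8bbwe");
        using (var client = new HttpClient())
        using (var content = new MultipartFormDataContent())
        {
            // Attach the file; an existing file with the same name
            // is overwritten, per the note above.
            content.Add(new ByteArrayContent(File.ReadAllBytes("settings.dat")),
                        "file", "settings.dat");
            var response = await client.PostAsync(url, content);
            Console.WriteLine((int)response.StatusCode);
        }
    }
}
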
Windows App Certification Kit

[ Updated for UWP apps on Windows 10. For Windows 8.x articles, see the archive ]
To give your app the best chance of being published on the Windows Store, or becoming Windows Certified,
validate and test it locally before you submit it for certification. This topic shows you how to install and run the
Windows App Certification Kit.

Prerequisites
Prerequisites for testing a Universal Windows app:
You must install and run Windows 10.
You must install Windows App Certification Kit version 10, which is included in the Windows Software
Development Kit (SDK) for Windows 10.
You must have a valid developer license for your computer. See Get a developer license to learn how.
You must deploy the Windows app that you want to test to your computer.
A note about in-place upgrades
The installation of a more recent Windows App Certification Kit will replace any previous version of the kit that is
installed on the machine.

Validate your Windows app using the Windows App Certification Kit
interactively
1. From the Start menu, search Apps, find Windows Kits, and click Windows App Cert Kit.
2. From the Windows App Certification Kit, select the category of validation you would like to perform. For
example: If you are validating a Windows app, select Validate a Windows app.
You may browse directly to the app you're testing, or choose the app from a list in the UI. When the
Windows App Certification Kit is run for the first time, the UI lists all the Windows apps that you have
installed on your computer. For any subsequent runs, the UI will display the most recent Windows apps that
you have validated. If the app that you want to test is not listed, you can click on My app isn't listed to get
a comprehensive list of all apps installed on your system.
3. After you have input or selected the app that you want to test, click Next.
4. From the next screen, you will see the test workflow that aligns to the app type you are testing. If a test is
grayed out in the list, the test is not applicable to your environment. For example, if you are testing a
Windows 10 app on Windows 7, only static tests will apply to the workflow. Note that the Windows Store
may apply all tests from this workflow. Select the tests you want to run and click Next.
The Windows App Certification Kit begins validating the app.
5. At the prompt after the test, enter the path to the folder where you want to save the test report.
The Windows App Certification Kit creates an HTML along with an XML report and saves it in this folder.
6. Open the report file and review the results of the test.
Note If you're using Visual Studio, you can run the Windows App Certification Kit when you create your app
package. See Packaging UWP apps to learn how.

Validate your Windows app using the Windows App Certification Kit
from a command line
Important The Windows App Certification Kit must be run within the context of an active user session.
1. In the command window, navigate to the directory that contains the Windows App Certification Kit.
Note The default path is C:\Program Files\Windows Kits\10\App Certification Kit\.
2. Enter the following commands in this order to test an app that is already installed on your test computer:
appcert.exe reset

appcert.exe test -packagefullname [package full name] -reportoutputpath [report file name]

Or you can use the following commands if the app is not installed. The Windows App Certification Kit will
open the package and apply the appropriate test workflow:
appcert.exe reset

appcert.exe test -appxpackagepath [package path] -reportoutputpath [report file name]

3. After the test completes, open the report file named [report file name] and review the test results.
Note The Windows App Certification Kit can be run from a service, but the service must initiate the kit process
within an active user session and cannot be run in Session0.
Note For more info about the Windows App Certification Kit command line, enter the command appcert.exe /?

Testing with a low-power computer


The performance test thresholds of the Windows App Certification Kit are based on the performance of a low-power computer.
The characteristics of the computer on which the test is performed can influence the test results. To determine if
your app's performance meets the Windows Store Policies, we recommend that you test your app on a low-power
computer, such as an Intel Atom processor-based computer with a screen resolution of 1366x768 (or higher) and a
rotational hard drive (as opposed to a solid-state hard drive).
As low-power computers evolve, their performance characteristics might change over time. Refer to the most
current Windows Store Policies and test your app with the most current version of the Windows App Certification
Kit to make sure that your app complies with the latest performance requirements.

Related topics
Windows App Certification Kit tests
Windows Store Policies
Windows App Certification Kit tests

[ Updated for UWP apps on Windows 10. For Windows 8.x articles, see the archive ]
The Windows App Certification Kit contains a number of tests that can help ensure that your app is ready to be
published on the Windows Store.

Deployment and launch tests


Monitors the app during certification testing to record when it crashes or hangs.
Background
Apps that stop responding or crash can cause the user to lose data and have a poor experience.
We expect apps to be fully functional without the use of Windows compatibility modes, AppHelp messages, or
compatibility fixes.
Apps must not list DLLs to load in the HKEY_LOCAL_MACHINE\Software\Microsoft\Windows
NT\CurrentVersion\Windows\AppInit_DLLs registry key.
Test details
We test the app resilience and stability throughout the certification testing.
The Windows App Certification Kit calls IApplicationActivationManager::ActivateApplication to launch apps.
For ActivateApplication to launch an app, User Account Control (UAC) must be enabled and the screen resolution
must be at least 1024 x 768 or 768 x 1024. If either condition is not met, your app will fail this test.
Corrective actions
Make sure UAC is enabled on the test computer.
Make sure you are running the test on a computer with large enough screen.
If your app fails to launch and your test platform satisfies the prerequisites of ActivateApplication, you can
troubleshoot the problem by reviewing the activation event log. To find these entries in the event log:
1. Open eventvwr.exe and navigate to the Applications and Services Logs\Microsoft\Windows\Immersive-Shell
folder.
2. Filter the view to show Event Ids: 5900-6000.
3. Review the log entries for info that might explain why the app didn't launch.
Troubleshoot the file with the problem, identify and fix the problem. Rebuild and re-test the app. You can also check
if a dump file was generated in the Windows App Certification Kit log folder that can be used to debug your app.

Platform Version Launch test


Checks that the Windows app can run on a future version of the OS. This test has historically been only applied to
the Desktop app workflow, but this is now enabled for the Store and Universal Windows Platform (UWP)
workflows.
Background
Operating system version info has restricted usage for the Windows Store. This has often been incorrectly used by
apps to check OS version so that the app can provide users with functionality that is specific to an OS version.
Test details
The Windows App Certification Kit uses the HighVersionLie to detect how the app checks the OS version. If the app
crashes, it will fail this test.
Corrective action
Apps should use Version API helper functions to check this. See Operating System Version for more information.
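
For example, one common UWP approach is feature detection with ApiInformation rather than parsing OS version strings; the contract version below is illustrative, a minimal sketch rather than a prescribed check.

using Windows.Foundation.Metadata;

class VersionAdaptiveSample
{
    static void UseNewApiIfAvailable()
    {
        // Feature-detect instead of checking the OS version directly.
        if (ApiInformation.IsApiContractPresent(
                "Windows.Foundation.UniversalApiContract", 3))
        {
            // Call APIs introduced in the version-3 contract here.
        }
    }
}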

Background tasks cancellation handler validation


This verifies that the app has a cancellation handler for declared background tasks. There needs to be a dedicated
function that will be called when the task is cancelled. This test is applied only for deployed apps.
Background
Store apps can register a process that runs in the background. For example, an email app may ping a server from
time to time. However, if the OS needs these resources, it will cancel the background task, and apps should
gracefully handle this cancellation. Apps that don't have a cancellation handler may crash or not close when the
user tries to close the app.
Test details
The app is launched, suspended and the non-background portion of the app is terminated. Then the background
tasks associated with this app are cancelled. The state of the app is checked, and if the app is still running then it will
fail this test.
Corrective action
Add the cancellation handler to your app. For more information see Support your app with background tasks.
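
As a minimal C# sketch of the pattern this test looks for (class and field names are illustrative), register a handler for the Canceled event and stop work promptly when it fires:

using Windows.ApplicationModel.Background;

public sealed class SampleBackgroundTask : IBackgroundTask
{
    private volatile bool _canceled;

    public void Run(IBackgroundTaskInstance taskInstance)
    {
        // Register the cancellation handler before doing any work.
        taskInstance.Canceled += OnCanceled;
        // ... do background work, checking _canceled periodically ...
    }

    private void OnCanceled(IBackgroundTaskInstance sender,
                            BackgroundTaskCancellationReason reason)
    {
        // Stop work promptly so the task isn't reported as still running.
        _canceled = true;
    }
}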

App count
This verifies that an app package (APPX, app bundle) contains one application. This was changed in the kit to be a
standalone test.
Background
This test was implemented as per Store policy.
Test details
For Windows Phone 8.1 apps the test verifies the total number of appx packages in the bundle is < 512, there is
only one main package in the bundle, and that the architecture of the main package in the bundle is marked as
ARM or neutral.
For Windows 10 apps the test verifies that the revision number in the version of the bundle is set to 0.
Corrective action
Ensure the app package and bundle meet requirements above in Test details.

App manifest compliance test


Test the contents of app manifest to make sure its contents are correct.
Background
Apps must have a correctly formatted app manifest.
Test details
Examines the app manifest to verify the contents are correct as described in the App package requirements.
File extensions and protocols
Your app can declare the file extensions that it wants to associate with. Used improperly, an app can declare
a large number of file extensions, most of which it may not even use, resulting in a bad user experience. This
test will add a check to limit the number of file extensions that an app can associate with.
Framework Dependency rule
This test enforces the requirement that apps take appropriate dependencies on the UWP. If there is an
inappropriate dependency, this test will fail.
If there is a mismatch between the OS version the app applies to and the framework dependencies made,
the test will fail. The test would also fail if the app refers to any preview versions of the framework dlls.
Inter-process Communication (IPC) verification
This test enforces the requirement that Windows Store apps do not communicate outside of the app
container to Desktop components. Inter-process communication is intended for side-loaded apps only. Apps
that specify the ActivatableClassAttribute with name equal to "DesktopApplicationPath" will fail this test.
Corrective action
Review the app's manifest against the requirements described in the App package requirements.

Windows Security features test


Background
Changing the default Windows security protections can put customers at increased risk.
Test details
Tests the app's security by running the BinScope Binary Analyzer.
The BinScope Binary Analyzer tests examine the app's binary files to check for coding and building practices that
make the app less vulnerable to attack or to being used as an attack vector.
The BinScope Binary Analyzer tests check for the correct use of the following security-related features.
BinScope Binary Analyzer tests
Private Code Signing
BinScope Binary Analyzer tests
The BinScope Binary Analyzer tests examine the app's binary files to check for coding and building practices that
make the app less vulnerable to attack or to being used as an attack vector.
The BinScope Binary Analyzer tests check for the correct use of these security-related features:
AllowPartiallyTrustedCallersAttribute
/SafeSEH Exception Handling Protection
Data Execution Prevention
Address Space Layout Randomization
Read/Write Shared PE Section
AppContainerCheck
ExecutableImportsCheck
WXCheck
AllowPartiallyTrustedCallersAttribute
Windows App Certification Kit error message: APTCACheck Test failed
The AllowPartiallyTrustedCallersAttribute (APTCA) attribute enables access to fully trusted code from partially
trusted code in signed assemblies. When you apply the APTCA attribute to an assembly, partially trusted callers can
access that assembly for the life of the assembly, which can compromise security.
What to do if your app fails this test
Don't use the APTCA attribute on strong named assemblies unless your project requires it and the risks are well
understood. In cases where it's required, make sure that all APIs are protected with appropriate code access
security demands. APTCA has no effect when the assembly is a part of a Universal Windows Platform (UWP) app.
Remarks
This test is performed only on managed code (C#, .NET, etc.).
/SafeSEH Exception Handling Protection
Windows App Certification Kit error message: SafeSEHCheck Test failed
An exception handler runs when the app encounters an exceptional condition, such as a divide-by-zero error.
Because the address of the exception handler is stored on the stack when a function is called, it could be vulnerable
to a buffer overflow attacker if some malicious software were to overwrite the stack.
What to do if your app fails this test
Enable the /SAFESEH option in the linker command when you build your app. This option is on by default in the
Release configurations of Visual Studio. Verify this option is enabled in the build instructions for all executable
modules in your app.
Remarks
The test is not performed on 64-bit binaries or ARM chipset binaries because they don't store exception handler
addresses on the stack.
Data Execution Prevention
Windows App Certification Kit error message: NXCheck Test failed
This test verifies that an app doesn't run code that is stored in a data segment.
What to do if your app fails this test
Enable the /NXCOMPAT option in the linker command when you build your app. This option is on by default in
linker versions that support Data Execution Prevention (DEP).
Remarks
We recommend that you test your apps on a DEP-capable CPU and fix any failures you find that result from DEP.
Address Space Layout Randomization
Windows App Certification Kit error message: DBCheck Test failed
Address Space Layout Randomization (ASLR) loads executable images into unpredictable locations in memory,
which makes it harder for malicious software that expects a program to be loaded at a certain virtual address to
operate predictably. Your app and all components that your app uses must support ASLR.
What to do if your app fails this test
Enable the /DYNAMICBASE option in the linker command when you build your app. Verify that all modules that
your app uses also use this linker option.
Remarks
Normally, ASLR doesn't affect performance. But in some scenarios there is a slight performance improvement on
32-bit systems. It is possible that performance could degrade in a highly congested system that have many images
loaded in many different memory locations.
This test is performed only on apps written in managed code, such as those using C# or the .NET Framework.
Read/Write Shared PE Section
Windows App Certification Kit error message: SharedSectionsCheck Test failed.
Binary files with writable sections that are marked as shared are a security threat. Don't build apps with shared
writable sections unless necessary. Use CreateFileMapping or MapViewOfFile to create a properly secured
shared memory object.
What to do if your app fails this test
Remove any shared sections from the app and create shared memory objects by calling CreateFileMapping or
MapViewOfFile with the proper security attributes and then rebuild your app.
Remarks
This test is performed only on apps written in unmanaged languages, such as by using C or C++.
AppContainerCheck
Windows App Certification Kit error message: AppContainerCheck Test failed.
The AppContainerCheck verifies that the appcontainer bit in the portable executable (PE) header of an executable
binary is set. Apps must have the appcontainer bit set on all .exe files and all unmanaged DLLs to execute
properly.
What to do if your app fails this test
If a native executable file fails the test, make sure that you used the latest compiler and linker to build the file and
that you use the /appcontainer flag on the linker.
If a managed executable fails the test, make sure that you used the latest compiler and linker, such as Microsoft
Visual Studio, to build the Windows Store app.
Remarks
This test is performed on all .exe files and on unmanaged DLLs.
ExecutableImportsCheck
Windows App Certification Kit error message: ExecutableImportsCheck Test failed.
A portable executable (PE) image fails this test if its import table has been placed in an executable code section. This
can occur if you enabled .rdata merging for the PE image by setting the /merge flag of the Visual C++ linker as
/merge:.rdata=.text.
What to do if your app fails this test
Don't merge the import table into an executable code section. Make sure that the /merge flag of the Visual C++
linker is not set to merge the ".rdata" section into a code section.
Remarks
This test is performed on all binary code except purely managed assemblies.
WXCheck
Windows App Certification Kit error message: WXCheck Test failed.
The check helps to ensure that a binary does not have any pages that are mapped as writable and executable. This
can occur if the binary has a writable and executable section or if the binary's SectionAlignment is less than
PAGE_SIZE.
What to do if your app fails this test
Make sure that the binary does not have a writable or executable section and that the binary's SectionAlignment
value is at least equal to its PAGE_SIZE.
Remarks
This test is performed on all .exe files and on native, unmanaged DLLs.
An executable may have a writable and executable section if it has been built with Edit and Continue enabled (/ZI).
Disabling Edit and Continue will cause the invalid section to not be present.
PAGE_SIZE is the default SectionAlignment for executables.
Private Code Signing
Tests for the existence of private code signing binaries within the app package.
Background
Private code signing files should be kept private as they may be used for malicious purposes in the event they are
compromised.
Test details
Tests for files within the app package that have an extension of .pfx or .snk that would indicate that private signing
keys were included.
Corrective actions
Remove any private code signing keys (e.g. .pfx and .snk files) from the package.

Supported API test


Test the app for the use of any non-compliant APIs.
Background
Apps must use the APIs for Windows Store apps (Windows Runtime or supported Win32 APIs) to be certified for
the Windows Store. This test also identifies situations where a managed binary takes a dependency on a function
outside of the approved profile.
Test details
Verifies that each binary within the app package doesn't have a dependency on a Win32 API that is not
supported for Windows Store app development by checking the import address table of the binary.
Verifies that each managed binary within the app package doesn't have a dependency on a function outside of
the approved profile.
Corrective actions
Make sure that the app was compiled as a release build and not a debug build.

Note The debug build of an app will fail this test even if the app uses only APIs for Windows Store apps.

Review the error messages to identify the API the app uses that is not an API for Windows Store apps.

Note C++ apps that are built in a debug configuration will fail this test even if the configuration only uses APIs
from the Windows SDK for Windows Store apps. See, Alternatives to Windows APIs in Windows Store apps for
more info.

Performance tests
The app must respond quickly to user interaction and system commands in order to present a fast and fluid user
experience.
The characteristics of the computer on which the test is performed can influence the test results. The performance
test thresholds for app certification are set such that low-power computers meet the customer's expectation of a
fast and fluid experience. To determine your app's performance, we recommend that you test on a low-power
computer, such as an Intel Atom processor-based computer with a screen resolution of 1366x768 (or higher) and a
rotational hard drive (as opposed to a solid-state hard drive).
Bytecode generation
As a performance optimization to accelerate JavaScript execution time, JavaScript files ending in the .js extension
generate bytecode when the app is deployed. This significantly improves startup and ongoing execution times for
JavaScript operations.
Test Details
Checks the app deployment to verify that all .js files have been converted to bytecode.
Corrective Action
If this test fails, consider the following when addressing the issue:
Verify that event logging is enabled.
Verify that all JavaScript files are syntactically valid.
Confirm that all previous versions of the app are uninstalled.
Exclude identified files from the app package.
Optimized binding references
When using bindings, WinJS.Binding.optimizeBindingReferences should be set to true in order to optimize memory
usage.
Test Details
Verify the value of WinJS.Binding.optimizeBindingReferences.
Corrective Action
Set WinJS.Binding.optimizeBindingReferences to true in the app JavaScript.

App manifest resources test


App resources validation
The app might not install if the strings or images declared in your app's manifest are incorrect. If the app does
install with these errors, your app's logo or other images used by your app might not display correctly.
Test Details
Inspects the resources defined in the app manifest to make sure they are present and valid.
Corrective Action
Use the following table as guidance.

Error message: The image {image name} defines both Scale and TargetSize qualifiers; you can define only one qualifier at a time.
Comments: You can customize images for different resolutions. In the actual message, {image name} contains the name of the image with the error. Make sure that each image defines either Scale or TargetSize as the qualifier.

Error message: The image {image name} failed the size restrictions.
Comments: Ensure that all the app images adhere to the proper size restrictions. In the actual message, {image name} contains the name of the image with the error.

Error message: The image {image name} is missing from the package.
Comments: A required image is missing. In the actual message, {image name} contains the name of the image that is missing.

Error message: The image {image name} is not a valid image file.
Comments: Ensure that all the app images adhere to the proper file format type restrictions. In the actual message, {image name} contains the name of the image that is not valid.

Error message: The image "BadgeLogo" has an ABGR value {value} at position (x, y) that is not valid. The pixel must be white (##FFFFFF) or transparent (00######).
Comments: The badge logo is an image that appears next to the badge notification to identify the app on the lock screen. This image must be monochromatic (it can contain only white and transparent pixels). In the actual message, {value} contains the color value in the image that is not valid.

Error message: The image "BadgeLogo" has an ABGR value {value} at position (x, y) that is not valid for a high-contrast white image. The pixel must be (##2A2A2A) or darker, or transparent (00######).
Comments: The badge logo is an image that appears next to the badge notification to identify the app on the lock screen. Because the badge logo appears on a white background when in high-contrast white, it must be a dark version of the normal badge logo. In high-contrast white, the badge logo can only contain pixels that are darker than (##2A2A2A) or transparent. In the actual message, {value} contains the color value in the image that is not valid.

Error message: The image must define at least one variant without a TargetSize qualifier. It must define a Scale qualifier or leave Scale and TargetSize unspecified, which defaults to Scale-100.
Comments: For more info, see Responsive design 101 for UWP apps and Guidelines for app resources.

Error message: The package is missing a "resources.pri" file.
Comments: If you have localizable content in your app manifest, make sure that your app's package includes a valid resources.pri file.

Error message: The "resources.pri" file must contain a resource map with a name that matches the package name {package full name}.
Comments: You can get this error if the manifest changed and the name of the resource map in resources.pri no longer matches the package name in the manifest. In the actual message, {package full name} contains the package name that resources.pri must contain. To fix this, you need to rebuild resources.pri, and the easiest way to do that is by rebuilding the app's package.

Error message: The "resources.pri" file must not have AutoMerge enabled.
Comments: MakePRI.exe supports an option called AutoMerge. The default value of AutoMerge is off. When enabled, AutoMerge merges an app's language pack resources into a single resources.pri at runtime. We don't recommend this for apps that you intend to distribute through the Windows Store. The resources.pri of an app that is distributed through the Windows Store must be in the root of the app's package and contain all the language references that the app supports.

Error message: The string {string} failed the max length restriction of {number} characters.
Comments: Refer to the App package requirements. In the actual message, {string} is replaced by the string with the error and {number} contains the maximum length.

Error message: The string {string} must not have leading/trailing whitespace.
Comments: The schema for the elements in the app manifest doesn't allow leading or trailing white space characters. In the actual message, {string} is replaced by the string with the error. Make sure that none of the localized values of the manifest fields in resources.pri have leading or trailing white space characters.

Error message: The string must be non-empty (greater than zero in length).
Comments: For more info, see App package requirements.

Error message: There is no default resource specified in the "resources.pri" file.
Comments: For more info, see Guidelines for app resources. In the default build configuration, Visual Studio only includes scale-200 image resources in the app package when generating bundles, putting other resources in the resource package. Make sure you either include scale-200 image resources or configure your project to include the resources you have.

Error message: There is no resource value specified in the "resources.pri" file.
Comments: Make sure that the app manifest has valid resources defined in resources.pri.

Error message: The image file {filename} must be smaller than 204800 bytes.**
Comments: Reduce the size of the indicated images.

Error message: The {filename} file must not contain a reverse map section.**
Comments: While the reverse map is generated during Visual Studio 'F5 debugging' when calling into makepri.exe, it can be removed by running makepri.exe without the /m parameter when generating a pri file.

** Indicates that a test was added in the Windows App Certification Kit 3.3 for Windows 8.1 and is only applicable when using that version of the kit or later.

Branding validation
Windows Store apps are expected to be complete and fully functional. Apps using the default images (from
templates or SDK samples) present a poor user experience and cannot be easily identified in the store catalog.
Test Details
The test will validate if the images used by the app are not default images either from SDK samples or from Visual
Studio.
Corrective actions
Replace default images with something more distinct and representative of your app.

Debug configuration test


Test the app to make sure it is not a debug build.
Background
To be certified for the Windows Store, apps must not be compiled for debug and they must not reference debug
versions of an executable file. In addition, you must build your code as optimized for your app to pass this test.
Test details
Test the app to make sure it is not a debug build and is not linked to any debug frameworks.
Corrective actions
Build the app as a release build before you submit it to the Windows Store.
Make sure that you have the correct version of .NET framework installed.
Make sure the app isn't linking to debug versions of a framework and that it is building with a release version. If
this app contains .NET components, make sure that you have installed the correct version of the .NET
framework.

File encoding test


UTF-8 file encoding
Background
HTML, CSS, and JavaScript files must be encoded in UTF-8 form with a corresponding byte-order mark (BOM) to
benefit from bytecode caching and avoid certain runtime error conditions.
Test details
Test the contents of app packages to make sure that they use the correct file encoding.
Corrective Action
Open the affected file and select Save As from the File menu in Visual Studio. Select the drop-down control next to
the Save button and select Save with Encoding. From the Advanced save options dialog, choose the Unicode
(UTF-8 with signature) option and click OK.

Direct3D feature level test


Direct3D feature level support
Tests Microsoft Direct3D apps to ensure that they won't crash on devices with older graphics hardware.
Background
Windows Store requires all applications using Direct3D to render properly or fail gracefully on feature level 9_1
graphics cards.
Because users can change the graphics hardware in their device after the app is installed, if you choose a minimum
feature level higher than 9_1, your app must detect at launch whether or not the current hardware meets the
minimum requirements. If the minimum requirements are not met, the app must display a message to the user
detailing the Direct3D requirements. Also, if an app is downloaded on a device with which it is not compatible, it
should detect that at launch and display a message to the customer detailing the requirements.
Test Details
The test validates that the app renders correctly on feature level 9_1.
Corrective Action
Ensure that your app renders correctly on Direct3D feature level 9_1, even if you expect it to run at a higher feature
level. See Developing for different Direct3D feature levels for more info.
Direct3D Trim after suspend

Note This test only applies to Windows Store apps developed for Windows 8.1 and later.

Background
If the app does not call Trim on its Direct3D device, the app will not release memory allocated for its earlier 3D
work. This increases the risk of apps being terminated due to system memory pressure.
Test Details
Checks apps for compliance with Direct3D requirements and ensures that apps call the Trim API in their Suspend
callback.
Corrective Action
The app should call the Trim API on its IDXGIDevice3 interface anytime it is about to be suspended.

App Capabilities test


Special use capabilities
Background
Special use capabilities are intended for very specific scenarios. Only company accounts are allowed to use these
capabilities.
Test Details
Validate if the app is declaring any of the below capabilities:
EnterpriseAuthentication
SharedUserCertificates
DocumentsLibrary
If any of these capabilities are declared, the test will display a warning to the user.
Corrective Actions
Consider removing the special use capability if your app doesn't require it. Additionally, use of these capabilities is
subject to additional onboarding policy review.

Windows Runtime metadata validation


Background
Ensures that the components that ship in an app conform to the UWP type system.
Test Details
Verifies that the .winmd files in the package conform to UWP rules.
Corrective Actions
ExclusiveTo attribute test: Ensure that UWP classes don't implement interfaces that are marked as
ExclusiveTo another class.
Type location test: Ensure that the metadata for all UWP types is located in the winmd file that has the longest
namespace-matching name in the app package.
Type name case-sensitivity test: Ensure that all UWP types have unique, case-insensitive names within your
app package. Also ensure that no UWP type name is also used as a namespace name within your app package.
Type name correctness test: Ensure there are no UWP types in the global namespace or in the Windows top-
level namespace.
General metadata correctness test: Ensure that the compiler you are using to generate your types is up to
date with the UWP specifications.
Properties test: Ensure that all properties on a UWP class have a get method (set methods are optional). Ensure
that the type of the get method return value matches the type of the set method input parameter, for all
properties on UWP types.

Package Sanity tests


Platform appropriate files test
Apps that install mixed binaries may crash or not run correctly depending upon the user's processor architecture.
Background
This test validates the binaries in an app package for architecture conflicts. An app package should not include
binaries that can't be used on the processor architecture specified in the manifest. Including unsupported binaries
can lead to your app crashing or an unnecessary increase in the app package size.
Test Details
Validates that each file's "bitness" in the PE header is appropriate when cross-referenced with the app package
processor architecture declaration
Corrective Action
Follow these guidelines to ensure that your app package only contains files supported by the architecture specified
in the app manifest:
If the Target Processor Architecture for your app is the neutral processor type, the app package cannot contain
x86, x64, or ARM binary or image type files.
If the Target Processor Architecture for your app is x86 processor type, the app package must only contain
x86 binary or image type files. If the package contains x64 or ARM binary or image types, it will fail the test.
If the Target Processor Architecture for your app is x64 processor type, the app package must contain x64
binary or image type files. Note that in this case the package can also include x86 files, but the primary app
experience should utilize the x64 binary.
However, if the package contains ARM binary or image type files, or only contains x86 binaries or image
type files, it will fail the test.
If the Target Processor Architecture for your app is ARM processor type, the app package must only contain
ARM binary or image type files. If the package contains x64 or x86 binary or image type files, it will fail the
test.
Supported Directory Structure test
Validates that applications are not creating subdirectories as part of installation that are longer than MAX_PATH.
Background
OS components (including Trident, WWAHost, etc.) are internally limited to MAX_PATH for file system paths and
will not work correctly for longer paths.
Test Details
Verifies that no path within the app install directory exceeds MAX_PATH.
Corrective Action
Use a shorter directory structure and/or file names.

Resource Usage test


WinJS Background Task test
The WinJS background task test ensures that JavaScript apps have the proper close statements, so that apps don't
needlessly consume battery power.
Background
Apps that have JavaScript background tasks need to call Close() as the last statement in their background task.
Apps that do not do this could keep the system from returning to connected standby mode and result in draining
the battery.
Test Details
If the app does not have a background task file specified in the manifest, the test will pass. Otherwise the test will
parse the JavaScript background task file that is specified in the app package, and look for a Close() statement. If
found, the test will pass; otherwise the test will fail.
Corrective Action
Update the background JavaScript code to call Close() correctly.

Note This article is for Windows 10 developers writing UWP apps. If you're developing for Windows 8.x or
Windows Phone 8.x, see the archived documentation.
Performance

[ Updated for UWP apps on Windows 10. For Windows 8.x articles, see the archive ]
Users expect their apps to remain responsive, to feel natural, and not to drain their battery. Technically,
performance is a non-functional requirement but treating performance as a feature will help you deliver on your
users' expectations. Specifying goals, and measuring, are key factors. Determine what your performance-critical
scenarios are; define what good performance means. Then measure early and often enough throughout the lifecycle
of your project to be confident you'll hit your goals. This section shows you how to organize your performance
workflow, fix animation glitches and frame rate problems, and tune your startup time, page navigation time, and
memory usage.
If you haven't done so already, a step that we've seen result in significant performance improvements is just
porting your app to target Windows 10. Several XAML optimizations (for example, {x:Bind}) are only available in
Windows 10 apps. See Porting apps to Windows 10 and the //build/ session Moving to the Universal Windows
Platform.

TOPIC | DESCRIPTION

Planning for performance | Users expect their apps to remain responsive, to feel natural, and not to drain their battery. Technically, performance is a non-functional requirement but treating performance as a feature will help you deliver on your users' expectations. Specifying goals, and measuring, are key factors. Determine what your performance-critical scenarios are; define what good performance means. Then measure early and often enough throughout the lifecycle of your project to be confident you'll hit your goals.

Optimize background activity | Create UWP apps that work with the system to use background tasks in a battery-efficient way.

ListView and GridView UI optimization | Improve GridView performance and startup time through UI virtualization, element reduction, and progressive updating of items.

ListView and GridView data virtualization | Improve GridView performance and startup time through data virtualization.

Improve garbage collection performance | Universal Windows Platform (UWP) apps written in C# and Visual Basic get automatic memory management from the .NET garbage collector. This section summarizes the behavior and performance best practices for the .NET garbage collector in UWP apps.

Keep the UI thread responsive | Users expect an app to remain responsive while it does computation, regardless of the type of machine. This means different things for different apps. For some, this translates to providing more realistic physics, loading data from disk or the web faster, quickly presenting complex scenes and navigating between pages, finding directions in a snap, or rapidly processing data. Regardless of the type of computation, users want their app to act on their input and eliminate instances where it appears unresponsive while it "thinks".

Optimize your XAML markup | Parsing XAML markup to construct objects in memory is time-consuming for a complex UI. Here are some things you can do to improve XAML markup parse and load time and memory efficiency for your app.

Optimize your XAML layout | Layout can be an expensive part of a XAML app, both in CPU usage and memory overhead. Here are some simple steps you can take to improve the layout performance of your XAML app.

MVVM and language performance tips | This topic discusses some performance considerations related to your choice of software design patterns and programming language.

Best practices for your app's startup performance | Create UWP apps with optimal startup times by improving the way you handle launch and activation.

Optimize animations, media, and images | Create Universal Windows Platform (UWP) apps with smooth animations, high frame rate, and high-performance media capture and playback.

Optimize suspend/resume | Create UWP apps that streamline their use of the process lifetime system to resume efficiently after suspension or termination.

Optimize file access | Create UWP apps that access the file system efficiently, avoiding performance issues due to disk latency and memory/CPU cycles.

Windows Runtime Components and optimizing interop | Create UWP apps that use UWP Components and interop between native and managed types while avoiding interop performance issues.

Tools for profiling and performance | Microsoft provides several tools to help you improve the performance of your UWP app.
Planning for performance

[ Updated for UWP apps on Windows 10. For Windows 8.x articles, see the archive ]
Users expect their apps to remain responsive, to feel natural, and not to drain their battery. Technically,
performance is a non-functional requirement but treating performance as a feature will help you deliver on your
users' expectations. Specifying goals, and measuring, are key factors. Determine what your performance-critical
scenarios are; define what good performance means. Then measure early and often enough throughout the lifecycle
of your project to be confident you'll hit your goals.

Specifying goals
The user experience is a basic way to define good performance. An app's startup time can influence a user's
perception of its performance. A user might consider an app launch time of less than one second to be excellent,
less than 5 seconds to be good, and greater than 5 seconds to be poor.
Other metrics have a less obvious impact on user experience; memory usage is one example. The chances of an app
being terminated while either suspended or inactive rise with the amount of memory used by the active app. It's a
general rule that high memory usage degrades the experience for all apps on the system, so having a goal on
memory consumption is reasonable. Take into consideration the rough size of your app as perceived by users:
small, medium, or large. Expectations around performance will correlate to this perception. For example, you might
want a small app that doesn't use a lot of media to consume less than 100MB of memory.
It's better to set an initial goal, and then revise it later, than not to have a goal at all. Your app's performance goals
should be specific and measurable and they should fall into three categories: how long it takes users, or the app, to
complete tasks (time); the rate and continuity with which the app redraws itself in response to user interaction
(fluidity); and how well the app conserves system resources, including battery power (efficiency).

Time
Think of the acceptable ranges of elapsed time (interaction classes) it takes for users to complete their tasks in your
app. For each interaction class assign a label, a perceived user sentiment, and ideal and maximum durations. Here
are some suggestions.

INTERACTION CLASS
LABEL | USER PERCEPTION | IDEAL | MAXIMUM | EXAMPLES
Fast | Minimally noticeable delay | 100 milliseconds | 200 milliseconds | Bring up the app bar; press a button (first response)
Typical | Quick, but not fast | 300 milliseconds | 500 milliseconds | Resize; semantic zoom
Responsive | Not quick, but feels responsive | 500 milliseconds | 1 second | Navigate to a different page; resume the app from a suspended state
Launch | Competitive experience | 1 second | 3 seconds | Launch the app for the first time or after it has been previously terminated
Continuous | No longer feels responsive | 500 milliseconds | 5 seconds | Download a file from the Internet
Captive | Long; user could switch away | 500 milliseconds | 10 seconds | Install multiple apps from the Store

You can now assign interaction classes to your app's performance scenarios. You can assign the app's point-in-
time reference, a portion of the user experience, and an interaction class to each scenario. Here are some
suggestions for an example food and dining app.

SCENARIO | TIME POINT | USER EXPERIENCE | INTERACTION CLASS
Navigate to recipe page | First response | Page transition animation started | Fast (100 - 200 milliseconds)
Navigate to recipe page | Responsive | Ingredients list loaded; no images | Responsive (500 milliseconds - 1 second)
Navigate to recipe page | Visible complete | All content loaded; images shown | Continuous (500 milliseconds - 5 seconds)
Search for recipe | First response | Search button clicked | Fast (100 - 200 milliseconds)
Search for recipe | Visible complete | List of local recipe titles shown | Typical (300 - 500 milliseconds)

If you're displaying live content then also consider content freshness goals. Is the goal to refresh content every few
seconds? Or is refreshing content every few minutes, every few hours, or even once a day an acceptable user
experience?
With your goals specified, you are now better able to test, analyze, and optimize your app.

Fluidity
Specific measurable fluidity goals for your app might include:
No screen redraw stops-and-starts (glitches).
Animations render at 60 frames per second (FPS).
When a user pans/scrolls, the app presents 3-6 pages of content per second.

Efficiency
Specific measurable efficiency goals for your app might include:
For your app's process, CPU percentage is at or below N and memory usage in MB is at or below M at all times.
When the app is inactive, N and M are zero for your app's process.
Your app can be used actively for X hours on battery power; when your app is inactive, the device retains its
charge for Y hours.
Design your app for performance
You can now use your performance goals to influence your app's design. Using the example food and dining app,
after the user navigates to the recipe page, you might choose to update items incrementally so that the recipe's
name is rendered first, displaying the ingredients is deferred, and displaying images is deferred further. This
maintains responsiveness and a fluid UI while panning/scrolling, with the full fidelity rendering taking place after
the interaction slows to a pace that allows the UI thread to catch up. Here are some other aspects to consider.
UI
Maximize parse and load time and memory efficiency for each page of your app's UI (especially the initial page)
by optimizing your XAML markup. In a nutshell, defer loading UI and code until it's needed.
For ListView and GridView, make all the items the same size and use as many ListView and GridView
optimization techniques as you can.
Declare UI in the form of markup, which the framework can load and re-use in chunks, rather than constructing
it imperatively in code.
Collapse UI elements until the user needs them. See the Visibility property.
Prefer theme transitions and animations to storyboarded animations. For more info, see Animations overview.
Remember that storyboarded animations require constant updates to the screen, and keep the CPU and
graphics pipeline active. To preserve the battery, don't have animations running if the user is not interacting
with the app.
Load images at a size appropriate for the view in which you present them by using the GetThumbnailAsync
method (see the sketch below).
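For instance, here's a minimal sketch of loading a thumbnail sized to the view rather than decoding the full image; the method name and the 200-pixel requested size are illustrative assumptions.

// using System.Threading.Tasks;
// using Windows.Storage;
// using Windows.Storage.FileProperties;
// using Windows.UI.Xaml.Media.Imaging;

private async Task<BitmapImage> LoadThumbnailAsync(StorageFile file)
{
    // Ask the system to decode at (roughly) the display size instead of full size.
    // 200 is a hypothetical requested size; match it to your view.
    StorageItemThumbnail thumbnail = await file.GetThumbnailAsync(ThumbnailMode.SingleItem, 200);

    var bitmap = new BitmapImage();
    await bitmap.SetSourceAsync(thumbnail);
    return bitmap;
}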
CPU, memory, and power
Schedule lower-priority work to run on lower-priority threads and/or cores. See Asynchronous programming,
the Dispatcher property, and the CoreDispatcher class.
Minimize your app's memory footprint by releasing expensive resources (such as media) on suspend.
Minimize your code's working set.
Avoid memory leaks by unregistering event handlers and dereferencing UI elements whenever possible (see the sketch after this list).
For the sake of the battery, be conservative with how often you poll for data, query a sensor, or schedule work
on the CPU when it is idle.
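As a sketch of the unregistering practice above, assuming a Page subclass and an illustrative Window.SizeChanged subscription:

// using Windows.UI.Core;
// using Windows.UI.Xaml;
// using Windows.UI.Xaml.Navigation;

protected override void OnNavigatedTo(NavigationEventArgs e)
{
    // Subscribe while the page is in use...
    Window.Current.SizeChanged += this.OnWindowSizeChanged;
}

protected override void OnNavigatedFrom(NavigationEventArgs e)
{
    // ...and unsubscribe on the way out so the page isn't kept alive by the event source.
    Window.Current.SizeChanged -= this.OnWindowSizeChanged;
}

private void OnWindowSizeChanged(object sender, WindowSizeChangedEventArgs e)
{
    // Respond to the size change.
}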
Data access
If possible, prefetch content (a ContentPrefetcher sketch follows this list). For automatic prefetching, see the
ContentPrefetcher class. For manual prefetching, see the Windows.ApplicationModel.Background namespace
and the MaintenanceTrigger class.
If possible, cache content that's expensive to access. See the LocalFolder and LocalSettings properties.
For cache misses, show a placeholder UI as quickly as possible that indicates that the app is still loading content.
Transition from placeholder to live content in a way that is not jarring to the user. For example, don't change the
position of content under the user's finger or mouse pointer as the app loads live content.
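Here's a minimal ContentPrefetcher sketch; the feed URI is a hypothetical example.

// using System;
// using Windows.ApplicationModel.Background;

// Tell the system which URIs the app is likely to need soon; Windows may
// download and cache them before the app next runs.
ContentPrefetcher.ContentUris.Clear();
ContentPrefetcher.ContentUris.Add(new Uri("http://example.com/feeds/latest.json"));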
App launch and resume
Defer the app's splash screen, and don't extend the app's splash screen unless necessary. For details, see
Creating a fast and fluid app launch experience and Display a splash screen for more time.
Disable animations that occur immediately after the splash screen is dismissed, as these will only lead to a
perception of delay in app launch time.
Adaptive UI, and orientation
Use the VisualStateManager class.
Complete only required work immediately, deferring intensive app work until later; your app has between 200
and 800 milliseconds to complete work before the user sees your app's UI in a cropped state.
With your performance-related designs in place, you can start coding your app.

Instrument for performance


As you code, add code that logs messages and events at certain points while your app runs. Later, when you're
testing your app, you can use profiling tools such as Windows Performance Recorder and Windows Performance
Analyzer (both are included in the Windows Performance Toolkit) to create and view a report about your app's
performance. In this report, you can look for these messages and events to help you more easily analyze the
report's results.
The Universal Windows Platform (UWP) provides logging APIs, backed by Event Tracing for Windows (ETW), that
together offer a rich event logging and tracing solution. The APIs, which are part of the
Windows.Foundation.Diagnostics namespace, include the FileLoggingSession, LoggingActivity,
LoggingChannel, and LoggingSession classes.
To log a message in the report at a specific point while the app is running, create a LoggingChannel object, and
then call the object's LogMessage method, like this.

// using Windows.Foundation.Diagnostics;
// ...

LoggingChannel myLoggingChannel = new LoggingChannel("MyLoggingChannel");

myLoggingChannel.LogMessage(LoggingLevel.Information, "Here's my logged message.");

// ...

To log start and stop events in the report over a period of time while the app is running, create a LoggingActivity
object by calling the LoggingActivity constructor, like this.

// using Windows.Foundation.Diagnostics;
// ...

LoggingActivity myLoggingActivity;

// myLoggingChannel is defined and initialized in the previous code example.
using (myLoggingActivity = new LoggingActivity("MyLoggingActivity", myLoggingChannel))
{
    // After this logging activity starts, a start event is logged.

    // Add code here to do something of interest.

} // After this logging activity ends, an end event is logged.

// ...

Also see the Logging sample.


With your app instrumented, you can test and measure your app's performance.

Test and measure against performance goals


Part of your performance plan is to define the points during development where you'll measure performance. This
serves different purposes depending on whether you're measuring during prototyping, development, or
deployment. Measuring performance during the early stages of prototyping can be tremendously valuable, so we
recommend that you do so as soon as you have code that does meaningful work. Early measurements give you a
good idea of where the important costs are in your app, and inform design decisions. This results in
high-performing, scalable apps. It's generally costlier to change designs later than earlier. Measuring performance
late in the product cycle can result in last-minute hacks and poor performance.
Use these techniques and tools to test how your app stacks up against your original performance goals.
Test against a wide variety of hardware configurations including all-in-one and desktop PCs, laptops,
ultrabooks, and tablets and other mobile devices.
Test against a wide variety of screen sizes. While wider screen sizes can show much more content, bringing in
all of that extra content can negatively impact performance.
Eliminate as many testing variables as you can.
Turn off background apps on the testing device. To do this, in Windows, select Settings from the Start
menu > Personalization > Lock screen. Select each active app and select None.
Compile your app to native code by building it in release configuration before deploying it to the testing
device.
To ensure that automatic maintenance does not affect the performance of the testing device, trigger it
manually and wait for it to complete. In Windows, in the Start menu search for Security and
Maintenance. In the Maintenance area, under Automatic Maintenance, select Start maintenance
and wait for the status to change from Maintenance in progress.
Run the app multiple times to help eliminate random testing variables and help ensure consistent
measurements.
Test for reduced power availability. Your users' device might have significantly less power than your
development machine. Windows was designed with low-power devices, such as mobile devices, in mind. Apps
that run on the platform should ensure they perform well on these devices. As a heuristic, expect that a low
power device runs at about a quarter the speed of a desktop computer, and set your goals accordingly.
Use a combination of tools like Microsoft Visual Studio and Windows Performance Analyzer to measure app
performance. Visual Studio is designed to provide app-focused analysis, such as source code linking. Windows
Performance Analyzer is designed to provide system-focused analysis, such as providing system info, info about
touch manipulation events, and info about disk input/output (I/O) and graphics processing unit (GPU) cost. Both
tools provide trace capture and export, and can reopen shared and post-mortem traces.
Before you submit your app to the Store for certification, be sure to incorporate into your test plans the
performance-related test cases as described in the "Performance tests" section of Windows App Certification Kit
tests and in the "Performance and stability" section of Windows Store app test cases.
For more info, see these resources and profiling tools.
Windows Performance Analyzer
Windows Performance Toolkit
Analyze performance using Visual Studio diagnostic tools
The //build/ session XAML Performance
The //build/ session New XAML Tools in Visual Studio 2015

Respond to the performance test results


After you analyze your performance test results, determine if any changes are needed, for example:
Should you change any of your app design decisions, or optimize your code?
Should you add, remove, or change any of the instrumentation in the code?
Should you revise any of your performance goals?
If any changes are needed, make them and then go back to instrumenting or testing and repeat.

Optimizing
Optimize only the performance-critical code paths in your app: those where most time is spent. Profiling will tell
you which. Often, there is a trade-off between creating software that follows good design practices and writing
highly optimized code. It is generally better to prioritize developer productivity and good software design in areas
where performance is not a concern.
Optimize background activity

Universal Windows apps should perform consistently well across all device families. On battery-powered devices,
power consumption is a critical factor in the user's overall experience with your app. All-day battery life is a
desirable feature for every user, but it requires efficiency from all of the software installed on the device, including
your own.
Background task behavior is arguably the greatest factor in the total energy cost of an app. A background task is
any program activity that has been registered with the system to run without the app being open. See Create and
register an out-of-process background task for more.

Background activity allowance


In Windows 10, version 1607, users can view their "Battery usage by app" in the Battery section of the Settings
app. Here, they will see a list of apps and the percentage of battery life (out of the amount of battery life that has
been used since the last charge) that each app has consumed.
For UWP apps on this list, users have some control over how the system treats their background activity.
Background activity can be specified as "Always allowed," "Managed by Windows" (the default setting), or "Never
allowed" (more details on these below). Use the BackgroundAccessStatus enum value returned from the
BackgroundExecutionManager.RequestAccessAsync() method to see what background activity allowance
your app has.
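For example, a minimal sketch of checking the allowance before registering tasks (the Denied values shown assume Windows 10, version 1607 or later):

// using System.Threading.Tasks;
// using Windows.ApplicationModel.Background;

private async Task CheckBackgroundAccessAsync()
{
    var accessStatus = await BackgroundExecutionManager.RequestAccessAsync();
    switch (accessStatus)
    {
        case BackgroundAccessStatus.AlwaysAllowed:
        case BackgroundAccessStatus.AllowedSubjectToSystemPolicy:
            // Background activity is permitted; register your tasks.
            break;
        case BackgroundAccessStatus.DeniedBySystemPolicy:
        case BackgroundAccessStatus.DeniedByUser:
            // Degrade gracefully and let the user know background features are off.
            break;
    }
}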
All this is to say that if your app doesn't implement responsible background activity management, the user may
deny background permissions to your app altogether, which is not desirable for either party.

Work with the Battery Saver feature


Battery Saver is a system-level feature that users can configure in Settings. It cuts off all background activity of all
apps when the battery level drops below a user-defined threshold, except for the background activity of apps that
have been set to "Always allowed."
If your app is marked as "Managed by Windows" and calls
BackgroundExecutionManager.RequestAccessAsync() to register a background activity while Battery Saver is
on, it will return a DeniedSubjectToSystemPolicy value. Your app should handle this by notifying the user that
the given background task(s) will not run until Battery Saver is off and they are re-registered with the system. If a
background task has already been registered to run, and Battery Saver is on at the time of its trigger, the task will
not run and the user will not be notified. To reduce the chance of this happening, it is a good practice to program
your app to re-register its background tasks each time it is opened.
While background activity management is the primary purpose of the Battery Saver feature, your app can make
additional adjustments to further conserve energy when Battery Saver is on. Check the status of Battery Saver
mode from within your app by referencing the PowerManager.PowerSavingMode property. It is an enum value:
either PowerSavingMode.Off or PowerSavingMode.On. In the case where Battery Saver is on, your app could
reduce its use of animations, stop location polling, or delay syncs and backups.

Further optimize background tasks


The following are additional steps you can take when registering your background tasks to make them more
battery-aware.
Use a maintenance trigger. A MaintenanceTrigger object can be used instead of a SystemTrigger object to
determine when a background task starts. Tasks that use maintenance triggers will only run when the device is
connected to AC power, and they are allowed to run for longer. See Use a maintenance trigger for instructions.
Use the BackgroundWorkCostNotHigh system condition type. System conditions must be met in order for
background tasks to run (see Set conditions for running a background task for more). The background work cost is
a measurement that denotes the relative energy impact of running the background task. A task running when the
device is plugged into AC power would be marked as low (little/no impact on battery). A task running when the
device is on battery power with the screen off is marked as high because there is presumably little program activity
running on the device at the time, so the background task would have a greater relative cost. A task running when
the device is on battery power with the screen on is marked as medium, because there is presumably already
some program activity running, and the background task would add a bit more to the energy cost. The
BackgroundWorkCostNotHigh system condition simply delays your task's ability to run until either the screen is
on or the device is connected to AC power.
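Putting those two suggestions together, here's a sketch of such a registration; the task name and entry point are hypothetical.

// using Windows.ApplicationModel.Background;

var builder = new BackgroundTaskBuilder
{
    Name = "CacheCleanupTask",                // hypothetical task name
    TaskEntryPoint = "Tasks.CacheCleanupTask" // hypothetical entry point
};

// Maintenance trigger: runs only on AC power (freshness time in minutes; not one-shot).
builder.SetTrigger(new MaintenanceTrigger(60, false));

// Delay the task until running it is cheap (screen on, or AC power).
builder.AddCondition(new SystemCondition(SystemConditionType.BackgroundWorkCostNotHigh));

BackgroundTaskRegistration registration = builder.Register();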

Test battery efficiency


Make sure to test your app on real devices for any high-power-consumption scenarios. It's a good idea to test your
app on many different devices, with Battery Saver on and off, and in environments of varying network strength.

Related topics
Create and register an out-of-process background task
Planning for performance
ListView and GridView UI optimization

[ Updated for UWP apps on Windows 10. For Windows 8.x articles, see the archive ]
Note
For more details, see the //build/ session Dramatically Increase Performance when Users Interact with Large
Amounts of Data in GridView and ListView.
Improve ListView and GridView performance and startup time through UI virtualization, element reduction, and
progressive updating of items. For data virtualization techniques, see ListView and GridView data virtualization.

Two key factors in collection performance


Manipulating collections is a common scenario. A photo viewer has collections of photos, a reader has collections
of articles/books/stories, and a shopping app has collections of products. This topic shows what you can do to
make your app efficient at manipulating collections.
There are two key factors in performance when it comes to collections: one is the time spent by the UI thread
creating items; the other is the memory used by both the raw data set and the UI elements used to render that
data.
For smooth panning/scrolling, it's vital that the UI thread do an efficient and smart job of instantiating, data-
binding, and laying out items.

UI virtualization
UI virtualization is the most important improvement you can make. This means that UI elements representing the
items are created on demand. For an items control bound to a 1000-item collection, it would be a waste of
resources to create the UI for all the items at the same time, because they can't all be displayed at the same time.
ListView and GridView (and other standard ItemsControl-derived controls) perform UI virtualization for you.
When items are close to being scrolled into view (a few pages away), the framework generates the UI for the items
and caches them. When it's unlikely that the items will be shown again, the framework reclaims the memory.
If you provide a custom items panel template (see ItemsPanel) then make sure you use a virtualizing panel such
as ItemsWrapGrid or ItemsStackPanel. If you use VariableSizedWrapGrid, WrapGrid, or StackPanel, then
you will not get virtualization. Additionally, the following ListView events are raised only when using an
ItemsWrapGrid or an ItemsStackPanel: ChoosingGroupHeaderContainer, ChoosingItemContainer, and
ContainerContentChanging.
The concept of a viewport is critical to UI virtualization because the framework must create the elements that are
likely to be shown. In general, the viewport of an ItemsControl is the extent of the logical control. For example, the
viewport of a ListView is the width and height of the ListView element. Some panels allow child elements
unlimited space, examples being ScrollViewer and a Grid, with auto-sized rows or columns. When a virtualized
ItemsControl is placed in a panel like that, it takes enough room to display all of its items, which defeats
virtualization. Restore virtualization by setting a width and height on the ItemsControl.

Element reduction per item


Keep the number of UI elements used to render your items to a reasonable minimum.
When an items control is first shown, all the elements needed to render a viewport full of items are created. Also,
as items approach the viewport, the framework updates the UI elements in cached item templates with the bound
data objects. Minimizing the complexity of the markup inside templates pays off in memory and in time spent on
the UI thread, improving responsiveness especially while panning/scrolling. The templates in question are the item
template (see ItemTemplate) and the control template of a ListViewItem or a GridViewItem (the item control
template, or ItemContainerStyle). The benefit of even a small reduction in element count is multiplied by the
number of items displayed.
For examples of element reduction, see Optimize your XAML markup.
The default control templates for ListViewItem and GridViewItem contain a ListViewItemPresenter element.
This presenter is a single optimized element that displays complex visuals for focus, selection, and other visual
states. If you already have custom item control templates (ItemContainerStyle), or if in future you edit a copy of
an item control template, then we recommend you use a ListViewItemPresenter because that element will give
you optimum balance between performance and customizability in the majority of cases. You customize the
presenter by setting properties on it. For example, here's markup that removes the check mark that appears by
default when an item is selected, and changes the background color of the selected item to orange.

<ListView>
    <!-- ... -->
    <ListView.ItemContainerStyle>
        <Style TargetType="ListViewItem">
            <Setter Property="Template">
                <Setter.Value>
                    <ControlTemplate TargetType="ListViewItem">
                        <ListViewItemPresenter SelectionCheckMarkVisualEnabled="False" SelectedBackground="Orange"/>
                    </ControlTemplate>
                </Setter.Value>
            </Setter>
        </Style>
    </ListView.ItemContainerStyle>
    <!-- ... -->
</ListView>

There are about 25 properties with self-describing names similar to SelectionCheckMarkVisualEnabled and
SelectedBackground. Should the presenter types prove not to be customizable enough for your use case, you
can edit a copy of the ListViewItemExpanded or GridViewItemExpanded control template instead. These can be found in
\Program Files (x86)\Windows Kits\10\DesignTime\CommonConfiguration\Neutral\UAP\<version>\Generic\generic.xaml . Be aware that
using these templates means trading some performance for the increase in customization.

Update ListView and GridView items progressively


If you're using data virtualization then you can keep ListView and GridView responsiveness high by configuring
the control to render temporary UI elements for the items still being (down)loaded. The temporary elements are
then progressively replaced with actual UI as data loads.
Also, no matter where you're loading data from (local disk, network, or cloud), a user can pan/scroll a ListView
or GridView so rapidly that it's not possible to render each item with full fidelity while preserving smooth
panning/scrolling. To preserve smooth panning/scrolling you can choose to render an item in multiple phases in
addition to using placeholders.
An example of these techniques is often seen in photo-viewing apps: even though not all of the images have been
loaded and displayed, the user can still pan/scroll and interact with the collection. Or, for a "movie" item, you could
show the title in the first phase, the rating in the second phase, and an image of the poster in the third phase. The
user sees the most important data about each item as early as possible, and that means they're able to take action
at once. Then the less important info is filled-in as time allows. Here are the platform features you can use to
implement these techniques.
Placeholders
The temporary placeholder visuals feature is on by default, and it's controlled with the
ShowsScrollingPlaceholders property. During fast panning/scrolling, this feature gives the user a visual hint
that there are more items yet to fully display while also preserving smoothness. If you use one of the techniques
below then you can set ShowsScrollingPlaceholders to false if you prefer not to have the system render
placeholders.
Progressive data template updates using x:Phase
Here's how to use the x:Phase attribute with {x:Bind} bindings to implement progressive data template updates.
1. Here's what the binding source looks like (this is the data source that we'll bind to).

using System.Collections.ObjectModel;

namespace LotsOfItems
{
    public class ExampleItem
    {
        public string Title { get; set; }
        public string Subtitle { get; set; }
        public string Description { get; set; }
    }

    public class ExampleItemViewModel
    {
        private ObservableCollection<ExampleItem> exampleItems = new ObservableCollection<ExampleItem>();
        public ObservableCollection<ExampleItem> ExampleItems { get { return this.exampleItems; } }

        public ExampleItemViewModel()
        {
            for (int i = 1; i < 150000; i++)
            {
                this.exampleItems.Add(new ExampleItem()
                {
                    Title = "Title: " + i.ToString(),
                    Subtitle = "Sub: " + i.ToString(),
                    Description = "Desc: " + i.ToString()
                });
            }
        }
    }
}

2. Here's the markup that DeferMainPage.xaml contains. The grid view contains an item template with elements
bound to the Title, Subtitle, and Description properties of the ExampleItem class. Note that x:Phase defaults
to 0. Here, items will be initially rendered with just the title visible. Then the subtitle element will be data
bound and made visible for all the items, and so on until all the phases have been processed.
<Page
    x:Class="LotsOfItems.DeferMainPage"
    xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
    xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
    xmlns:d="http://schemas.microsoft.com/expression/blend/2008"
    xmlns:mc="http://schemas.openxmlformats.org/markup-compatibility/2006"
    xmlns:lotsOfItems="using:LotsOfItems"
    mc:Ignorable="d">

    <Grid Background="{ThemeResource ApplicationPageBackgroundThemeBrush}">
        <GridView ItemsSource="{x:Bind ViewModel.ExampleItems}">
            <GridView.ItemTemplate>
                <DataTemplate x:DataType="lotsOfItems:ExampleItem">
                    <StackPanel Height="100" Width="100" Background="OrangeRed">
                        <TextBlock Text="{x:Bind Title}"/>
                        <TextBlock Text="{x:Bind Subtitle}" x:Phase="1"/>
                        <TextBlock Text="{x:Bind Description}" x:Phase="2"/>
                    </StackPanel>
                </DataTemplate>
            </GridView.ItemTemplate>
        </GridView>
    </Grid>
</Page>

3. If you run the app now and pan/scroll quickly through the grid view then you'll notice that as each new item
appears on the screen, at first it is rendered as a dark gray rectangle (thanks to the
ShowsScrollingPlaceholders property defaulting to true), then the title appears, followed by subtitle,
followed by description.
Progressive data template updates using ContainerContentChanging
The general strategy for the ContainerContentChanging event is to use Opacity to hide elements that don't
need to be immediately visible. When elements are recycled, they retain their old values, so we want to hide
those elements until we've updated those values from the new data item. We use the Phase property on the event
arguments to determine which elements to update and show. If additional phases are needed, we register a
callback.
1. We'll use the same binding source as for x:Phase.
2. Here's the markup that MainPage.xaml contains. The grid view declares a handler for its
ContainerContentChanging event, and it contains an item template with elements used to display the
Title, Subtitle, and Description properties of the ExampleItem class. To get the maximum performance
benefits of using ContainerContentChanging, we don't use bindings in the markup but instead assign
values programmatically. The exception here is the element displaying the title, which we consider to be in
phase 0.
<Page
    x:Class="LotsOfItems.MainPage"
    xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
    xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
    xmlns:d="http://schemas.microsoft.com/expression/blend/2008"
    xmlns:mc="http://schemas.openxmlformats.org/markup-compatibility/2006"
    xmlns:lotsOfItems="using:LotsOfItems"
    mc:Ignorable="d">

    <Grid Background="{ThemeResource ApplicationPageBackgroundThemeBrush}">
        <GridView ItemsSource="{x:Bind ViewModel.ExampleItems}" ContainerContentChanging="GridView_ContainerContentChanging">
            <GridView.ItemTemplate>
                <DataTemplate x:DataType="lotsOfItems:ExampleItem">
                    <StackPanel Height="100" Width="100" Background="OrangeRed">
                        <TextBlock Text="{x:Bind Title}"/>
                        <TextBlock Opacity="0"/>
                        <TextBlock Opacity="0"/>
                    </StackPanel>
                </DataTemplate>
            </GridView.ItemTemplate>
        </GridView>
    </Grid>
</Page>

3. Lastly, here's the implementation of the ContainerContentChanging event handler. This code also shows
how we add a property of type ExampleItemViewModel to MainPage to expose the binding source class
from the class that represents our page of markup. As long as you don't have any {Binding} bindings in
your data template, mark the event arguments object as handled in the first phase of the handler to
hint to the item that it needn't set a data context.
namespace LotsOfItems
{
    /// <summary>
    /// An empty page that can be used on its own or navigated to within a Frame.
    /// </summary>
    public sealed partial class MainPage : Page
    {
        public MainPage()
        {
            this.InitializeComponent();
            this.ViewModel = new ExampleItemViewModel();
        }

        public ExampleItemViewModel ViewModel { get; set; }

        // Display each item incrementally to improve performance.
        private void GridView_ContainerContentChanging(ListViewBase sender, ContainerContentChangingEventArgs args)
        {
            if (args.Phase != 0)
            {
                throw new System.Exception("We should be in phase 0, but we are not.");
            }

            // It's phase 0, so this item's title will already be bound and displayed.

            args.RegisterUpdateCallback(this.ShowSubtitle);

            args.Handled = true;
        }

        private void ShowSubtitle(ListViewBase sender, ContainerContentChangingEventArgs args)
        {
            if (args.Phase != 1)
            {
                throw new System.Exception("We should be in phase 1, but we are not.");
            }

            // It's phase 1, so show this item's subtitle.
            var templateRoot = args.ItemContainer.ContentTemplateRoot as StackPanel;
            var textBlock = templateRoot.Children[1] as TextBlock;
            textBlock.Text = (args.Item as ExampleItem).Subtitle;
            textBlock.Opacity = 1;

            args.RegisterUpdateCallback(this.ShowDescription);
        }

        private void ShowDescription(ListViewBase sender, ContainerContentChangingEventArgs args)
        {
            if (args.Phase != 2)
            {
                throw new System.Exception("We should be in phase 2, but we are not.");
            }

            // It's phase 2, so show this item's description.
            var templateRoot = args.ItemContainer.ContentTemplateRoot as StackPanel;
            var textBlock = templateRoot.Children[2] as TextBlock;
            textBlock.Text = (args.Item as ExampleItem).Description;
            textBlock.Opacity = 1;
        }
    }
}

4. If you run the app now and pan/scroll quickly through the grid view, you'll see the same behavior as
for x:Phase.
Container-recycling with heterogeneous collections
In some applications, you need to have different UI for different types of item within a collection. This can create a
situation where it is impossible for virtualizing panels to reuse/recycle the visual elements used to display the
items. Recreating the visual elements for an item during panning undoes many of the performance wins provided
by virtualization. However, a little planning can allow virtualizing panels to reuse the elements. Developers have a
couple of options depending on their scenario: the ChoosingItemContainer event, or an item template selector.
The ChoosingItemContainer approach has better performance.
The ChoosingItemContainer event
ChoosingItemContainer is an event that allows you to provide an item (ListViewItem/GridViewItem) to the
ListView/GridView whenever a new item is needed during start-up or recycling. You can create a container based
on the type of data item the container will display (shown in the example below). ChoosingItemContainer is the
higher-performing way to use different data templates for different items. Container caching is something that can
be achieved using ChoosingItemContainer. For example, if you have five different templates, with one template
occurring an order of magnitude more often than the others, then ChoosingItemContainer allows you not only to
create items at the ratios needed but also to keep an appropriate number of elements cached and available for
recycling. ChoosingGroupHeaderContainer provides the same functionality for group headers.
// Example shows how to use ChoosingItemContainer to return the correct
// DataTemplate when one is available. This example shows how to return different
// data templates based on the type of FileItem. Available ListViewItems are kept
// in two separate lists based on the type of DataTemplate needed.

// Recyclable ItemContainers, tracked separately per data template
// (declared here for completeness).
private List<UIElement> specialFileItemTrees = new List<UIElement>();
private List<UIElement> simpleFileItemTrees = new List<UIElement>();

private void lst_ChoosingItemContainer(ListViewBase sender, ChoosingItemContainerEventArgs args)
{
    // Determines type of FileItem from the item passed in.
    bool special = args.Item is DifferentFileItem;

    // Uses the Tag property to keep track of whether a particular ListViewItem's
    // datatemplate should be a simple or a special one.
    string tag = special ? "specialFiles" : "simpleFiles";

    // Based on the type of datatemplate needed, return the correct list of
    // ListViewItems; this could have also been handled with a hash table. These
    // two lists are being used to keep track of ItemContainers that can be reused.
    List<UIElement> relevantStorage = special ? specialFileItemTrees : simpleFileItemTrees;

    // args.ItemContainer is used to indicate whether the ListView is proposing an
    // ItemContainer (ListViewItem) to use. If args.ItemContainer != null, then there was a
    // recycled ItemContainer available to be reused.
    if (args.ItemContainer != null)
    {
        // The Tag is being used to determine whether this is a special file or
        // a simple file.
        if (args.ItemContainer.Tag.Equals(tag))
        {
            // Great: the system suggested a container that is actually going to
            // work well.
        }
        else
        {
            // The ItemContainer's datatemplate does not match the needed
            // datatemplate.
            args.ItemContainer = null;
        }
    }

    if (args.ItemContainer == null)
    {
        // See if we can fetch from the correct list.
        if (relevantStorage.Count > 0)
        {
            args.ItemContainer = relevantStorage[0] as SelectorItem;
        }
        else
        {
            // There aren't any (recycled) ItemContainers available, so a new one
            // needs to be created.
            ListViewItem item = new ListViewItem();
            item.ContentTemplate = this.Resources[tag] as DataTemplate;
            item.Tag = tag;
            args.ItemContainer = item;
        }
    }
}

Item template selector


An item template selector (DataTemplateSelector) allows an app to return a different item template at runtime
based on the type of the data item that will be displayed. This makes development more productive, but it makes
UI virtualization more difficult because not every item template can be reused for every data item.
When recycling an item (ListViewItem/GridViewItem), the framework must decide whether the items that are
available for use in the recycle queue (the recycle queue is a cache of items that are not currently being used to
display data) have an item template that will match the one desired by the current data item. If there are no items
in the recycle queue with the appropriate item template then a new item is created, and the appropriate item
template is instantiated for it. If, on the other hand, the recycle queue contains an item with the appropriate item
template then that item is removed from the recycle queue and is used for the current data item. An item template
selector works in situations where only a small number of item templates are used and there is a flat distribution
throughout the collection of items that use different item templates.
When there is an uneven distribution of items that use different item templates then new item templates will likely
need to be created during panning, and this negates many of the gains provided by virtualization. Additionally, an
item template selector only considers five possible candidates when evaluating whether a particular container can
be reused for the current data item. So you should carefully consider whether your data is appropriate for use with
an item template selector before using one in your app. If your collection is mostly homogeneous then the selector
returns the same type most (possibly all) of the time. Just be aware of the price you're paying for the rare
exceptions to that homogeneity, and consider whether using ChoosingItemContainer (or two items controls) is
preferable.
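For reference, here's a minimal item template selector, reusing the DifferentFileItem type from the earlier example; the two template properties are assumed to be assigned in XAML.

// using Windows.UI.Xaml;
// using Windows.UI.Xaml.Controls;

public class FileItemTemplateSelector : DataTemplateSelector
{
    // Assigned in XAML to the two data templates.
    public DataTemplate SimpleTemplate { get; set; }
    public DataTemplate SpecialTemplate { get; set; }

    protected override DataTemplate SelectTemplateCore(object item, DependencyObject container)
    {
        // Choose a template based on the data item's type.
        return item is DifferentFileItem ? this.SpecialTemplate : this.SimpleTemplate;
    }
}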
ListView and GridView data virtualization

[ Updated for UWP apps on Windows 10. For Windows 8.x articles, see the archive ]
Note For more details, see the //build/ session Dramatically Increase Performance when Users Interact with Large
Amounts of Data in GridView and ListView.
Improve ListView and GridView performance and startup time through data virtualization. For UI virtualization,
element reduction, and progressive updating of items, see ListView and GridView UI optimization.
A method of data virtualization is needed for a data set that is so large that it cannot or should not all be stored in
memory at one time. You load an initial portion into memory (from local disk, network, or cloud) and apply UI
virtualization to this partial data set. You can later load data incrementally, or from arbitrary points in the master
data set (random access), on demand. Whether data virtualization is appropriate for you depends on many factors.
The size of your data set
The size of each item
The source of the data set (local disk, network, or cloud)
The overall memory consumption of your app
Note Be aware that a feature is enabled by default for ListView and GridView that displays temporary placeholder
visuals while the user is panning/scrolling quickly. As data is loaded, these placeholder visuals are replaced with
your item template. You can turn the feature off by setting ListViewBase.ShowsScrollingPlaceholders to false,
but if you do so then we recommend that you use the x:Phase attribute to progressively render the elements in
your item template. See Update ListView and GridView items progressively.
Here are more details about the incremental and random-access data virtualization techniques.

Incremental data virtualization


Incremental data virtualization loads data sequentially. A ListView that uses incremental data virtualization may be
used to view a collection of a million items, but only 50 items are loaded initially. As the user pans/scrolls, the next
50 are loaded. As items are loaded, the scroll bar's thumb decreases in size. For this type of data virtualization you
write a data source class that implements these interfaces.
IList
INotifyCollectionChanged (C#/VB) or IObservableVector<T> (C++/CX)
ISupportIncrementalLoading
A data source like this is an in-memory list that can be continually extended. The items control will ask for items
using the standard IList indexer and count properties. The count should represent the number of items locally, not
the true size of the dataset.
When the items control gets close to the end of the existing data, it will call
ISupportIncrementalLoading.HasMoreItems. If you return true, then it will call
ISupportIncrementalLoading.LoadMoreItemsAsync passing an advised number of items to load. Depending
on where you're loading data from (local disk, network, or cloud), you may choose to load a different number of
items than that advised. For example, if your service supports batches of 50 items but the items control only asks
for 10 then you can load 50. Load the data from your back end, add it to your list, and raise a change notification
via INotifyCollectionChanged or IObservableVector<T> so that the items control knows about the new items.
Also return a count of the items you actually loaded. If you load fewer items than advised, or the items control has
been panned/scrolled even further in the interim, then your data source will be called again for more items and the
cycle will continue. You can learn more by downloading the XAML data binding sample for Windows 8.1 and re-
using its source code in your Windows 10 app.
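As a rough sketch of such a data source (the back end here is simulated; substitute your own disk, network, or cloud access):

// using System;
// using System.Collections.Generic;
// using System.Collections.ObjectModel;
// using System.Linq;
// using System.Runtime.InteropServices.WindowsRuntime;
// using System.Threading.Tasks;
// using Windows.Foundation;
// using Windows.UI.Xaml.Data;

// ObservableCollection supplies IList and the change notifications.
public class IncrementalItemSource : ObservableCollection<string>, ISupportIncrementalLoading
{
    private const int TotalCount = 1000000; // true size of the backing data set
    private int loadedCount = 0;

    public bool HasMoreItems => this.loadedCount < TotalCount;

    public IAsyncOperation<LoadMoreItemsResult> LoadMoreItemsAsync(uint count)
    {
        return AsyncInfo.Run<LoadMoreItemsResult>(async cancellationToken =>
        {
            // Load at least a batch of 50, even if the control asks for fewer.
            int batchSize = Math.Max((int)count, 50);
            List<string> newItems = await this.FetchItemsAsync(this.loadedCount, batchSize);

            foreach (string item in newItems)
            {
                this.Add(item); // raises the change notification the control listens for
            }
            this.loadedCount += newItems.Count;
            return new LoadMoreItemsResult { Count = (uint)newItems.Count };
        });
    }

    // Simulated back end.
    private Task<List<string>> FetchItemsAsync(int start, int count)
    {
        var items = Enumerable.Range(start, Math.Min(count, TotalCount - start))
                              .Select(i => "Item " + i)
                              .ToList();
        return Task.FromResult(items);
    }
}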

Random access data virtualization


Random access data virtualization allows loading from an arbitrary point in the data set. A ListView that uses
random access data virtualization, used to view a collection of a million items, can load items 100,000 through
100,050. If the user then moves to the beginning of the list, the control loads items 1 through 50. At all times, the scroll
bar's thumb indicates that the ListView contains a million items. The position of the scroll bar's thumb is relative to
where the visible items are located in the collection's entire data set. This type of data virtualization can significantly
reduce the memory requirements and load times for the collection. To enable it you need to write a data source
class that fetches data on demand and manages a local cache and implements these interfaces.
IList
INotifyCollectionChanged (C#/VB) or IObservableVector<T> (C++/CX)
(Optionally) IItemsRangeInfo
(Optionally) ISelectionInfo
IItemsRangeInfo provides information on which items the control is actively using. The items control will call this
method whenever its view is changing, and will include these two sets of ranges.
The set of items that are in the viewport.
A set of non-virtualized items that the control is using that may not be in the viewport:
    A buffer of items around the viewport that the items control keeps so that touch panning is smooth.
    The focused item.
    The first item.
By implementing IItemsRangeInfo your data source knows what items need to be fetched and cached, and when
to prune from the cache data that is no longer needed. IItemsRangeInfo uses ItemIndexRange objects to
describe a set of items based on their index in the collection. This is so that it doesn't use item pointers, which may
not be correct or stable. IItemsRangeInfo is designed to be used by only a single instance of an items control
because it relies on state information for that items control. If multiple items controls need access to the same data
then you will need a separate instance of the data source for each. They can share a common cache, but the logic to
purge from the cache will be more complicated.
Here's the basic strategy for your random access data virtualization data source (a minimal sketch follows the list).
When asked for an item
If you have it available in memory, then return it.
If you don't have it, then return either null or a placeholder item.
Use the request for an item (or the range information from IItemsRangeInfo) to know which items are
needed, and to fetch data for items from your back end asynchronously. After retrieving the data, raise a
change notification via INotifyCollectionChanged or IObservableVector<T> so that the items
control knows about the new item.
(Optionally) as the items control's viewport changes, identify what items are needed from your data source via
your implementation of IItemsRangeInfo.
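Here's a minimal sketch of that strategy. A real data source must also implement IList (and optionally IItemsRangeInfo); this shows only the cache/placeholder/notify pattern, with a simulated back end.

// using System.Collections.Generic;
// using System.Collections.Specialized;
// using System.Threading.Tasks;

public class RandomAccessSource : INotifyCollectionChanged
{
    private readonly Dictionary<int, object> cache = new Dictionary<int, object>();

    public event NotifyCollectionChangedEventHandler CollectionChanged;

    public int Count => 1000000; // report the true size of the data set

    public object this[int index]
    {
        get
        {
            if (this.cache.TryGetValue(index, out object item))
            {
                return item; // already cached in memory
            }
            this.FetchItemAsync(index); // kick off an asynchronous fetch
            return null;                // placeholder until the data arrives
        }
    }

    private async void FetchItemAsync(int index)
    {
        object item = await this.LoadFromBackEndAsync(index);
        this.cache[index] = item;

        // Tell the items control the placeholder can now be replaced.
        this.CollectionChanged?.Invoke(this,
            new NotifyCollectionChangedEventArgs(NotifyCollectionChangedAction.Replace, item, null, index));
    }

    // Simulated back end.
    private Task<object> LoadFromBackEndAsync(int index)
    {
        return Task.FromResult<object>("Item " + index);
    }
}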
Beyond that, the strategy for when to load data items, how many to load, and which items to keep in memory is up
to your application. Some general considerations to keep in mind:
Make asynchronous requests for data; don't block the UI thread.
Find the sweet spot in the size of the batches you fetch items in. Prefer chunky to chatty: not so small that you
make too many small requests, and not so large that each one takes too long to retrieve.
Consider how many requests you want to have pending at the same time. Performing one at a time is easier,
but it may be too slow if turnaround time is high.
Can you cancel requests for data?
If using a hosted service, is there a cost per transaction?
What kind of notifications are provided by the service when the results of a query are changed? Will you know if
an item is inserted at index 33? If your service supports queries based on a key-plus-offset, that may be better
than just using an index.
How smart do you want to be in pre-fetching items? Are you going to try and track the direction and velocity of
scrolling to predict which items are needed?
How aggressive do you want to be in purging the cache? This is a tradeoff of memory versus experience.
Improve garbage collection performance

[ Updated for UWP apps on Windows 10. For Windows 8.x articles, see the archive ]
Universal Windows Platform (UWP) apps written in C# and Visual Basic get automatic memory management from
the .NET garbage collector. This section summarizes the behavior and performance best practices for the .NET
garbage collector in UWP apps. For more info on how the .NET garbage collector works and tools for debugging
and analyzing garbage collector performance, see Garbage collection.
Note Needing to intervene in the default behavior of the garbage collector is strongly indicative of general
memory issues with your app. For more info, see Memory Usage Tool while debugging in Visual Studio 2015. This
topic applies to C# and Visual Basic only.
The garbage collector determines when to run by balancing the memory consumption of the managed heap with
the amount of work a garbage collection needs to do. One of the ways the garbage collector does this is by dividing
the heap into generations and collecting only part of the heap most of the time. There are three generations in the
managed heap:
Generation 0. This generation contains newly allocated objects unless they are 85KB or larger, in which case
they are part of the large object heap. The large object heap is collected with generation 2 collections.
Generation 0 collections are the most frequently occurring type of collection and clean up short-lived objects
such as local variables.
Generation 1. This generation contains objects that have survived generation 0 collections. It serves as a buffer
between generation 0 and generation 2. Generation 1 collections occur less frequently than generation 0
collections and clean up temporary objects that were active during previous generation 0 collections. A
generation 1 collection also collects generation 0.
Generation 2. This generation contains long-lived objects that have survived generation 0 and generation 1
collections. Generation 2 collections are the least frequent and collect the entire managed heap, including the
large object heap which contains objects that are 85KB or larger.
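If you want to see these generations at work in your own app, the System.GC class exposes a few read-only probes. This is a quick observation aid, not a benchmark:

using System;

static class GcProbe
{
    public static void Report()
    {
        byte[] buffer = new byte[100000]; // >= 85KB, so it is allocated on the LOH

        // Large-object-heap objects are reported as generation 2.
        Console.WriteLine(GC.GetGeneration(buffer));

        // How many collections each generation has had so far in this process.
        Console.WriteLine("Gen 0: " + GC.CollectionCount(0));
        Console.WriteLine("Gen 1: " + GC.CollectionCount(1));
        Console.WriteLine("Gen 2: " + GC.CollectionCount(2));
    }
}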
You can measure the performance of the garbage collector in 2 aspects: the time it takes to do the garbage
collection, and the memory consumption of the managed heap. If you have a small app with a heap size less than
100MB then focus on reducing memory consumption. If you have an app with a managed heap larger than 100MB
then focus on reducing the garbage collection time only. Here's how you can help the .NET garbage collector
achieve better performance.

Reduce memory consumption


Release references
A reference to an object in your app prevents that object, and all of the objects it references, from being collected.
The .NET compiler does a good job of detecting when a variable is no longer in use so objects held onto by that
variable will be eligible for collection. But in some cases it may not be obvious that some objects have a reference
to other objects because part of the object graph might be owned by libraries your app uses. To learn about the
tools and techniques to find out which objects survive a garbage collection, see Garbage collection and
performance.
Induce a garbage collection if it's useful
Induce a garbage collection only after you have measured your app's performance and have determined that
inducing a collection will improve its performance.
You can induce a garbage collection of a generation by calling GC.Collect(n), where n is the generation you want
to collect (0, 1, or 2).
Note We recommend that you don't force a garbage collection in your app because the garbage collector uses
many heuristics to determine the best time to perform a collection, and forcing a collection is in many cases an
unnecessary use of the CPU. But if you know that you have a large number of objects in your app that are no
longer used and you want to return this memory to the system, then it may be appropriate to force a garbage
collection. For example, you can induce a collection at the end of a loading sequence in a game to free up memory
before gameplay starts.
To avoid inadvertently inducing too many garbage collections, you can set the GCCollectionMode to Optimized.
This instructs the garbage collector to start a collection only if it determines that the collection would be productive
enough to be justified.
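As a concrete illustration of that advice, here is the end-of-loading-sequence case sketched in C#:

using System;

static class LoadingSequence
{
    // Called once, at the end of a level-loading sequence, after the large
    // temporary loading objects are no longer referenced.
    public static void ReclaimLoadingMemory()
    {
        // Collect up to generation 2, but only if the garbage collector
        // determines the collection would be productive enough to be justified.
        GC.Collect(2, GCCollectionMode.Optimized);
    }
}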

Reduce garbage collection time


This section applies if you've analyzed your app and observed large garbage collection times. Garbage collection-
related pause times include: the time it takes to run a single garbage collection pass; and the total time your app
spends doing garbage collections. The amount of time it takes to do a collection depends on how much live data
the collector has to analyze. Generation 0 and generation 1 are bounded in size, but generation 2 continues to grow
as more long-lived objects are active in your app. This means that the collection times for generation 0 and
generation 1 are bounded, while generation 2 collections can take longer. How often garbage collections run
depends mostly on how much memory you allocate, because a garbage collection frees up memory to satisfy
allocation requests.
The garbage collector occasionally pauses your app to perform work, but doesn't necessarily pause your app the
entire time it is doing a collection. Pause times are usually not user-perceivable in your app, especially for
generation 0 and generation 1 collections. The Background garbage collection feature of the .NET garbage collector
allows Generation 2 collections to be performed concurrently while your app is running and will only pause your
app for short periods of time. But it is not always possible to do a Generation 2 collection as a background
collection. In that case, the pause can be user-perceivable if you have a large enough heap (more than 100MB).
Frequent garbage collections can contribute to increased CPU (and therefore power) consumption, longer loading
times, or decreased frame rates in your application. Below are some techniques you can use to reduce garbage
collection time and collection-related pauses in your managed UWP app.
Reduce memory allocations
If you don't allocate any objects, then the garbage collector doesn't run unless there is a low-memory condition in
the system. Reducing the amount of memory you allocate directly translates to less frequent garbage collections.
If in some sections of your app pauses are completely undesirable, then you can pre-allocate the necessary objects
beforehand during a less performance-critical time. For example, a game might allocate all of the objects needed
for gameplay during the loading screen of a level and not make any allocations during gameplay. This avoids
pauses while the user is playing the game and can result in a higher and more consistent frame rate.
Reduce generation 2 collections by avoiding objects with a medium-length lifetime
Generational garbage collectors perform best when you have really short-lived and/or really long-lived objects in
your app. Short-lived objects are collected in the cheaper generation 0 and generation 1 collections, and objects
that are long-lived get promoted to generation 2, which is collected infrequently. Long-lived objects are those that
are in use for the entire duration of your app, or during a significant period of your app, such as during a specific
page or game level.
If you frequently create objects that have a temporary lifetime but live long enough to be promoted to generation
2, then more of the expensive generation 2 collections happen. You may be able to reduce generation 2 collections
by recycling existing objects or releasing objects more quickly.
A common example of objects with medium-term lifetime is objects that are used for displaying items in a list that
a user scrolls through. If objects are created when items in the list are scrolled into view, and are no longer
referenced as items in the list are scrolled out of view, then your app typically has a large number of generation 2
collections. In situations like this you can pre-allocate and reuse a set of objects for the data that is actively shown
to the user, and use short-lived objects to load info as items in the list come into view.
Reduce generation 2 collections by avoiding large-sized objects with short lifetimes
Any object that is 85KB or larger is allocated on the large object heap (LOH) and gets collected as part of generation
2. If you have temporary variables, such as buffers, that are greater than 85KB, then a generation 2 collection cleans
them up. Limiting temporary variables to less than 85KB reduces the number of generation 2 collections in your
app. One common technique is to create a buffer pool and reuse objects from the pool to avoid large temporary
allocations.
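Here is a minimal sketch of that buffer-pool technique; the class, size, and names are illustrative, not a platform API. Renting a pooled buffer avoids a fresh large-object-heap allocation (and the eventual generation 2 work) on every use.

using System.Collections.Generic;

public static class BufferPool
{
    private const int BufferSize = 1024 * 1024; // well above the 85KB LOH threshold
    private static readonly Stack<byte[]> pool = new Stack<byte[]>();

    public static byte[] Rent()
    {
        lock (pool)
        {
            return pool.Count > 0 ? pool.Pop() : new byte[BufferSize];
        }
    }

    public static void Return(byte[] buffer)
    {
        lock (pool)
        {
            pool.Push(buffer); // reuse instead of leaving it for a gen 2 collection
        }
    }
}

Newer .NET libraries offer a production-quality version of this idea in System.Buffers.ArrayPool&lt;T&gt; (available as a NuGet package).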
Avoid reference-rich objects
The garbage collector determines which objects are live by following references between objects, starting from
roots in your app. For more info, see What happens during a garbage collection. If an object contains many
references, then there is more work for the garbage collector to do. A common technique (especially with large
objects) is to convert reference rich objects into objects with no references (e.g., instead of storing a reference, store
an index). Of course this technique works only when it is logically possible to do so.
Replacing object references with indexes can be a disruptive and complicated change to your app and is most
effective for large objects with a large number of references. Do this only if you are noticing large garbage
collection times in your app related to reference-heavy objects.
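The following sketch shows the same relationship stored both ways; all of these types are hypothetical and exist only to illustrate the shape of the change.

// Reference-rich: every order holds a Customer reference the collector must trace.
class Customer { public string Name; }

class OrderWithReference
{
    public Customer Customer;  // one more reference per order for the GC to follow
}

// Reference-free: orders store an index into a central array instead.
class OrderWithIndex
{
    public int CustomerIndex;  // a plain int; nothing for the GC to trace
}

class Store
{
    // With the index approach, customer references live in this one array only.
    public Customer[] Customers;
}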
Keep the UI thread responsive
Users expect an app to remain responsive while it does computation, regardless of the type of machine. This means
different things for different apps. For some, this translates to providing more realistic physics, loading data from
disk or the web faster, quickly presenting complex scenes and navigating between pages, finding directions in a
snap, or rapidly processing data. Regardless of the type of computation, users want their app to act on their input
and eliminate instances where it appears unresponsive while it "thinks".
Your app is event-driven, which means that your code performs work in response to an event and then sits idle
until the next one. Platform code for UI (layout, input, raising events, and so on) and your app's code for UI all execute on
the same UI thread. Only one instruction can execute on that thread at a time, so if your app code takes too long to
process an event, the framework can't run layout or raise new events representing user interaction. The
responsiveness of your app is related to the availability of the UI thread to process work.
You need to use the UI thread to make almost all changes to the UI, including creating UI types and
accessing their members. You can't update the UI from a background thread, but you can post a message to the UI thread with
CoreDispatcher.RunAsync to cause code to be run there.

Note The one exception is that there's a separate render thread that can apply UI changes that won't affect how
input is handled or the basic layout. For example, many animations and transitions that don't affect layout can
run on this render thread.
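The following is a sketch of the CoreDispatcher.RunAsync pattern mentioned above. StatusPage, ReportProgress, and statusText are hypothetical names; in a real page, statusText would come from an x:Name in the page's XAML.

using Windows.UI.Core;
using Windows.UI.Xaml.Controls;

public sealed partial class StatusPage : Page
{
    // In a real app this field comes from an x:Name in the page's XAML.
    private readonly TextBlock statusText = new TextBlock();

    // Called from a background thread; posts the UI change to the UI thread.
    private async void ReportProgress(string status)
    {
        await Dispatcher.RunAsync(CoreDispatcherPriority.Normal, () =>
        {
            // This lambda runs on the UI thread, so touching UI is safe here.
            statusText.Text = status;
        });
    }
}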

Delay element instantiation


Some of the slowest stages in an app can include startup and switching views. Don't do more work than necessary
to bring up the UI that the user sees initially. For example, don't create up front the UI for progressively disclosed
elements or the contents of popups.
Use x:DeferLoadStrategy to delay-instantiate elements.
Programmatically insert elements into the tree on-demand.
CoreDispatcher.RunIdleAsync queues work for the UI thread to process when it's not busy, as in the sketch below.
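This sketch combines the last two techniques: it creates a non-essential element only when the UI thread goes idle. StartPage and rootPanel are hypothetical names; rootPanel stands in for a panel already in the page's visual tree.

using Windows.UI.Core;
using Windows.UI.Xaml.Controls;

public sealed partial class StartPage : Page
{
    // Hypothetical panel that is already part of this page's visual tree.
    private readonly StackPanel rootPanel = new StackPanel();

    private void QueueOptionalUi()
    {
        // Create the non-essential element only once the UI thread is idle,
        // so it never competes with first-frame work.
        var ignored = Dispatcher.RunIdleAsync(args =>
        {
            rootPanel.Children.Add(new TextBlock { Text = "Loaded on idle" });
        });
    }
}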

Use asynchronous APIs


To help keep your app responsive, the platform provides asynchronous versions of many of its APIs. An
asynchronous API ensures that your active execution thread never blocks for a significant amount of time. When
you call an API from the UI thread, use the asynchronous version if it's available. For more info about programming
with async patterns, see Asynchronous programming or Call asynchronous APIs in C# or Visual Basic.
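For instance, reading a packaged file with the asynchronous storage APIs keeps the UI thread free while the I/O is in flight. DataPage, LoadButton_Click, and the data.txt asset are hypothetical names for this sketch.

using System;
using Windows.Storage;
using Windows.UI.Xaml;
using Windows.UI.Xaml.Controls;

public sealed partial class DataPage : Page
{
    private async void LoadButton_Click(object sender, RoutedEventArgs e)
    {
        StorageFile file = await StorageFile.GetFileFromApplicationUriAsync(
            new Uri("ms-appx:///Assets/data.txt"));
        string contents = await FileIO.ReadTextAsync(file);
        // Execution resumes here on the UI thread after each await, so the
        // UI stayed responsive while the file was being read; use contents
        // to update the UI now.
    }
}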

Offload work to background threads


Write event handlers to return quickly. In cases where a non-trivial amount of work needs to be performed,
schedule it on a background thread and return.
You can schedule work asynchronously by using the await operator in C#, the Await operator in Visual Basic, or
delegates in C++. But this doesn't guarantee that the work you schedule will run on a background thread. Many of
the Universal Windows Platform (UWP) APIs schedule work in the background thread for you, but if you call your
app code by using only await or a delegate, you run that delegate or method on the UI thread. You have to
explicitly say when you want to run your app code on a background thread. In C# and Visual Basic you can
accomplish this by passing code to Task.Run.
Remember that UI elements may only be accessed from the UI thread. Access any UI elements on the UI thread
before launching the background work, and/or use CoreDispatcher.RunAsync or CoreDispatcher.RunIdleAsync
from the background thread.
An example of work that can be performed on a background thread is calculating computer AI in a game. The
code that calculates the computer's next move can take a lot of time to execute.

public class Example
{
    // ...
    private async void NextMove_Click(object sender, RoutedEventArgs e)
    {
        // The await causes the handler to return immediately, which keeps
        // the UI thread free while the work runs on a background thread.
        await Task.Run(() => ComputeNextMove());
        // Now update the UI with the results.
        // ...
    }

    private async Task ComputeNextMove()
    {
        // Perform background work here.
        // Don't directly access UI elements from this method.
    }
    // ...
}

Public Class Example


' ...
Private Async Sub NextMove_Click(ByVal sender As Object, ByVal e As RoutedEventArgs)
Await Task.Run(Function() ComputeNextMove())
' update the UI with results
End Sub

Private Async Function ComputeNextMove() As Task


' ...
End Function
' ...
End Class

In this example, the NextMove_Click handler returns at the await in order to keep the UI thread responsive. But
execution picks up in that handler again after ComputeNextMove (which executes on a background thread) completes.
The remaining code in the handler updates the UI with the results.
Note There's also a ThreadPool and ThreadPoolTimer API for the UWP, which can be used for similar
scenarios. For more info, see Threading and async programming.

Related topics
Custom user interactions
Optimize your XAML markup
Parsing XAML markup to construct objects in memory is time-consuming for a complex UI. Here are some things
you can do to improve XAML markup parse and load time and memory efficiency for your app.
At app startup, limit the XAML markup that is loaded to only what you need for your initial UI. Examine the markup
in your initial page and confirm it contains nothing that it doesn't need. If a page references a user control or a
resource defined in a different file, then the framework parses that file, too.
In this example, because InitialPage.xaml uses one resource from ExampleResourceDictionary.xaml, the whole of
ExampleResourceDictionary.xaml must be parsed at startup.
InitialPage.xaml.

<Page x:Class="ExampleNamespace.InitialPage" ...>
    <Page.Resources>
        <ResourceDictionary>
            <ResourceDictionary.MergedDictionaries>
                <ResourceDictionary Source="ExampleResourceDictionary.xaml"/>
            </ResourceDictionary.MergedDictionaries>
        </ResourceDictionary>
    </Page.Resources>

    <Grid>
        <TextBox Foreground="{StaticResource TextBrush}"/>
    </Grid>
</Page>

ExampleResourceDictionary.xaml.

<ResourceDictionary>
    <SolidColorBrush x:Key="TextBrush" Color="#FF3F42CC"/>

    <!-- This ResourceDictionary contains many other resources that are
         used in the app, but not during startup. -->
</ResourceDictionary>

If you use a resource on many pages throughout your app, then storing it in App.xaml is a good practice, and
avoids duplication. But App.xaml is parsed at app startup so any resource that is used in only one page (unless that
page is the initial page) should be put into the page's local resources. This counter-example shows App.xaml
containing resources that are used by only one page (that's not the initial page). This needlessly increases app
startup time.
InitialPage.xaml.

<Page ...> <!-- NOTE: EXAMPLE OF INEFFICIENT CODE; DO NOT COPY-PASTE.-->


<StackPanel>
<TextBox Foreground="{StaticResource InitialPageTextBrush}"/>
</StackPanel>
</Page> <!-- NOTE: EXAMPLE OF INEFFICIENT CODE; DO NOT COPY-PASTE.-->
SecondPage.xaml.

<Page ...> <!-- NOTE: EXAMPLE OF INEFFICIENT CODE; DO NOT COPY-PASTE.-->


<StackPanel>
<Button Content="Submit" Foreground="{StaticResource SecondPageTextBrush}" />
</StackPanel>
</Page> <!-- NOTE: EXAMPLE OF INEFFICIENT CODE; DO NOT COPY-PASTE.-->

App.xaml

<Application ...> <!-- NOTE: EXAMPLE OF INEFFICIENT CODE; DO NOT COPY-PASTE.-->


<Application.Resources>
<SolidColorBrush x:Key="DefaultAppTextBrush" Color="#FF3F42CC"/>
<SolidColorBrush x:Key="InitialPageTextBrush" Color="#FF3F42CC"/>
<SolidColorBrush x:Key="SecondPageTextBrush" Color="#FF3F42CC"/>
<SolidColorBrush x:Key="ThirdPageTextBrush" Color="#FF3F42CC"/>
</Application.Resources>
</Application> <!-- NOTE: EXAMPLE OF INEFFICIENT CODE; DO NOT COPY-PASTE.-->

The way to make the above counter-example more efficient is to move SecondPageTextBrush into SecondPage.xaml
and to move ThirdPageTextBrush into ThirdPage.xaml. InitialPageTextBrush can remain in App.xaml because application
resources must be parsed at app startup in any case.

Minimize element count


Although the XAML platform is capable of displaying large numbers of elements, you can make your app lay out
and render faster by using the fewest number of elements to achieve the visuals you want.
Layout panels have a Background property so there's no need to put a Rectangle in front of a Panel just to
color it.
Inefficient.

<Grid> <!-- NOTE: EXAMPLE OF INEFFICIENT CODE; DO NOT COPY-PASTE.-->


<Rectangle Fill="Black"/>
</Grid> <!-- NOTE: EXAMPLE OF INEFFICIENT CODE; DO NOT COPY-PASTE.-->

Efficient.

<Grid Background="Black"/>

If you reuse the same vector-based element enough times, it becomes more efficient to use an Image element
instead. Vector-based elements can be more expensive because the CPU must create each individual element
separately. The image file needs to be decoded only once.

Consolidate multiple brushes that look the same into one resource
The XAML platform tries to cache commonly-used objects so that they can be reused as often as possible. But
XAML cannot easily tell if a brush declared in one piece of markup is the same as a brush declared in another. The
example here uses SolidColorBrush to demonstrate, but the case is more likely and more important with
GradientBrush.
Inefficient.
<Page ... > <!-- NOTE: EXAMPLE OF INEFFICIENT CODE; DO NOT COPY-PASTE.-->
    <StackPanel>
        <TextBlock>
            <TextBlock.Foreground>
                <SolidColorBrush Color="#FFFFA500"/>
            </TextBlock.Foreground>
        </TextBlock>
        <Button Content="Submit">
            <Button.Foreground>
                <SolidColorBrush Color="#FFFFA500"/>
            </Button.Foreground>
        </Button>
    </StackPanel>
</Page> <!-- NOTE: EXAMPLE OF INEFFICIENT CODE; DO NOT COPY-PASTE.-->

Also check for brushes that use predefined colors: "Orange" and "#FFFFA500" are the same color. To fix the
duplication, define the brush as a resource. If controls in other pages use the same brush, move it to App.xaml.
Efficient.

<Page ... >


<Page.Resources>
<SolidColorBrush x:Key="BrandBrush" Color="#FFFFA500"/>
</Page.Resources>

<StackPanel>
<TextBlock Foreground="{StaticResource BrandBrush}" />
<Button Content="Submit" Foreground="{StaticResource BrandBrush}" />
</StackPanel>
</Page>

Minimize overdrawing
Overdrawing is where more than one object is drawn in the same screen pixels. Note that there is sometimes a
trade-off between this guidance and the desire to minimize element count.
If an element isn't visible because it's transparent or hidden behind other elements, and it's not contributing to
layout, then delete it. If the element is not visible in the initial visual state but it is visible in other visual states
then set Visibility to Collapsed on the element itself and change the value to Visible in the appropriate states.
There will be exceptions to this heuristic: in general, the value a property has in the majority of visual states is best
set locally on the element.
Use a composite element instead of layering multiple elements to create an effect. In this example, the result is a
two-toned shape where the top half is black (from the background of the Grid) and the bottom half is gray
(from the semi-transparent white Rectangle alpha-blended over the black background of the Grid). Here,
150% of the pixels necessary to achieve the result are being filled.
Inefficient.

<Grid Background="Black"> <!-- NOTE: EXAMPLE OF INEFFICIENT CODE; DO NOT COPY-PASTE.-->


<Grid.RowDefinitions>
<RowDefinition Height="*"/>
<RowDefinition Height="*"/>
</Grid.RowDefinitions>
<Rectangle Grid.Row="1" Fill="White" Opacity=".5"/>
</Grid> <!-- NOTE: EXAMPLE OF INEFFICIENT CODE; DO NOT COPY-PASTE.-->

Efficient.
<Grid>
<Grid.RowDefinitions>
<RowDefinition Height="*"/>
<RowDefinition Height="*"/>
</Grid.RowDefinitions>
<Rectangle Fill="Black"/>
<Rectangle Grid.Row="1" Fill="#FF7F7F7F"/>
</Grid>

A layout panel can have two purposes: to color an area, and to lay out child elements. If an element further back
in z-order is already coloring an area then a layout panel in front does not need to paint that area: instead it can
just focus on laying out its children. Here's an example.
Inefficient.

<!-- NOTE: EXAMPLE OF INEFFICIENT CODE; DO NOT COPY-PASTE.-->


<GridView Background="Blue">
<GridView.ItemTemplate>
<DataTemplate>
<Grid Background="Blue"/>
</DataTemplate>
</GridView.ItemTemplate>
</GridView> <!-- NOTE: EXAMPLE OF INEFFICIENT CODE; DO NOT COPY-PASTE.-->

Efficient.

<GridView Background="Blue">
<GridView.ItemTemplate>
<DataTemplate>
<Grid/>
</DataTemplate>
</GridView.ItemTemplate>
</GridView>

If the Grid has to be hit-testable then set a background value of transparent on it.
Use a Border element to draw a border around an object. In this example, a Grid is used as a makeshift border
around a TextBox. But all the pixels in the center cell are overdrawn.
Inefficient.

<!-- NOTE: EXAMPLE OF INEFFICIENT CODE; DO NOT COPY-PASTE.-->


<Grid Background="Blue" Width="300" Height="45">
<Grid.RowDefinitions>
<RowDefinition Height="5"/>
<RowDefinition/>
<RowDefinition Height="5"/>
</Grid.RowDefinitions>
<Grid.ColumnDefinitions>
<ColumnDefinition Width="5"/>
<ColumnDefinition/>
<ColumnDefinition Width="5"/>
</Grid.ColumnDefinitions>
<TextBox Grid.Row="1" Grid.Column="1"></TextBox>
</Grid> <!-- NOTE: EXAMPLE OF INEFFICIENT CODE; DO NOT COPY-PASTE.-->

Efficient.
<Border BorderBrush="Blue" BorderThickness="5" Width="300" Height="45">
<TextBox/>
</Border>

Be aware of margins. Two neighboring elements can overlap (possibly accidentally) if negative margins extend
into another element's render bounds and cause overdrawing.
Use DebugSettings.IsOverdrawHeatMapEnabled as a visual diagnostic (a sketch of enabling it follows this list).
You may find objects being drawn that you weren't aware were in the scene.
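The heat map is typically turned on from the App constructor, and only in debug builds. The #if DEBUG guard here is a convention of this sketch rather than a requirement.

using Windows.UI.Xaml;

sealed partial class App : Application
{
    public App()
    {
        this.InitializeComponent();
#if DEBUG
        // Tint each pixel by how many times it is drawn per frame;
        // darker regions indicate more overdraw.
        DebugSettings.IsOverdrawHeatMapEnabled = true;
#endif
    }
}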

Cache static content


Another source of overdrawing is a shape made from many overlapping elements. If you set CacheMode to
BitmapCache on the UIElement that contains the composite shape then the platform renders the element to a
bitmap once and then uses that bitmap each frame instead of overdrawing.
Inefficient.

<Canvas Background="White">
<Ellipse Height="40" Width="40" Fill="Blue"/>
<Ellipse Canvas.Left="21" Height="40" Width="40" Fill="Blue"/>
<Ellipse Canvas.Top="13" Canvas.Left="10" Height="40" Width="40" Fill="Blue"/>
</Canvas>

This markup renders the intended shape, but an overdraw map of it shows darker red (higher amounts of
overdraw) where the ellipses overlap.

Efficient.

<Canvas Background="White" CacheMode="BitmapCache">


<Ellipse Height="40" Width="40" Fill="Blue"/>
<Ellipse Canvas.Left="21" Height="40" Width="40" Fill="Blue"/>
<Ellipse Canvas.Top="13" Canvas.Left="10" Height="40" Width="40" Fill="Blue"/>
</Canvas>

Note the use of CacheMode. Don't use this technique if any of the sub-shapes animate because the bitmap cache
will likely need to be regenerated every frame, defeating the purpose.

ResourceDictionaries
ResourceDictionaries are generally used to store your resources at a somewhat global level; that is, resources that
your app wants to reference in multiple places, such as styles, brushes, and templates. In general, we have
optimized ResourceDictionaries to not instantiate resources unless they're asked for. But there are a few places
where you need to be a little careful.
Resources with x:Name. Any resource with x:Name will not benefit from the platform optimization; instead, it
is instantiated as soon as the ResourceDictionary is created. This happens because x:Name tells the platform
that your app needs field access to the resource, so the platform must create the resource up front for that field
to reference.
ResourceDictionaries in a UserControl. ResourceDictionaries defined inside of a UserControl carry a penalty.
The platform creates a copy of such a ResourceDictionary for every instance of the UserControl. If you have a
UserControl that is used a lot, then move the ResourceDictionary out of the UserControl and put it at the page level.

Use XBF2
XBF2 is a binary representation of XAML markup that avoids all text-parsing costs at runtime. It also optimizes your
binary for load and tree creation, and allows "fast-path" for XAML types to improve heap and object creation costs,
for example VSM, ResourceDictionary, Styles, and so on. It is completely memory-mapped so there is no heap
footprint for loading and reading a XAML Page. In addition, it reduces the disk footprint of stored XAML pages in
an appx. XBF2 is a more compact representation, and it can reduce the disk footprint of comparable XAML/XBF1 files
by up to 50%. For example, the built-in Photos app saw around a 60% reduction after conversion to XBF2, dropping
from about 1 MB of XBF1 assets to about 400 KB of XBF2 assets. We have also seen apps benefit anywhere from 15 to
20% in CPU and 10 to 15% in Win32 heap.
XAML built-in controls and dictionaries that the framework provides are already fully XBF2-enabled. For your own
app, ensure that your project file declares TargetPlatformVersion 8.2 or later.
To check whether you have XBF2, open your app in a binary editor; the 12th and 13th bytes are 00 02 if you have
XBF2.
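The same check can be sketched in code instead of a binary editor. Interpreting the "12th and 13th bytes" as 1-based positions (zero-based offsets 11 and 12) is an assumption of this sketch.

using System.IO;

static class XbfCheck
{
    public static bool LooksLikeXbf2(string path)
    {
        byte[] header = new byte[16];
        using (FileStream stream = File.OpenRead(path))
        {
            if (stream.Read(header, 0, header.Length) < 13)
            {
                return false; // file too short to carry an XBF2 header
            }
        }
        // 00 02 at these offsets indicates an XBF2 file.
        return header[11] == 0x00 && header[12] == 0x02;
    }
}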
Optimize your XAML layout
Important APIs
Panel
Layout is the process of defining the visual structure for your UI. The primary mechanism for describing layout in
XAML is through panels, which are container objects that enable you to position and arrange the UI elements within
them. Layout can be an expensive part of a XAML app, both in CPU usage and memory overhead. Here are some
simple steps you can take to improve the layout performance of your XAML app.

Reduce layout structure


The biggest gain in layout performance comes from simplifying the hierarchical structure of the tree of UI elements.
Panels exist in the visual tree, but they are structural elements, not pixel producing elements like a Button or a
Rectangle. Simplifying the tree by reducing the number of non-pixel-producing elements typically provides a
significant performance increase.
Many UIs are implemented by nesting panels which results in deep, complex trees of panels and elements. It is
convenient to nest panels, but in many cases the same UI can be achieved with a more complex single panel. Using
a single panel provides better performance.
When to reduce layout structure
Reducing layout structure in a trivial way (for example, removing one nested panel from your top-level page) does
not have a noticeable effect.
The largest performance gains come from reducing layout structure that's repeated in the UI, like in a ListView or
GridView. These ItemsControl elements use a DataTemplate, which defines a subtree of UI elements that is
instantiated many times. When the same subtree is duplicated many times in your app, any improvement to
the performance of that subtree has a multiplicative effect on the overall performance of your app.
Examples
Consider the following UI: a simple form with a set of option check boxes, labeled text boxes, and a Save button.
These examples show three ways of implementing it. Each implementation choice results in nearly identical
pixels on the screen, but differs substantially in the implementation details.
Option 1: Nested StackPanel elements
Although this is the simplest model, it uses five panel elements and results in significant overhead.

<StackPanel>
<TextBlock Text="Options:" />
<StackPanel Orientation="Horizontal">
<CheckBox Content="Power User" />
<CheckBox Content="Admin" Margin="20,0,0,0" />
</StackPanel>
<TextBlock Text="Basic information:" />
<StackPanel Orientation="Horizontal">
<TextBlock Text="Name:" Width="75" />
<TextBox Width="200" />
</StackPanel>
<StackPanel Orientation="Horizontal">
<TextBlock Text="Email:" Width="75" />
<TextBox Width="200" />
</StackPanel>
<StackPanel Orientation="Horizontal">
<TextBlock Text="Password:" Width="75" />
<TextBox Width="200" />
</StackPanel>
<Button Content="Save" />
</StackPanel>

Option 2: A single Grid


The Grid adds some complexity, but uses only a single panel element.

<Grid>
<Grid.RowDefinitions>
<RowDefinition Height="Auto" />
<RowDefinition Height="Auto" />
<RowDefinition Height="Auto" />
<RowDefinition Height="Auto" />
<RowDefinition Height="Auto" />
<RowDefinition Height="Auto" />
<RowDefinition Height="Auto" />
</Grid.RowDefinitions>
<Grid.ColumnDefinitions>
<ColumnDefinition Width="Auto" />
<ColumnDefinition Width="Auto" />
</Grid.ColumnDefinitions>
<TextBlock Text="Options:" Grid.ColumnSpan="2" />
<CheckBox Content="Power User" Grid.Row="1" Grid.ColumnSpan="2" />
<CheckBox Content="Admin" Margin="150,0,0,0" Grid.Row="1" Grid.ColumnSpan="2" />
<TextBlock Text="Basic information:" Grid.Row="2" Grid.ColumnSpan="2" />
<TextBlock Text="Name:" Width="75" Grid.Row="3" />
<TextBox Width="200" Grid.Row="3" Grid.Column="1" />
<TextBlock Text="Email:" Width="75" Grid.Row="4" />
<TextBox Width="200" Grid.Row="4" Grid.Column="1" />
<TextBlock Text="Password:" Width="75" Grid.Row="5" />
<TextBox Width="200" Grid.Row="5" Grid.Column="1" />
<Button Content="Save" Grid.Row="6" />
</Grid>

Option 3: A single RelativePanel


This single panel is also a bit more complex than using nested panels, but may be easier to understand and
maintain than a Grid.
<RelativePanel>
<TextBlock Text="Options:" x:Name="Options" />
<CheckBox Content="Power User" x:Name="PowerUser" RelativePanel.Below="Options" />
<CheckBox Content="Admin" Margin="20,0,0,0"
RelativePanel.RightOf="PowerUser" RelativePanel.Below="Options" />
<TextBlock Text="Basic information:" x:Name="BasicInformation"
RelativePanel.Below="PowerUser" />
<TextBlock Text="Name:" RelativePanel.AlignVerticalCenterWith="NameBox" />
<TextBox Width="200" Margin="75,0,0,0" x:Name="NameBox"
RelativePanel.Below="BasicInformation" />
<TextBlock Text="Email:" RelativePanel.AlignVerticalCenterWith="EmailBox" />
<TextBox Width="200" Margin="75,0,0,0" x:Name="EmailBox"
RelativePanel.Below="NameBox" />
<TextBlock Text="Password:" RelativePanel.AlignVerticalCenterWith="PasswordBox" />
<TextBox Width="200" Margin="75,0,0,0" x:Name="PasswordBox"
RelativePanel.Below="EmailBox" />
<Button Content="Save" RelativePanel.Below="PasswordBox" />
</RelativePanel>

As these examples show, there are many ways of achieving the same UI. You should choose by carefully
considering all the tradeoffs, including performance, readability, and maintainability.

Use single-cell grids for overlapping UI


A common UI requirement is to have a layout where elements overlap each other. Typically padding, margins,
alignments, and transforms are used to position the elements this way. The XAML Grid control is optimized to
improve layout performance for elements that overlap.
Important To see the improvement, use a single-cell Grid. Do not define RowDefinitions or ColumnDefinitions.
Examples

<Grid>
<Ellipse Fill="Red" Width="200" Height="200" />
<TextBlock Text="Test"
HorizontalAlignment="Center"
VerticalAlignment="Center" />
</Grid>

<Grid Width="200" BorderBrush="Black" BorderThickness="1">


<TextBlock Text="Test1" HorizontalAlignment="Left" />
<TextBlock Text="Test2" HorizontalAlignment="Right" />
</Grid>
Use a panel's built-in border properties
Grid, StackPanel, RelativePanel, and ContentPresenter controls have built-in border properties that let you
draw a border around them without adding an additional Border element to your XAML. The new properties that
support the built-in border are: BorderBrush, BorderThickness, CornerRadius, and Padding. Each of these is a
DependencyProperty, so you can use them with bindings and animations. Theyre designed to be a full
replacement for a separate Border element.
If your UI has Border elements around these panels, use the built-in border instead, which saves an extra element
in the layout structure of your app. As mentioned previously, this can be a significant savings, especially in the case
of repeated UI.
Examples

<RelativePanel BorderBrush="Red" BorderThickness="2" CornerRadius="10" Padding="12">


<TextBox x:Name="textBox1" RelativePanel.AlignLeftWithPanel="True"/>
<Button Content="Submit" RelativePanel.Below="textBox1"/>
</RelativePanel>

Use SizeChanged events to respond to layout changes


The FrameworkElement class exposes two similar events for responding to layout changes: LayoutUpdated and
SizeChanged. You might be using one of these events to receive notification when an element is resized during
layout. The semantics of the two events are different, and there are important performance considerations in
choosing between them.
For good performance, SizeChanged is almost always the right choice. SizeChanged has intuitive semantics. It is
raised during layout when the size of the FrameworkElement has been updated.
LayoutUpdated is also raised during layout, but it has global semantics: it is raised on every element whenever
any element is updated. It is typical to do only local processing in the event handler, in which case the code is run
more often than needed. Use LayoutUpdated only if you need to know when an element is repositioned without
changing size (which is uncommon).
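A minimal SizeChanged handler looks like the following sketch; AdaptivePage and the handler name are hypothetical, and the handler would typically be attached in XAML (SizeChanged="Page_SizeChanged") or in code-behind.

using Windows.UI.Xaml;
using Windows.UI.Xaml.Controls;

public sealed partial class AdaptivePage : Page
{
    private void Page_SizeChanged(object sender, SizeChangedEventArgs e)
    {
        // Runs only when this element's own size changes, not on every
        // layout pass anywhere in the tree (unlike LayoutUpdated).
        if (e.NewSize.Width != e.PreviousSize.Width)
        {
            // React to the width change, for example by choosing a visual state.
        }
    }
}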

Choosing between panels


Performance is typically not a consideration when choosing between individual panels. That choice is typically
made by considering which panel provides the layout behavior that is closest to the UI you're implementing. For
example, if you're choosing between Grid, StackPanel, and RelativePanel, you should choose the panel that
provides the closest mapping to your mental model of the implementation.
Every XAML panel is optimized for good performance, and all the panels provide similar performance for similar UI.
MVVM and language performance tips
This topic discusses some performance considerations related to your choice of software design patterns, and
programming language.

The Model-View-ViewModel (MVVM) pattern


The Model-View-ViewModel (MVVM) pattern is common in a lot of XAML apps. (MVVM is very similar to Fowler's
description of the Model-View-Presenter pattern, but it is tailored to XAML.) The issue with the MVVM pattern is
that it can inadvertently lead to apps that have too many layers and too many allocations. The motivations for
MVVM are these.
Separation of concerns. It's always helpful to divide a problem into smaller pieces, and a pattern like MVVM or
MVC is a way to divide an app (or even a single control) into smaller pieces: the actual view, a logical model of
the view (the view-model), and the view-independent app logic (the model). In particular, it's a popular workflow to
have designers own the view using one tool, developers own the model using another tool, and design
integrators own the view-model using both tools.
Unit testing. You can unit test the view-model (and consequently the model) independent of the view, thereby
not relying on creating windows, driving input, and so on. By keeping the view small, you can test a large
portion of your app without ever having to create a window.
Agility to user experience changes. The view tends to see the most frequent changes, and the most late
changes, as the user experience is tweaked based on end-user feedback. By keeping the view separate, these
changes can be accommodated more quickly and with less churn to the app.
There are multiple concrete definitions of the MVVM pattern, and 3rd party frameworks that help implement it. But
strict adherence to any variation of the pattern can lead to apps with a lot more overhead than can be justified.
XAML data binding (the {Binding} markup extension) was designed in part to enable model/view patterns. But
{Binding} brings with it non-trivial working set and CPU overhead. Creating a {Binding} causes a series of
allocations, and updating a binding target can cause reflection and boxing. These problems are being addressed
with the {x:Bind} markup extension, which compiles the bindings at build time. Recommendation: use {x:Bind}.
It's popular in MVVM to connect Button.Click to the view-model using an ICommand, such as the common
DelegateCommand or RelayCommand helpers. Those commands are extra allocations, though, including the
CanExecuteChanged event listener, adding to the working set and to the startup/navigation time for the
page. Recommendation: As an alternative to the convenient ICommand interface, consider putting event
handlers in your code-behind, attaching them to the view's events, and calling a method on your view-model
when those events are raised (see the sketch after this list). You'll also need to add extra code to disable the
Button when the command is unavailable.
It's popular in MVVM to create a Page with all possible configurations of the UI, then collapse parts of the tree
by binding the Visibility property to properties in the VM. This adds unnecessarily to startup time and possibly
to working set (because some parts of the tree may never become visible). Recommendations: Use the
x:DeferLoadStrategy feature to defer unnecessary portions of the tree out of startup. Also, create separate user
controls for the different modes of the page and use code-behind to keep only the necessary controls loaded.
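The code-behind alternative to ICommand mentioned above might look like the following sketch. EditPage, EditViewModel, Submit, and CanSubmit are all hypothetical names; the point is that a plain Click handler plus one line of manual state synchronization replaces the command object and its CanExecuteChanged listener.

using Windows.UI.Xaml;
using Windows.UI.Xaml.Controls;

public sealed partial class EditPage : Page
{
    // Hypothetical view-model exposing a plain method and a bool,
    // instead of an ICommand.
    private readonly EditViewModel viewModel = new EditViewModel();

    // In a real app this field comes from an x:Name in the page's XAML.
    private readonly Button submitButton = new Button();

    private void SubmitButton_Click(object sender, RoutedEventArgs e)
    {
        viewModel.Submit();
        // The "extra code" mentioned above: keep the button's enabled
        // state in sync by hand.
        submitButton.IsEnabled = viewModel.CanSubmit;
    }
}

public class EditViewModel
{
    public bool CanSubmit { get; private set; }
    public void Submit() { /* app logic goes here */ }
}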

C++/CX recommendations
Use the latest version. There are continual performance improvements made to the C++/CX compiler. Ensure
your app is building using the latest toolset.
Disable RTTI (/GR-). RTTI is on by default in the compiler, so unless your build environment switches it off,
you're probably using it. RTTI has significant overhead, and unless your code has a deep dependency on it, you
should turn it off. The XAML framework has no requirement that your code use RTTI.
Avoid heavy use of ppltasks. Ppltasks are very convenient when calling async WinRT APIs, but they come with
significant code-size overhead. The C++/CX team is working on a language feature, await, that will provide
much better performance. In the meantime, balance your use of ppltasks in the hot paths of your code.
Avoid use of C++/CX in the business logic of your app. C++/CX is designed to be a convenient way to
access WinRT APIs from C++ apps. It makes use of wrappers that have overhead. You should avoid C++/CX
inside the business logic/model of your class, and reserve it for use at the boundaries between your code and
WinRT.
Best practices for your app's startup performance
Create Universal Windows Platform (UWP) apps with optimal startup times by improving the way you handle
launch and activation.

In part, users perceive whether your app is fast or slow based on how long it takes to start up. For the purposes of
this topic, an app's startup time begins when the user starts the app, and ends when the user can interact with the
app in some meaningful way. This section provides suggestions on how to get better performance out of your app
when it starts.
Measuring your app's startup time
Be sure to start your app a few times before you actually measure its startup time. This gives you a baseline for
your measurement and ensures that you're measuring as reasonably short a startup time as possible.
By the time your UWP app arrives on your customers' computers, your app has been compiled with the .NET Native
toolchain. .NET Native is an ahead-of-time compilation technology that converts MSIL into natively-runnable
machine code. .NET Native apps start faster, use less memory, and use less battery than their MSIL counterparts.
Applications built with .NET Native statically link in a custom runtime and the new converged .NET Core that can
run on all devices, so they don't depend on the in-box .NET implementation. On your development machine, by
default your app uses .NET Native if you're building it in Release mode, and it uses CoreCLR if you're building it in
Debug mode. You can configure this in Visual Studio from the Build page in Properties (C#) or Compile ->
Advanced in "My Project" (VB). Look for a checkbox that says Compile with .NET Native Toolchain.
Of course, you should take measurements that are representative of what the end user will experience. So, if you're
not sure you're compiling your app to native code on your development machine, you could run the Native Image
Generator (Ngen.exe) tool to precompile your app before you measure its startup time.
The following procedure describes how to run Ngen.exe to precompile your app.
To run Ngen.exe
1. Run your app at least one time to ensure that Ngen.exe detects it.
2. Open the Task Scheduler by doing one of the following:
Search for "Task Scheduler" from the start screen.
Run "taskschd.msc."
3. In the left-hand pane of Task Scheduler, expand Task Scheduler Library.
4. Expand Microsoft.
5. Expand Windows.
6. Select .NET Framework.
7. Select .NET Framework NGEN v4.x from the task list.
If you are using a 64-bit computer, there is also a .NET Framework NGEN v4.x 64. If you are building a 64-
bit app, select .NET Framework NGEN v4.x 64.
8. From the Action menu, click Run.
Ngen.exe precompiles all the apps on the machine that have been used and do not have native images. If there are
a lot of apps that need to be precompiled, this can take a long time, but subsequent runs are much faster.
When you recompile your app, the native image is no longer used. Instead, the app is just-in-time compiled, which
means that it is compiled as the app runs. You must rerun Ngen.exe to get a new native image.
Defer work as long as possible
To improve your app's startup time, do only the work that absolutely needs to be done to let the user start
interacting with the app. This can be especially beneficial if you can delay loading additional assemblies. The
common language runtime loads an assembly the first time it is used. If you can minimize the number of
assemblies that are loaded, you might be able to improve your app's startup time and its memory consumption.
Do long-running work independently
Your app can be interactive even though there are parts of the app that aren't fully functional. For example, if your
app displays data that takes a while to retrieve, you can make that code execute independently of the app's startup
code by retrieving the data asynchronously. When the data is available, populate the app's user interface with the
data.
Many of the Universal Windows Platform (UWP) APIs that retrieve data are asynchronous, so you will probably be
retrieving data asynchronously anyway. For more info about asynchronous APIs, see Call asynchronous APIs in C#
or Visual Basic. If you do work that doesn't use asynchronous APIs, you can use the Task class to do long running
work so that you don't block the user from interacting with the app. This will keep your app responsive to the user
while the data loads.
If your app takes an especially long time to load part of its UI, consider adding a string in that area that says
something like, "Getting latest data," so that your users know that the app is still processing.

Minimize startup time


All but the simplest apps require a perceivable amount of time to load resources, parse XAML, set up data
structures, and run logic at activation. Here, we analyze the process of activation by breaking it into three phases.
We also provide tips for reducing the time spent in each phase, and techniques for making each phase of your
app's startup more palatable to the user.
The activation period is the time between the moment a user starts the app and the moment the app is functional.
This is a critical time because it's a user's first impression of your app. Users expect instant and continuous feedback
from the system and apps. The system and the app are perceived to be broken or poorly designed when apps don't
start quickly. Even worse, if an app takes too long to activate, the Process Lifetime Manager (PLM) might kill it, or
the user might uninstall it.
Introduction to the stages of startup
Startup involves a number of moving pieces, and all of them need to be correctly coordinated for the best user
experience. The following steps occur between your user clicking on your app tile and the application content being
shown.
The Windows shell starts the process and Main is called.
The Application object is created.
(Project template) The App constructor calls InitializeComponent, which causes App.xaml to be parsed and objects
created.
The Application.OnLaunched event is raised.
(Project template) App code creates a Frame and navigates to MainPage.
(Project template) The MainPage constructor calls InitializeComponent, which causes MainPage.xaml to be
parsed and objects created.
(Project template) Window.Current.Activate() is called.
XAML Platform runs the Layout pass including Measure & Arrange.
ApplyTemplate will cause control template content to be created for each control, which is typically the
bulk of Layout time for startup.
Render is called to create visuals for all the window contents.
Frame is presented to the Desktop Windows Manager (DWM).
Do less in your Startup path
Keep your startup code path free from anything that is not needed for your first frame.
If you have user DLLs containing controls that are not needed during the first frame, consider delay-loading them.
If you have a portion of your UI dependent on data from the cloud, then split that UI. First, bring up the UI that is
not dependent on cloud data and asynchronously bring up the cloud-dependent UI. You should also consider
caching data locally so that the application will work offline or not be affected by slow network connectivity.
Show progress UI if your UI is waiting for data.
Be cautious of app designs that involve a lot of parsing of configuration files, or UI that is dynamically generated
by code.
Reduce element count
Startup performance in a XAML app is directly correlated to the number of elements you create during startup. The
fewer elements you create, the less time your app will take to start up. As a rough benchmark, consider each
element to take 1ms to create.
Templates used in items controls can have the biggest impact, as they are repeated multiple times. See ListView
and GridView UI optimization.
UserControls and control templates will be expanded, so those should also be taken into account.
If you create any XAML that does not appear on the screen, then you should justify whether those pieces of
XAML should be created during your startup.
The Visual Studio Live Visual Tree window shows the child element counts for each node in the tree.
Use x:DeferLoadStrategy. Collapsing an element, or setting its opacity to 0, will not prevent the element from
being created. Using x:DeferLoadStrategy, you can delay the loading of a piece of UI and load it when needed. This
is a good way to delay processing UI that is not visible during the startup screen, so that you can load it when
needed, or as part of a set of delayed logic. To trigger the loading, you need only call FindName for the element,
as shown below. For an example and more information, see x:DeferLoadStrategy attribute.
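The FindName call looks like the following sketch. OptionsPage and "DeferredOptionsPanel" are hypothetical names; the string would be the x:Name of an element declared with x:DeferLoadStrategy="Lazy" in the page's XAML.

using Windows.UI.Xaml;
using Windows.UI.Xaml.Controls;

public sealed partial class OptionsPage : Page
{
    private void ShowOptions()
    {
        // FindName realizes the deferred element and returns it; after this
        // call the element exists in the tree and can be made visible.
        var panel = FindName("DeferredOptionsPanel") as UIElement;
    }
}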
Virtualization. If you have list or repeater content in your UI, then it's highly advised that you use UI virtualization.
If list UI is not virtualized, then you are paying the cost of creating all the elements up front, and that can slow down
your startup. See ListView and GridView UI optimization.
Application performance is not only about raw performance, it's also about perception. Changing the order of
operations so that visual aspects occur first will commonly make the user feel like the application is faster. Users
will consider the application loaded when the content is on the screen. Commonly, applications need to do multiple
things as part of the startup, and not all of that is required to bring up the UI, so those should be delayed or
prioritized lower than the UI.
This topic talks about the "first frame", a term that comes from animation/TV; it is a measure of how long it takes
until content is seen by the end user.
Improve startup perception
Let's use the example of a simple online game to identify each phase of startup and different techniques to give the
user feedback throughout the process. For this example, the first phase of activation is the time between the user
tapping the game's tile and the game starting to run its code. During this time, the system doesn't have any content
to display to the user to even indicate that the correct game has started. But providing a splash screen gives that
content to the system. The game then informs the user that the first phase of activation has completed by replacing
the static splash screen with its own UI when it begins running code.
The second phase of activation encompasses creating and initializing structures critical for the game. If an app can
quickly create its initial UI with the data available after the first phase of activation, then the second phase is trivial
and you can display the UI immediately. Otherwise we recommend that the app display a loading page while it is
initialized.
What the loading page looks like is up to you and it can be as simple as displaying a progress bar or a progress
ring. The key point is that the app indicates that it is performing tasks before becoming responsive. In the case of
the game, it would like to display its initial screen but that UI requires that some images and sounds be loaded from
disk into memory. These tasks take a couple of seconds, so the app keeps the user informed by replacing the splash
screen with a loading page, which shows a simple animation related to the theme of the game.
The third stage begins after the game has a minimal set of info to create an interactive UI, which replaces the
loading page. At this point the only info available to the online game is the content that the app loaded from disk.
The game can ship with enough content to create an interactive UI; but because it's an online game, it won't be
functional until it connects to the internet and downloads some additional info. Until it has all the info it needs to be
functional, the user can interact with the UI, but features that need additional data from the web should give
feedback that content is still loading. It may take some time for an app to become fully functional, so it's important
that functionality be made available as soon as possible.
Now that we have identified the three stages of activation in the online game, lets tie them to actual code.
Phase 1
Before an app starts, it needs to tell the system what it wants to display as the splash screen. It does so by providing
an image and background color to the SplashScreen element in an app's manifest, as in the example. Windows
displays this after the app begins activation.

<Package ...>
...
<Applications>
<Application ...>
<VisualElements ...>
...
<SplashScreen Image="Images\splashscreen.png" BackgroundColor="#000000" />
...
</VisualElements>
</Application>
</Applications>
</Package>

For more info, see Add a splash screen.


Use the app's constructor only to initialize data structures that are critical to the app. The constructor is called only
the first time the app is run and not necessarily each time the app is activated. For example, the constructor isn't
called for an app that has been run, placed in the background, and then activated via the search contract.
Phase 2
There are a number of reasons for an app to be activated, each of which you may want to handle differently. You
can override OnActivated, OnCachedFileUpdaterActivated, OnFileActivated, OnFileOpenPickerActivated,
OnFileSavePickerActivated, OnLaunched, OnSearchActivated, and OnShareTargetActivated methods to
handle each reason of activation. One of the things that an app must do in these methods is create a UI, assign it to
Window.Content, and then call Window.Activate. At this point the splash screen is replaced by the UI that the
app created. This visual could be either a loading screen or the app's actual UI, if enough info is available at activation
to create it.
public partial class App : Application
{
// A handler for regular activation.
async protected override void OnLaunched(LaunchActivatedEventArgs args)
{
base.OnLaunched(args);

// Asynchronously restore state based on generic launch.

// Create the ExtendedSplash screen which serves as a loading page while the
// reader downloads the section information.
ExtendedSplash eSplash = new ExtendedSplash();

// Set the content of the window to the extended splash screen.


Window.Current.Content = eSplash;

// Notify the Window that the process of activation is completed


Window.Current.Activate();
}

// a different handler for activation via the search contract


async protected override void OnSearchActivated(SearchActivatedEventArgs args)
{
base.OnSearchActivated(args);

// Do an asynchronous restore based on Search activation

// the rest of the code is the same as the OnLaunched method


}
}

partial class ExtendedSplash : Page


{
// This is the UIElement that's the game's home page.
private GameHomePage homePage;

public ExtendedSplash()
{
InitializeComponent();
homePage = new GameHomePage();
}

// Shown for demonstration purposes only.


// This is typically autogenerated by Visual Studio.
private void InitializeComponent()
{
}
}
Partial Public Class App
Inherits Application

' A handler for regular activation.


Protected Overrides Async Sub OnLaunched(ByVal args As LaunchActivatedEventArgs)
MyBase.OnLaunched(args)

' Asynchronously restore state based on generic launch.

' Create the ExtendedSplash screen which serves as a loading page while the
' reader downloads the section information.
Dim eSplash As New ExtendedSplash()

' Set the content of the window to the extended splash screen.
Window.Current.Content = eSplash

' Notify the Window that the process of activation is completed


Window.Current.Activate()
End Sub

' a different handler for activation via the search contract


Protected Overrides Async Sub OnSearchActivated(ByVal args As SearchActivatedEventArgs)
MyBase.OnSearchActivated(args)

' Do an asynchronous restore based on Search activation

' the rest of the code is the same as the OnLaunched method
End Sub
End Class

Partial Friend Class ExtendedSplash


Inherits Page

Public Sub New()


InitializeComponent()

' Download the data necessary for the
' initial UI on a background thread.
Task.Run(Sub() DownloadData())
End Sub

Private Sub DownloadData()


' Download data to populate the initial UI.

' Create the first page.


Dim firstPage As New MainPage()

' Add the data just downloaded to the first page.

' Replace the loading page, which is currently set as the window's
' content, with the initial UI for the app. (Note: window content must
' be changed on the UI thread; a real app would dispatch this assignment
' there rather than set it directly from this background thread.)
Window.Current.Content = firstPage
End Sub

' Shown for demonstration purposes only.


' This is typically autogenerated by Visual Studio.
Private Sub InitializeComponent()
End Sub
End Class

Apps that display a loading page in the activation handler begin work to create the UI in the background. After that
element has been created, its FrameworkElement.Loaded event occurs. In the event handler you replace the
window's content, which is currently the loading screen, with the newly created home page.
It's critical that an app with an extended initialization period show a loading page. Aside from providing the user
feedback about the activation process, the process will be terminated if Window.Activate is not called within 15
seconds of the start of the activation process.

partial class GameHomePage : Page
{
    public GameHomePage()
    {
        InitializeComponent();

        // Add a handler to be called when the home page has been loaded.
        this.Loaded += ReaderHomePageLoaded;

        // Load the minimal amount of image and sound data from disk necessary to create the home page.
    }

    void ReaderHomePageLoaded(object sender, RoutedEventArgs e)
    {
        // Set the content of the window to the home page now that it's ready to be displayed.
        Window.Current.Content = this;
    }

    // Shown for demonstration purposes only.
    // This is typically autogenerated by Visual Studio.
    private void InitializeComponent()
    {
    }
}

Partial Friend Class GameHomePage
    Inherits Page

    Public Sub New()
        InitializeComponent()

        ' Add a handler to be called when the home page has been loaded.
        AddHandler Me.Loaded, AddressOf ReaderHomePageLoaded

        ' Load the minimal amount of image and sound data from disk necessary to create the home page.
    End Sub

    Private Sub ReaderHomePageLoaded(ByVal sender As Object, ByVal e As RoutedEventArgs)
        ' Set the content of the window to the home page now that it's ready to be displayed.
        Window.Current.Content = Me
    End Sub

    ' Shown for demonstration purposes only.
    ' This is typically autogenerated by Visual Studio.
    Private Sub InitializeComponent()
    End Sub
End Class

For an example of using extended splash screens, see Splash screen sample.
Phase 3
Just because the app displayed the UI doesn't mean it is completely ready for use. In the case of our game, the UI is
displayed with placeholders for features that require data from the internet. At this point the game downloads the
additional data needed to make the app fully functional and progressively enables features as data is acquired.
Sometimes much of the content needed for activation can be packaged with the app. Such is the case with a simple
game. This makes the activation process quite simple. But many programs (such as news readers and photo
viewers) must pull information from the web to become functional. This data can be large and take a fair amount of
time to download. How the app gets this data during the activation process can have a huge impact on the
perceived performance of an app.
You could end up displaying a loading page, or worse, a splash screen, for minutes if an app tries to download its
entire data set during phase one or two of activation. That can make the app look like it's hung, or cause it to be
terminated by the system. We recommend that an app download the minimal amount of data needed to show an
interactive UI with placeholder elements in phase 2 and then progressively load data, which replaces the
placeholder elements, in phase 3. For more info on dealing with data, see Optimize ListView and GridView.
How exactly an app reacts to each phase of startup is completely up to you, but providing the user as much
feedback as possible (splash screen, loading screen, UI while data loads) makes the user feel as though an app, and
the system as a whole, are fast.
Minimize managed assemblies in the startup path
Reusable code often comes in the form of modules (DLLs) included in a project. Loading these modules requires
accessing the disk, and as you can imagine, the cost of doing so can add up. This has the greatest impact on cold
startup, but it can have an impact on warm startup, too. In the case of C# and Visual Basic, the CLR tries to delay
that cost as much as possible by loading assemblies on demand. That is, the CLR doesn't load a module until an
executed method references it. So, reference only assemblies that are necessary to the launch of your app in startup
code so that the CLR doesn't load unnecessary modules. If you have unused code paths in your startup path that
have unnecessary references, you can move these code paths to other methods to avoid the unnecessary loads.
Another way to reduce module loads is to combine your app modules. Loading one large assembly typically takes
less time than loading two small ones. This is not always possible, and you should combine modules only if it
doesn't make a material difference to developer productivity or code reusability. You can use tools such as
PerfView or the Windows Performance Analyzer (WPA) to find out what modules are loaded on startup.
Make smart web requests
You can dramatically improve the loading time of an app by packaging its contents locally, including XAML, images,
and any other files important to the app. Disk operations are faster than network operations. If an app needs a
particular file at initialization, you can reduce the overall startup time by loading it from disk instead of retrieving it
from a remote server.

Journal and cache pages efficiently


The Frame control provides navigation features. It offers navigation to a Page (Navigate method), navigation
journaling (BackStack/ForwardStack properties, GoForward/GoBack methods), page caching
(Page.NavigationCacheMode), and serialization support (GetNavigationState method).
The performance considerations with Frame are primarily around journaling and page caching.
Frame journaling. When you navigate to a page with Frame.Navigate(), a PageStackEntry for the current page is
added to the Frame.BackStack collection. PageStackEntry is relatively small, but there's no built-in limit to the size of
the BackStack collection. Potentially, a user could navigate in a loop and grow this collection indefinitely.
The PageStackEntry also includes the parameter that was passed to the Frame.Navigate() method. It's
recommended that the parameter be a primitive serializable type (such as an int or string), in order to allow the
Frame.GetNavigationState() method to work. But that parameter could potentially reference an object that accounts
for a more significant amount of working set or other resources, making each entry in the BackStack that much
more expensive. For example, you could potentially use a StorageFile as a parameter, with the consequence that the
BackStack keeps an indefinite number of files open.
Therefore, it's recommended to keep the navigation parameters small and to limit the size of the BackStack. The
BackStack is a standard vector (IList in C#, Platform::Vector in C++/CX), and so can be trimmed simply by removing
entries.
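
For example, here's a minimal sketch (not from the framework documentation) of trimming the journal to a fixed depth; MaxBackStackDepth is a hypothetical app-defined limit:

const int MaxBackStackDepth = 20;

void TrimBackStack(Frame frame)
{
    // Remove the oldest entries until the journal is within the limit.
    while (frame.BackStack.Count > MaxBackStackDepth)
    {
        frame.BackStack.RemoveAt(0);
    }
}
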
Page caching. By default, when you navigate to a page with the Frame.Navigate method, a new instance of the
page is instantiated. Similarly, if you then navigate back to the previous page with Frame.GoBack, a new instance of
the previous page is allocated.
Frame, though, offers an optional page cache that can avoid these instantiations. To get a page put into the cache,
use the Page.NavigationCacheMode property. Setting that mode to Required forces the page to be cached;
setting it to Enabled allows it to be cached. By default the cache size is 10 pages, but this can be overridden with
the Frame.CacheSize property. All Required pages are cached, and if there are fewer than CacheSize Required
pages, Enabled pages can be cached as well.
Page caching can help performance by avoiding instantiations, thereby improving navigation performance. But
page caching can hurt performance by over-caching and therefore increasing working set.
Therefore, it's recommended to use page caching as appropriate for your application. For example, say you have an
app that shows a list of items in a Frame, and when you tap an item, it navigates the frame to a detail page for
that item. The list page should probably be set to cache. If the detail page is the same for all items, it should
probably be cached as well. But if the detail page is more heterogeneous, it might be better to leave caching off.
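
As an illustration, here's a sketch of opting such a list page into the cache and sizing the cache explicitly; the page name and cache size are assumptions, not values from the docs:

public sealed partial class ItemListPage : Page
{
    public ItemListPage()
    {
        InitializeComponent();
        // Required guarantees this instance is reused on back navigation.
        NavigationCacheMode = NavigationCacheMode.Required;
    }
}

// Where the root frame is created (for example, in App.OnLaunched):
rootFrame.CacheSize = 4;
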
Optimize animations, media, and images

[ Updated for UWP apps on Windows 10. For Windows 8.x articles, see the archive ]
Create Universal Windows Platform (UWP) apps with smooth animations, high frame rate, and high-performance
media capture and playback.

Make animations smooth


A key aspect of UWP apps is smooth interactions. These include touch manipulations that "stick to your finger,"
smooth transitions and animations, and small motions that provide input feedback. In the XAML framework there is
a thread called the composition thread that is dedicated to the composition and animation of an app's visual
elements. Because the composition thread is separate from the UI thread (the thread that runs framework and
developer code), apps can achieve a consistent frame rate and smooth animations regardless of complicated layout
passes or extended calculations. This section shows how to use the composition thread to keep an app's animations
buttery smooth. For more info about animations, see Animations overview. To learn about increasing an app's
responsiveness while performing intensive computations, see Keep the UI thread responsive.
Use independent instead of dependent animations
Independent animations can be calculated from beginning to end at the time of creation because changes to the
property being animated don't affect the rest of the objects in a scene. Independent animations can therefore run on
the composition thread instead of the UI thread. This guarantees that they remain smooth because the composition
thread is updated at a consistent cadence.
All of these types of animations are guaranteed to be independent:
Object animations using key frames
Zero-duration animations
Animations to the Canvas.Left and Canvas.Top properties
Animations to the UIElement.Opacity property
Animations to properties of type Brush when targeting the SolidColorBrush.Color subproperty
Animations to the following UIElement properties when targeting subproperties of these return value
types:
RenderTransform
Projection
Clip
Dependent animations affect layout, which therefore cannot be calculated without extra input from the UI thread.
Dependent animations include modifications to properties like Width and Height. By default, dependent
animations are not run and require an opt-in from the app developer. When enabled, they run smoothly if the UI
thread remains unblocked, but they begin to stutter if the framework or app is doing a lot of other work on the UI
thread.
Almost all animations in the XAML framework are independent by default, but there are some actions that you can
take to disable this optimization. Beware of these scenarios in particular:
Setting the EnableDependentAnimation property to allow a dependent animation to run on the UI thread.
Convert these animations into an independent version. For example, animate ScaleTransform.ScaleX and
ScaleTransform.ScaleY instead of the Width and Height of an object (see the sketch after this list). Don't be
afraid to scale objects like images and text. The framework applies bilinear scaling only while the
ScaleTransform is being animated. The image/text is rerasterized at the final size to ensure it's always clear.
Making per-frame updates, which are effectively dependent animations. An example of this is applying
transformations in the handler of the CompositionTarget.Rendering event.
Running any animation considered independent in an element with the CacheMode property set to
BitmapCache. This is considered dependent because the cache must be re-rasterized for each frame.
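
Here is a minimal sketch of the scale approach mentioned in the list above; myImage is assumed to be an element already in the XAML tree:

// Animate ScaleTransform.ScaleX/ScaleY (independent) instead of Width/Height (dependent).
var scale = new ScaleTransform();
myImage.RenderTransform = scale;

var storyboard = new Storyboard();
foreach (string property in new[] { "ScaleX", "ScaleY" })
{
    var animation = new DoubleAnimation
    {
        To = 2.0,
        Duration = new Duration(TimeSpan.FromMilliseconds(300))
    };
    Storyboard.SetTarget(animation, scale);
    Storyboard.SetTargetProperty(animation, property);
    storyboard.Children.Add(animation);
}
storyboard.Begin();
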
Don't animate a WebView or MediaPlayerElement
Web content within a WebView control is not directly rendered by the XAML framework and it requires extra work
to be composed with the rest of the scene. This extra work adds up when animating the control around the screen
and can potentially introduce synchronization issues (for example, the HTML content might not move in sync with
the rest of the XAML content on the page). When you need to animate a WebView control, swap it with a
WebViewBrush for the duration of the animation.
Animating a MediaPlayerElement is a similarly bad idea. Beyond the performance detriment, it can cause tearing
or other artifacts in the video content being played.
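
A sketch of the swap technique, assuming snapshotRect is a Rectangle that overlays the WebView in your layout:

// Paint a static snapshot of the WebView into a Rectangle, then animate the Rectangle.
var brush = new WebViewBrush();
brush.SetSource(myWebView);
brush.Redraw();
snapshotRect.Fill = brush;
myWebView.Visibility = Visibility.Collapsed;  // hide the live control during the animation
// Run the animation on snapshotRect, then restore the WebView's visibility.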

Note The recommendations in this article for MediaPlayerElement also apply to MediaElement.
MediaPlayerElement is only available in Windows 10, version 1607, so if you are creating an app for a
previous version of Windows you need to use MediaElement.

Use infinite animations sparingly


Most animations execute for a specified amount of time, but setting the Timeline.Duration property to Forever
allows an animation to run indefinitely. We recommend minimizing the use of infinite animations because they
continually consume CPU resources and can prevent the CPU from entering a low-power or idle state, draining the
battery more quickly.
Adding a handler for CompositionTarget.Rendering is similar to running an infinite animation. Normally the UI
thread is active only when there is work to do, but adding a handler for this event forces it to run every frame.
Remove the handler when there is no work to be done and reregister it when it's needed again.
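
One sketch of that register/unregister pattern; UpdateSimulation is a hypothetical per-frame work item that reports when it has finished:

void StartPerFrameWork()
{
    CompositionTarget.Rendering += OnRendering;
}

void OnRendering(object sender, object e)
{
    bool moreWorkRemains = UpdateSimulation();  // hypothetical app-defined work
    if (!moreWorkRemains)
    {
        // Detach so the UI thread no longer wakes up every frame.
        CompositionTarget.Rendering -= OnRendering;
    }
}
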
Use the animation library
The Windows.UI.Xaml.Media.Animation namespace includes a library of high-performance, smooth animations
that have a look and feel consistent with other Windows animations. The relevant classes have "Theme" in their
name, and are described in Animations overview. This library supports many common animation scenarios, such as
animating the first view of the app and creating state and content transitions. We recommend using this animation
library whenever possible to increase performance and consistency for UWP UI.

Note The animation library can't animate all possible properties. For XAML scenarios where the animation
library doesn't apply, see Storyboarded animations.

Animate CompositeTransform3D properties independently


You can animate each property of a CompositeTransform3D independently, so apply only the animations you
need. For examples and more info, see UIElement.Transform3D. For more info about animating transforms, see
Storyboarded animations and Key-frame and easing function animations.

Optimize media resources


Audio, video, and images are compelling forms of content that the majority of apps use. As media capture rates
increase and content moves from standard definition to high definition, the amount of resources needed to store,
decode, and play back this content increases. The XAML framework builds on the latest features added to the UWP
media engines so apps get these improvements for free. Here we explain some additional tricks that allow you to
get the most out of media in your UWP app.
Release media streams
Media files are some of the most common and expensive resources apps typically use. Because media file resources
can greatly increase the size of your app's memory footprint, you must remember to release the handle to media as
soon as the app is finished using it.
For example, if your app is working with a RandomAccessStream or an IInputStream object, be sure to call the
Close method on the object when your app has finished using it, to release the underlying object.
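
For example, a sketch using the C# using statement so the handle is released deterministically; file is assumed to be a StorageFile you already hold:

using (IRandomAccessStream stream = await file.OpenAsync(FileAccessMode.Read))
{
    var bitmap = new BitmapImage();
    await bitmap.SetSourceAsync(stream);
    myImage.Source = bitmap;
}   // Dispose() runs here and closes the underlying handle.
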
Display full screen video playback when possible
In UWP apps, always use the IsFullWindow property on the MediaPlayerElement to enable and disable full
window rendering. This ensures that system-level optimizations are used during media playback.
The XAML framework can optimize the display of video content when it is the only thing being rendered, resulting
in an experience that uses less power and yields higher frame rates. For the most efficient media playback, set the
size of a MediaPlayerElement to the width and height of the screen and don't display other XAML elements.
There are legitimate reasons to overlay XAML elements on a MediaPlayerElement that takes up the full width and
height of the screen, for example closed captions or momentary transport controls. Make sure to hide these
elements (set Visibility="Collapsed" ) when they are not needed to put media playback back into its most efficient
state.
Display deactivation and conserving power
To prevent the display from deactivating when user action is no longer detected, such as when an app is playing
video, you can call DisplayRequest.RequestActive.
To conserve power and battery life, you should call DisplayRequest.RequestRelease to release the display
request as soon as it is no longer required (see the sketch after the following list).
Here are some situations when you should release the display request:
Video playback is paused, for example by user action, buffering, or adjustment due to limited bandwidth.
Playback stops. For example, the video is done playing or the presentation is over.
A playback error has occurred. For example, network connectivity issues or a corrupted file.
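
Here's the sketch mentioned above; the handler names are assumptions about how your player surfaces state changes:

DisplayRequest displayRequest = new DisplayRequest();

void OnPlaybackStarted()
{
    displayRequest.RequestActive();     // keep the display on while video plays
}

void OnPlaybackPausedOrEnded()
{
    displayRequest.RequestRelease();    // let the display time out again
}
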
Put other elements to the side of embedded video
Often apps offer an embedded view where video is played within a page. In this mode you lose the full-screen
optimization, because the MediaPlayerElement is not the size of the page and there are other XAML objects being
drawn. Beware of unintentionally entering this mode by drawing a border around a MediaPlayerElement.
Don't draw XAML elements on top of video when it's in embedded mode. If you do, the framework is forced to do a
little extra work to compose the scene. Placing transport controls (play, pause, stop, and so on) below an embedded
media element instead of on top of the video is a good example of optimizing for this situation.
Don't place these controls on top of media that is not full screen. Instead, place the transport controls somewhere
outside of the area where the media is being rendered, such as below it.

Delay setting the source for a MediaPlayerElement


Media engines are expensive objects, so the XAML framework delays loading DLLs and creating large objects as
long as possible. The MediaPlayerElement is forced to do this work after its source is set via the Source property.
Setting the source only when the user is really ready to play media defers the majority of the cost associated with
the MediaPlayerElement as long as possible.
Set MediaPlayerElement.PosterSource
Setting MediaPlayerElement.PosterSource enables XAML to release some GPU resources that would have
otherwise been used. This API allows an app to use as little memory as possible.
Improve media scrubbing
Scrubbing is always a tough task for media platforms to make really responsive. Generally people accomplish this
by changing the value of a Slider. Here are a couple of tips on how to make this as efficient as possible (a combined
sketch follows the list):
Update the value of the Slider based on a timer that queries the Position on the
MediaPlayerElement.MediaPlayer. Make sure to use a reasonable update frequency for your timer; the
Position property only updates every 250 milliseconds during playback.
The step frequency of the Slider should scale with the length of the video.
Subscribe to the PointerPressed, PointerMoved, and PointerReleased events on the slider to set the
PlaybackRate property to 0 when the user drags the thumb of the slider.
In the PointerReleased event handler, manually set the media position to the slider position value to achieve
optimal thumb snapping while scrubbing.
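
Here is the combined sketch; positionSlider and mediaPlayerElement are assumed names, and the handlers are registered with handledEventsToo because Slider handles pointer events internally:

positionSlider.AddHandler(UIElement.PointerPressedEvent,
    new PointerEventHandler((s, e) =>
    {
        mediaPlayerElement.MediaPlayer.PlaybackSession.PlaybackRate = 0;
    }),
    handledEventsToo: true);

positionSlider.AddHandler(UIElement.PointerReleasedEvent,
    new PointerEventHandler((s, e) =>
    {
        var session = mediaPlayerElement.MediaPlayer.PlaybackSession;
        session.Position = TimeSpan.FromSeconds(positionSlider.Value);
        session.PlaybackRate = 1.0;
    }),
    handledEventsToo: true);
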
Match video resolution to device resolution
Decoding video takes a lot of memory and GPU cycles, so choose a video format close to the resolution it will be
displayed at. There is no point in using the resources to decode 1080p video if it's going to be scaled down to a
much smaller size. Many apps don't have the same video encoded at different resolutions; but if it is available, use
an encoding that is close to the resolution at which it will be displayed.
Choose recommended formats
Media format selection can be a sensitive topic and is often driven by business decisions. From a UWP performance
perspective, we recommend H.264 video as the primary video format and AAC and MP3 as the preferred audio
formats. For local file playback, MP4 is the preferred file container for video content. H.264 decoding is accelerated
through most recent graphics hardware. Also, although hardware acceleration for VC-1 decoding is broadly
available, for a large set of graphics hardware on the market the acceleration is limited in many cases to a partial
acceleration level (or IDCT level), rather than a full-stream hardware offload (that is, VLD mode).
If you have full control of the video content generation process, you must figure out how to keep a good balance
between compression efficiency and GOP structure. A relatively small GOP size with B-pictures can improve
performance in seeking or trick modes.
When including short, low-latency audio effects, for example in games, use WAV files with uncompressed PCM data
to reduce processing overhead that is typical for compressed audio formats.

Optimize image resources


Scale images to the appropriate size
Images are often captured at very high resolutions, which can lead to apps using more CPU when decoding the image
data and more memory after it's loaded from disk. But there's no sense decoding and saving a high-resolution
image in memory only to display it smaller than its native size. Instead, create a version of the image at the exact
size it will be drawn on-screen using the DecodePixelWidth and DecodePixelHeight properties.
Don't do this:

<Image Source="ms-appx:///Assets/highresCar.jpg"
Width="300" Height="200"/> <!-- BAD CODE DO NOT USE.-->

Instead, do this:

<Image>
<Image.Source>
<BitmapImage UriSource="ms-appx:///Assets/highresCar.jpg"
DecodePixelWidth="300" DecodePixelHeight="200"/>
</Image.Source>
</Image>

The units for DecodePixelWidth and DecodePixelHeight are physical pixels by default. The DecodePixelType
property can be used to change this behavior: setting DecodePixelType to Logical results in the decode size
automatically accounting for the system's current scale factor, similar to other XAML content. It would therefore be
generally appropriate to set DecodePixelType to Logical if, for instance, you want DecodePixelWidth and
DecodePixelHeight to match the Height and Width properties of the Image control the image will be displayed in.
With the default behavior of using physical pixels, you must account for the system's current scale factor yourself,
and you should listen for scale change notifications in case the user changes their display preferences.
If DecodePixelWidth/DecodePixelHeight are explicitly set larger than the image will be displayed on-screen, the app will
unnecessarily use extra memory (up to 4 bytes per pixel), which quickly becomes expensive for large images. The
image will also be scaled down using bilinear scaling, which could cause it to appear blurry for large scale factors.
If DecodePixelWidth/DecodePixelHeight are explicitly set smaller than the image will be displayed on screen, it
will be scaled up and could appear pixelated.
In some cases where an appropriate decode size cannot be determined ahead of time, you should defer to XAML's
automatic right-sized decoding, which makes a best-effort attempt to decode the image at the appropriate size if
an explicit DecodePixelWidth/DecodePixelHeight is not specified.
You should set an explicit decode size if you know the size of the image content ahead of time. You should also set
DecodePixelType to Logical if the supplied decode size is relative to other XAML element sizes.
For example, if you explicitly set the content size with Image.Width and Image.Height, you could set
DecodePixelType to DecodePixelType.Logical to use the same logical pixel dimensions as the Image control, and then
explicitly use BitmapImage.DecodePixelWidth and/or BitmapImage.DecodePixelHeight to control the size of the
decoded image and achieve potentially large memory savings.
Note that Image.Stretch should be considered when determining the size of the decoded content.
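
A code-behind sketch of that combination; the asset path is illustrative:

var bitmap = new BitmapImage();
bitmap.DecodePixelType = DecodePixelType.Logical;  // interpret the decode size in logical pixels
bitmap.DecodePixelWidth = 300;                     // matches Image.Width in the markup

myImage.Source = bitmap;                           // attach to the live tree before setting UriSource
bitmap.UriSource = new Uri("ms-appx:///Assets/highresCar.jpg");
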
Right-sized decoding
In the event that you don't set an explicit decode size, XAML makes a best-effort attempt to save memory by
decoding an image to the exact size it will appear on-screen, according to the containing page's initial layout. You're
advised to write your application in such a way as to make use of this feature when possible. This feature will be
disabled if any of the following conditions are met:
The BitmapImage is connected to the live XAML tree after setting the content with SetSourceAsync or
UriSource.
The image is decoded using synchronous decoding such as SetSource.
The image is hidden via setting Opacity to 0 or Visibility to Collapsed on the host image element or brush or
any parent element.
The image control or brush uses a Stretch of None.
The image is used as a NineGrid.
CacheMode="BitmapCache" is set on the image element or on any parent element.
The image brush is non-rectangular (such as when applied to a shape or to text).
In the above scenarios, setting an explicit decode size is the only way to achieve memory savings.
You should always attach a BitmapImage to the live tree before setting the source. Any time an image element or
brush is specified in markup, this will automatically be the case. Examples are provided below under the heading
"Live tree examples". You should always avoid using SetSource and instead use SetSourceAsync when setting a
stream source. And it's a good idea to avoid hiding image content (either with zero opacity or with collapsed
visibility) while waiting for the ImageOpened event to be raised. Doing this is a judgment call: you won't benefit
from automatic right-sized decoding if the content is hidden. If your app must hide image content initially, it should
also set the decode size explicitly if possible.
Live tree examples
Example 1 (good): Uniform Resource Identifier (URI) specified in markup.

<Image x:Name="myImage" UriSource="Assets/cool-image.png"/>

Example 2 markup: URI specified in code-behind.

<Image x:Name="myImage"/>

Example 2 code-behind (good): connecting the BitmapImage to the tree before setting its UriSource.

var bitmapImage = new BitmapImage();
myImage.Source = bitmapImage;
bitmapImage.UriSource = new Uri("ms-appx:///Assets/cool-image.png", UriKind.RelativeOrAbsolute);

Example 2 code-behind (bad): setting the BitmapImage's UriSource before connecting it to the tree.

var bitmapImage = new BitmapImage();
bitmapImage.UriSource = new Uri("ms-appx:///Assets/cool-image.png", UriKind.RelativeOrAbsolute);
myImage.Source = bitmapImage;

Caching optimizations
Caching optimizations are in effect for images that use UriSource to load content from an app package or from the
web. The URI is used to uniquely identify the underlying content, and internally the XAML framework will not
download or decode the content multiple times. Instead, it will use the cached software or hardware resources to
display the content multiple times.
The exception to this optimization is if the image is displayed multiple times at different resolutions (which can be
specified explicitly or through automatic right-sized decoding). Each cache entry also stores the resolution of the
image, and if XAML cannot find an image with a source URI that matches the required resolution then it will decode
a new version at that size. It will not, however, download the encoded image data again.
Consequently, you should embrace using UriSource when loading images from an app package, and avoid using a
file stream and SetSourceAsync when it's not required.
Images in virtualized panels (ListView, for instance)
If an image is removed from the tree (because the app explicitly removed it, or because it's in a modern virtualized
panel and was implicitly removed when scrolled out of view), then XAML optimizes memory usage by releasing
the hardware resources for the image, since they are no longer required. The memory is not released immediately,
but rather during the frame update that occurs after the image element has been out of the tree for one second.
Consequently, you should strive to use modern virtualized panels to host lists of image content.
Software-rasterized images
When an image is used for a non-rectangular brush or for a NineGrid, the image will use a software rasterization
path, which will not scale images at all. Additionally, it must store a copy of the image in both software and
hardware memory. For instance, if an image is used as a brush for an ellipse then the potentially large full image
will be stored twice internally. When using NineGrid or a non-rectangular brush, then, your app should pre-scale
its images to approximately the size they will be rendered at.
Background thread image-loading
XAML has an internal optimization that allows it to decode the contents of an image asynchronously to a surface in
hardware memory without requiring an intermediate surface in software memory. This reduces peak memory
usage and rendering latency. This feature will be disabled if any of the following conditions are met.
The image is used as a NineGrid.
CacheMode="BitmapCache" is set on the image element or on any parent element.
The image brush is non-rectangular (such as when applied to a shape or to text).
SoftwareBitmapSource
The SoftwareBitmapSource class exchanges interoperable uncompressed images between different WinRT
namespaces such as BitmapDecoder, camera APIs, and XAML. This class obviates an extra copy that would
typically be necessary with WriteableBitmap, and that helps reduce peak memory and source-to-screen latency.
The SoftwareBitmap that supplies source information can also be configured to use a custom IWICBitmap to
provide a reloadable backing store that allows the app to re-map memory as it sees fit. This is an advanced C++
use case.
Your app should use SoftwareBitmap and SoftwareBitmapSource to interoperate with other WinRT APIs that
produce and consume images. And your app should use SoftwareBitmapSource when loading uncompressed
image data instead of using WriteableBitmap.
Use GetThumbnailAsync for thumbnails
One use case for scaling images is creating thumbnails. Although you could use DecodePixelWidth and
DecodePixelHeight to provide small versions of images, UWP provides even more efficient APIs for retrieving
thumbnails. GetThumbnailAsync provides thumbnails for images that the file system has already cached.
This provides even better performance than the XAML APIs because the image doesn't need to be opened or
decoded.
FileOpenPicker picker = new FileOpenPicker();
picker.FileTypeFilter.Add(".bmp");
picker.FileTypeFilter.Add(".jpg");
picker.FileTypeFilter.Add(".jpeg");
picker.FileTypeFilter.Add(".png");
picker.SuggestedStartLocation = PickerLocationId.PicturesLibrary;

StorageFile file = await picker.PickSingleFileAsync();

StorageItemThumbnail fileThumbnail = await file.GetThumbnailAsync(ThumbnailMode.SingleItem, 64);

BitmapImage bmp = new BitmapImage();
bmp.SetSource(fileThumbnail);

Image img = new Image();
img.Source = bmp;

Dim picker As New FileOpenPicker()
picker.FileTypeFilter.Add(".bmp")
picker.FileTypeFilter.Add(".jpg")
picker.FileTypeFilter.Add(".jpeg")
picker.FileTypeFilter.Add(".png")
picker.SuggestedStartLocation = PickerLocationId.PicturesLibrary

Dim file As StorageFile = Await picker.PickSingleFileAsync()

Dim fileThumbnail As StorageItemThumbnail = Await file.GetThumbnailAsync(ThumbnailMode.SingleItem, 64)

Dim bmp As New BitmapImage()
bmp.SetSource(fileThumbnail)

Dim img As New Image()
img.Source = bmp

Decode images once


To prevent images from being decoded more than once, assign the Image.Source property from a Uri rather
than using memory streams. The XAML framework can associate the same Uri in multiple places with one decoded
image, but it cannot do the same for multiple memory streams that contain the same data, and it creates a different
decoded image for each memory stream.
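
For instance, in this sketch two Image controls share one decode because they use the same Uri (the asset name is illustrative):

var uri = new Uri("ms-appx:///Assets/logo.png");
imageOne.Source = new BitmapImage(uri);
imageTwo.Source = new BitmapImage(uri);  // same URI, so the framework reuses the decoded image
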
Optimize suspend/resume

[ Updated for UWP apps on Windows 10. For Windows 8.x articles, see the archive ]
Create Universal Windows Platform (UWP) apps that streamline their use of the process lifetime system to resume
efficiently after suspension or termination.

Launch
When reactivating an app following suspend/terminate, check to see if a long time has elapsed. If so, consider
returning to the main landing page of the app instead of showing the user stale data. This will also result in faster
startup.
During activation, always check the PreviousExecutionState of the event args parameter (for example, for launched
activations check LaunchActivatedEventArgs.PreviousExecutionState). If the value is ClosedByUser or NotRunning,
don't waste time restoring previously saved state. In this case, the right thing to do is provide a fresh experience,
which also results in faster startup.
Instead of eagerly restoring previously saved state, consider keeping track of that state and restoring it only on
demand. For example, consider a situation where your app was previously suspended, saved state for 3 pages, and
was then terminated. Upon relaunch, if you decide to return the user to the 3rd page, do not eagerly restore the
state for the first 2 pages. Instead, hold on to this state and only use it once you know you need it.
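
A sketch of that activation check:

protected override void OnLaunched(LaunchActivatedEventArgs args)
{
    if (args.PreviousExecutionState == ApplicationExecutionState.Terminated)
    {
        // Keep a handle to the saved state and restore individual pages on demand.
    }
    else
    {
        // ClosedByUser or NotRunning: provide a fresh experience; skip restore work.
    }
    // Create the root frame, navigate, and activate the window as usual.
}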

While running
As a best practice, don't wait for the suspend event and then persist a large amount of state. Instead, your
application should incrementally persist smaller amounts of state as it runs. This is especially important for large
apps that are at risk of running out of time during suspend if they try to save everything at once.
However, you need to find a good balance between incremental saving and the performance of your app while it
runs. A good tradeoff is to incrementally keep track of the data that has changed (and therefore needs to be
saved) and use the suspend event to actually save that data (which is faster than saving all data or examining the
entire state of the app to decide what to save).
Don't use the window Activated or VisibilityChanged events to decide when to save state. When the user switches
away from your app, the window is deactivated, but the system waits a short amount of time (about 10 seconds)
before suspending the app. This gives a more responsive experience in case the user switches back to your app
rapidly. Wait for the suspend event before running suspend logic.

Suspend
During suspend, reduce the footprint of your app. If your app uses less memory while suspended, the overall
system will be more responsive and fewer suspended apps (including yours) will be terminated. However, balance
this with the need for snappy resumes: don't reduce the footprint so much that resume slows down considerably
while your app reloads lots of data into memory.
For managed apps, the system runs a garbage collection pass after the app's suspend handlers complete. Take
advantage of this by releasing references to objects, which helps reduce the app's footprint while it is
suspended.
Ideally, your app should finish its suspend logic in less than 1 second. The faster you suspend, the snappier the
experience for other apps and the rest of the system. If you must, your suspend logic can
take up to 5 seconds on desktop devices or 10 seconds on mobile devices. If those times are exceeded, your app
is abruptly terminated. You don't want this to happen, because if it does, when the user switches back to your
app, a new process is launched and the experience feels much slower compared to resuming a suspended
app.

Resume
Most apps don't need to do anything special when resumed, so typically you won't handle this event. Some apps
use resume to restore connections that were closed during suspend, or to refresh data that may be stale. Instead of
doing this kind of work eagerly, design your app to initiate these activities on demand. This results in a faster
experience when the user switches back to a suspended app, and ensures that you're only doing work the user
really needs.

Avoid unnecessary termination


The UWP process lifetime system can suspend or terminate an app for a variety of reasons. This system is
designed to quickly return an app to the state it was in before it was suspended or terminated. When done well, the
user won't be aware that the app ever stopped running. Here are a few tricks that your UWP app can use to help
the system streamline transitions in an app's lifetime.
An app can be suspended when the user moves it to the background or when the system enters a low power state.
When the app is being suspended, it raises the suspending event and has up to 5 seconds to save its data. If the
app's suspending event handler doesn't complete within 5 seconds, the system assumes the app has stopped
responding and terminates it. A terminated app has to go through the long startup process again instead of being
immediately loaded into memory when a user switches to it.
Serialize only when necessary
Many apps serialize all their data on suspension. If you only need to store a small amount of app settings data,
however, you should use the LocalSettings store instead of serializing the data. Use serialization for larger
amounts of data and for non-settings data.
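
For example, small settings values can go straight into LocalSettings; the key and value here are illustrative:

var settings = Windows.Storage.ApplicationData.Current.LocalSettings;
settings.Values["lastVisitedPage"] = "NewsFeed";
string lastVisitedPage = settings.Values["lastVisitedPage"] as string;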
When you do serialize your data, you should avoid reserializing if it hasn't changed. It takes extra time to serialize
and save the data, plus extra time to read and deserialize it when the app is activated again. Instead, we recommend
that the app determine if its state has actually changed, and if so, serialize and deserialize only the data that
changed. A good way to ensure that this happens is to periodically serialize data in the background after it changes.
When you use this technique, everything that needs to be serialized at suspension has already been saved so there
is no work to do and an app suspends quickly.
Serializing data in C# and Visual Basic
The available choices of serialization technology for .NET apps are the System.Xml.Serialization.XmlSerializer,
System.Runtime.Serialization.DataContractSerializer, and
System.Runtime.Serialization.Json.DataContractJsonSerializer classes.
From a performance perspective, we recommend using the XmlSerializer class. The XmlSerializer has the lowest
serialization and deserialization times, and maintains a low memory footprint. The XmlSerializer has few
dependencies on the .NET framework; this means that compared with the other serialization technologies, fewer
modules need to be loaded into your app to use the XmlSerializer.
DataContractSerializer makes it easier to serialize custom classes, although it has a larger performance impact
than XmlSerializer. If you need better performance, consider switching to XmlSerializer. In general, you should not
load more than one serializer, and you should prefer XmlSerializer unless you need the features of another serializer.
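
A sketch of serializing with XmlSerializer; AppState is a hypothetical app-defined type, and file is a StorageFile your app created:

public class AppState
{
    public string LastPage { get; set; }
    public int LastItemId { get; set; }
}

var serializer = new XmlSerializer(typeof(AppState));
using (Stream stream = await file.OpenStreamForWriteAsync())
{
    serializer.Serialize(stream, new AppState { LastPage = "Feed", LastItemId = 42 });
}
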
Reduce memory footprint
The system tries to keep as many suspended apps in memory as possible so that users can quickly and reliably
switch between them. When an app is suspended and stays in the system's memory, it can quickly be brought to
the foreground for the user to interact with, without having to display a splash screen or perform a lengthy load
operation. If there aren't enough resources to keep an app in memory, the app is terminated. This makes memory
management important for two reasons:
Freeing as much memory as possible at suspension minimizes the chances that your app is terminated because
of a lack of resources while it's suspended.
Reducing the overall amount of memory your app uses reduces the chances that other apps are terminated
while they are suspended.
Release resources
Certain objects, such as files and devices, occupy a large amount of memory. We recommend that during
suspension, an app release handles to these objects and recreate them when needed. This is also a good time to
purge any caches that won't be valid when the app is resumed. An additional step the XAML framework runs on
your behalf for C# and Visual Basic apps is garbage collection, if it is necessary. This ensures that any objects no
longer referenced in app code are released.

Resume quickly
A suspended app can be resumed when the user moves it to the foreground or when the system comes out of a
low power state. When an app is resumed from the suspended state, it continues from where it was when it was
suspended. No app data is lost because it was stored in memory, even if the app was suspended for a long period
of time.
Most apps don't need to handle the Resuming event. When your app is resumed, variables and objects have the
exact same state they had when the app was suspended. Handle the Resuming event only if you need to update
data or objects that might have changed between the time your app was suspended and when it was resumed,
such as content (for example, updated feed data), network connections that may have gone stale, or devices that you
need to reacquire access to (for example, a webcam).

Related topics
Guidelines for app suspend and resume
Optimize file access

[ Updated for UWP apps on Windows 10. For Windows 8.x articles, see the archive ]
Create Universal Windows Platform (UWP) apps that access the file system efficiently, avoiding performance issues
due to disk latency and memory/CPU cycles.
When you want to access a large collection of files and you want to access property values other than the typical
Name, FileType, and Path properties, access them by creating QueryOptions and calling SetPropertyPrefetch.
The SetPropertyPrefetch method can dramatically improve the performance of apps that display a collection of
items obtained from the file system, such as a collection of images. The next set of examples shows a few ways to
access multiple files.
The first example uses Windows.Storage.StorageFolder.GetFilesAsync to retrieve the name info for a set of
files. This approach provides good performance, because the example accesses only the name property.

StorageFolder library = Windows.Storage.KnownFolders.PicturesLibrary;
IReadOnlyList<StorageFile> files = await library.GetFilesAsync(Windows.Storage.Search.CommonFileQuery.OrderByDate);

for (int i = 0; i < files.Count; i++)
{
    // Do something with the name of each file.
    string fileName = files[i].Name;
}

Dim library As StorageFolder = Windows.Storage.KnownFolders.PicturesLibrary
Dim files As IReadOnlyList(Of StorageFile) =
    Await library.GetFilesAsync(Windows.Storage.Search.CommonFileQuery.OrderByDate)

For i As Integer = 0 To files.Count - 1
    ' Do something with the name of each file.
    Dim fileName As String = files(i).Name
Next i

The second example uses Windows.Storage.StorageFolder.GetFilesAsync and then retrieves the image
properties for each file. This approach provides poor performance.

StorageFolder library = Windows.Storage.KnownFolders.PicturesLibrary;
IReadOnlyList<StorageFile> files = await library.GetFilesAsync(Windows.Storage.Search.CommonFileQuery.OrderByDate);

for (int i = 0; i < files.Count; i++)
{
    ImageProperties imgProps = await files[i].Properties.GetImagePropertiesAsync();

    // Do something with the date the image was taken.
    DateTimeOffset date = imgProps.DateTaken;
}

Dim library As StorageFolder = Windows.Storage.KnownFolders.PicturesLibrary
Dim files As IReadOnlyList(Of StorageFile) =
    Await library.GetFilesAsync(Windows.Storage.Search.CommonFileQuery.OrderByDate)

For i As Integer = 0 To files.Count - 1
    Dim imgProps As ImageProperties =
        Await files(i).Properties.GetImagePropertiesAsync()

    ' Do something with the date the image was taken.
    Dim dateTaken As DateTimeOffset = imgProps.DateTaken
Next i

The third example uses QueryOptions to get info about a set of files. This approach provides much better
performance than the previous example.

// Set QueryOptions to prefetch our specific properties.
var queryOptions = new Windows.Storage.Search.QueryOptions(CommonFileQuery.OrderByDate, null);
queryOptions.SetThumbnailPrefetch(ThumbnailMode.PicturesView, 100,
    ThumbnailOptions.ReturnOnlyIfCached);
queryOptions.SetPropertyPrefetch(PropertyPrefetchOptions.ImageProperties,
    new string[] { "System.Size" });

StorageFileQueryResult queryResults = KnownFolders.PicturesLibrary.CreateFileQueryWithOptions(queryOptions);
IReadOnlyList<StorageFile> files = await queryResults.GetFilesAsync();

foreach (var file in files)
{
    ImageProperties imageProperties = await file.Properties.GetImagePropertiesAsync();

    // Do something with the date the image was taken.
    DateTimeOffset dateTaken = imageProperties.DateTaken;

    // Performance gains increase with the number of properties that are accessed.
    IDictionary<string, object> propertyResults =
        await file.Properties.RetrievePropertiesAsync(
            new string[] { "System.Size" });

    // Get/Set extra properties here.
    var systemSize = propertyResults["System.Size"];
}

' Set QueryOptions to prefetch our specific properties.
Dim queryOptions = New Windows.Storage.Search.QueryOptions(CommonFileQuery.OrderByDate, Nothing)
queryOptions.SetThumbnailPrefetch(ThumbnailMode.PicturesView,
    100, Windows.Storage.FileProperties.ThumbnailOptions.ReturnOnlyIfCached)
queryOptions.SetPropertyPrefetch(PropertyPrefetchOptions.ImageProperties,
    New String() {"System.Size"})

Dim queryResults As StorageFileQueryResult = KnownFolders.PicturesLibrary.CreateFileQueryWithOptions(queryOptions)
Dim files As IReadOnlyList(Of StorageFile) = Await queryResults.GetFilesAsync()

For Each file In files
    Dim imageProperties As ImageProperties = Await file.Properties.GetImagePropertiesAsync()

    ' Do something with the date the image was taken.
    Dim dateTaken As DateTimeOffset = imageProperties.DateTaken

    ' Performance gains increase with the number of properties that are accessed.
    Dim propertyResults As IDictionary(Of String, Object) =
        Await file.Properties.RetrievePropertiesAsync(New String() {"System.Size"})

    ' Get/Set extra properties here.
    Dim systemSize = propertyResults("System.Size")
Next file

If you're performing multiple operations on Windows.Storage objects such as
Windows.Storage.ApplicationData.Current.LocalFolder, create a local variable to reference that storage source so
that you don't recreate intermediate objects each time you access it.
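
For example:

// Cache the folder reference once instead of re-walking the property chain.
StorageFolder localFolder = Windows.Storage.ApplicationData.Current.LocalFolder;

StorageFile logFile = await localFolder.CreateFileAsync("log.txt", CreationCollisionOption.OpenIfExists);
StorageFile dataFile = await localFolder.CreateFileAsync("data.bin", CreationCollisionOption.OpenIfExists);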

Stream performance in C# and Visual Basic


Buffering between UWP and .NET streams
There are many scenarios when you might want to convert a UWP stream (such as a
Windows.Storage.Streams.IInputStream or IOutputStream) to a .NET stream (System.IO.Stream). For
example, this is useful when you are writing a UWP app and want to use existing .NET code that works on streams
with the UWP file system. To enable this, the .NET APIs for Windows Store apps provide extension methods
that allow you to convert between .NET and UWP stream types. For more info, see
WindowsRuntimeStreamExtensions.
When you convert a UWP stream to a .NET stream, you effectively create an adapter for the underlying UWP
stream. Under some circumstances, there is a runtime cost associated with invoking methods on UWP streams. This
may affect the speed of your app, especially in scenarios where you perform many small, frequent read or write
operations.
In order to speed up apps, the UWP stream adapters contain a data buffer. The following code sample
demonstrates small consecutive reads using a UWP stream adapter with a default buffer size.
StorageFile file = await Windows.Storage.ApplicationData.Current
    .LocalFolder.GetFileAsync("example.txt");
Windows.Storage.Streams.IInputStream windowsRuntimeStream =
    await file.OpenReadAsync();

byte[] destinationArray = new byte[8];

// Create an adapter with the default buffer size.
using (var managedStream = windowsRuntimeStream.AsStreamForRead())
{
    // Read 8 bytes into destinationArray.
    // A larger block is actually read from the underlying
    // windowsRuntimeStream and buffered within the adapter.
    await managedStream.ReadAsync(destinationArray, 0, 8);

    // Read 8 more bytes into destinationArray.
    // This call may complete much faster than the first call
    // because the data is buffered and no call to the
    // underlying windowsRuntimeStream needs to be made.
    await managedStream.ReadAsync(destinationArray, 0, 8);
}


Dim file As StorageFile = Await Windows.Storage.ApplicationData.Current _
    .LocalFolder.GetFileAsync("example.txt")
Dim windowsRuntimeStream As Windows.Storage.Streams.IInputStream =
    Await file.OpenReadAsync()

Dim destinationArray(7) As Byte ' 8 bytes

' Create an adapter with the default buffer size.
Dim managedStream As Stream = windowsRuntimeStream.AsStreamForRead()
Using managedStream

    ' Read 8 bytes into destinationArray.
    ' A larger block is actually read from the underlying
    ' windowsRuntimeStream and buffered within the adapter.
    Await managedStream.ReadAsync(destinationArray, 0, 8)

    ' Read 8 more bytes into destinationArray.
    ' This call may complete much faster than the first call
    ' because the data is buffered and no call to the
    ' underlying windowsRuntimeStream needs to be made.
    Await managedStream.ReadAsync(destinationArray, 0, 8)

End Using

This default buffering behavior is desirable in most scenarios where you convert a UWP stream to a .NET stream.
However, in some scenarios you may want to tweak the buffering behavior in order to increase performance.
Working with large data sets
When reading or writing larger sets of data you may be able to increase your read or write throughput by
providing a large buffer size to the AsStreamForRead, AsStreamForWrite, and AsStream extension methods.
This gives the stream adapter a larger internal buffer size. For instance, when passing a stream that comes from a
large file to an XML parser, the parser can make many sequential small reads from the stream. A large buffer can
reduce the number of calls to the underlying UWP stream and boost performance.

Note You should be careful when setting a buffer size that is larger than approximately 80 KB, as this may
cause fragmentation on the garbage collector heap (see Improve garbage collection performance). The
following code example creates a managed stream adapter with an 81,920 byte buffer.
// Create a stream adapter with an 80 KB buffer.
Stream managedStream = nativeStream.AsStreamForRead(bufferSize: 81920);

' Create a stream adapter with an 80 KB buffer.
Dim managedStream As Stream = nativeStream.AsStreamForRead(bufferSize:=81920)

The Stream.CopyTo and CopyToAsync methods also allocate a local buffer for copying between streams. As with
the AsStreamForRead extension method, you may be able to get better performance for large stream copies by
overriding the default buffer size. The following code example demonstrates changing the default buffer size of a
CopyToAsync call.

MemoryStream destination = new MemoryStream();

// Copy the stream into memory using the default copy buffer.
await managedStream.CopyToAsync(destination);

// Copy the stream into memory using a 1 MB copy buffer.
await managedStream.CopyToAsync(destination, bufferSize: 1024 * 1024);

Dim destination As MemoryStream = New MemoryStream()

' Copy the stream into memory using the default copy buffer.
Await managedStream.CopyToAsync(destination)

' Copy the stream into memory using a 1 MB copy buffer.
Await managedStream.CopyToAsync(destination, bufferSize:=1024 * 1024)

This example uses a buffer size of 1 MB, which is well above the approximately 80 KB threshold mentioned earlier.
Using such a large buffer can improve throughput of the copy operation for very large data sets (that is, several
hundred megabytes). However, this buffer is allocated on the large object heap and could potentially degrade
garbage collection performance. You should only use large buffer sizes if doing so noticeably improves the
performance of your app.
When you are working with a large number of streams simultaneously, you might want to reduce or eliminate the
memory overhead of the buffer. You can specify a smaller buffer, or set the bufferSize parameter to 0 to turn off
buffering entirely for that stream adapter. You can still achieve good throughput performance without buffering if
you perform large reads and writes to the managed stream.
Performing latency-sensitive operations
You might also want to avoid buffering if you want low-latency reads and writes and do not want to read in large
blocks out of the underlying UWP stream. For example, you might want low-latency reads and writes if you are
using the stream for network communications.
In a chat app you might use a stream over a network interface to send messages back and forth. In this case you want
to send messages as soon as they are ready and not wait for the buffer to fill up. If you set the buffer size to 0 when
calling the AsStreamForRead, AsStreamForWrite, and AsStream extension methods, the resulting adapter
will not allocate a buffer, and all calls will manipulate the underlying UWP stream directly.
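
A sketch of a zero-buffer adapter for such a latency-sensitive stream; socket is assumed to be a connected StreamSocket:

// bufferSize: 0 disables the adapter's buffer; each read goes straight to the UWP stream.
Stream chatReadStream = socket.InputStream.AsStreamForRead(bufferSize: 0);
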
UWP Components and optimizing interop

[ Updated for UWP apps on Windows 10. For Windows 8.x articles, see the archive ]
Create Universal Windows Platform (UWP) apps that use UWP Components and interop between native and
managed types while avoiding interop performance issues.

Best practices for interoperability with UWP Components


If you are not careful, using UWP components can have a large impact on your app's performance. This section
discusses how to get good performance when your app uses UWP components.
Introduction
Interoperability can have a big impact on performance and you might be using it without even realizing that you
are. The UWP handles a lot of the interoperability for you so that you can be more productive and reuse code that
was written in other languages. We encourage you to take advantage of what the UWP does for you, but be aware
that it can impact performance. This section discusses things you can do to lessen the impact that interoperability
has on your app's performance.
The UWP has a library of types that are accessible from any language that can write a UWP app. You use the UWP
types in C# or Microsoft Visual Basic the same way you use .NET objects. You don't need to make platform invoke
method calls to access the UWP components. This makes writing your apps much less complex, but it is important
to realize that there might be more interoperability occurring than you expect. If a UWP component is written in a
language other than C# or Visual Basic, you cross interoperability boundaries when you use that component.
Crossing interoperability boundaries can impact the performance of an app.
When you develop a UWP app in C# or Visual Basic, the two most common sets of APIs that you use are the UWP
APIs and the .NET APIs for UWP apps. In general, types that are defined in the UWP are in namespaces that begin
with "Windows.", and .NET types are in namespaces that begin with "System." There are exceptions, though. The
types in .NET for UWP apps do not require interoperability when they are used. If you find that you have bad
performance in an area that uses the UWP, you might be able to use .NET for UWP apps instead to get better
performance.
Note
Most of the UWP components that ship with Windows 10 are implemented in C++ so you cross interoperability
boundaries when you use them from C# or Visual Basic. As always, make sure to measure your app to see if using
UWP components affects your app's performance before you invest in making changes to your code.
In this topic, when we say "UWP components", we mean components that are written in a language other than C#
or Visual Basic.
Each time you access a property or call a method on a UWP component, an interoperability cost is incurred. In fact,
creating a UWP component is more costly than creating a .NET object. The reasons for this are that the UWP must
execute code that transitions from your app's language to the component's language. Also, if you pass data to the
component, the data must be converted between managed and unmanaged types.
Using UWP Components efficiently
If you find that you need to get better performance, you can ensure that your code uses UWP components as
efficiently as possible. This section discusses some tips for improving performance when you use UWP
components.
It takes a significant number of calls in a short period of time for the performance impact to be noticeable. A well-
designed application that encapsulates calls to UWP components from business logic and other managed code
should not incur huge interoperability costs. But if your tests indicate that using UWP components is affecting your
app's performance, the tips discussed in this section help you improve performance.
Consider using .NET for UWP apps
There are certain cases where you can accomplish a task by using either UWP or .NET for UWP apps. It is a good
idea to try not to mix .NET types and UWP types; try to stay in one or the other. For example, you can parse a
stream of XML by using either the Windows.Data.Xml.Dom.XmlDocument type (a UWP type) or the
System.Xml.XmlReader type (a .NET type). Use the API that is from the same technology as the stream. For
example, if you read XML from a MemoryStream, use the System.Xml.XmlReader type, because both types are
.NET types. If you read from a file, use the Windows.Data.Xml.Dom.XmlDocument type, because the file APIs
and XmlDocument are UWP components.
Copy Windows Runtime objects to .NET types
When a UWP component returns a UWP object, it might be beneficial to copy the returned object into a .NET object.
Two places where this is especially important is when you're working with collections and streams.
If you call a UWP API that returns a collection and then you save and access that collection many times, it might be
beneficial to copy the collection into a .NET collection and use the .NET version from then on.
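
A sketch of copying once and then working against the .NET copy; folder is an assumed StorageFolder:

IReadOnlyList<StorageFile> winrtFiles = await folder.GetFilesAsync();
List<StorageFile> files = new List<StorageFile>(winrtFiles);  // one pass over the WinRT collection
// Subsequent repeated access to 'files' stays on the managed side.
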
Cache the results of calls to UWP components for later use
You might be able to get better performance by saving values into local variables instead of accessing a UWP type
multiple times. This can be especially beneficial if you use a value inside of a loop. Measure your app to see if using
local variables improves your app's performance. Using cached values can increase your app's speed because it will
spend less time on interoperability.
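For example, a minimal sketch of hoisting a UWP property read out of a loop (the Ellipse and the arithmetic are hypothetical):

C#

using Windows.UI.Xaml.Shapes;

public static class LocalCachingSketch
{
    public static double SumScaledWidths(Ellipse ellipse)
    {
        double width = ellipse.Width; // a single interop call, instead of one per iteration
        double total = 0;
        for (int i = 0; i < 100000; i++)
        {
            total += width * i; // managed-only work inside the loop
        }
        return total;
    }
}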
Combine calls to UWP components
Try to complete tasks with the fewest number of calls to UWP objects as possible. For example, it is usually better to
read a large amount of data from a stream than to read small amounts at a time.
Use APIs that bundle work in as few calls as possible instead of APIs that do less work and require more calls. For
example, prefer to create an object by calling constructors that initialize multiple properties instead of calling the
default constructor and assigning properties one at a time.
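The following sketch illustrates the stream case; the helper method is an assumption for illustration, not an API from this topic.

C#

using System.IO;

public static class CombinedCallsSketch
{
    // One large read instead of many small ones keeps the number of
    // potentially interop-crossing Read calls low.
    public static byte[] ReadAll(Stream stream, int length)
    {
        var buffer = new byte[length];
        int offset = 0;
        while (offset < length)
        {
            int read = stream.Read(buffer, offset, length - offset);
            if (read == 0) break; // end of stream
            offset += read;
        }
        return buffer;
    }
}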
Building a UWP component
If you write a UWP component that can be used by apps written in C++ or JavaScript, make sure that your
component is designed for good performance. All the suggestions for getting good performance in apps apply to
getting good performance in components. Measure your component to find out which APIs have high traffic
patterns and for those areas, consider providing APIs that enable your users to do work with few calls.

Keep your app fast when you use interop in managed code
The UWP makes it easy to interoperate between native and managed code, but if you're not careful it can incur
performance costs. Here we show you how to get good performance when you use interop in your managed UWP
apps.
The UWP allows developers to write apps using XAML with their language of choice thanks to the projections of the
UWP APIs available in each language. When writing an app in C# or Visual Basic, this convenience comes at an
interop cost because the UWP APIs are usually implemented in native code, and any UWP invocation from C# or
Visual Basic requires that the CLR transition from a managed to a native stack frame and marshal function
parameters to representations accessible by native code. This overhead is negligible for most apps. But when you
make many calls (hundreds of thousands, to millions) to UWP APIs in the critical path of an app, this cost can
become noticeable. In general, you want to ensure that the time spent in transition between languages is small
relative to the execution of the rest of your code.
The types listed at .NET for Windows apps don't incur this interop cost when used from C# or Visual Basic. As a
rule of thumb, you can assume that types in namespaces which begin with Windows. are part of the UWP, and
types in namespaces which begin with System. are .NET types. Keep in mind that even simple usage of UWP types
such as allocation or property access incurs an interop cost.
You should measure your app and determine if interop is taking up a large portion of your app's execution time
before optimizing your interop costs. When analyzing your app's performance with Visual Studio, you can easily
get an upper bound on your interop costs by using the Functions view and looking at inclusive time spent in
methods which call into the UWP.
If your app is slow because of interop overhead, you can improve its performance by reducing calls to UWP APIs on
hot code paths. For example, a game engine that is doing tons of physics calculations by constantly querying the
position and dimensions of UIElements can save a lot of time by storing the necessary info from UIElements to
local variables, doing calculations on these cached values, and assigning the end result back to the UIElements
after the calculations are done. Another example: if a collection is heavily accessed by C# or Visual Basic code, then
it is more efficient to use a collection from the System.Collections namespace, rather than a collection from the
Windows.Foundation.Collections namespace. You may also consider combining calls to UWP components; one
example where this is possible is by using the Windows.Storage.BulkAccess APIs.
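A minimal sketch of the game-engine pattern described above, with made-up constants and a hypothetical sprite element:

C#

using Windows.UI.Xaml;
using Windows.UI.Xaml.Controls;

public static class PhysicsLoopSketch
{
    public static void StepSimulation(UIElement sprite)
    {
        // One batch of interop reads before the hot loop.
        double x = Canvas.GetLeft(sprite);
        double y = Canvas.GetTop(sprite);

        // Pure managed math on the cached values.
        for (int step = 0; step < 1000; step++)
        {
            x += 5.0 * 0.016; // horizontal velocity * time step (made up)
            y += 9.8 * 0.016; // gravity * time step (made up)
        }

        // One batch of interop writes after the calculations are done.
        Canvas.SetLeft(sprite, x);
        Canvas.SetTop(sprite, y);
    }
}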
Building a UWP component
If you write a UWP component for use in apps written in C++ or JavaScript, make sure that your component is
designed for good performance. Your API surface defines your interop boundary and defines the degree to which
your users will have to think about the guidance in this topic. If you are distributing your components to other
parties then this becomes especially important.
All of the suggestions for getting good performance in apps apply to getting good performance in components.
Measure your component to find out which APIs have high traffic patterns, and for those areas, consider providing
APIs that enable your users to do work with few calls. Significant effort was put into designing the UWP to allow
apps to use it without requiring frequent crossing of the interop boundary.
Tools for profiling and performance

[ Updated for UWP apps on Windows 10. For Windows 8.x articles, see the archive ]
Microsoft provides several tools to help you improve the performance of your Universal Windows Platform (UWP)
app. Follow these links to learn how to use these tools.
XAML UI Responsiveness tool in Visual Studio 2015. One of the best tools for measuring the performance impact
within your app is the XAML UI Responsiveness tool. This tool has been updated for Visual Studio 2015 to
support even more scenarios.
See also:

Analyze the performance of Windows Store apps using Visual Studio diagnostic tools
By showing you where the code of your app spends its time as your program executes, the Visual Studio profilers can help you find the performance bottlenecks in your apps, functions, and algorithms.

XAML Performance: Techniques for Maximizing Universal Windows App Experiences Built with XAML
In this //build session, you will learn about the new platform features, new tooling features, and new techniques to dramatically increase the performance of your XAML-based Universal Windows app.

New XAML Tools in Visual Studio 2015
In this //build session, you will learn about some of the new capabilities in Visual Studio 2015, including the re-designed Blend experience, UI debugging tools, and the XAML editor enhancements.

Windows Performance Analyzer
Included in the Windows Assessment and Deployment Kit (Windows ADK), Windows Performance Analyzer (WPA) is a tool that creates graphs and data tables of Event Tracing for Windows (ETW) events that are recorded by Windows Performance Recorder (WPR), Xperf, or an assessment that is run in the Assessment Platform. WPA can open any event trace log (ETL) file for analysis.
Version adaptive code: Use new APIs while
maintaining compatibility with previous versions

Each release of the Windows 10 SDK adds exciting new functionality that you'll want to take advantage of.
However, not all your customers will update their devices to the latest version of Windows 10 at the same time, and
you want to make sure your app works on the broadest possible range of devices. Here, we show you how to
design your app so that it runs on earlier versions of Windows 10, but also takes advantage of new features
whenever your app runs on a device with the latest update installed.
There are 2 steps to take to make sure your app supports the broadest range of Windows 10 devices. First,
configure your Visual Studio project to target the latest APIs. This affects what happens when you compile your
app. Second, perform runtime checks to ensure that you only call APIs that are present on the device your app is
running on.

Configure your Visual Studio project


The first step in supporting multiple Windows 10 versions is to specify the Target and Minimum supported OS/SDK
versions in your Visual Studio project.
Target: The SDK version that Visual Studio compiles your app code and runs all tools against. All APIs and
resources in this SDK version are available in your app code at compile time.
Minimum: The SDK version that supports the earliest OS version that your app can run on (and will be deployed
to by the store) and the version that Visual Studio compiles your app markup code against.
During runtime your app will run against the OS version that it is deployed to, so your app will throw exceptions if
you use resources or call APIs that are not available in that version. We show you how to use runtime checks to call
the correct APIs later in this article.
The Target and Minimum settings specify the ends of a range of OS/SDK versions. However, even if you test your app on
the Minimum version, you can't be sure that it will run without problems on every version between the Minimum and Target.

TIP
Visual Studio does not warn you about API compatibility. It is your responsibility to test and ensure that your app performs
as expected on all OS versions between and including the Minimum and Target.

When you create a new project in Visual Studio 2015, Update 2 or later, you are prompted to set the Target and
Minimum versions that your app supports. By default, the Target Version is the highest installed SDK version, and
the Minimum Version is the lowest installed SDK version. You can choose Target and Minimum only from SDK
versions that are installed on your machine.
We typically recommend that you leave the defaults. However, if you have a Preview version of the SDK installed
and you are writing production code, you should change the Target Version from the Preview SDK to the latest
official SDK version.
To change the Minimum and Target version for a project that has already been created in Visual Studio, go to
Project -> Properties -> Application tab -> Targeting.

For reference, these are the build numbers for each SDK:
Windows 10, version 1507: SDK version 10240
Windows 10, version 1511 (November Update): SDK version 10586
Windows 10, version 1607 (Anniversary Update): SDK version 14393
You can download any released version of the SDK from the Windows SDK and emulator archive. You can
download the latest Windows Insider Preview SDK from the developer section of the Windows Insider site.

Write adaptive code


You can think about writing adaptive code similarly to how you think about creating an adaptive UI. You might
design your base UI to run on the smallest screen, and then move or add elements when you detect that your app
is running on a larger screen. With adaptive code, you write your base code to run on the lowest OS version, and
you can add hand-selected features when you detect that your app is running on a higher version where the new
feature is available.
Runtime API checks
You use the Windows.Foundation.Metadata.ApiInformation class in a condition in your code to test for the presence
of the API you want to call. This condition is evaluated wherever your app runs, but it evaluates to true only on
devices where the API is present and therefore available to call. This lets you write version adaptive code in order
to create apps that use APIs that are available only on certain OS versions.
Here we look at specific examples for targeting new features in the Windows Insider Preview. For a general
overview of using ApiInformation, see Guide to UWP apps and the blog post Dynamically detecting features with
API contracts.

TIP
Numerous runtime API checks can affect the performance of your app. We show the checks inline in these examples. In
production code, you should perform the check once and cache the result, then use the cached result throughout your app.
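A minimal sketch of that caching pattern; the class name is illustrative, and the enum check is taken from Example 1 below.

C#

using Windows.Foundation.Metadata;

public static class ApiChecks
{
    // Evaluated once on first use, then reused everywhere.
    public static readonly bool HasChatWithoutEmoji =
        ApiInformation.IsEnumNamedValuePresent(
            "Windows.UI.Xaml.Input.InputScopeNameValue", "ChatWithoutEmoji");
}

Elsewhere in the app, test ApiChecks.HasChatWithoutEmoji instead of calling ApiInformation again.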
Unsupported scenarios
In most cases, you can keep your app's Minimum Version set to SDK version 10240 and use runtime checks to
enable any new APIs when your app runs on a later version. However, there are some cases where you must
increase your app's Minimum Version in order to use new features.
You must increase your app's Minimum Version if you use:
a new API that requires a capability that isn't available in an earlier version. You must increase the minimum
supported version to one that includes that capability. For more info, see App capability declarations.
any new resource keys added to generic.xaml and not available in a previous version. The version of
generic.xaml used at runtime is determined by the OS version the device is running on. You can't use runtime
API checks to determine the presence of XAML resources. So, you must only use resource keys that are available
in the minimum version that your app supports or a XAMLParseException will cause your app to crash at
runtime.
Adaptive code options
There are two ways to create adaptive code. In most cases, you write your app markup to run on the Minimum
version, then use your app code to tap into newer OS features when present. However, if you need to update a
property in a visual state, and there is only a property or enumeration value change between OS versions, you can
create an extensible state trigger that's activated based on the presence of an API.
Here, we compare these options.
App code
When to use:
Recommended for all adaptive code scenarios except for specific cases defined below for extensible triggers.
Benefits:
Avoids developer overhead/complexity of tying API differences into markup.
Drawbacks:
No Designer support.
State Triggers
When to use:
Use when there is only a property or enum change between OS versions that doesn't require logic changes, and
is connected to a visual state.
Benefits:
Lets you create specific visual states that are triggered based on the presence of an API.
Some designer support available.
Drawbacks:
Use of custom triggers is restricted to visual states, which doesn't lend itself to complicated adaptive layouts.
Must use Setters to specify value changes, so only simple changes are possible.
Custom state triggers are fairly verbose to set up and use.

Adaptive code examples


In this section, we show several examples of adaptive code that use APIs that are new in Windows 10, version 1607
(Windows Insider Preview).
Example 1: New enum value
Windows 10, version 1607 adds a new value to the InputScopeNameValue enumeration: ChatWithoutEmoji. This
new input scope has the same input behavior as the Chat input scope (spellchecking, auto-complete, auto-
capitalization), but it maps to a touch keyboard without an emoji button. This is useful if you create your own emoji
picker and want to disable the built-in emoji button in the touch keyboard.
This example shows how to check whether the ChatWithoutEmoji enum value is present and, if it is, sets the InputScope
property of a TextBox. If it's not present on the system the app runs on, the InputScope is set to Chat
instead. The code shown could be placed in a Page constructor or Page.Loaded event handler.

TIP
When you check an API, use static strings instead of relying on .NET language features, otherwise your app might try to
access a type that isn't defined and crash at runtime.

C#

// Create a TextBox control for sending messages
// and initialize an InputScope object.
TextBox messageBox = new TextBox();
messageBox.AcceptsReturn = true;
messageBox.TextWrapping = TextWrapping.Wrap;
InputScope scope = new InputScope();
InputScopeName scopeName = new InputScopeName();

// Check that the ChatWithoutEmoji value is present.
// (It's present starting with Windows 10, version 1607,
// the Target version for the app. This check returns false on earlier versions.)
if (ApiInformation.IsEnumNamedValuePresent("Windows.UI.Xaml.Input.InputScopeNameValue", "ChatWithoutEmoji"))
{
    // Set new ChatWithoutEmoji InputScope if present.
    scopeName.NameValue = InputScopeNameValue.ChatWithoutEmoji;
}
else
{
    // Fall back to Chat InputScope.
    scopeName.NameValue = InputScopeNameValue.Chat;
}

// Set InputScope on messaging TextBox.
scope.Names.Add(scopeName);
messageBox.InputScope = scope;

// For this example, set the TextBox text to show the selected InputScope.
messageBox.Text = messageBox.InputScope.Names[0].NameValue.ToString();

// Add the TextBox to the XAML visual tree (rootGrid is defined in XAML).
rootGrid.Children.Add(messageBox);

In the previous example, the TextBox is created and all properties are set in code. However, if you have existing
XAML, and just need to change the InputScope property on systems where the new value is supported, you can do
that without changing your XAML, as shown here. You set the default value to Chat in XAML, but you override it in
code if the ChatWithoutEmoji value is present.
XAML
<TextBox x:Name="messageBox"
AcceptsReturn="True" TextWrapping="Wrap"
InputScope="Chat"
Loaded="messageBox_Loaded"/>

C#

private void messageBox_Loaded(object sender, RoutedEventArgs e)
{
    if (ApiInformation.IsEnumNamedValuePresent("Windows.UI.Xaml.Input.InputScopeNameValue", "ChatWithoutEmoji"))
    {
        // The ChatWithoutEmoji value is present.
        // (It's present starting with Windows 10, version 1607,
        // the Target version for the app. This code is skipped on earlier versions.)
        InputScope scope = new InputScope();
        InputScopeName scopeName = new InputScopeName();
        scopeName.NameValue = InputScopeNameValue.ChatWithoutEmoji;
        // Set InputScope on messaging TextBox.
        scope.Names.Add(scopeName);
        messageBox.InputScope = scope;
    }

    // For this example, set the TextBox text to show the selected InputScope.
    // This is outside of the API check, so it will happen on all OS versions.
    messageBox.Text = messageBox.InputScope.Names[0].NameValue.ToString();
}

Now that we have a concrete example, let's see how the Target and Minimum version settings apply to it.
In these examples, you can use the Chat enum value in XAML, or in code without a check, because it's present in the
minimum supported OS version.
If you use the ChatWithoutEmoji value in XAML, or in code without a check, it will compile without error because
it's present in the Target OS version. It will also run without error on a system with the Target OS version. However,
when the app runs on a system with an OS using the Minimum version, it will crash at runtime because the
ChatWithoutEmoji enum value is not present. Therefore, you must use this value only in code, and wrap it in a
runtime API check so it's called only if it's supported on the current system.
Example 2: New control
A new version of Windows typically brings new controls to the UWP API surface that bring new functionality to the
platform. To leverage the presence of a new control, use the ApiInformation.IsTypePresent method.
Windows 10, version 1607 introduces a new media control called MediaPlayerElement. This control builds on the
MediaPlayer class, so it brings features like the ability to easily tie into background audio, and it makes use of
architectural improvements in the media stack.
However, if the app runs on a device that's running a version of Windows 10 older than version 1607, you must
use the MediaElement control instead of the new MediaPlayerElement control. You can use the
ApiInformation.IsTypePresent method to check for the presence of the MediaPlayerElement control at runtime,
and load whichever control is suitable for the system where the app is running.
This example shows how to create an app that uses either the new MediaPlayerElement or the old MediaElement,
depending on whether the MediaPlayerElement type is present. In this code, you use the UserControl class to
componentize the controls and their related UI and code so that you can switch them in and out based on the OS
version. As an alternative, you can use a custom control, which provides more functionality and custom behavior
than what's needed for this simple example.
MediaPlayerUserControl
The MediaPlayerUserControl encapsulates a MediaPlayerElement and several buttons that are used to skip through
the media frame by frame. The UserControl lets you treat these controls as a single entity and makes it easier to
switch with a MediaElement on older systems. This user control should be used only on systems where
MediaPlayerElement is present, so you don't use ApiInformation checks in the code inside this user control.

NOTE
To keep this example simple and focused, the frame step buttons are placed outside of the media player. For a better user
experience, you should customize the MediaTransportControls to include your custom buttons. See Custom transport
controls for more info.

XAML

<UserControl
x:Class="MediaApp.MediaPlayerUserControl"
xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
xmlns:local="using:MediaApp"
xmlns:d="http://schemas.microsoft.com/expression/blend/2008"
xmlns:mc="http://schemas.openxmlformats.org/markup-compatibility/2006"
mc:Ignorable="d"
d:DesignHeight="300"
d:DesignWidth="400">

<Grid x:Name="MPE_grid">
<Grid.RowDefinitions>
<RowDefinition/>
<RowDefinition Height="Auto"/>
</Grid.RowDefinitions>
<StackPanel Orientation="Horizontal"
HorizontalAlignment="Center" Grid.Row="1">
<RepeatButton Click="StepBack_Click" Content="Step Back"/>
<RepeatButton Click="StepForward_Click" Content="Step Forward"/>
</StackPanel>
</Grid>
</UserControl>

C#
using System;
using Windows.Media.Core;
using Windows.UI.Xaml;
using Windows.UI.Xaml.Controls;

namespace MediaApp
{
    public sealed partial class MediaPlayerUserControl : UserControl
    {
        // The markup compiler runs against the Minimum OS version, so the
        // MediaPlayerElement is declared and created in app code instead of XAML.
        MediaPlayerElement MPE;

        public MediaPlayerUserControl()
        {
            this.InitializeComponent();

            MPE = new MediaPlayerElement();
            Uri videoSource = new Uri("ms-appx:///Assets/UWPDesign.mp4");
            MPE.Source = MediaSource.CreateFromUri(videoSource);
            MPE.AreTransportControlsEnabled = true;
            MPE.MediaPlayer.AutoPlay = true;

            // Add MediaPlayerElement to the Grid.
            MPE_grid.Children.Add(MPE);
        }

        private void StepForward_Click(object sender, RoutedEventArgs e)
        {
            // Step forward one frame; only available using MediaPlayerElement.
            MPE.MediaPlayer.StepForwardOneFrame();
        }

        private void StepBack_Click(object sender, RoutedEventArgs e)
        {
            // Step back one frame; only available using MediaPlayerElement.
            MPE.MediaPlayer.StepBackwardOneFrame();
        }
    }
}

MediaElementUserControl
The MediaElementUserControl encapsulates a MediaElement control.
XAML

<UserControl
x:Class="MediaApp.MediaElementUserControl"
xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
xmlns:local="using:MediaApp"
xmlns:d="http://schemas.microsoft.com/expression/blend/2008"
xmlns:mc="http://schemas.openxmlformats.org/markup-compatibility/2006"
mc:Ignorable="d"
d:DesignHeight="300"
d:DesignWidth="400">

<Grid>
<MediaElement AreTransportControlsEnabled="True"
Source="Assets/UWPDesign.mp4"/>
</Grid>
</UserControl>
NOTE
The code page for MediaElementUserControl contains only generated code, so it's not shown.

Initialize a control based on IsTypePresent


At runtime, you call ApiInformation.IsTypePresent to check for MediaPlayerElement. If it's present, you load
MediaPlayerUserControl; if it's not, you load MediaElementUserControl.

C#

public MainPage()
{
    this.InitializeComponent();

    UserControl mediaControl;

    // Check for presence of type MediaPlayerElement.
    if (ApiInformation.IsTypePresent("Windows.UI.Xaml.Controls.MediaPlayerElement"))
    {
        mediaControl = new MediaPlayerUserControl();
    }
    else
    {
        mediaControl = new MediaElementUserControl();
    }

    // Add mediaControl to XAML visual tree (rootGrid is defined in XAML).
    rootGrid.Children.Add(mediaControl);
}

IMPORTANT
Remember that this check only sets the mediaControl object to either MediaPlayerUserControl or
MediaElementUserControl. You need to perform these conditional checks anywhere else in your code where you need to
determine whether to use MediaPlayerElement or MediaElement APIs. You should perform the check once and cache the
result, then use the cached result throughout your app.

State trigger examples


Extensible state triggers let you use markup and code together to trigger visual state changes based on a condition
that you check in code; in this case, the presence of a specific API. We don't recommend state triggers for common
adaptive code scenarios because of the overhead involved, and the restriction to only visual states.
You should use state triggers for adaptive code only when you have small UI changes between different OS
versions that won't impact the remaining UI, such as a property or enum value change on a control.
Example 1: New property
The first step in setting up an extensible state trigger is subclassing the StateTriggerBase class to create a custom
trigger that will be active based on the presence of an API. This example shows a trigger that activates if the
property presence matches the _isPresent variable set in XAML.
C#
class IsPropertyPresentTrigger : StateTriggerBase
{
    public string TypeName { get; set; }
    public string PropertyName { get; set; }

    private Boolean _isPresent;
    private bool? _isPropertyPresent = null;

    public Boolean IsPresent
    {
        get { return _isPresent; }
        set
        {
            _isPresent = value;
            if (_isPropertyPresent == null)
            {
                // Call into ApiInformation method to determine if property is present.
                _isPropertyPresent =
                    ApiInformation.IsPropertyPresent(TypeName, PropertyName);
            }

            // If the property presence matches _isPresent, the trigger is activated.
            SetActive(_isPresent == _isPropertyPresent);
        }
    }
}

The next step is setting up the visual state trigger in XAML so that two different visual states result based on the
presence of the API.
Windows 10, version 1607 introduces a new property on the FrameworkElement class called
AllowFocusOnInteraction that determines whether a control takes focus when a user interacts with it. This is useful
if you want to keep focus on a text box for data entry (and keep the touch keyboard showing) while the user clicks a
button.
The trigger in this example checks whether the property is present. If the property is present, it sets the
AllowFocusOnInteraction property on a Button to false; if the property isn't present, the Button retains its
original state. The TextBox is included to make it easier to see the effect of this property when you run the code.
XAML
<Grid Background="{ThemeResource ApplicationPageBackgroundThemeBrush}">
<StackPanel>
<TextBox Width="300" Height="36"/>
<!-- Button to set the new property on. -->
<Button x:Name="testButton" Content="Test" Margin="12"/>
</StackPanel>

<VisualStateManager.VisualStateGroups>
<VisualStateGroup x:Name="propertyPresentStateGroup">
<VisualState>
<VisualState.StateTriggers>
<!--Trigger will activate if the AllowFocusOnInteraction property is present-->
<local:IsPropertyPresentTrigger
TypeName="Windows.UI.Xaml.FrameworkElement"
PropertyName="AllowFocusOnInteraction" IsPresent="True"/>
</VisualState.StateTriggers>
<VisualState.Setters>
<Setter Target="testButton.AllowFocusOnInteraction"
Value="False"/>
</VisualState.Setters>
</VisualState>
</VisualStateGroup>
</VisualStateManager.VisualStateGroups>
</Grid>

Example 2: New enum value


This example shows how to set different enumeration values based on whether a value is present. It uses a custom
state trigger to achieve the same result as the previous chat example. In this example, you use the new
ChatWithoutEmoji input scope if the device is running Windows 10, version 1607, otherwise the Chat input scope
is used. The visual states that use this trigger are set up in an if-else style where the input scope is chosen based on
the presence of the new enum value.
C#

class IsEnumPresentTrigger : StateTriggerBase
{
    public string EnumTypeName { get; set; }
    public string EnumValueName { get; set; }

    private Boolean _isPresent;
    private bool? _isEnumValuePresent = null;

    public Boolean IsPresent
    {
        get { return _isPresent; }
        set
        {
            _isPresent = value;

            if (_isEnumValuePresent == null)
            {
                // Call into ApiInformation method to determine if value is present.
                _isEnumValuePresent =
                    ApiInformation.IsEnumNamedValuePresent(EnumTypeName, EnumValueName);
            }

            // If the enum value presence matches _isPresent, the trigger is activated.
            SetActive(_isPresent == _isEnumValuePresent);
        }
    }
}

XAML
<Grid Background="{ThemeResource ApplicationPageBackgroundThemeBrush}">

<TextBox x:Name="messageBox"
AcceptsReturn="True" TextWrapping="Wrap"/>

<VisualStateManager.VisualStateGroups>
<VisualStateGroup x:Name="EnumPresentStates">
<!--if-->
<VisualState x:Name="isPresent">
<VisualState.StateTriggers>
<local:IsEnumPresentTrigger
EnumTypeName="Windows.UI.Xaml.Input.InputScopeNameValue"
EnumValueName="ChatWithoutEmoji" IsPresent="True"/>
</VisualState.StateTriggers>
<VisualState.Setters>
<Setter Target="messageBox.InputScope" Value="ChatWithoutEmoji"/>
</VisualState.Setters>
</VisualState>
<!--else-->
<VisualState x:Name="isNotPresent">
<VisualState.StateTriggers>
<local:IsEnumPresentTrigger
EnumTypeName="Windows.UI.Xaml.Input.InputScopeNameValue"
EnumValueName="ChatWithoutEmoji" IsPresent="False"/>
</VisualState.StateTriggers>
<VisualState.Setters>
<Setter Target="messageBox.InputScope" Value="Chat"/>
</VisualState.Setters>
</VisualState>
</VisualStateGroup>
</VisualStateManager.VisualStateGroups>
</Grid>

Related articles
Guide to UWP apps
Dynamically detecting features with API contracts
Develop Universal Windows apps for education

The following resources will help you write a Universal Windows app for education.
Accessibility
Education apps need to be accessible. See Developing apps for accessibility for more information.
Secure assessments
Assessment/testing apps will often need to produce a locked down environment in order to prevent students from
using other computers or Internet resources during a test. This functionality is available through the Take a Test API.
See the Take a Test web app in the Windows IT Center for an example of a testing environment with locked down
online access for high-stakes testing.
User input
User input is a critical part of education apps; UI controls must be responsive and intuitive so as not to break the
focus of their users. For a general overview of the input options available in a Universal Windows app, see the Input
primer and the topics below it in the Design & UI section. Additionally, the following sample apps showcase basic
UI handling in the Universal Windows Platform.
Basic input sample shows how to handle input in Universal Windows Apps.
User interaction mode sample shows how to detect and respond to the user interaction mode.
Focus visuals sample shows how to take advantage of the new system drawn focus visuals or create your own
custom focus visuals if the system drawn ones do not fit your needs.
The Windows Ink platform can make education apps shine by fitting them with an input mode that students are
accustomed to. See Pen interactions and Windows Ink and the topics below it for a comprehensive guide to
implementing Windows Ink in your app. The following sample apps provide working examples of this API.
Ink sample demonstrates how to use ink functionality (such as capturing, manipulating, and interpreting ink
strokes) in Universal Windows apps using JavaScript.
Simple ink sample demonstrates how to use ink functionality (such as capturing ink from user input and
performing handwriting recognition on ink strokes) in Universal Windows apps using C#.
Complex ink sample demonstrates how to use advanced InkPresenter functionality to interleave ink with other
objects, select ink, copy/paste, and handle events. It is built upon the Universal Windows Platform in C++ and
can run on Desktop and Mobile Windows 10 SKUs.
Windows Store
Education apps are often released under special circumstances to a specific organization. See Distribute
line-of-business apps to enterprises for information on this.

Related Topics
Windows 10 for Education on the Windows IT Center
Take a Test JavaScript API

Take a Test is a browser-based app that renders locked down online assessments for high-stakes testing. It
supports the SBAC browser API standard for high stakes common core testing and allows you to focus on the
assessment content rather than how to lock down Windows.
Take a Test, powered by Microsoft's Edge browser, features a JavaScript API that Web applications can use to
provide a locked down experience for taking tests.
The API (based on the Common Core SBAC API) provides text-to-speech capability and the ability to query whether
the device is locked down, what user and system processes are running, and more.
See the Take a Test app technical reference for information about the app itself.

IMPORTANT
These APIs do not work in a remote session.

For troubleshooting help, see Troubleshoot Microsoft Take a Test with the event viewer.

Reference documentation
The Take a Test API consists of the following namespaces.

NAMESPACE DESCRIPTION

security namespace Enables you to lock down the device

tts namespace Text-to-speech functionality

Security namespace
The security namespace enables you to lock down the device, check the list of user and system processes, obtain MAC and
IP addresses, and clear cached web resources.

METHOD DESCRIPTION

clearCache Clears cached web resources

close Closes the browser and unlocks the device

enableLockDown Locks down the device. Also used to unlock the device

getIPAddressList Gets the list of IP addresses for the device

getMACAddress Gets the list of MAC addresses for the device

getProcessList Gets the list of running user and system processes


isEnvironmentSecure Determines whether the lockdown context is still applied to the device

void clearCache()
Clear cached web resources.
Syntax
browser.security.clearCache();

Parameters
None

Return value
None

Requirements
Windows 10, version 1607

close(boolean restart)
Closes the browser and unlocks the device.
Syntax
browser.security.close(false);

Parameters
restart - this parameter is ignored but must be provided.

Return value
None

Requirements
Windows 10, version 1607

enableLockDown(boolean lockdown)
Locks down the device. Also used to unlock the device.
Syntax
browser.security.enableLockDown(true|false);

Parameters
lockdown - true to run the Take a Test app above the lock screen and apply the policies discussed in this document;
false to stop running Take a Test above the lock screen and close it. If the app is not already locked down,
passing false has no effect.
Return value
None

Requirements
Windows 10, version 1607

string[] getIPAddressList()
Gets the list of IP addresses for the device.
Syntax
browser.security.getIPAddressList();

Parameters
None

Return value
An array of IP addresses.

string[] getMACAddress()
Gets the list of MAC addresses for the device.
Syntax
browser.security.getMACAddress();

Parameters
None

Return value
An array of MAC addresses.

Requirements
Windows 10, version 1607

string[] getProcessList()
Gets the list of the user's running processes.
Syntax
browser.security.getProcessList();

Parameters
None

Return value
An array of running process names.

Remarks
The list does not include system processes.


Requirements
Windows 10, version 1607

boolean isEnvironmentSecure()
Determines whether the lockdown context is still applied to the device.
Syntax
browser.security.isEnvironmentSecure();

Parameters
None

Return value
True indicates that the lockdown context is applied to the device; otherwise false.

Requirements
Windows 10, version 1607

Tts namespace
The tts namespace handles the app's text-to-speech functionality.

METHOD DESCRIPTION

getStatus Gets the speech playback status

getVoices Gets a list of available voice packs

pause Pauses speech synthesis

resume Resume paused speech synthesis

speak Client-side text-to-speech synthesis

stop Stops speech synthesis

TIP
The Microsoft Edge Speech Synthesis API is an implementation of the W3C Speech API, and we recommend that developers
use this API when possible.

string getStatus()
Gets the speech playback status.
Syntax
browser.tts.getStatus();

Parameters
None

Return value
The speech playback status. Possible values are: available, idle, paused, and speaking.

Requirements
Windows 10, version 1607

string[] getVoices()
Gets a list of available voice packs.
Syntax
browser.tts.getVoices();

Parameters
None

Return value
The available voice packs. For example: Microsoft Zira Mobile, Microsoft Mark Mobile

Requirements
Windows 10, version 1607

void pause()
Pauses speech synthesis.
Syntax
browser.tts.pause();

Parameters
None

Return value
None

Requirements
Windows 10, version 1607

void resume()
Resume paused speech synthesis.
Syntax
browser.tts.resume();

Parameters
None

Return value
None

Requirements
Windows 10, version 1607

void speak(string text, object options, function callback)


Starts the client-side text-to-speech synthesis.
Syntax
browser.tts.speak('Hello world', options, callback);

Parameters
options - speech options such as gender, pitch, rate, and volume. For example:

var options = {
'gender': this.currentGender,
'language': this.currentLanguage,
'pitch': 1,
'rate': 1,
'voice': this.currentVoice,
'volume': 1
};

Return value
None

Remarks Option variables must be lowercase. The gender, language, and voice parameters take strings. Volume,
pitch, and rate must be marked up within the speech synthesis markup language file (SSML), not within the options
object. The options object must follow the order, naming, and casing shown in the example above.
Requirements
Windows 10, version 1607

void stop()
Stops speech synthesis.
Syntax
browser.tts.stop();

Parameters
None

Return value
None

Requirements
Windows 10, version 1607
Devices, sensors, and power

[ Updated for UWP apps on Windows 10. For Windows 8.x articles, see the archive ]
In order to provide a rich experience for your users, you may find it necessary to integrate external devices or
sensors into your app. Here are some examples of features that you can add to your app using the technology
described in this section.
Providing an enhanced print experience
Integrating motion and orientation sensors into your game
Connecting to a device directly or through a network protocol

TOPIC DESCRIPTION

Enable device capabilities This tutorial describes how to declare device capabilities in Microsoft Visual Studio. This enables your app to use cameras, microphones, location sensors, and other devices.

Enable usermode access for Windows IoT This tutorial describes how to enable usermode access to GPIO, I2C, SPI, and UART on Windows 10 IoT Core.

Enumerate devices The enumeration namespace enables you to find devices that are internally connected to the system, externally connected, or detectable over wireless or networking protocols.

Pair devices Some devices need to be paired before they can be used. The Windows.Devices.Enumeration namespace supports three different ways to pair devices.

Out-of-band pairing This section describes how out-of-band pairing allows apps to connect to certain devices without requiring discovery.

Sensors Sensors let your app know the relationship between a device and the physical world around it. Sensors can tell your app the direction, orientation, and movement of the device.

Bluetooth This section contains articles on how to integrate Bluetooth into Universal Windows Platform (UWP) apps, including how to use RFCOMM, GATT, and Low Energy (LE) Advertisements.

Printing and scanning This section describes how to print and scan from your Universal Windows app.

3D printing This section describes how to utilize 3D printing functionality in your Universal Windows app.

Create an NFC Smart Card app Windows Phone 8.1 supported NFC card emulation apps using a SIM-based secure element, but that model required secure payment apps to be tightly coupled with mobile-network operators (MNO). This limited the variety of possible payment solutions by other merchants or developers that are not coupled with MNOs. In Windows 10 Mobile, we have introduced a new card emulation technology called Host Card Emulation (HCE). HCE technology allows your app to directly communicate with an NFC card reader. This topic illustrates how Host Card Emulation (HCE) works on Windows 10 Mobile devices and how you can develop an HCE app so that your customers can access your services through their phone instead of a physical card, without collaborating with an MNO.

Get battery information Learn how to get detailed battery information using APIs in the Windows.Devices.Power namespace.
Enable device capabilities

[ Updated for UWP apps on Windows 10. For Windows 8.x articles, see the archive ]
This tutorial describes how to declare device capabilities in Microsoft Visual Studio. This enables your app to use
cameras, microphones, location sensors, and other devices.

Specify the device capabilities your app will use


Windows apps require you to specify in the app package manifest when you use certain types of devices. In Visual
Studio, you can declare most capabilities by using Manifest Designer or you can add them manually as described in
How to specify device capabilities in a package manifest (manually). This tutorial assumes you're using Manifest
Designer.
Note
Some types of devices, such as printers, scanners, and sensors, don't need to be declared in the app package
manifest.
1. In Visual Studio Solution Explorer, double-click the package manifest file, Package.appxmanifest.
2. Open the Capabilities tab.
3. Select the device capabilities that your app uses. If you don't see the capability you're looking for in Manifest
Designer, add it manually. For more info, see How to specify device capabilities in a package manifest.

DEVICE CAPABILITY DESCRIPTION

AllJoyn Allows AllJoyn-enabled apps and devices on a network to discover and interact with each other. Apps that access APIs in the Windows.Devices.AllJoyn namespace must use this capability.

Blocked Chat Messages Allows apps to read SMS and MMS messages that have been blocked by the Spam Filter app.

Chat Message Access Allows apps to read and delete Text Messages. It also allows apps to store chat messages in the system data store.

Code Generation Allows apps to generate code dynamically.

Enterprise Authentication This capability is subject to the Windows Store policy. It provides the capability to connect to enterprise intranet resources that require domain credentials. This capability is not typically needed for most apps.

Internet (Client) Provides outbound access to the Internet and networks in public places like airports and coffee shops; for example, intranet networks where the user has designated the network as public. Most apps that require Internet access should use this capability.

Internet (Client & Server) Provides inbound and outbound access to the Internet and the networks in public places like airports and coffee shops. This capability is a superset of Internet (Client). Internet (Client) doesn't need to be enabled if this capability is also enabled. Inbound access to critical ports is always blocked.

Location Provides access to the current location. This is obtained from dedicated hardware like a GPS sensor in the PC, or derived from available network information.

Microphone Provides access to the microphone's audio feed. This allows the app to record from connected microphones.

Music Library Provides the capability to add, change, or delete files in the Music Library for the local PC and HomeGroup PCs.

Objects 3D Provides programmatic access to the user's 3D Objects, allowing the app to enumerate and access all files in the library without user interaction. This capability is typically used in 3D apps and games that need to access the entire 3D Objects library.

Phone Call Allows apps to access all of the phone lines on the device and perform the following functions: place a call on the phone and show the system dialer without prompting the user; access line-related metadata; access line-related triggers. Also allows the user-selected spam filter app to set and check the block list and call origin information.

Pictures Library Provides the capability to add, change, or delete files in the Pictures Library for the local PC and HomeGroup PCs.

Private Networks (Client & Server) Provides inbound and outbound access to intranet networks that have an authenticated domain controller, or that the user has designated as either home or work networks. Inbound access to critical ports is always blocked.

Proximity Provides the capability to connect to devices in close proximity to the PC via near-field communication (NFC). Near-field proximity may be used to send files or communicate with an app on the nearby device.

Removable Storage Provides the capability to add, change, or delete files on removable storage devices. The app can only access the file types on removable storage that are defined in the manifest using the File Type Associations declaration. The app can't access removable storage on HomeGroup PCs.

Shared User Certificates This capability is subject to the Windows Store policy. It provides the capability to access software and hardware certificates, such as smart card certificates, for validating a user's identity. When related APIs are invoked at runtime, the user must take action (insert card, select certificate, etc.). This capability is not necessary if your app includes a private certificate via a Certificates declaration.

User Account Information Gives apps the ability to access the user's name and picture. This capability is required to access some APIs in the Windows.System.UserProfile namespace.

Videos Library Provides the capability to add, change, or delete files in the Videos Library for the local PC and HomeGroup PCs.

VOIP Calling Allows apps to access the VOIP calling APIs in the Windows.ApplicationModel.Calls namespace.

Webcam Provides access to the built-in camera or attached webcam's video feed. This allows the app to capture snapshots and movies.

USB Provides access to custom USB devices. This capability requires child elements. This feature is not supported on Windows Phone.

Human Interface Device (HID) Provides access to Human Interface Devices (HID). This capability requires child elements. For more info, see How to specify device capabilities for HID.

Bluetooth GATT Provides access to Bluetooth LE devices through a collection of primary services, included services, characteristics, and descriptors. This capability requires child elements. For more info, see How to specify device capabilities for Bluetooth.

Bluetooth RFCOMM Provides access to APIs that support the Basic Rate/Extended Data Rate (BR/EDR) transport and also lets your Windows Store app access a device that implements Serial Port Profile (SPP). This capability requires child elements. For more info, see How to specify device capabilities for Bluetooth.

pointOfService Provides access to Point of Service (POS) barcode scanners and magnetic stripe readers. This feature is not supported on Windows Phone.

Use the Windows Runtime API for communicating with your device
The following table connects some of the capabilities to Windows Runtime APIs.

DEVICE CAPABILITY API

AllJoyn Windows.Devices.AllJoyn

Blocked Chat Messages Windows.ApplicationModel.CommunicationBlocking

Location See Maps and location overview for more information.

Phone Call Windows.ApplicationModel.Calls

User Account Information Windows.System.UserProfile

VOIP Calling Windows.ApplicationModel.Calls

USB Windows.Devices.Usb

HID Windows.Devices.HumanInterfaceDevice

Bluetooth GATT Windows.Devices.Bluetooth.GenericAttributeProfile



Bluetooth RFCOMM Windows.Devices.Bluetooth.Rfcomm

Point of Service (POS) Windows.Devices.PointOfService


Enable usermode access on Windows 10 IoT Core

[ Updated for UWP apps on Windows 10. For Windows 8.x articles, see the archive ]
Windows 10 IoT Core contains new APIs for accessing GPIO, I2C, SPI, and UART directly from usermode.
Development boards like Raspberry Pi 2 expose a subset of these connections which enable users to extend a base
compute module with custom circuitry to address a particular application. These low level buses are usually shared
with other critical onboard functions, with only a subset of GPIO pins and buses exposed on headers. To preserve
system stability, it is necessary to specify which pins and buses are safe for modification by usermode applications.
This document describes how to specify this configuration in ACPI and provides tools to validate that the
configuration was specified correctly.

IMPORTANT
The audience for this document is UEFI and ACPI developers. Some familiarity with ACPI, ASL authoring, and SpbCx/GpioClx
is assumed.

Usermode access to low level buses on Windows is plumbed through the existing GpioClx and SpbCx frameworks.
A new driver called RhProxy, only available on Windows 10 IoT Core, exposes GpioClx and SpbCx resources to
usermode. To enable the APIs, a device node for rhproxy must be declared in your ACPI tables with each of the
GPIO and SPB resources that should be exposed to usermode. This document walks through authoring and
verifying the ASL.

ASL by example
Let's walk through the rhproxy device node declaration on Raspberry Pi 2. First, create the ACPI device declaration
in the \_SB scope.

Device(RHPX)
{
Name(_HID, "MSFT8000")
Name(_CID, "MSFT8000")
Name(_UID, 1)

_HID - Hardware Id. Set this to a vendor-specific hardware ID.
_CID - Compatible Id. Must be "MSFT8000".
_UID - Unique Id. Set to 1.
Next we declare each of the GPIO and SPB resources that should be exposed to usermode. The order in which
resources are declared is important because resource indexes are used to associate properties with resources. If
there are multiple I2C or SPI buses exposed, the first declared bus is considered the default bus for that type, and
will be the instance returned by the GetDefaultAsync() methods of Windows.Devices.I2c.I2cController and
Windows.Devices.Spi.SpiController.
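For context, here is a minimal C# sketch of how a UWP app consumes these buses once rhproxy exposes them. The slave address and register write are hypothetical; the pin number matches the GPIO example later in this section.

C#

using System.Threading.Tasks;
using Windows.Devices.Gpio;
using Windows.Devices.I2c;

public static class RhProxyClientSketch
{
    public static async Task OpenDefaultBusesAsync()
    {
        // Returns the first I2C bus declared in the rhproxy ACPI node.
        I2cController i2c = await I2cController.GetDefaultAsync();
        // 0x40 is a hypothetical slave address for an attached peripheral.
        I2cDevice device = i2c.GetDevice(new I2cConnectionSettings(0x40));
        device.Write(new byte[] { 0x00 }); // hypothetical register pointer write

        // GPIO pins declared with GpioIO/GpioInt resource pairs show up here.
        GpioController gpio = GpioController.GetDefault();
        GpioPin pin = gpio.OpenPin(4); // GPIO 4, as declared in the ASL example below
        pin.SetDriveMode(GpioPinDriveMode.Output);
        pin.Write(GpioPinValue.High);
    }
}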
SPI
Raspberry Pi has two exposed SPI buses. SPI0 has two hardware chip select lines and SPI1 has one hardware chip
select line. One SPISerialBus() resource declaration is required for each chip select line for each bus. The following
two SPISerialBus resource declarations are for the two chip select lines on SPI0. The DeviceSelection field contains
a unique value which the driver interprets as a hardware chip select line identifier. The exact value that you put in
the DeviceSelection field depends on how your driver interprets this field of the ACPI connection descriptor.

// Index 0
SPISerialBus( // SCKL - GPIO 11 - Pin 23
// MOSI - GPIO 10 - Pin 19
// MISO - GPIO 9 - Pin 21
// CE0 - GPIO 8 - Pin 24
0, // Device selection (CE0)
PolarityLow, // Device selection polarity
FourWireMode, // wiremode
0, // databit len: placeholder
ControllerInitiated, // slave mode
0, // connection speed: placeholder
ClockPolarityLow, // clock polarity: placeholder
ClockPhaseFirst, // clock phase: placeholder
"\\_SB.SPI0", // ResourceSource: SPI bus controller name
0, // ResourceSourceIndex
// Resource usage
) // Vendor Data

// Index 1
SPISerialBus( // SCKL - GPIO 11 - Pin 23
// MOSI - GPIO 10 - Pin 19
// MISO - GPIO 9 - Pin 21
// CE1 - GPIO 7 - Pin 26
1, // Device selection (CE1)
PolarityLow, // Device selection polarity
FourWireMode, // wiremode
0, // databit len: placeholder
ControllerInitiated, // slave mode
0, // connection speed: placeholder
ClockPolarityLow, // clock polarity: placeholder
ClockPhaseFirst, // clock phase: placeholder
"\\_SB.SPI0", // ResourceSource: SPI bus controller name
0, // ResourceSourceIndex
// Resource usage
) // Vendor Data

How does software know that these two resources should be associated with the same bus? The mapping between
bus friendly name and resource index is specified in the DSD:

Package(2) { "bus-SPI-SPI0", Package() { 0, 1 }},

This creates a bus named SPI0 with two chip select lines, resource indexes 0 and 1. Several more properties are
required to declare the capabilities of the SPI bus.

Package(2) { "SPI0-MinClockInHz", 7629 },


Package(2) { "SPI0-MaxClockInHz", 125000000 },

The MinClockInHz and MaxClockInHz properties specify the minimum and maximum clock speeds that are
supported by the controller. The API will prevent users from specifying values outside this range. The clock speed is
passed to your SPB driver in the _SPE field of the connection descriptor (ACPI section 6.4.3.8.2.2).

Package(2) { "SPI0-SupportedDataBitLengths", Package() { 8 }},

The SupportedDataBitLengths property lists the data bit lengths supported by the controller. Multiple values can
be specified in a comma-separated list. The API will prevent users from specifying values outside this list. The data
bit length is passed to your SPB driver in the _LEN field of the connection descriptor (ACPI section 6.4.3.8.2.2).
You can think of these resource declarations as templates. Some of the fields are fixed at system boot while
others are specified dynamically at runtime. The following fields of the SPISerialBus descriptor are fixed:
DeviceSelection
DeviceSelectionPolarity
WireMode
SlaveMode
ResourceSource
The following fields are placeholders for values specified by the user at runtime:
DataBitLength
ConnectionSpeed
ClockPolarity
ClockPhase
Since SPI1 contains only a single chip select line, a single SPISerialBus() resource is declared:

// Index 2

SPISerialBus( // SCKL - GPIO 21 - Pin 40


// MOSI - GPIO 20 - Pin 38
// MISO - GPIO 19 - Pin 35
// CE1 - GPIO 17 - Pin 11
1, // Device selection (CE1)
PolarityLow, // Device selection polarity
FourWireMode, // wiremode
0, // databit len: placeholder
ControllerInitiated, // slave mode
0, // connection speed: placeholder
ClockPolarityLow, // clock polarity: placeholder
ClockPhaseFirst, // clock phase: placeholder
"\\_SB.SPI1", // ResourceSource: SPI bus controller name
0, // ResourceSourceIndex
// Resource usage
) // Vendor Data

The accompanying friendly name declaration, which is required, is specified in the DSD and refers to the index of
this resource declaration.

Package(2) { "bus-SPI-SPI1", Package() { 2 }},

This creates a bus named SPI1 and associates it with resource index 2.
SPI Driver Requirements
Must use SpbCx or be SpbCx-compatible
Must have passed the MITT SPI Tests
Must support 4 MHz clock speed
Must support 8-bit data length
Must support all SPI Modes: 0, 1, 2, 3
I2C
Next, we declare the I2C resources. Raspberry Pi exposes a single I2C bus on pins 3 and 5.
// Index 3
I2CSerialBus( // Pin 3 (GPIO2, SDA1), 5 (GPIO3, SCL1)
0xFFFF, // SlaveAddress: placeholder
, // SlaveMode: default to ControllerInitiated
0, // ConnectionSpeed: placeholder
, // Addressing Mode: placeholder
"\\_SB.I2C1", // ResourceSource: I2C bus controller name
,
,
) // VendorData

The accompanying friendly name declaration, which is required, is specified in the DSD:

Package(2) { "bus-I2C-I2C1", Package() { 3 }},

This declares an I2C bus with friendly name I2C1 that refers to resource index 3, which is the index of the
I2CSerialBus() resource that we declared above.
The following fields of the I2CSerialBus() descriptor are fixed:
SlaveMode
ResourceSource
The following fields are placeholders for values specified by the user at runtime.
SlaveAddress
ConnectionSpeed
AddressingMode
I2C Driver Requirements
Must use SpbCx or be SpbCx-compatible
Must have passed the MITT I2C Tests
Must support 7-bit addressing
Must support 100 kHz clock speed
Must support 400 kHz clock speed
GPIO
Next, we declare all the GPIO pins that are exposed to usermode. We offer the following guidance in deciding
which pins to expose:
Declare all pins on exposed headers.
Declare pins that are connected to useful onboard functions like buttons and LEDs.
Do not declare pins that are reserved for system functions or are not connected to anything.
The following block of ASL declares two pins, GPIO 4 and GPIO 5. The other pins are not shown here for brevity.
Appendix C contains a sample PowerShell script that can be used to generate the GPIO resources.

// Index 4 GPIO 4
GpioIO(Shared, PullUp, , , , \\_SB.GPI0, , , , ) { 4 }
GpioInt(Edge, ActiveBoth, Shared, PullUp, 0, \\_SB.GPI0,) { 4 }

// Index 6 GPIO 5
GpioIO(Shared, PullUp, , , , \\_SB.GPI0, , , , ) { 5 }
GpioInt(Edge, ActiveBoth, Shared, PullUp, 0, \\_SB.GPI0,) { 5 }

The following requirements must be observed when declaring GPIO pins:


Only memory mapped GPIO controllers are supported. GPIO controllers interfaced over I2C/SPI are not
supported. The controller driver is a memory mapped controller if it sets the MemoryMappedController flag in
the CLIENT_CONTROLLER_BASIC_INFORMATION structure in response to the
CLIENT_QueryControllerBasicInformation callback.
Each pin requires both a GpioIO and a GpioInt resource. The GpioInt resource must immediately follow the
GpioIO resource and must refer to the same pin number.
GPIO resources must be ordered by increasing pin number.
Each GpioIO and GpioInt resource must contain exactly one pin number in the pin list.
The ShareType field of both descriptors must be Shared
The EdgeLevel field of the GpioInt descriptor must be Edge
The ActiveLevel field of the GpioInt descriptor must be ActiveBoth
The PinConfig field
Must be the same in both the GpioIO and GpioInt descriptors
Must be one of PullUp, PullDown, or PullNone. It cannot be PullDefault.
The pull configuration must match the power-on state of the pin. Putting the pin in the specified pull
mode from power-on state must not change the state of the pin. For example, if the datasheet specifies
that the pin comes up with a pull up, specify PinConfig as PullUp.
Firmware, UEFI, and driver initialization code should not change the state of a pin from its power-on state during
boot. Only the user knows what's attached to a pin and therefore which state transitions are safe. The power-on
state of each pin must be documented so that users can design hardware that correctly interfaces with a pin. A pin
must not change state unexpectedly during boot.
If an exposed pin has multiple alternate functions, it is the responsibility of firmware to initialize the pin in the
correct mux configuration for subsequent use by the OS. Dynamically changing the function of a pin (muxing) is
not currently supported on Windows.
Supported Drive Modes
If your GPIO controller supports built-in pull up and pull down resistors in addition to high impedance input and
CMOS output, you must specify this with the optional SupportedDriveModes property.

Package (2) { "GPIO-SupportedDriveModes", 0xf },

The SupportedDriveModes property indicates which drive modes are supported by the GPIO controller. In the
example above, all of the following drive modes are supported. The property is a bitmask of the following values:

FLAG VALUE   DRIVE MODE           DESCRIPTION
0x1          InputHighImpedance   The pin supports high impedance input, which corresponds to the PullNone value in ACPI.
0x2          InputPullUp          The pin supports a built-in pull-up resistor, which corresponds to the PullUp value in ACPI.
0x4          InputPullDown        The pin supports a built-in pull-down resistor, which corresponds to the PullDown value in ACPI.
0x8          OutputCmos           The pin supports generating both strong highs and strong lows (as opposed to open drain).

InputHighImpedance and OutputCmos are supported by almost all GPIO controllers, and are assumed by default if
the SupportedDriveModes property is not specified.
If a GPIO signal goes through a level shifter before reaching an exposed header, declare the drive modes supported
by the SOC, even if the drive mode would not be observable on the external header. For example, if a pin goes
through a bidirectional level shifter that makes a pin appear as open drain with resistive pull up, you will never
observe a high impedance state on the exposed header even if the pin is configured as a high impedance input.
You should still declare that the pin supports high impedance input.
Pin Numbering
Windows supports two pin numbering schemes:
Sequential pin numbering: users see numbers like 0, 1, 2, up to the number of exposed pins. 0 is the first
GpioIo resource declared in ASL, 1 is the second GpioIo resource declared in ASL, and so on.
Native pin numbering: users see the pin numbers specified in GpioIo descriptors, e.g. 4, 5, 12, 13.

Package (2) { "GPIO-UseDescriptorPinNumbers", 1 },

The UseDescriptorPinNumbers property tells Windows to use native pin numbering instead of sequential pin
numbering. If the UseDescriptorPinNumbers property is not specified or its value is zero, Windows defaults to
sequential pin numbering.
If native pin numbering is used, you must also specify the PinCount property.

Package (2) { "GPIO-PinCount", 54 },

The PinCount property should match the value returned through the TotalPins property in the
CLIENT_QueryControllerBasicInformation callback of the GpioClx driver.
Choose the numbering scheme that is most compatible with existing published documentation for your board. For
example, Raspberry Pi uses native pin numbering because many existing pinout diagrams use the BCM2835 pin
numbers. MinnowBoardMax uses sequential pin numbering because there are few existing pinout diagrams, and
sequential pin numbering simplifies the developer experience because only 10 pins are exposed out of more than
200 pins. The decision to use sequential or native pin numbering should aim to reduce developer confusion.
GPIO Driver Requirements
Must use GpioClx
Must be on-SOC memory mapped
Must use emulated ActiveBoth interrupt handling
UART
UART is not supported on Raspberry Pi at the time of writing, so the following UART declaration is from
MinnowBoardMax.
// Index 2
UARTSerialBus(             // Pin 17, 19 of JP1, for SIO_UART2
    115200,                // InitialBaudRate: in bits per second
    ,                      // BitsPerByte: default to 8 bits
    ,                      // StopBits: defaults to one bit
    0xfc,                  // LinesInUse: 8 1-bit flags to declare line enabled
    ,                      // IsBigEndian: default to LittleEndian
    ,                      // Parity: defaults to no parity
    ,                      // FlowControl: defaults to no flow control
    32,                    // ReceiveBufferSize
    32,                    // TransmitBufferSize
    "\\_SB.URT2",          // ResourceSource: UART bus controller name
    ,
    ,
    ,
)

Only the ResourceSource field is fixed; all other fields are placeholders for values specified at runtime by the
user.
The accompanying friendly name declaration is:

Package(2) { "bus-UART-UART2", Package() { 2 }},

This assigns the friendly name UART2 to the controller, which is the identifier users will use to access the bus
from usermode.
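A UWP app then opens the port through the Windows.Devices.SerialCommunication API and configures it at runtime. The following is a minimal sketch; the line settings shown are examples.

using System.Threading.Tasks;
using Windows.Devices.Enumeration;
using Windows.Devices.SerialCommunication;

public static async Task<SerialDevice> OpenUart2Async()
{
    // "UART2" is the friendly name assigned by the DSD entry above.
    string aqs = SerialDevice.GetDeviceSelector("UART2");
    DeviceInformationCollection devices = await DeviceInformation.FindAllAsync(aqs);
    SerialDevice serial = await SerialDevice.FromIdAsync(devices[0].Id);

    // These assignments fill the descriptor's runtime placeholders.
    serial.BaudRate = 115200;
    serial.DataBits = 8;
    serial.Parity = SerialParity.None;
    serial.StopBits = SerialStopBitCount.One;
    return serial;
}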

Runtime Pin Muxing


Pin muxing is the ability to use the same physical pin for different functions. Several different on-chip peripherals,
such as an I2C controller, SPI controller, and GPIO controller, might be routed to the same physical pin on a SOC.
The mux block controls which function is active on the pin at any given time. Traditionally, firmware is responsible
for establishing function assignments at boot, and this assignment remains static through the boot session.
Runtime pin muxing adds the ability to reconfigure pin function assignments at runtime. Enabling users to choose
a pin's function at runtime speeds development by letting them quickly reconfigure a board's pins, and enables
hardware to support a broader range of applications than a static configuration would.
Users consume muxing support for GPIO, I2C, SPI, and UART without writing any additional code. When a user
opens a GPIO or bus using OpenPin() or FromIdAsync(), the underlying physical pins are automatically muxed to
the requested function. If the pins are already in use by a different function, the OpenPin() or FromIdAsync() call
will fail. When the user closes the device by disposing the GpioPin, I2cDevice, SpiDevice, or SerialDevice object, the
pins are released, allowing them to later be opened for a different function.
Windows contains built-in support for pin muxing in the GpioClx, SpbCx, and SerCx frameworks. These frameworks
work together to automatically switch a pin to the correct function when a GPIO pin or bus is accessed. Access to
the pins is arbitrated to prevent conflicts among multiple clients. In addition to this built-in support, the interfaces
and protocols for pin muxing are general purpose and can be extended to support additional devices and
scenarios.
This document first describes the underlying interfaces and protocols involved in pin muxing, and then describes
how to add support for pin muxing to GpioClx, SpbCx, and SerCx controller drivers.
Pin Muxing Architecture
This section describes the underlying interfaces and protocols involved in pin muxing. Knowledge of the underlying
protocols is not necessarily needed to support pin muxing with GpioClx/SpbCx/SerCx drivers. For details on how to
support pin muxing with GpioClx/SpbCx/SerCx drivers, see Implementing muxing support in GpioClx client
drivers and Consuming muxing support in SpbCx and SerCx controller drivers.
Pin muxing is accomplished by the cooperation of several components.
Pin muxing servers: these are drivers that control the pin muxing control block. Pin muxing servers receive pin
muxing requests from clients via IRP_MJ_CREATE requests to reserve muxing resources, and via
IOCTL_GPIO_COMMIT_FUNCTION_CONFIG_PINS requests to switch a pin's function. The pin muxing server is
usually the GPIO driver, since the muxing block is sometimes part of the GPIO block. Even if the muxing block
is a separate peripheral, the GPIO driver is a logical place to put muxing functionality.
Pin muxing clients: these are drivers that consume pin muxing. Pin muxing clients receive pin muxing
resources from ACPI firmware. Pin muxing resources are a type of connection resource and are managed by the
resource hub. Pin muxing clients reserve pin muxing resources by opening a handle to the resource. To effect a
hardware change, clients must commit the configuration by sending an
IOCTL_GPIO_COMMIT_FUNCTION_CONFIG_PINS request. Clients release pin muxing resources by closing the
handle, at which point the muxing configuration is reverted to its default state.
ACPI firmware: specifies muxing configuration with MsftFunctionConfig() resources. MsftFunctionConfig
resources express which pins, in which muxing configuration, are required by a client. MsftFunctionConfig
resources contain a function number, a pull configuration, and a list of pin numbers. MsftFunctionConfig
resources are supplied to pin muxing clients as hardware resources, which are received by drivers in their
PrepareHardware callback similarly to GPIO and SPB connection resources. Clients receive a resource hub ID
which can be used to open a handle to the resource.

Note: You must pass the /MsftInternal command line switch to asl.exe to compile ASL files containing
MsftFunctionConfig() descriptors, since these descriptors are currently under review by the ACPI working
committee. For example: asl.exe /MsftInternal dsdt.asl

The sequence of operations involved in pin muxing is shown below.

1. The client receives MsftFunctionConfig resources from ACPI firmware in its EvtDevicePrepareHardware()
callback.
2. The client uses the resource hub helper function RESOURCE_HUB_CREATE_PATH_FROM_ID() to create a path from
the resource ID, then opens a handle to the path (using ZwCreateFile(), IoGetDeviceObjectPointer(), or
WdfIoTargetOpen()).
3. The server extracts the resource hub ID from the file path using the resource hub helper function
RESOURCE_HUB_ID_FROM_FILE_NAME(), then queries the resource hub to get the resource descriptor.
4. The server performs sharing arbitration for each pin in the descriptor and completes the IRP_MJ_CREATE
request.
5. The client issues an IOCTL_GPIO_COMMIT_FUNCTION_CONFIG_PINS request on the received handle.
6. In response to IOCTL_GPIO_COMMIT_FUNCTION_CONFIG_PINS, the server performs the hardware muxing
operation by making the specified function active on each pin.
7. The client proceeds with operations that depend on the requested pin muxing configuration.
8. When the client no longer requires the pins to be muxed, it closes the handle.
9. In response to the handle being closed, the server reverts the pins back to their initial state.
Protocol description for pin muxing clients
This section describes how a client consumes pin muxing functionality. This does not apply to SerCx and SpbCx
controller drivers, since the frameworks implement this protocol on behalf of controller drivers.
Parsing resources
A WDF driver receives MsftFunctionConfig() resources in its EvtDevicePrepareHardware() routine.
MsftFunctionConfig resources can be identified by the following fields:

CM_PARTIAL_RESOURCE_DESCRIPTOR::Type = CmResourceTypeConnection
CM_PARTIAL_RESOURCE_DESCRIPTOR::u.Connection.Class = CM_RESOURCE_CONNECTION_CLASS_FUNCTION_CONFIG
CM_PARTIAL_RESOURCE_DESCRIPTOR::u.Connection.Type = CM_RESOURCE_CONNECTION_TYPE_FUNCTION_CONFIG

An EvtDevicePrepareHardware() routine might extract MsftFunctionConfig resources as follows:


EVT_WDF_DEVICE_PREPARE_HARDWARE evtDevicePrepareHardware;

_Use_decl_annotations_
NTSTATUS
evtDevicePrepareHardware (
    WDFDEVICE WdfDevice,
    WDFCMRESLIST ResourcesTranslated
    )
{
    PAGED_CODE();

    LARGE_INTEGER connectionId;
    ULONG functionConfigCount = 0;

    const ULONG resourceCount = WdfCmResourceListGetCount(ResourcesTranslated);

    for (ULONG index = 0; index < resourceCount; ++index) {
        const CM_PARTIAL_RESOURCE_DESCRIPTOR* resDescPtr =
            WdfCmResourceListGetDescriptor(ResourcesTranslated, index);

        switch (resDescPtr->Type) {
        case CmResourceTypeConnection:
            switch (resDescPtr->u.Connection.Class) {
            case CM_RESOURCE_CONNECTION_CLASS_FUNCTION_CONFIG:
                switch (resDescPtr->u.Connection.Type) {
                case CM_RESOURCE_CONNECTION_TYPE_FUNCTION_CONFIG:
                    switch (functionConfigCount) {
                    case 0:
                        // save the connection ID
                        connectionId.LowPart = resDescPtr->u.Connection.IdLowPart;
                        connectionId.HighPart = resDescPtr->u.Connection.IdHighPart;
                        break;
                    } // switch (functionConfigCount)
                    ++functionConfigCount;
                    break; // CM_RESOURCE_CONNECTION_TYPE_FUNCTION_CONFIG
                } // switch (resDescPtr->u.Connection.Type)
                break; // CM_RESOURCE_CONNECTION_CLASS_FUNCTION_CONFIG
            } // switch (resDescPtr->u.Connection.Class)
            break;
        } // switch (resDescPtr->Type)
    } // for (resource list)

    if (functionConfigCount < 1) {
        return STATUS_INVALID_DEVICE_CONFIGURATION;
    }

    // TODO: save connectionId in the device context for later use

    return STATUS_SUCCESS;
}

Reserving and committing resources


When a client wants to mux pins, it reserves and commits the MsftFunctionConfig resource. The following example
shows how a client might reserve and commit MsftFunctionConfig resources.
_IRQL_requires_max_(PASSIVE_LEVEL)
NTSTATUS
AcquireFunctionConfigResource (
    WDFDEVICE WdfDevice,
    LARGE_INTEGER ConnectionId,
    _Out_ WDFIOTARGET* ResourceHandlePtr
    )
{
    PAGED_CODE();

    //
    // Form the resource path from the connection ID
    //
    DECLARE_UNICODE_STRING_SIZE(resourcePath, RESOURCE_HUB_PATH_CHARS);
    NTSTATUS status = RESOURCE_HUB_CREATE_PATH_FROM_ID(
                        &resourcePath,
                        ConnectionId.LowPart,
                        ConnectionId.HighPart);
    if (!NT_SUCCESS(status)) {
        return status;
    }

    //
    // Create a WDFIOTARGET
    //
    WDFIOTARGET resourceHandle;
    status = WdfIoTargetCreate(WdfDevice, WDF_NO_ATTRIBUTES, &resourceHandle);
    if (!NT_SUCCESS(status)) {
        return status;
    }

    //
    // Reserve the resource by opening a WDFIOTARGET to the resource
    //
    WDF_IO_TARGET_OPEN_PARAMS openParams;
    WDF_IO_TARGET_OPEN_PARAMS_INIT_OPEN_BY_NAME(
        &openParams,
        &resourcePath,
        FILE_GENERIC_READ | FILE_GENERIC_WRITE);

    status = WdfIoTargetOpen(resourceHandle, &openParams);
    if (!NT_SUCCESS(status)) {
        return status;
    }

    //
    // Commit the resource
    //
    status = WdfIoTargetSendIoctlSynchronously(
                resourceHandle,
                WDF_NO_HANDLE,      // WdfRequest
                IOCTL_GPIO_COMMIT_FUNCTION_CONFIG_PINS,
                nullptr,            // InputBuffer
                nullptr,            // OutputBuffer
                nullptr,            // RequestOptions
                nullptr);           // BytesReturned

    if (!NT_SUCCESS(status)) {
        WdfIoTargetClose(resourceHandle);
        return status;
    }

    //
    // Pins were successfully muxed; return the handle to the caller
    //
    *ResourceHandlePtr = resourceHandle;
    return STATUS_SUCCESS;
}
The driver should store the WDFIOTARGET in one of its context areas so that it can close it later. When the driver
is ready to release the muxing configuration, it should close the resource handle by calling WdfObjectDelete(), or
WdfIoTargetClose() if it intends to reuse the WDFIOTARGET.

WdfObjectDelete(resourceHandle);

When the client closes its resource handle, the pins are muxed back to their initial state, and can now be acquired
by a different client.
Protocol description for pin muxing servers
This section describes how a pin muxing server exposes its functionality to clients. This does not apply to GpioClx
miniport drivers, since the framework implements this protocol on behalf of client drivers. For details on how to
support pin muxing in GpioClx client drivers, see Implementing muxing support in GpioClx client drivers.
Handling IRP_MJ_CREATE requests
Clients open a handle to a resource when they want to reserve a pin muxing resource. A pin muxing server receives
IRP_MJ_CREATE requests by way of a reparse operation from the resource hub. The trailing path component of the
IRP_MJ_CREATE request contains the resource hub ID, which is a 64-bit integer in hexadecimal format. The server
should extract the resource hub ID from the filename using RESOURCE_HUB_ID_FROM_FILE_NAME() from reshub.h,
and send IOCTL_RH_QUERY_CONNECTION_PROPERTIES to the resource hub to obtain the MsftFunctionConfig()
descriptor.
The server should validate the descriptor and extract the sharing mode and pin list from the descriptor. It should
then perform sharing arbitration for the pins, and if successful, mark the pins as reserved before completing the
request.
Sharing arbitration succeeds overall if sharing arbitration succeeds for each pin in the pin list. Each pin should be
arbitrated as follows:
If the pin is not already reserved, sharing arbitration succeeds.
If the pin is already reserved as exclusive, sharing arbitration fails.
If the pin is already reserved as shared,
and the incoming request is shared, sharing arbitration succeeds.
and the incoming request is exclusive, sharing arbitration fails.
If sharing arbitration fails, the request should be completed with STATUS_GPIO_INCOMPATIBLE_CONNECT_MODE.
If sharing arbitration succeeds, the request should be completed with STATUS_SUCCESS.
Note that the sharing mode of the incoming request should be taken from the MsftFunctionConfig descriptor, not
IrpSp->Parameters.Create.ShareAccess.
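The arbitration rules reduce to a simple per-pin state check. The following sketch is purely illustrative of that logic (a real server implements it in kernel-mode C/C++ in its IRP_MJ_CREATE path; the type and method names here are hypothetical):

using System.Collections.Generic;

enum ShareMode { Exclusive, Shared }

static class PinArbiter
{
    // Tracks the sharing mode each currently reserved pin was granted with.
    static readonly Dictionary<int, ShareMode> reservations =
        new Dictionary<int, ShareMode>();

    // Returns true if the incoming request may reserve the pin.
    public static bool TryReserve(int pin, ShareMode requested)
    {
        ShareMode existing;
        if (!reservations.TryGetValue(pin, out existing))
        {
            reservations[pin] = requested;   // not reserved: succeed
            return true;
        }
        if (existing == ShareMode.Exclusive)
        {
            return false;                    // reserved exclusive: fail
        }
        // Reserved shared: succeed only if the incoming request is also shared.
        return requested == ShareMode.Shared;
    }
}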
Handling IOCTL_GPIO_COMMIT_FUNCTION_CONFIG_PINS requests
After the client has successfully reserved a MsftFunctionConfig resource by opening a handle, it can send
IOCTL_GPIO_COMMIT_FUNCTION_CONFIG_PINS to request the server to perform the actual hardware muxing
operation. When the server receives IOCTL_GPIO_COMMIT_FUNCTION_CONFIG_PINS, for each pin in the pin list it
should
Set the pull mode specified in the PinConfiguration member of the PNP_FUNCTION_CONFIG_DESCRIPTOR
structure into hardware.
Mux the pin to the function specified by the FunctionNumber member of the
PNP_FUNCTION_CONFIG_DESCRIPTOR structure.
The server should then complete the request with STATUS_SUCCESS.
The meaning of FunctionNumber is defined by the server, and it is understood that the MsftFunctionConfig
descriptor was authored with knowledge of how the server interprets this field.
Remember that when the handle is closed, the server will have to revert the pins to the configuration they were in
when IOCTL_GPIO_COMMIT_FUNCTION_CONFIG_PINS was received, so the server may need to save the pins'
state before modifying them.
Handling IRP_MJ_CLOSE requests
When a client no longer requires a muxing resource, it closes its handle. When a server receives an IRP_MJ_CLOSE
request, it should revert the pins to the state they were in when IOCTL_GPIO_COMMIT_FUNCTION_CONFIG_PINS
was received. If the client never sent an IOCTL_GPIO_COMMIT_FUNCTION_CONFIG_PINS, no action is necessary. The
server should then mark the pins as available with respect to sharing arbitration, and complete the request with
STATUS_SUCCESS. Be sure to properly synchronize IRP_MJ_CLOSE handling with IRP_MJ_CREATE handling.
Authoring guidelines for ACPI tables
This section describes how to supply muxing resources to client drivers. Note that you will need Microsoft ASL
compiler build 14327 or later to compile tables containing MsftFunctionConfig() resources. MsftFunctionConfig()
resources are supplied to pin muxing clients as hardware resources. MsftFunctionConfig() resources should be
supplied to drivers that require pin muxing changes, which are typically SPB and serial controller drivers, but
should not be supplied to SPB and serial peripheral drivers, since the controller driver handles muxing
configuration. The MsftFunctionConfig() ACPI macro is defined as follows:

MsftFunctionConfig(Shared/Exclusive,
                   PinPullConfig,
                   FunctionNumber,
                   ResourceSource,
                   ResourceSourceIndex,
                   ResourceConsumer/ResourceProducer,
                   VendorData) { Pin List }

Shared/Exclusive: if exclusive, this pin can be acquired by a single client at a time. If shared, multiple shared
clients can acquire the resource. Always set this to exclusive, since allowing multiple uncoordinated clients to
access a mutable resource can lead to data races and therefore unpredictable results.
PinPullConfig: one of
    PullDefault: use the SOC-defined power-on default pull configuration
    PullUp: enable pull-up resistor
    PullDown: enable pull-down resistor
    PullNone: disable all pull resistors
FunctionNumber: the function number to program into the mux.
ResourceSource: the ACPI namespace path of the pin muxing server.
ResourceSourceIndex: set this to 0.
ResourceConsumer/ResourceProducer: set this to ResourceConsumer.
VendorData: optional binary data whose meaning is defined by the pin muxing server. This should usually be
left blank.
Pin List: a comma-separated list of pin numbers to which the configuration applies. When the pin muxing
server is a GpioClx driver, these are GPIO pin numbers and have the same meaning as pin numbers in a GpioIo
descriptor.
The following example shows how one might supply a MsftFunctionConfig() resource to an I2C controller driver.
Device(I2C1)
{
    Name(_HID, "BCM2841")
    Name(_CID, "BCMI2C")
    Name(_UID, 0x1)

    Method(_STA)
    {
        Return(0xf)
    }

    Method(_CRS, 0x0, NotSerialized)
    {
        Name(RBUF, ResourceTemplate()
        {
            Memory32Fixed(ReadWrite, 0x3F804000, 0x20)
            Interrupt(ResourceConsumer, Level, ActiveHigh, Shared) { 0x55 }
            MsftFunctionConfig(Exclusive, PullUp, 4, "\\_SB.GPI0", 0, ResourceConsumer, ) { 2, 3 }
        })
        Return(RBUF)
    }
}

In addition to the memory and interrupt resources typically required by a controller driver, a MsftFunctionConfig()
resource is also specified. This resource enables the I2C controller driver to put pins 2 and 3 (managed by the
device node at \_SB.GPI0) in function 4 with pull-up resistors enabled.
Implementing muxing support in GpioClx client drivers
GpioClx has built-in support for pin muxing. GpioClx miniport drivers (also referred to as GpioClx client drivers)
drive GPIO controller hardware. As of Windows 10 build 14327, GpioClx miniport drivers can add support for pin
muxing by implementing two new DDIs:
CLIENT_ConnectFunctionConfigPins: called by GpioClx to command the miniport driver to apply the specified
muxing configuration.
CLIENT_DisconnectFunctionConfigPins: called by GpioClx to command the miniport driver to revert the
muxing configuration.
See GpioClx Event Callback Functions for a description of these routines.
In addition to these two new DDIs, existing DDIs should be audited for pin muxing compatibility:
CLIENT_ConnectIoPins/CLIENT_ConnectInterrupt: CLIENT_ConnectIoPins is called by GpioClx to command the
miniport driver to configure a set of pins for GPIO input or output. GPIO is mutually exclusive with
MsftFunctionConfig, meaning a pin will never be connected for GPIO and MsftFunctionConfig at the same time.
Since a pin's default function is not required to be GPIO, a pin will not necessarily be muxed to GPIO when
ConnectIoPins is called. ConnectIoPins is required to perform all operations necessary to make the pin ready for
GPIO IO, including muxing operations. CLIENT_ConnectInterrupt should behave similarly, since interrupts can be
thought of as a special case of GPIO input.
CLIENT_DisconnectIoPins/CLIENT_DisconnectInterrupt: these routines should return pins to the state they were
in when CLIENT_ConnectIoPins/CLIENT_ConnectInterrupt was called, unless the PreserveConfiguration flag is
specified. In addition to reverting the direction of pins to their default state, the miniport should also revert each
pin's muxing state to the state it was in when the _Connect routine was called.
For example, assume that a pin's default muxing configuration is UART, and the pin can also be used as GPIO.
When CLIENT_ConnectIoPins is called to connect the pin for GPIO, it should mux the pin to GPIO, and in
CLIENT_DisconnectIoPins, it should mux the pin back to UART. In general, the _Disconnect routines should undo
operations done by the _Connect routines.
Consuming muxing support in SpbCx and SerCx controller drivers
As of Windows 10 build 14327, the SpbCx and SerCx frameworks contain built-in support for pin muxing that
enables SpbCx and SerCx controller drivers to be pin muxing clients without any code changes to the controller
drivers themselves. By extension, any SpbCx/SerCx peripheral driver that connects to a muxing-enabled
SpbCx/SerCx controller driver will trigger pin muxing activity.
Pin muxing introduces a dependency from SerCx and SpbCx controller drivers to the GPIO driver, which is usually
responsible for muxing.

At device initialization time, the SpbCx and SerCx frameworks parse all MsftFunctionConfig() resources supplied as
hardware resources to the device. SpbCx/SerCx then acquire and release the pin muxing resources on demand.
SpbCx applies pin muxing configuration in its IRP_MJ_CREATE handler, just before calling the controller driver's
EvtSpbTargetConnect() callback. If muxing configuration could not be applied, the controller driver's
EvtSpbTargetConnect() callback will not be called. Therefore, an SPB controller driver may assume that pins are muxed
to the SPB function by the time EvtSpbTargetConnect() is called.
SpbCx reverts pin muxing configuration in its IRP_MJ_CLOSE handler, just after invoking the controller driver's
EvtSpbTargetDisconnect() callback. The result is that pins are muxed to the SPB function whenever a peripheral
driver opens a handle to the SPB controller driver, and are muxed away when the peripheral driver closes its
handle.
SerCx behaves similarly. SerCx acquires all MsftFunctionConfig() resources in its IRP_MJ_CREATE handler just before
invoking the controller driver's EvtSerCx2FileOpen() callback, and releases all resources in its IRP_MJ_CLOSE
handler, just after invoking the controller driver's EvtSerCx2FileClose() callback.
The implication of dynamic pin muxing for SerCx and SpbCx controller drivers is that they must be able to tolerate
pins being muxed away from the SPB/UART function at certain times. Controller drivers need to assume that pins will
not be muxed until EvtSpbTargetConnect() or EvtSerCx2FileOpen() is called. Pins are not necessarily muxed to the
SPB/UART function during the following callbacks. The following is not a complete list, but represents the most
common PNP routines implemented by controller drivers:
DriverEntry
EvtDriverDeviceAdd
EvtDevicePrepareHardware/EvtDeviceReleaseHardware
EvtDeviceD0Entry/EvtDeviceD0Exit

Verification
When you've finished authoring your ASL, run the Hardware Lab Kit (HLK) tests to verify that all
resources are exposed correctly and the underlying buses meet the functional contract of the API. The following
sections describe how to load the rhproxy device node for testing without recompiling your firmware and how to
run the HLK tests.
Compile and load ASL with ACPITABL.dat
The first step is to compile and load the ASL file onto your system under test. We recommend using ACPITABL.dat
during development and validation as it does not require a full UEFI rebuild to test ASL changes.
1. Create a file named yourboard.asl and put the RHPX device node inside a DefinitionBlock:
DefinitionBlock ("ACPITABL.dat", "SSDT", 1, "MSFT", "RHPROXY", 1) { Scope (\_SB) { Device(RHPX) { ... } } }
2. Download the WDK and get asl.exe
3. Run the following command to generate ACPITABL.dat: asl.exe yourboard.asl
4. Copy the resulting ACPITABL.dat file to c:\windows\system32 on your system under test.
5. Turn on testsigning on your system under test: bcdedit /set testsigning on
6. Reboot the system under test. The system will append the ACPI tables defined in ACPITABL.dat to the system
firmware tables.
7. Verify that the RHPX device node was added to the system: devcon status *msft8000
   The output of devcon should indicate that the device is present, although the driver may have failed to
   initialize if there are bugs in the ASL that need to be worked out.
Run the HLK Tests
When you select the rhproxy device node in HLK manager, the applicable tests will automatically be selected.
In the HLK manager, select Resource Hub Proxy device:

Then click the Tests tab, and select I2C WinRT, Gpio WinRT, and Spi WinRT tests.
Click Run Selected. Further documentation on each test is available by right clicking on the test and clicking Test
Description.
More testing resources
Simple command line tools for Gpio, I2c, Spi, and Serial are available on the ms-iot github samples repository
(https://github.com/ms-iot/samples). These tools can be helpful for manual debugging.

TOOL               LINK

GpioTestTool       https://developer.microsoft.com/windows/iot/samples/gpiotesttool
I2cTestTool        https://developer.microsoft.com/windows/iot/samples/I2cTestTool
SpiTestTool        https://developer.microsoft.com/windows/iot/samples/spitesttool
MinComm (Serial)   https://github.com/ms-iot/samples/tree/develop/MinComm

Resources
DESTINATION                                     LINK

ACPI 5.0 specification                          http://acpi.info/spec.htm
Asl.exe (Microsoft ASL Compiler)                https://msdn.microsoft.com/library/windows/hardware/dn551195.aspx
Windows.Devices.Gpio                            https://msdn.microsoft.com/library/windows/apps/windows.devices.gpio.aspx
Windows.Devices.I2c                             https://msdn.microsoft.com/library/windows/apps/windows.devices.i2c.aspx
Windows.Devices.Spi                             https://msdn.microsoft.com/library/windows/apps/windows.devices.spi.aspx
Windows.Devices.SerialCommunication             https://msdn.microsoft.com/library/windows/apps/windows.devices.serialcommunication.aspx
Test Authoring and Execution Framework (TAEF)   https://msdn.microsoft.com/library/windows/hardware/hh439725.aspx
SpbCx                                           https://msdn.microsoft.com/library/windows/hardware/hh450906.aspx
GpioClx                                         https://msdn.microsoft.com/library/windows/hardware/hh439508.aspx
SerCx                                           https://msdn.microsoft.com/library/windows/hardware/ff546939.aspx
MITT I2C Tests                                  https://msdn.microsoft.com/library/windows/hardware/dn919852.aspx
GpioTestTool                                    https://developer.microsoft.com/windows/iot/samples/GPIOTestTool
I2cTestTool                                     https://developer.microsoft.com/windows/iot/samples/I2cTestTool
SpiTestTool                                     https://developer.microsoft.com/windows/iot/samples/spitesttool
MinComm (Serial)                                https://github.com/ms-iot/samples/tree/develop/MinComm
Hardware Lab Kit (HLK)                          https://msdn.microsoft.com/library/windows/hardware/dn930814.aspx

Appendix
Appendix A - Raspberry Pi ASL Listing
Header pinout: https://developer.microsoft.com/windows/iot/samples/PinMappingsRPi2

DefinitionBlock ("ACPITABL.dat", "SSDT", 1, "MSFT", "RHPROXY", 1)
{

Scope (\_SB)
{
//
// RHProxy Device Node to enable WinRT API
//
Device(RHPX)
{
Name(_HID, "MSFT8000")
Name(_CID, "MSFT8000")
Name(_UID, 1)

Name(_CRS, ResourceTemplate()
{
// Index 0
SPISerialBus( // SCKL - GPIO 11 - Pin 23
// MOSI - GPIO 10 - Pin 19
// MISO - GPIO 9 - Pin 21
// CE0 - GPIO 8 - Pin 24
0, // Device selection (CE0)
PolarityLow, // Device selection polarity
FourWireMode, // wiremode
0, // databit len: placeholder
ControllerInitiated, // slave mode
0, // connection speed: placeholder
ClockPolarityLow, // clock polarity: placeholder
ClockPhaseFirst, // clock phase: placeholder
"\\_SB.SPI0", // ResourceSource: SPI bus controller name
0, // ResourceSourceIndex
// Resource usage
) // Vendor Data

// Index 1
SPISerialBus( // SCKL - GPIO 11 - Pin 23
// MOSI - GPIO 10 - Pin 19
// MISO - GPIO 9 - Pin 21
// CE1 - GPIO 7 - Pin 26
1, // Device selection (CE1)
PolarityLow, // Device selection polarity
FourWireMode, // wiremode
0, // databit len: placeholder
ControllerInitiated, // slave mode
0, // connection speed: placeholder
ClockPolarityLow, // clock polarity: placeholder
ClockPhaseFirst, // clock phase: placeholder
"\\_SB.SPI0", // ResourceSource: SPI bus controller name
0, // ResourceSourceIndex
// Resource usage
) // Vendor Data

// Index 2
SPISerialBus( // SCKL - GPIO 21 - Pin 40
// MOSI - GPIO 20 - Pin 38
// MISO - GPIO 19 - Pin 35
// CE1 - GPIO 17 - Pin 11
1, // Device selection (CE1)
PolarityLow, // Device selection polarity
FourWireMode, // wiremode
0, // databit len: placeholder
ControllerInitiated, // slave mode
0, // connection speed: placeholder
ClockPolarityLow, // clock polarity: placeholder
ClockPhaseFirst, // clock phase: placeholder
"\\_SB.SPI1", // ResourceSource: SPI bus controller name
0, // ResourceSourceIndex
// Resource usage
) // Vendor Data
// Index 3
I2CSerialBus( // Pin 3 (GPIO2, SDA1), 5 (GPIO3, SCL1)
0xFFFF, // SlaveAddress: placeholder
, // SlaveMode: default to ControllerInitiated
0, // ConnectionSpeed: placeholder
, // Addressing Mode: placeholder
"\\_SB.I2C1", // ResourceSource: I2C bus controller name
,
,
) // VendorData

// Index 4 - GPIO 4 -
GpioIO(Shared, PullUp, , , , "\\_SB.GPI0", , , , ) { 4 }
GpioInt(Edge, ActiveBoth, Shared, PullUp, 0, "\\_SB.GPI0",) { 4 }
// Index 6 - GPIO 5 -
GpioIO(Shared, PullUp, , , , "\\_SB.GPI0", , , , ) { 5 }
GpioInt(Edge, ActiveBoth, Shared, PullUp, 0, "\\_SB.GPI0",) { 5 }
// Index 8 - GPIO 6 -
GpioIO(Shared, PullUp, , , , "\\_SB.GPI0", , , , ) { 6 }
GpioInt(Edge, ActiveBoth, Shared, PullUp, 0, "\\_SB.GPI0",) { 6 }
// Index 10 - GPIO 12 -
GpioIO(Shared, PullDown, , , , "\\_SB.GPI0", , , , ) { 12 }
GpioInt(Edge, ActiveBoth, Shared, PullDown, 0, "\\_SB.GPI0",) { 12 }
// Index 12 - GPIO 13 -
GpioIO(Shared, PullDown, , , , "\\_SB.GPI0", , , , ) { 13 }
GpioInt(Edge, ActiveBoth, Shared, PullDown, 0, "\\_SB.GPI0",) { 13 }
// Index 14 - GPIO 16 -
GpioIO(Shared, PullDown, , , , "\\_SB.GPI0", , , , ) { 16 }
GpioInt(Edge, ActiveBoth, Shared, PullDown, 0, "\\_SB.GPI0",) { 16 }
// Index 16 - GPIO 18 -
GpioIO(Shared, PullDown, , , , "\\_SB.GPI0", , , , ) { 18 }
GpioInt(Edge, ActiveBoth, Shared, PullDown, 0, "\\_SB.GPI0",) { 18 }
// Index 18 - GPIO 22 -
GpioIO(Shared, PullDown, , , , "\\_SB.GPI0", , , , ) { 22 }
GpioInt(Edge, ActiveBoth, Shared, PullDown, 0, "\\_SB.GPI0",) { 22 }
// Index 20 - GPIO 23 -
GpioIO(Shared, PullDown, , , , "\\_SB.GPI0", , , , ) { 23 }
GpioInt(Edge, ActiveBoth, Shared, PullDown, 0, "\\_SB.GPI0",) { 23 }
// Index 22 - GPIO 24 -
GpioIO(Shared, PullDown, , , , "\\_SB.GPI0", , , , ) { 24 }
GpioInt(Edge, ActiveBoth, Shared, PullDown, 0, "\\_SB.GPI0",) { 24 }
// Index 24 - GPIO 25 -
GpioIO(Shared, PullDown, , , , "\\_SB.GPI0", , , , ) { 25 }
GpioInt(Edge, ActiveBoth, Shared, PullDown, 0, "\\_SB.GPI0",) { 25 }
// Index 26 - GPIO 26 -
GpioIO(Shared, PullDown, , , , "\\_SB.GPI0", , , , ) { 26 }
GpioInt(Edge, ActiveBoth, Shared, PullDown, 0, "\\_SB.GPI0",) { 26 }
// Index 28 - GPIO 27 -
GpioIO(Shared, PullDown, , , , "\\_SB.GPI0", , , , ) { 27 }
GpioInt(Edge, ActiveBoth, Shared, PullDown, 0, "\\_SB.GPI0",) { 27 }
// Index 30 - GPIO 35 -
GpioIO(Shared, PullUp, , , , "\\_SB.GPI0", , , , ) { 35 }
GpioInt(Edge, ActiveBoth, Shared, PullUp, 0, "\\_SB.GPI0",) { 35 }
// Index 32 - GPIO 47 -
GpioIO(Shared, PullUp, , , , "\\_SB.GPI0", , , , ) { 47 }
GpioInt(Edge, ActiveBoth, Shared, PullUp, 0, "\\_SB.GPI0",) { 47 }
})

Name(_DSD, Package()
{
ToUUID("daffd814-6eba-4d8c-8a91-bc9bbf4aa301"),
Package()
{
// Reference http://www.raspberrypi.org/documentation/hardware/raspberrypi/spi/README.md
// SPI 0
Package(2) { "bus-SPI-SPI0", Package() { 0, 1 }}, // Index 0 & 1
Package(2) { "SPI0-MinClockInHz", 7629 }, // 7629 Hz
Package(2) { "SPI0-MaxClockInHz", 125000000 }, // 125 MHz
Package(2) { "SPI0-SupportedDataBitLengths", Package() { 8 }}, // Data Bit Length
// SPI 1
Package(2) { "bus-SPI-SPI1", Package() { 2 }}, // Index 2
Package(2) { "SPI1-MinClockInHz", 30518 }, // 30518 Hz
Package(2) { "SPI1-MaxClockInHz", 125000000 }, // 125 MHz
Package(2) { "SPI1-SupportedDataBitLengths", Package() { 8 }}, // Data Bit Length
// I2C1
Package(2) { "bus-I2C-I2C1", Package() { 3 }},
// GPIO Pin Count and supported drive modes
Package (2) { "GPIO-PinCount", 54 },
Package (2) { "GPIO-UseDescriptorPinNumbers", 1 },
Package (2) { "GPIO-SupportedDriveModes", 0xf }, // InputHighImpedance, InputPullUp, InputPullDown, OutputCmos
}
})
}
}
}

Appendix B - MinnowBoardMax ASL Listing


Header pinout: https://developer.microsoft.com/windows/iot/samples/PinMappingsMBM

DefinitionBlock ("ACPITABL.dat", "SSDT", 1, "MSFT", "RHPROXY", 1)
{
Scope (\_SB)
{
Device(RHPX)
{
Name(_HID, "MSFT8000")
Name(_CID, "MSFT8000")
Name(_UID, 1)

Name(_CRS, ResourceTemplate()
{
// Index 0
SPISerialBus( // Pin 5, 7, 9, 11 of JP1, for SIO_SPI
1, // Device selection
PolarityLow, // Device selection polarity
FourWireMode, // wiremode
8, // databit len
ControllerInitiated, // slave mode
8000000, // Connection speed
ClockPolarityLow, // Clock polarity
ClockPhaseSecond, // clock phase
"\\_SB.SPI1", // ResourceSource: SPI bus controller name
0, // ResourceSourceIndex
ResourceConsumer, // Resource usage
JSPI, // DescriptorName: creates name for offset of resource descriptor
) // Vendor Data

// Index 1
I2CSerialBus( // Pin 13, 15 of JP1, for SIO_I2C5 (signal)
0xFF, // SlaveAddress: bus address
, // SlaveMode: default to ControllerInitiated
400000, // ConnectionSpeed: in Hz
, // Addressing Mode: default to 7 bit
"\\_SB.I2C6", // ResourceSource: I2C bus controller name (For MinnowBoard Max, hardware I2C5(0-based) is reported as ACPI
I2C6(1-based))
,
,
JI2C, // Descriptor Name: creates name for offset of resource descriptor
) // VendorData

// Index 2
UARTSerialBus( // Pin 17, 19 of JP1, for SIO_UART2
115200, // InitialBaudRate: in bits per second
, // BitsPerByte: default to 8 bits
, // StopBits: Defaults to one bit
0xfc, // LinesInUse: 8 1-bit flags to declare line enabled
, // IsBigEndian: default to LittleEndian
, // Parity: Defaults to no parity
, // FlowControl: Defaults to no flow control
32, // ReceiveBufferSize
32, // TransmitBufferSize
"\\_SB.URT2", // ResourceSource: UART bus controller name
,
,
,
UAR2, // DescriptorName: creates name for offset of resource descriptor
)

// Index 3
GpioIo (Shared, PullNone, 0, 0, IoRestrictionNone, "\\_SB.GPO2",) {0} // Pin 21 of JP1 (GPIO_S5[00])
// Index 4
GpioInt(Edge, ActiveBoth, SharedAndWake, PullNone, 0,"\\_SB.GPO2",) {0}

// Index 5
GpioIo (Shared, PullNone, 0, 0, IoRestrictionNone, "\\_SB.GPO2",) {1} // Pin 23 of JP1 (GPIO_S5[01])
// Index 6
GpioInt(Edge, ActiveBoth, SharedAndWake, PullNone, 0,"\\_SB.GPO2",) {1}

// Index 7
GpioIo (Shared, PullNone, 0, 0, IoRestrictionNone, "\\_SB.GPO2",) {2} // Pin 25 of JP1 (GPIO_S5[02])
// Index 8
GpioInt(Edge, ActiveBoth, SharedAndWake, PullNone, 0,"\\_SB.GPO2",) {2}

// Index 9
UARTSerialBus( // Pin 6, 8, 10, 12 of JP1, for SIO_UART1
115200, // InitialBaudRate: in bits per second
, // BitsPerByte: default to 8 bits
, // StopBits: Defaults to one bit
0xfc, // LinesInUse: 8 1-bit flags to declare line enabled
, // IsBigEndian: default to LittleEndian
, // Parity: Defaults to no parity
FlowControlHardware, // FlowControl: Defaults to no flow control
32, // ReceiveBufferSize
32, // TransmitBufferSize
"\\_SB.URT1", // ResourceSource: UART bus controller name
,
,
UAR1, // DescriptorName: creates name for offset of resource descriptor
)

// Index 10
GpioIo (Shared, PullNone, 0, 0, IoRestrictionNone, "\\_SB.GPO0",) {62} // Pin 14 of JP1 (GPIO_SC[62])
// Index 11
GpioInt(Edge, ActiveBoth, SharedAndWake, PullNone, 0,"\\_SB.GPO0",) {62}

// Index 12
GpioIo (Shared, PullNone, 0, 0, IoRestrictionNone, "\\_SB.GPO0",) {63} // Pin 16 of JP1 (GPIO_SC[63])
// Index 13
GpioInt(Edge, ActiveBoth, SharedAndWake, PullNone, 0,"\\_SB.GPO0",) {63}

// Index 14
GpioIo (Shared, PullNone, 0, 0, IoRestrictionNone, "\\_SB.GPO0",) {65} // Pin 18 of JP1 (GPIO_SC[65])
// Index 15
GpioInt(Edge, ActiveBoth, SharedAndWake, PullNone, 0,"\\_SB.GPO0",) {65}

// Index 16
GpioIo (Shared, PullNone, 0, 0, IoRestrictionNone, "\\_SB.GPO0",) {64} // Pin 20 of JP1 (GPIO_SC[64])
// Index 17
GpioInt(Edge, ActiveBoth, SharedAndWake, PullNone, 0,"\\_SB.GPO0",) {64}

// Index 18
GpioIo (Shared, PullNone, 0, 0, IoRestrictionNone, "\\_SB.GPO0",) {94} // Pin 22 of JP1 (GPIO_SC[94])
// Index 19
GpioInt(Edge, ActiveBoth, SharedAndWake, PullNone, 0,"\\_SB.GPO0",) {94}

// Index 20
GpioIo (Shared, PullNone, 0, 0, IoRestrictionNone, "\\_SB.GPO0",) {95} // Pin 24 of JP1 (GPIO_SC[95])
// Index 21
GpioInt(Edge, ActiveBoth, SharedAndWake, PullNone, 0,"\\_SB.GPO0",) {95}

// Index 22
GpioIo (Shared, PullNone, 0, 0, IoRestrictionNone, "\\_SB.GPO0",) {54} // Pin 26 of JP1 (GPIO_SC[54])
// Index 23
GpioInt(Edge, ActiveBoth, SharedAndWake, PullNone, 0,"\\_SB.GPO0",) {54}
})

Name(_DSD, Package()
{
ToUUID("daffd814-6eba-4d8c-8a91-bc9bbf4aa301"),
Package()
{
// SPI Mapping
Package(2) { "bus-SPI-SPI0", Package() { 0 }},

Package(2) { "SPI0-MinClockInHz", 100000 },


Package(2) { "SPI0-MaxClockInHz", 15000000 },
// SupportedDataBitLengths takes a list of support data bit length
// Example : Package(2) { "SPI0-SupportedDataBitLengths", Package() { 8, 7, 16 }},
Package(2) { "SPI0-SupportedDataBitLengths", Package() { 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27,
28, 29, 30, 31, 32 }},
// I2C Mapping
Package(2) { "bus-I2C-I2C5", Package() { 1 }},
// UART Mapping
Package(2) { "bus-UART-UART2", Package() { 2 }},
Package(2) { "bus-UART-UART1", Package() { 9 }},
}
})
}
}
}

Appendix C - Sample PowerShell script to generate GPIO resources


The following script can be used to generate the GPIO resource declarations for Raspberry Pi:

$pins = @(
@{PinNumber=4;PullConfig='PullUp'},
@{PinNumber=5;PullConfig='PullUp'},
@{PinNumber=6;PullConfig='PullUp'},
@{PinNumber=12;PullConfig='PullDown'},
@{PinNumber=13;PullConfig='PullDown'},
@{PinNumber=16;PullConfig='PullDown'},
@{PinNumber=18;PullConfig='PullDown'},
@{PinNumber=22;PullConfig='PullDown'},
@{PinNumber=23;PullConfig='PullDown'},
@{PinNumber=24;PullConfig='PullDown'},
@{PinNumber=25;PullConfig='PullDown'},
@{PinNumber=26;PullConfig='PullDown'},
@{PinNumber=27;PullConfig='PullDown'},
@{PinNumber=35;PullConfig='PullUp'},
@{PinNumber=47;PullConfig='PullUp'})

# generate the resources


$FIRST_RESOURCE_INDEX = 4
$resourceIndex = $FIRST_RESOURCE_INDEX
$pins | % {
$a = @"
// Index $resourceIndex - GPIO $($_.PinNumber)
GpioIO(Shared, $($_.PullConfig), , , , "\\_SB.GPI0", , , , ) { $($_.PinNumber) }
GpioInt(Edge, ActiveBoth, Shared, $($_.PullConfig), 0, "\\_SB.GPI0",) { $($_.PinNumber) }
"@
Write-Host $a
$resourceIndex += 2;
}
Enumerate devices

[ Updated for UWP apps on Windows 10. For Windows 8.x articles, see the archive ]

Samples
The simplest way to enumerate all available devices is to take a snapshot with the FindAllAsync command
(explained further in a section below).

async void enumerateSnapshot()
{
    DeviceInformationCollection collection = await DeviceInformation.FindAllAsync();
}

To download a sample showing the more advanced usages of the Windows.Devices.Enumeration APIs, see the
Windows universal samples repository on GitHub.

Enumeration APIs
The enumeration namespace enables you to find devices that are internally connected to the system, externally
connected, or detectable over wireless or networking protocols. The APIs that you use to enumerate the possible
devices are in the Windows.Devices.Enumeration namespace. Some reasons for using these APIs include
the following.
Finding a device to connect to with your application.
Getting information about devices connected to or discoverable by the system.
Have an app receive notifications when devices are added, connect, disconnect, change online status, or change
other properties.
Have an app receive background triggers when devices connect, disconnect, change online status, or change
other properties.
These APIs can enumerate devices over any of the following protocols and buses, provided the individual device
and the system running the app support that technology. This is not an exhaustive list, and other protocols may be
supported by a specific device.
Physically connected buses. This includes PCI and USB. For example, anything that you can see in the Device
Manager.
UPnP
Digital Living Network Alliance (DLNA)
Discovery and Launch (DIAL)
DNS Service Discovery (DNS-SD)
Web Services on Devices (WSD)
Bluetooth
Wi-Fi Direct
WiGig
Point of Service
In many cases, you will not need to worry about using the enumeration APIs. This is because many APIs that use
devices will automatically select the appropriate default device or provide a more streamlined enumeration API.
For example, MediaElement will automatically use the default audio renderer device. As long as your app can use
the default device, there is no need to use the enumeration APIs in your application. The enumeration APIs provide
a general and flexible way for you to discover and connect to available devices. This topic provides information
about enumerating devices and describes the four common ways to enumerate devices.
Using the DevicePicker UI
Enumerating a snapshot of devices currently discoverable by the system
Enumerating devices currently discoverable and watch for changes
Enumerating devices currently discoverable and watch for changes in a background task

DeviceInformation objects
Working with the enumeration APIs, you will frequently need to use DeviceInformation objects. These objects
contain most of the available information about the device. The following table explains some of the
DeviceInformation properties you will be interested in. For a complete list, see the reference page for
DeviceInformation.

PROPERTY                       COMMENTS

DeviceInformation.Id           This is the unique identifier of the device and is provided as a string variable.
                               In most cases, this is an opaque value you will just pass from one method to another
                               to indicate the specific device you are interested in. You can also use this property
                               and the DeviceInformation.Kind property after closing down your app and reopening it.
                               This will ensure that you can recover and reuse the same DeviceInformation object.

DeviceInformation.Kind         This indicates the kind of device object represented by the DeviceInformation object.
                               This is not the device category or type of device. A single device can be represented
                               by several different DeviceInformation objects of different kinds. The possible values
                               for this property are listed in DeviceInformationKind, as well as how they relate to
                               one another.

DeviceInformation.Properties   This property bag contains information that is requested for the DeviceInformation
                               object. The most common properties are easily referenced as properties of the
                               DeviceInformation object, such as with DeviceInformation.Name. For more information,
                               see Device information properties.

DevicePicker UI
The DevicePicker is a control provided by Windows that creates a small UI that enables the user to select a device
from a list. You can customize the DevicePicker window in a few ways.
You can control the devices that are displayed in the UI by adding one or more device selectors to
SupportedDeviceSelectors, one or more device classes to SupportedDeviceClasses, or both, on the
DevicePicker.Filter. In most cases, you only need to add one selector or class, but if you do need more than one
you can add multiple. If you do add multiple selectors or classes, they are combined using OR logic.
You can specify the properties you want to retrieve for the devices. You can do this by adding properties to
DevicePicker.RequestedProperties.
You can alter the appearance of the DevicePicker using Appearance.
You can specify the size and location of the DevicePicker when it is displayed.
While the DevicePicker is displayed, the contents of the UI will be automatically updated if devices are added,
removed, or updated.
Note You cannot specify the DeviceInformationKind using the DevicePicker. If you want to have devices of a
specific DeviceInformationKind, you will need to build a DeviceWatcher and provide your own UI.
Casting media content and DIAL also each provide their own pickers if you want to use them. They are
CastingDevicePicker and DialDevicePicker, respectively.
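As a rough sketch, showing the picker filtered to serial devices might look like the following; the selector and the pick rectangle are example choices.

using System.Threading.Tasks;
using Windows.Devices.Enumeration;
using Windows.Devices.SerialCommunication;
using Windows.Foundation;

public static async Task<DeviceInformation> PickSerialDeviceAsync()
{
    var picker = new DevicePicker();

    // Limit the list to devices matching this AQS selector.
    picker.Filter.SupportedDeviceSelectors.Add(SerialDevice.GetDeviceSelector());

    // Show the picker anchored to the given rectangle; returns null if dismissed.
    return await picker.PickSingleDeviceAsync(new Rect(0, 0, 300, 400));
}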

Enumerate a snapshot of devices


In some scenarios, the DevicePicker will not be suitable for your needs and you need something more flexible.
Perhaps you want to build your own UI or need to enumerate devices without displaying a UI to the user. In these
situations, you could enumerate a snapshot of devices. This involves looking through the devices that are currently
connected to or paired with the system. However, you need to be aware that this method only looks at a snapshot
of devices that are available, so you will not be able to find devices that connect after you enumerate through the
list. You also will not be notified if a device is updated or removed. Another potential downside to be aware of is
that this method will hold back any results until the entire enumeration is completed. For this reason, you should
not use this method when you are interested in AssociationEndpoint, AssociationEndpointContainer, or
AssociationEndpointService objects since they are found over a network or wireless protocol. This can take up
to 30 seconds to complete. In that scenario, you should use a DeviceWatcher object to enumerate through the
possible devices.
To enumerate through a snapshot of devices, use the FindAllAsync method. This method waits until the entire
enumeration process is complete and returns all the results as one DeviceInformationCollection object. This
method is also overloaded to provide you with several options for filtering your results and limiting them to the
devices that you are interested in. You can do this by providing a DeviceClass or passing in a device selector. The
device selector is an AQS string that specifies the devices you want to enumerate. For more information, see Build
a device selector.
An example of a device enumeration snapshot is provided below:
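A sketch of both overloads follows; the requested property key is one example of many, and the selector would come from a device API or a hand-built AQS string.

using System.Collections.Generic;
using System.Threading.Tasks;
using Windows.Devices.Enumeration;

public static async Task EnumerateSnapshotsAsync(string selector)
{
    // Snapshot limited by device class.
    DeviceInformationCollection audioDevices =
        await DeviceInformation.FindAllAsync(DeviceClass.AudioCapture);

    // Snapshot limited by an AQS selector, also requesting an extra property
    // that will appear in each result's property bag.
    var requestedProperties = new List<string> { "System.Devices.InterfaceClassGuid" };
    DeviceInformationCollection devices =
        await DeviceInformation.FindAllAsync(selector, requestedProperties);
}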
In addition to limiting the results, you can also specify the properties that you want to retrieve for the devices. If
you do, the specified properties will be available in the property bag for each of the DeviceInformation objects
returned in the collection. It is important to note that not all properties are available for all device kinds. To see
what properties are available for which device kinds, see Device information properties.

Enumerate and watch devices


A more powerful and flexible method of enumerating devices is creating a DeviceWatcher. This option provides
the most flexibility when you are enumerating devices. It allows you to enumerate devices that are currently
present, and also to receive notifications when devices that match your device selector are added or removed, or
when their properties change. When you create a DeviceWatcher, you provide a device selector. For more information about
device selectors, see Build a device selector. After creating the watcher, you will receive the following notifications
for any device that matches your provided criteria.
Add notification when a new device is added.
Update notification when a property you are interested in is changed.
Remove notification when a device is no longer available or no longer matches your filter.
In most cases where you are using a DeviceWatcher, you are maintaining a list of devices and adding to it,
removing items from it, or updating items as your watcher receives updates from the devices that you are
watching. When you receive an update notification, the updated information will be available as a
DeviceInformationUpdate object. In order to update your list of devices, first find the appropriate
DeviceInformation that changed. Then call the Update method for that object, providing the
DeviceInformationUpdate object. This is a convenience function that will automatically update your
DeviceInformation object.
Since a DeviceWatcher sends notifications as devices arrive and when they change, you should use this method
of enumerating devices when you are interested in AssociationEndpoint, AssociationEndpointContainer, or
AssociationEndpointService objects since they are enumerated over networking or wireless protocols.
To create a DeviceWatcher, use one of the CreateWatcher methods. These methods are overloaded to enable
you to specify the devices that you are interested in. You can do this by providing a DeviceClass or passing in a
device selector. The device selector is an AQS string that specifies the devices you want to enumerate. For more
information, see Build a device selector. You can also specify the properties that you want to retrieve for the
devices. If you do, the specified properties will be available in the property bag for each of the
DeviceInformation objects returned in the collection. It is important to note that not all properties are available
for all device kinds. To see what properties are available for which device kinds, see Device information properties.
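A minimal watcher sketch follows. The selector would come from a device API; the handlers maintain a plain in-memory list, and a real app would marshal changes to its UI thread.

using System.Collections.Generic;
using System.Linq;
using Windows.Devices.Enumeration;

public static DeviceWatcher WatchDevices(string selector, List<DeviceInformation> results)
{
    DeviceWatcher watcher = DeviceInformation.CreateWatcher(selector);

    watcher.Added += (w, info) => results.Add(info);

    watcher.Updated += (w, update) =>
    {
        // Find the matching DeviceInformation and apply the update to it.
        DeviceInformation match = results.FirstOrDefault(d => d.Id == update.Id);
        match?.Update(update);
    };

    watcher.Removed += (w, update) => results.RemoveAll(d => d.Id == update.Id);

    watcher.EnumerationCompleted += (w, o) =>
    {
        // The initial scan is complete; subsequent events are live changes.
    };

    watcher.Start();
    return watcher;
}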

Watch devices as a background task


Watching devices as a background task is very similar to creating a DeviceWatcher as described above. In fact,
you will still need to create a normal DeviceWatcher object first as described in the previous section. Once you
create it, you call GetBackgroundTrigger instead of DeviceWatcher.Start. When you call
GetBackgroundTrigger, you must specify which of the notifications you are interested in: add, remove, or
update. You cannot request update or remove without requesting add as well. Once you register the trigger, the
DeviceWatcher will start running immediately in the background. From this point forward, whenever it receives a
new notification for your application that matches your criteria, the background task will trigger and it will provide
you the latest changes since it last triggered your application.
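A sketch of the registration follows; the task name and entry point are hypothetical and must match a background task class declared in your app manifest.

using Windows.ApplicationModel.Background;
using Windows.Devices.Enumeration;

public static BackgroundTaskRegistration RegisterDeviceWatcherTask(string selector)
{
    DeviceWatcher watcher = DeviceInformation.CreateWatcher(selector);

    // Request the event kinds of interest; add cannot be omitted.
    DeviceWatcherTrigger trigger = watcher.GetBackgroundTrigger(new[]
    {
        DeviceWatcherEventKind.Add,
        DeviceWatcherEventKind.Update,
        DeviceWatcherEventKind.Remove
    });

    var builder = new BackgroundTaskBuilder
    {
        Name = "MyDeviceWatcherTask",                  // hypothetical name
        TaskEntryPoint = "MyTasks.DeviceWatcherTask"   // hypothetical entry point
    };
    builder.SetTrigger(trigger);
    return builder.Register();
}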
Important The first time that a DeviceWatcherTrigger triggers your application will be when the watcher
reaches the EnumerationCompleted state. This means it will contain all of the initial results. Any future times it
triggers your application, it will only contain the add, update, and remove notifications that have occurred since
the last trigger. This is slightly different from a foreground DeviceWatcher object because the initial results do
not come in one at a time and are only delivered in a bundle after the EnumerationCompleted is reached.
Some wireless protocols behave differently if they are scanning in the background versus the foreground, or they
may not support scanning in the background at all. There are three possibilities with relation to background
scanning. The following table lists the possibilities and the effects this may have on your application. For example,
Bluetooth and Wi-Fi Direct do not support background scans, so by extension, they do not support a
DeviceWatcherTrigger.

BEHAVIOR                                      IMPACT

Same behavior in background                   None

Only passive scans possible in background     Device may take longer to discover while waiting for a
                                              passive scan to occur.

Background scans not supported                No devices will be detectable by the DeviceWatcherTrigger,
                                              and no updates will be reported.

If your DeviceWatcherTrigger includes a protocol that does not support scanning in the background, your
trigger will still work. However, you will not be able to get any updates or results over that protocol. Updates
for other protocols or devices will still be detected normally.

Using DeviceInformationKind
In most scenarios, you will not need to worry about the DeviceInformationKind of a DeviceInformation object.
This is because the device selector returned by the device API you're using will often guarantee you are getting the
correct kinds of device objects to use with their API. However, in some scenarios you will want to get the
DeviceInformation for devices, but there is not a corresponding device API to provide a device selector. In these
cases you will need to build your own selector. For example, Web Services on Devices does not have a dedicated
API, but you can discover those devices and get information about them using the
Windows.Devices.Enumeration APIs, and then communicate with them using the socket APIs.
If you are building your own device selector to enumerate through device objects, DeviceInformationKind will
be important for you to understand. All of the possible kinds, as well as how they relate to one another, are
described on the reference page for DeviceInformationKind. One of the most common uses of
DeviceInformationKind is to specify what kind of devices you are searching for when submitting a query in
conjunction with a device selector. By doing this, it makes sure that you only enumerate over devices that match
the provided DeviceInformationKind. For example, you could find a DeviceInterface object and then run a
query to get the information for the parent Device object. That parent object may contain additional information.
It is important to note that the properties available in the property bag for a DeviceInformation object will vary
depending on the DeviceInformationKind of the device. Certain properties are only available with certain kinds.
For more information about which properties are available for which kinds, see Device information properties.
Hence, in the above example, searching for the parent Device will give you access to more information that was
not available from the DeviceInterface device object. Because of this, when you create your AQS filter strings, it is
important to ensure that the requested properties are available for the DeviceInformationKind objects you are
enumerating. For more information about building a filter, see Build a device selector.
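As a sketch of the parent-Device lookup described above, assuming deviceInterface is a DeviceInformation of
kind DeviceInterface that was created with System.Devices.DeviceInstanceId in its requested properties:

using Windows.Devices.Enumeration;

// The DeviceInterface property bag carries the identity of its parent Device.
string parentId = (string)deviceInterface.Properties["System.Devices.DeviceInstanceId"];

// Query for the parent Device object, which may carry additional information.
DeviceInformation parentDevice = await DeviceInformation.CreateFromIdAsync(
    parentId, null, DeviceInformationKind.Device);
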
When enumerating AssociationEndpoint, AssociationEndpointContainer, or AssociationEndpointService
objects, you are enumerating over a wireless or network protocol. In these situations, we recommend that you
don't use FindAllAsync and instead use CreateWatcher. This is because searching over a network often results
in search operations that take 10 or more seconds before generating EnumerationCompleted.
FindAllAsync doesn't complete its operation until EnumerationCompleted is triggered. If you are using a
DeviceWatcher, you'll get results closer to real time regardless of when EnumerationCompleted is called.
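
A minimal sketch of that recommendation, watching for UPnP AssociationEndpoint objects (the protocol ID is
listed in Enumerate devices over a network; the handler bodies are illustrative):

using Windows.Devices.Enumeration;

string selector = "System.Devices.Aep.ProtocolId:=\"{0e261de4-12f0-46e6-91ba-428607ccef64}\"";

DeviceWatcher watcher = DeviceInformation.CreateWatcher(
    selector,
    null,  // no additional properties requested
    DeviceInformationKind.AssociationEndpoint);

watcher.Added += (s, deviceInfo) =>
{
    // Results arrive here close to real time, well before EnumerationCompleted.
};
watcher.EnumerationCompleted += (s, o) =>
{
    // The initial network search is done; the watcher keeps listening for changes.
};
watcher.Start();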

Save a device for later use


Any DeviceInformation object is uniquely identified by a combination of two pieces of information:
DeviceInformation.Id and DeviceInformation.Kind. If you keep these two pieces of information, you can
recreate the DeviceInformation object after it is lost by supplying this information to CreateFromIdAsync. If you
do this, you can save user preferences for a device that integrates with your app.
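
A minimal sketch of saving and restoring a device this way, assuming device is a DeviceInformation you already
hold; the LocalSettings keys are illustrative:

using Windows.Devices.Enumeration;
using Windows.Storage;

// Save the two identifying pieces of information.
ApplicationDataContainer settings = ApplicationData.Current.LocalSettings;
settings.Values["preferredDeviceId"] = device.Id;
settings.Values["preferredDeviceKind"] = (int)device.Kind;

// ...later, recreate the DeviceInformation object.
string id = (string)settings.Values["preferredDeviceId"];
var kind = (DeviceInformationKind)(int)settings.Values["preferredDeviceKind"];
DeviceInformation restored = await DeviceInformation.CreateFromIdAsync(id, null, kind);
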
Build a device selector

[ Updated for UWP apps on Windows 10. For Windows 8.x articles, see the archive ]
Important APIs
Windows.Devices.Enumeration
Building a device selector will enable you to limit the devices you are searching through when enumerating
devices. This will enable you to only get relevant results and will also improve the performance of the system. In
most scenarios you get a device selector from a device stack. For example, you might use GetDeviceSelector for
devices discovered over USB. These device selectors return an Advanced Query Syntax (AQS) string. If you are not
familiar with the AQS format, you can read more at Using Advanced Query Syntax Programmatically.

Building the filter string


There are some cases where you need to enumerate devices and a provided device selector is not available for
your scenario. A device selector is simply an AQS filter string. Before creating a filter string, you need to know
some key pieces of information about the devices you want to enumerate:
The DeviceInformationKind of the devices you are interested in. For more information about how
DeviceInformationKind impacts enumerating devices, see Enumerate devices.
How to build an AQS filter string, which is explained in this topic.
The properties you are interested in. The available properties will depend upon the DeviceInformationKind.
See Device information properties for more information.
The protocols you are querying over. This is only needed if you are searching for devices over a wireless or
wired network. For more information about doing this, see Enumerate devices over a network.
When using the Windows.Devices.Enumeration APIs, you frequently combine the device selector with the
device kind that you are interested in. The available list of device kinds is defined by the DeviceInformationKind
enumeration. This combination of factors helps you to limit the devices that are available to the ones that you are
interested in. If you do not specify the DeviceInformationKind, or the method you are using does not provide a
DeviceInformationKind parameter, the default kind is DeviceInterface.
The Windows.Devices.Enumeration APIs use canonical AQS syntax, but not all of the operators are supported.
For a list of properties that are available when you are constructing your filter string, see Device information
properties.
Caution Custom properties that are defined using the {GUID} PID format cannot be used when constructing your
AQS filter string. This is because the property type is derived from the well-known property name.
The following table lists the AQS operators and what types of parameters they support.

OPERATOR | SUPPORTED TYPES
COP_EQUAL | String, boolean, GUID, UInt16, UInt32
COP_NOTEQUAL | String, boolean, GUID, UInt16, UInt32
COP_LESSTHAN | UInt16, UInt32
COP_GREATERTHAN | UInt16, UInt32
COP_LESSTHANOREQUAL | UInt16, UInt32
COP_GREATERTHANOREQUAL | UInt16, UInt32
COP_VALUE_CONTAINS | String, string array, boolean array, GUID array, UInt16 array, UInt32 array
COP_VALUE_NOTCONTAINS | String, string array, boolean array, GUID array, UInt16 array, UInt32 array
COP_VALUE_STARTSWITH | String
COP_VALUE_ENDSWITH | String
COP_DOSWILDCARDS | Not supported
COP_WORD_EQUAL | Not supported
COP_WORD_STARTSWITH | Not supported
COP_APPLICATION_SPECIFIC | Not supported

Tip You can specify NULL for COP_EQUAL or COP_NOTEQUAL. This translates to a property with no value,
or that the value does not exist. In AQS, you specify NULL by using empty brackets [].
Important When using the COP_VALUE_CONTAINS and COP_VALUE_NOTCONTAINS operators, they
behave differently with strings and string arrays. In the case of a string, the system will perform a case-
insensitive search to see if the device contains the indicated string as a substring. In the case of a string array,
substrings are not searched. With the string array, the array is searched to see if it contains the entire specified
string. It is not possible to search a string array to see if the elements in the array contain a substring.

If you cannot create a single AQS filter string that will scope your results appropriately, you can filter your results
after you receive them. However, if you choose to do this, we recommend limiting the results from your initial
AQS filter string as much as possible when you provide it to the Windows.Devices.Enumeration APIs. This will
help improve the performance of your application.
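
For example, a custom filter string can be passed to FindAllAsync together with an explicit kind; this sketch
reuses the audio-capture filter from the first example below:

using Windows.Devices.Enumeration;

string selector =
    "System.Devices.InterfaceClassGuid:=\"{2eef81be-33fa-4800-9670-1cd474972c3f}\"" +
    " AND System.Devices.InterfaceEnabled:=System.StructuredQueryType.Boolean#True";

DeviceInformationCollection devices = await DeviceInformation.FindAllAsync(
    selector,
    null,                                   // no extra properties requested
    DeviceInformationKind.DeviceInterface); // the kind the filter was written for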

AQS string examples


The following examples demonstrate how the AQS syntax can be used to limit the devices you want to enumerate.
All of these filter strings are paired up with a DeviceInformationKind to create a complete filter. If no kind is
specified, remember that the default kind is DeviceInterface.
When this filter is paired with a DeviceInformationKind of DeviceInterface, it enumerates all objects that
contain the Audio Capture interface class and that are currently enabled. = translates to COP_EQUAL.

System.Devices.InterfaceClassGuid:="{2eef81be-33fa-4800-9670-1cd474972c3f}" AND
System.Devices.InterfaceEnabled:=System.StructuredQueryType.Boolean#True

When this filter is paired with a DeviceInformationKind of Device, it enumerates all objects that have at least
one hardware id of GenCdRom. ~~ translates to COP_VALUE_CONTAINS.

System.Devices.HardwareIds:~~"GenCdRom"

When this filter is paired with a DeviceInformationKind of DeviceContainer, it enumerates all objects that
have a model name containing the substring Microsoft. ~~ translates to COP_VALUE_CONTAINS.

System.Devices.ModelName:~~"Microsoft"

When this filter is paired with a DeviceInformationKind of DeviceInterface, it enumerates all objects that have
a name starting with the substring Microsoft. ~< translates to COP_VALUE_STARTSWITH.

System.ItemNameDisplay:~<"Microsoft"

When this filter is paired with a DeviceInformationKind of Device, it enumerates all objects that have a
System.Devices.IpAddress property set. <>[] translates to COP_NOTEQUAL combined with a NULL value.

System.Devices.IpAddress:<>[]

When this filter is paired with a DeviceInformationKind of Device, it enumerates all objects that do not have a
System.Devices.IpAddress property set. =[] translates to COP_EQUAL combined with a NULL value.

System.Devices.IpAddress:=[]
Enumerate devices over a network

[ Updated for UWP apps on Windows 10. For Windows 8.x articles, see the archive ]
Important APIs
Windows.Devices.Enumeration
In addition to discovering locally connected devices, you can use the Windows.Devices.Enumeration APIs to
enumerate devices over wireless and networked protocols.

Enumerating devices over networked or wireless protocols


Sometimes you need to enumerate devices that are not locally connected and can only be discovered over a
wireless or networking protocol. In order to do so, the Windows.Devices.Enumeration APIs have three different
kinds of device objects: the AssociationEndpoint (AEP), the AssociationEndpointContainer (AEP Container),
and the AssociationEndpointService (AEP Service). As a group these are referred to as AEPs or AEP objects.
Some device APIs provide a selector string that you can use to enumerate through the available AEP objects. This
could include devices that are paired with the system as well as devices that are not. Some of the devices might not
require pairing at all. Those device APIs may attempt to pair the device, if pairing is necessary, before interacting with it.
Wi-Fi Direct is an example of APIs that follow this pattern. If those device APIs do not automatically pair the device,
you can pair it using the DeviceInformationPairing object available from DeviceInformation.Pairing.
However, there may be cases where you want to manually discover devices on your own without using a pre-
defined selector string. For example, you may just need to gather information about AEP devices without
interacting with them or you may want to find more AEP objects than will be discovered with the pre-defined
selector string. In this case, you will build your own selector string and use it following the instructions under Build
a device selector.
When you build your own selector, it is strongly recommended that you limit your scope of enumeration to the
protocols that you are interested in. For example, you don't want to have the Wi-Fi radio search for Wi-Fi Direct
devices if you are particularly interested in UPnP devices. Windows has defined an identity for each protocol that
you can use to scope your enumeration. The following table lists the protocol types and identifiers.

PROTOCOL OR NETWORK DEVICE TYPE | ID
UPnP (including DIAL and DLNA) | {0e261de4-12f0-46e6-91ba-428607ccef64}
Web services on devices (WSD) | {782232aa-a2f9-4993-971b-aedc551346b0}
Wi-Fi Direct | {0407d24e-53de-4c9a-9ba1-9ced54641188}
DNS service discovery (DNS-SD) | {4526e8c1-8aac-4153-9b16-55e86ada0e54}
Point of service | {d4bf61b3-442e-4ada-882d-fa7b70c832d9}
Network printers (active directory printers) | {37aba761-2124-454c-8d82-c42962c2de2b}
Windows connect now (WNC) | {4c1b1ef8-2f62-4b9f-9bc5-b21ab636138f}
WiGig docks | {a277f3a5-8764-4f88-8045-4c5e962640b1}
Wi-Fi provisioning for HP printers | {c85ef710-f344-4792-bb6d-85a4346f1e69}
Bluetooth | {e0cbf06c-cd8b-4647-bb8a-263b43f0f974}
Bluetooth LE | {bb7bb05e-5972-42b5-94fc-76eaa7084d49}

AQS examples
Each AEP kind has a property you can use to constrain your enumeration to a specific protocol. Keep in mind you
can use the OR operator in an AQS filter to combine multiple protocols. Here are some examples of AQS filter
strings that show how to query for AEP devices.
This AQS queries for all UPnP AssociationEndpoint objects when the DeviceInformationKind is set to
AssociationEndpoint.

System.Devices.Aep.ProtocolId:="{0e261de4-12f0-46e6-91ba-428607ccef64}"

This AQS queries for all UPnP and WSD AssociationEndpoint objects when the DeviceInformationKind is set to
AssociationEndpoint.

System.Devices.Aep.ProtocolId:="{782232aa-a2f9-4993-971b-aedc551346b0}" OR
System.Devices.Aep.ProtocolId:="{0e261de4-12f0-46e6-91ba-428607ccef64}"

This AQS queries for all UPnP AssociationEndpointService objects if the DeviceInformationKind is set to
AssociationEndpointService.

System.Devices.AepService.ProtocolId:="{0e261de4-12f0-46e6-91ba-428607ccef64}"

This AQS queries AssociationEndpointContainer objects when the DeviceInformationKind is set to
AssociationEndpointContainer, but only finds them by enumerating the UPnP protocol. Typically, it wouldn't be
useful to enumerate containers that only come from one protocol. However, limiting your filter to the protocols
over which you know your device can be discovered might be useful.

System.Devices.AepContainer.ProtocolIds:~~"{0e261de4-12f0-46e6-91ba-428607ccef64}"
Device information properties

[ Updated for UWP apps on Windows 10. For Windows 8.x articles, see the archive ]
Important APIs
Windows.Devices.Enumeration
Each device has associated DeviceInformation properties that you can use when you need specific information
or when you are building a device selector. These properties can be specified in an AQS filter to limit the devices
that you are enumerating over, in order to find the devices with the specified traits. You can also use these
properties to indicate what information you want returned for each device. That enables you to specify the device
information that is returned to your application.
For more information about using DeviceInformation properties in your device selector, see Build a device
selector. This topic goes into how to request information properties and also lists some common properties and
their purpose.
A DeviceInformation object is composed of an identity (DeviceInformation.Id), a kind
(DeviceInformation.Kind), and a property bag (DeviceInformation.Properties). All of the other properties of
a DeviceInformation object are derived from the Properties property bag. For example, Name is derived from
System.ItemNameDisplay. This means that the property bag always contains the information necessary to
determine the other properties.

Requesting properties
A DeviceInformation object has some basic properties, such as Id and Kind, but most of the properties are
stored in a property bag under Properties, and the object's derived properties are sourced from that bag. For
example, the Name property is sourced from System.ItemNameDisplay. This is a case of a common and
well-known property that has a user-friendly name. Windows provides
several of these user-friendly names to make querying for properties easier.
When you are requesting properties, you are not limited to the common properties with user-friendly names. You
can specify the underlying GUID and property ID (PID) to request any property that is available, even custom
properties that are supplied by an individual device or driver. The format for specifying a custom property is "
{GUID} PID ". For example: " {744e3bed-3684-4e16-9f8a-07953a8bf2ab} 7 "
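
A small sketch of requesting both a well-known property and a custom "{GUID} PID" property (the custom
property shown is the example above; whether a given device supplies it depends on its driver):

using Windows.Devices.Enumeration;

string customProperty = "{744e3bed-3684-4e16-9f8a-07953a8bf2ab} 7";
string[] requestedProperties = { "System.ItemNameDisplay", customProperty };

DeviceInformationCollection devices =
    await DeviceInformation.FindAllAsync("", requestedProperties);

foreach (DeviceInformation device in devices)
{
    // Requested properties appear in the property bag; the value may be null
    // if the device or driver does not supply them.
    if (device.Properties.TryGetValue(customProperty, out object value) && value != null)
    {
        // Use the custom property value.
    }
}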

Some properties are common across all DeviceInformationKind objects, but most are unique to a specific kind.
The following sections list some common properties sorted by the individual DeviceInformationKind. For more
information about how the different kinds relate to one another, see DeviceInformationKind.

DeviceInterface properties
DeviceInterface is the default and most common DeviceInformationKind object used in app scenarios. This is
the kind of object that you should use unless the device API indicates a different specific
DeviceInformationKind.

NAME | TYPE | DESCRIPTION
System.Devices.ContainerId | GUID | Identity of the DeviceInformationKind.DeviceContainer that contains the Device containing this DeviceInterface. You can pass this value to CreateFromIdAsync along with DeviceInformationKind.DeviceContainer to find the appropriate container.
System.Devices.InterfaceClassGuid | GUID | The interface class GUID this interface represents.
System.Devices.DeviceInstanceId | String | Identity of the parent DeviceInformationKind.Device. You can pass this value to CreateFromIdAsync along with DeviceInformationKind.Device to find the appropriate device.
System.Devices.InterfaceEnabled | Boolean | Indicates if the interface is enabled. DeviceInformation.IsEnabled is derived from this property.
System.Devices.GlyphIcon | String | Icon path for the glyph.
System.Devices.IsDefault | Boolean | Indicates whether this is the default device for the System.Devices.InterfaceClassGuid. This is mainly used for printers. This does not work for audio since there are multiple audio defaults. Use GetDefaultAudioRenderId or GetDefaultAudioCaptureId to get audio defaults.
System.Devices.Icon | String | Icon path.
System.ItemNameDisplay | String | The best display name for the device object.

Device properties
NAME | TYPE | DESCRIPTION
System.Devices.ClassGuid | GUID | Device class used during device installation. For more information, see Device Setup Classes.
System.Devices.CompatibleIds | String[] | The compatible ids of the device. These are used when Windows is determining the best driver to install on the device. For more information, see Compatible ID.
System.Devices.ContainerId | GUID | Identity of the DeviceInformationKind.DeviceContainer that includes this device. You can pass this value to CreateFromIdAsync along with DeviceInformationKind.DeviceContainer to find the appropriate container.
System.Devices.DeviceCapabilities | UInt32 | A bitwise-OR of the CM_DEVCAP_X capabilities flags that are defined in CfgMgr32.h. For more information, see DEVPKEY_Device_Capabilities.
System.Devices.DeviceHasProblem | Boolean | The device currently has a problem and is likely not functioning correctly. This could be due to an outdated, missing, or invalid driver.
System.Devices.DeviceInstanceId | String | The identity of the device. This is also the value of DeviceInformation.Id.
System.Devices.DeviceManufacturer | String | The manufacturer of the device.
System.Devices.HardwareIds | String[] | The hardware ids of the device. Windows uses these ids when determining the best driver to install. Device vendors can use this property to identify their device from their app. For more information, see Hardware ID.
System.Devices.Parent | String | The DeviceInformation.Id of the parent device. This is the connection parent, not the DeviceContainer parent.
System.Devices.Present | Boolean | Indicates whether the device is currently present and available.
System.ItemNameDisplay | String | The best display name for this device object. In this case, this is not necessarily the best name for users. A more likely candidate for a user-friendly name could be found by referencing the System.ItemNameDisplay of the associated DeviceContainer or DeviceInterface.

DeviceContainer properties
NAME | TYPE | DESCRIPTION
System.Devices.Category | String[] | A list of descriptions of the categories the device belongs to. This list is provided as singular categories. For example, "Display", "Phone", or "Audio device".
System.Devices.CategoryIds | String[] | Contains a list of categories this device belongs to. For example, Audio.Headphone, Display.Monitor, or Input.Gaming.
System.Devices.CategoryPlural | String[] | A list of descriptions of the categories the device belongs to. This list is provided as plural categories. For example, "Displays", "Phones", or "Audio devices".
System.Devices.CompatibleIds | String[] | The collection of compatible ids for all the child DeviceInformationKind.Device objects.
System.Devices.Connected | Boolean | Indicates whether the device is currently connected to the system or not.
System.Devices.GlyphIcon | String | Icon path for the glyph.
System.Devices.HardwareIds | String[] | The collection of hardware ids for all the child DeviceInformationKind.Device objects.
System.Devices.Icon | String | Icon path.
System.Devices.LocalMachine | Boolean | True if this DeviceContainer represents the system itself, false if the device is external to the system.
System.Devices.Manufacturer | String | The manufacturer of the device.
System.Devices.ModelName | String | Model name of the device container.
System.Devices.Paired | Boolean | Indicates whether any of the child DeviceInformationKind.Device objects are wireless or network devices that are currently paired with the system.
System.ItemNameDisplay | String | The best display name for this device.

DeviceInterfaceClass properties
NAME | TYPE | DESCRIPTION
System.ItemNameDisplay | String | The best display name for this device.

AssociationEndpoint properties
NAME | TYPE | DESCRIPTION
System.Devices.Aep.AepId | String | Identity of this device. This is also the value of DeviceInformation.Id.
System.Devices.Aep.CanPair | Boolean | Indicates if the device can be paired with the system or not. DeviceInformationPairing.CanPair is derived from this property.
System.Devices.Aep.Category | String[] | The categories that the device is part of. For example, printer or camera.
System.Devices.Aep.ContainerId | GUID | The id of the parent AssociationEndpointContainer object.
System.Devices.Aep.DeviceAddress | String | The address of the device. If the device is a network device, this is the IP address.
System.Devices.Aep.IsConnected | Boolean | Indicates if the device is currently connected to the system.
System.Devices.Aep.IsPaired | Boolean | Indicates if the device is currently paired. DeviceInformationPairing.IsPaired is derived from this property.
System.Devices.Aep.IsPresent | Boolean | Indicates if the device is currently present, meaning the device is live and discovered over the network or wireless protocol. Once a device has been paired with the system, the device is cached. After this, the device will be automatically discovered when querying for AssociationEndpoint objects. Because of this, you cannot rely on just discovering the device with a query to indicate whether or not it is currently usable. That is why this property is important.
System.Devices.Aep.Manufacturer | String | The manufacturer of the device.
System.Devices.Aep.ModelId | GUID | The model id of the device.
System.Devices.Aep.ModelName | String | The model name of the device.
System.Devices.Aep.ProtocolId | GUID | Indicates the protocol used to discover this AssociationEndpoint device.
System.Devices.Aep.SignalStrength | Int32 | The signal strength of the device. This property is only applicable for some protocols.
System.ItemNameDisplay | String | The best display name for the device.

AssociationEndpointContainer properties
NAME | TYPE | DESCRIPTION
System.Devices.AepContainer.Categories | String[] | The categories that the device is part of. For example, printer or camera.
System.Devices.AepContainer.Children | String[] | The collection of ids for the AssociationEndpoint objects that are part of this container.
System.Devices.AepContainer.CanPair | Boolean | Indicates if one of the child AssociationEndpoint devices can be paired with the system or not. DeviceInformationPairing.CanPair is derived from this property.
System.Devices.AepContainer.ContainerId | GUID | Identity of this device. This is also the value of DeviceInformation.Id, but in GUID form.
System.Devices.AepContainer.IsPaired | Boolean | Indicates if one of the child AssociationEndpoint devices is currently paired. DeviceInformationPairing.IsPaired is derived from this property.
System.Devices.AepContainer.IsPresent | Boolean | Indicates if one of the child AssociationEndpoint devices is currently present, meaning the device is live and discovered over the network or wireless protocol. Once a device has been paired with the system, the device is cached. After this, the device will be automatically discovered when querying for AssociationEndpoint objects. Because of this, you cannot rely on just discovering the device with a query to indicate whether or not it is currently usable. That is why this property is important.
System.Devices.AepContainer.Manufacturer | String | The manufacturer of the device.
System.Devices.AepContainer.ModelIds | String[] | A list of model ids for the device. Each model id is a GUID in string form.
System.Devices.AepContainer.ModelName | String | The model name of the device.
System.Devices.AepContainer.ProtocolIds | GUID[] | A list of protocol ids that have contributed to building this AssociationEndpointContainer object. Keep in mind that an AssociationEndpointContainer device is created by collecting all AssociationEndpoint devices discovered over different protocols for the same physical device.
System.Devices.AepContainer.SupportedUriSchemes | String[] | List of casting URI schemes supported by this device.
System.Devices.AepContainer.SupportsAudio | Boolean | Indicates if this device supports audio casting.
System.Devices.AepContainer.SupportsImages | Boolean | Indicates if this device supports image casting.
System.Devices.AepContainer.SupportsVideo | Boolean | Indicates if this device supports video casting.
System.ItemNameDisplay | String | The best display name for the device.

AssociationEndpointService properties
NAME | TYPE | DESCRIPTION
System.Devices.AepService.AepId | String | The identifier of the parent AssociationEndpoint object.
System.Devices.AepService.ContainerId | GUID | The identifier of the parent AssociationEndpointContainer object.
System.Devices.AepService.ParentAepIsPaired | Boolean | Indicates whether the parent AssociationEndpoint object is paired with the system.
System.Devices.AepService.ProtocolId | GUID | Identity of the protocol used to discover this device.
System.Devices.AepService.ServiceClassId | GUID | Identity of the service represented by this device.
System.Devices.AepService.ServiceId | String | Identity of this service. This is also the value of DeviceInformation.Id.
System.ItemNameDisplay | String | The best display name for the service.


AEP service class IDs

[ Updated for UWP apps on Windows 10. For Windows 8.x articles, see the archive ]
Important APIs
Windows.Devices.Enumeration
Association Endpoint (AEP) services provide a programming contract for services that a device supports over a
given protocol. Several of these services have established identifiers that should be used when referencing them.
These contracts are identified with the System.Devices.AepService.ServiceClassId property. This topic lists
several well-known AEP service class IDs. The AEP service class ID is also applicable to protocols with custom class
IDs.
An app developer should use advanced query syntax (AQS) filters based on the class IDs to limit their queries to the
AEP services they plan to use. This will both limit the query results to the relevant services and will significantly
increase the performance, battery life, and quality of service for the device. For example, an application can use
these service class IDs to use a device as a Miracast sink or DLNA digital media renderer (DMR). For more
information about how devices and services interact with each other, see DeviceInformationKind.

Bluetooth and Bluetooth LE services


Bluetooth services fall under one of two protocols, either the Bluetooth protocol or the Bluetooth LE protocol. The
identifiers for these protocols are:
Bluetooth protocol ID: {e0cbf06c-cd8b-4647-bb8a-263b43f0f974}
Bluetooth LE protocol ID: {bb7bb05e-5972-42b5-94fc-76eaa7084d49}
The Bluetooth protocol supports several services, all following the same basic format. The first four digits of the
GUID vary based upon the service, but all Bluetooth GUIDs end with 0000-0000-1000-8000-00805F9B34FB. For
example, the RFCOMM service has the precursor of 0x0003, so the full ID would be 00030000-0000-1000-8000-
00805F9B34FB. The following table lists some common Bluetooth services.

SERVICE NAME | GUID
RFCOMM | 00030000-0000-1000-8000-00805F9B34FB
GATT - Alert notification service | 18110000-0000-1000-8000-00805F9B34FB
GATT - Automation IO | 18150000-0000-1000-8000-00805F9B34FB
GATT - Battery service | 180F0000-0000-1000-8000-00805F9B34FB
GATT - Blood pressure | 18100000-0000-1000-8000-00805F9B34FB
GATT - Body composition | 181B0000-0000-1000-8000-00805F9B34FB
GATT - Bond management | 181E0000-0000-1000-8000-00805F9B34FB
GATT - Continuous glucose monitoring | 181F0000-0000-1000-8000-00805F9B34FB
GATT - Current time service | 18050000-0000-1000-8000-00805F9B34FB
GATT - Cycling power | 18180000-0000-1000-8000-00805F9B34FB
GATT - Cycling speed and cadence | 18160000-0000-1000-8000-00805F9B34FB
GATT - Device information | 180A0000-0000-1000-8000-00805F9B34FB
GATT - Environmental sensing | 181A0000-0000-1000-8000-00805F9B34FB
GATT - Generic access | 18000000-0000-1000-8000-00805F9B34FB
GATT - Generic attribute | 18010000-0000-1000-8000-00805F9B34FB
GATT - Glucose | 18080000-0000-1000-8000-00805F9B34FB
GATT - Health thermometer | 18090000-0000-1000-8000-00805F9B34FB
GATT - Heart rate | 180D0000-0000-1000-8000-00805F9B34FB
GATT - Human interface device | 18120000-0000-1000-8000-00805F9B34FB
GATT - Immediate alert | 18020000-0000-1000-8000-00805F9B34FB
GATT - Indoor positioning | 18210000-0000-1000-8000-00805F9B34FB
GATT - Internet protocol support | 18200000-0000-1000-8000-00805F9B34FB
GATT - Link loss | 18030000-0000-1000-8000-00805F9B34FB
GATT - Location and navigation | 18190000-0000-1000-8000-00805F9B34FB
GATT - Next DST change service | 18070000-0000-1000-8000-00805F9B34FB
GATT - Phone alert status service | 180E0000-0000-1000-8000-00805F9B34FB
GATT - Pulse oximeter | 18220000-0000-1000-8000-00805F9B34FB
GATT - Reference time update service | 18060000-0000-1000-8000-00805F9B34FB
GATT - Running speed and cadence | 18140000-0000-1000-8000-00805F9B34FB
GATT - Scan parameters | 18130000-0000-1000-8000-00805F9B34FB
GATT - Tx power | 18040000-0000-1000-8000-00805F9B34FB
GATT - User data | 181C0000-0000-1000-8000-00805F9B34FB
GATT - Weight scale | 181D0000-0000-1000-8000-00805F9B34FB

For a more complete listing of available Bluetooth services, see the Bluetooth SIG's protocol and service
assigned-numbers listings. You can also use the GattServiceUuids API to get some common GATT services.

Custom Bluetooth LE services


Custom Bluetooth LE services use the following protocol identifier: {bb7bb05e-5972-42b5-94fc-76eaa7084d49}
Custom profiles are defined with their own GUIDs. This custom GUID should be used for
System.Devices.AepService.ServiceClassId.

UPnP services
UPnP services use the following protocol identifier: {0e261de4-12f0-46e6-91ba-428607ccef64}
In general, all UPnP services have their name hashed into a GUID using the algorithm defined in RFC 4122. The
following table lists some common UPnP services defined in Windows.

SERVICE NAME | GUID
Connection manager | ba36014c-b51f-51cc-bf71-1ad779ced3c6
AV transport | deeacb78-707a-52df-b1c6-6f945e7e25bf
Rendering control | cc7fe721-a3c7-5a14-8c49-4419dc895513
Layer 3 forwarding | 97d477fa-f403-577b-a714-b29a9007797f
WAN common interface configuration | e4c1c624-c3c4-5104-b72e-ac425d9d157c
WAN IP connection | e4ac1c23-b5ac-5c27-8814-6bd837d8832c
WFA WLAN configuration | 23d5f7db-747f-5099-8f21-3ddfd0c3c688
Printer enhanced | fb9074da-3d9f-5384-922e-9978ae51ef0c
Printer basic | 5d2a7252-d45c-5158-87a4-05212da327e1
Media receiver registrar | 0b4a2add-d725-5198-b2ba-852b8bf8d183
Content directory | 89e701dd-0597-5279-a31c-235991d0db1c
DIAL | 085dfa4a-3948-53c7-a0d7-16d8ec26b29b

WSD services
WSD services use the following protocol identifier: {782232aa-a2f9-4993-971b-aedc551346b0}
In general, all WSD services have their name hashed into a GUID using the algorithm defined in RFC 4122. The
following table lists some common WSD services defined in Windows.

SERVICE NAME | GUID
Printer | 65dca7bd-2611-583e-9a12-ad90f47749cf
Scanner | 56ec8b9e-0237-5cae-aa3f-d322dd2e6c1e
AQS sample
This AQS will filter for all UPnP AssociationEndpointService objects that support DIAL. In this case,
DeviceInformationKind is set to AssociationEndpointService.

System.Devices.AepService.ProtocolId:="{0e261de4-12f0-46e6-91ba-428607ccef64}" AND
System.Devices.AepService.ServiceClassId:="{085DFA4A-3948-53C7-A0D716D8EC26B29B}"
Pair devices

[ Updated for UWP apps on Windows 10. For Windows 8.x articles, see the archive ]
Important APIs
Windows.Devices.Enumeration
Some devices need to be paired before they can be used. The Windows.Devices.Enumeration namespace
supports three different ways to pair devices.
Automatic pairing
Basic pairing
Custom pairing
Tip Some devices do not need to be paired in order to be used. This is covered under the section on automatic
pairing.

Automatic pairing
Sometimes you want to use a device in your application, but do not care whether or not the device is paired. You
simply want to be able to use the functionality associated with a device. For example, if your app wants to simply
capture an image from a webcam, you are not necessarily interested in the device itself, just the image capture. If
there are device APIs available for the device you are interested in, this scenario would fall under automatic pairing.
In this case, you simply use the APIs associated with the device, making the calls as necessary and trusting the
system to handle any pairing that might be necessary. Some devices do not need to be paired in order for you to
use their functionality. If the device does need to be paired, then the device APIs will handle the pairing action
behind the scenes so you do not need to integrate that functionality into your app. Your app will have no
knowledge about whether or not a given device is paired or needs to be, but you will still be able to access the
device and use its functionality.

Basic pairing
Basic pairing is when your application uses the Windows.Devices.Enumeration APIs in order to attempt to pair
the device. In this scenario, you are letting Windows attempt the pairing process and handle it. If any user
interaction is necessary, it will be handled by Windows. You would use basic pairing if you need to pair with a
device and there is not a relevant device API that will attempt automatic pairing. You just want to be able to use the
device and need to pair with it first.
In order to attempt basic pairing, you first need to obtain the DeviceInformation object for the device you are
interested in. Once you receive that object, you will interact with the DeviceInformation.Pairing property, which
is a DeviceInformationPairing object. To attempt to pair, simply call DeviceInformationPairing.PairAsync.
You will need to await the result in order to give your app time to attempt to complete the pairing action. The
result of the pairing action will be returned, and as long as no errors are returned, the device will be paired.
If you are using basic pairing, you also have access to additional information about the pairing status of the device.
For example you know the pairing status (IsPaired) and whether the device can pair (CanPair). Both of these are
properties of the DeviceInformationPairing object. If you are using automatic pairing, you might not have access
to this information unless you obtain the relevant DeviceInformation objects.
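
A minimal sketch of basic pairing, assuming deviceInfo is the DeviceInformation object you obtained:

using Windows.Devices.Enumeration;

DeviceInformationPairing pairing = deviceInfo.Pairing;
if (pairing.CanPair && !pairing.IsPaired)
{
    DevicePairingResult result = await pairing.PairAsync();
    if (result.Status == DevicePairingResultStatus.Paired)
    {
        // The device is now paired and ready to use.
    }
}
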
Custom pairing
Custom pairing enables your app to participate in the pairing process. This allows your app to specify the
DevicePairingKinds that are supported for the pairing process. You will also be responsible for creating your own
user interface to interact with the user as needed. Use custom pairing when you want your app to have a little more
influence over how the pairing process proceeds or to display your own pairing user interface.
In order to implement custom pairing, you will need to obtain the DeviceInformation object for the device you
are interested in, just like with basic pairing. However, the specific property you are interested in is
DeviceInformation.Pairing.Custom. This will give you a DeviceInformationCustomPairing object. All of the
DeviceInformationCustomPairing.PairAsync methods require you to include a DevicePairingKinds
parameter. This indicates the actions that the user will need to take in order to attempt to pair the device. See the
DevicePairingKinds reference page for more information about the different kinds and what actions the user will
need to take. Just like with basic pairing, you will need to await the result in order to give your app time to attempt
to complete the pairing action. The result of the pairing action will be returned, and as long as no errors are
returned, the device will be paired.
To support custom pairing, you will need to create a handler for the PairingRequested event. This handler needs
to make sure to account for all the different DevicePairingKinds that might be used in a custom pairing scenario.
The appropriate action to take will depend on the DevicePairingKinds provided as part of the event arguments.
It is important to be aware that custom pairing is always a system-level operation. Because of this, when you are
operating on Desktop or Windows Phone, a system dialog will always be shown to the user when pairing is going
to happen. This is because both of those platforms possess a user experience that requires user consent. Since that
dialog is automatically generated, you will not need to create your own dialog when you are opting for a
DevicePairingKinds of ConfirmOnly when operating on these platforms. For the other DevicePairingKinds,
you will need to perform some special handling depending on the specific DevicePairingKinds value. See the
sample for examples of how to handle custom pairing for different DevicePairingKinds values.
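
A minimal sketch of custom pairing with the ConfirmOnly kind, again assuming deviceInfo is the
DeviceInformation object you obtained; the other kinds need your own UI, as shown in the sample:

using Windows.Devices.Enumeration;

DeviceInformationCustomPairing custom = deviceInfo.Pairing.Custom;

custom.PairingRequested += (s, args) =>
{
    switch (args.PairingKind)
    {
        case DevicePairingKinds.ConfirmOnly:
            // On Desktop and Windows Phone the system shows its own consent dialog.
            args.Accept();
            break;
        // Handle ProvidePin, ConfirmPinMatch, and DisplayPin with your own UI.
    }
};

DevicePairingResult result = await custom.PairAsync(DevicePairingKinds.ConfirmOnly);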

Unpairing
Unpairing a device is only relevant in the basic or custom pairing scenarios described above. If you are using
automatic pairing, your app remains oblivious to the pairing status of the device and there is no need to unpair it. If
you do choose to unpair a device, the process is identical whether you implement basic or custom pairing. This is
because there is no need to provide additional information or interact in the unpairing process.
The first step to unpairing a device is obtaining the DeviceInformation object for the device that you want to
unpair. Then you need to retrieve the DeviceInformation.Pairing property and call
DeviceInformationPairing.UnpairAsync. Just like with pairing, you will want to await the result. The result of
the unpairing action will be returned, and as long as no errors are returned, the device will be unpaired.
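
A minimal sketch of unpairing, assuming deviceInfo is the DeviceInformation object for the paired device:

using Windows.Devices.Enumeration;

DeviceUnpairingResult result = await deviceInfo.Pairing.UnpairAsync();
if (result.Status == DeviceUnpairingResultStatus.Unpaired)
{
    // The device is no longer paired with the system.
}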

Sample
To download a sample showing how to use the Windows.Devices.Enumeration APIs, click here.
Point of Service

[ Updated for UWP apps on Windows 10. For Windows 8.x articles, see the archive ]
This section contains articles on how to use Point of Service (POS) with Universal Windows Platform (UWP) apps,
including how to use the Epson ESC/POS command language and out-of-band pairing of a POS peripheral.

TOPIC | DESCRIPTION
Barcode Scanner | This article provides device support information for barcode scanners.
Cash Drawer | This article provides device support information for cash drawers.
Magnetic Stripe Reader | This article provides device support information for magnetic stripe readers.
POS Printer | This article provides device support information for point of service printers.
Deploy scanner profiles with MDM | This article provides information on how barcode scanner profiles can be deployed with an MDM server.
Epson ESC/POS with formatting | This article provides an overview of the Epson ESC/POS command language and how to use it with the Windows.Devices.PointOfService APIs.
Out of band pairing | This article provides information on how out-of-band pairing allows apps to connect to a Point-of-Service peripheral without requiring discovery, using the Windows.Devices.PointOfService namespace.
Barcode Scanner

Enables application developers to access barcode scanners to retrieve decoded data from a variety of barcode
symbologies such as UPC and QR Codes depending on support from the hardware. See the BarcodeSymbologies
class for a full list of supported symbologies.

Requirements
Applications which require this namespace require the addition of pointOfService DeviceCapability to the app
package manifest.

Device support
USB
HID.Scanner
Windows contains a barcode scanner class driver which is based on the HID.Scanner (8C) usage page defined by
USB.org. This class driver will support any barcode scanner which implements this standard, such as:

MANUFACTURER | MODEL(S)
Honeywell | 1900GSR-2, 1200g-2
Intermec | SG20
Consult the manual for your barcode scanner or contact the manufacturer to determine if it can be configured as a
USB.HID.Scanner.
HID.Vendor specific
Windows supports implementation of vendor specific drivers to support additional barcode scanners. Please check
with your barcode scanner manufacturer for availability if the device is not supported with the in-box
USB.HID.Scanner.
Bluetooth
Serial Port Protocol (SPP) - Simple Serial Interface (SSI)
Windows supports SPP-SSI based Bluetooth barcode scanners.

MANUFACTURER | MODEL(S)
Socket Mobile | CHS 7 Series: 7Ci 1D Imager Barcode Scanner, 7Di 1D Durable Imager Barcode Scanner, 7Mi 1D Laser Barcode Scanner, 7Pi 1D Durable Laser Barcode Scanner
 | DuraScan 700 Series: D700 1D Imager Barcode Scanner, D730 1D Laser Barcode Scanner
 | SocketScan 800 Series: S800 1D Imager Barcode Scanner (formerly CHS 8Ci), S850 2D Imager Barcode Scanner (formerly CHS 8Qi)

Examples
See the barcode scanner sample for an example implementation.
Barcode scanner sample
Cash Drawer

Enables application developers to interact with cash drawers.

Requirements
Applications which require this namespace require the addition of pointOfService DeviceCapability to the app
package manifest.

Device support
Connection directly to the cash drawer can be made over the network or through Bluetooth, depending on the
capabilities of the cash drawer unit. Additionally, cash drawers that don't have network or Bluetooth capabilities can
be connected via the DK port on a supported POS printer, or the Star Micronics DK-AirCash accessory. At this time,
there is no support for cash drawers connected via USB or serial port.
Note: Contact Star Micronics for more information about the DK-AirCash.
Network/Bluetooth

MANUFACTURER | MODEL(S)
APG Cash Drawer | NetPRO, BluePRO

Examples
See the cash drawer sample for an example implementation.
Cash drawer sample
Magnetic Stripe Reader

Enables application developers to access magnetic stripe readers to retrieve data from magnetic stripe enabled
cards such as credit/debit cards, loyalty cards, access cards, etc.

Requirements
Applications which require this namespace require the addition of pointOfService DeviceCapability to the app
package manifest.

Device support
USB
Supported vendor specific
Windows provides support for the following magnetic stripe readers from Magtek and IDTech based on their
Vendor ID and Product ID (VID/PID).

MANUFACTURER | MODEL(S) | PART NUMBER
IDTech | SecureMag (VID:0ACD PID:2010) | IDRE-3x5xxxx
IDTech | MiniMag (VID:0ACD PID:0500) | IDMB-3x5xxxx
Magtek | MagneSafe (VID:0801 PID:0011) | 210730xx
Magtek | Dynamag (VID:0801 PID:0002) | 210401xx

Custom vendor specific


Windows supports implementation of additional vendor specific drivers to support additional magnetic stripe
readers. Please check with your magnetic stripe reader manufacturer for availability.

Examples
See the magnetic stripe reader sample for an example implementation.
Magnetic stripe reader sample
POS Printer

Enables application developers to print to network and Bluetooth connected receipt printers using the Epson
ESC/POS printer control language.

Requirements
Applications which require this namespace require the addition of pointOfService DeviceCapability to the app
package manifest.

Device support
Windows supports the ability to print to network and Bluetooth connected receipt printers using the Epson
ESC/POS printer control language. For more information on ESC/POS, see Epson ESC/POS with formatting.
While the classes, enumerations, and interfaces exposed in the API support receipt printer, slip printer as well as
journal printer, the driver interface only supports receipt printer. Attempting to use slip printer or journal printer at
this time will return a status of not implemented.
Support is currently limited to the Network and Bluetooth device models listed in the tables below. USB connected
printers are currently not supported. Please check back for additional support to be added in the future.
Stationary POS printers (Network, Bluetooth)

MANUFACTURER | MODEL(S)
Epson | TM-T88V, TM-T70, TM-T20, TM-U220

Mobile POS printers (Bluetooth)

MANUFACTURER | MODEL(S)
Epson | Mobilink P20 (TM-P20), Mobilink P60 (TM-P60), Mobilink P80 (TM-P80)

Examples
See the POS printer sample for an example implementation.
POS printer sample
Out-of-band pairing

Out-of-band pairing allows apps to connect to a Point-of-Service peripheral without requiring discovery. Apps
must use the Windows.Devices.PointOfService namespace and pass in a specifically formatted string (out-of-
band blob) to the appropriate FromIdAsync method for the desired peripheral. When FromIdAsync is executed,
the host device pairs and connects to the peripheral before the operation returns to the caller.

Out-of-band blob format


"connectionKind":"Network",
"physicalAddress":"AA:BB:CC:DD:EE:FF",
"connectionString":"192.168.1.1:9001",
"peripheralKinds":"{C7BC9B22-21F0-4F0D-9BB6-66C229B8CD33}",
"providerId":"{02FFF12E-7291-4A5D-ADFA-DA8FB7769CD2}",
"providerName":"PrinterProtocolProvider.dll"

connectionKind - The type of connection. Valid values are "Network" and "Bluetooth".
physicalAddress - The MAC address of the peripheral. For example, in case of a network printer, this would be the
MAC address that is provided by the printer test sheet in AA:BB:CC:DD:EE:FF format.
connectionString - The connection string of the peripheral. For example, in the case of a network printer, this
would be the IP address provided by the printer test sheet in 192.168.1.1:9001 format. This field is omitted for all
Bluetooth peripherals.
peripheralKinds - The GUID for the device type. Valid values are:

DEVICE TYPE | GUID
POS printer | C7BC9B22-21F0-4F0D-9BB6-66C229B8CD33
Barcode scanner | C243FFBD-3AFC-45E9-B3D3-2BA18BC7EBC5
Cash drawer | 772E18F2-8925-4229-A5AC-6453CB482FDA

providerId - The GUID for the protocol provider class. Valid values are:

PROTOCOL PROVIDER CLASS | GUID
Generic ESC/POS network printer | 02FFF12E-7291-4A5D-ADFA-DA8FB7769CD2
Generic ESC/POS BT printer | CCD5B810-95B9-4320-BA7E-78C223CAF418
Epson BT printer | 94917594-544F-4AF8-B53B-EC6D9F8A4464
Epson network printer | 9F0F8BE3-4E59-4520-BFBA-AF77614A31CE
Star network printer | 1E3A32C2-F411-4B8C-AC91-CC2C5FD21996
Socket BT scanner | 6E7C8178-A006-405E-85C3-084244885AD2
APG network drawer | E619E2FE-9489-4C74-BF57-70AED670B9B0
APG BT drawer | 332E6550-2E01-42EB-9401-C6A112D80185

providerName - The name of the provider DLL. The default providers are:

PROVIDER | DLL NAME
Printer | PrinterProtocolProvider.dll
Cash drawer | CashDrawerProtocolProvider.dll
Scanner | BarcodeScannerProtocolProvider.dll

Usage example: Network printer


String oobBlobNetworkPrinter =
"{\"connectionKind\":\"Network\"," +
"\"physicalAddress\":\"AA:BB:CC:DD:EE:FF\"," +
"\"connectionString\":\"192.168.1.1:9001\"," +
"\"peripheralKinds\":\"{C7BC9B22-21F0-4F0D-9BB6-66C229B8CD33}\"," +
"\"providerId\":\"{02FFF12E-7291-4A5D-ADFA-DA8FB7769CD2}\"," +
"\"providerName\":\"PrinterProtocolProvider.dll\"}";

printer = await PosPrinter.FromIdAsync(oobBlobNetworkPrinter);

Usage example: Bluetooth printer


string oobBlobBTPrinter =
"{\"connectionKind\":\"Bluetooth\"," +
"\"physicalAddress\":\"AA:BB:CC:DD:EE:FF\"," +
"\"peripheralKinds\":\"{C7BC9B22-21F0-4F0D-9BB6-66C229B8CD33}\"," +
"\"providerId\":\"{CCD5B810-95B9-4320-BA7E-78C223CAF418}\"," +
"\"providerName\":\"PrinterProtocolProvider.dll\"}";

printer = await PosPrinter.FromIdAsync(oobBlobBTPrinter);


Epson ESC/POS with formatting

[ Updated for UWP apps on Windows 10. For Windows 8.x articles, see the archive ]
Important APIs
PointofService Printer
Windows.Devices.PointOfService
Learn how to use the ESC/POS command language to format text, such as bold and double size characters, for your
Point of Service printer.

ESC/POS usage
Windows Point of Service provides use of a variety of printers, including several Epson TM series printers (for a full
list of supported printers, see the PointofService Printer page). Windows supports printing through the ESC/POS
printer control language, which provides efficient and functional commands for communicating with your printer.
ESC/POS is a command system created by Epson used across a wide range of POS printer systems, aimed at
avoiding incompatible command sets by providing universal applicability. Most modern printers support ESC/POS.
All commands start with the ESC character (ASCII 27, HEX 1B) or GS (ASCII 29, HEX 1D), followed by another
character that specifies the command. Normal text is simply sent to the printer, separated by line breaks.
The Windows PointOfService API provides much of that functionality for you via the Print() or PrintLine()
methods. However, to get certain formatting or to send specific commands, you must use ESC/POS commands,
built as a string and sent to the printer.

Example using bold and double size characters


The example below shows how to use ESC/POS commands to print in bold and double sized characters. Note that
each command is built as a string, then inserted into the printJob calls.

// prior plumbing code removed for brevity

// this code assumes you've already created a receipt print job (printJob)
// and also that you've already checked the PosPrinter Capabilities to
// verify that the printer supports Bold and DoubleHighDoubleWide print modes

const string ESC = "\u001B";
const string GS = "\u001D";
const string InitializePrinter = ESC + "@";
const string BoldOn = ESC + "E" + "\u0001";
const string BoldOff = ESC + "E" + "\0";
const string DoubleOn = GS + "!" + "\u0011"; // 2x sized text (double-high + double-wide)
const string DoubleOff = GS + "!" + "\0";

printJob.Print(InitializePrinter);
printJob.PrintLine("Here is some normal text.");
printJob.PrintLine(BoldOn + "Here is some bold text." + BoldOff);
printJob.PrintLine(DoubleOn + "Here is some large text." + DoubleOff);

printJob.ExecuteAsync();

For more information on ESC/POS, including available commands, check out the Epson ESC/POS FAQ. For details
on Windows.Devices.PointOfService and all the available functionality, see PointofService Printer on MSDN.
Deploy barcode scanner profiles with MDM

Note This feature requires Windows 10 Mobile or later.


Barcode scanner profiles can be deployed with an MDM server. To deploy the profiles, use OemProfile in the
EnterpriseExtFileSystem CSP to place them into the \Data\SharedData\OEM\Public\Profile folder. These scanner
profiles can then be used by driver manufacturers to configure settings that are not exposed through the API
surface.
Microsoft does not define the specifics of a scanner profile or how to implement them.

Related topics
Barcode Scanner
EnterpriseExtFileSystem CSP
Sensors

[ Updated for UWP apps on Windows 10. For Windows 8.x articles, see the archive ]
Sensors let your app know the relationship between a device and the physical world around it. Sensors can tell
your app the direction, orientation, and movement of the device. These sensors can help make your game,
augmented reality app, or utility app more useful and interactive by providing a unique form of input, such as
using the motion of the device to arrange the characters on the screen or to simulate being in a cockpit and using
the device as the steering wheel.
As a general rule, decide from the outset whether your app will depend exclusively on sensors or if sensors will just
offer an additional control mechanism. For example, a driving game using a device as a virtual steering wheel
could alternatively be controlled through an on-screen GUI; this way, the app works regardless of the sensors
available on the system. On the other hand, a marble tilt maze could be coded to only work on systems that have
the appropriate sensors. You must make the strategic choice of whether to fully rely on sensors. Note that a
mouse/touch control scheme trades immersion for greater control.
The following video demonstrates some of the sensors available to you when you are building your app. This is not
an exhaustive list, but goes over some of the more common sensors and demonstrates their purpose.

TOPIC | DESCRIPTION
Calibrate sensors | Sensors in a device based on the magnetometer (the compass, inclinometer, and orientation sensor) can become in need of calibration due to environmental factors. The MagnetometerAccuracy enumeration can help determine a course of action when your device is in need of calibration.
Sensor orientation | Sensor data from the OrientationSensor classes is defined by their reference axes. These axes are defined by the device's landscape orientation and rotate with the device as the user turns it.
Use the accelerometer | Learn how to use the accelerometer to respond to user movement.
Use the compass | Learn how to use the compass to determine the current heading.
Use the gyrometer | Learn how to use the gyrometer to detect changes in user movement.
Use the inclinometer | Learn how to use the inclinometer to determine pitch, roll, and yaw.
Use the light sensor | Learn how to use the ambient light sensor to detect changes in lighting.
Use the orientation sensor | Learn how to use the orientation sensors to determine the device orientation.
Sensor batching
Some sensors support the concept of batching. This will vary depending on the individual sensor available. When a
sensor implements batching, it collects several points of data over a specified time interval and then transfers all of
that data at one time. This is different from normal behavior where a sensor reports its findings as soon as it
performs a reading. Consider the following diagram which shows how data is collected and then delivered, first
with normal delivery and then with batched delivery.

The primary advantage for sensor batching is prolonging battery life. When the data is not sent immediately, that
saves on processor power and prevents the data from needing to be immediately processed. Parts of the system
can sleep until they are needed, which generates a significant power savings.
You can influence how often the sensor sends batches by adjusting the latency. For example, the Accelerometer
sensor has the ReportLatency property. When this property is set for an application, the sensor will send data
after the specified amount of time. You can control how much data is accumulated over a given latency by setting
the ReportInterval property.
There are a couple of caveats to keep in mind with respect to setting the latency. The first caveat is that each sensor
has a MaxBatchSize that it can support based on the sensor itself. This is the number of events that the sensor
can cache before it is forced to send them. If you multiply MaxBatchSize by ReportInterval, that determines the
maximum ReportLatency value. If you specify a higher value than this, the maximum latency will be used so that
you do not lose data. In addition, multiple applications can each set a desired latency. In order to meet the needs of
all applications, the shortest latency period will be used. Because of these facts, the latency you set in your
application may not match the observed latency.
If a sensor is using batch reporting, calling GetCurrentReading will clear the current batch of data and start a new
latency period.
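
A minimal sketch of opting into batched delivery on the accelerometer; the latency value is illustrative:

using Windows.Devices.Sensors;

Accelerometer accelerometer = Accelerometer.GetDefault();
if (accelerometer != null)
{
    // Take a reading at the fastest supported interval...
    accelerometer.ReportInterval = accelerometer.MinimumReportInterval;
    // ...but deliver readings in batches up to once per second,
    // clamped by MaxBatchSize * ReportInterval as described above.
    accelerometer.ReportLatency = 1000;

    accelerometer.ReadingChanged += (s, e) =>
    {
        AccelerometerReading reading = e.Reading;
        // Batched readings are delivered through this same event.
    };
}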

Accelerometer
The Accelerometer sensor measures G-force values along the X, Y, and Z axes of the device and is great for
simple motion-based applications. Note that G-force values include acceleration due to gravity. If the device has
the SimpleOrientation of FaceUp on a table, then the accelerometer would read -1 G on the Z axis. Thus,
accelerometers do not necessarily measure just coordinate acceleration (the rate of change of velocity). When
using an accelerometer, make sure to differentiate between the gravitational vector from gravity and the linear
acceleration vector from motion. Note that the gravitational vector should normalize to 1 for a stationary device.
The following diagrams illustrate:
V1 = Vector 1 = Force due to gravity
V2 = Vector 2 = -Z axis of device chassis (points out of back of screen)
i = Tilt angle (inclination) = angle between Z axis of device chassis and gravity vector

Apps that might use the accelerometer sensor include a game where a marble on the screen rolls in the direction
you tilt the device (gravitational vector). This type of functionality closely mirrors that of the Inclinometer and
could also be achieved with that sensor by using a combination of pitch and roll. Using the accelerometer's gravity
vector simplifies this somewhat by providing a single, easily manipulated vector for device tilt. Another
example would be an app that makes a whip's cracking sound when the user flicks the device through the air
(linear acceleration vector).
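The legend above suggests a simple calculation. The following is a minimal sketch, under the convention that a FaceUp device has zero tilt, of estimating the tilt angle i from a single reading; it assumes the device is stationary, so the reading approximates the gravity vector.

using System;
using Windows.Devices.Sensors;

// A sketch: estimate the tilt angle (i) as the angle between V2
// (the -Z axis of the chassis) and the measured gravity vector.
Accelerometer accelerometer = Accelerometer.GetDefault();
if (accelerometer != null)
{
    AccelerometerReading reading = accelerometer.GetCurrentReading();
    double x = reading.AccelerationX;
    double y = reading.AccelerationY;
    double z = reading.AccelerationZ;

    // For a stationary device, (x, y, z) is the gravity vector with a
    // magnitude of roughly 1 G.
    double magnitude = Math.Sqrt(x * x + y * y + z * z);

    // FaceUp on a table reads (0, 0, -1), which yields 0 degrees of tilt.
    double tiltDegrees = Math.Acos(-z / magnitude) * 180.0 / Math.PI;
}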

Activity sensor
The Activity sensor determines the current status of the device attached to the sensor. This sensor is frequently
used in fitness applications to keep track of when a user carrying a device is running or walking. See ActivityType
for a list of possible activities that can be detected by this sensor API.
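A minimal sketch of reading the current activity might look like the following; it assumes the device exposes an activity sensor and that the app declares the activity device capability in its manifest.

using Windows.Devices.Sensors;

// A sketch (inside an async method): read the user's current activity once.
ActivitySensor activitySensor = await ActivitySensor.GetDefaultAsync();
if (activitySensor != null)
{
    ActivitySensorReading reading = await activitySensor.GetCurrentReadingAsync();
    if (reading != null &&
        (reading.Activity == ActivityType.Walking ||
         reading.Activity == ActivityType.Running))
    {
        // The user is on the move; update the fitness tracking UI...
    }
}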

Altimeter
The Altimeter sensor returns a value that indicates the altitude of the sensor. This enables you to keep track of a
change in altitude in terms of meters from sea level. One example of an app that might use this would be a
running app that keeps track of the elevation changes during a run to calculate the calories burned. In this case,
this sensor data could be combined with the Activity sensor to provide more accurate tracking information.
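A minimal sketch of taking a single altimeter reading, assuming the device has an altimeter:

using Windows.Devices.Sensors;

// A sketch: read the altitude change reported by the altimeter.
Altimeter altimeter = Altimeter.GetDefault();
if (altimeter != null)
{
    AltimeterReading reading = altimeter.GetCurrentReading();
    double altitudeChangeInMeters = reading.AltitudeChangeInMeters;
}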

Barometer
The Barometer sensor enables an application to get barometric readings. A weather application could use this
information to provide the current atmospheric pressure. This could be used to provide more detailed information
and predict potential weather changes.
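A minimal sketch of reading the current pressure, assuming the device has a barometer:

using Windows.Devices.Sensors;

// A sketch: read the current station pressure in hectopascals.
Barometer barometer = Barometer.GetDefault();
if (barometer != null)
{
    BarometerReading reading = barometer.GetCurrentReading();
    double pressureInHectopascals = reading.StationPressureInHectopascals;
}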

Compass
The Compass sensor returns a 2D heading with respect to magnetic north based on the horizontal plane of the
earth. The compass sensor should not be used in determining specific device orientation or for representing
anything in 3D space. Geographical features can cause natural declination in the heading, so some systems
support both HeadingMagneticNorth and HeadingTrueNorth. Think about which one your app prefers, but
remember that not all systems will report a true north value. The gyrometer and magnetometer (a sensor that
measures magnetic field strength) combine their data to produce the compass heading, which has
the net effect of stabilizing the data (raw magnetic field readings are unstable due to electrical components in the system).

Apps that want to display a compass rose or navigate a map would typically use the compass sensor.

Gyrometer
The Gyrometer sensor measures angular velocities along the X, Y, and Z axes. These are very useful in simple
motion-based apps that do not concern themselves with device orientation but care about the device rotating at
different speeds. Gyrometers can suffer from noise in the data or a constant bias along one or more of the axes.
You should query the accelerometer to verify whether the device is moving in order to determine if the gyrometer
suffers from a bias, and then compensate accordingly in your app.

An example of an app that could use the gyrometer sensor is a game that spins a roulette wheel based on a quick
rotational jerk of the device.

Inclinometer
The Inclinometer sensor specifies the yaw, pitch, and roll values of a device and works best with apps that care
about how the device is situated in space. Pitch and roll are derived by taking the accelerometer's gravity vector
and by integrating the data from the gyrometer. Yaw is established from magnetometer and gyrometer data (similar to
compass heading). Inclinometers offer advanced orientation data in an easily digestible and understandable
way. Use inclinometers when you need device orientation but do not need to manipulate the sensor data.
Apps that change their view to match the orientation of the device can use the inclinometer sensor. An app
that displays an airplane matching the yaw, pitch, and roll of the device would also use inclinometer readings.

Light sensor
The Light sensor is capable of determining the ambient light surrounding the sensor. This enables an app to
determine when the light setting surrounding a device has changed. For example, a user with a slate device might
walk from indoors to outdoors on a sunny day. A smart app could use this value to increase the contrast
between the background and the rendered text, keeping the content readable in the brighter
outdoor setting.

Orientation sensor
Device orientation is expressed through both a quaternion and a rotation matrix. The OrientationSensor offers a
high degree of precision in determining how the device is situated in space with respect to absolute heading. The
OrientationSensor data is derived from the accelerometer, gyrometer, and magnetometer. As such, both the
inclinometer and compass sensors can be derived from the quaternion values. Quaternions and rotation matrices
lend themselves well to advanced mathematical manipulation and are often used in graphical programming. Apps
using complex manipulation should favor the orientation sensor as many transforms are based off of quaternions
and rotation matrices.

The orientation sensor is often used in advanced augmented reality apps that paint an overlay on your
surroundings based on the direction the back of the device is pointing.
Pedometer
The Pedometer sensor keeps track of the number of steps taken by the user carrying the connected device. The
sensor is configured to keep track of the number of steps over a given time period. Several fitness applications like
to keep track of the number of steps taken in order to help the user set and reach various goals. This information
can then be collected and stored to show progress over time.
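A minimal sketch of subscribing to step-count updates, assuming the device has a pedometer:

using Windows.Devices.Sensors;

// A sketch (inside an async method): subscribe to step-count updates.
Pedometer pedometer = await Pedometer.GetDefaultAsync();
if (pedometer != null)
{
    pedometer.ReadingChanged += (sender, args) =>
    {
        PedometerReading reading = args.Reading;

        // Readings are reported separately per step kind; CumulativeSteps
        // is the running total for that kind.
        if (reading.StepKind == PedometerStepKind.Walking)
        {
            int walkingSteps = reading.CumulativeSteps;
        }
    };
}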

Proximity sensor
The Proximity sensor can be used to indicate whether or not objects are detected by the sensor. In addition to
determining whether or not an object is within range of the device, the proximity sensor also can determine the
distance to the detected object. One example where this could be used is with an application that wants to emerge
from a sleep state when a user comes within a specified range. The device could be in a low-powered sleep state
until the proximity sensor detects an object, and then could enter a more active state.
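A minimal sketch of that pattern follows; note that proximity sensors are enumerated through a device selector rather than a GetDefault method.

using Windows.Devices.Enumeration;
using Windows.Devices.Sensors;

// A sketch (inside an async method): find a proximity sensor, take one reading.
string selector = ProximitySensor.GetDeviceSelector();
DeviceInformationCollection devices = await DeviceInformation.FindAllAsync(selector);
if (devices.Count > 0)
{
    ProximitySensor proximitySensor = ProximitySensor.FromId(devices[0].Id);
    ProximitySensorReading reading = proximitySensor.GetCurrentReading();
    if (reading != null && reading.IsDetected)
    {
        // An object is in range. DistanceInMillimeters is null when the
        // sensor cannot measure the distance.
        uint? distanceInMillimeters = reading.DistanceInMillimeters;
    }
}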

Simple orientation
The SimpleOrientationSensor detects the current quadrant orientation of the specified device, or whether it is
face-up or face-down. It has six possible SimpleOrientation states (NotRotated, Rotated90, Rotated180, Rotated270,
FaceUp, FaceDown).
A reader app that changes its display based on the device being held parallel or perpendicular to the ground would
use the values from the SimpleOrientationSensor to determine how the device is being held.
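A minimal sketch of reading the current state (note that the enumeration spells the flat states Faceup and Facedown):

using Windows.Devices.Sensors;

// A sketch: react to the current device orientation.
SimpleOrientationSensor simpleOrientationSensor = SimpleOrientationSensor.GetDefault();
if (simpleOrientationSensor != null)
{
    switch (simpleOrientationSensor.GetCurrentOrientation())
    {
        case SimpleOrientation.NotRotated:
        case SimpleOrientation.Rotated90DegreesCounterclockwise:
        case SimpleOrientation.Rotated180DegreesCounterclockwise:
        case SimpleOrientation.Rotated270DegreesCounterclockwise:
            // Device is held in one of the four quadrant orientations.
            break;
        case SimpleOrientation.Faceup:
        case SimpleOrientation.Facedown:
            // Device is lying flat.
            break;
    }
}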

Samples
For some samples that demonstrate using a couple of different sensors, see Windows Sensor Samples.
Calibrate sensors

[ Updated for UWP apps on Windows 10. For Windows 8.x articles, see the archive ]
Important APIs
Windows.Devices.Sensors
Windows.Devices.Sensors.Custom
Sensors in a device based on the magnetometer - the compass, inclinometer, and orientation sensor - can require
calibration due to environmental factors. The MagnetometerAccuracy enumeration can help
determine a course of action when your device is in need of calibration.

When to calibrate the magnetometer


The MagnetometerAccuracy enumeration has four values that help you determine if the device your app is
running on needs to be calibrated. If a device needs to be calibrated, you should let the user know that calibration is
needed. However, you should not prompt the user to calibrate too frequently. We recommend no more than once
every 10 minutes.
| Value | Description |
|-------|-------------|
| Unknown | The sensor driver could not report the current accuracy. This does not necessarily mean the device is out of calibration. It is up to your app to decide the best course of action if Unknown is returned. If your app is dependent on an accurate sensor reading, you may want to prompt the user to calibrate the device. |
| Unreliable | There is currently a high degree of inaccuracy in the magnetometer. Apps should always ask for a calibration from the user when this value is first returned. |
| Approximate | The data is accurate enough for some applications. A virtual reality app, which only needs to know if the user has moved the device up/down or left/right, can continue without calibration. Apps that need an absolute heading, like a navigation app that needs to know what direction you are driving in order to give you directions, need to ask for calibration. |
| High | The data is precise. No calibration is needed, even for apps that need to know an absolute heading, such as augmented reality or navigation apps. |
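A minimal sketch of acting on these values; it assumes the app observes accuracy through the compass, whose CompassReading exposes it as HeadingAccuracy.

using Windows.Devices.Sensors;

// A sketch: choose a course of action from the reading's accuracy.
private void CompassReadingChanged(object sender, CompassReadingChangedEventArgs e)
{
    switch (e.Reading.HeadingAccuracy)
    {
        case MagnetometerAccuracy.Unreliable:
            // Ask the user to calibrate, but no more than once every 10 minutes.
            break;
        case MagnetometerAccuracy.Approximate:
            // Fine for relative motion; ask for calibration only if the app
            // needs an absolute heading.
            break;
        default:
            // Unknown or High: decide based on how accurate a reading the
            // app requires.
            break;
    }
}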

How to calibrate the magnetometer


This short video gives an overview of how to calibrate the magnetometer.
Sensor orientation

[ Updated for UWP apps on Windows 10. For Windows 8.x articles, see the archive ]
Important APIs
Windows.Devices.Sensors
Windows.Devices.Sensors.Custom
Sensor data from the Accelerometer, Gyrometer, Compass, Inclinometer, and OrientationSensor classes is
defined by their reference axes. These axes are defined by the device's landscape orientation and rotate with the
device as the user turns it. If your app supports automatic rotation and reorients itself to accommodate the device
as the user rotates it, you must adjust your sensor data for the rotation before using it.

Display orientation vs device orientation


In order to understand the reference axes for sensors, you need to distinguish display orientation from device
orientation. Display orientation is the direction text and images are displayed on the screen whereas device
orientation is the physical positioning of the device. In the following picture, both the device and display orientation
are in Landscape (note that the sensor axes shown are only applicable to landscape-first devices).

The following picture shows both the display and device orientation in LandscapeFlipped.

The next picture shows the display orientation in Landscape while the device orientation is LandscapeFlipped.
You can query the orientation values through the DisplayInformation class by using the GetForCurrentView
method with the CurrentOrientation property. Then you can create logic by comparing against the
DisplayOrientations enumeration. Remember that for every orientation you support, you have to support a
conversion of the reference axes to that orientation.

Landscape-first vs portrait-first devices


Manufacturers produce both landscape-first and portrait-first devices. The reference frame varies between
landscape-first devices (like desktops and laptops) and portrait-first devices (like phones and some tablets). The
following table shows the sensor axes for both landscape-first and portrait-first devices.

| Orientation | Landscape-first | Portrait-first |
|-------------|-----------------|----------------|
| Landscape | (axis diagram) | (axis diagram) |
| Portrait | (axis diagram) | (axis diagram) |
| LandscapeFlipped | (axis diagram) | (axis diagram) |
| PortraitFlipped | (axis diagram) | (axis diagram) |

Devices broadcasting display and headless devices


Some devices have the ability to broadcast the display to another device. For example, you could take a tablet and
broadcast the display to a projector that will be in landscape orientation. In this scenario, it is important to keep in
mind that the device orientation is based on the original device, not the one presenting the display. So an
accelerometer would report data for the tablet.
Furthermore, some devices do not have a display at all. For such devices, the default orientation is
portrait.

Display orientation and compass heading


Compass heading depends upon the reference axes, so it changes with the device orientation. Compensate
based on the following table (assume the user is facing north).

| Display orientation | Reference axis for heading | Compass API heading when facing north | Compass heading compensation |
|---------------------|----------------------------|---------------------------------------|------------------------------|
| Landscape | -Z | 0 | Heading |
| Portrait | Y | 90 | (Heading + 270) % 360 |
| LandscapeFlipped | Z | 180 | (Heading + 180) % 360 |
| PortraitFlipped | -Y | 270 | (Heading + 90) % 360 |

Modify the compass heading as shown in the table in order to correctly display the heading. The following code
snippet demonstrates how to do this.
private void ReadingChanged(object sender, CompassReadingChangedEventArgs e)
{
    double heading = e.Reading.HeadingMagneticNorth;
    double displayOffset = 0; // Initialized so the variable is definitely assigned

    // Calculate the compass heading offset based on
    // the current display orientation.
    DisplayInformation displayInfo = DisplayInformation.GetForCurrentView();

    switch (displayInfo.CurrentOrientation)
    {
        case DisplayOrientations.Landscape:
            displayOffset = 0;
            break;
        case DisplayOrientations.Portrait:
            displayOffset = 270;
            break;
        case DisplayOrientations.LandscapeFlipped:
            displayOffset = 180;
            break;
        case DisplayOrientations.PortraitFlipped:
            displayOffset = 90;
            break;
    }

    double displayCompensatedHeading = (heading + displayOffset) % 360;

    // Update the UI...
}

Display orientation with the accelerometer and gyrometer


This table converts accelerometer and gyrometer data for display orientation.

| Display orientation | X | Y | Z |
|---------------------|-----|-----|-----|
| Landscape | X | Y | Z |
| Portrait | Y | -X | Z |
| LandscapeFlipped | -X | -Y | Z |
| PortraitFlipped | -Y | X | Z |

The following code example applies these conversions to the gyrometer.

private void ReadingChanged(object sender, GyrometerReadingChangedEventArgs e)
{
    // Initialized so the variables are definitely assigned
    double x_Axis = 0;
    double y_Axis = 0;
    double z_Axis = 0;

    GyrometerReading reading = e.Reading;

    // Calculate the gyrometer axes based on
    // the current display orientation.
    DisplayInformation displayInfo = DisplayInformation.GetForCurrentView();
    switch (displayInfo.CurrentOrientation)
    {
        case DisplayOrientations.Landscape:
            x_Axis = reading.AngularVelocityX;
            y_Axis = reading.AngularVelocityY;
            z_Axis = reading.AngularVelocityZ;
            break;
        case DisplayOrientations.Portrait:
            x_Axis = reading.AngularVelocityY;
            y_Axis = -1 * reading.AngularVelocityX;
            z_Axis = reading.AngularVelocityZ;
            break;
        case DisplayOrientations.LandscapeFlipped:
            x_Axis = -1 * reading.AngularVelocityX;
            y_Axis = -1 * reading.AngularVelocityY;
            z_Axis = reading.AngularVelocityZ;
            break;
        case DisplayOrientations.PortraitFlipped:
            x_Axis = -1 * reading.AngularVelocityY;
            y_Axis = reading.AngularVelocityX;
            z_Axis = reading.AngularVelocityZ;
            break;
    }

    // Update the UI...
}

Display orientation and device orientation


The OrientationSensor data must be changed in a different way. Think of these different orientations as
counterclockwise rotations around the Z axis, so we need to reverse the rotation to get back the user's orientation.
For quaternion data, we can use Euler's formula to define a rotation with a reference quaternion, and we can also
use a reference rotation matrix.

To get the relative orientation you want, multiply the reference object against the absolute object (the reading
returned by the sensor). Note that this math is not commutative:

relativeOrientation = referenceObject * absoluteObject
| Display orientation | Counterclockwise rotation around Z | Reference quaternion (reverse rotation) | Reference rotation matrix (reverse rotation) |
|---------------------|------------------------------------|-----------------------------------------|----------------------------------------------|
| Landscape | 0 | 1 + 0i + 0j + 0k | [1 0 0; 0 1 0; 0 0 1] |
| Portrait | 90 | cos(-45°) + (0i + 0j + 1k)·sin(-45°) | [0 1 0; -1 0 0; 0 0 1] |
| LandscapeFlipped | 180 | 0 + 0i + 0j - 1k | [-1 0 0; 0 -1 0; 0 0 1] |
| PortraitFlipped | 270 | cos(-135°) + (0i + 0j + 1k)·sin(-135°) | [0 -1 0; 1 0 0; 0 0 1] |
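As a minimal sketch of the multiplication above, assuming System.Numerics for the quaternion math and a Portrait display orientation:

using System;
using System.Numerics;
using Windows.Devices.Sensors;

// A sketch: compensate an absolute OrientationSensor quaternion for a
// Portrait display orientation (reverse 90-degree rotation around Z).
OrientationSensor orientationSensor = OrientationSensor.GetDefault();
if (orientationSensor != null)
{
    SensorQuaternion q = orientationSensor.GetCurrentReading().Quaternion;
    Quaternion absolute = new Quaternion(q.X, q.Y, q.Z, q.W);

    // cos(-45°) + (0i + 0j + 1k) * sin(-45°), from the table above.
    Quaternion reference = Quaternion.CreateFromAxisAngle(Vector3.UnitZ, (float)(-Math.PI / 2));

    // The multiplication is not commutative: reference first, then absolute.
    Quaternion relative = Quaternion.Multiply(reference, absolute);
}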
Use the accelerometer

[ Updated for UWP apps on Windows 10. For Windows 8.x articles, see the archive ]
Important APIs
Windows.Devices.Sensors
Accelerometer
[Some information relates to pre-released product which may be substantially modified before it's commercially
released. Microsoft makes no warranties, express or implied, with respect to the information provided here.]
Learn how to use the accelerometer to respond to user movement.
A simple game app relies on a single sensor, the accelerometer, as an input device. These apps typically use only
one or two axes for input, but they may also use the shake event as another input source.

Prerequisites
You should be familiar with Extensible Application Markup Language (XAML), Microsoft Visual C#, and events.
The device or emulator that you're using must support an accelerometer.

Create a simple accelerometer app


This section is divided into two subsections. The first subsection will take you through the steps necessary to create
a simple accelerometer application from scratch. The following subsection explains the app you have just created.
Instructions
Create a new project, choosing a Blank App (Universal Windows) from the Visual C# project templates.
Open your project's MainPage.xaml.cs file and replace the existing code with the following.
using System;
using System.Collections.Generic;
using System.IO;
using System.Linq;
using Windows.Foundation;
using Windows.Foundation.Collections;
using Windows.UI.Xaml;
using Windows.UI.Xaml.Controls;
using Windows.UI.Xaml.Controls.Primitives;
using Windows.UI.Xaml.Data;
using Windows.UI.Xaml.Input;
using Windows.UI.Xaml.Media;
using Windows.UI.Xaml.Navigation;

// Required to support the core dispatcher and the accelerometer
using Windows.UI.Core;
using Windows.Devices.Sensors;

namespace App1
{
    public sealed partial class MainPage : Page
    {
        // Sensor and dispatcher variables
        private Accelerometer _accelerometer;

        // This event handler writes the current accelerometer reading to
        // the three acceleration text blocks on the app's main page.
        private async void ReadingChanged(object sender, AccelerometerReadingChangedEventArgs e)
        {
            await Dispatcher.RunAsync(CoreDispatcherPriority.Normal, () =>
            {
                AccelerometerReading reading = e.Reading;
                txtXAxis.Text = String.Format("{0,5:0.00}", reading.AccelerationX);
                txtYAxis.Text = String.Format("{0,5:0.00}", reading.AccelerationY);
                txtZAxis.Text = String.Format("{0,5:0.00}", reading.AccelerationZ);
            });
        }

        public MainPage()
        {
            this.InitializeComponent();
            _accelerometer = Accelerometer.GetDefault();

            if (_accelerometer != null)
            {
                // Establish the report interval
                uint minReportInterval = _accelerometer.MinimumReportInterval;
                uint reportInterval = minReportInterval > 16 ? minReportInterval : 16;
                _accelerometer.ReportInterval = reportInterval;

                // Assign an event handler for the reading-changed event
                _accelerometer.ReadingChanged += new TypedEventHandler<Accelerometer, AccelerometerReadingChangedEventArgs>(ReadingChanged);
            }
        }
    }
}

You'll need to replace the namespace in the previous snippet with the name you gave your project. For example, if
you created a project named AccelerometerCS, you'd replace namespace App1 with namespace AccelerometerCS.
Open the file MainPage.xaml and replace the original contents with the following XML.
<Page
x:Class="App1.MainPage"
xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
xmlns:local="using:App1"
xmlns:d="http://schemas.microsoft.com/expression/blend/2008"
xmlns:mc="http://schemas.openxmlformats.org/markup-compatibility/2006"
mc:Ignorable="d">

<Grid x:Name="LayoutRoot" Background="#FF0C0C0C">


<TextBlock HorizontalAlignment="Left" Height="25" Margin="8,20,0,0" TextWrapping="Wrap" Text="X-axis:" VerticalAlignment="Top"
Width="62" Foreground="#FFEDE6E6"/>
<TextBlock HorizontalAlignment="Left" Height="27" Margin="8,49,0,0" TextWrapping="Wrap" Text="Y-axis:" VerticalAlignment="Top"
Width="62" Foreground="#FFF5F2F2"/>
<TextBlock HorizontalAlignment="Left" Height="23" Margin="8,80,0,0" TextWrapping="Wrap" Text="Z-axis:" VerticalAlignment="Top"
Width="62" Foreground="#FFF6F0F0"/>
<TextBlock x:Name="txtXAxis" HorizontalAlignment="Left" Height="15" Margin="70,16,0,0" TextWrapping="Wrap" Text="TextBlock"
VerticalAlignment="Top" Width="61" Foreground="#FFF2F2F2"/>
<TextBlock x:Name="txtYAxis" HorizontalAlignment="Left" Height="15" Margin="70,49,0,0" TextWrapping="Wrap" Text="TextBlock"
VerticalAlignment="Top" Width="53" Foreground="#FFF2EEEE"/>
<TextBlock x:Name="txtZAxis" HorizontalAlignment="Left" Height="15" Margin="70,80,0,0" TextWrapping="Wrap" Text="TextBlock"
VerticalAlignment="Top" Width="53" Foreground="#FFFFF8F8"/>

</Grid>
</Page>

You'll need to replace the first part of the class name in the previous snippet with the namespace of your app. For
example, if you created a project named AccelerometerCS, you'd replace x:Class="App1.MainPage" with
x:Class="AccelerometerCS.MainPage" . You should also replace xmlns:local="using:App1" with xmlns:local="using:AccelerometerCS"
.
Press F5 or select Debug > Start Debugging to build, deploy, and run the app.
Once the app is running, you can change the accelerometer values by moving the device or using the emulator
tools.
Stop the app by returning to Visual Studio and pressing Shift+F5, or select Debug > Stop Debugging.
Explanation
The previous example demonstrates how little code you'll need to write in order to integrate accelerometer input in
your app.
The app establishes a connection with the default accelerometer in the MainPage method.

_accelerometer = Accelerometer.GetDefault();

The app establishes the report interval within the MainPage method. This code retrieves the minimum interval
supported by the device and compares it to a requested interval of 16 milliseconds (which approximates a 60-Hz
refresh rate). If the minimum supported interval is greater than the requested interval, the code sets the value to the
minimum. Otherwise, it sets the value to the requested interval.

uint minReportInterval = _accelerometer.MinimumReportInterval;
uint reportInterval = minReportInterval > 16 ? minReportInterval : 16;
_accelerometer.ReportInterval = reportInterval;

The new accelerometer data is captured in the ReadingChanged method. Each time the sensor driver receives
new data from the sensor, it passes the values to your app using this event handler. The app registers this event
handler on the following line.

_accelerometer.ReadingChanged += new TypedEventHandler<Accelerometer, AccelerometerReadingChangedEventArgs>(ReadingChanged);

These new values are written to the TextBlocks found in the project's XAML.

<TextBlock x:Name="txtXAxis" HorizontalAlignment="Left" Height="15" Margin="70,16,0,0" TextWrapping="Wrap" Text="TextBlock"


VerticalAlignment="Top" Width="61" Foreground="#FFF2F2F2"/>
<TextBlock x:Name="txtYAxis" HorizontalAlignment="Left" Height="15" Margin="70,49,0,0" TextWrapping="Wrap" Text="TextBlock"
VerticalAlignment="Top" Width="53" Foreground="#FFF2EEEE"/>
<TextBlock x:Name="txtZAxis" HorizontalAlignment="Left" Height="15" Margin="70,80,0,0" TextWrapping="Wrap" Text="TextBlock"
VerticalAlignment="Top" Width="53" Foreground="#FFFFF8F8"/>

Related topics
Accelerometer Sample
Use the compass

[ Updated for UWP apps on Windows 10. For Windows 8.x articles, see the archive ]
Important APIs
Windows.Devices.Sensors
Compass
[Some information relates to pre-released product which may be substantially modified before it's commercially
released. Microsoft makes no warranties, express or implied, with respect to the information provided here.]
Learn how to use the compass to determine the current heading.
An app can retrieve the current heading with respect to magnetic, or true, north. Navigation apps use the compass
to determine the direction a device is facing and then orient the map accordingly.

Prerequisites
You should be familiar with Extensible Application Markup Language (XAML), Microsoft Visual C#, and events.
The device or emulator that you're using must support a compass.

Create a simple compass app


This section is divided into two subsections. The first subsection will take you through the steps necessary to create
a simple compass application from scratch. The following subsection explains the app you have just created.
Instructions
Create a new project, choosing a Blank App (Universal Windows) from the Visual C# project templates.
Open your project's MainPage.xaml.cs file and replace the existing code with the following.
using System;
using System.Collections.Generic;
using System.IO;
using System.Linq;
using Windows.Foundation;
using Windows.Foundation.Collections;
using Windows.UI.Xaml;
using Windows.UI.Xaml.Controls;
using Windows.UI.Xaml.Controls.Primitives;
using Windows.UI.Xaml.Data;
using Windows.UI.Xaml.Input;
using Windows.UI.Xaml.Media;
using Windows.UI.Xaml.Navigation;

using Windows.UI.Core; // Required to access the core dispatcher object
using Windows.Devices.Sensors; // Required to access the sensor platform and the compass

namespace App1
{
    /// <summary>
    /// An empty page that can be used on its own or navigated to within a Frame.
    /// </summary>
    public sealed partial class MainPage : Page
    {
        private Compass _compass; // Our app's compass object

        // This event handler writes the current compass reading to
        // the text blocks on the app's main page.
        private async void ReadingChanged(object sender, CompassReadingChangedEventArgs e)
        {
            await Dispatcher.RunAsync(CoreDispatcherPriority.Normal, () =>
            {
                CompassReading reading = e.Reading;
                txtMagnetic.Text = String.Format("{0,5:0.00}", reading.HeadingMagneticNorth);
                if (reading.HeadingTrueNorth.HasValue)
                    txtNorth.Text = String.Format("{0,5:0.00}", reading.HeadingTrueNorth);
                else
                    txtNorth.Text = "No reading.";
            });
        }

        public MainPage()
        {
            this.InitializeComponent();
            _compass = Compass.GetDefault(); // Get the default compass object

            // Assign an event handler for the compass reading-changed event
            if (_compass != null)
            {
                // Establish the report interval for all scenarios
                uint minReportInterval = _compass.MinimumReportInterval;
                uint reportInterval = minReportInterval > 16 ? minReportInterval : 16;
                _compass.ReportInterval = reportInterval;
                _compass.ReadingChanged += new TypedEventHandler<Compass, CompassReadingChangedEventArgs>(ReadingChanged);
            }
        }
    }
}

You'll need to replace the namespace in the previous snippet with the name you gave your project. For example, if
you created a project named CompassCS, you'd replace namespace App1 with namespace CompassCS.
Open the file MainPage.xaml and replace the original contents with the following XML.
<Page
x:Class="App1.MainPage"
xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
xmlns:local="using:App1"
xmlns:d="http://schemas.microsoft.com/expression/blend/2008"
xmlns:mc="http://schemas.openxmlformats.org/markup-compatibility/2006"
mc:Ignorable="d">

<Grid x:Name="LayoutRoot" Background="#FF0C0C0C">


<TextBlock HorizontalAlignment="Left" Height="22" Margin="8,18,0,0" TextWrapping="Wrap" Text="Magnetic Heading:"
VerticalAlignment="Top" Width="104" Foreground="#FFFBF9F9"/>
<TextBlock HorizontalAlignment="Left" Height="18" Margin="8,58,0,0" TextWrapping="Wrap" Text="True North Heading:"
VerticalAlignment="Top" Width="104" Foreground="#FFF3F3F3"/>
<TextBlock x:Name="txtMagnetic" HorizontalAlignment="Left" Height="22" Margin="130,18,0,0" TextWrapping="Wrap"
Text="TextBlock" VerticalAlignment="Top" Width="116" Foreground="#FFFBF6F6"/>
<TextBlock x:Name="txtNorth" HorizontalAlignment="Left" Height="18" Margin="130,58,0,0" TextWrapping="Wrap" Text="TextBlock"
VerticalAlignment="Top" Width="116" Foreground="#FFF5F1F1"/>

</Grid>
</Page>

You'll need to replace the first part of the class name in the previous snippet with the namespace of your app. For
example, if you created a project named CompassCS, you'd replace x:Class="App1.MainPage" with
x:Class="CompassCS.MainPage" . You should also replace xmlns:local="using:App1" with xmlns:local="using:CompassCS" .

Press F5 or select Debug > Start Debugging to build, deploy, and run the app.
Once the app is running, you can change the compass values by moving the device or using the emulator tools.
Stop the app by returning to Visual Studio and pressing Shift+F5, or select Debug > Stop Debugging.
Explanation
The previous example demonstrates how little code you'll need to write in order to integrate compass input in your
app.
The app establishes a connection with the default compass in the MainPage method.

_compass = Compass.GetDefault(); // Get the default compass object

The app establishes the report interval within the MainPage method. This code retrieves the minimum interval
supported by the device and compares it to a requested interval of 16 milliseconds (which approximates a 60-Hz
refresh rate). If the minimum supported interval is greater than the requested interval, the code sets the value to the
minimum. Otherwise, it sets the value to the requested interval.

uint minReportInterval = _compass.MinimumReportInterval;
uint reportInterval = minReportInterval > 16 ? minReportInterval : 16;
_compass.ReportInterval = reportInterval;

The new compass data is captured in the ReadingChanged method. Each time the sensor driver receives new data
from the sensor, it passes the values to your app using this event handler. The app registers this event handler on
the following line.

_compass.ReadingChanged += new TypedEventHandler<Compass, CompassReadingChangedEventArgs>(ReadingChanged);
These new values are written to the TextBlocks found in the project's XAML.

<TextBlock HorizontalAlignment="Left" Height="22" Margin="8,18,0,0" TextWrapping="Wrap" Text="Magnetic Heading:"


VerticalAlignment="Top" Width="104" Foreground="#FFFBF9F9"/>
<TextBlock HorizontalAlignment="Left" Height="18" Margin="8,58,0,0" TextWrapping="Wrap" Text="True North Heading:"
VerticalAlignment="Top" Width="104" Foreground="#FFF3F3F3"/>
<TextBlock x:Name="txtMagnetic" HorizontalAlignment="Left" Height="22" Margin="130,18,0,0" TextWrapping="Wrap" Text="TextBlock"
VerticalAlignment="Top" Width="116" Foreground="#FFFBF6F6"/>
<TextBlock x:Name="txtNorth" HorizontalAlignment="Left" Height="18" Margin="130,58,0,0" TextWrapping="Wrap" Text="TextBlock"
VerticalAlignment="Top" Width="116" Foreground="#FFF5F1F1"/>

Related topics
Compass Sample
Use the gyrometer

[ Updated for UWP apps on Windows 10. For Windows 8.x articles, see the archive ]
Important APIs
Windows.Devices.Sensors
Gyrometer
[Some information relates to pre-released product which may be substantially modified before it's commercially
released. Microsoft makes no warranties, express or implied, with respect to the information provided here.]
Learn how to use the gyrometer to detect changes in user movement.
Gyrometers complement accelerometers as game controllers. The accelerometer measures linear motion, while
the gyrometer measures angular velocity, or rotational motion.

Prerequisites
You should be familiar with Extensible Application Markup Language (XAML), Microsoft Visual C#, and events.
The device or emulator that you're using must support a gyrometer.

Create a simple gyrometer app


This section is divided into two subsections. The first subsection will take you through the steps necessary to create
a simple gyrometer application from scratch. The following subsection explains the app you have just created.
Instructions
Create a new project, choosing a Blank App (Universal Windows) from the Visual C# project templates.
Open your project's MainPage.xaml.cs file and replace the existing code with the following.
using System;
using System.Collections.Generic;
using System.IO;
using System.Linq;
using Windows.Foundation;
using Windows.Foundation.Collections;
using Windows.UI.Xaml;
using Windows.UI.Xaml.Controls;
using Windows.UI.Xaml.Controls.Primitives;
using Windows.UI.Xaml.Data;
using Windows.UI.Xaml.Input;
using Windows.UI.Xaml.Media;
using Windows.UI.Xaml.Navigation;

using Windows.UI.Core; // Required to access the core dispatcher object
using Windows.Devices.Sensors; // Required to access the sensor platform and the gyrometer

namespace App1
{
    /// <summary>
    /// An empty page that can be used on its own or navigated to within a Frame.
    /// </summary>
    public sealed partial class MainPage : Page
    {
        private Gyrometer _gyrometer; // Our app's gyrometer object

        // This event handler writes the current gyrometer reading to
        // the three text blocks on the app's main page.
        private async void ReadingChanged(object sender, GyrometerReadingChangedEventArgs e)
        {
            await Dispatcher.RunAsync(CoreDispatcherPriority.Normal, () =>
            {
                GyrometerReading reading = e.Reading;
                txtXAxis.Text = String.Format("{0,5:0.00}", reading.AngularVelocityX);
                txtYAxis.Text = String.Format("{0,5:0.00}", reading.AngularVelocityY);
                txtZAxis.Text = String.Format("{0,5:0.00}", reading.AngularVelocityZ);
            });
        }

        public MainPage()
        {
            this.InitializeComponent();
            _gyrometer = Gyrometer.GetDefault(); // Get the default gyrometer sensor object

            if (_gyrometer != null)
            {
                // Establish the report interval for all scenarios
                uint minReportInterval = _gyrometer.MinimumReportInterval;
                uint reportInterval = minReportInterval > 16 ? minReportInterval : 16;
                _gyrometer.ReportInterval = reportInterval;

                // Assign an event handler for the gyrometer reading-changed event
                _gyrometer.ReadingChanged += new TypedEventHandler<Gyrometer, GyrometerReadingChangedEventArgs>(ReadingChanged);
            }
        }
    }
}

You'll need to replace the namespace in the previous snippet with the name you gave your project. For example, if
you created a project named GyrometerCS, you'd replace namespace App1 with namespace GyrometerCS.
Open the file MainPage.xaml and replace the original contents with the following XML.
<Page
x:Class="App1.MainPage"
xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
xmlns:local="using:App1"
xmlns:d="http://schemas.microsoft.com/expression/blend/2008"
xmlns:mc="http://schemas.openxmlformats.org/markup-compatibility/2006"
mc:Ignorable="d">

<Grid x:Name="LayoutRoot" Background="#FF0C0C0C">


<TextBlock HorizontalAlignment="Left" Height="23" Margin="8,8,0,0" TextWrapping="Wrap" Text="X-Axis:" VerticalAlignment="Top"
Width="46" Foreground="#FFFDFDFD"/>
<TextBlock x:Name="txtXAxis" HorizontalAlignment="Left" Height="23" Margin="67,8,0,0" TextWrapping="Wrap"
VerticalAlignment="Top" Width="88" Foreground="#FFFDFAFA"/>
<TextBlock HorizontalAlignment="Left" Height="20" Margin="8,52,0,0" TextWrapping="Wrap" Text="Y Axis:" VerticalAlignment="Top"
Width="46" Foreground="White"/>
<TextBlock x:Name="txtYAxis" HorizontalAlignment="Left" Height="24" Margin="54,48,0,0" TextWrapping="Wrap"
VerticalAlignment="Top" Width="80" Foreground="#FFFBFBFB"/>
<TextBlock HorizontalAlignment="Left" Height="21" Margin="8,93,0,0" TextWrapping="Wrap" Text="Z Axis:" VerticalAlignment="Top"
Width="46" Foreground="#FFFEFBFB"/>
<TextBlock x:Name="txtZAxis" HorizontalAlignment="Left" Height="21" Margin="54,93,0,0" TextWrapping="Wrap"
VerticalAlignment="Top" Width="63" Foreground="#FFF8F3F3"/>

</Grid>
</Page>

You'll need to replace the first part of the class name in the previous snippet with the namespace of your app. For
example, if you created a project named GyrometerCS, you'd replace x:Class="App1.MainPage" with
x:Class="GyrometerCS.MainPage" . You should also replace xmlns:local="using:App1" with xmlns:local="using:GyrometerCS" .

Press F5 or select Debug > Start Debugging to build, deploy, and run the app.
Once the app is running, you can change the gyrometer values by moving the device or using the emulator tools.
Stop the app by returning to Visual Studio and pressing Shift+F5, or select Debug > Stop Debugging.
Explanation
The previous example demonstrates how little code you'll need to write in order to integrate gyrometer input in
your app.
The app establishes a connection with the default gyrometer in the MainPage method.

_gyrometer = Gyrometer.GetDefault(); // Get the default gyrometer sensor object

The app establishes the report interval within the MainPage method. This code retrieves the minimum interval
supported by the device and compares it to a requested interval of 16 milliseconds (which approximates a 60-Hz
refresh rate). If the minimum supported interval is greater than the requested interval, the code sets the value to the
minimum. Otherwise, it sets the value to the requested interval.

uint minReportInterval = _gyrometer.MinimumReportInterval;
uint reportInterval = minReportInterval > 16 ? minReportInterval : 16;
_gyrometer.ReportInterval = reportInterval;

The new gyrometer data is captured in the ReadingChanged method. Each time the sensor driver receives new
data from the sensor, it passes the values to your app using this event handler. The app registers this event handler
on the following line.
_gyrometer.ReadingChanged += new TypedEventHandler<Gyrometer,
GyrometerReadingChangedEventArgs>(ReadingChanged);

These new values are written to the TextBlocks found in the project's XAML.

<TextBlock HorizontalAlignment="Left" Height="23" Margin="8,8,0,0" TextWrapping="Wrap" Text="X-Axis:" VerticalAlignment="Top"


Width="46" Foreground="#FFFDFDFD"/>
<TextBlock x:Name="txtXAxis" HorizontalAlignment="Left" Height="23" Margin="67,8,0,0" TextWrapping="Wrap"
VerticalAlignment="Top" Width="88" Foreground="#FFFDFAFA"/>
<TextBlock HorizontalAlignment="Left" Height="20" Margin="8,52,0,0" TextWrapping="Wrap" Text="Y Axis:" VerticalAlignment="Top"
Width="46" Foreground="White"/>
<TextBlock x:Name="txtYAxis" HorizontalAlignment="Left" Height="24" Margin="54,48,0,0" TextWrapping="Wrap"
VerticalAlignment="Top" Width="80" Foreground="#FFFBFBFB"/>
<TextBlock HorizontalAlignment="Left" Height="21" Margin="8,93,0,0" TextWrapping="Wrap" Text="Z Axis:" VerticalAlignment="Top"
Width="46" Foreground="#FFFEFBFB"/>
<TextBlock x:Name="txtZAxis" HorizontalAlignment="Left" Height="21" Margin="54,93,0,0" TextWrapping="Wrap"
VerticalAlignment="Top" Width="63" Foreground="#FFF8F3F3"/>

Related topics
Gyrometer Sample
Use the inclinometer

[ Updated for UWP apps on Windows 10. For Windows 8.x articles, see the archive ]
Important APIs
Windows.Devices.Sensors
Inclinometer
Learn how to use the inclinometer to determine pitch, roll, and yaw.
Some 3-D games require an inclinometer as an input device. One common example is the flight simulator, which
maps the three axes of the inclinometer (X, Y, and Z) to the elevator, aileron, and rudder inputs of the aircraft.

Prerequisites
You should be familiar with Extensible Application Markup Language (XAML), Microsoft Visual C#, and events.
The device or emulator that you're using must support an inclinometer.

Create a simple inclinometer app


This section is divided into two subsections. The first subsection will take you through the steps necessary to create
a simple inclinometer application from scratch. The following subsection explains the app you have just created.
Instructions
Create a new project, choosing a Blank App (Universal Windows) from the Visual C# project templates.
Open your project's MainPage.xaml.cs file and replace the existing code with the following.
using System;
using System.Collections.Generic;
using System.IO;
using System.Linq;
using Windows.Foundation;
using Windows.Foundation.Collections;
using Windows.UI.Xaml;
using Windows.UI.Xaml.Controls;
using Windows.UI.Xaml.Controls.Primitives;
using Windows.UI.Xaml.Data;
using Windows.UI.Xaml.Input;
using Windows.UI.Xaml.Media;
using Windows.UI.Xaml.Navigation;

using Windows.UI.Core;
using Windows.Devices.Sensors;

namespace App1
{
    /// <summary>
    /// An empty page that can be used on its own or navigated to within a Frame.
    /// </summary>
    public sealed partial class MainPage : Page
    {
        private Inclinometer _inclinometer;

        // This event handler writes the current inclinometer reading to
        // the three text blocks on the app's main page.
        private async void ReadingChanged(object sender, InclinometerReadingChangedEventArgs e)
        {
            await Dispatcher.RunAsync(CoreDispatcherPriority.Normal, () =>
            {
                InclinometerReading reading = e.Reading;
                txtPitch.Text = String.Format("{0,5:0.00}", reading.PitchDegrees);
                txtRoll.Text = String.Format("{0,5:0.00}", reading.RollDegrees);
                txtYaw.Text = String.Format("{0,5:0.00}", reading.YawDegrees);
            });
        }

        public MainPage()
        {
            this.InitializeComponent();
            _inclinometer = Inclinometer.GetDefault();

            if (_inclinometer != null)
            {
                // Establish the report interval for all scenarios
                uint minReportInterval = _inclinometer.MinimumReportInterval;
                uint reportInterval = minReportInterval > 16 ? minReportInterval : 16;
                _inclinometer.ReportInterval = reportInterval;

                // Establish the event handler
                _inclinometer.ReadingChanged += new TypedEventHandler<Inclinometer, InclinometerReadingChangedEventArgs>(ReadingChanged);
            }
        }
    }
}

You'll need to replace the namespace in the previous snippet with the name you gave your project. For example, if
you created a project named InclinometerCS, you'd replace namespace App1 with namespace InclinometerCS.
Open the file MainPage.xaml and replace the original contents with the following XML.
<Page
x:Class="App1.MainPage"
xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
xmlns:local="using:App1"
xmlns:d="http://schemas.microsoft.com/expression/blend/2008"
xmlns:mc="http://schemas.openxmlformats.org/markup-compatibility/2006"
mc:Ignorable="d">

<Grid x:Name="LayoutRoot" Background="#FF0C0C0C">


<TextBlock HorizontalAlignment="Left" Height="21" Margin="0,8,0,0" TextWrapping="Wrap" Text="Pitch: " VerticalAlignment="Top"
Width="45" Foreground="#FFF9F4F4"/>
<TextBlock x:Name="txtPitch" HorizontalAlignment="Left" Height="21" Margin="59,8,0,0" TextWrapping="Wrap" Text="TextBlock"
VerticalAlignment="Top" Width="71" Foreground="#FFFDF9F9"/>
<TextBlock HorizontalAlignment="Left" Height="23" Margin="0,29,0,0" TextWrapping="Wrap" Text="Roll:" VerticalAlignment="Top"
Width="55" Foreground="#FFF7F1F1"/>
<TextBlock x:Name="txtRoll" HorizontalAlignment="Left" Height="23" Margin="59,29,0,0" TextWrapping="Wrap" Text="TextBlock"
VerticalAlignment="Top" Width="50" Foreground="#FFFCF9F9"/>
<TextBlock HorizontalAlignment="Left" Height="19" Margin="0,56,0,0" TextWrapping="Wrap" Text="Yaw:" VerticalAlignment="Top"
Width="55" Foreground="#FFF7F3F3"/>
<TextBlock x:Name="txtYaw" HorizontalAlignment="Left" Height="19" Margin="55,56,0,0" TextWrapping="Wrap" Text="TextBlock"
VerticalAlignment="Top" Width="54" Foreground="#FFF6F2F2"/>

</Grid>
</Page>

You'll need to replace the first part of the class name in the previous snippet with the namespace of your app. For
example, if you created a project named InclinometerCS, you'd replace x:Class="App1.MainPage" with
x:Class="InclinometerCS.MainPage" . You should also replace xmlns:local="using:App1" with xmlns:local="using:InclinometerCS" .

Press F5 or select Debug > Start Debugging to build, deploy, and run the app.
Once the app is running, you can change the inclinometer values by moving the device or using the emulator tools.
Stop the app by returning to Visual Studio and pressing Shift+F5, or select Debug > Stop Debugging.
Explanation
The previous example demonstrates how little code you'll need to write in order to integrate inclinometer input in
your app.
The app establishes a connection with the default inclinometer in the MainPage method.

_inclinometer = Inclinometer.GetDefault();

The app establishes the report interval within the MainPage method. This code retrieves the minimum interval
supported by the device and compares it to a requested interval of 16 milliseconds (which approximates a 60-Hz
refresh rate). If the minimum supported interval is greater than the requested interval, the code sets the value to the
minimum. Otherwise, it sets the value to the requested interval.

uint minReportInterval = _inclinometer.MinimumReportInterval;
uint reportInterval = minReportInterval > 16 ? minReportInterval : 16;
_inclinometer.ReportInterval = reportInterval;

The new inclinometer data is captured in the ReadingChanged method. Each time the sensor driver receives new
data from the sensor, it passes the values to your app using this event handler. The app registers this event handler
on the following line.
_inclinometer.ReadingChanged += new TypedEventHandler<Inclinometer,
InclinometerReadingChangedEventArgs>(ReadingChanged);

These new values are written to the TextBlocks found in the project's XAML.

<TextBlock HorizontalAlignment="Left" Height="21" Margin="0,8,0,0" TextWrapping="Wrap" Text="Pitch: " VerticalAlignment="Top"


Width="45" Foreground="#FFF9F4F4"/>
<TextBlock x:Name="txtPitch" HorizontalAlignment="Left" Height="21" Margin="59,8,0,0" TextWrapping="Wrap" Text="TextBlock"
VerticalAlignment="Top" Width="71" Foreground="#FFFDF9F9"/>
<TextBlock HorizontalAlignment="Left" Height="23" Margin="0,29,0,0" TextWrapping="Wrap" Text="Roll:" VerticalAlignment="Top"
Width="55" Foreground="#FFF7F1F1"/>
<TextBlock x:Name="txtRoll" HorizontalAlignment="Left" Height="23" Margin="59,29,0,0" TextWrapping="Wrap" Text="TextBlock"
VerticalAlignment="Top" Width="50" Foreground="#FFFCF9F9"/>
<TextBlock HorizontalAlignment="Left" Height="19" Margin="0,56,0,0" TextWrapping="Wrap" Text="Yaw:" VerticalAlignment="Top"
Width="55" Foreground="#FFF7F3F3"/>
<TextBlock x:Name="txtYaw" HorizontalAlignment="Left" Height="19" Margin="55,56,0,0" TextWrapping="Wrap" Text="TextBlock"
VerticalAlignment="Top" Width="54" Foreground="#FFF6F2F2"/>

Related topics
Inclinometer Sample
Use the light sensor

[ Updated for UWP apps on Windows 10. For Windows 8.x articles, see the archive ]
Important APIs
Windows.Devices.Sensors
LightSensor
Learn how to use the ambient light sensor to detect changes in lighting.
An ambient light sensor is one of several types of environmental sensors that let apps respond to changes
in the user's environment.

Prerequisites
You should be familiar with Extensible Application Markup Language (XAML), Microsoft Visual C#, and events.
The device or emulator that you're using must support an ambient light sensor.

Create a simple light-sensor app


This section is divided into two subsections. The first subsection will take you through the steps necessary to create
a simple light-sensor application from scratch. The following subsection explains the app you have just created.
Instructions
Create a new project, choosing a Blank App (Universal Windows) from the Visual C# project templates.
Open your project's BlankPage.xaml.cs file and replace the existing code with the following.
using System;
using System.Collections.Generic;
using System.IO;
using System.Linq;
using Windows.Foundation;
using Windows.Foundation.Collections;
using Windows.UI.Xaml;
using Windows.UI.Xaml.Controls;
using Windows.UI.Xaml.Controls.Primitives;
using Windows.UI.Xaml.Data;
using Windows.UI.Xaml.Input;
using Windows.UI.Xaml.Media;
using Windows.UI.Xaml.Navigation;

using Windows.UI.Core; // Required to access the core dispatcher object
using Windows.Devices.Sensors; // Required to access the sensor platform and the ALS

// The Blank Page item template is documented at http://go.microsoft.com/fwlink/p/?linkid=234238

namespace App1
{
    /// <summary>
    /// An empty page that can be used on its own or navigated to within a Frame.
    /// </summary>
    public sealed partial class BlankPage : Page
    {
        private LightSensor _lightsensor; // Our app's light sensor object

        // This event handler writes the current light-sensor reading to
        // the text block named "txtLuxValue" on the app's main page.
        private async void ReadingChanged(object sender, LightSensorReadingChangedEventArgs e)
        {
            await Dispatcher.RunAsync(CoreDispatcherPriority.Normal, () =>
            {
                LightSensorReading reading = e.Reading;
                txtLuxValue.Text = String.Format("{0,5:0.00}", reading.IlluminanceInLux);
            });
        }

        public BlankPage()
        {
            InitializeComponent();
            _lightsensor = LightSensor.GetDefault(); // Get the default light sensor object

            // Assign an event handler for the ALS reading-changed event
            if (_lightsensor != null)
            {
                // Establish the report interval for all scenarios
                uint minReportInterval = _lightsensor.MinimumReportInterval;
                uint reportInterval = minReportInterval > 16 ? minReportInterval : 16;
                _lightsensor.ReportInterval = reportInterval;

                // Establish the event handler
                _lightsensor.ReadingChanged += new TypedEventHandler<LightSensor, LightSensorReadingChangedEventArgs>(ReadingChanged);
            }
        }
    }
}

You'll need to replace the namespace in the previous snippet with the name you gave your project. For example, if
you created a project named LightingCS, you'd replace namespace App1 with namespace LightingCS.
Open the file MainPage.xaml and replace the original contents with the following XML.

<Page
x:Class="App1.BlankPage"
xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
xmlns:local="using:App1"
xmlns:d="http://schemas.microsoft.com/expression/blend/2008"
xmlns:mc="http://schemas.openxmlformats.org/markup-compatibility/2006"
mc:Ignorable="d">

<Grid x:Name="LayoutRoot" Background="Black">


<TextBlock HorizontalAlignment="Left" Height="44" Margin="52,38,0,0" TextWrapping="Wrap" Text="LUX Reading"
VerticalAlignment="Top" Width="150"/>
<TextBlock x:Name="txtLuxValue" HorizontalAlignment="Left" Height="44" Margin="224,38,0,0" TextWrapping="Wrap"
Text="TextBlock" VerticalAlignment="Top" Width="217"/>

</Grid>

</Page>

You'll need to replace the first part of the class name in the previous snippet with the namespace of your app. For
example, if you created a project named LightingCS, you'd replace x:Class="App1.BlankPage" with
x:Class="LightingCS.BlankPage". You should also replace xmlns:local="using:App1" with xmlns:local="using:LightingCS".

Press F5 or select Debug > Start Debugging to build, deploy, and run the app.
Once the app is running, you can change the light sensor values by altering the light available to the sensor or
using the emulator tools.
Stop the app by returning to Visual Studio and pressing Shift+F5, or select Debug > Stop Debugging.
Explanation
The previous example demonstrates how little code you'll need to write in order to integrate light-sensor input in
your app.
The app establishes a connection with the default sensor in the BlankPage method.

_lightsensor = LightSensor.GetDefault(); // Get the default light sensor object

The app establishes the report interval within the BlankPage method. This code retrieves the minimum interval
supported by the device and compares it to a requested interval of 16 milliseconds (which approximates a 60-Hz
refresh rate). If the minimum supported interval is greater than the requested interval, the code sets the value to the
minimum. Otherwise, it sets the value to the requested interval.

uint minReportInterval = _lightsensor.MinimumReportInterval;
uint reportInterval = minReportInterval > 16 ? minReportInterval : 16;
_lightsensor.ReportInterval = reportInterval;

The new light-sensor data is captured in the ReadingChanged method. Each time the sensor driver receives new
data from the sensor, it passes the value to your app using this event handler. The app registers this event handler
on the following line.

_lightsensor.ReadingChanged += new TypedEventHandler<LightSensor, LightSensorReadingChangedEventArgs>(ReadingChanged);
These new values are written to a TextBlock found in the project's XAML.

<TextBlock HorizontalAlignment="Left" Height="44" Margin="52,38,0,0" TextWrapping="Wrap" Text="LUX Reading" VerticalAlignment="Top"


Width="150"/>
<TextBlock x:Name="txtLuxValue" HorizontalAlignment="Left" Height="44" Margin="224,38,0,0" TextWrapping="Wrap" Text="TextBlock"
VerticalAlignment="Top" Width="217"/>

Related topics
LightSensor Sample
Use the orientation sensor

[ Updated for UWP apps on Windows 10. For Windows 8.x articles, see the archive ]
Important APIs
Windows.Devices.Sensors
OrientationSensor
SimpleOrientation
Learn how to use the orientation sensors to determine the device orientation.
There are two different types of orientation sensor APIs included in the Windows.Devices.Sensors namespace:
OrientationSensor and SimpleOrientation. While both of these sensors are orientation sensors, that term is
overloaded, and they are used for very different purposes. However, since both are orientation sensors, they are
both covered in this article.
The OrientationSensor API is used by 3-D apps to obtain a quaternion and a rotation matrix. A quaternion can
be most easily understood as a rotation of a point [x,y,z] about an arbitrary axis (contrasted with a rotation matrix,
which represents rotations around three axes). The mathematics behind quaternions is fairly exotic in that it
involves the geometric properties of complex numbers and mathematical properties of imaginary numbers, but
working with them is simple, and frameworks like DirectX support them. A complex 3-D app can use the
Orientation sensor to adjust the user's perspective. This sensor combines input from the accelerometer, gyrometer,
and compass.
The SimpleOrientation API is used to determine the current device orientation in terms of definitions like portrait
up, portrait down, landscape left, and landscape right. It can also detect if a device is face-up or face-down. Rather
than returning properties like "portrait up" or "landscape left", this sensor returns a rotation value: "Not rotated",
"Rotated90DegreesCounterclockwise", and so on. The following table maps common orientation properties to the
corresponding sensor reading.

| Orientation | Corresponding sensor reading |
|-------------|------------------------------|
| Portrait Up | NotRotated |
| Landscape Left | Rotated90DegreesCounterclockwise |
| Portrait Down | Rotated180DegreesCounterclockwise |
| Landscape Right | Rotated270DegreesCounterclockwise |

Prerequisites
You should be familiar with Extensible Application Markup Language (XAML), Microsoft Visual C#, and events.
The device or emulator that you're using must support an orientation sensor.

Create an OrientationSensor app


This section is divided into two subsections. The first subsection will take you through the steps necessary to create
an orientation application from scratch. The following subsection explains the app you have just created.
Instructions
Create a new project, choosing a Blank App (Universal Windows) from the Visual C# project templates.
Open your project's MainPage.xaml.cs file and replace the existing code with the following.

using System;
using System.Collections.Generic;
using System.IO;
using System.Linq;
using Windows.Foundation;
using Windows.Foundation.Collections;
using Windows.UI.Xaml;
using Windows.UI.Xaml.Controls;
using Windows.UI.Xaml.Controls.Primitives;
using Windows.UI.Xaml.Data;
using Windows.UI.Xaml.Input;
using Windows.UI.Xaml.Media;
using Windows.UI.Xaml.Navigation;

using Windows.UI.Core;
using Windows.Devices.Sensors;

// The Blank Page item template is documented at http://go.microsoft.com/fwlink/p/?linkid=234238

namespace App1
{
    /// <summary>
    /// An empty page that can be used on its own or navigated to within a Frame.
    /// </summary>
    public sealed partial class MainPage : Page
    {
        private OrientationSensor _sensor;

        // This event handler writes the quaternion and rotation matrix values
        // of the current reading to the text blocks on the app's main page.
        private async void ReadingChanged(object sender, OrientationSensorReadingChangedEventArgs e)
        {
            await Dispatcher.RunAsync(CoreDispatcherPriority.Normal, () =>
            {
                OrientationSensorReading reading = e.Reading;

                // Quaternion values
                txtQuaternionX.Text = String.Format("{0,8:0.00000}", reading.Quaternion.X);
                txtQuaternionY.Text = String.Format("{0,8:0.00000}", reading.Quaternion.Y);
                txtQuaternionZ.Text = String.Format("{0,8:0.00000}", reading.Quaternion.Z);
                txtQuaternionW.Text = String.Format("{0,8:0.00000}", reading.Quaternion.W);

                // Rotation matrix values
                txtM11.Text = String.Format("{0,8:0.00000}", reading.RotationMatrix.M11);
                txtM12.Text = String.Format("{0,8:0.00000}", reading.RotationMatrix.M12);
                txtM13.Text = String.Format("{0,8:0.00000}", reading.RotationMatrix.M13);
                txtM21.Text = String.Format("{0,8:0.00000}", reading.RotationMatrix.M21);
                txtM22.Text = String.Format("{0,8:0.00000}", reading.RotationMatrix.M22);
                txtM23.Text = String.Format("{0,8:0.00000}", reading.RotationMatrix.M23);
                txtM31.Text = String.Format("{0,8:0.00000}", reading.RotationMatrix.M31);
                txtM32.Text = String.Format("{0,8:0.00000}", reading.RotationMatrix.M32);
                txtM33.Text = String.Format("{0,8:0.00000}", reading.RotationMatrix.M33);
            });
        }

        public MainPage()
        {
            this.InitializeComponent();
            _sensor = OrientationSensor.GetDefault();

            // Guard against devices without an orientation sensor
            if (_sensor != null)
            {
                // Establish the report interval for all scenarios
                uint minReportInterval = _sensor.MinimumReportInterval;
                uint reportInterval = minReportInterval > 16 ? minReportInterval : 16;
                _sensor.ReportInterval = reportInterval;

                // Establish the event handler
                _sensor.ReadingChanged += new TypedEventHandler<OrientationSensor, OrientationSensorReadingChangedEventArgs>(ReadingChanged);
            }
        }
    }
}

You'll need to replace the namespace in the previous snippet with the name you gave your project. For example, if
you created a project named OrientationSensorCS, you'd replace namespace App1 with namespace OrientationSensorCS.
Open the file MainPage.xaml and replace the original contents with the following XML.
<Page
    x:Class="App1.MainPage"
    xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
    xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
    xmlns:local="using:App1"
    xmlns:d="http://schemas.microsoft.com/expression/blend/2008"
    xmlns:mc="http://schemas.openxmlformats.org/markup-compatibility/2006"
    mc:Ignorable="d">

    <Grid x:Name="LayoutRoot" Background="Black">
        <TextBlock HorizontalAlignment="Left" Height="28" Margin="4,4,0,0" TextWrapping="Wrap" Text="M11:" VerticalAlignment="Top" Width="46"/>
        <TextBlock HorizontalAlignment="Left" Height="23" Margin="4,36,0,0" TextWrapping="Wrap" Text="M12:" VerticalAlignment="Top" Width="39"/>
        <TextBlock HorizontalAlignment="Left" Height="24" Margin="4,72,0,0" TextWrapping="Wrap" Text="M13:" VerticalAlignment="Top" Width="39"/>
        <TextBlock HorizontalAlignment="Left" Height="31" Margin="4,118,0,0" TextWrapping="Wrap" Text="M21:" VerticalAlignment="Top" Width="39"/>
        <TextBlock HorizontalAlignment="Left" Height="24" Margin="4,160,0,0" TextWrapping="Wrap" Text="M22:" VerticalAlignment="Top" Width="39"/>
        <TextBlock HorizontalAlignment="Left" Height="24" Margin="8,201,0,0" TextWrapping="Wrap" Text="M23:" VerticalAlignment="Top" Width="35"/>
        <TextBlock HorizontalAlignment="Left" Height="23" Margin="4,234,0,0" TextWrapping="Wrap" Text="M31:" VerticalAlignment="Top" Width="39"/>
        <TextBlock HorizontalAlignment="Left" Height="28" Margin="4,274,0,0" TextWrapping="Wrap" Text="M32:" VerticalAlignment="Top" Width="46"/>
        <TextBlock HorizontalAlignment="Left" Height="21" Margin="4,322,0,0" TextWrapping="Wrap" Text="M33:" VerticalAlignment="Top" Width="39"/>
        <TextBlock x:Name="txtM11" HorizontalAlignment="Left" Height="19" Margin="43,4,0,0" TextWrapping="Wrap" Text="TextBlock" VerticalAlignment="Top" Width="53"/>
        <TextBlock x:Name="txtM12" HorizontalAlignment="Left" Height="23" Margin="43,36,0,0" TextWrapping="Wrap" Text="TextBlock" VerticalAlignment="Top" Width="53"/>
        <TextBlock x:Name="txtM13" HorizontalAlignment="Left" Height="15" Margin="43,72,0,0" TextWrapping="Wrap" Text="TextBlock" VerticalAlignment="Top" Width="53"/>
        <TextBlock x:Name="txtM21" HorizontalAlignment="Left" Height="20" Margin="43,114,0,0" TextWrapping="Wrap" Text="TextBlock" VerticalAlignment="Top" Width="53"/>
        <TextBlock x:Name="txtM22" HorizontalAlignment="Left" Height="19" Margin="43,156,0,0" TextWrapping="Wrap" Text="TextBlock" VerticalAlignment="Top" Width="53"/>
        <TextBlock x:Name="txtM23" HorizontalAlignment="Left" Height="16" Margin="43,197,0,0" TextWrapping="Wrap" Text="TextBlock" VerticalAlignment="Top" Width="53"/>
        <TextBlock x:Name="txtM31" HorizontalAlignment="Left" Height="17" Margin="43,230,0,0" TextWrapping="Wrap" Text="TextBlock" VerticalAlignment="Top" Width="53"/>
        <TextBlock x:Name="txtM32" HorizontalAlignment="Left" Height="19" Margin="43,270,0,0" TextWrapping="Wrap" Text="TextBlock" VerticalAlignment="Top" Width="53"/>
        <TextBlock x:Name="txtM33" HorizontalAlignment="Left" Height="21" Margin="43,322,0,0" TextWrapping="Wrap" Text="TextBlock" VerticalAlignment="Top" Width="53"/>
        <TextBlock HorizontalAlignment="Left" Height="15" Margin="194,8,0,0" TextWrapping="Wrap" Text="Quaternion X:" VerticalAlignment="Top" Width="81"/>
        <TextBlock HorizontalAlignment="Left" Height="23" Margin="194,36,0,0" TextWrapping="Wrap" Text="Quaternion Y:" VerticalAlignment="Top" Width="81"/>
        <TextBlock HorizontalAlignment="Left" Height="15" Margin="194,72,0,0" TextWrapping="Wrap" Text="Quaternion Z:" VerticalAlignment="Top" Width="81"/>
        <TextBlock x:Name="txtQuaternionX" HorizontalAlignment="Left" Height="15" Margin="279,8,0,0" TextWrapping="Wrap" Text="TextBlock" VerticalAlignment="Top" Width="104"/>
        <TextBlock x:Name="txtQuaternionY" HorizontalAlignment="Left" Height="12" Margin="275,36,0,0" TextWrapping="Wrap" Text="TextBlock" VerticalAlignment="Top" Width="108"/>
        <TextBlock x:Name="txtQuaternionZ" HorizontalAlignment="Left" Height="19" Margin="275,68,0,0" TextWrapping="Wrap" Text="TextBlock" VerticalAlignment="Top" Width="89"/>
        <TextBlock HorizontalAlignment="Left" Height="21" Margin="194,96,0,0" TextWrapping="Wrap" Text="Quaternion W:" VerticalAlignment="Top" Width="81"/>
        <TextBlock x:Name="txtQuaternionW" HorizontalAlignment="Left" Height="12" Margin="279,96,0,0" TextWrapping="Wrap" Text="TextBlock" VerticalAlignment="Top" Width="72"/>
    </Grid>
</Page>
You'll need to replace the first part of the class name in the previous snippet with the namespace of your app. For
example, if you created a project named OrientationSensorCS, you'd replace x:Class="App1.MainPage" with
x:Class="OrientationSensorCS.MainPage". You should also replace xmlns:local="using:App1" with
xmlns:local="using:OrientationSensorCS".

Press F5 or select Debug > Start Debugging to build, deploy, and run the app.
Once the app is running, you can change the orientation by moving the device or using the emulator tools.
Stop the app by returning to Visual Studio and pressing Shift+F5, or by selecting Debug > Stop Debugging.
Explanation
The previous example demonstrates how little code you'll need to write in order to integrate orientation-sensor
input in your app.
The app establishes a connection with the default orientation sensor in the MainPage method.

_sensor = OrientationSensor.GetDefault();

The app establishes the report interval within the MainPage method. This code retrieves the minimum interval
supported by the device and compares it to a requested interval of 16 milliseconds (which approximates a 60-Hz
refresh rate). If the minimum supported interval is greater than the requested interval, the code sets the value to the
minimum. Otherwise, it sets the value to the requested interval.

uint minReportInterval = _sensor.MinimumReportInterval;


uint reportInterval = minReportInterval > 16 ? minReportInterval : 16;
_sensor.ReportInterval = reportInterval;

The new sensor data is captured in the ReadingChanged method. Each time the sensor driver receives new data
from the sensor, it passes the values to your app using this event handler. The app registers this event handler on
the following line.

_sensor.ReadingChanged += new TypedEventHandler<OrientationSensor,


OrientationSensorReadingChangedEventArgs>(ReadingChanged);

These new values are written to the TextBlocks found in the project's XAML.

Create a SimpleOrientation app


This section is divided into two subsections. The first walks you through the steps necessary to create
a basic simple-orientation app from scratch. The second explains the app you have just created.
Instructions
Create a new project, choosing a Blank App (Universal Windows) from the Visual C# project templates.
Open your project's MainPage.xaml.cs file and replace the existing code with the following.

using System;
using System.Collections.Generic;
using System.IO;
using System.Linq;
using Windows.Foundation;
using Windows.Foundation.Collections;
using Windows.UI.Xaml;
using Windows.UI.Xaml.Controls;
using Windows.UI.Xaml.Controls.Primitives;
using Windows.UI.Xaml.Data;
using Windows.UI.Xaml.Input;
using Windows.UI.Xaml.Media;
using Windows.UI.Xaml.Navigation;

using Windows.UI.Core;
using Windows.Devices.Sensors;

// The Blank Page item template is documented at http://go.microsoft.com/fwlink/p/?linkid=234238

namespace App1
{
    /// <summary>
    /// An empty page that can be used on its own or navigated to within a Frame.
    /// </summary>
    public sealed partial class MainPage : Page
    {
        // Sensor and dispatcher variables
        private SimpleOrientationSensor _simpleorientation;

        // This event handler writes the current sensor reading to
        // a text block on the app's main page.
        private async void OrientationChanged(object sender, SimpleOrientationSensorOrientationChangedEventArgs e)
        {
            await Dispatcher.RunAsync(CoreDispatcherPriority.Normal, () =>
            {
                SimpleOrientation orientation = e.Orientation;
                switch (orientation)
                {
                    case SimpleOrientation.NotRotated:
                        txtOrientation.Text = "Not Rotated";
                        break;
                    case SimpleOrientation.Rotated90DegreesCounterclockwise:
                        txtOrientation.Text = "Rotated 90 Degrees Counterclockwise";
                        break;
                    case SimpleOrientation.Rotated180DegreesCounterclockwise:
                        txtOrientation.Text = "Rotated 180 Degrees Counterclockwise";
                        break;
                    case SimpleOrientation.Rotated270DegreesCounterclockwise:
                        txtOrientation.Text = "Rotated 270 Degrees Counterclockwise";
                        break;
                    case SimpleOrientation.Faceup:
                        txtOrientation.Text = "Faceup";
                        break;
                    case SimpleOrientation.Facedown:
                        txtOrientation.Text = "Facedown";
                        break;
                    default:
                        txtOrientation.Text = "Unknown orientation";
                        break;
                }
            });
        }

        public MainPage()
        {
            this.InitializeComponent();
            _simpleorientation = SimpleOrientationSensor.GetDefault();

            // Assign an event handler for the sensor orientation-changed event
            if (_simpleorientation != null)
            {
                _simpleorientation.OrientationChanged += new TypedEventHandler<SimpleOrientationSensor,
                    SimpleOrientationSensorOrientationChangedEventArgs>(OrientationChanged);
            }
        }
    }
}

You'll need to replace the namespace in the previous snippet with the name you gave your project. For example, if
you created a project named SimpleOrientationCS, you'd replace namespace App1 with namespace SimpleOrientationCS.
Open the file MainPage.xaml and replace the original contents with the following XML.

<Page
    x:Class="App1.MainPage"
    xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
    xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
    xmlns:local="using:App1"
    xmlns:d="http://schemas.microsoft.com/expression/blend/2008"
    xmlns:mc="http://schemas.openxmlformats.org/markup-compatibility/2006"
    mc:Ignorable="d">

    <Grid x:Name="LayoutRoot" Background="#FF0C0C0C">
        <TextBlock HorizontalAlignment="Left" Height="24" Margin="8,8,0,0" TextWrapping="Wrap" Text="Current Orientation:" VerticalAlignment="Top" Width="101" Foreground="#FFF8F7F7"/>
        <TextBlock x:Name="txtOrientation" HorizontalAlignment="Left" Height="24" Margin="118,8,0,0" TextWrapping="Wrap" Text="TextBlock" VerticalAlignment="Top" Width="175" Foreground="#FFFEFAFA"/>
    </Grid>
</Page>

You'll need to replace the first part of the class name in the previous snippet with the namespace of your app. For
example, if you created a project named SimpleOrientationCS, you'd replace x:Class="App1.MainPage" with
x:Class="SimpleOrientationCS.MainPage". You should also replace xmlns:local="using:App1" with
xmlns:local="using:SimpleOrientationCS".

Press F5 or select Debug > Start Debugging to build, deploy, and run the app.
Once the app is running, you can change the orientation by moving the device or using the emulator tools.
Stop the app by returning to Visual Studio and pressing Shift+F5, or by selecting Debug > Stop Debugging.
Explanation
The previous example demonstrates how little code you'll need to write in order to integrate simple-orientation
sensor input in your app.
The app establishes a connection with the default sensor in the MainPage method.

_simpleorientation = SimpleOrientationSensor.GetDefault();

The new sensor data is captured in the OrientationChanged method. Each time the sensor driver receives new
data from the sensor, it passes the values to your app using this event handler. The app registers this event handler
on the following line.

_simpleorientation.OrientationChanged += new TypedEventHandler<SimpleOrientationSensor,


SimpleOrientationSensorOrientationChangedEventArgs>(OrientationChanged);

These new values are written to a TextBlock found in the project's XAML.
<TextBlock HorizontalAlignment="Left" Height="24" Margin="8,8,0,0" TextWrapping="Wrap" Text="Current Orientation:"
VerticalAlignment="Top" Width="101" Foreground="#FFF8F7F7"/>
<TextBlock x:Name="txtOrientation" HorizontalAlignment="Left" Height="24" Margin="118,8,0,0" TextWrapping="Wrap" Text="TextBlock"
VerticalAlignment="Top" Width="175" Foreground="#FFFEFAFA"/>

Related topics
OrientationSensor Sample
SimpleOrientation Sensor Sample
Bluetooth

[ Updated for UWP apps on Windows 10. For Windows 8.x articles, see the archive ]
This section contains articles on how to integrate Bluetooth into Universal Windows Platform (UWP) apps,
including how to use RFCOMM, GATT, and Low Energy (LE) Advertisements.

RFCOMM: This article provides an overview of the Bluetooth RFCOMM APIs in the Windows.Devices.Bluetooth.Rfcomm namespace, along with example code on how to send or receive a file.

GATT: This article provides an overview of the Bluetooth Generic Attribute Profile (GATT) APIs in the Windows.Devices.Bluetooth.GenericAttributeProfile namespace, along with sample code for three common GATT scenarios: retrieving Bluetooth data, controlling a Bluetooth LE thermometer device, and controlling the presentation of Bluetooth LE device data.

Low Energy (LE) Advertisements: This article demonstrates how to send and receive Bluetooth Low Energy advertisements using the APIs in the Windows.Devices.Bluetooth.Advertisement namespace.

Bluetooth developer FAQ: This article provides some answers to commonly asked Bluetooth developer questions.
Bluetooth RFCOMM

[ Updated for UWP apps on Windows 10. For Windows 8.x articles, see the archive ]
Important APIs
Windows.Devices.Bluetooth
Windows.Devices.Bluetooth.Rfcomm
This article provides an overview of Bluetooth RFCOMM in Universal Windows Platform (UWP) apps, along with
example code on how to send or receive a file.

Overview
The APIs in the Windows.Devices.Bluetooth.Rfcomm namespace build on existing patterns for
Windows.Devices, including enumeration and instantiation. Data reading and writing is designed to take
advantage of established data stream patterns and objects in Windows.Storage.Streams. Service Discovery
Protocol (SDP) attributes have a value and an expected type. However, some common devices have faulty
implementations of SDP attributes where the value is not of the expected type. Additionally, many usages of
RFCOMM do not require additional SDP attributes at all. For these reasons, this API offers access to the unparsed
SDP data, from which developers can obtain the information they need.
The RFCOMM APIs use the concept of service identifiers. Although a service identifier is simply a 128-bit GUID, it is
also commonly specified as either a 16- or 32-bit integer. The RFCOMM API offers a wrapper for service identifiers
that allows them to be specified and consumed as 128-bit GUIDs as well as 32-bit integers, but does not offer 16-bit
integers. This is not an issue for the API, because languages will automatically upsize to a 32-bit integer and the
identifier can still be correctly generated.
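For example, the wrapper lets you build the same identifier from either form. A short sketch (0x1105 is the
SIG-assigned short ID for OBEX Object Push):

using System;
using Windows.Devices.Bluetooth.Rfcomm;

// A 16-bit SIG-assigned value is passed as a 32-bit integer;
// the API expands it to the full 128-bit UUID.
RfcommServiceId id = RfcommServiceId.FromShortId(0x1105);
Guid uuid = id.Uuid;            // the expanded 128-bit form
string name = id.AsString();    // string form, e.g. for BindServiceNameAsync

// The same identifier can also be built directly from a GUID.
RfcommServiceId same = RfcommServiceId.FromUuid(uuid);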
Apps can perform multi-step device operations in a background task so that they can run to completion even if the
app is moved to the background and suspended. This allows for reliable device servicing such as changes to
persistent settings or firmware, and content synchronization, without requiring the user to sit and watch a progress
bar. Use the DeviceServicingTrigger for device servicing and the DeviceUseTrigger for content synchronization.
Note that these background tasks constrain the amount of time the app can run in the background, and are not
intended to allow indefinite operation or infinite synchronization.
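A rough sketch of the DeviceUseTrigger pattern, written inside an async method; the task name and entry point are
placeholders for your own background task, and deviceId is the DeviceInformation.Id of the device being serviced:

using Windows.ApplicationModel.Background;

// Register the background task that will perform the synchronization.
var trigger = new DeviceUseTrigger();
var builder = new BackgroundTaskBuilder
{
    Name = "DeviceSyncTask",                  // placeholder name
    TaskEntryPoint = "Tasks.DeviceSyncTask"   // placeholder entry point
};
builder.SetTrigger(trigger);
builder.Register();

// Ask the system to start the task for a specific device.
// The system may deny the request, so check the result.
DeviceTriggerResult result = await trigger.RequestAsync(deviceId);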
For a complete code sample that details RFCOMM operation, see the Bluetooth Rfcomm Chat Sample on GitHub.

Send a file as a client


When sending a file, the most basic scenario is to connect to a paired device based on a desired service. This
involves the following steps:
Use the RfcommDeviceService.GetDeviceSelector\* functions to help generate an AQS query that can be
used to enumerate paired device instances of the desired service.
Pick an enumerated device, create an RfcommDeviceService, and read the SDP attributes as needed (using
established data helpers to parse the attribute's data).
Create a socket and use the RfcommDeviceService.ConnectionHostName and
RfcommDeviceService.ConnectionServiceName properties to call StreamSocket.ConnectAsync to the
remote device service with the appropriate parameters.
Follow established data stream patterns to read chunks of data from the file and send it on the socket's
StreamSocket.OutputStream to the device.

Windows.Devices.Bluetooth.RfcommDeviceService _service;
Windows.Networking.Sockets.StreamSocket _socket;

async void Initialize()
{
    // Enumerate devices with the object push service
    var services =
        await Windows.Devices.Enumeration.DeviceInformation.FindAllAsync(
            RfcommDeviceService.GetDeviceSelector(
                RfcommServiceId.ObexObjectPush));

    if (services.Count > 0)
    {
        // Initialize the target Bluetooth BR device
        var service = await RfcommDeviceService.FromIdAsync(services[0].Id);

        // Check that the service meets this App's minimum requirement
        if (SupportsProtection(service) && await IsCompatibleVersion(service))
        {
            _service = service;

            // Create a socket and connect to the target
            _socket = new StreamSocket();
            await _socket.ConnectAsync(
                _service.ConnectionHostName,
                _service.ConnectionServiceName,
                SocketProtectionLevel
                    .BluetoothEncryptionAllowNullAuthentication);

            // The socket is connected. At this point the App can wait for
            // the user to take some action, e.g. click a button to send a
            // file to the device, which could invoke the Picker and then
            // send the picked file. The transfer itself would use the
            // Sockets API and not the Rfcomm API, and so is omitted here for
            // brevity.
        }
    }
}

// This App requires a connection that is encrypted but does not care about
// whether it's authenticated.
bool SupportsProtection(RfcommDeviceService service)
{
    switch (service.ProtectionLevel)
    {
        case SocketProtectionLevel.PlainSocket:
            if ((service.MaximumProtectionLevel == SocketProtectionLevel
                    .BluetoothEncryptionWithAuthentication)
                || (service.MaximumProtectionLevel == SocketProtectionLevel
                    .BluetoothEncryptionAllowNullAuthentication))
            {
                // The connection can be upgraded when opening the socket so the
                // App may offer UI here to notify the user that Windows may
                // prompt for a PIN exchange.
                return true;
            }
            else
            {
                // The connection cannot be upgraded so an App may offer UI here
                // to explain why a connection won't be made.
                return false;
            }
        case SocketProtectionLevel.BluetoothEncryptionWithAuthentication:
            return true;
        case SocketProtectionLevel.BluetoothEncryptionAllowNullAuthentication:
            return true;
    }
    return false;
}

// This App relies on CRC32 checking available in version 2.0 of the service.
const uint SERVICE_VERSION_ATTRIBUTE_ID = 0x0300;
const byte SERVICE_VERSION_ATTRIBUTE_TYPE = 0x0A;   // UINT32
const uint MINIMUM_SERVICE_VERSION = 200;

async Task<bool> IsCompatibleVersion(RfcommDeviceService service)
{
    var attributes = await service.GetSdpRawAttributesAsync(
        BluetoothCacheMode.Uncached);
    var attribute = attributes[SERVICE_VERSION_ATTRIBUTE_ID];
    var reader = DataReader.FromBuffer(attribute);

    // The first byte contains the attribute's type
    byte attributeType = reader.ReadByte();
    if (attributeType == SERVICE_VERSION_ATTRIBUTE_TYPE)
    {
        // The remainder is the data
        uint version = reader.ReadUInt32();
        return version >= MINIMUM_SERVICE_VERSION;
    }
    return false;
}

Windows::Devices::Bluetooth::RfcommDeviceService^ _service;
Windows::Networking::Sockets::StreamSocket^ _socket;

void Initialize()
{
    // Enumerate devices with the object push service
    create_task(
        Windows::Devices::Enumeration::DeviceInformation::FindAllAsync(
            RfcommDeviceService::GetDeviceSelector(
                RfcommServiceId::ObexObjectPush)))
    .then([](DeviceInformationCollection^ services)
    {
        if (services->Size > 0)
        {
            // Initialize the target Bluetooth BR device
            create_task(RfcommDeviceService::FromIdAsync(services->GetAt(0)->Id))
            .then([](RfcommDeviceService^ service)
            {
                // Check that the service meets this App's minimum requirement
                if (SupportsProtection(service) && IsCompatibleVersion(service))
                {
                    _service = service;

                    // Create a socket and connect to the target
                    _socket = ref new StreamSocket();
                    create_task(_socket->ConnectAsync(
                        _service->ConnectionHostName,
                        _service->ConnectionServiceName,
                        SocketProtectionLevel
                            ::BluetoothEncryptionAllowNullAuthentication))
                    .then([](void)
                    {
                        // The socket is connected. At this point the App can
                        // wait for the user to take some action, e.g. click
                        // a button to send a file to the device, which could
                        // invoke the Picker and then send the picked file.
                        // The transfer itself would use the Sockets API and
                        // not the Rfcomm API, and so is omitted here for
                        // brevity.
                    });
                }
            });
        }
    });
}

// This App requires a connection that is encrypted but does not care about
// whether it's authenticated.
bool SupportsProtection(RfcommDeviceService^ service)
{
    switch (service->ProtectionLevel)
    {
    case SocketProtectionLevel::PlainSocket:
        if ((service->MaximumProtectionLevel == SocketProtectionLevel
                ::BluetoothEncryptionWithAuthentication)
            || (service->MaximumProtectionLevel == SocketProtectionLevel
                ::BluetoothEncryptionAllowNullAuthentication))
        {
            // The connection can be upgraded when opening the socket so the
            // App may offer UI here to notify the user that Windows may
            // prompt for a PIN exchange.
            return true;
        }
        else
        {
            // The connection cannot be upgraded so an App may offer UI here
            // to explain why a connection won't be made.
            return false;
        }
    case SocketProtectionLevel::BluetoothEncryptionWithAuthentication:
        return true;
    case SocketProtectionLevel::BluetoothEncryptionAllowNullAuthentication:
        return true;
    }
    return false;
}

// This App relies on CRC32 checking available in version 2.0 of the service.
const unsigned int SERVICE_VERSION_ATTRIBUTE_ID = 0x0300;
const unsigned char SERVICE_VERSION_ATTRIBUTE_TYPE = 0x0A;   // UINT32
const unsigned int MINIMUM_SERVICE_VERSION = 200;
bool IsCompatibleVersion(RfcommDeviceService^ service)
{
    // Blocking call for brevity; do not call this from the UI thread.
    auto attributes = create_task(service->GetSdpRawAttributesAsync(
        BluetoothCacheMode::Uncached)).get();
    auto attribute = attributes->Lookup(SERVICE_VERSION_ATTRIBUTE_ID);
    auto reader = DataReader::FromBuffer(attribute);

    // The first byte contains the attribute's type
    unsigned char attributeType = reader->ReadByte();
    if (attributeType == SERVICE_VERSION_ATTRIBUTE_TYPE)
    {
        // The remainder is the data
        unsigned int version = reader->ReadUInt32();
        return version >= MINIMUM_SERVICE_VERSION;
    }
    return false;
}

Receive a file as a server

Another common RFCOMM App scenario is to host a service on the PC and expose it for other devices.
Create an RfcommServiceProvider to advertise the desired service.
Set the SDP attributes as needed (using established data helpers to generate the attribute's data) and start
advertising the SDP records for other devices to retrieve.
To connect to a client device, create a socket listener to start listening for incoming connection requests.
When a connection is received, store the connected socket for later processing.
Follow established data stream patterns to read chunks of data from the socket's InputStream and save it to a
file.
In order to persist an RFCOMM service in the background, use the RfcommConnectionTrigger. The background
task is triggered on connection to the service. The developer receives a handle to the socket in the background task.
The background task is long running and persists for as long as the socket is in use.
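A rough sketch of registering such a task; the task name and entry point are placeholders, and the background task
itself receives the connected socket through the trigger details:

using Windows.ApplicationModel.Background;
using Windows.Devices.Bluetooth.Rfcomm;

// Advertise the service from a background task instead of the foreground app.
var trigger = new RfcommConnectionTrigger();
trigger.InboundConnection.LocalServiceId = RfcommServiceId.ObexObjectPush;

var builder = new BackgroundTaskBuilder
{
    Name = "RfcommServerTask",                  // placeholder name
    TaskEntryPoint = "Tasks.RfcommServerTask"   // placeholder entry point
};
builder.SetTrigger(trigger);
builder.Register();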

Windows.Devices.Bluetooth.RfcommServiceProvider _provider;
Windows.Networking.Sockets.StreamSocket _socket;

async void Initialize()
{
    // Initialize the provider for the hosted RFCOMM service
    _provider = await Windows.Devices.Bluetooth.
        RfcommServiceProvider.CreateAsync(RfcommServiceId.ObexObjectPush);

    // Create a listener for this service and start listening
    StreamSocketListener listener = new StreamSocketListener();
    listener.ConnectionReceived += OnConnectionReceived;
    await listener.BindServiceNameAsync(
        _provider.ServiceId.AsString(),
        SocketProtectionLevel
            .BluetoothEncryptionAllowNullAuthentication);

    // Set the SDP attributes and start advertising
    InitializeServiceSdpAttributes(_provider);
    _provider.StartAdvertising();
}

const uint SERVICE_VERSION_ATTRIBUTE_ID = 0x0300;
const byte SERVICE_VERSION_ATTRIBUTE_TYPE = 0x0A;   // UINT32
const uint SERVICE_VERSION = 200;
void InitializeServiceSdpAttributes(RfcommServiceProvider provider)
{
    var writer = new Windows.Storage.Streams.DataWriter();

    // First write the attribute type
    writer.WriteByte(SERVICE_VERSION_ATTRIBUTE_TYPE);
    // Then write the data
    writer.WriteUInt32(SERVICE_VERSION);

    var data = writer.DetachBuffer();
    provider.SdpRawAttributes.Add(SERVICE_VERSION_ATTRIBUTE_ID, data);
}

void OnConnectionReceived(
    StreamSocketListener listener,
    StreamSocketListenerConnectionReceivedEventArgs args)
{
    // Stop advertising/listening so that we're only serving one client
    _provider.StopAdvertising();
    listener.Dispose();
    _socket = args.Socket;

    // The client socket is connected. At this point the App can wait for
    // the user to take some action, e.g. click a button to receive a file
    // from the device, which could invoke the Picker and then save the
    // received file to the picked location. The transfer itself would use
    // the Sockets API and not the Rfcomm API, and so is omitted here for
    // brevity.
}
Windows::Devices::Bluetooth::RfcommServiceProvider^ _provider;
Windows::Networking::Sockets::StreamSocket^ _socket;
StreamSocketListener^ _listener;

void Initialize()
{
    // Initialize the provider for the hosted RFCOMM service
    create_task(Windows::Devices::Bluetooth::
        RfcommServiceProvider::CreateAsync(
            RfcommServiceId::ObexObjectPush))
    .then([](RfcommServiceProvider^ provider) -> task<void>
    {
        _provider = provider;

        // Create a listener for this service and start listening.
        // The listener is kept in a field so that it stays alive
        // for as long as the service is advertised.
        _listener = ref new StreamSocketListener();
        _listener->ConnectionReceived += ref new TypedEventHandler<
            StreamSocketListener^,
            StreamSocketListenerConnectionReceivedEventArgs^>
                (&OnConnectionReceived);
        return create_task(_listener->BindServiceNameAsync(
            _provider->ServiceId->AsString(),
            SocketProtectionLevel
                ::BluetoothEncryptionAllowNullAuthentication));
    }).then([](void)
    {
        // Set the SDP attributes and start advertising
        InitializeServiceSdpAttributes(_provider);
        _provider->StartAdvertising();
    });
}

const unsigned int SERVICE_VERSION_ATTRIBUTE_ID = 0x0300;
const unsigned char SERVICE_VERSION_ATTRIBUTE_TYPE = 0x0A;   // UINT32
const unsigned int SERVICE_VERSION = 200;
void InitializeServiceSdpAttributes(RfcommServiceProvider^ provider)
{
    auto writer = ref new Windows::Storage::Streams::DataWriter();

    // First write the attribute type
    writer->WriteByte(SERVICE_VERSION_ATTRIBUTE_TYPE);
    // Then write the data
    writer->WriteUInt32(SERVICE_VERSION);

    auto data = writer->DetachBuffer();
    provider->SdpRawAttributes->Insert(SERVICE_VERSION_ATTRIBUTE_ID, data);
}

void OnConnectionReceived(
    StreamSocketListener^ listener,
    StreamSocketListenerConnectionReceivedEventArgs^ args)
{
    // Stop advertising/listening so that we're only serving one client
    _provider->StopAdvertising();
    delete listener;
    _socket = args->Socket;

    // The client socket is connected. At this point the App can wait for
    // the user to take some action, e.g. click a button to receive a
    // file from the device, which could invoke the Picker and then save
    // the received file to the picked location. The transfer itself
    // would use the Sockets API and not the Rfcomm API, and so is
    // omitted here for brevity.
}
Bluetooth GATT

[ Updated for UWP apps on Windows 10. For Windows 8.x articles, see the archive ]
Important APIs
Windows.Devices.Bluetooth
Windows.Devices.Bluetooth.GenericAttributeProfile
This article provides an overview of Bluetooth Generic Attribute Profile (GATT) for Universal Windows Platform
(UWP) apps, along with sample code for three common GATT scenarios: retrieving Bluetooth data, controlling a
Bluetooth LE thermometer device, and controlling the presentation of Bluetooth LE device data.

Overview
Developers can use the APIs in the Windows.Devices.Bluetooth.GenericAttributeProfile namespace to access
Bluetooth LE services, descriptors, and characteristics. Bluetooth LE devices expose their functionality through a
collection of:
Primary Services
Included Services
Characteristics
Descriptors
Primary services define the functional contract of the LE device and contain a collection of characteristics that define
the service. Those characteristics, in turn, contain descriptors that describe the characteristics.
The Bluetooth GATT APIs expose objects and functions, rather than access to the raw transport. At the driver level
primary services are enumerated as Child Device Nodes of the Bluetooth LE device using the
Windows.Devices.Enumeration APIs.
The Bluetooth GATT APIs also enable developers to work with Bluetooth LE devices with the ability to perform the
following tasks:
Perform Service / Characteristic / Descriptor discovery
Read and Write Characteristic / Descriptor values
Register a callback for the Characteristic ValueChanged event
The Bluetooth GATT APIs simplify development by dealing with common properties and providing reasonable
defaults to aid in device management and configuration. They provide a means for developers to access
functionality of a Bluetooth LE device from an app.
To create a useful implementation, a developer must have prior knowledge of the GATT services and characteristics
that the application intends to consume, and must process the specific characteristic values so that the binary data
provided by the API is transformed into useful data before being presented to the user. The Bluetooth GATT APIs
expose only the basic primitives required to communicate with a Bluetooth LE device. To interpret the data, an
application profile must be defined, either by a Bluetooth SIG standard profile, or a custom profile implemented by
a device vendor. A profile creates a binding contract between the application and the device, as to what the
exchanged data represents and how to interpret it.
For convenience, the Bluetooth SIG maintains a list of the public profiles available.
Retrieve Bluetooth data
In this example, the app consumes temperature measurements from a Bluetooth device that implements the
Bluetooth LE Health Thermometer Service. The app specifies that it wants to be notified when a new temperature
measurement is available. By registering an event handler for the "Thermometer Characteristic Value Changed"
event, the app will receive characteristic value changed event notifications while it is running in the foreground.
Note that when the app is suspended, it must release all device resources and when it resumes, it must perform
device enumeration and initialization once again. If device interaction in the background is desired, please take a
look at DeviceUseTrigger or GattCharacteristicNotificationTrigger. DeviceUseTrigger is typically better for higher
frequency events whereas GattCharacteristicNotificationTrigger is better at handling infrequent events.
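A rough sketch of registering a GattCharacteristicNotificationTrigger; the task name and entry point are
placeholders, and thermometerCharacteristic is obtained the same way as in the foreground code below:

using Windows.ApplicationModel.Background;

// Wake a background task whenever the characteristic sends a notification,
// even while the app itself is suspended.
var trigger = new GattCharacteristicNotificationTrigger(thermometerCharacteristic);

var builder = new BackgroundTaskBuilder
{
    Name = "ThermometerNotifyTask",                  // placeholder name
    TaskEntryPoint = "Tasks.ThermometerNotifyTask"   // placeholder entry point
};
builder.SetTrigger(trigger);
builder.Register();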

double convertTemperatureData(byte[] temperatureData)
{
    // Read temperature data in IEEE 11073 floating-point format
    // temperatureData[0] contains flags about optional data - not used
    UInt32 mantissa = ((UInt32)temperatureData[3] << 16) |
        ((UInt32)temperatureData[2] << 8) |
        ((UInt32)temperatureData[1]);

    // The exponent is a signed 8-bit value
    Int32 exponent = (sbyte)temperatureData[4];

    return mantissa * Math.Pow(10.0, exponent);
}

async void Initialize()
{
    var thermometerServices = await Windows.Devices.Enumeration
        .DeviceInformation.FindAllAsync(GattDeviceService
            .GetDeviceSelectorFromUuid(
                GattServiceUuids.HealthThermometer),
            null);

    GattDeviceService firstThermometerService = await
        GattDeviceService.FromIdAsync(thermometerServices[0].Id);

    serviceNameTextBlock.Text = "Using service: " +
        thermometerServices[0].Name;

    GattCharacteristic thermometerCharacteristic =
        firstThermometerService.GetCharacteristics(
            GattCharacteristicUuids.TemperatureMeasurement)[0];

    thermometerCharacteristic.ValueChanged += temperatureMeasurementChanged;

    await thermometerCharacteristic
        .WriteClientCharacteristicConfigurationDescriptorAsync(
            GattClientCharacteristicConfigurationDescriptorValue.Notify);
}

void temperatureMeasurementChanged(
    GattCharacteristic sender,
    GattValueChangedEventArgs eventArgs)
{
    byte[] temperatureData = new byte[eventArgs.CharacteristicValue.Length];
    Windows.Storage.Streams.DataReader.FromBuffer(
        eventArgs.CharacteristicValue).ReadBytes(temperatureData);

    var temperatureValue = convertTemperatureData(temperatureData);

    temperatureTextBlock.Text = temperatureValue.ToString();
}
double MainPage::ConvertTemperatureData(
    const Array<unsigned char>^ temperatureData)
{
    unsigned mantissa = ((unsigned)temperatureData[3] << 16) |
        ((unsigned)temperatureData[2] << 8) |
        ((unsigned)temperatureData[1]);

    // The exponent is a signed 8-bit value
    int exponent = (signed char)temperatureData[4];

    return mantissa * pow(10.0, (double)exponent);
}

void MainPage::Initialize()
{
create_task(DeviceInformation::FindAllAsync(
GattDeviceService::GetDeviceSelectorFromUuid(
GattServiceUuids::HealthThermometer),
nullptr)).then(
[this] (DeviceInformationCollection^ thermometerServices)
{
create_task(GattDeviceService::FromIdAsync(
thermometerServices->GetAt(0)->Id))
.then([this] (GattDeviceService^ firstThermometerService)
{
GattCharacteristic^ thermometerCharacteristic =
firstThermometerService->GetCharacteristics(
GattCharacteristicUuids::TemperatureMeasurement)
->GetAt(0);

thermometerCharacteristic->ValueChanged +=
ref new TypedEventHandler<
GattCharacteristic^,
GattValueChangedEventArgs^>(
this, &MainPage::TemperatureMeasurementChanged);

create_task(thermometerCharacteristic->
WriteClientCharacteristicConfigurationDescriptorAsync(
GattClientCharacteristicConfigurationDescriptorValue
::Notify));
});
});
}

void MainPage::TemperatureMeasurementChanged(
GattCharacteristic^ sender,
GattValueChangedEventArgs^ eventArgs)
{
auto temperatureData = ref new Array<unsigned char>(
eventArgs->CharacteristicValue->Length);
DataReader::FromBuffer(eventArgs->CharacteristicValue)
->ReadBytes(temperatureData);

double temperatureValue = ConvertTemperatureData(temperatureData);


std::wstringstream str;
str << temperatureValue;

temperatureTextBlock->Text = ref new String(str.str().c_str());


}

Control a Bluetooth LE thermometer device


In this example, a UWP app acts as a controller for a fictitious Bluetooth LE thermometer device. In addition to the
standard characteristics of the HealthThermometer profile, the device declares a format characteristic that lets users
retrieve the reading in either Celsius or Fahrenheit degrees. The app uses reliable write transactions to make sure
that the format and measurement interval are set as a single value.

// Uuid of the "Format" Characteristic Value
Guid formatCharacteristicUuid =
    new Guid("{00000000-0000-0000-0000-000000000010}");

// Constant representing a Fahrenheit scale temperature measurement
const byte FahrenheitReading = 1;

async void Initialize()
{
    var thermometerServices = await Windows.Devices.Enumeration
        .DeviceInformation.FindAllAsync(GattDeviceService
            .GetDeviceSelectorFromUuid(
                GattServiceUuids.HealthThermometer),
            null);

    GattDeviceService thermometerService = await
        GattDeviceService.FromIdAsync(thermometerServices[0].Id);

    serviceNameTextBlock.Text = "Using service: " +
        thermometerServices[0].Name;

    GattCharacteristic intervalCharacteristic = thermometerService
        .GetCharacteristics(GattCharacteristicUuids.MeasurementInterval)[0];

    GattCharacteristic formatCharacteristic = thermometerService
        .GetCharacteristics(formatCharacteristicUuid)[0];

    GattReliableWriteTransaction gattTransaction =
        new GattReliableWriteTransaction();

    var writer = new Windows.Storage.Streams.DataWriter();

    // Get a temperature reading every 60 seconds
    writer.WriteUInt16(60);

    gattTransaction.WriteValue(
        intervalCharacteristic,
        writer.DetachBuffer());

    // Get the reading on the Fahrenheit scale
    writer.WriteByte(FahrenheitReading);

    gattTransaction.WriteValue(
        formatCharacteristic,
        writer.DetachBuffer());

    GattCommunicationStatus status = await gattTransaction.CommitAsync();

    if (GattCommunicationStatus.Unreachable == status)
    {
        statusTextBlock.Text = "Writing to your device failed!";
    }
}
// Uuid of the "Format" Characteristic Value
Guid formatCharacteristicUuid(0x00000000, 0x0000, 0x0000, 0x00, 0x00,
0x00, 0x00, 0x00, 0x00, 0x00, 0x10);

// Constant representing a Fahrenheit scale temperature measurement


const unsigned char FAHRENHEIT_READING = 1;

void MainPage::Initialize()
{
create_task(DeviceInformation::FindAllAsync(
GattDeviceService::GetDeviceSelectorFromUuid(
GattServiceUuids::HealthThermometer),
nullptr)).then(
[this] (DeviceInformationCollection^ thermometerServices)
{
create_task(GattDeviceService::FromIdAsync(
thermometerServices->GetAt(0)->Id)).then([this] (
GattDeviceService^ thermometerService)
{
GattCharacteristic^ intervalCharacteristic =
thermometerService->GetCharacteristics(
GattCharacteristicUuids::MeasurementInterval)
->GetAt(0);

GattCharacteristic^ formatCharacteristic =
thermometerService->GetCharacteristics(
formatCharacteristicUuid)->GetAt(0);

GattReliableWriteTransaction^ gattTransaction =
ref new GattReliableWriteTransaction();

DataWriter^ writer = ref new DataWriter();

// Get a temperature reading every 60 seconds


writer->WriteUInt16(60);

gattTransaction->WriteValue(
intervalCharacteristic,
writer->DetachBuffer());

writer->WriteByte(FAHRENHEIT_READING);

gattTransaction->WriteValue(
formatCharacteristic,
writer->DetachBuffer());

create_task(gattTransaction->CommitAsync())
.then([this] (GattCommunicationStatus status)
{
if (GattCommunicationStatus::Unreachable == status)
{
statusTextBlock->Text =
ref new String(L"Writing to your device failed !");
}
});
});
});
}

Control the presentation of Bluetooth LE device data


A Bluetooth LE device may expose a battery service that provides the current battery level to the user. The battery
service includes an optional PresentationFormats descriptor, which allows some flexibility in the interpretation of the
battery level data. This scenario provides an example of an app that works with such a device and uses the
PresentationFormats property to format a characteristic value before presenting it to the user.
async void Initialize()
{
    var batteryServices = await Windows.Devices.Enumeration
        .DeviceInformation.FindAllAsync(GattDeviceService
            .GetDeviceSelectorFromUuid(GattServiceUuids.Battery),
            null);

    if (batteryServices.Count > 0)
    {
        // Use the first Battery service on the system
        GattDeviceService batteryService = await GattDeviceService
            .FromIdAsync(batteryServices[0].Id);

        // Use the first Characteristic of that Service
        GattCharacteristic batteryLevelCharacteristic =
            batteryService.GetCharacteristics(
                GattCharacteristicUuids.BatteryLevel)[0];

        batteryLevelCharacteristic.ValueChanged += batteryLevelChanged;

        // Enable notifications (as the C++ version of this scenario does)
        // so that the ValueChanged event is actually raised
        await batteryLevelCharacteristic
            .WriteClientCharacteristicConfigurationDescriptorAsync(
                GattClientCharacteristicConfigurationDescriptorValue.Notify);
    }
    else
    {
        statusTextBlock.Text = "No Battery services found!";
    }
}

void batteryLevelChanged(
    GattCharacteristic sender,
    GattValueChangedEventArgs eventArgs)
{
    byte levelData = Windows.Storage.Streams.DataReader
        .FromBuffer(eventArgs.CharacteristicValue).ReadByte();

    double levelValue;

    if (sender.PresentationFormats.Count > 0)
    {
        levelValue = levelData *
            Math.Pow(10.0, sender.PresentationFormats[0].Exponent);
    }
    else
    {
        levelValue = (double)levelData;
    }

    batteryLevelTextBlock.Text = levelValue.ToString();
}
void MainPage::Initialize()
{
create_task(DeviceInformation::FindAllAsync(
GattDeviceService::GetDeviceSelectorFromUuid(
GattServiceUuids::Battery),
nullptr)).then([this] (DeviceInformationCollection^ batteryServices)
{
create_task(GattDeviceService::FromIdAsync(
batteryServices->GetAt(0)->Id)).then([this] (
GattDeviceService^ batteryService)
{
GattCharacteristic^ batteryLevelCharacteristic =
batteryService->GetCharacteristics(
GattCharacteristicUuids::BatteryLevel)->GetAt(0);

batteryLevelCharacteristic->ValueChanged +=
ref new TypedEventHandler<
GattCharacteristic^,
GattValueChangedEventArgs^>
(this, &MainPage::BatteryLevelChanged);

create_task(batteryLevelCharacteristic
->WriteClientCharacteristicConfigurationDescriptorAsync(
GattClientCharacteristicConfigurationDescriptorValue
::Notify));
});
});
}

void MainPage::BatteryLevelChanged(
GattCharacteristic^ sender,
GattValueChangedEventArgs^ eventArgs)
{
unsigned char batteryLevelData = DataReader::FromBuffer(
eventArgs->CharacteristicValue)->ReadByte();

// If this characteristic has a presentation format,
// use that information to format the value
double batteryLevelValue;
if (sender->PresentationFormats->Size > 0)
{
batteryLevelValue = batteryLevelData *
pow(10.0, sender->PresentationFormats->GetAt(0)->Exponent);
}
else
{
batteryLevelValue = batteryLevelData;
}

std::wstringstream str;
str << batteryLevelValue;
batteryLevelTextBlock->Text =
ref new String(str.str().c_str());
}
Bluetooth LE Advertisements

[ Updated for UWP apps on Windows 10. For Windows 8.x articles, see the archive ]
Important APIs
Windows.Devices.Bluetooth.Advertisement
This article provides an overview of Bluetooth Low Energy (LE) Advertisement beacons for Universal Windows
Platform (UWP) apps.

Overview
There are two main functions that a developer can perform using the LE Advertisement APIs:
Advertisement Watcher: listen for nearby beacons and filter them based on payload or proximity.
Advertisement Publisher: define a payload for Windows to advertise on a developer's behalf.
Full sample code is found in the Bluetooth Advertisement Sample on GitHub.

Basic Setup
To use basic Bluetooth LE functionality in a Universal Windows Platform app, you must check the Bluetooth
capability in the Package.appxmanifest.
1. Open Package.appxmanifest
2. Go to the Capabilities tab
3. Find Bluetooth in the list on the left and check the box next to it.
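Equivalently, you can declare the capability directly in the manifest XML; checking the box in the designer
produces an entry like this:

<Capabilities>
  <DeviceCapability Name="bluetooth" />
</Capabilities>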

Publishing Advertisements
Bluetooth LE Advertisements allow your device to constantly beacon out a specific payload, called an
advertisement. This advertisement can be seen by any nearby Bluetooth LE capable device that is set up to
listen for this specific advertisement.
Note For user privacy, the lifespan of your advertisement is tied to that of your app. You can create a
BluetoothLEAdvertisementPublisher and call Start in a background task for advertisement in the background. For
more information about background tasks, see Launching, resuming, and background tasks.
Basic Publishing
There are many different ways to add data to an Advertisement. This example shows a common way to create a
company-specific advertisement.
First, create the advertisement publisher that controls whether or not the device is beaconing out a specific
advertisement.

BluetoothLEAdvertisementPublisher publisher = new BluetoothLEAdvertisementPublisher();

Second, create a custom data section. This example uses an unassigned CompanyId value 0xFFFE and adds the text
Hello World to the advertisement.
// Add custom data to the advertisement
var manufacturerData = new BluetoothLEManufacturerData();
manufacturerData.CompanyId = 0xFFFE;

var writer = new DataWriter();
writer.WriteString("Hello World");

// Make sure that the buffer length can fit within an advertisement payload (~20 bytes).
// Otherwise you will get an exception.
manufacturerData.Data = writer.DetachBuffer();

// Add the manufacturer data to the advertisement publisher:
publisher.Advertisement.ManufacturerData.Add(manufacturerData);

Now that the publisher has been created and set up, you can call Start to begin advertising.

publisher.Start();

Watching for Advertisements


Basic Watching
The following code demonstrates how to create a Bluetooth LE Advertisement watcher, set a callback, and start
watching for all LE advertisements.

BluetoothLEAdvertisementWatcher watcher = new BluetoothLEAdvertisementWatcher();
watcher.Received += OnAdvertisementReceived;
watcher.Start();

private void OnAdvertisementReceived(
    BluetoothLEAdvertisementWatcher watcher,
    BluetoothLEAdvertisementReceivedEventArgs eventArgs)
{
    // Do whatever you want with the advertisement
}

Active Scanning
To receive scan response advertisements as well, set the following after creating the watcher. Note that this will
cause greater power drain and is not available while in background modes.

watcher.ScanningMode = BluetoothLEScanningMode.Active;

Watching for a Specific Advertisement Pattern


Sometimes you want to listen for a specific advertisement. In this case, listen for an advertisement containing a
payload with a made-up company ID (0xFFFE) and containing the string Hello World in the advertisement.
This can be paired with the Basic Publishing example to have one Windows machine advertising and another
watching. Be sure to set this advertisement filter before you start the watcher!
var manufacturerData = new BluetoothLEManufacturerData();
manufacturerData.CompanyId = 0xFFFE;

// Make sure that the buffer length can fit within an advertisement payload (~20 bytes).
// Otherwise you will get an exception.
var writer = new DataWriter();
writer.WriteString("Hello World");
manufacturerData.Data = writer.DetachBuffer();

watcher.AdvertisementFilter.Advertisement.ManufacturerData.Add(manufacturerData);

Watching for a Nearby Advertisement


Sometimes you only want to trigger your watcher when the advertising device has come in range. You can define
your own range; just note that values will be clipped to the range between 0 and -128.

// Set the in-range threshold to -70dBm. This means advertisements with RSSI >= -70dBm
// will start to be considered "in-range" (callbacks will start in this range).
watcher.SignalStrengthFilter.InRangeThresholdInDBm = -70;

// Set the out-of-range threshold to -75dBm (give some buffer). Used in conjunction
// with OutOfRangeTimeout to determine when an advertisement is no longer
// considered "in-range".
watcher.SignalStrengthFilter.OutOfRangeThresholdInDBm = -75;

// Set the out-of-range timeout to be 2 seconds. Used in conjunction with
// OutOfRangeThresholdInDBm to determine when an advertisement is no longer
// considered "in-range"
watcher.SignalStrengthFilter.OutOfRangeTimeout = TimeSpan.FromMilliseconds(2000);

Gauging Distance
When your Bluetooth LE Watcher's callback is triggered, the eventArgs include an RSSI value telling you the
received signal strength (how strong the Bluetooth signal is).

private void OnAdvertisementReceived(
    BluetoothLEAdvertisementWatcher watcher,
    BluetoothLEAdvertisementReceivedEventArgs eventArgs)
{
    // The received signal strength indicator (RSSI)
    Int16 rssi = eventArgs.RawSignalStrengthInDBm;
}

This can be roughly translated into distance, but should not be used to measure true distances as each individual
radio is different. Different environmental factors can make distance difficult to gauge (such as walls, cases around
the radio, or even air humidity).
An alternative to judging pure distance is to define "buckets". Radios tend to report 0 to -50 dBm when they are
very close, -50 to -90 dBm when they are a medium distance away, and below -90 dBm when they are far away. Trial
and error is the best way to determine what these buckets should be for your app.
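As a starting point, here is a minimal bucketing helper built on the rough thresholds above; tune the cutoffs for
your own app and hardware:

// Map a raw RSSI reading to a coarse distance bucket. The thresholds are
// the rough values described above, not calibrated distances.
string GetDistanceBucket(short rssiInDBm)
{
    if (rssiInDBm >= -50)
    {
        return "near";
    }
    if (rssiInDBm >= -90)
    {
        return "medium";
    }
    return "far";
}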
Bluetooth Developer FAQ

This article contains answers to commonly asked UWP Bluetooth API questions.

Why does my Bluetooth LE Device stop responding after a disconnect?


The most common reason this happens is that the remote device has lost its pairing information. Many earlier
Bluetooth devices don't require authentication. To protect the user, all pairing ceremonies performed from the
Settings app require authentication, and some devices don't know how to deal with that.
Starting with Windows 10 release 1511, developers have control over the pairing ceremony. The Device
Enumeration and Pairing Sample details the various aspects of associating new devices.
In this example, we initiate pairing with a device using no encryption. Note, this will only work if the remote device
does not require encryption or authentication to function.

// Get ceremony type and protection level selections
// You must select at least ConfirmOnly or the pairing attempt will fail
DevicePairingKinds ceremonySelected = DevicePairingKinds.ConfirmOnly;

// Work around remote devices losing pairing information
DevicePairingProtectionLevel protectionLevel = DevicePairingProtectionLevel.None;

DeviceInformationCustomPairing customPairing = deviceInfoDisp.DeviceInformation.Pairing.Custom;

// Declare an event handler - for the ConfirmOnly ceremony you simply accept the pairing in PairingRequestedHandler
customPairing.PairingRequested += PairingRequestedHandler;
DevicePairingResult result = await customPairing.PairAsync(ceremonySelected, protectionLevel);

Do I have to pair Bluetooth devices before using them?


You don't have to for Bluetooth RFCOMM (classic) devices. Starting with Windows 10 release 1607, you can simply
query for nearby devices and connect to them. The updated RFCOMM Chat Sample shows this functionality.
This feature is not available for Bluetooth Low Energy (GATT client), so you will still have to pair, either through the
Settings page or by using the Windows.Devices.Enumeration APIs, in order to access these devices.
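A rough sketch of the pairing-free RFCOMM flow on Windows 10 version 1607 or later; for devices that appear
over time, a DeviceWatcher gives better results than this one-shot query:

using Windows.Devices.Bluetooth;
using Windows.Devices.Bluetooth.Rfcomm;
using Windows.Devices.Enumeration;

// Find nearby Bluetooth devices that are not yet paired.
string selector = BluetoothDevice.GetDeviceSelectorFromPairingState(false);
DeviceInformationCollection devices =
    await DeviceInformation.FindAllAsync(selector);

if (devices.Count > 0)
{
    BluetoothDevice device = await BluetoothDevice.FromIdAsync(devices[0].Id);

    // Query the device's RFCOMM services directly; no pairing required.
    RfcommDeviceServicesResult result = await device.GetRfcommServicesAsync();
    if (result.Services.Count > 0)
    {
        // Connect with a StreamSocket as shown in the RFCOMM article.
    }
}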
Printing and scanning

[ Updated for UWP apps on Windows 10. For Windows 8.x articles, see the archive ]
This section describes how to print and scan from your Universal Windows app.

Epson ESC/POS with formatting: Learn how to use the ESC/POS command language to format text, such as bold and double-size characters, for your Point of Service printer.

Print from your app: Learn how to print documents from a Universal Windows app. This topic also shows how to print specific pages.

Customize the print preview UI: This section describes how to customize the print options and settings in the print preview UI.

Scan from your app: Learn how to scan content from your app by using a flatbed, feeder, or auto-configured scan source.

Related topics
Design guidelines for printing
//Build 2015 video: Developing apps that print in Windows 10
UWP print sample
Print from your app

[ Updated for UWP apps on Windows 10. For Windows 8.x articles, see the archive ]
Important APIs
Windows.Graphics.Printing
Windows.UI.Xaml.Printing
PrintDocument
Learn how to print documents from a Universal Windows app. This topic also shows how to print specific pages.
For more advanced changes to the print preview UI, see Customize the print preview UI.
Tip Most of the examples in this topic are based on the print sample. To see the full code, download the Universal
Windows Platform (UWP) print sample from the Windows-universal-samples repo on GitHub.

Register for printing


The first step to add printing to your app is to register for the Print contract. Your app must do this on every screen
from which you want your customer to be able to print. Only the screen that is displayed to the user can be
registered for printing. If one screen of your app has registered for printing, it must unregister for printing when it
exits. If it is replaced by another screen, the next screen must register for a new Print contract when it opens.
Tip If you need to support printing from more than one page in your app, you can put this print code in a common
helper class and have your app pages reuse it. For an example of how to do this, see the PrintHelper class in the
UWP print sample.
First, declare the PrintManager and PrintDocument. The PrintManager type is in the
Windows.Graphics.Printing namespace along with types to support other Windows printing functionality. The
PrintDocument type is in the Windows.UI.Xaml.Printing namespace along with other types that support
preparing XAML content for printing. You can make it easier to write your printing code by adding the following
using or Imports statements to your page.

using Windows.UI.Xaml;
using Windows.UI.Xaml.Controls;
using Windows.UI.Xaml.Printing;
using Windows.Graphics.Printing;

The PrintDocument class is used to handle much of the interaction between the app and the PrintManager, but it
exposes several callbacks of its own. During registration, create instances of PrintManager and PrintDocument
and register handlers for their printing events.
In the UWP print sample, registration is performed by the RegisterForPrinting method.
public virtual void RegisterForPrinting()
{
    printDocument = new PrintDocument();
    printDocumentSource = printDocument.DocumentSource;
    printDocument.Paginate += CreatePrintPreviewPages;
    printDocument.GetPreviewPage += GetPrintPreviewPage;
    printDocument.AddPages += AddPrintPages;

    PrintManager printMan = PrintManager.GetForCurrentView();
    printMan.PrintTaskRequested += PrintTaskRequested;
}

When the user navigates to a page that supports printing, the page initiates registration within the OnNavigatedTo method.

protected override void OnNavigatedTo(NavigationEventArgs e)
{
    // Initialize common helper class and register for printing
    printHelper = new PrintHelper(this);
    printHelper.RegisterForPrinting();

    // Initialize print content for this scenario
    printHelper.PreparePrintContent(new PageToPrint());

    // Tell the user how to print
    MainPage.Current.NotifyUser("Print contract registered with customization, use the Print button to print.", NotifyType.StatusMessage);
}

When the user leaves the page, disconnect the printing event handlers. If you have a multiple-page app and don't
disconnect printing, an exception is thrown when the user leaves the page and then comes back to it.

protected override void OnNavigatedFrom(NavigationEventArgs e)
{
    if (printHelper != null)
    {
        printHelper.UnregisterForPrinting();
    }
}

Create a print button


Add a print button to your app's screen where you'd like it to appear. Make sure that it doesn't interfere with the
content that you want to print.

<Button x:Name="InvokePrintingButton" Content="Print" Click="OnPrintButtonClick"/>

Next, add an event handler to your app's code to handle the click event. Use the ShowPrintUIAsync method to
start printing from your app. ShowPrintUIAsync is an asynchronous method that displays the appropriate
printing window. We recommend calling the IsSupported method first in order to check that the app is being run
on a device that supports printing (and handle the case in which it is not). If printing can't be performed at that time
for any other reason, ShowPrintUIAsync will throw an exception. We recommend catching these exceptions and
letting the user know when printing can't proceed.
async private void OnPrintButtonClick(object sender, RoutedEventArgs e)
{
    if (Windows.Graphics.Printing.PrintManager.IsSupported())
    {
        try
        {
            // Show print UI
            await Windows.Graphics.Printing.PrintManager.ShowPrintUIAsync();
        }
        catch
        {
            // Printing cannot proceed at this time
            ContentDialog noPrintingDialog = new ContentDialog()
            {
                Title = "Printing error",
                Content = "\nSorry, printing can't proceed at this time.",
                PrimaryButtonText = "OK"
            };
            await noPrintingDialog.ShowAsync();
        }
    }
    else
    {
        // Printing is not supported on this device
        ContentDialog noPrintingDialog = new ContentDialog()
        {
            Title = "Printing not supported",
            Content = "\nSorry, printing is not supported on this device.",
            PrimaryButtonText = "OK"
        };
        await noPrintingDialog.ShowAsync();
    }
}

In this example, a print window is displayed in the event handler for a button click. If the method throws an
exception (because printing can't be performed at that time), a ContentDialog control informs the user of the
situation.

Format your app's content


When ShowPrintUIAsync is called, the PrintTaskRequested event is raised. The PrintTaskRequested event
handler shown in this step creates a PrintTask by calling the PrintTaskRequest.CreatePrintTask method and
passes the title for the print page and the name of a PrintTaskSourceRequestedHandler delegate. Notice that in
this example, the PrintTaskSourceRequestedHandler is defined inline. The
PrintTaskSourceRequestedHandler provides the formatted content for printing and is described later.
In this example, a completion handler is also defined to catch errors. It's a good idea to handle completion events
because then your app can let the user know if an error occurred and provide possible solutions. Likewise, your app
could use the completion event to indicate subsequent steps for the user to take after the print job is successful.
protected virtual void PrintTaskRequested(PrintManager sender, PrintTaskRequestedEventArgs e)
{
    PrintTask printTask = null;
    printTask = e.Request.CreatePrintTask("C# Printing SDK Sample", sourceRequested =>
    {
        // Print Task event handler is invoked when the print job is completed.
        printTask.Completed += async (s, args) =>
        {
            // Notify the user when the print operation fails.
            if (args.Completion == PrintTaskCompletion.Failed)
            {
                await scenarioPage.Dispatcher.RunAsync(Windows.UI.Core.CoreDispatcherPriority.Normal, () =>
                {
                    MainPage.Current.NotifyUser("Failed to print.", NotifyType.ErrorMessage);
                });
            }
        };

        sourceRequested.SetSource(printDocumentSource);
    });
}

After the print task is created, the PrintManager requests a collection of print pages to show in the print preview
UI by raising the Paginate event. This corresponds with the Paginate method of the
IPrintPreviewPageCollection interface. The event handler you created during registration will be called at this
time.
Important If the user changes print settings, the Paginate event handler is called again so that you can reflow
the content. For the best user experience, check the print settings before you reflow, and avoid reinitializing the
paginated content when it isn't necessary; a sketch of this check follows.
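
Here is a minimal sketch of that settings check. The lastPageDescription field and the OnPaginate handler are
illustrative additions for this guide, not part of the UWP print sample:

// Hypothetical cache of the page description from the previous Paginate pass.
PrintPageDescription lastPageDescription;

protected void OnPaginate(object sender, PaginateEventArgs e)
{
    PrintPageDescription pageDescription = e.PrintTaskOptions.GetPageDescription(0);

    // Only regenerate the preview pages when the page size actually changed.
    if (!pageDescription.PageSize.Equals(lastPageDescription.PageSize))
    {
        lastPageDescription = pageDescription;
        // ...recreate the paginated content here, for example with the logic
        // shown in CreatePrintPreviewPages below...
    }
}
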
In the Paginate event handler (the CreatePrintPreviewPages method in the UWP print sample), create the pages to
show in the print preview UI and to send to the printer. The code you use to prepare your app's content for printing
is specific to your app and the content you print. Refer to the UWP print sample source code to see how it formats
its content for printing.
protected virtual void CreatePrintPreviewPages(object sender, PaginateEventArgs e)
{
    // Clear the cache of preview pages
    printPreviewPages.Clear();

    // Clear the print canvas of preview pages
    PrintCanvas.Children.Clear();

    // This variable keeps track of the last RichTextBlockOverflow element that was added to a page which will be printed
    RichTextBlockOverflow lastRTBOOnPage;

    // Get the PrintTaskOptions
    PrintTaskOptions printingOptions = ((PrintTaskOptions)e.PrintTaskOptions);

    // Get the page description to determine how big the page is
    PrintPageDescription pageDescription = printingOptions.GetPageDescription(0);

    // We know there is at least one page to be printed. Passing null as the first parameter to
    // AddOnePrintPreviewPage tells the function to add the first page.
    lastRTBOOnPage = AddOnePrintPreviewPage(null, pageDescription);

    // We know there are more pages to be added as long as the last RichTextBlockOverflow added to a print preview
    // page has extra content
    while (lastRTBOOnPage.HasOverflowContent && lastRTBOOnPage.Visibility == Windows.UI.Xaml.Visibility.Visible)
    {
        lastRTBOOnPage = AddOnePrintPreviewPage(lastRTBOOnPage, pageDescription);
    }

    if (PreviewPagesCreated != null)
    {
        PreviewPagesCreated.Invoke(printPreviewPages, null);
    }

    PrintDocument printDoc = (PrintDocument)sender;

    // Report the number of preview pages created
    printDoc.SetPreviewPageCount(printPreviewPages.Count, PreviewPageCountType.Intermediate);
}

When a particular page is to be shown in the print preview window, the PrintManager raises the
GetPreviewPage event. This corresponds with the MakePage method of the IPrintPreviewPageCollection
interface. The event handler you created during registration will be called at this time.
In the GetPreviewPage event handler (the GetPrintPreviewPage method in the UWP print sample), set the
appropriate page on the print document.

protected virtual void GetPrintPreviewPage(object sender, GetPreviewPageEventArgs e)
{
    PrintDocument printDoc = (PrintDocument)sender;
    printDoc.SetPreviewPage(e.PageNumber, printPreviewPages[e.PageNumber - 1]);
}

Finally, once the user clicks the print button, the PrintManager requests the final collection of pages to send to the
printer by calling the MakeDocument method of the IDocumentPageSource interface. In XAML, this raises the
AddPages event. The event handler you created during registration will be called at this time.
In the AddPages event handler (the AddPrintPages method in the UWP print sample), add pages from the page
collection to the PrintDocument object to be sent to the printer. If a user specifies particular pages or a range of
pages to print, you use that information here to add only the pages that will actually be sent to the printer.
protected virtual void AddPrintPages(object sender, AddPagesEventArgs e)
{
    // Loop over all of the preview pages and add each one to be printed
    for (int i = 0; i < printPreviewPages.Count; i++)
    {
        // We should have all pages ready at this point...
        printDocument.AddPage(printPreviewPages[i]);
    }

    PrintDocument printDoc = (PrintDocument)sender;

    // Indicate that all of the print pages have been provided
    printDoc.AddPagesComplete();
}
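
If the user has entered a page range, the same handler can submit only the selected pages. A hedged sketch,
assuming a pagesInRange list of 1-based page numbers (for example, the output of the sample's GetPagesInRange
helper); the method name is illustrative:

// Range-aware variant of the handler above. 'pagesInRange' is an assumed
// List<int> of 1-based page numbers; it is not part of the original snippet.
protected virtual void AddPrintPagesInRange(object sender, AddPagesEventArgs e)
{
    PrintDocument printDoc = (PrintDocument)sender;

    foreach (int pageNumber in pagesInRange)
    {
        // Convert the 1-based page number to a 0-based list index.
        printDoc.AddPage(printPreviewPages[pageNumber - 1]);
    }

    // Indicate that all of the print pages have been provided
    printDoc.AddPagesComplete();
}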

Prepare print options


Next, prepare the print options. As an example, this section describes how to set the page range option to allow
printing of specific pages. For more advanced options, see Customize the print preview UI.
This step creates a new print option, defines a list of values that the option supports, and then adds the option to
the print preview UI. The page range option has these settings:

OPTION NAME        ACTION

Print all          Print all pages in the document.

Print Selection    Print only the content the user selected.

Print Range        Display an edit control into which the user can enter the pages to print.

First, modify the PrintTaskRequested event handler to add the code to get a PrintTaskOptionDetails object.

PrintTaskOptionDetails printDetailedOptions = PrintTaskOptionDetails.GetFromPrintTaskOptions(printTask.Options);

Clear the list of options that are shown in the print preview UI and add the options that you want to display when
the user wants to print from the app.
Note The options appear in the print preview UI in the same order they are appended, with the first option shown
at the top of the window.

IList<string> displayedOptions = printDetailedOptions.DisplayedOptions;

displayedOptions.Clear();
displayedOptions.Add(Windows.Graphics.Printing.StandardPrintTaskOptions.Copies);
displayedOptions.Add(Windows.Graphics.Printing.StandardPrintTaskOptions.Orientation);
displayedOptions.Add(Windows.Graphics.Printing.StandardPrintTaskOptions.ColorMode);

Create the new print option and initialize the list of option values.

// Create a new list option
PrintCustomItemListOptionDetails pageFormat = printDetailedOptions.CreateItemListOption("PageRange", "Page Range");
pageFormat.AddItem("PrintAll", "Print all");
pageFormat.AddItem("PrintSelection", "Print Selection");
pageFormat.AddItem("PrintRange", "Print Range");

Add your custom print option and assign the event handler. The custom option is appended last so that it appears
at the bottom of the list of options, but you can put it anywhere in the list; custom print options don't need to be
added last.

// Add the custom option to the option list
displayedOptions.Add("PageRange");

// Create new edit option
PrintCustomTextOptionDetails pageRangeEdit = printDetailedOptions.CreateTextOption("PageRangeEdit", "Range");

// Register the handler for the option change event
printDetailedOptions.OptionChanged += printDetailedOptions_OptionChanged;

The CreateTextOption method creates the Range text box. This is where the user enters the specific pages they
want to print when they select the Print Range option.

Handle print option changes


The OptionChanged event handler does two things. First, it shows and hides the text edit field for the page range
depending on the page range option that the user selected. Second, it tests the text entered into the page range text
box to make sure that it represents a valid page range for the document.
This example shows how the UWP print sample handles change events.

async void printDetailedOptions_OptionChanged(PrintTaskOptionDetails sender, PrintTaskOptionChangedEventArgs args)
{
    if (args.OptionId == null)
    {
        return;
    }

    string optionId = args.OptionId.ToString();

    // Handle change in Page Range Option
    if (optionId == "PageRange")
    {
        IPrintOptionDetails pageRange = sender.Options[optionId];
        string pageRangeValue = pageRange.Value.ToString();

        selectionMode = false;

        switch (pageRangeValue)
        {
            case "PrintRange":
                // Add PageRangeEdit custom option to the option list
                sender.DisplayedOptions.Add("PageRangeEdit");
                pageRangeEditVisible = true;
                await scenarioPage.Dispatcher.RunAsync(Windows.UI.Core.CoreDispatcherPriority.Normal, () =>
                {
                    ShowContent(null);
                });
                break;
            case "PrintSelection":
                await scenarioPage.Dispatcher.RunAsync(Windows.UI.Core.CoreDispatcherPriority.Normal, () =>
                {
                    Scenario4PageRange page = (Scenario4PageRange)scenarioPage;
                    PageToPrint pageContent = (PageToPrint)page.PrintFrame.Content;
                    ShowContent(pageContent.TextContentBlock.SelectedText);
                });
                RemovePageRangeEdit(sender);
                break;
            default:
                await scenarioPage.Dispatcher.RunAsync(Windows.UI.Core.CoreDispatcherPriority.Normal, () =>
                {
                    ShowContent(null);
                });
                RemovePageRangeEdit(sender);
                break;
        }

        Refresh();
    }
    else if (optionId == "PageRangeEdit")
    {
        IPrintOptionDetails pageRange = sender.Options[optionId];
        // Expected range format (p1,p2...)*, (p3-p9)* ...
        if (!Regex.IsMatch(pageRange.Value.ToString(), @"^\s*\d+\s*(\-\s*\d+\s*)?(\,\s*\d+\s*(\-\s*\d+\s*)?)*$"))
        {
            pageRange.ErrorText = "Invalid Page Range (eg: 1-3, 5)";
        }
        else
        {
            pageRange.ErrorText = string.Empty;
            try
            {
                GetPagesInRange(pageRange.Value.ToString());
                Refresh();
            }
            catch (InvalidPageException ipex)
            {
                pageRange.ErrorText = ipex.Message;
            }
        }
    }
}

Tip See the GetPagesInRange method in the UWP print sample for details on how to parse the page range the user
enters in the Range text box.
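
For orientation, a simplified parser in the spirit of GetPagesInRange might look like the sketch below.
ParsePageRange is a hypothetical name, and ArgumentException stands in for the sample's InvalidPageException;
the real implementation is in the sample source.

// Simplified, hypothetical page-range parser. Requires System,
// System.Collections.Generic, and System.Linq.
private static List<int> ParsePageRange(string rangeText, int totalPages)
{
    var pages = new SortedSet<int>();
    foreach (string part in rangeText.Split(','))
    {
        string[] bounds = part.Split('-');
        int first = int.Parse(bounds[0].Trim());
        int last = bounds.Length > 1 ? int.Parse(bounds[1].Trim()) : first;
        if (first < 1 || last > totalPages || first > last)
        {
            // The sample throws its own InvalidPageException here.
            throw new ArgumentException("Invalid page range.");
        }
        for (int page = first; page <= last; page++)
        {
            pages.Add(page);
        }
    }
    return pages.ToList();
}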

Preview selected pages


How you format your app's content for printing depends on the nature of your app and its content. The UWP print
sample uses a print helper class to format its content for printing.
When printing a subset of the pages, there are several ways to show the content in the print preview. Regardless of
the method you choose to show the page range in the print preview, the printed output must contain only the
selected pages.
Show all the pages in the print preview whether a page range is specified or not, relying on the user to know which
pages will actually be printed.
Show only the pages selected by the user's page range in the print preview, updating the display whenever the
user changes the page range (a sketch of this approach follows below).
Show all the pages in the print preview, but grey out the pages that are not in the page range selected by the user.
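
A hedged sketch of the second approach, reusing names from the earlier snippets; allPages and pagesInRange are
illustrative fields holding every generated page and the 1-based page numbers currently in range:

// Rebuild the preview from only the in-range pages (inside the Paginate handler).
// 'allPages' and 'pagesInRange' are assumed fields for this sketch.
printPreviewPages.Clear();
foreach (int pageNumber in pagesInRange)
{
    printPreviewPages.Add(allPages[pageNumber - 1]);
}

PrintDocument printDoc = (PrintDocument)sender;
printDoc.SetPreviewPageCount(printPreviewPages.Count, PreviewPageCountType.Final);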

Related topics
Design guidelines for printing
//Build 2015 video: Developing apps that print in Windows 10
UWP print sample
Customize the print preview UI

[ Updated for UWP apps on Windows 10. For Windows 8.x articles, see the archive ]
Important APIs
Windows.Graphics.Printing
Windows.UI.Xaml.Printing
PrintManager
This section describes how to customize the print options and settings in the print preview UI. For more info about
printing, see Print from your app.
Tip Most of the examples in this topic are based on the print sample. To see the full code, download the Universal
Windows Platform (UWP) print sample from the Windows-universal-samples repo on GitHub.

Customize print options


By default, the print preview UI shows the ColorMode, Copies, and Orientation print options. In addition to
those, there are several other common printer options that you can add to the print preview UI:
Binding
Collation
Duplex
HolePunch
InputBin
MediaSize
MediaType
NUp
PrintQuality
Staple
These options are defined in the StandardPrintTaskOptions class. You can add to or remove options from the list
of options displayed in the print preview UI. You can also change the order in which they appear, and set the
default settings that are shown to the user.
However, the modifications that you make by using this method affect only the print preview UI. The user can
always access all of the options that the printer supports by tapping More settings in the print preview UI.
Note Although your app can specify any print options to be displayed, only those that are supported by the
selected printer are shown in the print preview UI. The print UI won't show options that the selected printer doesn't
support.
Define the options to display
When the app's screen is loaded, it registers for the Print contract. Part of that registration includes defining the
PrintTaskRequested event handler. The code to customize the options displayed in the print preview UI is added
to the PrintTaskRequested event handler.
Modify the PrintTaskRequested event handler to include the printTask.Options instructions that configure the
print settings that you want to display in the print preview UI. For the screen in your app that should show a
customized list of print options, override the PrintTaskRequested event handler in the helper class to include
code that specifies the options to display when the screen is printed.

protected override void PrintTaskRequested(PrintManager sender, PrintTaskRequestedEventArgs e)
{
    PrintTask printTask = null;
    printTask = e.Request.CreatePrintTask("C# Printing SDK Sample", sourceRequestedArgs =>
    {
        IList<string> displayedOptions = printTask.Options.DisplayedOptions;

        // Choose the printer options to be shown.
        // The order in which the options are appended determines the order in which they appear in the UI
        displayedOptions.Clear();
        displayedOptions.Add(Windows.Graphics.Printing.StandardPrintTaskOptions.Copies);
        displayedOptions.Add(Windows.Graphics.Printing.StandardPrintTaskOptions.Orientation);
        displayedOptions.Add(Windows.Graphics.Printing.StandardPrintTaskOptions.MediaSize);
        displayedOptions.Add(Windows.Graphics.Printing.StandardPrintTaskOptions.Collation);
        displayedOptions.Add(Windows.Graphics.Printing.StandardPrintTaskOptions.Duplex);

        // Preset the default value of the printer option
        printTask.Options.MediaSize = PrintMediaSize.NorthAmericaLegal;

        // Print Task event handler is invoked when the print job is completed.
        printTask.Completed += async (s, args) =>
        {
            // Notify the user when the print operation fails.
            if (args.Completion == PrintTaskCompletion.Failed)
            {
                await scenarioPage.Dispatcher.RunAsync(Windows.UI.Core.CoreDispatcherPriority.Normal, () =>
                {
                    MainPage.Current.NotifyUser("Failed to print.", NotifyType.ErrorMessage);
                });
            }
        };

        sourceRequestedArgs.SetSource(printDocumentSource);
    });
}

Important Calling displayedOptions.Clear() removes all of the print options from the print preview UI, including
the More settings link. Be sure to append the options that you want to show in the print preview UI.
Specify default options
You can also set the default values of the options in the print preview UI. The following line of code, from the last
example, sets the default value of the MediaSize option.

// Preset the default value of the printer option


printTask.Options.MediaSize = PrintMediaSize.NorthAmericaLegal;

Add new print options


This section shows how to create a new print option, define a list of values that the option supports, and then add
the option to the print preview. As in the previous section, add the new print option in the PrintTaskRequested
event handler.
First, get a PrintTaskOptionDetails object. This is used to add the new print option to the print preview UI. Then
clear the list of options that are shown in the print preview UI and add the options that you want to display when
the user wants to print from the app. After that, create the new print option and initialize the list of option values.
Finally, add the new option and assign a handler for the OptionChanged event.
protected override void PrintTaskRequested(PrintManager sender, PrintTaskRequestedEventArgs e)
{
    PrintTask printTask = null;
    printTask = e.Request.CreatePrintTask("C# Printing SDK Sample", sourceRequestedArgs =>
    {
        PrintTaskOptionDetails printDetailedOptions = PrintTaskOptionDetails.GetFromPrintTaskOptions(printTask.Options);
        IList<string> displayedOptions = printDetailedOptions.DisplayedOptions;

        // Choose the printer options to be shown.
        // The order in which the options are appended determines the order in which they appear in the UI
        displayedOptions.Clear();

        displayedOptions.Add(Windows.Graphics.Printing.StandardPrintTaskOptions.Copies);
        displayedOptions.Add(Windows.Graphics.Printing.StandardPrintTaskOptions.Orientation);
        displayedOptions.Add(Windows.Graphics.Printing.StandardPrintTaskOptions.ColorMode);

        // Create a new list option
        PrintCustomItemListOptionDetails pageFormat = printDetailedOptions.CreateItemListOption("PageContent", "Pictures");
        pageFormat.AddItem("PicturesText", "Pictures and text");
        pageFormat.AddItem("PicturesOnly", "Pictures only");
        pageFormat.AddItem("TextOnly", "Text only");

        // Add the custom option to the option list
        displayedOptions.Add("PageContent");

        printDetailedOptions.OptionChanged += printDetailedOptions_OptionChanged;

        // Print Task event handler is invoked when the print job is completed.
        printTask.Completed += async (s, args) =>
        {
            // Notify the user when the print operation fails.
            if (args.Completion == PrintTaskCompletion.Failed)
            {
                await scenarioPage.Dispatcher.RunAsync(Windows.UI.Core.CoreDispatcherPriority.Normal, () =>
                {
                    MainPage.Current.NotifyUser("Failed to print.", NotifyType.ErrorMessage);
                });
            }
        };

        sourceRequestedArgs.SetSource(printDocumentSource);
    });
}

The options appear in the print preview UI in the same order they are appended, with the first option shown at the
top of the window. In this example, the custom option is appended last so that it appears at the bottom of the list of
options. However, you could put it anywhere in the list; custom print options don't need to be added last.
When the user changes the selected option in your custom option, update the print preview image. Call the
InvalidatePreview method to redraw the image in the print preview UI, as shown below.
async void printDetailedOptions_OptionChanged(PrintTaskOptionDetails sender, PrintTaskOptionChangedEventArgs args)
{
    // Listen for PageContent changes
    string optionId = args.OptionId as string;
    if (string.IsNullOrEmpty(optionId))
    {
        return;
    }

    if (optionId == "PageContent")
    {
        await scenarioPage.Dispatcher.RunAsync(Windows.UI.Core.CoreDispatcherPriority.Normal, () =>
        {
            printDocument.InvalidatePreview();
        });
    }
}

Related topics
Design guidelines for printing
//Build 2015 video: Developing apps that print in Windows 10
UWP print sample
Scan from your app

[ Updated for UWP apps on Windows 10. For Windows 8.x articles, see the archive ]
Important APIs
Windows.Devices.Scanners
DeviceInformation
DeviceClass
Learn here how to scan content from your app by using a flatbed, feeder, or auto-configured scan source.
Important The Windows.Devices.Scanners APIs are part of the desktop device family. Apps can use these APIs
only on the desktop version of Windows 10.
To scan from your app, you must first list the available scanners by declaring a new DeviceInformation object and
getting the DeviceClass type. Only scanners that are installed locally with WIA drivers are listed and available to
your app.
After your app has listed the available scanners, it can use the auto-configured scan settings based on the scanner
type, or just scan using the available flatbed or feeder scan source. To use auto-configured settings, the scanner
must be enabled for auto-configuration and must not be equipped with both a flatbed and a feeder scanner. For
more info, see Auto-Configured Scanning.

Enumerate available scanners


Windows does not detect scanners automatically. You must perform this step in order for your app to
communicate with the scanner. In this example, the scanner device enumeration is done using the
Windows.Devices.Enumeration namespace.
1. First, add these using statements to your class definition file.

using Windows.Devices.Enumeration;
using Windows.Devices.Scanners;

2. Next, implement a device watcher to start enumerating scanners. For more info, see Enumerate devices.

void InitDeviceWatcher()
{
    // Create a Device Watcher class for type Image Scanner for enumerating scanners
    scannerWatcher = DeviceInformation.CreateWatcher(DeviceClass.ImageScanner);

    scannerWatcher.Added += OnScannerAdded;
    scannerWatcher.Removed += OnScannerRemoved;
    scannerWatcher.EnumerationCompleted += OnScannerEnumerationComplete;
}
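
Note that the watcher doesn't begin enumerating until it is started. A minimal sketch; the method name is
illustrative, while DeviceWatcher.Start and DeviceWatcherStatus are the real APIs:

// Start the watcher created in InitDeviceWatcher. 'StartScannerWatcher' is
// an illustrative name; only start a watcher that isn't already running.
void StartScannerWatcher()
{
    if (scannerWatcher.Status == DeviceWatcherStatus.Created ||
        scannerWatcher.Status == DeviceWatcherStatus.Stopped ||
        scannerWatcher.Status == DeviceWatcherStatus.Aborted)
    {
        scannerWatcher.Start();
    }
}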

3. Create an event handler for when a scanner is added.


private async void OnScannerAdded(DeviceWatcher sender, DeviceInformation deviceInfo)
{
    await MainPage.Current.Dispatcher.RunAsync(
        Windows.UI.Core.CoreDispatcherPriority.Normal,
        () =>
        {
            MainPage.Current.NotifyUser(String.Format("Scanner with device id {0} has been added", deviceInfo.Id), NotifyType.StatusMessage);

            // search the device list for a device with a matching device id
            ScannerDataItem match = FindInList(deviceInfo.Id);

            // If we found a match then mark it as verified and return
            if (match != null)
            {
                match.Matched = true;
                return;
            }

            // Add the new element to the end of the list of devices
            AppendToList(deviceInfo);
        });
}

Scan
1. Get an ImageScanner object
For each ImageScannerScanSource enumeration type, whether it's Default, AutoConfigured, Flatbed, or
Feeder, you must first create an ImageScanner object by calling the ImageScanner.FromIdAsync method, like
this.

ImageScanner myScanner = await ImageScanner.FromIdAsync(deviceId);

2. Just scan
To scan with the default settings, your app relies on the Windows.Devices.Scanners namespace to select a
scanner and scan from that source. No scan settings are changed. The possible scan sources are auto-configured,
flatbed, or feeder. This type of scan will most likely produce a successful scan operation, even if it scans from the
wrong source, like flatbed instead of feeder.
Note If the user places the document to scan in the feeder, the scanner will scan from the flatbed instead. If the
user tries to scan from an empty feeder, the scan job won't produce any scanned files.

var result = await myScanner.ScanFilesToFolderAsync(
    ImageScannerScanSource.Default, folder).AsTask(cancellationToken.Token, progress);

3. Scan from the Auto-configured, Flatbed, or Feeder source


Your app can use the device's Auto-Configured Scanning to scan with the optimal scan settings. With this
option, the device itself can determine the best scan settings, like color mode and scan resolution, based on the
content being scanned. The device selects the scan settings at run time for each new scan job.
Note Not all scanners support this feature, so the app must check if the scanner supports this feature before using
this setting.
In this example, the app first checks if the scanner is capable of auto-configuration and then scans. To specify either
flatbed or feeder scanner, simply replace AutoConfigured with Flatbed or Feeder.

if (myScanner.IsScanSourceSupported(ImageScannerScanSource.AutoConfigured))
{
...
// Scan API call to start scanning with Auto-Configured settings.
var result = await myScanner.ScanFilesToFolderAsync(
ImageScannerScanSource.AutoConfigured, folder).AsTask(cancellationToken.Token, progress);
...
}

Preview the scan


You can add code to preview the scan before scanning to a folder. In the example below, the app checks if the
Flatbed scanner supports preview, then previews the scan.

if (myScanner.IsPreviewSupported(ImageScannerScanSource.Flatbed))
{
    rootPage.NotifyUser("Scanning", NotifyType.StatusMessage);
    // Scan API call to get preview from the flatbed.
    var result = await myScanner.ScanPreviewToStreamAsync(
        ImageScannerScanSource.Flatbed, stream);
}
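
To display the preview image that was written to the stream, it can be handed to a XAML image source. A small
sketch, assuming stream is the in-memory stream filled above and DisplayImage is the sample's Image control:

// Render the preview scan, assuming 'stream' was filled by ScanPreviewToStreamAsync.
stream.Seek(0); // rewind to the start of the preview data
var previewBitmap = new Windows.UI.Xaml.Media.Imaging.BitmapImage();
await previewBitmap.SetSourceAsync(stream);
DisplayImage.Source = previewBitmap;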

Cancel the scan


You can let users cancel the scan job midway through a scan, like this.

void CancelScanning()
{
    if (ModelDataContext.ScenarioRunning)
    {
        if (cancellationToken != null)
        {
            cancellationToken.Cancel();
        }
        DisplayImage.Source = null;
        ModelDataContext.ScenarioRunning = false;
        ModelDataContext.ClearFileList();
    }
}

Scan with progress


1. Create a System.Threading.CancellationTokenSource object.

cancellationToken = new CancellationTokenSource();

2. Set up the progress event handler and get the progress of the scan.

rootPage.NotifyUser("Scanning", NotifyType.StatusMessage);
var progress = new Progress<UInt32>(ScanProgress);

Scanning to the pictures library


Users can scan to any folder dynamically using the FolderPicker class, but you must declare the Pictures Library
capability in the manifest to allow users to scan to that folder. For more info on app capabilities, see App capability
declarations.
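
As a sketch of the dynamic folder choice mentioned above (note that FolderPicker requires at least one file-type
filter before it can be shown):

// Let the user pick the destination folder for scanned files.
var folderPicker = new Windows.Storage.Pickers.FolderPicker();
folderPicker.SuggestedStartLocation = Windows.Storage.Pickers.PickerLocationId.PicturesLibrary;
folderPicker.FileTypeFilter.Add("*");
Windows.Storage.StorageFolder folder = await folderPicker.PickSingleFolderAsync();
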
3D Printing

[ Updated for UWP apps on Windows 10. For Windows 8.x articles, see the archive ]
This section describes how to utilize the 3D print API to add 3D printing functionality to your Universal Windows
app.
For more information on 3D printing with Windows 10, including resources for hardware partners, community
discussion forums, and general info on 3D print capabilities, see the 3D printing with Windows 10 site on the
Hardware Dev Center.

TOPIC                     DESCRIPTION

3D print from your app    Learn how to add 3D printing functionality to your Universal Windows app.
                          This topic covers how to launch the 3D print dialog after ensuring your 3D
                          model is printable and in the correct format.

Generate a 3MF package    Describes the structure of the 3D Manufacturing Format file type and how it
                          can be created and manipulated with the Windows.Graphics.Printing3D API.

Related topics
3D printing with Windows 10 (Hardware Dev Center)
UWP 3D print sample
UWP 3D printing from Unity sample
3D printing from your app

[ Updated for UWP apps on Windows 10. For Windows 8.x articles, see the archive ]
Important APIs
Windows.Graphics.Printing3D
Learn how to add 3D printing functionality to your Universal Windows app. This topic covers how to load 3D
geometry data into your app and launch the 3D print dialog after ensuring your 3D model is printable and in the
correct format. For an example of these procedures in action, see the 3D printing UWP sample.

NOTE
In the sample code in this guide, error reporting and handling is greatly simplified for the sake of brevity.

Class setup
In your class that is to have 3D print functionality, add the Windows.Graphics.Printing3D namespace.

using Windows.Graphics.Printing3D;

The following additional namespaces will be used in this guide:

using System;
using Windows.Storage;
using Windows.Storage.Pickers;
using Windows.Storage.Streams;
using Windows.System;
using Windows.UI.Xaml;
using Windows.UI.Xaml.Controls;

Next, give your class some helpful member fields. Declare a Print3DTask object to serve as a reference to the
printing task that is to be passed to the print driver. Declare a StorageFile object to hold the original 3D data file to
be loaded into the app. Declare a Printing3D3MFPackage object, which represents a print-ready 3D model with
all necessary metadata.

private Print3DTask printTask;
private StorageFile file;
private Printing3D3MFPackage package = new Printing3D3MFPackage();

Create a simple UI
This sample features three user controls: a load button which will bring a file into program memory, a fix button
which will modify the file as necessary, and a print button which will initiate the print job. The following code
creates these buttons (with their click event handlers) in your class' XAML file.
<StackPanel Orientation="Vertical" VerticalAlignment="Center">
    <Button x:Name="loadbutton" Content="Load Model from File" HorizontalAlignment="Center" Margin="5,5,5,5" VerticalAlignment="Center" Click="OnLoadClick"/>
    <Button x:Name="fixbutton" Content="Fix Model" HorizontalAlignment="Center" Margin="5,5,5,5" VerticalAlignment="Center" Click="OnFixClick"/>
    <Button x:Name="printbutton" Content="Print" HorizontalAlignment="Center" Margin="5,5,5,5" VerticalAlignment="Center" Click="OnPrintClick"/>

Add a TextBlock for UI feedback.

    <TextBlock x:Name="OutputTextBlock" TextAlignment="Center"></TextBlock>
</StackPanel>

Get the 3D data


The method by which your app acquires 3D geometry data will vary. Your app may retrieve data from a 3D scan,
download model data from a web resource, or generate a 3D mesh programmatically using equations or user
input. For the sake of simplicity, this guide will show how to load a 3D data file (of any of several common file
types) into program memory from device storage. The 3D Builder model library provides a variety of models that
you can easily download to your device.
In your OnLoadClick method, use the FileOpenPicker class to load a single file into your app's memory.

private async void OnLoadClick(object sender, RoutedEventArgs e) {

    FileOpenPicker openPicker = new FileOpenPicker();

    // allow common 3D data file types
    openPicker.FileTypeFilter.Add(".3mf");
    openPicker.FileTypeFilter.Add(".stl");
    openPicker.FileTypeFilter.Add(".ply");
    openPicker.FileTypeFilter.Add(".obj");

    // pick a file and assign it to this class' 'file' member
    file = await openPicker.PickSingleFileAsync();
    if (file == null) {
        return;
    }

Use 3D Builder to convert to 3D Manufacturing Format (.3mf)


At this point, you are able to load a 3D data file into your app's memory. However, 3D geometry data can come in
many different formats, and not all are efficient for 3D printing. Windows 10 uses the 3D Manufacturing Format
(.3mf) file type for all 3D printing tasks.

NOTE
The .3mf file type offers a great deal of functionality not covered in this tutorial. To learn more about 3MF and the features it
provides to producers and consumers of 3D products, refer to the 3MF Specification. To learn how to harness these features
using Windows 10 APIs, see the Generate a 3MF package tutorial.

Fortunately, the 3D Builder app can open files of most popular 3D formats and save them as .3mf files. In this
example, where the file type could vary, a very simple solution is to open 3D Builder and prompt the user to save
the imported data as a .3mf file and then reload it.
NOTE
In addition to converting file formats, 3D Builder provides simple tools to edit your models, add color data, and perform other
print-specific operations, so it is often worth integrating into an app that deals with 3D printing.

    // if user loaded a non-3mf file type
    if (file.FileType != ".3mf") {

        // elect 3D Builder as the application to launch
        LauncherOptions options = new LauncherOptions();
        options.TargetApplicationPackageFamilyName = "Microsoft.3DBuilder_8wekyb3d8bbwe";

        // Launch the retrieved file in 3D Builder
        bool success = await Windows.System.Launcher.LaunchFileAsync(file, options);

        // prompt the user to save as .3mf
        OutputTextBlock.Text = "save " + file.Name + " as a .3mf file and reload.";

        // have user choose another file (ideally the newly-saved .3mf file)
        file = await openPicker.PickSingleFileAsync();

    } else {
        // if the file type is .3mf
        // notify user that load was successful
        OutputTextBlock.Text = file.Name + " loaded as file";
    }
}

Repair model data for 3D printing


Not all 3D model data is printable, even in the .3mf type. In order for the printer to correctly determine what space
to fill and what to leave empty, the model(s) to be printed must (each) be a single seamless mesh, have outward-
facing surface normals, and have manifold geometry. Issues in these areas can crop up in a variety of different
forms and can be hard to spot in complex shapes. Fortunately, modern software solutions are often adequate for
converting raw geometry to printable 3D shapes. This is known as repairing the model and will be done in the
OnFixClick method.

The 3D data file must be opened as an IRandomAccessStream, which can then be used to generate a
Printing3DModel object.

private async void OnFixClick(object sender, RoutedEventArgs e) {

    // read the loaded file's data as a data stream
    IRandomAccessStream fileStream = await file.OpenAsync(FileAccessMode.Read);

    // assign a Printing3DModel to this data stream
    Printing3DModel model = await package.LoadModelFromPackageAsync(fileStream);

    // use Printing3DModel's repair function
    OutputTextBlock.Text = "repairing model";
    await model.RepairAsync();

The Printing3DModel object is now repaired and printable. Use SaveModelToPackageAsync to assign the
model to the Printing3D3MFPackage object that you declared when creating the class.
// save model to this class' Printing3D3MFPackage
OutputTextBlock.Text = "saving model to 3MF package";
await package.SaveModelToPackageAsync(model);

Execute printing task: create a TaskRequested handler


Later on, when the 3D print dialog is displayed to the user and the user elects to begin printing, your app will need
to pass in the desired parameters to the 3D print pipeline. The 3D print API will raise the TaskRequested event.
You must write a method to handle this event appropriately. As always, the handler method must be of the same
type as its event: The TaskRequested event has parameters Print3DManager (a reference to its sender object)
and a Print3DTaskRequestedEventArgs object, which holds most of the relevant information. Its return type is
void.

private void MyTaskRequested(Print3DManager sender, Print3DTaskRequestedEventArgs args) {

The core purpose of this method is to use the args parameter to send a Printing3D3MFPackage down the
pipeline. The Print3DTaskRequestedEventArgs type has one property: Request. It is of the type
Print3DTaskRequest and represents one print job request. Its method CreateTask allows the program to submit
the right information for your print job, and it returns a reference to the Print3DTask object which was sent down
the 3D print pipeline.
CreateTask has the following input parameters: a string for the print job name, a string for the ID of the printer to
use, and a Print3DTaskSourceRequestedHandler delegate. The delegate is automatically invoked when the
3DTaskSourceRequested event is raised (this is done by the API itself). The important thing to note is that this
delegate is invoked when a print job is initiated, and it is responsible for providing the right 3D print package.
Print3DTaskSourceRequestedHandler takes one parameter, a Print3DTaskSourceRequestedArgs object
which provides the data to be sent. The one public method of this class, SetSource, accepts the package to be
printed. Implement a Print3DTaskSourceRequestedHandler delegate as follows.

// this delegate handles the API's request for a source package
Print3DTaskSourceRequestedHandler sourceHandler = delegate (Print3DTaskSourceRequestedArgs sourceRequestedArgs) {
    sourceRequestedArgs.SetSource(package);
};

Next, call CreateTask, using the newly-defined delegate, sourceHandler.

// the Print3DTaskRequest ('Request'), a member of 'args', creates a Print3DTask to be sent down the pipeline.
printTask = args.Request.CreateTask("Print Title", "Default", sourceHandler);

The returned Print3DTask is assigned to the class variable declared in the beginning. You can now (optionally) use
this reference to handle certain events thrown by the task.

// optional events to handle
printTask.Completed += Task_Completed;
printTask.Submitting += Task_Submitting;

NOTE
You must implement a Task_Submitting and Task_Completed method if you wish to register them to these events.
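
Minimal illustrative versions of those two handlers might look like the following sketch; the signatures match
the Print3DTask events, but the bodies are placeholders rather than sample code.

// Illustrative handlers for the optional events registered above.
private void Task_Submitting(Print3DTask sender, object args) {
    // e.g., report "submitting print job" in the UI
}

private void Task_Completed(Print3DTask sender, Print3DTaskCompletedEventArgs args) {
    // e.g., inspect args.Completion to report success or failure
}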

Execute printing task: open 3D print dialog


The final piece of code needed is that which launches the 3D print dialog. Like a conventional printing dialog
window, the 3D print dialog provides a number of last-minute printing options and allows the user to choose which
printer to use (whether connected via USB or the network).
Register your MyTaskRequested method with the TaskRequested event.

private async void OnPrintClick(object sender, RoutedEventArgs e) {

    // get a reference to this class' Print3DManager
    Print3DManager myManager = Print3DManager.GetForCurrentView();

    // register the method 'MyTaskRequested' to the Print3DManager's TaskRequested event
    myManager.TaskRequested += MyTaskRequested;

After registering your TaskRequested event handler, you can invoke the method ShowPrintUIAsync, which
brings up the 3D print dialog in the current application window.

    // show the 3D print dialog
    OutputTextBlock.Text = "opening print dialog";
    var result = await Print3DManager.ShowPrintUIAsync();

Finally, it is a good practice to de-register your event handlers once your app resumes control.

    // remove the print task request after dialog is shown
    myManager.TaskRequested -= MyTaskRequested;
}

Related topics
Generate a 3MF package
3D printing UWP sample
Generate a 3MF package

[ Updated for UWP apps on Windows 10. For Windows 8.x articles, see the archive ]
Important APIs
Windows.Graphics.Printing3D
[Some information relates to pre-released product which may be substantially modified before it's commercially
released. Microsoft makes no warranties, express or implied, with respect to the information provided here.]
This guide describes the structure of the 3D Manufacturing Format document and how it can be created and
manipulated with the Windows.Graphics.Printing3D API.

What is 3MF?
The 3D Manufacturing Format is a set of conventions for using XML to describe the appearance and structure of 3D
models for the purpose of manufacturing (3D printing). It defines a set of parts (some required and some optional)
and their relationships, with the goal of providing all necessary information to a 3D manufacturing device. A data
set that adheres to the 3D Manufacturing Format can be saved as a file with the .3mf extension.
In Windows 10, the Printing3D3MFPackage class in the Windows.Graphics.Printing3D namespace is
analogous to a single .3mf file, and other classes map to the particular XML elements in the file. This guide
describes how each of the main parts of a 3MF document can be created and set programmatically, how the 3MF
Materials Extension can be utilized, and how a Printing3D3MFPackage object can be converted and saved as a
.3mf file. For more information on the standards of 3MF or the 3MF Materials Extension, see the 3MF Specification.

Core classes in the 3MF structure


The Printing3D3MFPackage class represents a complete 3MF document, and at the core of a 3MF document is its
model part, represented by the Printing3DModel class. Most of the information we wish to specify about a 3D
model will be stored by setting the properties of the Printing3DModel class and the properties of their underlying
classes.

var localPackage = new Printing3D3MFPackage();

var model = new Printing3DModel();
// specify scaling units for model data
model.Unit = Printing3DModelUnit.Millimeter;

Metadata

The model part of a 3MF document can hold metadata in the form of key/value pairs of strings stored in the
Metadata property. There are a number of predefined names of metadata, but other pairs can be added as part of
an extension (described in more detail in the 3MF specification). It is up to the receiver of the package (a 3D
manufacturing device) to determine whether and how to handle metadata, but it is good practice to include as
much basic info as possible in the 3MF package:
model.Metadata.Add("Title", "Cube");
model.Metadata.Add("Designer", "John Smith");
model.Metadata.Add("CreationDate", "1/1/2016");

Mesh data
In the context of this guide, a mesh is a body of 3-dimensional geometry constructed from a single set of vertices
(though it does not have to appear as a single solid). A mesh part is represented by the Printing3DMesh class. A
valid mesh object must contain information about the location of all of its vertices as well as all the triangle faces
that exist between certain sets of vertices.
The following method adds vertices to a mesh and then gives them locations in 3D space:

private async Task GetVerticesAsync(Printing3DMesh mesh) {

    Printing3DBufferDescription description;

    description.Format = Printing3DBufferFormat.Printing3DDouble;

    // have 3 xyz values
    description.Stride = 3;

    // have 8 vertices in all in this mesh
    mesh.CreateVertexPositions(sizeof(double) * 3 * 8);
    mesh.VertexPositionsDescription = description;

    // set the locations (in 3D coordinate space) of each vertex
    using (var stream = mesh.GetVertexPositions().AsStream()) {
        double[] vertices =
        {
            0, 0, 0,
            10, 0, 0,
            0, 10, 0,
            10, 10, 0,
            0, 0, 10,
            10, 0, 10,
            0, 10, 10,
            10, 10, 10,
        };

        // convert vertex data to a byte array
        byte[] vertexData = vertices.SelectMany(v => BitConverter.GetBytes(v)).ToArray();

        // write the locations to each vertex
        await stream.WriteAsync(vertexData, 0, vertexData.Length);
    }
    // update vertex count: 8 vertices in the cube
    mesh.VertexCount = 8;
}

The next method defines all of the triangles to be drawn across these vertices:
private static async Task SetTriangleIndicesAsync(Printing3DMesh mesh) {

    Printing3DBufferDescription description;

    description.Format = Printing3DBufferFormat.Printing3DUInt;
    // 3 vertex indices
    description.Stride = 3;
    // 12 triangles in all in the cube
    mesh.IndexCount = 12;

    mesh.TriangleIndicesDescription = description;

    // allocate space for 12 triangles
    mesh.CreateTriangleIndices(sizeof(UInt32) * 3 * 12);

    // get a datastream of the triangle indices (should be blank at this point)
    using (var stream2 = mesh.GetTriangleIndices().AsStream())
    {
        // define a set of triangle indices: each row is one triangle. The values in each row
        // correspond to the index of the vertex.
        UInt32[] indices =
        {
            1, 0, 2,
            1, 2, 3,
            0, 1, 5,
            0, 5, 4,
            1, 3, 7,
            1, 7, 5,
            2, 7, 3,
            2, 6, 7,
            0, 6, 2,
            0, 4, 6,
            6, 5, 7,
            4, 5, 6,
        };
        // convert index data to byte array
        var vertexData = indices.SelectMany(v => BitConverter.GetBytes(v)).ToArray();
        var len = vertexData.Length;
        // write index data to the triangle indices stream
        await stream2.WriteAsync(vertexData, 0, vertexData.Length);
    }
}

NOTE
All triangles must have their indices defined in counter-clockwise order (when viewing the triangle from outside of the mesh
object), so that their face-normal vectors point outward.

When a Printing3DMesh object contains valid sets of vertices and triangles, it should then be added to the model's
Meshes property. All Printing3DMesh objects in a package must be stored under the Meshes property of the
Printing3DModel class.

// add the mesh to the model
model.Meshes.Add(mesh);
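
Putting the pieces together, a hypothetical call sequence for building the cube, assuming the two helper methods
above live in the same class (it recaps the Add call shown above for completeness):

// Build the cube mesh with the helpers above, then attach it to the model.
var mesh = new Printing3DMesh();
await GetVerticesAsync(mesh);
await SetTriangleIndicesAsync(mesh);
model.Meshes.Add(mesh);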

Create materials
A 3D model can hold data for multiple materials. This convention is intended to take advantage of 3D
manufacturing devices that can use multiple materials on a single print job. There are also multiple types of
material groups, each one capable of supporting a number of different individual materials. Each material group
must have a unique reference id number, and each material within that group must also have a unique id.
The different mesh objects within a model can then reference these materials. Furthermore, individual triangles on
each mesh can specify different materials. Further still, different materials can even be represented within a single
triangle, with each triangle vertex having a different material assigned to it and the face material calculated as the
gradient between them.
This guide will first show how to create different kinds of materials within their respective material groups and
store them as resources on the model object. Then, we will go about assigning different materials to individual
meshes and individual triangles.
Base materials
The default material type is Base Material, which has both a Color Material value (described below) and a name
attribute that is intended to specify the type of material to use.

// add material group
// all material indices need to start from 1: 0 is a reserved id
// create new base materialgroup with id = 1
var baseMaterialGroup = new Printing3DBaseMaterialGroup(1);

// create color objects
// 'A' should be 255 if alpha = 100%
var darkBlue = Windows.UI.Color.FromArgb(255, 20, 20, 90);
var orange = Windows.UI.Color.FromArgb(255, 250, 120, 45);
var teal = Windows.UI.Color.FromArgb(255, 1, 250, 200);

// create new ColorMaterials, assigning color objects
var colrMat = new Printing3DColorMaterial();
colrMat.Color = darkBlue;

var colrMat2 = new Printing3DColorMaterial();
colrMat2.Color = orange;

var colrMat3 = new Printing3DColorMaterial();
colrMat3.Color = teal;

// setup new materials using the ColorMaterial objects
// set desired material type in the Name property
var baseMaterial = new Printing3DBaseMaterial {
    Name = Printing3DBaseMaterial.Pla,
    Color = colrMat
};

var baseMaterial2 = new Printing3DBaseMaterial {
    Name = Printing3DBaseMaterial.Abs,
    Color = colrMat2
};

// add base materials to the basematerialgroup
// material group index 0
baseMaterialGroup.Bases.Add(baseMaterial);
// material group index 1
baseMaterialGroup.Bases.Add(baseMaterial2);

// add material group to the basegroups property of the model
model.Material.BaseGroups.Add(baseMaterialGroup);

NOTE
The 3D manufacturing device will determine which available physical materials map to which virtual material elements stored
in the 3MF. Material mapping doesn't have to be 1:1: if a 3D printer only uses one material, it will print the whole model in
that material, regardless of which objects or faces were assigned different materials.

Color materials
Color Materials are similar to Base Materials, but they do not contain a name. Thus, they give no instructions as
to what type of material should be used by the machine. They hold only color data, and let the machine choose the
material type (and the machine may then prompt the user to choose). In the code below, the colrMat objects from
the previous method are used on their own.

// add ColorMaterials to the Color Material Group (with id 2)
var colorGroup = new Printing3DColorMaterialGroup(2);

// add the previous ColorMaterial objects to this ColorMaterialGroup
colorGroup.Colors.Add(colrMat);
colorGroup.Colors.Add(colrMat2);
colorGroup.Colors.Add(colrMat3);

// add colorGroup to the ColorGroups property on the model
model.Material.ColorGroups.Add(colorGroup);

Composite materials
Composite Materials simply instruct the manufacturing device to use a uniform mixture of different Base
Materials. Each Composite Material Group must reference exactly one Base Material Group from which to
draw ingredients. Additonally, the Base Materials within this group that are to be made available must be listed
out in a Material Indices list, which each Composite Material will then reference when specifying the ratios
(every Composite Material is simply a ratio of Base Materials).
// CompositeGroups
// create new composite material group with id = 3
var compositeGroup = new Printing3DCompositeMaterialGroup(3);

// indices point to base materials in BaseMaterialGroup with id = 1
compositeGroup.MaterialIndices.Add(0);
compositeGroup.MaterialIndices.Add(1);

// create new composite materials
var compMat = new Printing3DCompositeMaterial();
// fraction adds to 1.0
compMat.Values.Add(0.2); // .2 of first base material in BaseMaterialGroup 1
compMat.Values.Add(0.8); // .8 of second base material in BaseMaterialGroup 1

var compMat2 = new Printing3DCompositeMaterial();
// fraction adds to 1.0
compMat2.Values.Add(0.5);
compMat2.Values.Add(0.5);

var compMat3 = new Printing3DCompositeMaterial();
// fraction adds to 1.0
compMat3.Values.Add(0.8);
compMat3.Values.Add(0.2);

var compMat4 = new Printing3DCompositeMaterial();
// fraction adds to 1.0
compMat4.Values.Add(0.4);
compMat4.Values.Add(0.6);

// add composites to group
compositeGroup.Composites.Add(compMat);
compositeGroup.Composites.Add(compMat2);
compositeGroup.Composites.Add(compMat3);
compositeGroup.Composites.Add(compMat4);

// add group to model
model.Material.CompositeGroups.Add(compositeGroup);

Texture coordinate materials


3MF supports the use of 2D images to color the surfaces of 3D models. This way, the model can convey much more
color data per triangle face (as opposed to having just one color value per triangle vertex). Like Color Materials,
texture coordinate materials convey only color data. To use a 2D texture, a texture resource must first be declared:

// texture resource setup
Printing3DTextureResource texResource = new Printing3DTextureResource();
// name conveys the path within the 3MF document
texResource.Name = "/3D/Texture/msLogo.png";

// in this case, we reference texture data in the sample appx, convert it to
// an IRandomAccessStream, and assign it as the TextureData
Uri texUri = new Uri("ms-appx:///Assets/msLogo.png");
StorageFile file = await StorageFile.GetFileFromApplicationUriAsync(texUri);
IRandomAccessStreamWithContentType iRandomAccessStreamWithContentType = await file.OpenReadAsync();
texResource.TextureData = iRandomAccessStreamWithContentType;
// add this texture resource to the 3MF Package
localPackage.Textures.Add(texResource);

// assign this texture resource to a Printing3DModelTexture
var modelTexture = new Printing3DModelTexture();
modelTexture.TextureResource = texResource;

NOTE
Texture data belongs to the 3MF Package itself, not to the model part within the package.

Next, we must fill out Texture2Coord Materials. Each of these references a texture resource and specifies a
particular point on the image (in UV coordinates).

// texture2Coord Group
// create new Texture2CoordMaterialGroup with id = 4
var tex2CoordGroup = new Printing3DTexture2CoordMaterialGroup(4);

// create texture materials:
// set up four tex2coordmaterial objects with four (u,v) pairs,
// mapping to each corner of the image:

var tex2CoordMaterial = new Printing3DTexture2CoordMaterial();
tex2CoordMaterial.U = 0.0;
tex2CoordMaterial.V = 1.0;
tex2CoordGroup.Texture2Coords.Add(tex2CoordMaterial);

var tex2CoordMaterial2 = new Printing3DTexture2CoordMaterial();
tex2CoordMaterial2.U = 1.0;
tex2CoordMaterial2.V = 1.0;
tex2CoordGroup.Texture2Coords.Add(tex2CoordMaterial2);

var tex2CoordMaterial3 = new Printing3DTexture2CoordMaterial();
tex2CoordMaterial3.U = 0.0;
tex2CoordMaterial3.V = 0.0;
tex2CoordGroup.Texture2Coords.Add(tex2CoordMaterial3);

var tex2CoordMaterial4 = new Printing3DTexture2CoordMaterial();
tex2CoordMaterial4.U = 1.0;
tex2CoordMaterial4.V = 0.0;
tex2CoordGroup.Texture2Coords.Add(tex2CoordMaterial4);

// add our Printing3DModelTexture to the Texture property of the group
tex2CoordGroup.Texture = modelTexture;

// add metadata about the texture so that u,v values can be used
model.Metadata.Add("tex4", "/3D/Texture/msLogo.png");
// add group to groups on the model's material
model.Material.Texture2CoordGroups.Add(tex2CoordGroup);

Map materials to faces


In order to dictate which materials are mapped to which vertices on each triangle, we must do some more work on
the mesh object of our model (if a model contains multiple meshes, they must each have their materials assigned
separately). As mentioned above, materials are assigned per-vertex, per-triangle. Refer to the code below to see
how this information is entered and interpreted.
private static async Task SetMaterialIndicesAsync(Printing3DMesh mesh) {
    // declare a description of the material indices
    Printing3DBufferDescription description;
    description.Format = Printing3DBufferFormat.Printing3DUInt;
    // 4 indices for material description per triangle
    description.Stride = 4;
    // 12 triangles total
    mesh.IndexCount = 12;
    mesh.TriangleMaterialIndicesDescription = description;

    // create space for storing this data
    mesh.CreateTriangleMaterialIndices(sizeof(UInt32) * 4 * 12);

    {
        // each row is a triangle face (in the order they were created)
        // first column is the id of the material group, last 3 columns show which material id (within that group)
        // maps to each triangle vertex (in the order they were listed when creating triangles)
        UInt32[] indices =
        {
            // base materials:
            // in the BaseMaterialGroup (id=1), the BaseMaterial with id=0 will be applied to these triangle vertices
            1, 0, 0, 0,
            1, 0, 0, 0,
            // color materials:
            // in the ColorMaterialGroup (id=2), the ColorMaterials with these ids will be applied to these triangle vertices
            2, 1, 1, 1,
            2, 1, 1, 1,
            2, 0, 0, 0,
            2, 0, 0, 0,
            2, 0, 1, 2,
            2, 1, 0, 2,
            // composite materials:
            // in the CompositeMaterialGroup (id=3), the CompositeMaterial with id=0 will be applied to these triangles
            3, 0, 0, 0,
            3, 0, 0, 0,
            // texture materials:
            // in the Texture2CoordMaterialGroup (id=4), each texture coordinate is mapped to the appropriate vertex on these
            // two adjacent triangle faces, so that the square face they create displays the original rectangular image
            4, 0, 3, 1,
            4, 2, 3, 0,
        };

        // get the current (unassigned) vertex data as a stream and write our new 'indices' data to it.
        var stream = mesh.GetTriangleMaterialIndices().AsStream();
        var vertexData = indices.SelectMany(v => BitConverter.GetBytes(v)).ToArray();
        var len = vertexData.Length;
        await stream.WriteAsync(vertexData, 0, vertexData.Length);
    }
}

Components and build


The component structure allows the user to place more than one mesh object in a printable 3D model. A
Printing3DComponent object contains a single mesh and a list of references to other components. This is actually
a list of Printing3DComponentWithMatrix objects. Printing3DComponentWithMatrix objects each contain a
Printing3DComponent and, importantly, a transform matrix that applies to the mesh and contained components
of said Printing3DComponent.
For example, a model of a car might consist of a "Body" Printing3DComponent that holds the mesh for the car's
body. The "Body" component may then contain references to four different Printing3DComponentWithMatrix
objects, which all reference the same Printing3DComponent with the "Wheel" mesh and contain four different
transform matrices (mapping the wheels to four different positions on the car's body). In this scenario, the "Body"
mesh and "Wheel" mesh would each only need to be stored once, even though the final product would feature five
meshes in total.
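
As a hedged sketch of that car example, the mesh variables, wheel positions, and use of System.Numerics below
are illustrative assumptions, not sample code:

// Illustrative car example: one wheel component reused four times.
// 'bodyMesh' and 'wheelMesh' are assumed Printing3DMesh objects.
var body = new Printing3DComponent { Mesh = bodyMesh };
var wheel = new Printing3DComponent { Mesh = wheelMesh };
model.Components.Add(body);
model.Components.Add(wheel);

// attach the same wheel component to the body at four different positions
foreach (var offset in new[] {
    new Vector3(-20, -10, 0), new Vector3(20, -10, 0),
    new Vector3(-20, 10, 0), new Vector3(20, 10, 0) })
{
    body.Components.Add(new Printing3DComponentWithMatrix {
        Component = wheel,
        Matrix = Matrix4x4.CreateTranslation(offset)
    });
}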
All Printing3DComponent objects must be directly referenced in the model's Components property. The one
particular component that is to be used in the print job is stored in the Build property.

// create new component
Printing3DComponent component = new Printing3DComponent();

// assign mesh to the component's mesh
component.Mesh = mesh;

// add component to the model's list of all used components
// a model can have references to multiple components
model.Components.Add(component);

// create the transform matrix
var componentWithMatrix = new Printing3DComponentWithMatrix();
// assign component to this componentwithmatrix
componentWithMatrix.Component = component;

// create an identity matrix
var identityMatrix = Matrix4x4.Identity;

// use the identity matrix as the transform matrix (no transformation)
componentWithMatrix.Matrix = identityMatrix;

// add component to the build property.
model.Build.Components.Add(componentWithMatrix);

Save package
Now that we have a model, with defined materials and components, we can save it to the package.

// save the model to the package:
await localPackage.SaveModelToPackageAsync(model);
// get the model stream
var modelStream = localPackage.ModelPart;

// fix any textures in the model file
localPackage.ModelPart = await FixTextureContentType(modelStream);

From here, we can either initiate a print job within the app (see 3D printing from your app), or save this
Printing3D3MFPackage as a .3mf file.
The following method takes a finished Printing3D3MFPackage and saves its data to a .3mf file.
private async void SaveTo3mf(Printing3D3MFPackage localPackage) {

    // prompt the user to choose a location to save the file to
    FileSavePicker savePicker = new FileSavePicker();
    savePicker.DefaultFileExtension = ".3mf";
    savePicker.SuggestedStartLocation = PickerLocationId.DocumentsLibrary;
    savePicker.FileTypeChoices.Add("3MF File", new[] { ".3mf" });
    var storageFile = await savePicker.PickSaveFileAsync();
    if (storageFile == null) {
        return;
    }

    // save the 3MF Package to an IRandomAccessStream
    using (var stream = await localPackage.SaveAsync()) {
        // go to the beginning of the stream
        stream.Seek(0);

        // read from the file stream and write to a buffer
        using (var dataReader = new DataReader(stream)) {
            await dataReader.LoadAsync((uint)stream.Size);
            var buffer = dataReader.ReadBuffer((uint)stream.Size);

            // write from the buffer to the storagefile specified
            await FileIO.WriteBufferAsync(storageFile, buffer);
        }
    }
}

Related topics
3D printing from your app
3D printing UWP sample
NFC

[ Updated for UWP apps on Windows 10. For Windows 8.x articles, see the archive ]
This section contains articles on how to integrate NFC into Universal Windows Platform (UWP) apps.

TOPIC                           DESCRIPTION

Create an NFC Smart Card app    Windows Phone 8.1 supported NFC card emulation apps using a SIM-based
                                secure element, but that model required secure payment apps to be tightly
                                coupled with mobile-network operators (MNO). This limited the variety of
                                possible payment solutions by other merchants or developers that are not
                                coupled with MNOs. In Windows 10 Mobile, we have introduced a new card
                                emulation technology called Host Card Emulation (HCE). This article serves
                                as a guide to develop an HCE app.
Create an NFC Smart Card app

Important This topic applies to Windows 10 Mobile only.
Windows Phone 8.1 supported NFC card emulation apps using a SIM-based secure element, but that model
required secure payment apps to be tightly coupled with mobile-network operators (MNOs). This limited the variety
of possible payment solutions by other merchants or developers that are not coupled with MNOs. In Windows 10
Mobile, we have introduced a new card emulation technology called Host Card Emulation (HCE). HCE technology
allows your app to directly communicate with an NFC card reader. This topic illustrates how Host Card Emulation
(HCE) works on Windows 10 Mobile devices and how you can develop an HCE app so that your customers can
access your services through their phone instead of a physical card without collaborating with an MNO.

What you need to develop an HCE app


To develop an HCE-based card emulation app for Windows 10 Mobile, you will need to get your development
environment set up. You can get set up by installing Microsoft Visual Studio 2015, which includes the Windows
developer tools and the Windows 10 Mobile emulator with NFC emulation support. For more information about
getting set up, see Get set up.
Optionally, if you want to test with a real Windows 10 Mobile device instead of the included Windows 10 Mobile
emulator, you will also need the following items.
A Windows 10 Mobile device with NFC HCE support. Currently, the Lumia 730, 830, 640, and the 640 XL have
the hardware to support NFC HCE apps.
A reader terminal that supports protocols ISO/IEC 14443-4 and ISO/IEC 7816-4
Windows 10 Mobile implements an HCE service that provides the following functionalities.
Apps can register the applet identifiers (AIDs) for the cards they would like to emulate.
Conflict resolution and routing of the Application Protocol Data Unit (APDU) command and response pairs to
one of the registered apps based on the external reader card selection and user preference.
Handling of events and notifications to the apps as a result of user actions.
Windows 10 supports emulation of smart cards that are based on ISO-DEP (ISO/IEC 14443-4) and communicates
using APDUs as defined in the ISO/IEC 7816-4 specification. Windows 10 supports ISO/IEC 14443-4 Type A
technology for HCE apps. Type B, Type F, and non-ISO-DEP (e.g., MIFARE) technologies are routed to the SIM by
default.
Only Windows 10 Mobile devices are enabled with the card emulation feature. SIM-based and HCE-based card
emulation is not available on other versions of Windows 10.
The architecture for HCE and SIM based card emulation support is shown in the diagram below.
App selection and AID routing
To develop an HCE app, you must understand how Windows 10 Mobile devices route AIDs to a specific app
because users can install multiple different HCE apps. Each app can register multiple HCE and SIM-based cards.
Legacy Windows Phone 8.1 apps that are SIM-based will continue to work on Windows 10 Mobile as long as the
user chooses the "SIM Card" option as their default payment card in the NFC Setting menu. This is set by default
when turning the device on for the first time.
When the user taps their Windows 10 Mobile device to a terminal, the data is automatically routed to the proper
app installed on the device. This routing is based on the applet IDs (AIDs) which are 5-16 byte identifiers. During a
tap, the external terminal will transmit a SELECT command APDU to specify the AID it would like all subsequent
APDU commands to be routed to. Subsequent SELECT commands will change the routing again. Based on the AIDs
registered by apps and user settings, the APDU traffic is routed to a specific app, which will send a response APDU.
Be aware that a terminal may want to communicate with several different apps during the same tap. So you must
ensure your app's background task exits as quickly as possible when deactivated to make room for another app's
background task to respond to the APDU. We will discuss background tasks later in this topic.
HCE apps must register themselves with particular AIDs they can handle so they will receive APDUs for an AID.
Apps declare AIDs by using AID groups. An AID group is conceptually equivalent to an individual physical card. For
example, one credit card is declared with an AID group and a second credit card from a different bank is declared
with a different, second AID group, even though both of them may have the same AID.

Conflict resolution for payment AID groups


When an app registers physical cards (AID groups), it can declare the AID group category as either "Payment" or
"Other." While there can be multiple payment AID groups registered at any given time, only one of these payment
AID groups may be enabled for Tap and Pay at a time, which is selected by the user. This behavior exists because
the user expects to be in control of consciously choosing a single payment, credit, or debit card to use so they don't
pay with a different unintended card when tapping their device to a terminal.
However, multiple AID groups registered as "Other" can be enabled at the same time without user interaction. This
behavior exists because other types of cards like loyalty, coupons, or transit are expected to just work without any
effort or prompting whenever the user taps their phone.
All the AID groups that are registered as "Payment" appear in the list of cards in the NFC Settings page, where the
user can select their default payment card. When a default payment card is selected, the app that registered this
payment AID group becomes the default payment app. Default payment apps can enable or disable any of their
AID groups without user interaction. If the user declines the default payment app prompt, then the current default
payment app (if any) continues to remain as default. The following screenshot shows the NFC Settings page.
Using the example screenshot above, if the user changes their default payment card to another card that is not
registered by "HCE Application 1," the system creates a confirmation prompt for the user's consent. However, if the
user changes their default payment card to another card that is registered by "HCE Application 1," the system does
not create a confirmation prompt, because "HCE Application 1" is already the default payment app.

Conflict resolution for non-payment AID groups


Non-payment cards categorized as "Other" do not appear in the NFC settings page.
Your app can create, register, and enable non-payment AID groups in the same manner as payment AID groups.
The main difference is that for non-payment AID groups the emulation category is set to "Other" as opposed to
"Payment". After registering the AID group with the system, you need to enable the AID group to receive NFC
traffic. When you try to enable a non-payment AID group to receive traffic, the user is not prompted for a
confirmation unless there is a conflict with one of the AIDs already registered in the system by a different app. If
there is a conflict, the user will be prompted with information about which card and its associated app will be
disabled if the user chooses to enable the newly registered AID group.
Coexistence with SIM-based NFC applications
In Windows 10 Mobile, the system sets up the NFC controller routing table that is used to make routing decisions
at the controller layer. The table contains routing information for the following items.
Individual AID routes.
Protocol-based route (ISO-DEP).
Technology-based route (NFC-A/B/F).
When an external reader sends a "SELECT AID" command, the NFC controller first checks AID routes in the routing
table for a match. If there is no match, it will use the protocol-based route as the default route for ISO-DEP
(14443-4-A) traffic. For any other non-ISO-DEP traffic, it will use the technology-based route.
Windows 10 Mobile provides a menu option "SIM Card" in the NFC Settings page to continue to use legacy
Windows Phone 8.1 SIM-based apps, which do not register their AIDs with the system. If the user selects "SIM card"
as their default payment card, then the ISO-DEP route is set to the UICC; for all other selections in the drop-down
menu, the ISO-DEP route is set to the host.
The ISO-DEP route is set to "SIM Card" for devices that have an SE-enabled SIM card when the device is booted for
the first time with Windows 10 Mobile. When the user installs an HCE-enabled app and that app enables any HCE
AID group registrations, the ISO-DEP route will be pointed to the host. New SIM-based applications need to
register the AIDs in the SIM in order for the specific AID routes to be populated in the controller routing table.
Creating an HCE-based app
Your HCE app has two parts.
The main foreground app for the user interaction.
A background task that is triggered by the system to process APDUs for a given AID.
Because of the extremely tight performance requirements for loading your background task in response to an NFC
tap, we recommend that your entire background task be implemented in C++/CX native code (including any
dependencies, references, or libraries you depend on) rather than C# or managed code. While C# and managed
code normally perform well, there is overhead, like loading the .NET CLR, that can be avoided by writing it in
C++/CX.

Create and register your background task


You need to create a background task in your HCE app for processing and responding to APDUs routed to it by the
system. The first time your app is launched, the foreground app registers an HCE background task; the Register call
returns an IBackgroundTaskRegistration object, as shown in the following code.

var taskBuilder = new BackgroundTaskBuilder();
taskBuilder.Name = bgTaskName;
taskBuilder.TaskEntryPoint = taskEntryPoint;
taskBuilder.SetTrigger(new SmartCardTrigger(SmartCardTriggerType.EmulatorHostApplicationActivated));
bgTask = taskBuilder.Register();

Notice that the task trigger is set to SmartCardTriggerType.EmulatorHostApplicationActivated. This means
that whenever a SELECT AID command APDU is received by the OS resolving to your app, your background task
will be launched.

Receive and respond to APDUs


When there is an APDU targeted for your app, the system will launch your background task. Your background task
receives the APDU through the CommandApdu property of the SmartCardEmulatorApduReceivedEventArgs
object and responds to the APDU using the TryRespondAsync method of the same object. For performance
reasons, keep your background task light. For example, respond to the APDUs
immediately and exit your background task when all processing is complete. Due to the nature of NFC transactions,
users tend to hold their device against the reader for only a very short amount of time. Your background task will
continue to receive traffic from the reader until your connection is deactivated, in which case you will receive a
SmartCardEmulatorConnectionDeactivatedEventArgs object. Your connection can be deactivated for
the following reasons, as indicated in the SmartCardEmulatorConnectionDeactivatedEventArgs.Reason
property.
If the connection is deactivated with the ConnectionLost value, it means that the user pulled their device away
from the reader. If your app needs the user to tap the terminal longer, you might want to consider prompting
them with feedback. You should terminate your background task quickly (by completing your deferral) to
ensure that if they tap again, it won't be delayed waiting for the previous background task to exit.
If the connection is deactivated with the ConnectionRedirected value, it means that the terminal sent a new SELECT
AID command APDU directed to a different AID. In this case, your app should exit the background task
immediately (by completing your deferral) to allow another background task to run.
The background task should also register for the Canceled event on the IBackgroundTaskInstance interface, and
likewise quickly exit the background task (by completing your deferral) because this event is fired by the system
when it is finished with your background task. Below is code that demonstrates an HCE app background task.

void BgTask::Run(
    IBackgroundTaskInstance^ taskInstance)
{
    m_triggerDetails = static_cast<SmartCardTriggerDetails^>(taskInstance->TriggerDetails);
    if (m_triggerDetails == nullptr)
    {
        // It might not have been a smart card event that triggered us
        return;
    }

    m_emulator = m_triggerDetails->Emulator;
    m_taskInstance = taskInstance;

    switch (m_triggerDetails->TriggerType)
    {
    case SmartCardTriggerType::EmulatorHostApplicationActivated:
        HandleHceActivation();
        break;

    case SmartCardTriggerType::EmulatorAppletIdGroupRegistrationChanged:
        HandleRegistrationChange();
        break;

    default:
        break;
    }
}

void BgTask::HandleHceActivation()
{
    try
    {
        auto lock = m_srwLock.LockShared();

        // Take a deferral to keep this background task alive even after this "Run" method returns.
        // You must complete this deferral immediately after you are done processing the current transaction.
        m_deferral = m_taskInstance->GetDeferral();

        DebugLog(L"*** HCE Activation Background Task Started ***");

        // Set up a handler for if the background task is canceled; we must immediately complete our deferral
        m_taskInstance->Canceled += ref new Windows::ApplicationModel::Background::BackgroundTaskCanceledEventHandler(
            [this](
                IBackgroundTaskInstance^ sender,
                BackgroundTaskCancellationReason reason)
            {
                DebugLog(L"Cancelled");
                DebugLog(reason.ToString()->Data());
                EndTask();
            });

        if (Windows::Phone::System::SystemProtection::ScreenLocked)
        {
            auto denyIfLocked = Windows::Storage::ApplicationData::Current->RoamingSettings->Values->Lookup("DenyIfPhoneLocked");
            if (denyIfLocked != nullptr && (bool)denyIfLocked == true)
            {
                // The phone is locked, and our current user setting is to deny transactions
                // while locked, so let the user know the transaction was denied
                DoLaunch(Denied, L"Phone was locked at the time of tap");

                // We still need to respond to APDUs in a timely manner, even though we will just return failure
                m_fDenyTransactions = true;
            }
        }
        else
        {
            m_fDenyTransactions = false;
        }

        m_emulator->ApduReceived += ref new TypedEventHandler<SmartCardEmulator^, SmartCardEmulatorApduReceivedEventArgs^>(
            this, &BgTask::ApduReceived);

        m_emulator->ConnectionDeactivated += ref new TypedEventHandler<SmartCardEmulator^, SmartCardEmulatorConnectionDeactivatedEventArgs^>(
            [this](
                SmartCardEmulator^ emulator,
                SmartCardEmulatorConnectionDeactivatedEventArgs^ eventArgs)
            {
                DebugLog(L"Connection deactivated");
                EndTask();
            });

        m_emulator->Start();
        DebugLog(L"Emulator started");
    }
    catch (Exception^ e)
    {
        DebugLog(("Exception in Run: " + e->ToString())->Data());
        EndTask();
    }
}
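
The ApduReceived handler wired up in the listing above is not shown in the sample. As a rough C# illustration of the receive-and-respond pattern described earlier (the parsing step and the hard-coded success status word are simplified placeholders, and the ToArray/AsBuffer extension methods come from System.Runtime.InteropServices.WindowsRuntime):

// a minimal sketch of an ApduReceived handler; not the sample's actual logic
private async void ApduReceived(SmartCardEmulator emulator,
    SmartCardEmulatorApduReceivedEventArgs eventArgs)
{
    // inspect the command APDU that the system routed to this app
    byte[] commandApdu = eventArgs.CommandApdu.ToArray();

    // ... parse commandApdu and build your real response here ...

    // respond as quickly as possible; 0x90 0x00 is the
    // ISO/IEC 7816-4 status word for "success"
    byte[] response = { 0x90, 0x00 };
    await eventArgs.TryRespondAsync(response.AsBuffer());
}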

Create and register AID groups


During the first launch of your application, when the card is being provisioned, you will create and register AID
groups with the system. Based on the registered AIDs and user settings, the system determines the app that an
external reader would like to talk to and routes APDUs accordingly.
Most payment cards register for the same AID (the PPSE AID) along with additional payment network card-specific
AIDs. Each AID group represents a card, and when the user enables the card, all AIDs in the group are
enabled. Similarly, when the user deactivates the card, all AIDs in the group are disabled.
To register an AID group, you need to create a SmartCardAppletIdGroup object and set its properties to reflect
that this is an HCE-based payment card. Your display name should be descriptive to the user because it will show
up in the NFC settings menu as well as user prompts. For HCE payment cards, the SmartCardEmulationCategory
property should be set to Payment and the SmartCardEmulationType property should be set to Host.

public static byte[] AID_PPSE =
{
    // File name "2PAY.SYS.DDF01" (14 bytes)
    (byte)'2', (byte)'P', (byte)'A', (byte)'Y',
    (byte)'.', (byte)'S', (byte)'Y', (byte)'S',
    (byte)'.', (byte)'D', (byte)'D', (byte)'F', (byte)'0', (byte)'1'
};

var appletIdGroup = new SmartCardAppletIdGroup(
    "Example DisplayName",
    new List<IBuffer> { AID_PPSE.AsBuffer() },
    SmartCardEmulationCategory.Payment,
    SmartCardEmulationType.Host);

For non-payment HCE cards, the SmartCardEmulationCategory property should be set to Other and the
SmartCardEmulationType property should be set to Host.
public static byte[] AID_OTHER =
{
    (byte)'1', (byte)'2', (byte)'3', (byte)'4',
    (byte)'5', (byte)'6', (byte)'7', (byte)'8',
    (byte)'O', (byte)'T', (byte)'H', (byte)'E', (byte)'R'
};

var appletIdGroup = new SmartCardAppletIdGroup(
    "Example DisplayName",
    new List<IBuffer> { AID_OTHER.AsBuffer() },
    SmartCardEmulationCategory.Other,
    SmartCardEmulationType.Host);

You can include up to 9 AIDs (of length 5-16 bytes each) per AID group.
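
For example, a single card (one AID group) might carry the PPSE AID shown earlier together with a payment-network AID. A minimal sketch, where AID_NETWORK is a made-up placeholder value rather than any real network's AID:

public static byte[] AID_NETWORK =
{
    // placeholder bytes for illustration only
    0xA0, 0x00, 0x00, 0x00, 0x00, 0x10, 0x10
};

// one AID group (one card) carrying two AIDs
var twoAidGroup = new SmartCardAppletIdGroup(
    "Example card with two AIDs",
    new List<IBuffer> { AID_PPSE.AsBuffer(), AID_NETWORK.AsBuffer() },
    SmartCardEmulationCategory.Payment,
    SmartCardEmulationType.Host);
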
Use the RegisterAppletIdGroupAsync method to register your AID group with the system, which will return a
SmartCardAppletIdGroupRegistration object. By default, the ActivationPolicy property of the registration
object is set to Disabled. This means that even though your AIDs are registered with the system, they are not
enabled yet and won't receive traffic.

reg = await SmartCardEmulator.RegisterAppletIdGroupAsync(appletIdGroup);

You can enable your registered cards (AID groups) by using the RequestActivationPolicyChangeAsync method
of the SmartCardAppletIdGroupRegistration class as shown below. Because only a single payment card can be
enabled at a time on the system, setting the ActivationPolicy of a payment AID group to Enabled is the same as
setting the default payment card. The user will be prompted to allow this card as a default payment card, regardless
of whether there is a default payment card already selected or not. This is not the case if your app is already
the default payment application and is merely switching between its own AID groups. You can register up to 10
AID groups per app.

reg.RequestActivationPolicyChangeAsync(AppletIdGroupActivationPolicy.Enabled);

You can query your app's registered AID groups with the OS and check their activation policy using the
GetAppletIdGroupRegistrationsAsync method.
Users will be prompted when you change the activation policy of a payment card from Disabled to Enabled, only
if your app is not already the default payment app. Users will only be prompted when you change the activation
policy of a non-payment card from Disabled to Enabled if there is an AID conflict.

var registrations = await SmartCardEmulator.GetAppletIdGroupRegistrationsAsync();
foreach (var registration in registrations)
{
    registration.RequestActivationPolicyChangeAsync(AppletIdGroupActivationPolicy.Enabled);
}

Event notification when the activation policy changes


In your background task, you can register to receive events for when the activation policy of one of your AID group
registrations changes outside of your app. For example, the user may change the default payment app through the
NFC settings menu from one of your cards to another card hosted by another app. If your app needs to know
about this change for internal setup such as updating live tiles, you can receive event notifications for this change
and take action in your app accordingly.
var taskBuilder = new BackgroundTaskBuilder();
taskBuilder.Name = bgTaskName;
taskBuilder.TaskEntryPoint = taskEntryPoint;
taskBuilder.SetTrigger(new SmartCardTrigger(SmartCardTriggerType.EmulatorAppletIdGroupRegistrationChanged));
bgTask = taskBuilder.Register();
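
Inside the background task, handling that trigger might look like the following sketch (C#; UpdateCardStatusTile is a hypothetical app-specific helper, not a system API):

// a minimal sketch, assuming this runs in the background task for the
// EmulatorAppletIdGroupRegistrationChanged trigger
private async void HandleRegistrationChange()
{
    // query the current registrations and inspect their activation policies
    var registrations = await SmartCardEmulator.GetAppletIdGroupRegistrationsAsync();
    foreach (var registration in registrations)
    {
        bool isEnabled =
            registration.ActivationPolicy == AppletIdGroupActivationPolicy.Enabled;

        // hypothetical helper: update a live tile to reflect whether this
        // card is still enabled for tap and pay
        UpdateCardStatusTile(registration.AppletIdGroup.DisplayName, isEnabled);
    }
}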

Foreground override behavior


You can change the ActivationPolicy of any of your AID group registrations to ForegroundOverride while your
app is in the foreground without prompting the user. When the user taps their device to a terminal while your app
is in the foreground, the traffic is routed to your app even if none of your payment cards were chosen by the user
as their default payment card. When you change a card's activation policy to ForegroundOverride, this change is
only temporary until your app leaves the foreground and it will not change the current default payment card set by
the user. You can change the ActivationPolicy of your payment or non-payment cards from your foreground app
as follows. Note that the RequestActivationPolicyChangeAsync method can only be called from a foreground
app and cannot be called from a background task.

reg.RequestActivationPolicyChangeAsync(AppletIdGroupActivationPolicy.ForegroundOverride);

Also, you can register an AID group consisting of a single zero-length AID, which will cause the system to route all
APDUs regardless of the AID, including any command APDUs sent before a SELECT AID command is received.
However, such an AID group only works while your app is in the foreground because it can only be set to
ForegroundOverride and cannot be permanently enabled. Also, this mechanism works both for Host and UICC
values of the SmartCardEmulationType enumeration to either route all traffic to your HCE background task, or to
the SIM card.

public static byte[] AID_Foreground = {};

var appletIdGroup = new SmartCardAppletIdGroup(
    "Example DisplayName",
    new List<IBuffer> { AID_Foreground.AsBuffer() },
    SmartCardEmulationCategory.Other,
    SmartCardEmulationType.Host);
reg = await SmartCardEmulator.RegisterAppletIdGroupAsync(appletIdGroup);
reg.RequestActivationPolicyChangeAsync(AppletIdGroupActivationPolicy.ForegroundOverride);

Check for NFC and HCE support


Your app should check whether a device has NFC hardware, supports the card emulation feature, and supports
host card emulation prior to offering such features to the user.
The NFC smart card emulation feature is only enabled on Windows 10 Mobile, so trying to use the smart card
emulator APIs in any other version of Windows 10 will cause errors. You can check for smart card API support with
the following code snippet.

Windows.Foundation.Metadata.ApiInformation.IsTypePresent("Windows.Devices.SmartCards.SmartCardEmulator");

You can additionally check to see if the device has NFC hardware capable of some form of card emulation by
checking if the SmartCardEmulator.GetDefaultAsync method returns null. If it does, then no NFC card
emulation is supported on the device.
var smartcardemulator = await SmartCardEmulator.GetDefaultAsync();

Support for HCE and AID-based UICC routing is only available on recently launched devices such as the Lumia 730,
830, 640, and 640 XL. Any new NFC-capable device running Windows 10 Mobile or later should support HCE.
Your app can check for HCE support as follows.

SmartCardEmulator.IsHostCardEmulationSupported();
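
Putting these three checks together, a guard like the following sketch (the method name and structure are our own, not from the sample) could run before your app offers tap-to-pay features:

// a minimal sketch combining the three checks described above;
// assumes using System.Threading.Tasks; and using Windows.Devices.SmartCards;
public static async Task<bool> IsHceAvailableAsync()
{
    // the smart card emulator APIs are only present on Windows 10 Mobile
    if (!Windows.Foundation.Metadata.ApiInformation.IsTypePresent(
        "Windows.Devices.SmartCards.SmartCardEmulator"))
    {
        return false;
    }

    // null means the device has no NFC hardware capable of card emulation
    var emulator = await SmartCardEmulator.GetDefaultAsync();
    if (emulator == null)
    {
        return false;
    }

    // finally, check for host card emulation support specifically
    return SmartCardEmulator.IsHostCardEmulationSupported();
}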

Lock screen and screen off behavior


Windows 10 Mobile has device-level card emulation settings, which can be set by the mobile operator or the
manufacturer of the device. By default, the "tap to pay" toggle is disabled and the "enablement policy at device
level" is set to "Always", unless the mobile operator (MO) or OEM overrides these values.
Your application can query the value of the EnablementPolicy at device level and take action for each case
depending on the desired behavior of your app in each state.

SmartCardEmulator emulator = await SmartCardEmulator.GetDefaultAsync();

switch (emulator.EnablementPolicy)
{
case SmartCardEmulatorEnablementPolicy.Never:
    // you can take the user to the NFC settings to turn "tap and pay" on
    await Windows.System.Launcher.LaunchUriAsync(new Uri("ms-settings-nfctransactions:"));
    break;

case SmartCardEmulatorEnablementPolicy.Always:
    return "Card emulation always on";

case SmartCardEmulatorEnablementPolicy.ScreenOn:
    return "Card emulation on only when screen is on";

case SmartCardEmulatorEnablementPolicy.ScreenUnlocked:
    return "Card emulation on only when screen unlocked";
}

Your app's background task will be launched even if the phone is locked and/or the screen is off, but only if the
external reader selects an AID that resolves to your app. You can respond to the commands from the reader in your
background task, but if you need any input from the user or if you want to show a message to the user, you can
launch your foreground app with some arguments. Your background task can launch your foreground app with
the following behavior.
Under the device lock screen (the user will see your foreground app only after she unlocks the device)
Above the device lock screen (after the user dismisses your app, the device is still in locked state)

if (Windows.Phone.System.SystemProtection.ScreenLocked)
{
    // Launch above the lock with some arguments
    var result = await eventDetails.TryLaunchSelfAsync("app-specific arguments", SmartCardLaunchBehavior.AboveLock);
}

AID registration and other updates for SIM-based apps


Card emulation apps that use the SIM as the secure element can register with the Windows service to declare the
AIDs supported on the SIM. This registration is very similar to an HCE-based app registration. The only difference is
the SmartCardEmulationType, which should be set to Uicc for SIM-based apps. As a result of the payment card
registration, the display name of the card will also be populated in the NFC setting menu.

var appletIdGroup = new SmartCardAppletIdGroup(
    "Example DisplayName",
    new List<IBuffer> { AID_PPSE.AsBuffer() },
    SmartCardEmulationCategory.Payment,
    SmartCardEmulationType.Uicc);

Important The legacy binary SMS intercept support in Windows Phone 8.1 has been removed and replaced with
new, broader SMS support in Windows 10 Mobile; any legacy Windows Phone 8.1 apps relying on it must be
updated to use the new Windows 10 Mobile SMS APIs.
Get battery information

Important APIs
Windows.Devices.Power
DeviceInformation.FindAllAsync
Learn how to get detailed battery information using APIs in the Windows.Devices.Power namespace. A battery
report (BatteryReport) describes the charge, capacity, and status of a battery or aggregate of batteries. This topic
demonstrates how your app can get battery reports and be notified of changes. Code examples are from the basic
battery app that's listed at the end of this topic.

Get aggregate battery report


Some devices have more than one battery and it's not always obvious how each battery contributes to the overall
energy capacity of the device. This is where the AggregateBattery class comes in. The aggregate battery
represents all battery controllers connected to the device and can provide a single overall BatteryReport object.
Note A Battery class actually corresponds to a battery controller. Depending on the device, sometimes the
controller is attached to the physical battery and sometimes it's attached to the device enclosure. Thus, it's possible
to create a battery object even when no batteries are present. Other times, the battery object may be null.
Once you have an aggregate battery object, call GetReport to get the corresponding BatteryReport.

private void RequestAggregateBatteryReport()
{
    // Create aggregate battery object
    var aggBattery = Battery.AggregateBattery;

    // Get report
    var report = aggBattery.GetReport();

    // Update UI
    AddReportUI(BatteryReportPanel, report, aggBattery.DeviceId);
}

Get individual battery reports


You can also create a BatteryReport object for individual batteries. Use GetDeviceSelector with the
FindAllAsync method to obtain a collection of DeviceInformation objects that represent any battery controllers
that are connected to the device. Then, using the Id property of the desired DeviceInformation object, create a
corresponding Battery with the FromIdAsync method. Finally, call GetReport to get the individual battery report.
This example shows how to create a battery report for all batteries connected to the device.
async private void RequestIndividualBatteryReports()
{
    // Find batteries
    var deviceInfo = await DeviceInformation.FindAllAsync(Battery.GetDeviceSelector());
    foreach (DeviceInformation device in deviceInfo)
    {
        try
        {
            // Create battery object
            var battery = await Battery.FromIdAsync(device.Id);

            // Get report
            var report = battery.GetReport();

            // Update UI
            AddReportUI(BatteryReportPanel, report, battery.DeviceId);
        }
        catch { /* Add error handling, as applicable */ }
    }
}

Access report details


The BatteryReport object provides a lot of battery information. For more info, see the API reference for its
properties: Status (a BatteryStatus enumeration), ChargeRateInMilliwatts, DesignCapacityInMilliwattHours,
FullChargeCapacityInMilliwattHours, and RemainingCapacityInMilliwattHours. This example shows some
of the battery report properties used by the basic battery app that's provided later in this topic.

...
TextBlock txt3 = new TextBlock { Text = "Charge rate (mW): " + report.ChargeRateInMilliwatts.ToString() };
TextBlock txt4 = new TextBlock { Text = "Design energy capacity (mWh): " + report.DesignCapacityInMilliwattHours.ToString() };
TextBlock txt5 = new TextBlock { Text = "Fully-charged energy capacity (mWh): " + report.FullChargeCapacityInMilliwattHours.ToString() };
TextBlock txt6 = new TextBlock { Text = "Remaining energy capacity (mWh): " + report.RemainingCapacityInMilliwattHours.ToString() };
...

Request report updates


The Battery object triggers the ReportUpdated event when charge, capacity, or status of the battery changes. This
typically happens immediately for status changes and periodically for all other changes. This example shows how
to register for battery report updates.

...
Battery.AggregateBattery.ReportUpdated += AggregateBattery_ReportUpdated;
...

Handle report updates


When a battery update occurs, the ReportUpdated event passes the corresponding Battery object to the event
handler method. However, this event handler is not called from the UI thread. You'll need to use the Dispatcher
object to invoke any UI changes, as shown in this example.
async private void AggregateBattery_ReportUpdated(Battery sender, object args)
{
    if (reportRequested)
    {
        await Dispatcher.RunAsync(CoreDispatcherPriority.Normal, () =>
        {
            // Clear UI
            BatteryReportPanel.Children.Clear();

            if (AggregateButton.IsChecked == true)
            {
                // Request aggregate battery report
                RequestAggregateBatteryReport();
            }
            else
            {
                // Request individual battery report
                RequestIndividualBatteryReports();
            }
        });
    }
}

Example: basic battery app


Test out these APIs by building the following basic battery app in Microsoft Visual Studio. From the Visual Studio
start page, click New Project, and then under the Visual C# > Windows > Universal templates, create a new app
using the Blank App template.
Next, open the file MainPage.xaml and copy the following XML into this file (replacing its original contents).

<Page
    x:Class="App1.MainPage"
    xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
    xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
    xmlns:local="using:App1"
    xmlns:d="http://schemas.microsoft.com/expression/blend/2008"
    xmlns:mc="http://schemas.openxmlformats.org/markup-compatibility/2006"
    mc:Ignorable="d">

    <StackPanel Background="{ThemeResource ApplicationPageBackgroundThemeBrush}" >
        <StackPanel VerticalAlignment="Center" Margin="15,30,0,0" >
            <RadioButton x:Name="AggregateButton" Content="Aggregate results" GroupName="Type" IsChecked="True" />
            <RadioButton x:Name="IndividualButton" Content="Individual results" GroupName="Type" IsChecked="False" />
        </StackPanel>
        <StackPanel Orientation="Horizontal">
            <Button x:Name="GetBatteryReportButton"
                    Content="Get battery report"
                    Margin="15,15,0,0"
                    Click="GetBatteryReport"/>
        </StackPanel>
        <StackPanel x:Name="BatteryReportPanel" Margin="15,15,0,0"/>
    </StackPanel>
</Page>

If your app isn't named App1, you'll need to replace the first part of the class name in the previous snippet with the
namespace of your app. For example, if you created a project named BasicBatteryApp, you'd replace
x:Class="App1.MainPage" with x:Class="BasicBatteryApp.MainPage" . You should also replace xmlns:local="using:App1" with
xmlns:local="using:BasicBatteryApp" .
Next, open your project's MainPage.xaml.cs file and replace the existing code with the following.

using System;
using Windows.UI.Xaml;
using Windows.UI.Xaml.Controls;
using Windows.UI.Xaml.Media;
using Windows.Devices.Enumeration;
using Windows.Devices.Power;
using Windows.UI.Core;

namespace App1
{
    public sealed partial class MainPage : Page
    {
        bool reportRequested = false;

        public MainPage()
        {
            this.InitializeComponent();
            Battery.AggregateBattery.ReportUpdated += AggregateBattery_ReportUpdated;
        }

        private void GetBatteryReport(object sender, RoutedEventArgs e)
        {
            // Clear UI
            BatteryReportPanel.Children.Clear();

            if (AggregateButton.IsChecked == true)
            {
                // Request aggregate battery report
                RequestAggregateBatteryReport();
            }
            else
            {
                // Request individual battery report
                RequestIndividualBatteryReports();
            }

            // Note request
            reportRequested = true;
        }

        private void RequestAggregateBatteryReport()
        {
            // Create aggregate battery object
            var aggBattery = Battery.AggregateBattery;

            // Get report
            var report = aggBattery.GetReport();

            // Update UI
            AddReportUI(BatteryReportPanel, report, aggBattery.DeviceId);
        }

        async private void RequestIndividualBatteryReports()
        {
            // Find batteries
            var deviceInfo = await DeviceInformation.FindAllAsync(Battery.GetDeviceSelector());
            foreach (DeviceInformation device in deviceInfo)
            {
                try
                {
                    // Create battery object
                    var battery = await Battery.FromIdAsync(device.Id);

                    // Get report
                    var report = battery.GetReport();

                    // Update UI
                    AddReportUI(BatteryReportPanel, report, battery.DeviceId);
                }
                catch { /* Add error handling, as applicable */ }
            }
        }

        private void AddReportUI(StackPanel sp, BatteryReport report, string DeviceID)
        {
            // Create battery report UI
            TextBlock txt1 = new TextBlock { Text = "Device ID: " + DeviceID };
            txt1.FontSize = 15;
            txt1.Margin = new Thickness(0, 15, 0, 0);
            txt1.TextWrapping = TextWrapping.WrapWholeWords;

            TextBlock txt2 = new TextBlock { Text = "Battery status: " + report.Status.ToString() };
            txt2.FontStyle = Windows.UI.Text.FontStyle.Italic;
            txt2.Margin = new Thickness(0, 0, 0, 15);

            TextBlock txt3 = new TextBlock { Text = "Charge rate (mW): " + report.ChargeRateInMilliwatts.ToString() };
            TextBlock txt4 = new TextBlock { Text = "Design energy capacity (mWh): " + report.DesignCapacityInMilliwattHours.ToString() };
            TextBlock txt5 = new TextBlock { Text = "Fully-charged energy capacity (mWh): " + report.FullChargeCapacityInMilliwattHours.ToString() };
            TextBlock txt6 = new TextBlock { Text = "Remaining energy capacity (mWh): " + report.RemainingCapacityInMilliwattHours.ToString() };

            // Create energy capacity progress bar & labels
            TextBlock pbLabel = new TextBlock { Text = "Percent remaining energy capacity" };
            pbLabel.Margin = new Thickness(0, 10, 0, 5);
            pbLabel.FontFamily = new FontFamily("Segoe UI");
            pbLabel.FontSize = 11;

            ProgressBar pb = new ProgressBar();
            pb.Margin = new Thickness(0, 5, 0, 0);
            pb.Width = 200;
            pb.Height = 10;
            pb.IsIndeterminate = false;
            pb.HorizontalAlignment = HorizontalAlignment.Left;

            TextBlock pbPercent = new TextBlock();
            pbPercent.Margin = new Thickness(0, 5, 0, 10);
            pbPercent.FontFamily = new FontFamily("Segoe UI");
            pbPercent.FontSize = 11;

            // Disable progress bar if values are null
            if ((report.FullChargeCapacityInMilliwattHours == null) ||
                (report.RemainingCapacityInMilliwattHours == null))
            {
                pb.IsEnabled = false;
                pbPercent.Text = "N/A";
            }
            else
            {
                pb.IsEnabled = true;
                pb.Maximum = Convert.ToDouble(report.FullChargeCapacityInMilliwattHours);
                pb.Value = Convert.ToDouble(report.RemainingCapacityInMilliwattHours);
                pbPercent.Text = ((pb.Value / pb.Maximum) * 100).ToString("F2") + "%";
            }

            // Add controls to stackpanel
            sp.Children.Add(txt1);
            sp.Children.Add(txt2);
            sp.Children.Add(txt3);
            sp.Children.Add(txt4);
            sp.Children.Add(txt5);
            sp.Children.Add(txt6);
            sp.Children.Add(pbLabel);
            sp.Children.Add(pb);
            sp.Children.Add(pbPercent);
        }

        async private void AggregateBattery_ReportUpdated(Battery sender, object args)
        {
            if (reportRequested)
            {
                await Dispatcher.RunAsync(CoreDispatcherPriority.Normal, () =>
                {
                    // Clear UI
                    BatteryReportPanel.Children.Clear();

                    if (AggregateButton.IsChecked == true)
                    {
                        // Request aggregate battery report
                        RequestAggregateBatteryReport();
                    }
                    else
                    {
                        // Request individual battery report
                        RequestIndividualBatteryReports();
                    }
                });
            }
        }
    }
}

If your app isn't named App1, you'll need to rename the namespace in the previous example to the name you
gave your project. For example, if you created a project named BasicBatteryApp, you'd replace namespace App1
with namespace BasicBatteryApp.
Finally, to run this basic battery app: on the Debug menu, click Start Debugging to test the solution.
Tip To receive numeric values from the BatteryReport object, debug your app on the Local Machine or an
external device (such as a Windows Phone). When debugging on a device emulator, the BatteryReport object
returns null for the capacity and rate properties.
Enterprise

This roadmap provides an overview of key enterprise features for Windows 10 Universal Windows Platform (UWP)
apps. Windows 10 lets you write once and deploy across all devices, creating one app that adapts itself to any device. This
lets you build the great experiences your users expect, while providing control over the security, management, and
configuration required by your organization.
Note This article is targeted towards developers writing enterprise UWP apps. For general UWP development, see
the How-to guides for Windows 10 apps. For WPF, Windows Forms, or Win32 development, visit the Desktop dev
center. For IT professional resources, like deploying Windows 10 or managing enterprise security features, see
Windows 10 on TechNet.

Security
Windows 10 provides a suite of security features for app developers to protect the identity of their users, the
security of corporate networks, and any business data stored on devices. New for Windows 10 is Microsoft
Passport, an easy-to-deploy two-factor password alternative that is accessible by using a PIN or Windows Hello,
which provides enterprise-grade security and supports fingerprint, facial, and iris-based recognition.

Intro to secure Windows app development: This introductory article explains various Windows security features
across the stages of authentication, data-in-flight, and data-at-rest. It also describes how you can integrate those
stages into your apps. It covers a large range of topics, and is aimed primarily at helping app architects better
understand the Windows features that make creating Universal Windows Platform apps quick and easy.

Authentication and user identity: UWP apps have several options for user authentication, which are outlined in
this article. For the enterprise, the new Microsoft Passport feature is strongly recommended. Microsoft Passport
replaces passwords with strong two-factor authentication (2FA) by verifying existing credentials and by creating a
device-specific credential that a biometric or PIN-based user gesture protects, resulting in an experience that is
both convenient and highly secure.

Cryptography: The cryptography section provides an overview of the cryptography features available to UWP
apps. Articles range from introductory walkthroughs on how to easily encrypt sensitive business data to advanced
topics such as manipulating cryptographic keys and working with MACs, hashes, and signatures.

Windows Information Protection (WIP): This is a hub topic covering the full developer picture of how Windows
Information Protection (WIP) relates to files, buffers, clipboard, networking, background tasks, and data protection
under lock.
Data binding and databases
Data binding is a way for your app's UI to display data from an external source, such as a database, and optionally
to stay in sync with that data. Data binding allows you to separate the concern of data from the concern of UI, and
that results in a simpler conceptual model as well as better readability, testability, and maintainability of your app.

Data binding overview: This topic shows you how to bind a control (or other UI element) to a single item or bind
an items control to a collection of items in a Universal Windows Platform (UWP) app. In addition, it shows how to
control the rendering of items, implement a details view based on a selection, and convert data for display.

Entity Framework 7 for UWP: Performing complex queries against large data sets is vastly simplified using Entity
Framework 7, which supports UWP. In this walkthrough, you will build a UWP app that performs basic data access
against a local SQLite database using Entity Framework.

SQLite local database: This video is a comprehensive developer's guide to using SQLite, the recommended
solution for local app databases. Visit SQLite to download the latest version for UWP, or use the version that's
already provided with the Windows 10 SDK.

Networking and data serialization


Line-of-business apps often need to communicate with or store data on a variety of other systems. This is typically
accomplished by connecting to a network service (using protocols such as REST or SOAP) and then serializing or
deserializing data into a common format. Working with networks and data serialization in UWP apps is similar to
WPF, WinForms, and ASP.NET applications. See the following articles for more information.

Networking basics: This walkthrough explains basic networking concepts relevant to all UWP apps, regardless of
the communication protocols in use.

Which networking technology?: A quick overview of the networking technologies available for UWP apps, with
suggestions on how to choose the technologies that are the best fit for your app.

XML and SOAP serialization: XML serialization converts objects into an XML stream that conforms to a specific
XML Schema definition language (XSD). To convert between XML and a strongly-typed class, you can use the
native XDocument class, or an external library.

JSON serialization: JSON (JavaScript Object Notation) serialization is a popular format for communicating with
REST APIs. The Newtonsoft Json.NET library is fully supported for UWP apps.

Devices
In order to integrate with line-of-business tools, like printers, barcode scanners, or smart card readers, you may find
it necessary to integrate external devices or sensors into your app. Here are some examples of features that you can
add to your app using the technology described in this section.

Enumerate devices: This article explains how to use the Windows.Devices.Enumeration namespace to find devices
that are internally connected to the system, externally connected, or detectable over wireless or networking
protocols. Start here if you're building any app that works with devices.

Printing and scanning: Describes how to print and scan from your app, including connecting to and working with
business devices like point-of-sale (POS) systems, receipt printers, and high-capacity feeder scanners.

Bluetooth: In addition to using traditional Bluetooth connections to send and receive data or control devices,
Windows 10 enables using Bluetooth Low Energy (BTLE) to send or receive beacons in the background. Use this to
display notifications or enable functionality when a user gets close to or leaves a particular location.

Enterprise shared storage: In device lockdown scenarios, learn how data can be shared within the same app,
between instances of an app, or even between apps.

Device targeting
Many users today bring their own phones or tablets to work, which have varying form factors and screen
sizes. With the Universal Windows Platform (UWP), you can write a single line-of-business app that runs seamlessly
on all different types of devices, including desktop PCs and PPI displays, allowing you to maximize the reach of your
app and the efficiency of your code.

Guide to UWP apps: In this introductory guide, you'll get acquainted with the Windows 10 UWP platform,
including: what a device family is and how to decide which one to target, new UI controls and panels that allow
you to adapt your UI to different device form factors, and how to understand and control the API surface that is
available to your app.

Adaptive XAML UI code sample: This code sample shows all the possible layout options and controls for your
app, regardless of device type, and allows you to interact with the panels to show how to achieve any layout you
are looking for. In addition to showing how each control responds to different form factors, the app itself is
responsive and shows various methods for achieving adaptive UI.

Deployment
You have options for distributing apps to your organization's users. You can use the Windows Store for Business,
existing mobile device management, or you can sideload apps to devices. You can also make your apps available
to the general public by publishing to the Windows Store.
Distribute LOB apps to enterprises: You can publish line-of-business apps directly to enterprises for volume
acquisition via the Windows Store for Business, without making the apps broadly available to the public.

Sideload apps: When you sideload an app, you deploy a signed app package to a device. You maintain the
signing, hosting, and deployment of these apps. The process for sideloading apps is streamlined for Windows 10.

Publish apps to the Windows Store: The unified Windows Store lets you publish and manage all of your apps for
all Windows devices. Customize your apps' availability with per-market pricing, distribution and visibility controls,
and other options.

Patterns and practices


Code bases for large-scale, enterprise-grade apps can become unwieldy. Prism is a framework for building loosely
coupled, maintainable, and testable XAML applications in WPF, Windows 10 UWP, and Xamarin Forms. Prism
provides an implementation of a collection of design patterns that are helpful in writing well-structured and
maintainable XAML applications, including MVVM, dependency injection, commands, EventAggregator, and others.
For more information on Prism, see the GitHub repo.
Windows Information Protection (WIP)

Note Windows Information Protection (WIP) policy can be applied to Windows 10, version 1607.
WIP protects data that belongs to an organization by enforcing policies that are defined by the organization. If your app is
included in those policies, all data produced by your app is subject to policy restrictions. This topic helps you build apps that
more gracefully enforce these policies without having any impact on the user's personal data.

First, what is WIP?


WIP is a set of features on desktops, laptops, tablets, and phones that support the organization's Mobile Device Management
(MDM) and Mobile Application Management (MAM) system.
WIP together with MDM gives the organization greater control over how its data is handled on devices that the organization
manages. Sometimes users bring devices to work and do not enroll them in the organization's MDM. In those cases,
organizations can use MAM to achieve greater control over how their data is handled on specific line of business apps that users
install on their device.
Using MDM or MAM, administrators can identify which apps are allowed to access files that belong to the organization and
whether users can copy data from those files and then paste that data into personal documents.
Here's how it works. Users enroll their devices into the organization's mobile device management (MDM) system. An
administrator in the managing organization uses Microsoft Intune or System Center Configuration Manager (SCCM) to define
and then deploy a policy to the enrolled devices.
If users aren't required to enroll their devices, administrators will use their MAM system to define and deploy a policy that
applies to specific apps. When users install any of those apps, they'll receive the associated policy.
That policy identifies the apps that can access enterprise data (called the policy's allowed list). These apps can access enterprise
protected files, Virtual Private Networks (VPN) and enterprise data on the clipboard or through a share contract. The policy also
defines the rules that govern the data. For example, whether data can be copied from enterprise-owned files and then pasted into
non enterprise-owned files.
If users unenroll their device from the organization's MDM system, or uninstall apps identified by the organization's MAM
system, administrators can remotely wipe enterprise data from the device.

Read more about WIP


Introducing Windows Information Protection
Protect your enterprise data using Windows Information Protection (WIP)

If your app is on the allowed list, all data produced by your app is subject to policy restrictions. That means that if administrators
revoke the user's access to enterprise data, those users lose access to all of the data that your app produced.
This is fine if your app is designed only for enterprise use. But if your app creates data that users consider personal to them,
you'll want to enlighten your app to intelligently discern between enterprise and personal data. We call this type of an app
enterprise-enlightened because it can gracefully enforce enterprise policy while preserving the integrity of the user's personal
data.

Create an enterprise-enlightened app


Use WIP APIs to enlighten your app, and then declare your app as enterprise-enlightened.
Enlighten your app if it'll be used for both organizational and personal purposes.
Enlighten your app if you want to gracefully handle the enforcement of policy elements.
For example, if policy allows users to paste enterprise data into a personal document, you can prevent users from having to
respond to a consent dialog before they paste the data. Similarly, you can present custom informational dialog boxes in response
to these sorts of events.
If you're ready to enlighten your app, see one of these guides:
For Universal Windows Platform (UWP) apps that you build by using C#
Windows Information Protection (WIP) developer guide.
For Desktop apps that you build by using C++
Windows Information Protection (WIP) developer guide (C++).

Create a non-enlightened enterprise app


If you're creating a line-of-business (LOB) app that is not intended for personal use, you might not have to enlighten it.
Windows desktop apps
You don't need to enlighten a Windows desktop app, but you should test your app to ensure that it functions properly under
policy. For example, start your app, use it, then unenroll the device from MDM. Then, make sure the app can start again. If files
critical to the operation of the app are encrypted, the app might not start. Also, review the files that your app interacts with to
ensure that your app won't inadvertently encrypt files that are personal to the user. This might include metadata files, images
and other things.
After you've tested your app, add this flag to the resource file of your project, and then recompile the app.
MICROSOFTEDPAUTOPROTECTIONALLOWEDAPPINFO EDPAUTOPROTECTIONALLOWEDAPPINFOID
BEGIN
0x0001
END

While MDM policies don't require the flag, MAM policies do.
UWP apps
If you expect your app to be included in a MAM policy, you should enlighten it. Policies deployed to devices under MDM won't
require it, but if you distribute your app to organizational consumers, it's difficult if not impossible to determine what type of
policy management system they'll use. To ensure that your app will work in both types of policy management systems (MDM
and MAM), you should enlighten your app.
Windows Information Protection (WIP) developer guide

An enlightened app differentiates between corporate and personal data and knows which to protect based on
Windows Information Protection (WIP) policies defined by the administrator.
In this guide, we'll show you how to build one. When you're done, policy administrators will be able to trust your
app to consume their organization's data. And employees will love that you've kept their personal data intact on
their device even if they un-enroll from the organization's mobile device management (MDM) or leave the
organization entirely.
Note This guide helps you enlighten a UWP app. If you want to enlighten a C++ Windows desktop app, see
Windows Information Protection (WIP) developer guide (C++).
You can read more about WIP and enlightened apps here: Windows Information Protection (WIP).
You can find a complete sample here.
If you're ready to go through each task, let's start.

First, gather what you need


You'll need these:
A test Virtual Machine (VM) that runs Windows 10, version 1607 or higher. You'll debug your app against
this test VM.
A development computer that runs Windows 10, version 1607 or higher. This could be your test VM if you
have Visual Studio installed on it.

Set up your development environment


You'll do these things:
Install the WIP Setup Developer Assistant onto your test VM
Create a protection policy by using the WIP Setup Developer Assistant
Set up a Visual Studio project
Set up remote debugging
Add namespaces to your code files
Install the WIP Setup Developer Assistant onto your test VM
Use this tool to set up a Windows Information Protection policy on your test VM.
Download the tool here: WIP Setup Developer Assistant.
Create a protection policy
Define your policy by adding information to each section in the WIP setup developer assistant. Choose the help
icon next to any setting to learn more about how to use it.
For more general guidance about how to use this tool, see the Version notes section on the app download page.
Set up a Visual Studio project
1. On your development computer, open your project.
2. Add a reference to the desktop and mobile extensions for Universal Windows Platform (UWP).

3. Add these capabilities to your package manifest file:

<Capability Name="privateNetworkClientServer" />
<rescap:Capability Name="enterpriseDataPolicy"/>

Optional Reading: The "rescap" prefix means Restricted Capability. See Special and restricted capabilities.

4. Add this namespace to your package manifest file:

xmlns:rescap="http://schemas.microsoft.com/appx/manifest/foundation/windows10/restrictedcapabilities"

5. Add the namespace prefix to the IgnorableNamespaces attribute of your package manifest file.

IgnorableNamespaces="uap mp rescap"

This way, if your app runs on a version of the Windows operating system that doesn't support restricted
capabilities, Windows will ignore the enterpriseDataPolicy capability.
Set up remote debugging

Install Visual Studio Remote Tools on your test VM only if you are developing your app on a computer other than
your VM. Then, on your development computer start the remote debugger and see if your app runs on the test VM.
See Remote PC instructions.
Add these namespaces to your code files
Add these using statements to the top of your code files (the snippets in this guide use them):
using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.Threading.Tasks;
using Windows.Security.EnterpriseData;
using Windows.Web.Http;
using Windows.Storage.Streams;
using Windows.ApplicationModel.DataTransfer;
using Windows.UI.Xaml.Controls;
using Windows.UI.Xaml;
using Windows.ApplicationModel.Activation;
using Windows.Web.Http.Filters;
using Windows.Storage;
using Windows.Data.Xml.Dom;
using Windows.Foundation.Metadata;
using Windows.Web.Http.Headers;

Determine whether to use WIP APIs in your app


Ensure that the operating system that runs your app supports WIP and that WIP is enabled on the device.

bool use_WIP_APIs = false;

if ((ApiInformation.IsApiContractPresent
    ("Windows.Security.EnterpriseData.EnterpriseDataContract", 3)
    && ProtectionPolicyManager.IsProtectionEnabled))
{
    use_WIP_APIs = true;
}
else
{
    use_WIP_APIs = false;
}

Don't call WIP APIs if the operating system doesn't support WIP or WIP is not enabled on the device.

Read enterprise data


To read protected files, network endpoints, clipboard data, and data that you accept from a Share contract, your app
will have to request access.
Windows Information Protection gives your app permission if your app is on the protection policy's allowed list.
In this section:
Read data from a file
Read data from a network endpoint
Read data from the clipboard
Read data from a Share contract
Read data from a file
Step 1: Get the file handle
Windows.Storage.StorageFolder storageFolder =
    Windows.Storage.ApplicationData.Current.LocalFolder;

Windows.Storage.StorageFile file =
    await storageFolder.GetFileAsync(fileName);

Step 2: Determine whether your app can open the file


Call FileProtectionManager.GetProtectionInfoAsync to determine whether your app can open the file.

FileProtectionInfo protectionInfo = await FileProtectionManager.GetProtectionInfoAsync(file);

if (protectionInfo.Status == FileProtectionStatus.Revoked)
{
    // Code goes here to handle this situation. Perhaps, show UI
    // saying that the user's data has been revoked.
    return false;
}
else if (protectionInfo.Status != FileProtectionStatus.Protected &&
         protectionInfo.Status != FileProtectionStatus.Unprotected)
{
    return false;
}

A FileProtectionStatus value of Protected means that the file is protected and your app can open it because your
app is on the policy's allowed list.
A FileProtectionStatus value of Unprotected means that the file is not protected and your app can open the file
even if your app is not on the policy's allowed list.

APIs
FileProtectionManager.GetProtectionInfoAsync
FileProtectionInfo
FileProtectionStatus
ProtectionPolicyManager.IsIdentityManaged

Step 3: Read the file into a stream or buffer


Read the file into a stream

var stream = await file.OpenAsync(Windows.Storage.FileAccessMode.ReadWrite);

Read the file into a buffer

var buffer = await Windows.Storage.FileIO.ReadBufferAsync(file);
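Putting the three steps together, here's a minimal end-to-end sketch (the method name is ours; it assumes fileName names a file in the app's local folder):

// Reads a file only if its protection status allows it; returns null otherwise.
private async Task<IBuffer> ReadEnterpriseFileAsync(string fileName)
{
    StorageFolder storageFolder = ApplicationData.Current.LocalFolder;
    StorageFile file = await storageFolder.GetFileAsync(fileName);

    FileProtectionInfo protectionInfo =
        await FileProtectionManager.GetProtectionInfoAsync(file);

    // Proceed only for Protected (we're on the allowed list) or Unprotected files.
    if (protectionInfo.Status != FileProtectionStatus.Protected &&
        protectionInfo.Status != FileProtectionStatus.Unprotected)
    {
        return null;
    }

    return await FileIO.ReadBufferAsync(file);
}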

Read data from a network endpoint


Create a protected thread context to read from an enterprise endpoint.
Step 1: Get the identity of the network endpoint
Uri resourceURI = new Uri("http://contoso.com/stockData.xml");

Windows.Networking.HostName hostName =
    new Windows.Networking.HostName(resourceURI.Host);

string identity = await ProtectionPolicyManager.
    GetPrimaryManagedIdentityForNetworkEndpointAsync(hostName);

If the endpoint isn't managed by policy, you'll get back an empty string.

APIs
ProtectionPolicyManager.GetPrimaryManagedIdentityForNetworkEndpointAsync

Step 2: Create a protected thread context


If the endpoint is managed by policy, create a protected thread context. This tags any network connections that you
make on the same thread to the identity.
It also gives you access to enterprise network resources that are managed by that policy.

if (!string.IsNullOrEmpty(identity))
{
    using (ThreadNetworkContext threadNetworkContext =
        ProtectionPolicyManager.CreateCurrentThreadNetworkContext(identity))
    {
        return await GetDataFromNetworkRedirectHelperMethod(resourceURI);
    }
}
else
{
    return await GetDataFromNetworkRedirectHelperMethod(resourceURI);
}

This example encloses socket calls in a using block. If you don't do this, make sure that you close the thread context
after you've retrieved your resource. See ThreadNetworkContext.Close.
Don't create any personal files on that protected thread because those files will be automatically encrypted.
The ProtectionPolicyManager.CreateCurrentThreadNetworkContext method returns a
ThreadNetworkContext object whether or not the endpoint is being managed by policy. If your app handles both
personal and enterprise resources, call ProtectionPolicyManager.CreateCurrentThreadNetworkContext for all
identities. After you get the resource, dispose the ThreadNetworkContext to clear any identity tag from the current
thread.
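For example, here's a hypothetical wrapper (the name is ours) that follows that advice: it creates the context for whatever identity comes back, even an empty one, and disposing it clears any identity tag from the thread.

// Runs the download under the endpoint's identity (or no identity) and
// clears the thread's identity tag when the context is disposed.
private static async Task<IBuffer> GetWithEndpointIdentityAsync(Uri resourceURI)
{
    var hostName = new Windows.Networking.HostName(resourceURI.Host);
    string identity = await ProtectionPolicyManager.
        GetPrimaryManagedIdentityForNetworkEndpointAsync(hostName);

    using (ProtectionPolicyManager.CreateCurrentThreadNetworkContext(identity))
    {
        return await GetDataFromNetworkRedirectHelperMethod(resourceURI);
    }
}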

APIs
ProtectionPolicyManager.GetForCurrentView
ProtectionPolicyManager.Identity
ProtectionPolicyManager.CreateCurrentThreadNetworkContext

Step 3: Read the resource into a buffer


private static async Task<IBuffer> GetDataFromNetworkHelperMethod(Uri resourceURI)
{
    HttpClient client = new HttpClient();

    try { return await client.GetBufferAsync(resourceURI); }
    catch (Exception) { return null; }
}

(Optional) Use a header token instead of creating a protected thread context

public static async Task<IBuffer> GetDataFromNetworkbyUsingHeader(Uri resourceURI)
{
    HttpClient client;

    Windows.Networking.HostName hostName =
        new Windows.Networking.HostName(resourceURI.Host);

    string identity = await ProtectionPolicyManager.
        GetPrimaryManagedIdentityForNetworkEndpointAsync(hostName);

    if (!string.IsNullOrEmpty(identity))
    {
        client = new HttpClient();

        HttpRequestHeaderCollection headerCollection = client.DefaultRequestHeaders;
        headerCollection.Add("X-MS-Windows-HttpClient-EnterpriseId", identity);

        return await GetDataFromNetworkbyUsingHeaderHelperMethod(client, resourceURI);
    }
    else
    {
        client = new HttpClient();
        return await GetDataFromNetworkbyUsingHeaderHelperMethod(client, resourceURI);
    }
}

private static async Task<IBuffer> GetDataFromNetworkbyUsingHeaderHelperMethod(HttpClient client, Uri resourceURI)
{
    try { return await client.GetBufferAsync(resourceURI); }
    catch (Exception) { return null; }
}

Handle page redirects


Sometimes a web server will redirect traffic to a more current version of a resource.
To handle this, make requests until the response status of your request has a value of OK.
Then use the URI of that response to get the identity of the endpoint. Here's one way to do this:
private static async Task<IBuffer> GetDataFromNetworkRedirectHelperMethod(Uri resourceURI)
{
    HttpBaseProtocolFilter filter = new HttpBaseProtocolFilter();
    filter.AllowAutoRedirect = false;

    HttpClient client = new HttpClient(filter);

    HttpRequestMessage message = new HttpRequestMessage(HttpMethod.Get, resourceURI);
    HttpResponseMessage response = await client.SendRequestAsync(message);

    if (response.StatusCode == HttpStatusCode.MultipleChoices ||
        response.StatusCode == HttpStatusCode.MovedPermanently ||
        response.StatusCode == HttpStatusCode.Found ||
        response.StatusCode == HttpStatusCode.SeeOther ||
        response.StatusCode == HttpStatusCode.NotModified ||
        response.StatusCode == HttpStatusCode.UseProxy ||
        response.StatusCode == HttpStatusCode.TemporaryRedirect ||
        response.StatusCode == HttpStatusCode.PermanentRedirect)
    {
        // Follow the redirect by requesting the URI that the server returned.
        message = new HttpRequestMessage(HttpMethod.Get, response.Headers.Location);
        response = await client.SendRequestAsync(message);
    }

    try { return await response.Content.ReadAsBufferAsync(); }
    catch (Exception) { return null; }
}

APIs
ProtectionPolicyManager.GetPrimaryManagedIdentityForNetworkEndpointAsync
ProtectionPolicyManager.CreateCurrentThreadNetworkContext
ProtectionPolicyManager.GetForCurrentView
ProtectionPolicyManager.Identity

Read data from the clipboard


Get permission to use data from the clipboard
To get data from the clipboard, ask Windows for permission. Use DataPackageView.RequestAccessAsync to do
that.
public static async Task PasteText(TextBox textBox)
{
    DataPackageView dataPackageView = Clipboard.GetContent();

    if (dataPackageView.Contains(StandardDataFormats.Text))
    {
        ProtectionPolicyEvaluationResult result = await dataPackageView.RequestAccessAsync();

        if (result == ProtectionPolicyEvaluationResult.Allowed)
        {
            string contentsOfClipboard = await dataPackageView.GetTextAsync();
            textBox.Text = contentsOfClipboard;
        }
    }
}

APIs
DataPackageView.RequestAccessAsync

Hide or disable features that use clipboard data


Determine whether the current view has permission to get the data that is on the clipboard.
If it doesn't, you can disable or hide controls that let users paste information from the clipboard or preview its
contents.

private bool IsClipboardAllowed()
{
    ProtectionPolicyEvaluationResult protectionPolicyEvaluationResult =
        ProtectionPolicyEvaluationResult.Blocked;

    DataPackageView dataPackageView = Clipboard.GetContent();

    if (dataPackageView.Contains(StandardDataFormats.Text))
    {
        protectionPolicyEvaluationResult =
            ProtectionPolicyManager.CheckAccess(dataPackageView.Properties.EnterpriseId,
                ProtectionPolicyManager.GetForCurrentView().Identity);
    }

    return (protectionPolicyEvaluationResult == ProtectionPolicyEvaluationResult.Allowed ||
            protectionPolicyEvaluationResult == ProtectionPolicyEvaluationResult.ConsentRequired);
}

APIs
ProtectionPolicyEvaluationResult
ProtectionPolicyManager.GetForCurrentView
ProtectionPolicyManager.Identity
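For example, you could call that check whenever the clipboard changes to gate paste-related UI; pasteButton here is a hypothetical control:

// Hypothetical usage: enable the paste button only when the clipboard is accessible.
private void UpdatePasteButton()
{
    pasteButton.IsEnabled = IsClipboardAllowed();
}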

Prevent users from being prompted with a consent dialog box


A new document isn't personal or enterprise. It's just new. If a user pastes enterprise data into it, Windows enforces
policy and the user is prompted with a consent dialog. This code prevents that from happening. This task is not
about helping to protect data. It's more about keeping users from receiving the consent dialog box in cases where
your app creates a brand new item.
private async void PasteText(bool isNewEmptyDocument)
{
    DataPackageView dataPackageView = Clipboard.GetContent();

    if (dataPackageView.Contains(StandardDataFormats.Text))
    {
        if (!string.IsNullOrEmpty(dataPackageView.Properties.EnterpriseId))
        {
            if (isNewEmptyDocument)
            {
                ProtectionPolicyManager.TryApplyProcessUIPolicy(dataPackageView.Properties.EnterpriseId);
                string contentsOfClipboard = await dataPackageView.GetTextAsync();
                // Add this string to the new item or document here.
            }
            else
            {
                ProtectionPolicyEvaluationResult result = await dataPackageView.RequestAccessAsync();

                if (result == ProtectionPolicyEvaluationResult.Allowed)
                {
                    string contentsOfClipboard = await dataPackageView.GetTextAsync();
                    // Add this string to the new item or document here.
                }
            }
        }
    }
}

APIs
DataPackageView.RequestAccessAsync
ProtectionPolicyEvaluationResult
ProtectionPolicyManager.TryApplyProcessUIPolicy

Read data from a Share contract


When employees choose your app to share their information, your app will open a new item that contains that
content.
As we mentioned earlier, a new item isn't personal or enterprise. It's just new. If your code adds enterprise content
to the item, Windows enforces policy and the user is prompted with a consent dialog. This code prevents that from
happening.
protected override async void OnShareTargetActivated(ShareTargetActivatedEventArgs args)
{
    bool isNewEmptyDocument = true;
    string identity = "corp.microsoft.com";

    ShareOperation shareOperation = args.ShareOperation;

    if (shareOperation.Data.Contains(StandardDataFormats.Text))
    {
        if (!string.IsNullOrEmpty(shareOperation.Data.Properties.EnterpriseId))
        {
            if (isNewEmptyDocument)
            {
                // If this is a new and empty document, and we're allowed to access
                // the data, then we can avoid popping the consent dialog.
                ProtectionPolicyManager.TryApplyProcessUIPolicy(shareOperation.Data.Properties.EnterpriseId);
            }
            else
            {
                // In this case, we can't optimize the workflow, so we just
                // request consent from the user.
                ProtectionPolicyEvaluationResult protectionPolicyEvaluationResult =
                    await shareOperation.Data.RequestAccessAsync();

                if (protectionPolicyEvaluationResult == ProtectionPolicyEvaluationResult.Allowed)
                {
                    string text = await shareOperation.Data.GetTextAsync();

                    // Do something with that text.
                }
            }
        }
        else
        {
            // If the data has no enterprise identity, then we already have access.
            string text = await shareOperation.Data.GetTextAsync();

            // Do something with that text.
        }
    }
}

APIs
ProtectionPolicyManager.RequestAccessAsync
ProtectionPolicyEvaluationResult
ProtectionPolicyManager.TryApplyProcessUIPolicy

Protect enterprise data


Protect enterprise data that leaves your app. Data leaves your app when you show it in a page, save it to a file or
network endpoint, or through a share contract.
In this section:
Protect data that appears in pages
Protect data to a file
Protect data to a file as a background process
Protect part of a file
Read the protected part of a file
Protect data to a folder
Protect data to a network end point
Protect data that your app shares through a share contract
Protect files that you copy to another location
Protect enterprise data when the screen of the device is locked
Protect data that appears in pages
When you show data in a page, let Windows know what type of data it is (personal or enterprise). To do that, tag
the current app view or tag the entire app process.
When you tag the view or the process, Windows enforces policy on it. This helps prevent data leaks that result from
actions that your app doesn't control. For example, on a computer, a user could copy enterprise information from
a view and then paste it into another app (CTRL-V). Windows protects against that. Windows
also helps to enforce share contracts.
Tag the current app view
Do this if your app has multiple views where some views consume enterprise data and some consume personal
data.

// Tag as enterprise data. "identity" is the string that contains the enterprise ID.
// You'd get that from a file, network endpoint, or clipboard data package.
ProtectionPolicyManager.GetForCurrentView().Identity = identity;

// Tag as personal data.
ProtectionPolicyManager.GetForCurrentView().Identity = String.Empty;

APIs
ProtectionPolicyManager.GetForCurrentView
ProtectionPolicyManager.Identity
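For example, here's a minimal sketch (our own illustration) that carries a protected file's identity into the view that displays it, using FileProtectionInfo.Identity:

// Tag the view with the identity of the file it is about to display.
FileProtectionInfo protectionInfo =
    await FileProtectionManager.GetProtectionInfoAsync(file);

ProtectionPolicyManager.GetForCurrentView().Identity =
    (protectionInfo.Status == FileProtectionStatus.Protected)
        ? protectionInfo.Identity
        : String.Empty;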

Tag the process


Do this if all views in your app will work with only one type of data (personal or enterprise).
This prevents you from having to manage independently tagged views.

// Tag as enterprise data. "identity" is the string that contains the enterprise ID.
// You'd get that from a file, network endpoint, or clipboard data package.
bool result =
    ProtectionPolicyManager.TryApplyProcessUIPolicy(identity);

// Tag as personal data.
result =
    ProtectionPolicyManager.TryApplyProcessUIPolicy(String.Empty);

APIs
ProtectionPolicyManager.TryApplyProcessUIPolicy

Protect data to a file


Create a protected file and then write to it.
Step 1: Determine if your app can create an enterprise file
Your app can create an enterprise file if the identity string is managed by policy and your app is on the Allowed list
of that policy.
if (!ProtectionPolicyManager.IsIdentityManaged(identity)) return false;

APIs
ProtectionPolicyManager.IsIdentityManaged

Step 2: Create the file and protect it to the identity

StorageFolder storageFolder = ApplicationData.Current.LocalFolder;

StorageFile storageFile = await storageFolder.CreateFileAsync("sample.txt",
    CreationCollisionOption.ReplaceExisting);

FileProtectionInfo fileProtectionInfo =
    await FileProtectionManager.ProtectAsync(storageFile, identity);

APIs
FileProtectionManager.ProtectAsync

Step 3: Write that stream or buffer to the file


Write a stream

if (fileProtectionInfo.Status == FileProtectionStatus.Protected)
{
    var stream = await storageFile.OpenAsync(FileAccessMode.ReadWrite);

    using (var outputStream = stream.GetOutputStreamAt(0))
    {
        using (var dataWriter = new DataWriter(outputStream))
        {
            dataWriter.WriteString(enterpriseData);
            // Commit the text; without StoreAsync the DataWriter discards it.
            await dataWriter.StoreAsync();
        }
    }
}

Write a buffer

if (fileProtectionInfo.Status == FileProtectionStatus.Protected)
{
    var buffer = Windows.Security.Cryptography.CryptographicBuffer.ConvertStringToBinary(
        enterpriseData, Windows.Security.Cryptography.BinaryStringEncoding.Utf8);

    await FileIO.WriteBufferAsync(storageFile, buffer);
}

APIs
FileProtectionInfo
FileProtectionStatus
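Here are the three steps combined into one method; a minimal sketch (the method name is ours):

// Creates "sample.txt" protected to the given identity and writes the data to it.
private async Task<bool> SaveProtectedFileAsync(string identity, string enterpriseData)
{
    // Step 1: the identity must be managed by policy.
    if (!ProtectionPolicyManager.IsIdentityManaged(identity)) return false;

    // Step 2: create the file and protect it to the identity.
    StorageFolder storageFolder = ApplicationData.Current.LocalFolder;
    StorageFile storageFile = await storageFolder.CreateFileAsync("sample.txt",
        CreationCollisionOption.ReplaceExisting);

    FileProtectionInfo fileProtectionInfo =
        await FileProtectionManager.ProtectAsync(storageFile, identity);

    if (fileProtectionInfo.Status != FileProtectionStatus.Protected) return false;

    // Step 3: write the data.
    await FileIO.WriteTextAsync(storageFile, enterpriseData);
    return true;
}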

Protect data to a file as a background process


This code can run while the screen of the device is locked. If the administrator configured a secure "Data protection
under lock" (DPL) policy, Windows removes the encryption keys required to access protected resources from device
memory. This prevents data leaks if the device is lost. This same feature also removes keys associated with
protected files when their handles are closed.
You'll have to use an approach that keeps the file handle open when you create a file.
Step 1: Determine if you can create an enterprise file
You can create an enterprise file if the identity that you're using is managed by policy and your app is on the
allowed list of that policy.

if (!ProtectionPolicyManager.IsIdentityManaged(identity)) return false;

APIs
ProtectionPolicyManager.IsIdentityManaged

Step 2: Create a file and protect it to the identity


The FileProtectionManager.CreateProtectedAndOpenAsync method creates a protected file and keeps the file
handle open while you write to it.

StorageFolder storageFolder = ApplicationData.Current.LocalFolder;

ProtectedFileCreateResult protectedFileCreateResult =
await FileProtectionManager.CreateProtectedAndOpenAsync(storageFolder,
"sample.txt", identity, CreationCollisionOption.ReplaceExisting);

APIs
FileProtectionManager.CreateProtectedAndOpenAsync

Step 3: Write a stream or buffer to the file


This example writes a stream to a file.

if (protectedFileCreateResult.ProtectionInfo.Status == FileProtectionStatus.Protected)
{
    IOutputStream outputStream =
        protectedFileCreateResult.Stream.GetOutputStreamAt(0);

    using (DataWriter writer = new DataWriter(outputStream))
    {
        writer.WriteString(enterpriseData);
        await writer.StoreAsync();
        await writer.FlushAsync();
    }

    outputStream.Dispose();
}
else if (protectedFileCreateResult.ProtectionInfo.Status == FileProtectionStatus.AccessSuspended)
{
// Perform any special processing for the access suspended case.
}

APIs
ProtectedFileCreateResult.ProtectionInfo
FileProtectionStatus
ProtectedFileCreateResult.Stream

Protect part of a file


In most cases, it's cleaner to store enterprise and personal data separately, but you can store them in the same file
if you want. For example, Microsoft Outlook can store enterprise mail alongside personal mail in a single archive
file.
Encrypt the enterprise data but not the entire file. That way, users can continue using that file even if they un-enroll
from MDM or their enterprise data access rights are revoked. Also, your app should keep track of what data it
encrypts so that it knows what data to protect when it reads the file back into memory.
Step 1: Add enterprise data to an encrypted stream or buffer

string enterpriseDataString = "<employees><employee><name>Bill</name><social>xxx-xxx-xxxx</social></employee></employees>";

var enterpriseData = Windows.Security.Cryptography.CryptographicBuffer.ConvertStringToBinary(
    enterpriseDataString, Windows.Security.Cryptography.BinaryStringEncoding.Utf8);

BufferProtectUnprotectResult result =
    await DataProtectionManager.ProtectAsync(enterpriseData, identity);

enterpriseData = result.Buffer;

APIs
DataProtectionManager.ProtectAsync
BufferProtectUnprotectResult.Buffer

Step 2: Add personal data to an unencrypted stream or buffer

string personalDataString = "<recipes><recipe><name>BillsCupCakes</name><cooktime>30</cooktime></recipe></recipes>";

var personalData = Windows.Security.Cryptography.CryptographicBuffer.ConvertStringToBinary(
    personalDataString, Windows.Security.Cryptography.BinaryStringEncoding.Utf8);

Step 3: Write both streams or buffers to a file

StorageFolder storageFolder = ApplicationData.Current.LocalFolder;

StorageFile storageFile = await storageFolder.CreateFileAsync("data.xml",
    CreationCollisionOption.ReplaceExisting);

// Write both buffers to the file and save the file.
var stream = await storageFile.OpenAsync(FileAccessMode.ReadWrite);

using (var outputStream = stream.GetOutputStreamAt(0))
{
    using (var dataWriter = new DataWriter(outputStream))
    {
        dataWriter.WriteBuffer(enterpriseData);
        dataWriter.WriteBuffer(personalData);

        await dataWriter.StoreAsync();
        await outputStream.FlushAsync();
    }
}

Step 4: Keep track of the location of your enterprise data in the file
It's the responsibility of your app to keep track of the data in that file that is enterprise owned.
You can store that information in a property associated with the file, in a database, or in some header text in the file.
This example saves that information to a separate XML file.

StorageFile metaDataFile = await storageFolder.CreateFileAsync("metadata.xml",
    CreationCollisionOption.ReplaceExisting);

await Windows.Storage.FileIO.WriteTextAsync
    (metaDataFile, "<EnterpriseDataMarker start='0' end='" + enterpriseData.Length.ToString() +
    "'></EnterpriseDataMarker>");

Read the protected part of a file


Here's how you'd read the enterprise data out of that file.
Step 1: Get the position of your enterprise data in the file

Windows.Storage.StorageFolder storageFolder =
Windows.Storage.ApplicationData.Current.LocalFolder;

Windows.Storage.StorageFile metaDataFile =
await storageFolder.GetFileAsync("metadata.xml");

string metaData = await Windows.Storage.FileIO.ReadTextAsync(metaDataFile);

XmlDocument doc = new XmlDocument();

doc.LoadXml(metaData);

uint startPosition =
    Convert.ToUInt32((doc.FirstChild.Attributes.GetNamedItem("start")).InnerText);

uint endPosition =
    Convert.ToUInt32((doc.FirstChild.Attributes.GetNamedItem("end")).InnerText);

Step 2: Open the data file and make sure that it's not protected

Windows.Storage.StorageFile dataFile =
await storageFolder.GetFileAsync("data.xml");

FileProtectionInfo protectionInfo =
await FileProtectionManager.GetProtectionInfoAsync(dataFile);

if (protectionInfo.Status == FileProtectionStatus.Protected)
return false;

APIs
FileProtectionManager.GetProtectionInfoAsync
FileProtectionInfo
FileProtectionStatus

Step 3: Read the enterprise data from the file

var stream = await dataFile.OpenAsync(Windows.Storage.FileAccessMode.ReadWrite);

stream.Seek(startPosition);

Windows.Storage.Streams.Buffer tempBuffer = new Windows.Storage.Streams.Buffer(50000);

// The count is the length of the enterprise block (end offset minus start offset).
IBuffer enterpriseData = await stream.ReadAsync(tempBuffer, endPosition - startPosition, InputStreamOptions.None);


Step 4: Decrypt the buffer that contains enterprise data

DataProtectionInfo dataProtectionInfo =
await DataProtectionManager.GetProtectionInfoAsync(enterpriseData);

if (dataProtectionInfo.Status == DataProtectionStatus.Protected)
{
BufferProtectUnprotectResult result = await DataProtectionManager.UnprotectAsync(enterpriseData);
enterpriseData = result.Buffer;
}
else if (dataProtectionInfo.Status == DataProtectionStatus.Revoked)
{
// Code goes here to handle this situation. Perhaps, show UI
// saying that the user's data has been revoked.
}

APIs
DataProtectionInfo
DataProtectionManager.GetProtectionInfoAsync

Protect data to a folder


You can create a folder and protect it. That way any items that you add to that folder are automatically protected.

private async Task<bool> CreateANewFolderAndProtectItAsync(string folderName, string identity)
{
    if (!ProtectionPolicyManager.IsIdentityManaged(identity)) return false;

    StorageFolder storageFolder = ApplicationData.Current.LocalFolder;

    StorageFolder newStorageFolder =
        await storageFolder.CreateFolderAsync(folderName);

    FileProtectionInfo fileProtectionInfo =
        await FileProtectionManager.ProtectAsync(newStorageFolder, identity);

    if (fileProtectionInfo.Status != FileProtectionStatus.Protected)
    {
        // Protection failed.
        return false;
    }
    return true;
}

Make sure that the folder is empty before you protect it. You can't protect a folder that already contains items.

APIs
ProtectionPolicyManager.IsIdentityManaged
FileProtectionManager.ProtectAsync
FileProtectionInfo.Identity
FileProtectionInfo.Status

Protect data to a network end point


Create a protected thread context to send that data to an enterprise endpoint.
Step 1: Get the identity of the network endpoint
Windows.Networking.HostName hostName =
    new Windows.Networking.HostName(resourceURI.Host);

string identity = await ProtectionPolicyManager.
    GetPrimaryManagedIdentityForNetworkEndpointAsync(hostName);

APIs
ProtectionPolicyManager.GetPrimaryManagedIdentityForNetworkEndpointAsync

Step 2: Create a protected thread context and send data to the network endpoint

HttpClient client = null;

if (!string.IsNullOrEmpty(identity))
{
    ProtectionPolicyManager.GetForCurrentView().Identity = identity;

    using (ThreadNetworkContext threadNetworkContext =
        ProtectionPolicyManager.CreateCurrentThreadNetworkContext(identity))
    {
        client = new HttpClient();
        HttpRequestMessage message = new HttpRequestMessage(HttpMethod.Put, resourceURI);
        message.Content = new HttpStreamContent(dataToWrite);

        HttpResponseMessage response = await client.SendRequestAsync(message);

        if (response.StatusCode == HttpStatusCode.Ok)
            return true;
        else
            return false;
    }
}
else
{
    return false;
}

APIs
ProtectionPolicyManager.GetForCurrentView
ProtectionPolicyManager.Identity
ProtectionPolicyManager.CreateCurrentThreadNetworkContext

Protect data that your app shares through a share contract


If you want users to share content from your app, you'll have to implement a share contract and handle the
DataTransferManager.DataRequested event.
In your event handler, set the enterprise identity context in the data package.
private void OnShareSourceOperation(object sender, RoutedEventArgs e)
{
// Register the current page as a share source (or you could do this earlier in your app).
DataTransferManager.GetForCurrentView().DataRequested += OnDataRequested;
DataTransferManager.ShowShareUI();
}

private void OnDataRequested(DataTransferManager sender, DataRequestedEventArgs args)
{
if (!string.IsNullOrEmpty(this.shareSourceContent))
{
var protectionPolicyManager = ProtectionPolicyManager.GetForCurrentView();
DataPackage requestData = args.Request.Data;
requestData.Properties.Title = this.shareSourceTitle;
requestData.Properties.EnterpriseId = protectionPolicyManager.Identity;
requestData.SetText(this.shareSourceContent);
}
}

APIs
ProtectionPolicyManager.GetForCurrentView
ProtectionPolicyManager.Identity

Protect files that you copy to another location

private async void CopyProtectionFromOneFileToAnother
    (StorageFile sourceStorageFile, StorageFile targetStorageFile)
{
    bool copyResult = await
        FileProtectionManager.CopyProtectionAsync(sourceStorageFile, targetStorageFile);

    if (!copyResult)
    {
        // Copying failed. To diagnose, you could check the file's status
        // (call FileProtectionManager.GetProtectionInfoAsync and
        // check FileProtectionInfo.Status).
    }
}

APIs
FileProtectionManager.CopyProtectionAsync

Protect enterprise data when the screen of the device is locked


Remove all sensitive data in memory when the device is locked. When the user unlocks the device, your app can
safely add that data back.
Handle the ProtectionPolicyManager.ProtectedAccessSuspending event so that your app knows when the
screen is locked. This event is raised only if the administrator configures a secure data protection under lock policy.
Windows temporarily removes the data protection keys that are provisioned on the device. Windows removes
these keys to ensure that there is no unauthorized access to encrypted data while the device is locked and possibly
not in possession of its owner.
Handle the ProtectionPolicyManager.ProtectedAccessResumed event so that your app knows when the screen
is unlocked. This event is raised regardless of whether the administrator configures a secure data protection under
lock policy.
Remove sensitive data in memory when the screen is locked
Protect sensitive data, and close any file streams that your app has opened on protected files to help ensure that the
system doesn't cache any sensitive data in memory.
This example saves content from a textblock to an encrypted buffer and removes the content from that textblock.

private async void ProtectionPolicyManager_ProtectedAccessSuspending(object sender, ProtectedAccessSuspendingEventArgs e)
{
    Deferral deferral = e.GetDeferral();

    if (ProtectionPolicyManager.GetForCurrentView().Identity != String.Empty)
    {
        IBuffer documentBodyBuffer = CryptographicBuffer.ConvertStringToBinary
            (documentTextBlock.Text, BinaryStringEncoding.Utf8);

        BufferProtectUnprotectResult result = await DataProtectionManager.ProtectAsync
            (documentBodyBuffer, ProtectionPolicyManager.GetForCurrentView().Identity);

        if (result.ProtectionInfo.Status == DataProtectionStatus.Protected)
        {
            this.protectedDocumentBuffer = result.Buffer;
            documentTextBlock.Text = null;
        }
    }

    // Close any open streams that you are actively working with
    // to make sure that we have no unprotected content in memory.

    // Optionally, code goes here to use e.Deadline to determine whether we have more
    // than 15 seconds left before the suspension deadline. If we do, then process any
    // messages queued up for sending while we are still able to access them.

    deferral.Complete();
}

APIs
ProtectionPolicyManager.ProtectedAccessSuspending
ProtectionPolicyManager.GetForCurrentView
ProtectionPolicyManager.Identity
DataProtectionManager.ProtectAsync
BufferProtectUnprotectResult.Buffer
ProtectedAccessSuspendingEventArgs.GetDeferral
Deferral.Complete
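Both are static events, so a minimal wiring sketch is to subscribe once, for example during app initialization, to the suspending handler above and the resumed handler shown in the next section:

// Subscribe once, e.g., in app startup code.
ProtectionPolicyManager.ProtectedAccessSuspending +=
    ProtectionPolicyManager_ProtectedAccessSuspending;
ProtectionPolicyManager.ProtectedAccessResumed +=
    ProtectionPolicyManager_ProtectedAccessResumed;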

Add back sensitive data when the device is unlocked


ProtectionPolicyManager.ProtectedAccessResumed is raised when the device is unlocked and the keys are
available on the device again.
ProtectedAccessResumedEventArgs.Identities is an empty collection if the administrator hasn't configured a
secure data protection under lock policy.
This example does the reverse of the previous example. It decrypts the buffer, adds information from that buffer
back to the textbox and then disposes of the buffer.
private async void ProtectionPolicyManager_ProtectedAccessResumed(object sender, ProtectedAccessResumedEventArgs e)
{
    if (ProtectionPolicyManager.GetForCurrentView().Identity != String.Empty)
    {
        BufferProtectUnprotectResult result = await DataProtectionManager.UnprotectAsync
            (this.protectedDocumentBuffer);

        if (result.ProtectionInfo.Status == DataProtectionStatus.Unprotected)
        {
            // Restore the unprotected version.
            documentTextBlock.Text = CryptographicBuffer.ConvertBinaryToString
                (BinaryStringEncoding.Utf8, result.Buffer);
            this.protectedDocumentBuffer = null;
        }
    }
}

APIs
ProtectionPolicyManager.ProtectedAccessResumed
ProtectionPolicyManager.GetForCurrentView
ProtectionPolicyManager.Identity
DataProtectionManager.UnprotectAsync
BufferProtectUnprotectResult.Status

Handle enterprise data when protected content is revoked


If you want your app to be notified when the device is un-enrolled from MDM or when the policy administrator
explicitly revokes access to enterprise data, handle the ProtectionPolicyManager.ProtectedContentRevoked
event.
This example determines if the data in an enterprise mailbox for an email app has been revoked.

private string mailIdentity = "contoso.com";

void MailAppSetup()
{
ProtectionPolicyManager.ProtectedContentRevoked += ProtectionPolicyManager_ProtectedContentRevoked;
// Code goes here to set up mailbox for 'mailIdentity'.
}

private void ProtectionPolicyManager_ProtectedContentRevoked(object sender, ProtectedContentRevokedEventArgs e)
{
    if (!new System.Collections.Generic.List<string>(e.Identities).Contains
        (this.mailIdentity))
    {
        // This event is not for our identity.
        return;
    }

    // Code goes here to delete any metadata associated with 'mailIdentity'.
}

APIs
ProtectionPolicyManager.ProtectedContentRevoked

Related topics
Windows Information Protection (WIP) sample
Enterprise Shared Storage

Enterprise shared storage consists of two locations where apps with the restricted capability
enterpriseDeviceLockdown and an Enterprise certificate have full read and write access. Note that the
enterpriseDeviceLockdown capability allows apps to use the device lockdown API and access the enterprise
shared storage folders. For more information about the API, see the Windows.Embedded.DeviceLockdown
namespace.
These locations are set on the local drive:
\Data\SharedData\Enterprise\Persistent
\Data\SharedData\Enterprise\Non-Persistent

Scenarios
Enterprise shared storage provides support for the following scenarios.
You can share data within an instance of an app, between instances of the same app, or even between apps
assuming they both have the appropriate capability and certificate.
You can store data on the local hard drive in the \Data\SharedData\Enterprise\Persistent folder and it persists
even after the device has been reset.
You can manipulate files, including reading, writing, and deleting files on a device, via a Mobile Device Management
(MDM) service. For more information on how to use enterprise shared storage through the MDM service, see
EnterpriseExtFileSystem CSP.

Access enterprise shared storage


The following example shows how to declare the capability to access enterprise shared storage in the package
manifest, and how to access the shared storage folders by using the Windows.Storage.StorageFolder class.
In your app package manifest, include the following capability:

<Package
xmlns="http://schemas.microsoft.com/appx/manifest/foundation/windows10"
xmlns:mp="http://schemas.microsoft.com/appx/2014/phone/manifest"
xmlns:uap="http://schemas.microsoft.com/appx/manifest/uap/windows10"
xmlns:rescap="http://schemas.microsoft.com/appx/manifest/foundation/windows10/restrictedcapabilities"
IgnorableNamespaces="uap mp rescap">

<Capabilities>
<rescap:Capability Name="enterpriseDeviceLockdown"/>
</Capabilities>

To access the shared data location, your app would use the following code.
using System;
using System.Collections.Generic;
using System.Diagnostics;
using Windows.Storage;

// Get the Enterprise Shared Storage folder.
var enterprisePersistentFolderRoot = @"C:\Data\SharedData\Enterprise\Persistent";

StorageFolder folder =
    await StorageFolder.GetFolderFromPathAsync(enterprisePersistentFolderRoot);

// Get the files in the folder.
IReadOnlyList<StorageFile> sortedItems =
    await folder.GetFilesAsync();

// Iterate over the results and print the list of files
// to the Visual Studio Output window.
foreach (StorageFile file in sortedItems)
    Debug.WriteLine(file.Name + ", " + file.DateCreated);
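Writing works the same way. Here's a small companion sketch, assuming the same capability; the file name is hypothetical. Because the file lands in the Persistent location, it survives a device reset:

// Append a line to a file in the persistent enterprise share.
var persistentRoot = @"C:\Data\SharedData\Enterprise\Persistent";

StorageFolder shareFolder =
    await StorageFolder.GetFolderFromPathAsync(persistentRoot);

StorageFile logFile = await shareFolder.CreateFileAsync(
    "device-log.txt", CreationCollisionOption.OpenIfExists);

await FileIO.AppendTextAsync(logFile, DateTime.Now + ": provisioning checkpoint\n");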
Files, folders, and libraries

[ Updated for UWP apps on Windows 10. For Windows 8.x articles, see the archive ]
You use the APIs in the Windows.Storage, Windows.Storage.Streams, and Windows.Storage.Pickers namespaces to
read and write text and other data formats in files, and to manage files and folders. In this section, you'll also learn
about reading and writing app settings, about file and folder pickers, and about special sandboxed locations such
as the Video/Music library.

Enumerate and query files and folders: Access files and folders in either a folder, library, device, or network location. You can also query the files and folders in a location by constructing file and folder queries.

Create, write, and read a file: Read and write a file using a StorageFile object.

Get file properties: Get properties (top-level, basic, and extended) for a file represented by a StorageFile object.

Open files and folders with a picker: Access files and folders by letting the user interact with a picker. You can use the FolderPicker to gain access to a folder.

Save a file with a picker: Use FileSavePicker to let users specify the name and location where they want your app to save a file.

Accessing HomeGroup content: Access content stored in the user's HomeGroup folder, including pictures, music, and videos.

Determining availability of Microsoft OneDrive files: Determine if a Microsoft OneDrive file is available using the StorageFile.IsAvailable property.

Files and folders in the Music, Pictures, and Videos libraries: Add existing folders of music, pictures, or videos to the corresponding libraries. You can also remove folders from libraries, get the list of folders in a library, and discover stored photos, music, and videos.

Track recently used files and folders: Track files that your user accesses frequently by adding them to your app's most recently used list (MRU). The platform manages the MRU for you by sorting items based on when they were last accessed, and by removing the oldest item when the list's 25-item limit is reached. All apps have their own MRU.

Access the SD card: You can store and access non-essential data on an optional microSD card, especially on low-cost mobile devices that have limited internal storage.

File access permissions: Apps can access certain file system locations by default. Apps can also access additional locations through the file picker, or by declaring capabilities.
Related samples
Folder enumeration sample
File access sample
File picker sample
Enumerate and query files and folders

[ Updated for UWP apps on Windows 10. For Windows 8.x articles, see the archive ]
Access files and folders in either a folder, library, device, or network location. You can also query the files and
folders in a location by constructing file and folder queries.
For detailed guidance on how to store your Universal Windows Platform app's data, see the ApplicationData class.
Note Also see the Folder enumeration sample.

Prerequisites
Understand async programming for Universal Windows Platform (UWP) apps
To learn how to write asynchronous apps in C# or Visual Basic, see Call asynchronous APIs in C# or
Visual Basic. To learn how to write asynchronous apps in C++, see Asynchronous programming in C++.
Access permissions to the location
For example, the code in these examples requires the picturesLibrary capability, but your location may
require a different capability or no capability at all. To learn more, see File access permissions.

Enumerate files and folders in a location


Note Remember to declare the picturesLibrary capability.

In this example we first use the StorageFolder.GetFilesAsync method to get all the files in the root folder of the
PicturesLibrary (not in subfolders) and list the name of each file. Next, we use the GetFoldersAsync method to
get all the subfolders in the PicturesLibrary and list the name of each subfolder.
//#include <ppltasks.h>
//#include <string>
//#include <memory>
using namespace Windows::Storage;
using namespace Platform::Collections;
using namespace concurrency;
using namespace std;

// Be sure to specify the Pictures Folder capability in the appxmanifest file.


StorageFolder^ picturesFolder = KnownFolders::PicturesLibrary;

// Use a shared_ptr so that the string stays in memory


// until the last task is complete.
auto outputString = make_shared<wstring>();
*outputString += L"Files:\n";

// Get a read-only vector of the file objects


// and pass it to the continuation.
create_task(picturesFolder->GetFilesAsync())
// outputString is captured by value, which creates a copy
// of the shared_ptr and increments its reference count.
.then ([outputString] (IVectorView<StorageFile^>^ files)
{
for ( unsigned int i = 0 ; i < files->Size; i++)
{
*outputString += files->GetAt(i)->Name->Data();
*outputString += L"\n";
}
})
// We need to explicitly state the return type
// here: -> IAsyncOperation<...>
.then([picturesFolder]() -> IAsyncOperation<IVectorView<StorageFolder^>^>^
{
return picturesFolder->GetFoldersAsync();
})
// Capture "this" to access m_OutputTextBlock from within the lambda.
.then([this, outputString](IVectorView<StorageFolder^>^ folders)
{
*outputString += L"Folders:\n";

for ( unsigned int i = 0; i < folders->Size; i++)


{
*outputString += folders->GetAt(i)->Name->Data();
*outputString += L"\n";
}

// Assume m_OutputTextBlock is a TextBlock defined in the XAML.


m_OutputTextBlock->Text = ref new String((*outputString).c_str());
});
StorageFolder picturesFolder = KnownFolders.PicturesLibrary;
StringBuilder outputText = new StringBuilder();

IReadOnlyList<StorageFile> fileList =
await picturesFolder.GetFilesAsync();

outputText.AppendLine("Files:");
foreach (StorageFile file in fileList)
{
outputText.Append(file.Name + "\n");
}

IReadOnlyList<StorageFolder> folderList =
await picturesFolder.GetFoldersAsync();

outputText.AppendLine("Folders:");
foreach (StorageFolder folder in folderList)
{
outputText.Append(folder.DisplayName + "\n");
}

Dim picturesFolder As StorageFolder = KnownFolders.PicturesLibrary


Dim outputText As New StringBuilder

Dim fileList As IReadOnlyList(Of StorageFile) =


Await picturesFolder.GetFilesAsync()

outputText.AppendLine("Files:")
For Each file As StorageFile In fileList

outputText.Append(file.Name & vbLf)

Next file

Dim folderList As IReadOnlyList(Of StorageFolder) =


Await picturesFolder.GetFoldersAsync()

outputText.AppendLine("Folders:")
For Each folder As StorageFolder In folderList

outputText.Append(folder.DisplayName & vbLf)

Next folder

Note In C# or Visual Basic, remember to put the async keyword in the method declaration of any method in which
you use the await operator.
Alternatively, you can use the GetItemsAsync method to get all items (both files and subfolders) in a particular
location. The following example uses the GetItemsAsync method to get all files and subfolders in the root folder
of the PicturesLibrary (not in subfolders). Then the example lists the name of each file and subfolder. If the item is
a subfolder, the example appends "folder" to the name.
// See previous example for comments, namespace and #include info.
StorageFolder^ picturesFolder = KnownFolders::PicturesLibrary;
auto outputString = make_shared<wstring>();

create_task(picturesFolder->GetItemsAsync())
.then ([this, outputString] (IVectorView<IStorageItem^>^ items)
{
for ( unsigned int i = 0 ; i < items->Size; i++)
{
*outputString += items->GetAt(i)->Name->Data();
if(items->GetAt(i)->IsOfType(StorageItemTypes::Folder))
{
*outputString += L" folder\n";
}
else
{
*outputString += L"\n";
}
m_OutputTextBlock->Text = ref new String((*outputString).c_str());
}
});

StorageFolder picturesFolder = KnownFolders.PicturesLibrary;


StringBuilder outputText = new StringBuilder();

IReadOnlyList<IStorageItem> itemsList =
await picturesFolder.GetItemsAsync();

foreach (var item in itemsList)


{
if (item is StorageFolder)
{
outputText.Append(item.Name + " folder\n");

}
else
{
outputText.Append(item.Name + "\n");

}
}

Dim picturesFolder As StorageFolder = KnownFolders.PicturesLibrary


Dim outputText As New StringBuilder

Dim itemsList As IReadOnlyList(Of IStorageItem) =


Await picturesFolder.GetItemsAsync()

For Each item In itemsList

If TypeOf item Is StorageFolder Then

outputText.Append(item.Name & " folder" & vbLf)

Else

outputText.Append(item.Name & vbLf)

End If

Next item
Query files in a location and enumerate matching files
In this example we query for all the files in the PicturesLibrary grouped by the month, and this time the example
recurses into subfolders. First, we call StorageFolder.CreateFolderQuery and pass the
CommonFolderQuery.GroupByMonth value to the method. That gives us a StorageFolderQueryResult object.
Next we call StorageFolderQueryResult.GetFoldersAsync which returns StorageFolder objects representing
virtual folders. In this case we're grouping by month, so the virtual folders each represent a group of files with the
same month.

//#include <ppltasks.h>
//#include <string>
//#include <memory>
using namespace Windows::Storage;
using namespace Windows::Storage::Search;
using namespace concurrency;
using namespace Platform::Collections;
using namespace Windows::Foundation::Collections;
using namespace std;

StorageFolder^ picturesFolder = KnownFolders::PicturesLibrary;

StorageFolderQueryResult^ queryResult =
picturesFolder->CreateFolderQuery(CommonFolderQuery::GroupByMonth);

// Use shared_ptr so that outputString remains in memory


// until the task completes, which is after the function goes out of scope.
auto outputString = std::make_shared<wstring>();

create_task( queryResult->GetFoldersAsync()).then([this, outputString] (IVectorView<StorageFolder^>^ view)


{
for ( unsigned int i = 0; i < view->Size; i++)
{
create_task(view->GetAt(i)->GetFilesAsync()).then([this, i, view, outputString](IVectorView<StorageFile^>^ files)
{
*outputString += view->GetAt(i)->Name->Data();
*outputString += L"(";
*outputString += to_wstring(files->Size);
*outputString += L")\r\n";
for (unsigned int j = 0; j < files->Size; j++)
{
*outputString += L" ";
*outputString += files->GetAt(j)->Name->Data();
*outputString += L"\r\n";
}
}).then([this, outputString]()
{
m_OutputTextBlock->Text = ref new String((*outputString).c_str());
});
}
});
StorageFolder picturesFolder = KnownFolders.PicturesLibrary;

StorageFolderQueryResult queryResult =
picturesFolder.CreateFolderQuery(CommonFolderQuery.GroupByMonth);

IReadOnlyList<StorageFolder> folderList =
await queryResult.GetFoldersAsync();

StringBuilder outputText = new StringBuilder();

foreach (StorageFolder folder in folderList)


{
IReadOnlyList<StorageFile> fileList = await folder.GetFilesAsync();

// Print the month and number of files in this group.


outputText.AppendLine(folder.Name + " (" + fileList.Count + ")");

foreach (StorageFile file in fileList)


{
// Print the name of the file.
outputText.AppendLine(" " + file.Name);
}
}

Dim picturesFolder As StorageFolder = KnownFolders.PicturesLibrary


Dim outputText As New StringBuilder

Dim queryResult As StorageFolderQueryResult =


picturesFolder.CreateFolderQuery(CommonFolderQuery.GroupByMonth)

Dim folderList As IReadOnlyList(Of StorageFolder) =


Await queryResult.GetFoldersAsync()

For Each folder As StorageFolder In folderList

Dim fileList As IReadOnlyList(Of StorageFile) =


Await folder.GetFilesAsync()

' Print the month and number of files in this group.


outputText.AppendLine(folder.Name & " (" & fileList.Count & ")")

For Each file As StorageFile In fileList

' Print the name of the file.


outputText.AppendLine(" " & file.Name)

Next file

Next folder

The output of the example looks similar to the following.

July 2015 (2)
    MyImage3.png
    MyImage4.png
December 2014 (2)
    MyImage1.png
    MyImage2.png
Create, write, and read a file

[ Updated for UWP apps on Windows 10. For Windows 8.x articles, see the archive ]
Important APIs
StorageFolder class
StorageFile class
FileIO class
Read and write a file using a StorageFile object.

Note Also see the File access sample.

Prerequisites
Understand async programming for Universal Windows Platform (UWP) apps
To learn how to write asynchronous apps in C# or Visual Basic, see Call asynchronous APIs in C# or
Visual Basic. To learn how to write asynchronous apps in C++, see Asynchronous programming in C++.
Know how to get the file that you want to read from, write to, or both
You can learn how to get a file by using a file picker in Open files and folders with a picker.

Creating a file
Here's how to create a file in the app's local folder. If it already exists, we replace it.

// Create sample file; replace if exists.


Windows.Storage.StorageFolder storageFolder =
Windows.Storage.ApplicationData.Current.LocalFolder;
Windows.Storage.StorageFile sampleFile =
await storageFolder.CreateFileAsync("sample.txt",
Windows.Storage.CreationCollisionOption.ReplaceExisting);

' Create sample file; replace if exists.


Dim storageFolder As StorageFolder = Windows.Storage.ApplicationData.Current.LocalFolder
Dim sampleFile As StorageFile = Await storageFolder.CreateFileAsync("sample.txt", CreationCollisionOption.ReplaceExisting)

Writing to a file
Here's how to write to a writable file on disk using the StorageFile class. The common first step for each of the
ways of writing to a file (unless you're writing to the file immediately after creating it) is to get the file with
StorageFolder.GetFileAsync.

Windows.Storage.StorageFolder storageFolder =
Windows.Storage.ApplicationData.Current.LocalFolder;
Windows.Storage.StorageFile sampleFile =
await storageFolder.GetFileAsync("sample.txt");
Dim storageFolder As StorageFolder = Windows.Storage.ApplicationData.Current.LocalFolder
Dim sampleFile As StorageFile = Await storageFolder.GetFileAsync("sample.txt")

Writing text to a file


Write text to your file by calling the WriteTextAsync method of the FileIO class.

await Windows.Storage.FileIO.WriteTextAsync(sampleFile, "Swift as a shadow");

Await Windows.Storage.FileIO.WriteTextAsync(sampleFile, "Swift as a shadow")

Writing bytes to a file by using a buffer (2 steps)


1. First, call ConvertStringToBinary to get a buffer of the bytes (based on an arbitrary string) that you want
to write to your file.

var buffer = Windows.Security.Cryptography.CryptographicBuffer.ConvertStringToBinary(


"What fools these mortals be", Windows.Security.Cryptography.BinaryStringEncoding.Utf8);

Dim buffer = Windows.Security.Cryptography.CryptographicBuffer.ConvertStringToBinary(


"What fools these mortals be",
Windows.Security.Cryptography.BinaryStringEncoding.Utf8)

2. Then write the bytes from your buffer to your file by calling the WriteBufferAsync method of the FileIO
class.

await Windows.Storage.FileIO.WriteBufferAsync(sampleFile, buffer);

Await Windows.Storage.FileIO.WriteBufferAsync(sampleFile, buffer)

Writing text to a file by using a stream (4 steps)


1. First, open the file by calling the StorageFile.OpenAsync method. It returns a stream of the file's content
when the open operation completes.

var stream = await sampleFile.OpenAsync(Windows.Storage.FileAccessMode.ReadWrite);

Dim stream = Await sampleFile.OpenAsync(Windows.Storage.FileAccessMode.ReadWrite)

2. Next, get an output stream by calling the GetOutputStreamAt method from the stream. Put this in a using
statement to manage the output stream's lifetime.

using (var outputStream = stream.GetOutputStreamAt(0))


{
// We'll add more code here in the next step.
}
stream.Dispose(); // Or use the stream variable (see previous code snippet) with a using statement as well.
Using outputStream = stream.GetOutputStreamAt(0)
' We'll add more code here in the next step.
End Using

3. Now add this code within the existing using statement to write to the output stream by creating a new
DataWriter object and calling the DataWriter.WriteString method.

using (var dataWriter = new Windows.Storage.Streams.DataWriter(outputStream))


{
dataWriter.WriteString("DataWriter has methods to write to various types, such as DataTimeOffset.");
}

Dim dataWriter As New DataWriter(outputStream)


dataWriter.WriteString("DataWriter has methods to write to various types, such as DataTimeOffset.")

4. Lastly, add this code (within the inner using statement) to save the text to your file with StoreAsync and
close the stream with FlushAsync.

await dataWriter.StoreAsync();
await outputStream.FlushAsync();

Await dataWriter.StoreAsync()
Await outputStream.FlushAsync()
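Combined, the four steps form one method; a minimal sketch (the method name is ours):

private async Task WriteTextWithStreamAsync(Windows.Storage.StorageFile sampleFile, string text)
{
    // Steps 1 and 2: open the file and get an output stream at the beginning.
    using (var stream = await sampleFile.OpenAsync(Windows.Storage.FileAccessMode.ReadWrite))
    using (var outputStream = stream.GetOutputStreamAt(0))
    using (var dataWriter = new Windows.Storage.Streams.DataWriter(outputStream))
    {
        // Step 3: write the text.
        dataWriter.WriteString(text);

        // Step 4: commit the text and flush the stream.
        await dataWriter.StoreAsync();
        await outputStream.FlushAsync();
    }
}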

Reading from a file


Here's how to read from a file on disk using the StorageFile class. The common first step for each of the ways of
reading from a file is to get the file with StorageFolder.GetFileAsync.

Windows.Storage.StorageFolder storageFolder =
Windows.Storage.ApplicationData.Current.LocalFolder;
Windows.Storage.StorageFile sampleFile =
await storageFolder.GetFileAsync("sample.txt");

Dim storageFolder As StorageFolder = Windows.Storage.ApplicationData.Current.LocalFolder


Dim sampleFile As StorageFile = Await storageFolder.GetFileAsync("sample.txt")

Reading text from a file


Read text from your file by calling the ReadTextAsync method of the FileIO class.

string text = await Windows.Storage.FileIO.ReadTextAsync(sampleFile);

Dim text As String = Await Windows.Storage.FileIO.ReadTextAsync(sampleFile)

Reading text from a file by using a buffer (2 steps)


1. First, call the ReadBufferAsync method of the FileIO class.
var buffer = await Windows.Storage.FileIO.ReadBufferAsync(sampleFile);

Dim buffer = Await Windows.Storage.FileIO.ReadBufferAsync(sampleFile)

2. Then use a DataReader object to read first the length of the buffer and then its contents.

using (var dataReader = Windows.Storage.Streams.DataReader.FromBuffer(buffer))


{
string text = dataReader.ReadString(buffer.Length);
}

Dim dataReader As DataReader = Windows.Storage.Streams.DataReader.FromBuffer(buffer)


Dim text As String = dataReader.ReadString(buffer.Length)

Reading text from a file by using a stream (4 steps)


1. Open a stream for your file by calling the StorageFile.OpenAsync method. It returns a stream of the file's
content when the operation completes.

var stream = await sampleFile.OpenAsync(Windows.Storage.FileAccessMode.ReadWrite);

Dim stream = Await sampleFile.OpenAsync(Windows.Storage.FileAccessMode.ReadWrite)

2. Get the size of the stream to use later.

ulong size = stream.Size;

Dim size = stream.Size

3. Get an input stream by calling the GetInputStreamAt method. Put this in a using statement to manage the
stream's lifetime. Specify 0 when you call GetInputStreamAt to set the position to the beginning of the
stream.

using (var inputStream = stream.GetInputStreamAt(0))


{
// We'll add more code here in the next step.
}

Using inputStream = stream.GetInputStreamAt(0)


' We'll add more code here in the next step.
End Using

4. Lastly, add this code within the existing using statement to get a DataReader object on the stream then
read the text by calling DataReader.LoadAsync and DataReader.ReadString.
using (var dataReader = new Windows.Storage.Streams.DataReader(inputStream))
{
uint numBytesLoaded = await dataReader.LoadAsync((uint)size);
string text = dataReader.ReadString(numBytesLoaded);
}

Dim dataReader As New DataReader(inputStream)


Dim numBytesLoaded As UInteger = Await dataReader.LoadAsync(CUInt(size))
Dim text As String = dataReader.ReadString(numBytesLoaded)
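And the reading counterpart; a minimal sketch (the method name is ours):

private async Task<string> ReadTextWithStreamAsync(Windows.Storage.StorageFile sampleFile)
{
    // Steps 1-3: open the file and get an input stream at the beginning.
    using (var stream = await sampleFile.OpenAsync(Windows.Storage.FileAccessMode.Read))
    using (var inputStream = stream.GetInputStreamAt(0))
    using (var dataReader = new Windows.Storage.Streams.DataReader(inputStream))
    {
        // Step 4: load the whole stream and read it back as text.
        uint numBytesLoaded = await dataReader.LoadAsync((uint)stream.Size);
        return dataReader.ReadString(numBytesLoaded);
    }
}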
Get file properties

[ Updated for UWP apps on Windows 10. For Windows 8.x articles, see the archive ]
Important APIs
StorageFile.GetBasicPropertiesAsync
StorageFile.Properties
StorageItemContentProperties.RetrievePropertiesAsync
Get properties (top-level, basic, and extended) for a file represented by a StorageFile object.
Note Also see the File access sample.

Prerequisites
Understand async programming for Universal Windows Platform (UWP) apps
To learn how to write asynchronous apps in C# or Visual Basic, see Call asynchronous APIs in C# or
Visual Basic. To learn how to write asynchronous apps in C++, see Asynchronous programming in C++.
Access permissions to the location
For example, the code in these examples requires the picturesLibrary capability, but your location may
require a different capability or no capability at all. To learn more, see File access permissions.

Getting a file's top-level properties


Many top-level file properties are accessible as members of the StorageFile class. These properties include the
file's attributes, content type, creation date, display name, file type, and so on.
Note Remember to declare the picturesLibrary capability.
This example enumerates all of the files in the Pictures library, accessing a few of each file's top-level properties.

// Enumerate all files in the Pictures library.


var folder = Windows.Storage.KnownFolders.PicturesLibrary;
var query = folder.CreateFileQuery();
var files = await query.GetFilesAsync();

foreach (Windows.Storage.StorageFile file in files)


{
StringBuilder fileProperties = new StringBuilder();

// Get top-level file properties.


fileProperties.AppendLine("File name: " + file.Name);
fileProperties.AppendLine("File type: " + file.FileType);
}

Getting a file's basic properties


Many basic file properties are obtained by first calling the StorageFile.GetBasicPropertiesAsync method. This
method returns a BasicProperties object, which defines properties for the size of the item (file or folder) as well as
when the item was last modified.
This example enumerates all of the files in the Pictures library, accessing a few of each file's basic properties.

// Enumerate all files in the Pictures library.


var folder = Windows.Storage.KnownFolders.PicturesLibrary;
var query = folder.CreateFileQuery();
var files = await query.GetFilesAsync();

foreach (Windows.Storage.StorageFile file in files)


{
StringBuilder fileProperties = new StringBuilder();

// Get file's basic properties.


Windows.Storage.FileProperties.BasicProperties basicProperties =
await file.GetBasicPropertiesAsync();
string fileSize = string.Format("{0:n0}", basicProperties.Size);
fileProperties.AppendLine("File size: " + fileSize + " bytes");
fileProperties.AppendLine("Date modified: " + basicProperties.DateModified);
}

Getting a file's extended properties


Aside from the top-level and basic file properties, there are many properties associated with the file's contents.
These extended properties are accessed by calling the StorageItemContentProperties.RetrievePropertiesAsync
method. (A StorageItemContentProperties object is obtained from the StorageFile.Properties property.) While
top-level and basic file properties are accessible as properties of a class (StorageFile and BasicProperties,
respectively), extended properties are obtained by passing an IEnumerable collection of String objects,
representing the names of the properties that are to be retrieved, to the RetrievePropertiesAsync method. This
method then returns an IDictionary collection. Each extended property is then retrieved from the collection by name or by index.
This example enumerates all of the files in the Pictures library, specifies the names of desired properties
(DateAccessed and FileOwner) in a List object, passes that List object to
BasicProperties.RetrievePropertiesAsync to retrieve those properties, and then retrieves those properties by
name from the returned IDictionary object.
const string dateAccessedProperty = "System.DateAccessed";
const string fileOwnerProperty = "System.FileOwner";

// Enumerate all files in the Pictures library.
var folder = KnownFolders.PicturesLibrary;
var query = folder.CreateFileQuery();
var files = await query.GetFilesAsync();

foreach (Windows.Storage.StorageFile file in files)
{
    StringBuilder fileProperties = new StringBuilder();

    // Define property names to be retrieved.
    var propertyNames = new List<string>();
    propertyNames.Add(dateAccessedProperty);
    propertyNames.Add(fileOwnerProperty);

    // Get extended properties.
    IDictionary<string, object> extraProperties =
        await file.Properties.RetrievePropertiesAsync(propertyNames);

    // Get the date-accessed property.
    var propValue = extraProperties[dateAccessedProperty];
    if (propValue != null)
    {
        fileProperties.AppendLine("Date accessed: " + propValue);
    }

    // Get the file-owner property.
    propValue = extraProperties[fileOwnerProperty];
    if (propValue != null)
    {
        fileProperties.AppendLine("File owner: " + propValue);
    }
}
Open files and folders with a picker

[ Updated for UWP apps on Windows 10. For Windows 8.x articles, see the archive ]
Important APIs
FileOpenPicker
FolderPicker
StorageFile
Access files and folders by letting the user interact with a picker. You can use the FileOpenPicker and
FileSavePicker classes to access files, and the FolderPicker to access a folder.
Note For a complete sample, see the File picker sample.

Prerequisites
Understand async programming for Universal Windows Platform (UWP) apps
To learn how to write asynchronous apps in C# or Visual Basic, see Call asynchronous APIs in C# or
Visual Basic. To learn how to write asynchronous apps in C++, see Asynchronous programming in C++.
Access permissions to the location
See File access permissions.

File picker UI
A file picker displays information to orient users and provide a consistent experience when opening or saving files.
That information includes:
The current location
The item or items that the user picked
A tree of locations that the user can browse to. These locations include file system locations (such as the Music
or Downloads folder) as well as apps that implement the file picker contract (such as Camera, Photos, and
Microsoft OneDrive).
An email app might display a file picker for the user to pick attachments.
How pickers work
With a picker your app can access, browse, and save files and folders on the user's system. Your app receives
those picks as StorageFile and StorageFolder objects, which you can then operate on.
The picker uses a single, unified interface to let the user pick files and folders from the file system or from other
apps. Files picked from other apps are like files from the file system: they are returned as StorageFile objects. In
general, your app can operate on them in the same ways as other objects. Other apps make files available by
participating in file picker contracts. If you want your app to provide files, a save location, or file updates to other
apps, see Integrating with file picker contracts.
For example, you might call the file picker in your app so that your user can open a file. This makes your app the
calling app. The file picker interacts with the system and/or other apps to let the user navigate and pick the file.
When your user chooses a file, the file picker returns that file to your app. The same flow applies when the user
picks a file from a providing app, such as OneDrive.
Pick a single file: complete code listing
var picker = new Windows.Storage.Pickers.FileOpenPicker();
picker.ViewMode = Windows.Storage.Pickers.PickerViewMode.Thumbnail;
picker.SuggestedStartLocation =
    Windows.Storage.Pickers.PickerLocationId.PicturesLibrary;
picker.FileTypeFilter.Add(".jpg");
picker.FileTypeFilter.Add(".jpeg");
picker.FileTypeFilter.Add(".png");

Windows.Storage.StorageFile file = await picker.PickSingleFileAsync();

if (file != null)
{
    // Application now has read/write access to the picked file
    this.textBlock.Text = "Picked photo: " + file.Name;
}
else
{
    this.textBlock.Text = "Operation cancelled.";
}

Pick a single file: step-by-step


Using a file picker involves creating and customizing a file picker object, and then showing the file picker so the
user can pick one or more items.
1. Create and customize a FileOpenPicker

var picker = new Windows.Storage.Pickers.FileOpenPicker();
picker.ViewMode = Windows.Storage.Pickers.PickerViewMode.Thumbnail;
picker.SuggestedStartLocation =
    Windows.Storage.Pickers.PickerLocationId.PicturesLibrary;
picker.FileTypeFilter.Add(".jpg");
picker.FileTypeFilter.Add(".jpeg");
picker.FileTypeFilter.Add(".png");

Set properties on the file picker object relevant to your users and app. For guidelines to help you decide
how to customize the file picker, see Guidelines and checklist for file pickers.
This example creates a rich, visual display of pictures in a convenient location that the user can pick from by
setting three properties: ViewMode, SuggestedStartLocation, and FileTypeFilter.
Setting ViewMode to PickerViewMode.Thumbnail creates a rich, visual display
by using picture thumbnails to represent files in the file picker. Do this for picking visual files such as
pictures or videos. Otherwise, use PickerViewMode.List. A hypothetical email app with Attach
Picture or Video and Attach Document features would set the ViewMode appropriate to the
feature before showing the file picker.
Setting SuggestedStartLocation to Pictures using PickerLocationId.PicturesLibrary starts the
user in a location where they're likely to find pictures. Set SuggestedStartLocation to a location
appropriate for the type of file being picked, for example Music, Pictures, Videos, or Documents.
From the start location, the user can navigate to other locations.
Using FileTypeFilter to specify file types keeps the user focused on picking files that are relevant. To
replace previous file types in the FileTypeFilter with new entries, use ReplaceAll instead of Add.
2. Show the FileOpenPicker
To pick a single file
Windows.Storage.StorageFile file = await picker.PickSingleFileAsync();
if (file != null)
{
    // Application now has read/write access to the picked file
    this.textBlock.Text = "Picked photo: " + file.Name;
}
else
{
    this.textBlock.Text = "Operation cancelled.";
}

To pick multiple files

var files = await picker.PickMultipleFilesAsync();

if (files.Count > 0)
{
    StringBuilder output = new StringBuilder("Picked files:\n");

    // Application now has read/write access to the picked file(s)
    foreach (Windows.Storage.StorageFile file in files)
    {
        output.Append(file.Name + "\n");
    }
    this.textBlock.Text = output.ToString();
}
else
{
    this.textBlock.Text = "Operation cancelled.";
}

Pick a folder: complete code listing


var folderPicker = new Windows.Storage.Pickers.FolderPicker();
folderPicker.SuggestedStartLocation = Windows.Storage.Pickers.PickerLocationId.Desktop;
folderPicker.FileTypeFilter.Add("*");

Windows.Storage.StorageFolder folder = await folderPicker.PickSingleFolderAsync();

if (folder != null)
{
    // Application now has read/write access to all contents in the picked folder
    // (including other sub-folder contents)
    Windows.Storage.AccessCache.StorageApplicationPermissions.
        FutureAccessList.AddOrReplace("PickedFolderToken", folder);
    this.textBlock.Text = "Picked folder: " + folder.Name;
}
else
{
    this.textBlock.Text = "Operation cancelled.";
}

Tip Whenever your app accesses a file or folder through a picker, add it to your app's FutureAccessList or
MostRecentlyUsedList to keep track of it. You can learn more about using these lists in How to track
recently-used files and folders.
Save a file with a picker

[ Updated for UWP apps on Windows 10. For Windows 8.x articles, see the archive ]
Important APIs
FileSavePicker
StorageFile
Use FileSavePicker to let users specify the name and location where they want your app to save a file.

Note Also see the File picker sample.

Prerequisites
Understand async programming for Universal Windows Platform (UWP) apps
To learn how to write asynchronous apps in C# or Visual Basic, see Call asynchronous APIs in C# or
Visual Basic. To learn how to write asynchronous apps in C++, see Asynchronous programming in C++.
Access permissions to the location
See File access permissions.

FileSavePicker: step-by-step
Use a FileSavePicker so that your users can specify the name, type, and location of a file to save. Create,
customize, and show a file picker object, and then save data via the returned StorageFile object that represents the
file picked.
1. Create and customize the FileSavePicker

var savePicker = new Windows.Storage.Pickers.FileSavePicker();
savePicker.SuggestedStartLocation =
    Windows.Storage.Pickers.PickerLocationId.DocumentsLibrary;
// Dropdown of file types the user can save the file as
savePicker.FileTypeChoices.Add("Plain Text", new List<string>() { ".txt" });
// Default file name if the user does not type one in or select a file to replace
savePicker.SuggestedFileName = "New Document";

Set properties on the file picker object that are relevant to your users and your app. For guidelines to help you
decide how to customize the file picker, see Guidelines and checklist for file pickers.
This example sets three properties: SuggestedStartLocation, FileTypeChoices and SuggestedFileName.

Note FileSavePicker objects display the file picker using PickerViewMode.List.

Because our user is saving a document or text file, the sample sets SuggestedStartLocation to the
Documents library by using PickerLocationId.DocumentsLibrary. Set SuggestedStartLocation to a location
appropriate for the type of file being saved, for example Music, Pictures, Videos, or Documents. From the
start location, the user can navigate to other locations.
Because we want to make sure our app can open the file after it is saved, we use FileTypeChoices to specify
the file types that the sample supports (here, plain text files). Make sure all the file types that you specify
are supported by your app. Users will be able to save their file as any of the file types you specify. They can
also change the file type by selecting another of the file types that you specified. The first file type choice
in the list will be selected by default; to control that, set the DefaultFileExtension property.
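For example, a minimal sketch (the Rich Text choice is a hypothetical addition to the picker above) that
preselects the second file type instead of the first:

// Offer a second file type, then override the default selection.
savePicker.FileTypeChoices.Add("Rich Text", new List<string>() { ".rtf" });
savePicker.DefaultFileExtension = ".rtf";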

Note The file picker also uses the currently selected file type to filter which files it displays, so that only files
that match the selected file type are displayed to the user.

To save the user some typing, the example sets a SuggestedFileName. Make your suggested file name
relevant to the file being saved. For example, like Word, you can suggest the existing file name if there is one, or
the first line of a document if the user is saving a file that does not yet have a name.
2. Show the FileSavePicker and save to the picked file
Display the file picker by calling PickSaveFileAsync. After the user specifies the name, file type, and
location, and confirms to save the file, PickSaveFileAsync returns a StorageFile object that represents the
saved file. You can capture and process this file now that you have read and write access to it.

Windows.Storage.StorageFile file = await savePicker.PickSaveFileAsync();

if (file != null)
{
    // Prevent updates to the remote version of the file until
    // we finish making changes and call CompleteUpdatesAsync.
    Windows.Storage.CachedFileManager.DeferUpdates(file);
    // Write to the file.
    await Windows.Storage.FileIO.WriteTextAsync(file, file.Name);
    // Let Windows know that we're finished changing the file so
    // the other app can update the remote version of the file.
    // Completing updates may require Windows to ask for user input.
    Windows.Storage.Provider.FileUpdateStatus status =
        await Windows.Storage.CachedFileManager.CompleteUpdatesAsync(file);
    if (status == Windows.Storage.Provider.FileUpdateStatus.Complete)
    {
        this.textBlock.Text = "File " + file.Name + " was saved.";
    }
    else
    {
        this.textBlock.Text = "File " + file.Name + " couldn't be saved.";
    }
}
else
{
    this.textBlock.Text = "Operation cancelled.";
}

The example checks that the file is valid and writes its own file name into it. Also see Creating, writing, and reading
a file.
Tip You should always check the saved file to make sure it is valid before you perform any other processing. Then,
you can save content to the file as appropriate for your app, and provide appropriate behavior if the picked file is
not valid.
Accessing HomeGroup content

[ Updated for UWP apps on Windows 10. For Windows 8.x articles, see the archive ]
Important APIs
Windows.Storage.KnownFolders class
Access content stored in the user's HomeGroup folder, including pictures, music, and videos.

Prerequisites
Understand async programming for Universal Windows Platform (UWP) apps
To learn how to write asynchronous apps in C# or Visual Basic, see Call asynchronous APIs in C# or
Visual Basic. To learn how to write asynchronous apps in C++, see Asynchronous programming in C++.
App capability declarations
To access HomeGroup content, the user's machine must have a HomeGroup set up and your app must have
at least one of the following capabilities: picturesLibrary, musicLibrary, or videosLibrary. When your app
accesses the HomeGroup folder, it will see only the libraries that correspond to the capabilities declared in
your app's manifest. To learn more, see File access permissions.
Note Content in the Documents library of a HomeGroup isn't visible to your app regardless of the
capabilities declared in your app's manifest and regardless of the user's sharing settings.
Understand how to use file pickers
You typically use the file picker to access files and folders in the HomeGroup. To learn how to use the file
picker, see Open files and folders with a picker.
Understand file and folder queries
You can use queries to enumerate files and folders in the HomeGroup. To learn about file and folder queries,
see Enumerating and querying files and folders.

Open the file picker at the HomeGroup


Follow these steps to open an instance of the file picker that lets the user pick files and folders from the
HomeGroup:
1. Create and customize the file picker
Use FileOpenPicker to create the file picker, and then set the picker's SuggestedStartLocation to
PickerLocationId.HomeGroup. Or, set other properties that are relevant to your users and your app. For
guidelines to help you decide how to customize the file picker, see Guidelines and checklist for file pickers.
This example creates a file picker that opens at the HomeGroup, includes files of any type, and displays the
files as thumbnail images:
Windows.Storage.Pickers.FileOpenPicker picker = new Windows.Storage.Pickers.FileOpenPicker();
picker.ViewMode = Windows.Storage.Pickers.PickerViewMode.Thumbnail;
picker.SuggestedStartLocation = Windows.Storage.Pickers.PickerLocationId.HomeGroup;
picker.FileTypeFilter.Clear();
picker.FileTypeFilter.Add("*");

2. Show the file picker and process the picked file.


After you create and customize the file picker, let the user pick one file by calling
FileOpenPicker.PickSingleFileAsync, or multiple files by calling
FileOpenPicker.PickMultipleFilesAsync.
This example displays the file picker to let the user pick one file:

Windows.Storage.StorageFile file = await picker.PickSingleFileAsync();

if (file != null)
{
    // Do something with the file.
}
else
{
    // No file returned. Handle the error.
}

Search the HomeGroup for files


This section shows how to find HomeGroup items that match a query term provided by the user.
1. Get the query term from the user.
Here we get a query term that the user has entered into a TextBox control called searchQueryTextBox:

string queryTerm = this.searchQueryTextBox.Text;

2. Set the query options and search filter.


Query options determine how the search results are sorted, while the search filter determines which items
are included in the search results.
This example sets query options that sort the search results by relevance and then the date modified. The
search filter is the query term that the user entered in the previous step:

Windows.Storage.Search.QueryOptions queryOptions =
    new Windows.Storage.Search.QueryOptions
        (Windows.Storage.Search.CommonFileQuery.OrderBySearchRank, null);
queryOptions.UserSearchFilter = queryTerm;
Windows.Storage.Search.StorageFileQueryResult queryResults =
    Windows.Storage.KnownFolders.HomeGroup.CreateFileQueryWithOptions(queryOptions);

3. Run the query and process the results.


The following example runs the search query in the HomeGroup and appends the names of any matching files
to an output string.
System.Collections.Generic.IReadOnlyList<Windows.Storage.StorageFile> files =
    await queryResults.GetFilesAsync();

string outputString = string.Empty;
if (files.Count > 0)
{
    outputString += (files.Count == 1) ? "One file found\n" : files.Count.ToString() + " files found\n";
    foreach (Windows.Storage.StorageFile file in files)
    {
        outputString += file.Name + "\n";
    }
}

Search the HomeGroup for a particular user's shared files


This section shows you how to find HomeGroup files that are shared by a particular user.
1. Get a collection of HomeGroup users.
Each of the first-level folders in the HomeGroup represents an individual HomeGroup user. So, to get the
collection of HomeGroup users, call GetFoldersAsync to retrieve the top-level HomeGroup folders.

System.Collections.Generic.IReadOnlyList<Windows.Storage.StorageFolder> hgFolders =
await Windows.Storage.KnownFolders.HomeGroup.GetFoldersAsync();

2. Find the target user's folder, and then create a file query scoped to that user's folder.
The following example iterates through the retrieved folders to find the target user's folder. Then, it sets
query options to find all files in the folder, sorted first by relevance and then by the date modified. The
example builds a string that reports the number of files found, along with the names of the files.

bool userFound = false;

foreach (Windows.Storage.StorageFolder folder in hgFolders)
{
    if (folder.DisplayName == targetUserName)
    {
        // Found the target user's folder; now find all files in the folder.
        userFound = true;
        Windows.Storage.Search.QueryOptions queryOptions =
            new Windows.Storage.Search.QueryOptions
                (Windows.Storage.Search.CommonFileQuery.OrderBySearchRank, null);
        queryOptions.UserSearchFilter = "*";
        Windows.Storage.Search.StorageFileQueryResult queryResults =
            folder.CreateFileQueryWithOptions(queryOptions);
        System.Collections.Generic.IReadOnlyList<Windows.Storage.StorageFile> files =
            await queryResults.GetFilesAsync();

        if (files.Count > 0)
        {
            string outputString = "Searched for files belonging to " + targetUserName + "\n";
            outputString += (files.Count == 1) ? "One file found\n" : files.Count.ToString() + " files found\n";
            foreach (Windows.Storage.StorageFile file in files)
            {
                outputString += file.Name + "\n";
            }
        }
    }
}

Stream video from the HomeGroup


Follow these steps to stream video content from the HomeGroup:
1. Include a MediaElement in your app.
A MediaElement lets you play back audio and video content in your app. For more information on audio
and video playback, see Create custom transport controls and Audio, video, and camera.

<Grid x:Name="Output" HorizontalAlignment="Left" VerticalAlignment="Top" Grid.Row="1">


<MediaElement x:Name="VideoBox" HorizontalAlignment="Left" VerticalAlignment="Top" Margin="0" Width="400" Height="300"/>
</Grid>

2. Open a file picker at the HomeGroup and apply a filter that includes video files in the formats that
your app supports.
This example includes .mp4 and .wmv files in the file open picker.

Windows.Storage.Pickers.FileOpenPicker picker = new Windows.Storage.Pickers.FileOpenPicker();
picker.ViewMode = Windows.Storage.Pickers.PickerViewMode.Thumbnail;
picker.SuggestedStartLocation = Windows.Storage.Pickers.PickerLocationId.HomeGroup;
picker.FileTypeFilter.Clear();
picker.FileTypeFilter.Add(".mp4");
picker.FileTypeFilter.Add(".wmv");
Windows.Storage.StorageFile file = await picker.PickSingleFileAsync();

3. Open the user's file selection for read access, set the file stream as the source for the
MediaElement, and then play the file.

if (file != null)
{
    var stream = await file.OpenAsync(Windows.Storage.FileAccessMode.Read);
    VideoBox.SetSource(stream, file.ContentType);
    VideoBox.Stop();
    VideoBox.Play();
}
else
{
    // No file selected. Handle the error here.
}
Determining availability of Microsoft OneDrive files

[ Updated for UWP apps on Windows 10. For Windows 8.x articles, see the archive ]
Important APIs
FileIO class
StorageFile class
StorageFile.IsAvailable property
Determine if a Microsoft OneDrive file is available using the StorageFile.IsAvailable property.

Prerequisites
Understand async programming for Universal Windows Platform (UWP) apps
To learn how to write asynchronous apps in C# or Visual Basic, see Call asynchronous APIs in C# or
Visual Basic. To learn how to write asynchronous apps in C++, see Asynchronous programming in C++.
App capability declarations
See File access permissions.

Using the StorageFile.IsAvailable property


Users are able to mark OneDrive files as either available-offline (default) or online-only. This capability enables
users to move large files (such as pictures and videos) to their OneDrive, mark them as online-only, and save disk
space (the only thing kept locally is a metadata file).
The StorageFile.IsAvailable property is used to determine if a file is currently available. The following table shows the value of
the StorageFile.IsAvailable property in various scenarios.

TYPE OF FILE                                 ONLINE    METERED NETWORK           OFFLINE

Local file                                   True      True                      True

OneDrive file marked as available-offline    True      True                      True

OneDrive file marked as online-only          True      Based on user settings    False

Network file                                 True      Based on user settings    False

The following steps illustrate how to determine if a file is currently available.


1. Declare a capability appropriate for the library you want to access.
2. Include the Windows.Storage namespace. This namespace includes the types for managing files, folders, and
application settings. It also includes the needed StorageFile type.
3. Acquire a StorageFile object for the desired file(s). If you are enumerating a library, this step is usually
accomplished by calling the StorageFolder.CreateFileQuery method and then calling the resulting
StorageFileQueryResult object's GetFilesAsync method. The GetFilesAsync method returns an
IReadOnlyList collection of StorageFile objects.
4. Once you have access to a StorageFile object representing the desired file(s), the value of the
StorageFile.IsAvailable property reflects whether or not the file is available.
The following generic method illustrates how to enumerate any folder and return the collection of StorageFile
objects for that folder. The calling method then iterates over the returned collection referencing the
StorageFile.IsAvailable property for each file.

/// <summary>
/// Generic function that retrieves all files from the specified folder.
/// </summary>
/// <param name="folder">The folder to be searched.</param>
/// <returns>An IReadOnlyList collection containing the file objects.</returns>
async Task<System.Collections.Generic.IReadOnlyList<StorageFile>> GetLibraryFilesAsync(StorageFolder folder)
{
var query = folder.CreateFileQuery();
return await query.GetFilesAsync();
}

...

private async void CheckAvailabilityOfFilesInPicturesLibrary()
{
    // Determine availability of all files within the Pictures library.
    var files = await GetLibraryFilesAsync(KnownFolders.PicturesLibrary);
    for (int i = 0; i < files.Count; i++)
    {
        StorageFile file = files[i];

        StringBuilder fileInfo = new StringBuilder();
        fileInfo.AppendFormat("{0} (on {1}) is {2}",
            file.Name,
            file.Provider.DisplayName,
            file.IsAvailable ? "available" : "not available");
    }
}
Files and folders in the Music, Pictures, and Videos
libraries

[ Updated for UWP apps on Windows 10. For Windows 8.x articles, see the archive ]
Add existing folders of music, pictures, or videos to the corresponding libraries. You can also remove folders from
libraries, get the list of folders in a library, and discover stored photos, music, and videos.
A library is a virtual collection of folders, which includes a known folder by default plus any other folders the user
has added to the library by using your app or one of the built-in apps. For example, the Pictures library includes
the Pictures known folder by default. The user can add folders to, or remove them from, the Pictures library by
using your app or the built-in Photos app.

Prerequisites
Understand async programming for Universal Windows Platform (UWP) apps
To learn how to write asynchronous apps in C# or Visual Basic, see Call asynchronous APIs in C# or
Visual Basic. To learn how to write asynchronous apps in C++, see Asynchronous programming in C++.
Access permissions to the location
In Visual Studio, open the app manifest file in Manifest Designer. On the Capabilities page, select the
libraries that your app manages.
Music Library
Pictures Library
Videos Library
To learn more, see File access permissions.

Get a reference to a library


Note Remember to declare the appropriate capability.
To get a reference to the user's Music, Pictures, or Video library, call the StorageLibrary.GetLibraryAsync
method. Provide the corresponding value from the KnownLibraryId enumeration.
KnownLibraryId.Music
KnownLibraryId.Pictures
KnownLibraryId.Videos

var myPictures = await Windows.Storage.StorageLibrary.GetLibraryAsync
    (Windows.Storage.KnownLibraryId.Pictures);

Get the list of folders in a library


To get the list of folders in a library, get the value of the StorageLibrary.Folders property.
using Windows.Foundation.Collections;

// ...

IObservableVector<Windows.Storage.StorageFolder> myPictureFolders = myPictures.Folders;

Get the folder in a library where new files are saved by default
To get the folder in a library where new files are saved by default, get the value of the StorageLibrary.SaveFolder
property.

Windows.Storage.StorageFolder savePicturesFolder = myPictures.SaveFolder;

Add an existing folder to a library


To add a folder to a library, call the StorageLibrary.RequestAddFolderAsync method. Taking the Pictures
library as an example, calling this method causes a folder picker to be shown to the user with an Add this
folder to Pictures button. If the user picks a folder, the folder remains in its original location on disk and
becomes an item in the StorageLibrary.Folders property (and in the built-in Photos app), but the folder does
not appear as a child of the Pictures folder in File Explorer.

Windows.Storage.StorageFolder newFolder = await myPictures.RequestAddFolderAsync();

Remove a folder from a library


To remove a folder from a library, call the StorageLibrary.RequestRemoveFolderAsync method and specify the
folder to be removed. You could use StorageLibrary.Folders and a ListView control (or similar) for the user to
select a folder to remove.
When you call StorageLibrary.RequestRemoveFolderAsync, the user sees a confirmation dialog saying that the
folder "won't appear in Pictures anymore, but won't be deleted." This means that the folder remains in its
original location on disk, is removed from the StorageLibrary.Folders property, and will no longer be included
in the built-in Photos app.
The following example assumes that the user has selected the folder to remove from a ListView control named
lvPictureFolders.

var folder = (Windows.Storage.StorageFolder)lvPictureFolders.SelectedItem;
bool result = await myPictures.RequestRemoveFolderAsync(folder);

Get notified of changes to the list of folders in a library


To get notified about changes to the list of folders in a library, register a handler for the
StorageLibrary.DefinitionChanged event of the library.

myPictures.DefinitionChanged += HandleDefinitionChanged;
// ...

void HandleDefinitionChanged(Windows.Storage.StorageLibrary sender, object args)
{
    // ...
}
Media library folders
A device provides five predefined locations for users and apps to store media files. Built-in apps store both
user-created media and downloaded media in these locations.
The locations are:
Pictures folder. Contains pictures.
Camera Roll folder. Contains photos and video from the built-in camera.
Saved Pictures folder. Contains pictures that the user has saved from other apps.
Music folder. Contains songs, podcasts, and audio books.
Video folder. Contains videos.
Users or apps may also store media files outside the media library folders on the SD card. To find a media file
reliably on the SD card, scan the contents of the SD card, or ask the user to locate the file by using a file picker. For
more info, see Access the SD card.

Querying the media libraries


To get a collection of files, specify the library and the type of files that you want.

...
using Windows.Storage;
using Windows.Storage.Search;
...

private async void getSongs()
{
    QueryOptions queryOption = new QueryOptions
        (CommonFileQuery.OrderByTitle, new string[] { ".mp3", ".mp4", ".wma" });

    queryOption.FolderDepth = FolderDepth.Deep;

    var files = await KnownFolders.MusicLibrary.CreateFileQueryWithOptions
        (queryOption).GetFilesAsync();

    foreach (var file in files)
    {
        // Do something with the music files.
    }
}

Query results include both internal and removable storage


Users can choose to store files by default on the optional SD card. Apps, however, can opt out of allowing files to
be stored on the SD card. As a result, the media libraries can be split across the device's internal storage and the
SD card.
You don't have to write additional code to handle this possibility. The methods in the Windows.Storage
namespace that query known folders transparently combine the query results from both locations. You don't have
to specify the removableStorage capability in the app manifest file to get these combined results, either.
Consider, for example, a device whose internal storage contains internalPic.jpg in its Pictures folder and whose
SD card contains SDPic.jpg in its Pictures folder. If you query the contents of the Pictures library by calling
await KnownFolders.PicturesLibrary.GetFilesAsync(), the results include both internalPic.jpg and SDPic.jpg.

Working with photos


On devices where the camera saves both a low-resolution image and a high-resolution image of every picture, the
deep queries return only the low-resolution image.
The Camera Roll and the Saved Pictures folder do not support the deep queries.
Opening a photo in the app that captured it
If you want to let the user open a photo again later in the app that captured it, you can save the CreatorAppId
with the photo's metadata by using code similar to the following example. In this example, testPhoto is a
StorageFile.

// appId is assumed to hold your app's identifier.
IDictionary<string, object> propertiesToSave = new Dictionary<string, object>();
propertiesToSave.Add("System.CreatorOpenWithUIOptions", 1);
propertiesToSave.Add("System.CreatorAppId", appId);

await testPhoto.Properties.SavePropertiesAsync(propertiesToSave);

Using stream methods to add a file to a media library


When you access a media library by using a known folder such as KnownFolders.PictureLibrary, and you use
stream methods to add a file to the media library, you have to make sure to close all the streams that your code
opens. Otherwise these methods fail to add the file to the media library as expected because at least one stream
still has a handle to the file.
For example, when you run the following code, the file is not added to the media library. In the line of code,
using (var destinationStream = (await destinationFile.OpenAsync(FileAccessMode.ReadWrite)).GetOutputStreamAt(0)) , both the
OpenAsync method and the GetOutputStreamAt method open a stream. However only the stream opened by
the GetOutputStreamAt method is disposed as a result of the using statement. The other stream remains open
and prevents saving the file.

StorageFolder testFolder = await StorageFolder.GetFolderFromPathAsync(@"C:\test");
StorageFile sourceFile = await testFolder.GetFileAsync("TestImage.jpg");
StorageFile destinationFile = await KnownFolders.CameraRoll.CreateFileAsync("MyTestImage.jpg");
using (var sourceStream = (await sourceFile.OpenReadAsync()).GetInputStreamAt(0))
{
    using (var destinationStream = (await destinationFile.OpenAsync(FileAccessMode.ReadWrite)).GetOutputStreamAt(0))
    {
        await RandomAccessStream.CopyAndCloseAsync(sourceStream, destinationStream);
    }
}

To use stream methods successfully to add a file to the media library, make sure to close all the streams that your
code opens, as shown in the following example.
StorageFolder testFolder = await StorageFolder.GetFolderFromPathAsync(@"C:\test");
StorageFile sourceFile = await testFolder.GetFileAsync("TestImage.jpg");
StorageFile destinationFile = await KnownFolders.CameraRoll.CreateFileAsync("MyTestImage.jpg");

using (var sourceStream = await sourceFile.OpenReadAsync())
{
    using (var sourceInputStream = sourceStream.GetInputStreamAt(0))
    {
        using (var destinationStream = await destinationFile.OpenAsync(FileAccessMode.ReadWrite))
        {
            using (var destinationOutputStream = destinationStream.GetOutputStreamAt(0))
            {
                await RandomAccessStream.CopyAndCloseAsync(sourceInputStream, destinationOutputStream);
            }
        }
    }
}
Track recently used files and folders

[ Updated for UWP apps on Windows 10. For Windows 8.x articles, see the archive ]
Important APIs
MostRecentlyUsedList
FileOpenPicker
Track files that your user accesses frequently by adding them to your app's most recently used list (MRU). The
platform manages the MRU for you by sorting items based on when they were last accessed, and by removing the
oldest item when the list's 25-item limit is reached. All apps have their own MRU.
Your app's MRU is represented by the StorageItemMostRecentlyUsedList class, which you obtain from the
static StorageApplicationPermissions.MostRecentlyUsedList property. MRU items are stored as
IStorageItem objects, so both StorageFile objects (which represent files) and StorageFolder objects (which
represent folders) can be added to the MRU.
Note Also see the File picker sample and the File access sample.

Prerequisites
Understand async programming for Universal Windows Platform (UWP) apps
To learn how to write asynchronous apps in C# or Visual Basic, see Call asynchronous APIs in C# or
Visual Basic. To learn how to write asynchronous apps in C++, see Asynchronous programming in C++.
Access permissions to the location
See File access permissions.
Open files and folders with a picker
Picked files are often the same files that users return to again and again.

Add a picked file to the MRU


The files that your user picks are often files that they return to repeatedly. So consider adding picked files to
your app's MRU as soon as they are picked. Here's how.

...

Windows.Storage.StorageFile file = await picker.PickSingleFileAsync();

var mru = Windows.Storage.AccessCache.StorageApplicationPermissions.MostRecentlyUsedList;
string mruToken = mru.Add(file, "profile pic");

StorageItemMostRecentlyUsedList.Add is overloaded. In the example, we use Add(IStorageItem, String) so
that we can associate metadata with the file. Setting metadata lets you record the item's purpose, for
example "profile pic". You can also add the file to the MRU without metadata by calling
Add(IStorageItem). When you add an item to the MRU, the method returns a uniquely identifying string,
called a token, which is used to retrieve the item.
Tip You'll need the token to retrieve an item from the MRU, so persist it somewhere. For more info about
app data, see Managing application data.
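As a minimal sketch (the settings key name is hypothetical), you could persist the token in your app's local
settings:

// Persist the MRU token across sessions; "profilePicToken" is a hypothetical key.
Windows.Storage.ApplicationData.Current.LocalSettings.Values["profilePicToken"] = mruToken;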

Use a token to retrieve an item from the MRU


Use the retrieval method most appropriate for the item you want to retrieve.
Retrieve a file as a StorageFile by using GetFileAsync.
Retrieve a folder as a StorageFolder by using GetFolderAsync.
Retrieve a generic IStorageItem, which can represent either a file or folder, by using GetItemAsync.
Here's how to get back the file we just added.

StorageFile retrievedFile = await mru.GetFileAsync(mruToken);

Here's how to iterate all the entries to get tokens and then items.

foreach (Windows.Storage.AccessCache.AccessListEntry entry in mru.Entries)
{
    string mruToken = entry.Token;
    string mruMetadata = entry.Metadata;
    Windows.Storage.IStorageItem item = await mru.GetItemAsync(mruToken);
    // The type of item will tell you whether it's a file or a folder.
}

The AccessListEntryView lets you iterate entries in the MRU. These entries are AccessListEntry structures that
contain the token and metadata for an item.

Removing items from the MRU when it's full


When the MRU's 25-item limit is reached and you try to add a new item, the item that was accessed the longest
time ago is automatically removed. So, you never need to remove an item before you add a new one.

Future-access list
As well as an MRU, your app also has a future-access list. By picking files and folders, your user grants your app
permission to access items that might not be accessible otherwise. If you add these items to your future-access list
then you'll retain that permission when your app wants to access those items again later. Your app's future-access
list is represented by the StorageItemAccessList class, which you obtain from the static
StorageApplicationPermissions.FutureAccessList property.
When a user picks an item, consider adding it to your future-access list as well as your MRU.
The FutureAccessList can hold up to 1000 items. Remember: it can hold folders as well as files, so that's a lot
of folders.
The platform never removes items from the FutureAccessList for you. When you reach the 1000-item limit,
you can't add another until you make room with the Remove method.
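Here's a minimal sketch (reusing the picked file from the MRU example above) of adding an item to the
future-access list and later freeing a slot:

var fal = Windows.Storage.AccessCache.StorageApplicationPermissions.FutureAccessList;
string falToken = fal.Add(file, "saved location");

// The platform never trims this list for you; remove entries explicitly
// when you approach the 1000-item limit.
fal.Remove(falToken);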
Access the SD card

[ Updated for UWP apps on Windows 10. For Windows 8.x articles, see the archive ]
You can store and access non-essential data on an optional microSD card, especially on low-cost mobile devices
that have limited internal storage and have a slot for an SD card.
In most cases, you have to specify the removableStorage capability in the app manifest file before your app can
store and access files on the SD card. Typically you also have to register to handle the type of files that your app
stores and accesses.
You can store and access files on the optional SD card by using the following methods:
File pickers.
The Windows.Storage APIs.

What you can and can't access on the SD card


What you can access
Your app can only read and write files of file types that the app has registered to handle in the app manifest file.
Your app can also create and manage folders.
What you can't access
Your app can't see or access system folders and the files that they contain.
Your app can't see files that are marked with the Hidden attribute. The Hidden attribute is typically used to
reduce the risk of deleting data accidentally.
Your app can't see or access the Documents library by using KnownFolders.DocumentsLibrary. However
you can access the Documents library on the SD card by traversing the file system.

Security and privacy considerations


When an app saves files in a global location on the SD card, those files are not encrypted so they are typically
accessible to other apps.
While the SD card is in the device, your files are accessible to other apps that have registered to handle the
same file type.
When the SD card is removed from the device and opened from a PC, your files are visible in File Explorer and
accessible to other apps.
When an app installed on the SD card saves files in its LocalFolder, however, those files are encrypted and are not
accessible to other apps.

Requirements for accessing files on the SD card


To access files on the SD card, typically you have to specify the following things.
1. You have to specify the removableStorage capability in the app manifest file.
2. You also have to register to handle the file extensions associated with the type of media that you want to
access.
Use the preceding approach also to access media files on the SD card without referencing a known folder like
KnownFolders.MusicLibrary, or to access media files that are stored outside of the media library folders.
To access media files stored in the media libraries (Music, Photos, or Videos) by using known folders, you only
have to specify the associated capability in the app manifest file (musicLibrary, picturesLibrary, or
videosLibrary). You do not have to specify the removableStorage capability. For more info, see Files and folders
in the Music, Pictures, and Videos libraries.

Accessing files on the SD card


Getting a reference to the SD card
The KnownFolders.RemovableDevices folder is the logical root StorageFolder for the set of removable devices
currently connected to the device. If an SD card is present, the first (and only) StorageFolder underneath the
KnownFolders.RemovableDevices folder represents the SD card.
Use code like the following to determine whether an SD card is present and to get a reference to it as a
StorageFolder.

using System.Linq;
using Windows.Storage;

// Get the logical root folder for all external storage devices.
StorageFolder externalDevices = Windows.Storage.KnownFolders.RemovableDevices;

// Get the first child folder, which represents the SD card.
StorageFolder sdCard = (await externalDevices.GetFoldersAsync()).FirstOrDefault();

if (sdCard != null)
{
    // An SD card is present and the sdCard variable now contains a reference to it.
}
else
{
    // No SD card is present.
}

NOTE
If your SD card reader is an embedded reader (e.g., a slot in the laptop or PC itself), it may not be accessible through
KnownFolders.RemovableDevices.

Querying the contents of the SD card


The SD card can contain many folders and files that aren't recognized as known folders and can't be queried by
using a location from KnownFolders. To find files, your app has to enumerate the contents of the card by
traversing the file system recursively. Use GetFilesAsync (CommonFileQuery.DefaultQuery) and
GetFoldersAsync (CommonFolderQuery.DefaultQuery) to get the contents of the SD card efficiently.
We recommend that you use a background thread to traverse the SD card. An SD card may contain many
gigabytes of data.
Your app can also require the user to choose specific folders by using the folder picker.
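Here's a minimal recursive traversal sketch following those recommendations (the processing step is left as a
comment; it assumes the Windows.Storage, Windows.Storage.Search, and System.Threading.Tasks namespaces):

using System.Threading.Tasks;
using Windows.Storage;
using Windows.Storage.Search;

// Recursively enumerate every file under the given folder.
// Run this work from a background thread, as recommended above.
async Task TraverseAsync(StorageFolder folder)
{
    foreach (StorageFile file in await folder.GetFilesAsync(CommonFileQuery.DefaultQuery))
    {
        // Process the file (only file types your app can access are returned).
    }
    foreach (StorageFolder subfolder in await folder.GetFoldersAsync(CommonFolderQuery.DefaultQuery))
    {
        await TraverseAsync(subfolder);
    }
}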
When you access the file system on the SD card with a path that you derived from
KnownFolders.RemovableDevices, the following methods behave in the following way.
The GetFilesAsync method returns the union of the file extensions that you have registered to handle and the
file extensions associated with any media library capabilities that you have specified.
The GetFileFromPathAsync method fails if you have not registered to handle the file extension of the file you
are trying to access.
Identifying the individual SD card
When the SD card is first mounted, the operating system generates a unique identifier for the card. It stores this ID
in a file in the WPSystem folder at the root of the card. An app can use this ID to determine whether it recognizes
the card. If an app recognizes the card, the app may be able to postpone certain operations that were completed
previously. However the contents of the card may have changed since the card was last accessed by the app.
For example, consider an app that indexes ebooks. If the app has previously scanned the whole SD card for ebook
files and created an index of the ebooks, it can display the list immediately if the card is reinserted and the app
recognizes the card. Separately it can start a low-priority background thread to search for new ebooks. It can also
handle a failure to find an ebook that existed previously when the user tries to access the deleted ebook.
The name of the property that contains this ID is WindowsPhone.ExternalStorageId.

using System.Collections.Generic;
using System.Linq;
using Windows.Storage;

// Get the logical root folder for all external storage devices.
StorageFolder externalDevices = Windows.Storage.KnownFolders.RemovableDevices;

// Get the first child folder, which represents the SD card.
StorageFolder sdCard = (await externalDevices.GetFoldersAsync()).FirstOrDefault();

if (sdCard != null)
{
    var allProperties = sdCard.Properties;
    IEnumerable<string> propertiesToRetrieve = new List<string> { "WindowsPhone.ExternalStorageId" };

    var storageIdProperties = await allProperties.RetrievePropertiesAsync(propertiesToRetrieve);

    string cardId = (string)storageIdProperties["WindowsPhone.ExternalStorageId"];

    if (...) // If cardId matches the cached ID of a recognized card.
    {
        // Card is recognized. Index contents opportunistically.
    }
    else
    {
        // Card is not recognized. Index contents immediately.
    }
}
File access permissions

[ Updated for UWP apps on Windows 10. For Windows 8.x articles, see the archive ]
Apps can access certain file system locations by default. Apps can also access additional locations through the file
picker, or by declaring capabilities.

The locations that all apps can access


When you create a new app, you can access the following file system locations by default:
Application install directory. The folder where your app is installed on the user's system.
There are two primary ways to access files and folders in your app's install directory:
1. You can retrieve a StorageFolder that represents your app's install directory, like this:

Windows.Storage.StorageFolder installedLocation = Windows.ApplicationModel.Package.Current.InstalledLocation;

You can then access files and folders in the directory using StorageFolder methods. In the
example, this StorageFolder is stored in the installedLocation variable. You can learn more about
working with your app package and install directory by downloading the App package information
sample for Windows 8.1 and re-using its source code in your Windows 10 app.
2. You can retrieve a file directly from your app's install directory by using an app URI, like this:

using Windows.Storage;
StorageFile file = await StorageFile.GetFileFromApplicationUriAsync(new Uri("ms-appx:///file.txt"));

When GetFileFromApplicationUriAsync completes, it returns a StorageFile that represents the
file.txt file in the app's install directory (file in the example).
The "ms-appx:///" prefix in the URI refers to the app's install directory. You can learn more about
using app URIs in How to use URIs to reference content.
In addition, and unlike other locations, you can also access files in your app install directory by using some
Win32 and COM for Universal Windows Platform (UWP) apps and some C/C++ Standard Library
functions from Microsoft Visual Studio.
The app's install directory is a read-only location. You can't gain access to the install directory through the
file picker.
Application data locations. The folders where your app can store data. These folders (local, roaming
and temporary) are created when your app is installed.
There are two primary ways to access files and folders from your app's data locations:
1. Use ApplicationData properties to retrieve an app data folder.
For example, you can use ApplicationData.LocalFolder to retrieve a StorageFolder that
represents your app's local folder like this:

using Windows.Storage;
StorageFolder localFolder = ApplicationData.Current.LocalFolder;

If you want to access your app's roaming or temporary folder, use the RoamingFolder or
TemporaryFolder property instead.
After you retrieve a StorageFolder that represents an app data location, you can access files and
folders in that location by using StorageFolder methods. In the example, this StorageFolder is
stored in the localFolder variable. You can learn more about using app data locations in
Managing application data, and by downloading the Application data sample for Windows 8.1 and
re-using its source code in your Windows 10 app.
2. You can retrieve a file directly from your app's local folder by using an app URI, like
this:

using Windows.Storage;
StorageFile file = await StorageFile.GetFileFromApplicationUriAsync(new Uri("ms-appdata:///local/file.txt"));

When GetFileFromApplicationUriAsync completes, it returns a StorageFile that represents the
file.txt file in the app's local folder (file in the example).
The "ms-appdata:///local/" prefix in the URI refers to the app's local folder. To access files in the
app's roaming or temporary folders, use "ms-appdata:///roaming/" or "ms-appdata:///temporary/"
instead. You can learn more about using app URIs in How to load file resources.
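For instance, a minimal sketch (the file name is hypothetical) that reads from the roaming folder instead:

// Retrieve a file from the app's roaming folder via its URI.
StorageFile roamingFile = await StorageFile.GetFileFromApplicationUriAsync(
    new Uri("ms-appdata:///roaming/file.txt"));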
In addition, and unlike other locations, you can also access files in your app data locations by using some
Win32 and COM for UWP apps and some C/C++ Standard Library functions from Visual Studio.
You can't access the local, roaming, or temporary folders through the file picker.
Removable devices. Additionally, your app can access some of the files on connected devices by default.
This is an option if your app uses the AutoPlay extension to launch automatically when users connect a
device, like a camera or USB thumb drive, to their system. The files your app can access are limited to
specific file types that are specified via File Type Association declarations in your app manifest.
Of course, you can also gain access to files and folders on a removable device by calling the file picker
(using FileOpenPicker and FolderPicker) and letting the user pick files and folders for your app to
access. Learn how to use the file picker in Open files and folders with a picker.
Note For more info about accessing an SD card from a mobile app, see Access the SD card.

Locations Windows Store apps can access


User's Downloads folder. The folder where downloaded files are saved by default.
By default, your app can only access files and folders in the user's Downloads folder that your app created.
However, you can gain access to files and folders in the user's Downloads folder by calling a file picker
(FileOpenPicker or FolderPicker) so that users can navigate and pick files or folders for your app to
access.
You can create a file in the user's Downloads folder like this:

using Windows.Storage;
StorageFile newFile = await DownloadsFolder.CreateFileAsync("file.txt");

DownloadsFolder.CreateFileAsync is overloaded so that you can specify what the system should
do if there is already an existing file in the Downloads folder that has the same name. When these
methods complete, they return a StorageFile that represents the file that was created. This file is
called newFile in the example.
You can create a subfolder in the user's Downloads folder like this:

using Windows.Storage;
StorageFolder newFolder = await DownloadsFolder.CreateFolderAsync("New Folder");

DownloadsFolder.CreateFolderAsync is overloaded so that you can specify what the system
should do if there is already an existing subfolder in the Downloads folder that has the same name.
When these methods complete, they return a StorageFolder that represents the subfolder that
was created. This folder is called newFolder in the example.
If you create a file or folder in the Downloads folder, we recommend that you add that item to your app's
FutureAccessList so that your app can readily access that item in the future.
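A minimal sketch (the token name is hypothetical) of doing that for the file created above:

// Remember the new file so the app can reopen it without another picker prompt.
Windows.Storage.AccessCache.StorageApplicationPermissions
    .FutureAccessList.AddOrReplace("downloadedFile", newFile);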

Accessing additional locations


In addition to the default locations, an app can access additional files and folders by declaring capabilities in the
app manifest (see App capability declarations), or by calling a file picker to let the user pick files and folders for
the app to access (see Open files and folders with a picker).
The following table lists additional locations that you can access by declaring a capability (or capabilities) and
using the associated Windows.Storage API:

Documents
Capability: DocumentsLibrary
Windows.Storage API: KnownFolders.DocumentsLibrary
Note: You must add File Type Associations to your app manifest that declare specific file types that your app
can access in this location.
Use this capability if your app:
- Facilitates cross-platform offline access to specific OneDrive content using valid OneDrive URLs or Resource IDs
- Saves open files to the user's OneDrive automatically while offline

Music
Capability: MusicLibrary
Windows.Storage API: KnownFolders.MusicLibrary
Also see Files and folders in the Music, Pictures, and Videos libraries.

Pictures
Capability: PicturesLibrary
Windows.Storage API: KnownFolders.PicturesLibrary
Also see Files and folders in the Music, Pictures, and Videos libraries.

Videos
Capability: VideosLibrary
Windows.Storage API: KnownFolders.VideosLibrary
Also see Files and folders in the Music, Pictures, and Videos libraries.

Removable devices
Capability: RemovableDevices
Windows.Storage API: KnownFolders.RemovableDevices
Note: You must add File Type Associations to your app manifest that declare specific file types that your app
can access in this location.
Also see Access the SD card.

Homegroup libraries
Capability: At least one of MusicLibrary, PicturesLibrary, or VideosLibrary
Windows.Storage API: KnownFolders.HomeGroup

Media server devices (DLNA)
Capability: At least one of MusicLibrary, PicturesLibrary, or VideosLibrary
Windows.Storage API: KnownFolders.MediaServerDevices

Universal Naming Convention (UNC) folders
Capability: A combination of the following capabilities is needed. The home and work networks capability
(PrivateNetworkClientServer), plus at least one internet and public networks capability (InternetClient or
InternetClientServer), plus, if applicable, the domain credentials capability (EnterpriseAuthentication).
Windows.Storage API: Retrieve a folder using StorageFolder.GetFolderFromPathAsync; retrieve a file using
StorageFile.GetFileFromPathAsync.
Note: You must add File Type Associations to your app manifest that declare specific file types that your app
can access in this location.
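For example, a minimal sketch (the server, share, and file names are hypothetical) of opening a UNC location
once those capabilities and file type associations are declared:

StorageFolder share = await StorageFolder.GetFolderFromPathAsync(@"\\server\share");
StorageFile report = await StorageFile.GetFileFromPathAsync(@"\\server\share\report.txt");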
Games and DirectX

[ Updated for UWP apps on Windows 10. For Windows 8.x articles, see the archive ]
Universal Windows Platform (UWP) offers new opportunities to create, distribute, and monetize games. Learn
about starting a new game or porting an existing game.

TOPIC: DESCRIPTION

Windows 10 game development guide: An end-to-end guide with resources and information for developing UWP
games.

Planning: This topic contains a list of articles for the game planning stage.

UWP programming: Learn how to use Windows Runtime APIs to develop UWP games.

DirectX programming: Learn how to use DirectX in UWP games.

Game porting guides: Describes how to port existing games to Direct3D 11, UWP, and Windows 10.

Game development videos: A collection of game dev videos from major conferences and events.

Note
This article is for Windows 10 developers writing Universal Windows Platform (UWP) apps. If you're
developing for Windows 8.x or Windows Phone 8.x, see the archived documentation.

To make the best use of the game development overviews and tutorials, you should be familiar with the following
subjects:
Microsoft C++ with Component Extensions (C++/CX). This is an update to Microsoft C++ that incorporates
automatic reference counting, and is the language for developing UWP games with DirectX 11.1 or later
versions.
Basic graphics programming terminology.
Basic Windows programming concepts.
Basic familiarity with the Direct3D 9 or 11 APIs.
Windows 10 game development guide

Welcome to the Windows 10 game development guide!


This guide provides an end-to-end collection of the resources and information you'll need to develop a Universal
Windows Platform (UWP) game.

Introduction to game development for the Universal Windows Platform


(UWP)
When you create a Windows 10 game, you have the opportunity to reach millions of players worldwide across
phone, PC, and Xbox One. With Xbox on Windows, Xbox Live, cross-device multiplayer, an amazing gaming
community, and powerful new features like the Universal Windows Platform (UWP) and DirectX 12, Windows 10
games thrill players of all ages and genres. The new Universal Windows Platform (UWP) delivers compatibility for
your game across Windows 10 devices with a common API for phone, PC, and Xbox One, along with tools and
options to tailor your game to each device experience.
This guide provides an end-to-end collection of information and resources that will help you as you develop your
game. The sections are organized according to the stages of game development, so you'll know where to look for
information when you need it.
To get started, the Game development resources section provides a high-level survey of documentation, programs,
and other resources that are helpful when creating a game.
This guide will be updated as additional Windows 10 game development resources and material become available.

Game development resources


From documentation to developer programs, forums, blogs, and samples, there are many resources available to
help you on your game development journey. Here's a roundup of resources to know about as you begin
developing your Windows 10 game.

Note Xbox One development and select Windows 10 gaming features (Xbox Live Services, for example) are
managed through various programs. This guide covers a broad range of resources, so you may find that some
resources are inaccessible depending on the program you are in or your specific development role. Examples
are links that resolve to developer.xboxlive.com, forums.xboxlive.com, xdi.xboxlive.com, or the Game Developer
Network (GDN). For information about partnering with Microsoft, see Developer Programs.

Game development documentation


Throughout this guide, you'll find deep links to relevant documentation, organized by task, technology, and stage
of game development. To give you a broad view of what's available, here are the main documentation portals for
Windows 10 game development.

Windows Dev Center main portal Windows Dev Center

Developing Windows apps Develop Windows apps


Universal Windows Platform app development How-to guides for Windows 10 apps

How-to guides for UWP games Games and DirectX

DirectX reference and overviews DirectX Graphics and Gaming

Azure for gaming Build and scale your games using Azure

UWP on Xbox One Building UWP apps on Xbox One

Xbox Live documentation Xbox Live SDK

Xbox One developer documentation (GDN) Xbox One XDK documentation

Xbox One developer whitepapers (GDN) White Papers

Developer programs
Microsoft offers several developer programs to help you develop and publish Windows games. To publish a game
in the Windows Store, you'll need to create a developer account on Windows Dev Center. Other programs may be
of interest depending on your game and studio needs, and can create opportunities such as Xbox One
development and Xbox Live integration.
Windows Dev Center
Registering a developer account on the Windows Dev Center is the first step towards publishing your Windows
game. A developer account lets you reserve your game's name and submit free or paid games to the Windows
Store for all Windows devices. Use your developer account to manage your game and in-game products, get
detailed analytics, and enable services that create great experiences for your players around the world.

Register a developer account Ready to sign up?

ID@Xbox
The ID@Xbox program helps qualified game developers self-publish on Windows and Xbox One. If you want to
develop for Xbox One, or add Xbox Live features like Gamerscore, achievements, and leaderboards to your
Windows 10 game, sign up with ID@Xbox. Become an ID@Xbox developer to get the tools and support you need
to unleash your creativity and maximize your success. Before applying to ID@Xbox, please register a developer
account on Windows Dev Center.

ID@Xbox developer program Independent Developer Program for Xbox One

ID@Xbox consumer site ID@Xbox

Xbox Live Creators Program


The Xbox Live Creators Program is currently in Preview. This program allows anyone to integrate Xbox Live into
their title and publish to Xbox One and Windows 10. To start developing with the Xbox Live Creators Program, sign
up for the Preview today. Sign-ups for the Preview program are currently limited, but more spaces will be made
available periodically.
If you want access to even more Xbox Live capabilities, be featured in the main Xbox One store, or receive
dedicated marketing and development support, you can apply to the ID@Xbox program.

Xbox Live Creators Program Preview Integrate Xbox Live into your title

Xbox tools and middleware


The Xbox Tools and Middleware Program licenses Xbox development kits to professional developers of game tools
and middleware. Developers accepted into the program can share and distribute their Xbox XDK technologies to
other licensed Xbox developers.

Contact the tools and middleware program xboxtlsm@microsoft.com

Game samples
There are many Windows 10 game and app samples available to help you understand Windows 10 gaming
features and get a quick start on game development. More samples are developed and published regularly, so
don't forget to occasionally check back at sample portals to see what's new. You can also watch GitHub repos to be
notified of changes and additions.

Universal Windows Platform app samples Windows-universal-samples

Xbox Advanced Technology Group public samples Xbox-ATG-Samples

Direct3D 12 graphics samples DirectX-Graphics-Samples

Direct3D 11 graphics samples directx-sdk-samples

Direct3D 11 first-person game sample Create a simple UWP game with DirectX

Direct2D custom image effects sample D2DCustomEffects

Direct2D gradient mesh sample D2DGradientMesh

Direct2D photo adjustment sample D2DPhotoAdjustment

Xbox One game samples (GDN) Samples

Windows 8 game samples (MSDN Code Gallery) Windows Store game samples

JavaScript and HTML5 game sample JavaScript and HTML5 touch game sample

Developer forums
Developer forums are a great place to ask and answer game development questions and connect with the game
development community. Forums can also be fantastic resources for finding existing answers to difficult issues that
developers have faced and solved in the past.

Windows apps developer forums Windows store and apps forums

UWP apps developer forum Developing Universal Windows Platform apps

Desktop applications developer forums Windows desktop applications forums

DirectX Windows Store games (archived forum posts) Building Windows Store games with DirectX (archived)

Windows 10 managed partner developer forums XBOX Developer Forums: Windows 10

DirectX forums DirectX 12 forum


Developer blogs
Developer blogs are another great resource for the latest information about game development. You'll find posts
about new features, implementation details, best practices, architecture background, and more.

Building apps for Windows blog Building Apps for Windows

Windows 10 (blog posts) Posts in Windows 10

Visual Studio engineering team blog The Visual Studio Blog

Visual Studio developer tools blogs Developer Tools Blogs

Somasegar's developer tools blog Somasegar's blog

DirectX developer blog DirectX Developer blog

DirectX 12 introduction (blog post) DirectX 12

Visual C++ tools team blog Visual C++ team blog

ID@Xbox developer blog ID@XBOX Developer Blog

Concept and planning


In the concept and planning stage, you're deciding what your game is going to be like and the technologies and
tools you'll use to bring it to life.
Overview of game development technologies
When you start developing a game for the UWP, you have multiple options available for graphics, input, audio,
networking, utilities, and libraries.
If you've already decided on all the technologies you'll be using in your game, great! If not, the Game technologies
for UWP apps guide is an excellent overview of many of the technologies available, and is highly recommended
reading to help you understand the options and how they fit together.

Survey of UWP game technologies Game technologies for UWP apps

These three GDC 2015 videos give a good overview of Windows 10 game development and the Windows 10
gaming experience.

Overview of Windows 10 game development (video) Developing Games for Windows 10

Windows 10 gaming experience (video) Gaming Consumer Experience on Windows 10

Gaming across the Microsoft ecosystem (video) The Future of Gaming Across the Microsoft Ecosystem

Game planning
These are some high-level concept and planning topics to consider when planning your game.

Make your game accessible Accessibility for games

Build games using cloud Cloud for games


Monetize your game Monetization for games

Choosing your graphics technology and programming language


There are several programming languages and graphics technologies available for use in Windows 10 games. The
path you take depends on the type of game you're developing, the experience and preferences of your
development studio, and specific feature requirements of your game. Will you use C#, C++, or JavaScript? DirectX,
XAML, or HTML5?
DirectX
Microsoft DirectX is the choice to make for the highest-performance 2D and 3D graphics and multimedia.
Direct3D 12, new in Windows 10, brings the power of a console-like API and is faster and more efficient than ever
before. Your game can fully utilize modern graphics hardware and feature more objects, richer scenes, and
enhanced effects. Direct3D 12 delivers optimized graphics on Windows 10 PCs and Xbox One. If you want to use
the familiar graphics pipeline of Direct3D 11, you'll still benefit from the new rendering and optimization features
added to Direct3D 11.3. And, if you're a tried-and-true desktop Windows API developer with roots in Win32, you'll
still have that option in Windows 10.
The extensive features and deep platform integration of DirectX provide the power and performance needed by the
most demanding games.

How-to guides for DirectX games Games and DirectX

DirectX overviews and reference DirectX Graphics and Gaming

Direct3D 12 programming guide and reference Direct3D 12 Graphics

Graphics and DirectX 12 development videos (YouTube channel) Microsoft DirectX 12 and Graphics Education

XAML
XAML is an easy-to-use declarative UI language with convenient features like animations, storyboards, data
binding, scalable vector-based graphics, dynamic resizing, and scene graphs. XAML works great for game UI,
menus, sprites, and 2D graphics. To make UI layout easy, XAML is compatible with design and development tools
like Expression Blend and Microsoft Visual Studio. XAML is commonly used with C#, but C++ is also a good choice
if that's your preferred language or if your game has high CPU demands.

XAML platform overview XAML platform

XAML UI and controls Controls, layouts, and text

HTML5
HyperText Markup Language (HTML) is a common UI markup language used for web pages, apps, and rich clients.
Windows games can use HTML5 as a full-featured presentation layer with the familiar features of HTML, access to
the Universal Windows Platform, and support for modern web features like AppCache, Web Workers, canvas, drag-
and-drop, asynchronous programming, and SVG. Behind the scenes, HTML rendering takes advantage of the
power of DirectX hardware acceleration, so you can still get the performance benefits of DirectX without writing
any extra code. HTML5 is a good choice if you are proficient with web development, porting a web game, or want
to use language and graphics layers that can be easier to approach than the other choices. HTML5 is used with
JavaScript, but can also call into components created with C# or C++/CX.

HTML5 and Document Object Model information HTML and DOM reference
The HTML5 W3C Recommendation HTML5

Combining presentation technologies


The Microsoft DirectX Graphics Infrastructure (DXGI) provides interop and compatibility across multiple graphics
technologies. For high-performance graphics, you can combine XAML and DirectX, using XAML for menus and
other simple UI, and DirectX for rendering complex 2D and 3D scenes. DXGI also provides compatibility between
Direct2D, Direct3D, DirectWrite, DirectCompute, and the Microsoft Media Foundation.

DirectX Graphics Infrastructure programming guide and reference DXGI

Combining DirectX and XAML DirectX and XAML interop

C++
C++/CX is a high-performance, low-overhead language that provides the powerful combination of speed,
compatibility, and platform access. C++/CX makes it easy to use all of the great gaming features in Windows 10,
including DirectX and Xbox Live. You can also reuse existing C++ code and libraries. C++/CX creates fast, native
code that doesn't incur the overhead of garbage collection, so your game can have great performance and low
power consumption, which leads to longer battery life. Use C++/CX with DirectX or XAML, or create a game that
uses a combination of both.

C++/CX reference and overviews Visual C++ Language Reference (C++/CX)

Visual C++ programming guide and reference Visual C++ in Visual Studio 2015

C#
C# (pronounced "C sharp") is a modern, innovative language that is simple, powerful, type-safe, and object-
oriented. C# enables rapid development while retaining the familiarity and expressiveness of C-style languages.
Though easy to use, C# has numerous advanced language features like polymorphism, delegates, lambdas,
closures, iterator methods, covariance, and Language-Integrated Query (LINQ) expressions. C# is an excellent
choice if you are targeting XAML, want to get a quick start developing your game, or have previous C# experience.
C# is used primarily with XAML, so if you want to use DirectX, choose C++ instead, or write part of your game as a
C++ component that interacts with DirectX. Or, consider Win2D, an immediate-mode Direct2D graphics library for
C# and C++.

C# programming guide and reference C# language reference

JavaScript
JavaScript is a dynamic scripting language widely used for modern web and rich client applications.
Windows JavaScript apps can access the powerful features of the Universal Windows Platform in an easy, intuitive
way, as methods and properties of object-oriented JavaScript classes. JavaScript is a good choice for your game if
you're coming from a web development environment, are already familiar with JavaScript, or want to use HTML5,
CSS, WinJS, or JavaScript libraries. If you're targeting DirectX or XAML, choose C# or C++/CX instead.

JavaScript and Windows Runtime reference JavaScript reference

Use Windows Runtime Components to combine languages


With the Universal Windows Platform, it's easy to combine components written in different languages. Create
Windows Runtime Components in C++, C#, or Visual Basic, and then call into them from JavaScript, C#, C++, or
Visual Basic. This is a great way to program portions of your game in the language of your choice. Components
also let you consume external libraries that are only available in a particular language, as well as use legacy code
you've already written.
How to create Windows Runtime Components Creating Windows Runtime Components
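As a minimal sketch, a C# Windows Runtime Component class looks like ordinary C# with a few WinRT constraints (a public sealed class with WinRT-compatible member types); the ScoreKeeper name is hypothetical:

namespace GameComponents
{
    // Compiled in a Windows Runtime Component project, this class can be
    // consumed from JavaScript, C++, or Visual Basic as well as C#.
    public sealed class ScoreKeeper
    {
        private int score;

        public void AddPoints(int points)
        {
            score += points;
        }

        public int CurrentScore
        {
            get { return score; }
        }
    }
}

From JavaScript, for example, the component would then be instantiated as var keeper = new GameComponents.ScoreKeeper();.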

Which version of DirectX should your game use?


If you are choosing DirectX for your game, you'll need to decide which version to use: Microsoft Direct3D 12 or
Microsoft Direct3D 11.
Direct3D 12, new in Windows 10, brings the power of a console-like API and is faster and more efficient than ever
before. Your game can fully utilize modern graphics hardware and feature more objects, richer scenes, and
enhanced effects. Direct3D 12 delivers optimized graphics on Windows 10 PCs and Xbox One. Since Direct3D 12
works at a very low level, it is able to give an expert graphics development team or an experienced DirectX 11
development team all the control they need to maximize graphics optimization.
Direct3D 11.3 is a low-level graphics API that uses the familiar Direct3D programming model and handles more of
the complexity involved in GPU rendering for you. It is also supported in Windows 10 and Xbox One. If you have
an existing engine written in Direct3D 11, and you're not quite ready to make the jump to Direct3D 12, you can use
Direct3D 11 on 12 to achieve some performance improvements. Versions 11.3 and later include the new rendering
and optimization features that are also enabled in Direct3D 12.

Choosing Direct3D 12 or Direct3D 11 What is Direct3D 12?

Overview of Direct3D 11 Direct3D 11 Graphics

Overview of Direct3D 11 on 12 Direct3D 11 on 12

Bridges, game engines, and middleware


Depending on the needs of your game, using bridges, game engines, or middleware can save development and
testing time and resources. Here are some overviews and resources for bridges, game engines, and middleware to
help you decide if any are right for you.

Bridges and game engines for Windows 10 (blog post) More ways to bring your code to fast-growing Windows 10 Store

Game Development with Middleware (video) Accelerating Windows Store Game Development with Middleware

Visual Studio and Unity, Unreal, and Cocos2d (blog post) Visual Studio for Game Development: New Partnerships with
Unity, Unreal Engine and Cocos2d

Introduction to game middleware (blog post) Game Development Middleware - What is it? Do I need it?

Universal Windows Platform Bridges


Universal Windows Platform Bridges are technologies that bring your existing app or game over to the UWP.
Bridges are a great way to get a quick start on UWP game development.

UWP bridges Bring your code to Windows

Windows Bridge for iOS Bring your iOS apps to Windows

Windows Bridge for desktop applications (.NET and Win32) Convert your desktop application to a UWP app

Unity
Unity 5 is the next generation of the award-winning development platform for creating 2D and 3D games and
interactive experiences. Unity 5 brings new artistic power, enhanced graphics capabilities, and improved efficiency.
Beginning with Unity 5.4, Unity supports Direct3D 12 development.

The Unity game engine Unity - Game Engine

Get Unity 5 Get Unity

Universal Windows Platform app support in Unity 5.2 (blog post) Windows 10 Universal Platform apps in Unity 5.2

Unity documentation for Windows Unity Manual / Windows

Publish your Unity game to Windows Store Porting guide

Publish your Unity game as a Universal Windows Platform app (video) How to publish your Unity game as a UWP app

Use Unity to make Windows games and apps (video) Making Windows games and apps with Unity

Unity game development using Visual Studio (video series) Using Unity with Visual Studio 2015

Havok
Havok's modular suite of tools and technologies helps game creators reach new levels of interactivity and
immersion. Havok enables highly realistic physics, interactive simulations, and stunning cinematics. Version 2015.1
and higher officially supports UWP in Visual Studio 2015 on x86, 64-bit, and ARM.

Havok website Havok

Havok tool suite Havok Product Overview

Havok support forums Havok

MonoGame
MonoGame is an open source, cross-platform game development framework originally based on Microsoft's XNA
Framework 4.0. MonoGame currently supports Windows, Windows Phone, and Xbox, as well as Linux, macOS, iOS,
Android, and several other platforms.

MonoGame MonoGame website

MonoGame Documentation MonoGame Documentation (latest)

MonoGame downloads Download releases, development builds, and source code from
the MonoGame website, or get the latest release via NuGet.

Cocos2d
Cocos2d-X is a cross-platform open source game development engine and tools suite that supports building UWP
games. Beginning with version 3, 3D features are being added as well.

Cocos2d-x What is Cocos2d-X?

Cocos2d-x programmer's guide Cocos2d-x Programmer's Guide v3.8

Cocos2d-x on Windows 10 (blog post) Running Cocos2d-x on Windows 10


Cocos2d-x Windows Store games (video) Build a Game with Cocos2d-x for Windows Devices

Unreal Engine
Unreal Engine 4 is a complete suite of game development tools for all types of games and developers. For the most
demanding console and PC games, Unreal Engine is used by game developers worldwide.

Unreal Engine overview Unreal Engine 4

BabylonJS
BabylonJS is a complete JavaScript framework for building 3D games with HTML5, WebGL, and Web Audio.

BabylonJS BabylonJS

WebGL 3D with HTML5 and BabylonJS (video series) Learning WebGL 3D and BabylonJS

Building a cross-platform WebGL game with BabylonJS Use BabylonJS to develop a cross-platform game

Middleware and partners


There are many other middleware and engine partners that can provide solutions depending on your game
development needs.

Windows Dev Center partners Dev Center Partners

Porting your game


If you have an existing game, there are many resources and guides available to help you quickly bring your game
to the UWP. To jumpstart your porting efforts, you might also consider using a Universal Windows Platform Bridge.

Porting a Windows 8 app to a Universal Windows Platform Move from Windows Runtime 8.x to UWP
app

Porting a Windows 8 app to a Universal Windows Platform Porting 8.1 Apps to Windows 10
app (video)

Porting an iOS app to a Universal Windows Platform app Move from iOS to UWP

Porting a Silverlight app to a Universal Windows Platform app Move from Windows Phone Silverlight to UWP

Porting from XAML or Silverlight to a Universal Windows Porting an App from XAML or Silverlight to Windows 10
Platform app (video)

Porting an Xbox game to a Universal Windows Platform app Porting from Xbox One to Windows 10 UWP

Porting from DirectX 9 to DirectX 11 Port from DirectX 9 to Universal Windows Platform (UWP)

Porting from Direct3D 11 to Direct3D 12 Porting from Direct3D 11 to Direct3D 12

Porting from OpenGL ES to Direct3D 11 Port from OpenGL ES 2.0 to Direct3D 11

OpenGL ES to Direct3D 11 using ANGLE ANGLE


Classic Windows API equivalents in the UWP Alternatives to Windows APIs in Universal Windows Platform
(UWP) apps

Prototype and design


Now that you've decided the type of game you want to create and the tools and graphics technology you'll use to
build it, you're ready to get started with the design and prototype. At its core, your game is a Universal Windows
Platform app, so that's where you'll begin.
Introduction to the Universal Windows Platform (UWP)
Windows 10 introduces the Universal Windows Platform (UWP), which provides a common API platform across
Windows 10 devices. UWP evolves and expands the Windows Runtime model and hones it into a cohesive, unified
core. Games that target the UWP can call WinRT APIs that are common to all devices. Because the UWP provides a
guaranteed core API layer, you can choose to create a single app package that will install across Windows 10
devices. And if you want to, your game can still call APIs (including some classic Windows APIs from Win32 and
.NET) that are specific to the devices your game runs on.
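For example, a minimal sketch (C#) of guarding a device-specific API behind a runtime check; the HardwareButtons type shown is a phone-only API and assumes a reference to the phone extension SDK:

using Windows.Foundation.Metadata;

public static class DeviceSpecific
{
    // Hook a device-specific API only when the device family provides it;
    // on devices without the type, the guarded block is simply skipped.
    public static void HookBackButton()
    {
        if (ApiInformation.IsTypePresent("Windows.Phone.UI.Input.HardwareButtons"))
        {
            Windows.Phone.UI.Input.HardwareButtons.BackPressed += (s, e) =>
            {
                // Handle the hardware Back button here.
            };
        }
    }
}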
The goal of the UWP is to have:
One core operating system
One application platform
One gaming social network
One store
One ingestion path
The following are excellent guides that discuss Universal Windows Platform apps in detail, and are
recommended reading to help you understand the platform.

Introduction to Universal Windows Platform apps What's a Universal Windows Platform app?

Overview of the UWP Guide to UWP apps

Getting started with UWP development


Getting set up and ready to develop a Universal Windows Platform app is quick and easy. The following guides
take you through the process step-by-step.

Getting started with UWP development Get started with Windows apps

Getting set up for UWP development Get set up

If you're an "absolute beginner" to UWP programming, and are considering using XAML in your game (see
Choosing your graphics technology and programming language), the Windows 10 development for absolute
beginners video series is a good place to start.

Beginner's guide to Windows 10 development with XAML (video series) Windows 10 development for absolute beginners

Announcing the Windows 10 absolute beginners series using XAML (blog post) Windows 10 development for absolute beginners

UWP development concepts


Overview of Universal Windows Platform app development Develop Windows apps

Overview of network programming in the UWP Networking and web services

Using Windows.Web.HTTP and Windows.Networking.Sockets in games Networking for games

Asynchronous programming concepts in the UWP Asynchronous programming

Windows Desktop APIs to UWP


These are some links to help you move your Windows desktop game to UWP.

UWP APIs for Win32 and COM APIs Win32 and COM APIs for UWP apps

Unsupported CRT functions in UWP CRT functions not supported in Universal Windows Platform
apps

Alternatives for Windows APIs Alternatives to Windows APIs in Universal Windows Platform
(UWP) apps

Process lifetime management


Process lifetime management, or app lifecycle, describes the various activation states that a Universal Windows
Platform app can transition through. Your game can be activated, suspended, resumed, or terminated, and can
transition through those states in a variety of ways.

Handling app lifecycle transitions App lifecycle

Using Microsoft Visual Studio to trigger app transitions How to trigger suspend, resume, and background events for
Windows Store apps in Visual Studio
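As a minimal sketch (C#), hooking these transitions in App.xaml.cs might look like the following; the save logic is left as a placeholder:

using Windows.ApplicationModel;

public sealed partial class App : Windows.UI.Xaml.Application
{
    public App()
    {
        this.InitializeComponent();
        this.Suspending += OnSuspending;
        this.Resuming += OnResuming;
    }

    private void OnSuspending(object sender, SuspendingEventArgs e)
    {
        // The deferral keeps the app alive while state is persisted;
        // suspension can proceed to termination, so save quickly.
        var deferral = e.SuspendingOperation.GetDeferral();
        // ... persist game state here ...
        deferral.Complete();
    }

    private void OnResuming(object sender, object e)
    {
        // Refresh anything that may have gone stale while suspended.
    }
}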

Designing game UX
The genesis of a great game is inspired design.
Games share some common user interface elements and design principles with apps, but games often have a
unique look, feel, and design goal for their user experience. Games succeed when thoughtful design is applied to
both aspects: when should your game use tested UX, and when should it diverge and innovate? The presentation
technology that you choose for your game (DirectX, XAML, HTML5, or some combination of the three) will
influence implementation details, but the design principles you apply are largely independent of that choice.
Separately from UX design, gameplay design such as level design, pacing, world design, and other aspects is an art
form of its own, one that's up to you and your team, and not covered in this development guide.

UWP design basics and guidelines Designing UWP apps

Designing for app lifecycle states UX guidelines for launch, suspend, and resume

Targeting multiple device form factors (video) Designing Games for a Windows Core World

Color guideline and palette


Following a consistent color guideline in your game improves aesthetics, aids navigation, and is a powerful tool to
inform the player of menu and HUD functionality. Consistent coloring of game elements like warnings, damage, XP,
and achievements can lead to cleaner UI and reduce the need for explicit labels.
Color guide Best Practices: Color

Typography
The appropriate use of typography enhances many aspects of your game, including UI layout, navigation,
readability, atmosphere, brand, and player immersion.

Typography guide Best Practices: Typography

UI map
A UI map is a layout of game navigation and menus expressed as a flowchart. The UI map helps all involved
stakeholders understand the game's interface and navigation paths, and can expose potential roadblocks and dead
ends early in the development cycle.

UI map guide Best Practices: UI Map

DirectX development
Guides and references for DirectX game development.

DirectX game development on the UWP Games and DirectX

DirectX interaction with the UWP app model The app object and DirectX

Graphics and DirectX 12 development videos (YouTube channel) Microsoft DirectX 12 and Graphics Education

DirectX overviews and reference DirectX Graphics and Gaming

Direct3D 12 programming guide and reference Direct3D 12 Graphics

DirectX 12 fundamentals (video) Better Power, Better Performance: Your Game on DirectX 12

Learning Direct3D 12
Learn what changed in Direct3D 12 and how to start programming using Direct3D 12.

Set up programming environment Direct3D 12 programming environment setup

How to create a basic component Creating a basic Direct3D 12 component

Changes in Direct3D 12 Important changes migrating from Direct3D 11 to Direct3D 12

How to port from Direct3D 11 to Direct3D 12 Porting from Direct3D 11 to Direct3D 12

Resource binding concepts (covering descriptor, descriptor table, descriptor heap, and root signature) Resource binding in Direct3D 12

Managing memory Memory management in Direct3D 12

DirectX Tool Kit and libraries


The DirectX Tool Kit, DirectX texture processing library, DirectXMesh geometry processing library, UVAtlas library,
and DirectXMath library provide texture, mesh, sprite, and other utility functionality and helper classes for DirectX
development. These libraries can help you save development time and effort.
Get DirectX Tool Kit for DirectX 11 DirectXTK

Get DirectX Tool Kit for DirectX 12 DirectXTK 12

Get DirectX texture processing library DirectXTex

Get DirectXMesh geometry processing library DirectXMesh

Get UVAtlas for creating and packing isochart texture atlas UVAtlas

Get the DirectXMath library DirectXMath

Direct3D 12 support in the DirectXTK (blog post) Support for DirectX 12

DirectX resources from partners


Here is some additional DirectX documentation created by external partners.

Nvidia: DX12 Do's and Don'ts (blog post) DirectX 12 on Nvidia GPUs

Intel: Efficient rendering with DirectX 12 DirectX 12 rendering on Intel Graphics

Intel: Multi adapter support in DirectX 12 How to implement an explicit multi-adapter application using
DirectX 12

Intel: DirectX 12 tutorial Collaborative white paper by Intel, Suzhou Snail and Microsoft

Production
Your studio is now fully engaged and moving into the production cycle, with work distributed throughout your
team. You're polishing, refactoring, and extending the prototype to craft it into a full game.
Notifications and live tiles
A tile is your game's representation on the Start Menu. Tiles and notifications can drive player interest even when
they aren't currently playing your game.
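As a minimal sketch (C#), updating the app tile with an adaptive tile payload might look like this; the XML shown is illustrative, so check the adaptive tile schema for the full format:

using Windows.Data.Xml.Dom;
using Windows.UI.Notifications;

public static class TileUpdates
{
    // Replaces the app tile's content with a simple adaptive tile payload.
    public static void ShowHighScore(int score)
    {
        string xml =
            "<tile><visual><binding template='TileMedium'>" +
            "<text>High score: " + score + "</text>" +
            "</binding></visual></tile>";

        var doc = new XmlDocument();
        doc.LoadXml(xml);
        TileUpdateManager.CreateTileUpdaterForApplication()
            .Update(new TileNotification(doc));
    }
}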

Developing tiles and badges Tiles, badges, and notifications

Sample illustrating live tiles and notifications Notifications sample

Adaptive tile templates (blog post) Adaptive Tile Templates - Schema and Documentation

Designing tiles and badges Guidelines for tiles and badges

Windows 10 app for interactively developing live tile templates Notifications Visualizer

UWP Tile Generator extension for Visual Studio Tool for creating all required tiles using single image

UWP Tile Generator extension for Visual Studio (blog post) Tips on using the UWP Tile Generator tool

Enable in-app product (IAP) purchases


An IAP (in-app product) is a supplementary item that players can purchase in-game. IAPs can be new add-ons,
game levels, items, or anything else that your players might enjoy. Used appropriately, IAPs can provide revenue
while improving the game experience. You define and publish your game's IAPs through the Windows Dev Center
dashboard, and enable in-app purchases in your game's code.
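A minimal sketch (C#) of a durable in-app product purchase follows; "ExpansionPack1" is a hypothetical product ID defined in the Dev Center dashboard, and during development you would substitute CurrentAppSimulator for CurrentApp:

using System.Threading.Tasks;
using Windows.ApplicationModel.Store;

public static class Purchases
{
    // Requests the purchase of a durable IAP if the player doesn't own it yet.
    public static async Task BuyExpansionAsync()
    {
        var licenses = CurrentApp.LicenseInformation.ProductLicenses;
        if (!licenses["ExpansionPack1"].IsActive)
        {
            PurchaseResults results =
                await CurrentApp.RequestProductPurchaseAsync("ExpansionPack1");
            if (results.Status == ProductPurchaseStatus.Succeeded)
            {
                // Unlock the purchased content here.
            }
        }
    }
}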

Durable in-app products Enable in-app product purchases

Consumable in-app products Enable consumable in-app product purchases

In-app product details and submission IAP submissions

Monitor IAP sales and demographics for your game IAP acquisitions report

Debugging and performance monitoring tools


The Windows Performance Toolkit (WPT) consists of performance monitoring tools that produce in-depth
performance profiles of Windows operating systems and applications. This is especially useful for monitoring
memory usage and improving game performance. The Windows Performance Toolkit is included in the Windows
10 SDK and Windows ADK. This toolkit consists of two independent tools: Windows Performance Recorder (WPR)
and Windows Performance Analyzer (WPA). Another useful tool for generating dump files to investigate game
crashes is ProcDump, which is part of Windows Sysinternals.

Get Windows Performance Toolkit (WPT) from the Windows 10 SDK Windows 10 SDK

Get Windows Performance Toolkit (WPT) from Windows ADK Windows ADK

Troubleshoot unresponsive UI using Windows Performance Analyzer (video) Critical path analysis with WPA

Diagnose memory usage and leaks using Windows Performance Recorder (video) Memory footprint and leaks

Get ProcDump ProcDump

Learn to use ProcDump (video) Configure ProcDump to create dump files

Advanced DirectX techniques and concepts


Some portions of DirectX development can be nuanced and complex. When you get to the point in production
where you need to dig down into the details of your DirectX engine, or debug difficult performance problems, the
resources and information in this section can help.

Optimizing graphics and performance (video) Advanced DirectX 12 Graphics and Performance

DirectX graphics debugging (video) Solve the tough graphics problems with your game using
DirectX Tools

Visual Studio 2015 tools for debugging DirectX 12 (video) DirectX tools for Windows 10 in Visual Studio 2015

Direct3D 12 programming guide Direct3D 12 Programming Guide

Combining DirectX and XAML DirectX and XAML interop

Globalization and localization


Develop world-ready games for the Windows platform and learn about the international features built into
Microsoft's top products.

Preparing your game for the global market Guidelines when developing for a global audience

Bridging languages, cultures, and technology Online resource for language conventions and standard
Microsoft terminology

Submitting and publishing your game


The following guides and information help make the publishing and submission process as smooth as possible.
Packaging and uploading
You'll use the new unified Windows Dev Center dashboard to publish and manage your game packages.

Windows Dev Center app publishing Publish Windows apps

Windows Dev Center advanced publishing (GDN) Windows Dev Center Dashboard advanced publishing guide

Rating your game (blog post) Single workflow to assign age ratings using IARC system

Packaging your game Package your UWP DirectX game

Packaging your game as a 3rd party developer (blog post) Create uploadable packages without publisher's store account
access

Creating app packages and app package bundles using MakeAppx Create packages using app packager tool MakeAppx.exe

Signing your files digitally using SignTool Sign files and verify signatures in files using SignTool

Uploading and versioning your game Upload app packages

Policies and certification


Don't let certification issues delay your game's release. Here are policies and common certification issues to be
aware of.

Windows Store App Developer Agreement App Developer Agreement

Policies for publishing apps in the Windows Store Windows Store Policies

How to avoid some common app certification issues Avoid common certification failures

Store manifest (StoreManifest.xml)


The store manifest (StoreManifest.xml) is an optional configuration file that can be included in your app package.
The store manifest provides additional features that are not part of the AppxManifest.xml file. For example, you can
use the store manifest to block installation of your game if a target device doesn't have the specified minimum
DirectX feature level, or the specified minimum system memory.

Store manifest schema StoreManifest schema (Windows 10)


Game lifecycle management
After you've finished development and shipped your game, it's not "game over". You may be done with
development on version one, but your game's journey in the marketplace has only just begun. You'll want to
monitor usage and error reporting, respond to user feedback, and publish updates to your game.
Windows Dev Center analytics and promotion

Dev Center App Dev Center Windows 10 app to view performance of your
published apps

Windows Dev Center analytics Analytics

Responding to customer reviews Respond to customer reviews

Ways to promote your game Promote your apps

Visual Studio Application Insights


Visual Studio Application Insights provides performance, telemetry, and usage analytics for your published game.
Application Insights helps you detect and solve issues after your game is released, continuously monitor and
improve usage, and understand how players are continuing to interact with your game. Application Insights works
by adding an SDK into your app, which sends telemetry to the Azure portal.
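As a minimal sketch (C#), assuming the Application Insights SDK NuGet package has been added and initialized with an instrumentation key, sending a custom event is a one-liner:

using Microsoft.ApplicationInsights;

public static class Telemetry
{
    private static readonly TelemetryClient client = new TelemetryClient();

    // Records a named custom event that appears in the Azure portal.
    public static void LevelCompleted(string levelName)
    {
        client.TrackEvent("LevelCompleted_" + levelName);
    }
}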

Application performance and usage analytics Visual Studio Application Insights

Enable Application Insights in Windows apps Application Insights for Windows Phone and Store apps

Creating and managing content updates


To update your published game, submit a new app package with a higher version number. After the package
makes its way through submission and certification, it will automatically be available to customers as an update.

Updating and versioning your game Package version numbering

Game package management guidance Guidance for app package management

Adding Xbox Live to your game


Note Xbox Live development is managed through various programs. This guide covers a broad range of
resources, and you may find that some resources are inaccessible depending on your program participation or
specific development role. Examples are links that resolve to developer.xboxlive.com, forums.xboxlive.com,
xdi.xboxlive.com, or the Game Developer Network (GDN). For information about partnering with Microsoft, see
Developer Programs.

Download the latest Xbox Live SDK Xbox Live SDK

Adding Xbox Live to your Universal Windows Platform app How to - Add Xbox Live SDK to Universal Windows Platform
(UWP) Apps

Requirements for games that use Xbox Live Xbox Requirements for Xbox Live on Windows 10
Overview of Xbox Live game development (video) Developing with Xbox Live for Windows 10

Cross-platform matchmaking (video) Xbox Live Multiplayer: Introducing services for cross-platform
matchmaking and gameplay

Cross-device gameplay in Fable Legends (video) Fable Legends: Cross-device Gameplay with Xbox Live

Xbox Live stats and achievements (video) Best Practices for Leveraging Cloud-Based User Stats and
Achievements in Xbox Live

Additional resources
Indie game development (video) New Opportunities for Independent Developers

Considerations for multi-core mobile devices (video) Sustained Gaming Performance in multi-core mobile devices

Developing Windows 10 desktop games (video) PC Games for Windows 10


Planning for UWP games

This section provides information about planning for your UWP game.

TOPIC DESCRIPTION

Game technologies A list of technologies to help you develop games.

Accessibility for games Learn to make games more accessible.

Cloud for games Use cloud for games.

Monetization for games Ways to monetize your game.

Package your game Prepare your game package for the Windows Store.

Concept approval Get your game concept approved.


Game technologies for Universal Windows Platform
(UWP) apps

[ Updated for UWP apps on Windows 10. For Windows 8.x articles, see the archive ]
In this guide, you'll learn about the technologies available for developing Universal Windows Platform (UWP)
games.

Benefits of Windows 10 for game development


With the introduction of UWP in Windows 10, your Windows 10 titles will be able to span all of the Microsoft
platforms. With free migration from previous versions of Windows, there is a steadily increasing number of
Windows 10 clients. The combination of these two things means that your Windows 10 titles will be able to reach a
huge number of customers through the Windows Store.
In addition, Windows 10 offers many new features that are particularly beneficial to games:
- Reduced memory paging and reduced overall memory system size
- Improved graphics memory management that actively allocates and protects more memory for the foreground game

UWP Games with C++ and DirectX


Real-time games requiring high performance should make use of the DirectX APIs. DirectX is a collection of native
APIs for creating games and multimedia applications that require high performance, such as 3D games. Because
the DirectX APIs are native, C++ is the only language supported for use with DirectX.

Development Environment
To create games for UWP, you'll need to set up your development environment by installing a copy of Visual
Studio 2015. Visual Studio 2015 allows you to create UWP apps and provides tools for game development:
Visual Studio tools for DX game programming - Visual Studio provides tools for creating, editing, previewing,
and exporting image, model, and shader resources. There are also tools that you can use to convert resources at
build time and debug DirectX graphics code. For more information, see Use Visual Studio tools for game
programming.
Visual Studio graphics diagnostics features - Graphics diagnostic tools are now available from within Windows
as an optional feature. The diagnostic tools allow you to do graphics debugging, graphics frame analysis, and
monitor GPU usage in real time. For more information, see Use the DirectX runtime and Visual Studio graphics
diagnostic features.
For more information, see Prepare your Universal Windows Platform and DirectX game programming
environment.

Getting Started with DirectX Game Project Templates


After setting up your development environment, you can use one of the DirectX-related project templates to create
your UWP DirectX game. Visual Studio 2015 has three templates available for creating new UWP DirectX projects:
DirectX 11 App (Universal Windows), DirectX 12 App (Universal Windows), and DirectX 11 and XAML App
(Universal Windows). For more information, see Create a Universal Windows Platform and DirectX game project
from a template.

Windows 10 APIs
Windows 10 provides an extensive collection of APIs that are useful for game development. There are APIs for
almost all aspects of games, including 3D graphics, 2D graphics, audio, input, text resources, user interface, and
networking.
There are many APIs related to game development, but not all games need to use all of the APIs. For example,
some games will only use 3D graphics and only make use of Direct3D, some games may only use 2D graphics and
only make use of Direct2D, and still other games may make use of both. The following diagram shows the game
development related APIs grouped by functionality type.

3D Graphics - Windows 10 supports two 3D graphics API sets: Direct3D 11 and Direct3D 12. Both of these
APIs provide the capability to create 3D and 2D graphics. Direct3D 11 and Direct3D 12 are not used
together, but either can be used with any of the APIs in the 2D Graphics and UI group. For more information
about using the graphics APIs in your game, see Basic 3D graphics for DirectX games.

Direct3D 12
Direct3D 12 introduces the next version of Direct3D, the 3D graphics API at the heart of DirectX. This version of Direct3D is designed to be faster and more efficient than previous versions of Direct3D. The tradeoff for Direct3D 12's increased speed is that it is lower level and requires you to manage your graphics resources yourself and have more extensive graphics programming experience to realize the increased speed.
When to use: Use Direct3D 12 when you need to maximize your game's performance and your game is CPU bound.
For more information: See the Direct3D 12 documentation.

Direct3D 11
Direct3D 11 is the previous version of Direct3D and allows you to create 3D graphics using a higher level of hardware abstraction than D3D 12.
When to use: Use Direct3D 11 if you have existing Direct3D 11 code, your game is not CPU bound, or you want the benefit of having resources managed for you.
For more information: See the Direct3D 11 documentation.

2D Graphics and UI - APIs concerning 2D graphics such as text and user interfaces. All of the 2D graphics
and UI APIs are optional.

Direct2D
Direct2D is a hardware-accelerated, immediate-mode, 2-D graphics API that provides high performance and high-quality rendering for 2-D geometry, bitmaps, and text. The Direct2D API is built on Direct3D and is designed to interoperate well with GDI, GDI+, and Direct3D.
When to use: Direct2D can be used instead of Direct3D to provide graphics for pure 2D games such as a side-scroller or board game, or can be used with Direct3D to simplify creation of 2D graphics in a 3D game, such as a user interface or heads-up-display.
For more information: See the Direct2D documentation.

DirectWrite
DirectWrite provides extra capabilities for working with text and can be used with Direct3D or Direct2D to provide text output for user interfaces or other areas where text is required. DirectWrite supports measuring, drawing, and hit-testing of multi-format text. DirectWrite handles text in all supported languages for global and localized applications. DirectWrite also provides a low-level glyph rendering API for developers who want to perform their own layout and Unicode-to-glyph processing.
When to use:
For more information: See the DirectWrite documentation.

DirectComposition
DirectComposition is a Windows component that enables high-performance bitmap composition with transforms, effects, and animations. Application developers can use the DirectComposition API to create visually engaging user interfaces that feature rich and fluid animated transitions from one visual to another.
When to use: DirectComposition is designed to simplify the process of composing visuals and creating animated transitions. If your game requires complex user interfaces, you can use DirectComposition to simplify the creation and management of the UI.
For more information: See the DirectComposition documentation.

Audio - APIs concerning playing audio and applying audio effects. For information about using the audio
APIs in your game, see Audio for games.

XAudio2
XAudio2 is a low-level audio API that provides a foundation for signal processing and mixing. XAudio2 is designed to be very responsive for game audio engines while maintaining the ability to create custom audio effects and complex chains of audio effects and filters.
When to use: Use XAudio2 when your game needs to play sounds with minimal overhead and delay.
For more information: See the XAudio2 documentation.

Media Foundation
Microsoft Media Foundation is designed for the playback of media files and streams, both audio and video, but can also be used in games when higher-level functionality than XAudio2 is required and some additional overhead is acceptable.
When to use: Media Foundation is particularly useful for cinematic scenes or non-interactive components of your game. Media Foundation is also useful for decoding audio files for playback using XAudio2.
For more information: See the Microsoft Media Foundation overview.

Input - APIs concerning input from the keyboard, mouse, gamepad, and other user input sources.

XInput
The XInput Game Controller API enables applications to receive input from game controllers.
When to use: If your game needs to support gamepad input and you have existing XInput code, you can continue to make use of XInput. XInput has been replaced by Windows.Gaming.Input for UWP, and if you're writing new input code, you should use Windows.Gaming.Input instead of XInput.
For more information: See the XInput documentation.

Windows.Gaming.Input
The Windows.Gaming.Input API replaces XInput and provides the same functionality with the following advantages over XInput:
- Lower resource usage
- Lower API call latency for retrieving input
- The ability to work with more than 4 gamepads at once
- The ability to access additional Xbox One gamepad features, such as the trigger vibration motors
- The ability to be notified when controllers connect/disconnect via an event instead of polling
- The ability to attribute input to a specific user (Windows.System.User)
When to use: If your game needs to support gamepad input and is not using existing XInput code, or you need one of the benefits listed above, you should make use of Windows.Gaming.Input.
For more information: See the Windows.Gaming.Input documentation.

Windows.UI.Core.CoreWindow
The Windows.UI.Core.CoreWindow class provides events for tracking pointer presses and movement, and key down and key up events.
When to use: Use Windows.UI.Core.CoreWindow events when you need to track the mouse or key presses in your game.
For more information: See Move-look controls for games for more information about using the mouse or keyboard in your game.
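A minimal sketch (C#) of polling the first connected gamepad with Windows.Gaming.Input, as you might do once per frame:

using System.Linq;
using Windows.Gaming.Input;

public static class GamepadInput
{
    // Reads the current state of the first connected gamepad, if any.
    public static void Poll()
    {
        Gamepad gamepad = Gamepad.Gamepads.FirstOrDefault();
        if (gamepad == null)
        {
            return; // No controller connected.
        }

        GamepadReading reading = gamepad.GetCurrentReading();
        double moveX = reading.LeftThumbstickX; // Range -1.0 to 1.0.
        bool firing = (reading.Buttons & GamepadButtons.A) != 0;
    }
}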

Math - APIs that simplify commonly used mathematical operations.

DirectXMath
The DirectXMath API provides SIMD-friendly C++ types and functions for linear algebra and graphics math operations that are common in games.
When to use: Use of DirectXMath is optional and simplifies common mathematical operations.
For more information: See the DirectXMath documentation.

Networking - APIs concerning communicating with other computers and devices over either the Internet or private networks.

Windows.Networking.Sockets
The Windows.Networking.Sockets namespace provides TCP and UDP sockets that allow reliable or unreliable network communication.
When to use: Use Windows.Networking.Sockets if your game needs to communicate with other computers or devices over the network.
For more information: See Work with networking in your game.

Windows.Web.Http
The Windows.Web.Http namespace provides a reliable connection to HTTP servers that can be used to access a web site.
When to use: Use Windows.Web.Http when your game needs to access a web site to retrieve or store information.
For more information: See Work with networking in your game.
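A minimal sketch (C#) of an HTTP request with Windows.Web.Http; the URI is hypothetical, and an internet capability must be declared in the manifest:

using System;
using System.Threading.Tasks;
using Windows.Web.Http;

public static class Leaderboard
{
    // Fetches a text payload from a hypothetical leaderboard endpoint.
    public static async Task<string> DownloadAsync()
    {
        using (var client = new HttpClient())
        {
            return await client.GetStringAsync(
                new Uri("https://example.com/leaderboard"));
        }
    }
}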

Support Utilities - Libraries that build on the Windows 10 APIs.

DirectX Tool Kit
The DirectX Tool Kit (DirectXTK) is a collection of helper classes for writing DirectX 11.x code in C++.
When to use: Use the DirectX Tool Kit if you're a C++ developer looking for a modern replacement for the legacy D3DX utility code, or you're an XNA Game Studio developer transitioning to native C++.
For more information: See the DirectX Tool Kit project page, https://github.com/Microsoft/DirectXTK.

Win2D
Win2D is an easy-to-use Windows Runtime API for immediate-mode 2D graphics rendering.
When to use: Use Win2D if you're a C++ developer and want an easier-to-use WinRT wrapper for Direct2D and DirectWrite, or you're a C# developer wanting to use Direct2D and DirectWrite.
For more information: See the Win2D project page, https://github.com/Microsoft/Win2D.
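A minimal sketch (C#) of drawing with Win2D, assuming the Win2D NuGet package and a CanvasControl in the page's XAML whose Draw event is wired to this handler:

using Windows.UI;
using Microsoft.Graphics.Canvas.UI.Xaml;

public sealed partial class GamePage : Windows.UI.Xaml.Controls.Page
{
    // Handles the CanvasControl.Draw event and renders one frame.
    private void Canvas_Draw(CanvasControl sender, CanvasDrawEventArgs args)
    {
        args.DrawingSession.Clear(Colors.CornflowerBlue);
        args.DrawingSession.DrawText("Hello, Win2D!", 100, 100, Colors.White);
    }
}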

Xbox Live Services


The Xbox Live feature set (cross play with Xbox, Achievements, Gamerscore, and more) is coming to Windows 10.
Soon, you'll be able to work with ID@Xbox to include Live in your UWP games! In the future, we'll also help you
ship your universal app platform games on Xbox One. For more information, see the ID@Xbox page.

Alternatives to writing games with DirectX and UWP


UWP Games without DirectX
Simpler games with minimal performance requirements, such as card games or board games, can be written
without DirectX and don't necessarily need to be written in C++. These sorts of games can make use of any of the
languages supported by UWP, such as C#, Visual Basic, C++, and HTML/JavaScript. If performance and intensive
graphics are not a requirement for your game, check out the JavaScript and HTML5 touch game sample as an example.
Game Engines
As an alternative to writing your own game engine using the Windows game development APIs, many high quality
game engines that build on the Windows game development APIs are available for developing games on Windows
platforms. When considering a game engine or library, you have multiple options:
Full game engine - A full game engine encapsulates most or all of the Windows 10 APIs you would use when
writing a game engine from scratch, such as graphics, audio, input, and networking. Full game engines may also
provide game logic functionality such as artificial intelligence and pathfinding.
Graphics engine - Graphics engines encapsulate the Windows 10 graphics APIs, manage graphics resources,
and support a variety of model and world formats.
Audio engine - Audio engines encapsulate the Windows 10 audio APIs, manage audio resources, and provide
advanced audio processing and effects.
Network engine - Network engines encapsulate Windows 10 networking APIs for adding peer-to-peer or
server-based multiplayer support to your game, and may include advanced networking functionality to support
large numbers of players.
Artificial intelligence and pathfinding engine - AI and pathfinding engines provide a framework for controlling
the behavior of agents in your game.
Special purpose engines - A variety of additional engines exist for handling almost any game development
related task you might run into, such as creating inventory systems and dialog trees.

Submitting a game to the Store


Once you're ready to publish your game, you'll need to create a developer account and submit your game to the
Windows Store.
For information about submitting your game to the Windows Store, see https://dev.windows.com/publish.
Making games accessible

Accessibility can empower every person and every organization on the planet to achieve more, and this applies to
making games more accessible too. This article is written for game developers; specifically game designers,
producers, and managers. It provides an overview of game accessibility guidelines derived from various
organizations (listed in the reference section below), and introduces the inclusive game design principle for creating
more accessible games.

Why make games accessible?


Increased gamer base
At its most basic level, the business justification for accessibility is straightforward:
Number of users who can play your game x Awesomeness of game = Game sales
If you made an amazing game that is so complicated or convoluted that only a handful of people can play it, you
limit your sales. Similarly, if you made a game that is unplayable by those with physical, sensory, or cognitive
impairments, you are missing out on potential sales. Considering that, for example, 19% of people in the United
States have some form of disability, this can potentially have a large impact on your title's revenue.
For more business justifications, see Making Video Games Accessible.
Better games
Creating a more accessible game can create a better game in the end.
An example is subtitles in games. In the past, games rarely supported subtitles or closed captioning for game
dialogues. Today, it's expected that games include subtitles and closed captioning. This change was not driven by
gamers with disabilities. Instead, it was driven by gamers who simply preferred to play with subtitles because it
made the gaming experience better. Gamers turn subtitles and closed captioning on when they are playing with too
much background noise, are having difficulty hearing voices with various sound effects or ambient sounds playing
at the same time, or when they simply need to keep the volume low to avoid disturbing others. Subtitles and closed
captions not only helped gamers to have a better gaming experience, but it also allows people with hearing
disabilities to game as well.
Controller remapping is another feature that is slowly becoming a standard for the game industry for similar
reasons. Gamers enjoy customizing their gaming experiences. What most people don't realize is that the ability to
remap buttons on an input device is actually an accessibility feature that was intended to make a game playable for
people with various types of motor disabilities.
Ultimately, the thought process used to make your game more accessible will often result in a better game because
you have designed a more user-friendly, customizable experience for your players to enjoy.
A social space
Gaming is a form of entertainment and can provide hours of joy. For some, gaming is not only a form of
entertainment but it is an escape from a hospital bed, chronic pain, or debilitating social anxiety. Gamers are
transported into a world where they become the main characters in the video game. Through gaming, they can
create and participate in a social space for themselves that provides distraction from the day-to-day struggles
brought on by their disabilities, and that provides an opportunity to communicate with people they might
otherwise be unable to interact with.
Is the game you are making today accessible?
If you are thinking about making your game accessible for the first time, here are some questions to ask yourself:
Can you complete the game using a single hand?
Can an average person pick the game up and play?
Can you effectively play the game on a small monitor or TV sitting at a distance?
Do you support more than one type of input device that can be used to play through the entire game?
Can you play the game with sound muted?
Can you play the game with your monitor set to black and white?
If your answers are mostly no, or you do not know the answers, it is time to step up and put accessibility into your
game.

Defining disability
Disability is defined as "a mismatch between the needs of the individual and the service, product or environment
offered." (Inclusive video, Microsoft.com.) This means that anyone can experience a disability, and that it can be a
short-term or situational condition. Envision what challenges gamers with these conditions might have when
playing your game, and think about how your game can be better designed for them. Here are some disabilities to
consider:
Vision
Medical, long-term conditions like glaucoma, cataracts, color blindness, near-sightedness, and diabetic
retinopathy
Short-term, situational conditions like a small monitor or screen size, a low-resolution screen, or screen glare
due to bright light sources on a monitor
Hearing
Medical, long-term conditions like complete deafness or partial hearing loss due to diseases or genetics
Short-term, situational conditions like excessive background noise, or restricted volume to avoid disturbing
others
Motor
Medical, long-term conditions like Parkinson's disease, amyotrophic lateral sclerosis (ALS), arthritis, and
muscular dystrophy
Short-term, situational conditions like an injured hand, holding a beverage, or carrying a child in one arm
Cognitive
Medical, long-term conditions like dyslexia, epilepsy, attention deficit hyperactivity disorder (ADHD), dementia,
and amnesia
Short-term, situational conditions like lack of sleep, or temporary distractions like a siren from an emergency
vehicle driving by the house
Speech
Medical, long-term conditions like vocal cord damage, dysarthria, and apraxia
Short-term, situational conditions like dental work, or eating and drinking

How to make games more accessible?


Design shift: Inclusive game design approach
Inclusive design focuses on making products and services more accessible to a broader spectrum of consumers,
including people with disabilities.
To be successful, today's game designers need to think beyond creating fun games for a small, targeted audience.
Game designers need to be aware of how their design decisions impact the overall accessibility of the game, that
is, the playability of the game for the entire potential audience, including those with disabilities.
As such, traditional game design paradigms must shift to embrace the inclusive game design concept. Inclusive
game design means going beyond the basic game design of creating fun for the target audience, to creating
additional or modified personas to include a wider spectrum of players.
This extra step helps to identify gaps in the original design. By identifying gaps, you can iterate on the original
design concept and make it better. When you take the time to be more inclusive in your game design process, your
final game becomes more accessible.
Empower gamers: Give gamers options
Accessibility is all about options. Give your gamers the options to customize their gaming experience. If you already
have a huge fan base, you may have a significant portion of your audience who do not want the experience to
change in any way. That's okay. Give your gamers the ability to turn these features on and off, and make each
feature individually configurable.
Innovate: Be creative
There are many creative ways to improve the accessibility of your game. Put on your creative hat and learn from
other accessible games out there. If you already have an existing game, learn to identify current game features that
could be improved while keeping the core game mechanics and experience as designed. As mentioned above,
accessibility in games is all about providing gamers with options to customize their gaming experience.
Evangelize: Make accessibility a priority in your game studio
Game development always runs on a tight timeline, so making accessibility a priority early helps ease the
process. One way is to design with accessibility in mind from the start. Share your knowledge about accessibility
with your team, and share the business justifications.
Review: Constantly evaluate your game
During development, you can introduce a review process to make sure that at every step of the way you are
thinking about accessibility. Make a checklist like the one below to help your team constantly evaluate whether
what you are creating is accessible or not.

CHECKLIST: ACCESSIBILITY FEATURES

In-game cinematics: has subtitles and captions; photosensitivity tested.

Overall artwork (2D and 3D graphics): colorblind-friendly colors and options; not dependent entirely on color for identification, but uses shapes and patterns as well.

Start screen, settings menu, and other menus: ability to read options aloud; ability to remember settings; alternate command control input method; adjustable UI font size.

Gameplay: wide adjustable difficulty levels; subtitles and captions; good visual and audio feedback for the gamer.

HUD display: adjustable screen position; adjustable font size; colorblind-friendly option.

Control input: mappable controls to the input device; custom controller support; simplified input for the game allowed.

Playtest and iterate: Get gamers' feedback


When organizing playtesting sessions, invite play testers with disabilities that your game is designed for and get
them to play your game. Observe how they play and get feedback from them. Figure out what changes need to be
made to make the game better.
Shout it out: Let the world know your game is accessible
Consumers will want to know whether your game can be played by gamers with disabilities. State the game's
accessibility clearly on the game website and packaging to ensure that consumers know what to expect when they
buy your game. Remember to make your website and all sales channels for the game accessible as well. Most
importantly, reach out to the accessibility gaming community and tell them about your game.

Game accessibility features


This section outlines some features that can make your game more accessible. These features are derived from
the Game accessibility guidelines, which represent the findings of a collaborative group of studios, specialists,
and academics. For more information, see Game accessibility guidelines.
Colorblind friendly graphics and user interface
The retina of the eye has two types of light-sensitive cells: the cones for seeing where there is light, and the rods for
seeing in low light conditions. There are three types of cones (red, green, and blue) to enable us to view colors
correctly. Colorblindness occurs when one or more of these three types of cones is not functioning as expected.
The degree of colorblindness can range from almost normal color perception with reduced sensitivity towards red,
green, or blue light, to a complete inability to perceive red, green, or blue color. Since it's less common to have
reduced sensitivity to blue light, color choices for colorblind-friendly design are geared towards people who are
red or green colorblind:
Use color combinations that can be differentiated by people with red/green colorblindness:
Colors that appear similar: All shades of red and green including brown and orange
Colors that stand out: Blue and yellow
Do not rely solely on color to distinguish game objects; use shapes and patterns as well
Closed captioning and subtitles
When designing the closed captions and subtitles for your game, the objective is to provide readable captions as an
option so that your game can also be enjoyed without audio. It should be possible to have game components like
game dialogues, game audio, and sound effects displayed as text on screen.
Here are some basic guidelines to consider when designing closed captions and subtitles:
Select a simple, readable font.
Select a sufficiently large font size, or consider offering an adjustable font size option for more flexibility. (Ideal font
size depends on screen size, viewing distance from screen, and so on.)
Create high contrast between background and font color. (For more information, see Information on contrast
ratio.)
Display short sentences on screen. (Remember not to give the game away by displaying the text before the event
occurs.)
Differentiate what is making the sound or who is talking. (Example: "Daniel: Hi!")
Provide the option to turn closed captions and subtitles on and off. (Additional feature: Ability to select how
much sound information is displayed based on importance.)
Sound feedback
Sound provides feedback to the player, in addition to visual feedback. Good game audio design can improve
accessibility for players with visual impairment. Here are some guidelines to consider:
Use 3D audio cues to provide additional spatial information.
Provide separate volume controls for music, speech, and sound effects.
Design speech that provides meaningful information for gamers. (Example: "Enemies are approaching" vs.
"Enemies are entering from the back door.")
Ensure speech is spoken at a reasonable rate, and provide rate control for better accessibility.
Fully mappable controls
There are companies and organizations, such as Special Effect, that design custom game controllers that can be
used with various gaming systems like Windows and Xbox One. This customization allows people with different
forms of disabilities to play games they might not be able to play otherwise. For more information on people who
are now able to play games independently because of customized controllers, see who they helped.
As a game developer, you can make your game more accessible by allowing fully mappable controls so that
gamers have the option to plug in their custom controllers and remap the keys according to their needs.
Both standard Xbox One and Xbox Elite controllers offer customization of the controllers for precision gaming. For
more information, see Xbox One and Xbox Elite.
Wider selection of difficulty levels
Video games provide entertainment. The challenge for game developers is to tune the difficulty level such that the
gamer experiences the right amount of challenge. First, not all gamers have the same skill level and capability, so
designing a wider selection of difficulty options increases the chance of providing gamers with the right amount of
challenge. At the same time, this wider selection also makes your video game more accessible because it could
potentially allow more people with disabilities to play your game. Remember, gamers want to overcome challenges
in a game and be rewarded for it. They do not want a game that they cannot win.
Tweaking the difficulty level of your game is a delicate process. If it is too easy, gamers might get bored. If it is too
difficult, gamers may give up and not play any further from that point on. The balancing process is both art and
science. There are many ways to design a level with the right amount of challenge. Some games offer
simplified inputs, like a single-button game mode, a rewind and replay option to make gameplay more forgiving,
or fewer and weaker enemies to make it easier to proceed after several failed attempts.
Photosensitivity epilepsy testing
Photosensitive epilepsy (PSE) is a condition where seizures are triggered by visual stimuli like exposures to flashing
lights or certain moving visual forms and patterns. This occurs in about three percent of people and is more
common in children and adolescents.
There are many factors that can cause a photosensitive reaction when playing video games, including the duration
of gameplay, the frequency of the flash, the intensity of the light, the contrast of the background and the light, the
distance between the screen and the gamer, and the wavelength of the light.
As a developer, here are some tips for designing a game that includes gamers who are prone to photosensitive
epilepsy:
Avoid having flashing lights with a frequency of 5 to 30 flashes per second (Hertz) because flashing lights in that
range are most likely to trigger seizures.
Use an automated system to check gameplay for stimuli that could trigger photosensitive epilepsy. (Example:
Harding Flash and Pattern Analyzer (FPA) G2 developed by Cambridge Research System Ltd and Professor
Graham Harding.)
Design for breaks between game levels, encouraging players to take a break from playing non-stop.

Other accessibility resources


Here are some external sites that provide additional information about game accessibility.
Game accessibility guidelines
Game accessibility guidelines
AbleGamers Foundation guidelines
Design Universally Accessible (UA) games
Custom input controllers
Special Effect
Warfighter Engaged

References used
Game accessibility guidelines
AbleGamers Foundation guidelines
Color Blind Awareness, a Community Interest Company
How to do subtitles well, a blog article on Gamasutra by Ian Hamilton
Innovation for All Programme

Related links
Inclusive Design
Microsoft Accessibility Developer Hub
Developing accessible UWP apps
Engineering Software For Accessibility eBook
Using cloud services for UWP games

The Universal Windows Platform (UWP) in Windows 10 offers a set of APIs that can be used for developing games
across Microsoft devices. When developing games across platforms and devices, you can make use of a cloud
backend to help scale your game according to demand.

What is cloud computing?


Cloud computing uses on-demand IT resources and applications over the internet to store and process data for
your devices. The term "cloud" is a metaphor for the availability of vast resources out there (not local resources)
that you can access from non-specific locations. The principle of cloud computing offers a new way in which
resources and software can be consumed. Users no longer need to pay for the complete product or resources
upfront, but instead are able to consume platform, software, and resources as a service. Cloud providers often bill
their customers according to usage or service plan offerings.

Why use cloud services?


One advantage of using cloud services for games is that you do not need to invest in physical hardware servers
upfront, but only need to pay according to usage or service plans at a later stage. It is one way to help manage the
risks involved in developing a new game title.
Another advantage is that your game can tap into vast cloud resources to achieve scalability (effectively managing
sudden spikes in the number of concurrent players, intense real-time game calculations, or data requirements).
This keeps the performance of your game stable around the clock. Furthermore, cloud resources can be accessed
from any device running on any platform anywhere in the world, which means that you are able to bring your
game to everyone globally.
Delivering an amazing gameplay experience to your players is important. Because game servers running in the
cloud are independent of client-side updates, they can give you a more controlled and secure environment for your
game overall. You can also achieve gameplay consistency through the cloud by never trusting the client and
keeping game logic on the server side. Service-to-service connections can also be configured to allow a more integrated gaming
experience; examples include linking in-game purchases to various payment methods, bridging over different
gaming networks, and sharing in-game updates to popular social media portals such as Facebook and Twitter.
You can also use dedicated cloud servers to create a large persistent game world, build up a gamer community,
collect and analyze gamer data over time to improve gameplay, and optimize your game's monetization design
model.
In addition, games that require intensive game data management capabilities like social games with asynchronous
multiplayer mechanics can be implemented using cloud services.

How game companies use the cloud technology


Learn how other developers have implemented cloud solutions in their games.

Developer: 343 Industries
Description: Halo 5: Guardians implemented Halo: Spartan Companies as its social gameplay platform by using Microsoft Azure DocumentDB, which was selected for its speed and flexibility due to its auto-indexing capabilities.
Key game scenarios: Scalable data tier to handle groups creation/management for multiplayer gameplay; game and social media integration; real-time queries of data through multiple attributes; synchronization of gameplay achievements and stats.
Learn more: Social gameplay implemented using Azure DocumentDB

Developer: Illyriad Games
Description: Illyriad Games created Age of Ascent, a massively multiplayer online (MMO) epic 3D space game that can be played on devices that have modern browsers, so the game can be played on PCs, laptops, mobile phones, and other mobile devices without plug-ins. The game uses ASP.NET Core, HTML5, WebGL, and Microsoft Azure.
Key game scenarios: Cross-platform, browser-based game; single large persistent open world; handles intensive real-time gameplay calculations; scales with the number of players.
Learn more: Manage game components as microservices using Azure Service Fabric (video); Interview with Age of Ascent developers (video)

Developer: Next Games
Description: Next Games is the creator of The Walking Dead: No Man's Land video game, which is based on AMC's original series. The Walking Dead game used Azure as the backend. It had 1,000,000 downloads in the opening weekend, and within the first week the game became the #1 iPhone & iPad Free App in the U.S. App Store, #1 Free App in 12 countries, and #1 Free Game in 13 countries.
Key game scenarios: Cross-platform; turn-based multiplayer; elastically scale performance.
Learn more: Interview with Kalle Hiitola, CTO of Next Games (video); Walking Dead uses DocumentDB for faster development cycle and more engaging gameplay

Developer: Pixel Squad
Description: Pixel Squad developed Crime Coast using the Unity game engine and Azure. Crime Coast is a social strategy game available on the Android, iOS, and Windows platforms. Azure Blob storage, Managed Azure Redis Cache, an array of load-balanced IIS VMs, and Microsoft Notification Hub were used in their game. Learn how they managed scaling and handled a player surge of 5,000 simultaneous players.
Key game scenarios: Cross-platform; multiplayer online game; scales with the number of players.
Learn more: How Crime Coast MMO game used Azure Cloud Services

Other links
Azure as the secret sauce for Hitcents, Game Troopers and InnoSpark
Game startups on Bizspark program using Azure

How to design your cloud backend


While producers and game designers are in discussion about what game features and functionalities are needed in
the game, it is good to start considering how you want to design your game infrastructure. Azure can be used as
your game backend when you want to develop games for various devices and across different major platforms.
Step by step learning guides
Build 2016 Codelabs: Use Microsoft Azure App Service and Microsoft SQL Azure backend to save game score
Design your game's mobile engagement strategy
Using Azure Mobile Engagement for Unity iOS deployment
Understanding IaaS, PaaS, and SaaS
First, you need to think about the level of service that is best suited for your game. Knowing the differences in the
following three services can help you determine the approach you want to take in building your backend.
Infrastructure as a Service (IaaS)
Infrastructure as a Service (IaaS) is an instant computing infrastructure, provisioned and managed over the
Internet. Imagine having the possibility of many machines readily available to quickly scale up and down
depending on demand. IaaS helps you to avoid the cost and complexity of buying and managing your own
physical servers and other datacenter infrastructure.
Platform as a Service (PaaS)
Platform as a Service (PaaS) is like IaaS but it also includes management of infrastructure like servers,
storage, and networking. So on top of not buying physical servers and datacenter infrastructure, you also
do not need to buy and manage software licenses, underlying application infrastructure, middleware,
development tools, or other resources.
Software as a Service (SaaS)
Software as a Service is normally an application already built for you and hosted on an existing cloud
platform. It is designed to make it even easier for you to start running your game on the provider's service.
Design your game infrastructure using Azure
Following are some ways that Azure cloud offerings can be used for a game. Azure works with Windows, Linux, and
familiar open source technologies such as Ruby, Python, Java, and PHP. For more information, see Azure for
gaming.

Requirement: Host your domain in the cloud
Activity scenario: Respond to DNS queries efficiently
Product offering: Azure DNS
Product capabilities: Host your domain with high performance and availability

Requirement: Sign-in, identity verification
Activity scenario: Gamer signs in and gamer identity is authenticated
Product offering: Azure Active Directory
Product capabilities: Single sign-on to any cloud and on-premises web app with multi-factor authentication

Requirement: Game using infrastructure as a service model (IaaS)
Activity scenario: Game is hosted on virtual machines in the cloud
Product offering: Azure VMs
Product capabilities: Scale from 1 to thousands of virtual machine instances as game servers with built-in virtual networking and load balancing; hybrid consistency with on-premises systems

Requirement: Web or mobile games using platform as a service model (PaaS)
Activity scenario: Game is hosted on a managed platform
Product offering: Azure App Service
Product capabilities: PaaS for websites or mobile games (which means Azure VMs with middleware/development tools/BI/DB management)

Requirement: Cloud storage for game data
Activity scenario: Latest game data is stored in the cloud and sent to client devices
Product offering: Azure Blob Storage
Product capabilities: No restriction on the kinds of file that can be stored; object storage for large amounts of unstructured data like images, audio, video, and more

Requirement: Temporary data storage tables
Activity scenario: Game transactions (changes in game states) are stored in tables temporarily
Product offering: Azure Table Storage
Product capabilities: Game data can be stored in a flexible schema according to the needs of the game

Requirement: Queue game transactions/requests
Activity scenario: Game transactions are processed in the form of a queue
Product offering: Azure Queue Storage
Product capabilities: Queues absorb unexpected traffic bursts and can prevent servers from being overwhelmed by a sudden flood of requests during the game

Requirement: Scalable relational game database
Activity scenario: Structured storage of relational data like in-game transactions to a database
Product offering: Azure SQL Database
Product capabilities: SQL database as a service (compare with SQL on a VM)

Requirement: Scalable distributed low-latency game database
Activity scenario: Fast read, write, and query of game and player data with schema flexibility
Product offering: Azure DocumentDB
Product capabilities: Low-latency NoSQL document database as a service

Requirement: Use own datacenter with Azure services
Activity scenario: Game is retrieved from your own datacenter and sent to the client devices
Product offering: Azure Stack
Product capabilities: Enables your organization to deliver Azure services from your own datacenter to help you achieve more

Requirement: Large data chunks transfer
Activity scenario: Large files such as game images, audio, and videos can be sent to users from the nearest Content Delivery Network (CDN) pop location with Azure CDN
Product offering: Azure Content Delivery Network
Product capabilities: Built on a modern network topology of large centralized nodes, Azure CDN handles sudden traffic spikes and heavy loads to dramatically increase speed and availability, resulting in significant user experience improvements

Requirement: Low latency
Activity scenario: Perform caching to build fast, scalable games with more control and guaranteed isolation of data; can be used to improve the match-making feature of the game as well
Product offering: Azure Redis Cache
Product capabilities: High throughput, consistent low-latency data access to power fast, scalable Azure applications

Requirement: High scalability, low latency
Activity scenario: Handles fluctuations in the number of game users with low-latency reads and writes
Product offering: Azure Service Fabric
Product capabilities: Able to power the most complex, low-latency, data-intensive scenarios and reliably scale to handle more users at a time; Service Fabric enables you to build games without having to create a separate store or cache, as required for stateless apps

Requirement: Ability to collect millions of events per second from devices
Activity scenario: Log millions of events per second from devices
Product offering: Azure Event Hubs
Product capabilities: Cloud-scale telemetry ingestion from games, websites, apps, and devices

Requirement: Real-time processing of game data
Activity scenario: Perform real-time analysis of gamer data to improve gameplay
Product offering: Azure Stream Analytics
Product capabilities: Real-time stream processing in the cloud

Requirement: Develop predictive gameplay
Activity scenario: Create customized dynamic gameplay based on gamer data
Product offering: Azure Machine Learning
Product capabilities: A fully managed cloud service that enables you to easily build, deploy, and share predictive analytics solutions

Requirement: Collect and analyze game data
Activity scenario: Massive parallel processing of data from both relational and non-relational databases
Product offering: Azure Data Warehouse
Product capabilities: Elastic data warehouse as a service with enterprise-class features

Requirement: Create marketing campaigns to increase usage and retention
Activity scenario: Send push notifications to targeted players to generate interest and encourage specific game actions according to data analysis
Product offering: Mobile Engagement
Product capabilities: Increase gameplay time and user retention on all major platforms: iOS, Android, Windows, Windows Phone

Startup and developer resources


Microsoft BizSpark
Microsoft BizSpark is a global program that helps startups succeed by giving free access to Azure cloud
services, software and support. BizSpark members receive five Visual Studio Enterprise with MSDN
subscriptions, each with a $150 monthly Azure credit. This totals $750/month across all five developers to
spend on Azure services. BizSpark is available to startups that are privately held, less than five years old, and
earn less than $1M in annual revenue. Microsoft believes that by helping startups succeed, we're helping to
build a valued long-term partnership.
ID@Xbox
If you want to add Xbox Live features like multiplayer gameplay, cross-platform matchmaking, Gamerscore,
achievements, and leaderboards to your Windows 10 game, sign up with ID@Xbox to get the tools and
support you need to unleash your creativity and maximize your success. Before applying to ID@Xbox, please
register a developer account on Windows Dev Center.

Software as a Service for game backend


These are some companies that offer cloud backend for games based on major cloud service providers to allow
you to focus on developing your game.
GameSparks
GameSparks is a cloud-based development platform for game developers that enables them to build all of
their game's server-side components.
Photon Engine
Photon is an independent networking engine and multiplayer platform for games. It offers Photon Cloud, a
fully managed software as a service (SaaS) offering. You can concentrate completely on your application
client, while hosting, server operations, and scaling are all taken care of by Exit Games.
PlayFab
PlayFab brings world-class live game management and backend technology to your mobile, PC, or console
game simply and quickly.

Related links
Windows 10 game development guide
Azure for gaming
Microsoft BizSpark
ID@Xbox
Monetization for games

As a game developer, you need to know your monetization options so you can sustain your business and keep
doing what you're passionate about: creating great games. This article provides an overview of the monetization
methods for a Universal Windows Platform (UWP) game and how to implement them.
In the past, you would simply put a price on your game and then wait for people to purchase it at a store. But today
you have options. You can choose to distribute a game to "brick-and-mortar" stores, sell the game online (either
physical or soft copies), or let everyone play the game for free but incorporate some sort of ads or in-game items
that can be purchased. Games are also no longer just standalone products. They often come with extra content that
can be purchased in addition to the main game.
You can promote and monetize a UWP game in one or more of these ways:
Put your game in the Windows Store, which is a secure online store offering worldwide distribution. Gamers
around the world can buy your game online at the price you set.
Use APIs in the Windows SDK to create in-game purchases. Gamers can buy items from within your game, or
buy additional content such as extra equipment, skins, maps, or game levels.
Use APIs in the Microsoft Store Services SDK to display ads from ad networks. You can display ads in your game
and offer the option for gamers to watch video ads in exchange for in-game rewards.
Maximize your game's potential through ad campaigns. Promote your game using paid, community (free), or
house (free) ad campaigns to grow its user base.

Worldwide distribution channel


The Windows Store can make your game available for download in more than 200 countries and regions
worldwide, with support for billing via various forms of payment including Visa, MasterCard, and PayPal. For a full
list of countries and regions, see Markets and custom prices.

Set a price for your game


UWP games published to the Store can either be paid or free. A paid game allows you to charge gamers up
front for your game at a price you set, whereas a free game allows users to download and play the game without
paying for it.
Here are some important concepts regarding the pricing of your game in the Store.
Base price
The base price of the game is what determines whether your game is categorized as paid or free. You can use the
Dev Center dashboard to configure the base price based on country and region. The process of determining the
price may include your tax responsibilities when selling to different countries and cost considerations for specific
markets. You can also set custom prices for specific markets. For more info, see Define pricing and market selection.
Sale price
One way to promote your game is to reduce its price for a limited time. It's also possible to set the sale price to
Free to allow your game to be downloaded without payment. You can schedule sale campaigns in advance by
setting both the starting date and ending date of the sale. For more info, see Put apps and add-ons on sale.

In-game purchases
In-game purchases are products bought within a game. They're also generically known as in-app purchases. In the
Windows Store, these products are called add-ons. Add-ons are published through the Windows Dev Center
dashboard. You'll also need to enable the add-ons in your game's code.
Types of add-ons
You can create two types of add-ons in the Store: durables or consumables. Durables are items that persist for
a specified amount of time and can be purchased only once until they expire. Consumables are items that can be
purchased and used again and again.
When creating consumables, decide how you want to keep track of them, that is, whether they're developer
managed or Store managed (this feature is available starting in Windows 10, version 1607). With a developer-
managed consumable, you are responsible for keeping track of the item's balance for the gamer; with a Store-
managed consumable, the Windows Store keeps track of the item's balance for you. For more info, see Overview of
consumable add-ons.
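For illustration, here is a minimal C++/CX sketch of reporting the fulfillment of a Store-managed consumable by using the Windows.Services.Store APIs. The Store ID ("9NBLGGH4TNMP") and the quantity are placeholder values; substitute the product ID you defined for your own add-on in the Dev Center dashboard.

#include <ppltasks.h>
#include <combaseapi.h>
using namespace concurrency;
using namespace Windows::Services::Store;

StoreContext^ context = StoreContext::GetDefault();

// Generate a tracking id so that a retried request is not double-counted.
GUID rawGuid;
CoCreateGuid(&rawGuid);
Platform::Guid trackingId(rawGuid);

// Report that the gamer spent 10 units; the Store deducts them from the balance.
create_task(context->ReportConsumableFulfillmentAsync(L"9NBLGGH4TNMP", 10, trackingId))
    .then([](StoreConsumableResult^ result)
{
    if (result->Status == StoreConsumableStatus::Succeeded)
    {
        // result->BalanceRemaining holds the units the gamer still owns.
    }
});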
Create in-game purchases
The latest in-app purchases and license info APIs are part of the Windows.Services.Store namespace in the
Windows SDK (starting in Windows 10, version 1607). If you're developing a new game that targets Windows 10,
version 1607 or a later release, we recommend that you use the Windows.Services.Store namespace because it supports the latest add-
on types and has better performance. It's also designed to be compatible with future types of products and features
supported by the Windows Dev Center and the Store. When developing for previous versions of Windows 10, use
the Windows.ApplicationModel.Store namespace instead.
For more info, go to In-app purchases and trials.
Simplified purchase example
This section uses a simplified purchase example to illustrate the use of different method calls to implement the
purchase flow.

In-game action/activity: Gamer enters a shop; the shop menu pops up to display the available add-ons and purchase prices.
Game background task: The game retrieves the product info for the add-ons, determines whether the add-ons have the proper license, and displays the add-ons that are available for purchase on the shop menu.

In-game action/activity: Gamer clicks Buy to purchase an item.
Game background task: The Buy action sends a request to purchase the item and starts the payment process to acquire it. The implementation varies depending on the item type. If it is a durable or a one-time purchase item, the customer can own only a single item until it expires. If the item is a consumable, the customer can own one or more of it.
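To make the Buy action above concrete, here is a minimal C++/CX sketch using StoreContext::RequestPurchaseAsync from the Windows.Services.Store namespace (for games that target Windows 10, version 1607 or later). The Store ID is a placeholder for your own add-on's ID.

#include <ppltasks.h>
using namespace concurrency;
using namespace Windows::Services::Store;

StoreContext^ context = StoreContext::GetDefault();

// Request the purchase of an add-on; the Store displays the payment UI.
create_task(context->RequestPurchaseAsync(L"9NBLGGH4TNMP"))
    .then([](StorePurchaseResult^ result)
{
    switch (result->Status)
    {
    case StorePurchaseStatus::Succeeded:
    case StorePurchaseStatus::AlreadyPurchased:
        // Grant the item to the gamer.
        break;
    case StorePurchaseStatus::NotPurchased:
        // The gamer canceled the purchase; no action is needed.
        break;
    default:
        // A network or server error occurred; consider retrying later.
        break;
    }
});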

Test in-game purchases during game development


Because an add-on must be created in association with a game, your game must be published and available in the
Store. The steps in this section show how to create add-ons while your game is still in development. (If your
finished game is already live in the Store, you can skip the first three steps and go directly to Create an add-on in
the Store.)
To create add-ons while your game is still in development:
1. Create a package
2. Publish the game as hidden
3. Associate your game solution in Visual Studio with the Store
4. Create an add-on in the Store
Create a package
For any game to be published, it must meet the minimum Windows App Certification requirements. You can use
the Windows App Certification Kit, which is part of the Windows 10 SDK, to run tests on the game to help ensure
that it's ready for publishing to the Store. If you have not already downloaded the Windows 10 SDK that includes
the Windows App Certification Kit, go to Windows 10 SDK.
To create a package that can be uploaded to the Store:
1. Open your game solution in Visual Studio.
2. Within Visual Studio, go to Project > Store > Create App Packages ...
3. For the Do you want to build packages to upload to the Windows Store? option, select Yes.
4. Sign in to your Dev Center developer account. Or register for a developer account if you don't have one.
5. Select an app to create the upload package for. If you have not yet created an app submission, provide a new
app name to create a new submission. For more info, see Create your app by reserving a name.
6. After the package has been created successfully, click Launch Windows App Certification Kit to start the
testing process.
7. Fix any errors to create a game package.
Publish the game as hidden
1. Go to Dev Center and sign in.
2. From the Dashboard overview or All apps page, click the app you want to work with. If you have not yet
created an app submission, click on Create a new app and reserve a name.
3. On the App Overview page, click Start your submission.
4. Configure this new submission. On the submission page:
Click Pricing and availability. In the Visibility section, choose 'Hide this app and prevent
acquisition...' to ensure only your development team has access to the game. For more details, go to
Distribution and visibility.
Click Properties. In the Category and subcategory section, choose Games and then a suitable
subcategory for your game.
Click Age ratings. Fill out the questionnaire accurately.
Click Packages. Upload the game package created in the earlier step.
5. Follow any other submission prompts in the dashboard so that you can successfully publish this game, which
remains hidden from the public.
6. Click Submit to the Store.
For more info, go to App submissions.
After your game is submitted to the Store, it enters the app certification process. This process can take up to 16
hours before the game is listed.
Associate your game solution with the Store
With your game solution opened in Visual Studio:
1. Go to Project > Store > Associate App with the Store ...
2. Sign in to your Dev Center developer account and select the app name to associate this solution with.
3. Double-click the Package.appxmanifest file and go to the Packaging tab to check that the game is
associated correctly.
If you have associated the solution with a published game that is live and listed in the Store, your solution will have an
active license and you are one step closer to creating add-ons for your game. For more info, see Packaging apps.
Create an add-on in the Store
As you create add-ons, make sure you're associating them with the right game submission. For details about how
to configure all the various info associated with an add-on, see Add-on submissions.
1. Go to the Dev Center and sign in.
2. From the Dashboard overview or All apps page, click the app you want to create the add-on for.
3. On the App Overview page, in the Add-ons section, select Create a new add-on.
4. Select the product type for the add-on: developer-managed consumable, store-managed consumable, or
durable.
5. Enter a unique product ID which will be used as a string variable when integrating this add-on into your game
code. This ID will not be seen by consumers. For more info, see Set your app product type and product ID.
Other configurations for add-ons include:
Properties
Pricing and availability
Store listing
If your game has many add-ons, you can create them programmatically by using the Windows Store submission
API. For more info, see Create and manage submissions using Windows Store services.

Display ads in your game


The libraries and tools in the Microsoft Store Services SDK help you set up a service in your game to receive ads
from an ad network. Your gamers will be shown live ads and you'll earn money from the advertisers when your
gamers view or interact with the displayed ads. For more info, see Workflows for creating apps with ads.
Ad formats
Two types of ads can be displayed by using the Microsoft Store Services SDK:
Banner ads: ads that take up a part of your gaming screen and are usually placed within a game.
Interstitial video ads: full-screen ads, which can be very effective when used between levels. If implemented
properly, they can be less obtrusive than banner ads.
Which ads are displayed?
Ads are currently served through our partner networks when you use the Microsoft Store Services SDK. For more
info about current offerings, see Monetize your apps with ads. If you use AdControl to display ads, you can opt in to
show affiliate ads by expanding the product ads that are shown in your game.
Which markets allow ads to be displayed?
Banner ads and interstitial video ads can be shown to users from selected countries. For the full list of countries and
regions that support ads, see Supported markets for Microsoft Advertising.
APIs for displaying ads
The AdControl and InterstitialAd classes in the Microsoft Store Services SDK, which are part of the
Microsoft.Advertising.WinRT.UI namespace, are used to display ads in games.
To get started, download and install the Microsoft Store Services SDK with Visual Studio 2015. For more info, see
Features available in the SDK.
Implementation guides
These walkthroughs show how to implement ads by using AdControl and InterstitialAd:
Create banner ads by using the AdControl class in XAML and .NET
Create banner ads by using the AdControl class in HTML5 and JavaScript
Create interstitial video ads by using the InterstitialAd class
During development, you can make use of these test values to see how the ads are rendered. These same values
are also used in the walkthroughs above.
Ad type: Banner ads
AdUnitId: 10865270
AppId: 3f83fe91-d6be-434d-a0ae-7351c5a997f1

Ad type: Interstitial ads
AdUnitId: 11389925
AppId: d25517cb-12d4-4699-8bdc-52040c712cab
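As a sketch of how these test values plug into the InterstitialAd class, the following C++/CX fragment requests a video ad and shows it when it is ready. In a real game you would also handle the ErrorOccurred, Completed, and Cancelled events, as described in the walkthroughs above.

using namespace Microsoft::Advertising::WinRT::UI;

// Keep the ad object alive for the lifetime of the request
// (for example, as a member of your game class).
auto interstitialAd = ref new InterstitialAd();

// When the ad has finished buffering, show it at a natural break in gameplay.
interstitialAd->AdReady += ref new Windows::Foundation::EventHandler<Platform::Object^>(
    [interstitialAd](Platform::Object^ sender, Platform::Object^ args)
{
    interstitialAd->Show();
});

// Request the ad well before you intend to show it, using the test values above.
interstitialAd->RequestAd(
    AdType::Video,
    L"d25517cb-12d4-4699-8bdc-52040c712cab", // AppId (test value)
    L"11389925");                            // AdUnitId (test value)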

Here are some best practices to help you in the design and implementation process.
Best practices for banner ads by using the AdControl class
Best practices for interstitial ads by using the InterstitialAd class
For solutions to common development issues, like ads not appearing, a black box blinking and disappearing, or ads
not refreshing, see Troubleshooting guides.
Prepare for release by replacing ad unit test values
When you're ready to move to live testing or to receive ads in published games, you must update the test ad unit
values to the actual values provided for your game. To create ad units for your game, see Set up ad units in your
app.
Other ad networks
These are other ad networks that support serving ads to UWP apps and games.
Vungle
The Vungle SDK for Windows offers video ads in apps and games. To download the SDK, go to Vungle SDK.
Smaato
Smaato enables banner ads to be incorporated into UWP apps and games. Download the SDK, and see the
documentation for more info.
AdDuplex
You can use AdDuplex to implement banner or interstitial ads in your game.
To learn more about integrating AdDuplex directly into a Windows 10 XAML project, go to the AdDuplex website:
Banner ads: Windows 10 SDK for XAML
Interstitial ads: Windows 10 XAML AdDuplex Interstitial Ad Installation and Usage
For info about integrating the AdDuplex SDK into Windows 10 UWP games created using Unity, see Windows 10
SDK for Unity apps installation and usage.

Maximize your game's potential through ad campaigns


Take the next step in promoting your game using ads. When you create an ad campaign for your game, other apps
and games will display ads promoting your game.
Choose from several types of campaigns that can help increase your gamer base.

Campaign type: Paid
Ads for your game appear in: apps that match your game's device or category.

Campaign type: Free community
Ads for your game appear in: apps published by other developers who have also opted in to community ad campaigns. For more info, see About community ads.

Campaign type: Free house
Ads for your game appear in: only apps that you've published. For more info, see About house ads.

Related links
Getting paid
Account types, locations, and fees
Analytics
Globalization and localization
Implement a trial version of your app
Run app experiments with A/B testing
Concept approval

When you begin creating a game that will run on Xbox, you will need to submit a proposal concerning that game to
Microsoft for concept approval before you can publish it. This up-front, high-level submission benefits both
Microsoft and you by identifying at the very beginning of the process any likely difficulties or drawbacks in the
overall plan for the game. Try to make sure that your content isn't overly vulgar, offensive, or objectionable, and
that it feels at home on the target platform. Once you submit your proposal, Microsoft will review it and then notify
you of the result.
If you are developing a Universal Windows Platform (UWP) game, you only need concept approval if you want to
allow customers to download it on Xbox devices or you want to enable Xbox Live features. If you only want to make
your UWP game available to Windows desktop or mobile devices (or if you're publishing a UWP app that's not a
game, targeting any device), and you won't be using Xbox Live, all you need is a developer account, and you can
freely configure and submit your app to the Store through the dashboard.

Submit your concept for approval


If you are an independent game developer or publisher, you can submit your concept for approval through the
ID@Xbox program. Learn more about ID@Xbox and apply here.
If you are already an ID@Xbox developer, you should have been sent a link to the Game Information Form (GIF)
where you can submit your game concept. If you have questions, contact id@xbox.com.
If you have an existing license agreement with Microsoft, contact your Microsoft account team for information on
submitting your concept.
Package your Universal Windows Platform (UWP)
DirectX game
3/6/2017 13 min to read Edit on GitHub

[ Updated for UWP apps on Windows 10. For Windows 8.x articles, see the archive ]
Larger Universal Windows Platform (UWP) games, especially those that support multiple languages with region-
specific assets or feature optional high-definition assets, can easily balloon to large sizes. In this topic, learn how to
use app packages and app bundles to customize your app so that your customers only receive the resources they
actually need.
In addition to the app package model, Windows 10 supports app bundles, which group together two types of packs:
App packs contain platform-specific executables and libraries. Typically, a UWP game can have up to three app
packs: one each for the x86, x64, and ARM CPU architectures. All code and data specific to that hardware
platform must be included in its app pack. An app pack should also contain all the core assets for the game to
run with a baseline level of fidelity and performance.
Resource packs contain optional or expanded platform-agnostic data, such as game assets (textures, meshes,
sound, text). A UWP game can have one or more resource packs, including resource packs for high-definition
assets or textures, DirectX feature level 11+ resources, or language-specific assets and resources.
For more information about app bundles and app packs, read Defining app resources.
While you can place all content in your app packs, this is inefficient and redundant. Why have the same large
texture file replicated three times, once for each platform, especially for ARM platforms that may not use it? A good goal
is to try to minimize what your customer has to download, so they can start playing your game sooner, save space
on their device, and avoid possible metered bandwidth costs.
To use this feature of the UWP app installer, it is important to consider the directory layout and file naming
conventions for app and resource packaging early in game development, so your tools and source can output them
correctly in a way that makes packaging simple. Follow the rules outlined in this doc when developing or
configuring asset creation and managing tools and scripts, and when authoring code that loads or references
resources.

Why create resource packs?


When you create an app, particularly a game app that can be sold in many locales or on a broad variety of UWP
hardware platforms, you often need to include multiple versions of many files to support those locales or
platforms. For example, if you are releasing your game in both the United States and Japan, you might need one
set of voice files in English for the en-us locale, and another in Japanese for the ja-jp locale. Or, if you want to use
an image in your game for ARM devices as well as x86 and x64 platforms, you must upload the same image asset
three times, once for each CPU architecture.
Additionally, if your game has a lot of high definition resources that do not apply to platforms with lower DirectX
feature levels, why include them in the baseline app pack and require your user to download a large volume of
components that the device can't use? Separating these high-def resources into an optional resource pack means
that customers with devices that support those high-def resources can obtain them at the cost of (possibly
metered) bandwidth, while those who do not have higher-end devices can get their game quicker and at a lower
network usage cost.
Content candidates for game resource packs include:
International locale specific assets (localized text, audio, or images)
High resolution assets for different device scaling factors (1.0x, 1.4x, and 1.8x)
High definition assets for higher DirectX feature levels (9, 10, and 11)
All of this is defined in the package.appxmanifest that is part of your UWP project, and in the directory structure of
your final package. Because of the new Visual Studio UI, if you follow the process in this document, you should not
need to edit the manifest manually.

Important The loading and management of these resources are handled through the
Windows.ApplicationModel.Resources* APIs. If you use these app model resource APIs to load the correct
file for a locale, scaling factor, or DirectX feature level, you do not need to load your assets using explicit file
paths; rather, you provide the resource APIs with just the generalized file name of the asset you want, and let
the resource management system obtain the correct variant of the resource for the user's current platform and
locale configuration (which you can specify directly as well with these same APIs).

Resources for resource packaging are specified in one of two basic ways:
Asset files have the same filename, and the resource pack-specific versions are placed in specially named
directories. These directory names are reserved by the system. For example, \en-us, \scale-140, \dxfl-dx11.
Asset files are stored in folders with arbitrary names, but the files are named with a common label that is
appended with strings reserved by the system to denote language or other qualifiers. Specifically, the qualifier
strings are affixed to the generalized filename after an underscore (_). For example,
\assets\menu_option1_lang-en-us.png, \assets\menu_option1_scale-140.png, \assets\coolsign_dxfl-dx11.dds.
You may also combine these strings. For example, \assets\menu_option1_scale-140_lang-en-us.png.

Note When used in a filename rather than alone in a directory name, a language qualifier must take the form "lang-",
e.g. "lang-en-us", as described in How to name resources using qualifiers.
Directory names can be combined for additional specificity in resource packaging. However, they cannot be
redundant. For example, \en-us\menu_option1_lang-en-us.png is redundant.
You may specify any non-reserved subdirectory names you need underneath a resource directory, as long as the
directory structure is identical in each resource directory. For example, \dxfl-dx10\assets\textures\coolsign.dds.
When you load or reference an asset, the pathname must be generalized, removing any qualifiers for language,
scale, or DirectX feature level, whether they are in folder nodes or in the file names. For example, to refer in code to
an asset for which one of the variants is \dxfl-dx10\assets\textures\coolsign.dds, use \assets\textures\coolsign.dds.
Likewise, to refer to an asset with a variant \images\background_scale-140.png, use \images\background.png.
Here are the reserved directory names and filename suffixes:

Asset type: Localized assets
Resource pack directory name: All possible languages, or language and locale combinations, for Windows 10. (The qualifier prefix "lang-" is not required in a folder name.)
Resource pack filename suffix: An "_" followed by the language, locale, or language-locale specifier. For example, "_en", "_us", or "_en-us", respectively.

Asset type: Scaling factor assets
Resource pack directory name: scale-100, scale-140, scale-180. These are for the 1.0x, 1.4x, and 1.8x UI scaling factors, respectively.
Resource pack filename suffix: An "_" followed by "scale-100", "scale-140", or "scale-180".

Asset type: DirectX feature level assets
Resource pack directory name: dxfl-dx9, dxfl-dx10, and dxfl-dx11. These are for the DirectX 9, 10, and 11 feature levels, respectively.
Resource pack filename suffix: An "_" followed by "dxfl-dx9", "dxfl-dx10", or "dxfl-dx11".
Defining localized language resource packs
Locale-specific files are placed in project directories named for the language (for example, "en").
When configuring your app to support localized assets for multiple languages, you should:
Create an app subdirectory (or file version) for each language and locale you will support (for example, en-us,
ja-jp, zh-cn, fr-fr, and so on).
During development, place copies of ALL assets (such as localized audio files, textures, and menu graphics) in
the corresponding language locale subdirectory, even if they are not different across languages or locales. For
the best user experience, ensure that the user is alerted if they have not obtained an available language resource
pack for their locale (or if they have accidentally deleted it after download and installation).
Make sure each asset or string resource file (.resw) has the same name in each directory. For example,
menu_option1.png should have the same name in both the \en-us and \ja-jp directories even if the content
of the file is for a different language. In this case, you'd see them as \en-us\menu_option1.png and \ja-
jp\menu_option1.png.

Note You can optionally append the locale to the file name and store them in the same directory; for
example, \assets\menu_option1_lang-en-us.png, \assets\menu_option1_lang-ja-jp.png.

Use the APIs in Windows.ApplicationModel.Resources and
Windows.ApplicationModel.Resources.Core to specify and load the locale-specific resources for your
app. Also, use asset references that do not include the specific locale, since these APIs determine the correct
locale based on the user's settings and then retrieve the correct resource for the user (see the sketch after
this list).
In Microsoft Visual Studio 2015, select PROJECT->Store->Create App Package... and create the package.
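As a minimal sketch of this generalized loading, assuming the menu_option1.png asset from the example above, the following C++/CX fragment resolves the correct language variant at run time; the path and the hand-off to your loader are illustrative, and the same pattern applies to scaling factor variants.

#include <ppltasks.h>
using namespace concurrency;
using namespace Windows::ApplicationModel::Resources::Core;

// Look up the generalized asset name; the resource system picks the variant
// (for example, from \en-us or \ja-jp) that best matches the user's settings.
auto mainResourceMap = ResourceManager::Current->MainResourceMap;
auto candidate = mainResourceMap->GetValue(
    L"Files/assets/menu_option1.png",
    ResourceContext::GetForCurrentView());

// Resolve the candidate to the actual file, then pass it to your asset loader.
create_task(candidate->GetValueAsFileAsync())
    .then([](Windows::Storage::StorageFile^ file)
{
    // Load the image from 'file' with your asset pipeline.
});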

Defining scaling factor resource packs


Windows 10 provides three user interface scaling factors: 1.0x, 1.4x, and 1.8x. Scaling values for each display are set
during installation based on a number of combined factors: the size of the screen, the resolution of the screen, and
the assumed average distance of the user from the screen. The user can also adjust scale factors to improve
readability. Your game should be both DPI-aware and scaling factor-aware for the best possible experience. Part of
this awareness means creating versions of critical visual assets for each of the three scaling factors. This also
includes pointer interaction and hit testing!
When configuring your app to support resource packs for different UWP app scaling factors, you should:
Create an app subdirectory (or file version) for each scaling factor you will support (scale-100, scale-140, and
scale-180).
During development, place scale factor-appropriate copies of ALL assets in each scale factor resource directory,
even if they are not different across scaling factors.
Make sure each asset has the same name in each directory. For example, menu_option1.png should have the
same name in both the \scale-100 and \scale-180 directories even if the content of the file is different. In this
case, you'd see them as \scale-100\menu_option1.png and \scale-180\menu_option1.png.

Note Again, you can optionally append the scaling factor suffix to the file name and store them in the
same directory; for example, \assets\menu_option1_scale-100.png, \assets\menu_option1_scale-
140.png.

Use the APIs in Windows.ApplicationModel.Resources.Core to load the assets. Asset references should
be generalized (no suffix), leaving out the specific scale variation. The system will retrieve the appropriate
scale asset for the display and the user's settings.
In Visual Studio 2015, select PROJECT->Store->Create App Package... and create the package.

Defining DirectX feature level resource packs


DirectX feature levels correspond to GPU feature sets for prior and current versions of DirectX (specifically,
Direct3D). This includes shader model specifications and functionality, shader language support, texture
compression support, and overall graphics pipeline features.
Your baseline app pack should use the baseline texture compression formats: BC1, BC2, or BC3. These formats can
be consumed by any UWP device, from low-end ARM platforms up to dedicated multi-GPU workstations and
media computers.
Texture format support at DirectX feature level 10 or higher should be added in a resource pack to conserve local
disk space and download bandwidth. This enables using the more advanced compression schemes for feature
level 11, like BC6H and BC7. (For more details, see Texture block compression in Direct3D 11.) These formats are
more efficient for the high-resolution texture assets supported by modern GPUs, and using them improves the
look, performance, and space requirements of your game on high-end platforms.

DirectX feature level 9: BC1, BC2, BC3
DirectX feature level 10: BC4, BC5
DirectX feature level 11: BC6H, BC7

Also, each DirectX feature level supports different shader model versions. Compiled shader resources can be
created on a per-feature-level basis, and can be included in DirectX feature level resource packs. Additionally, some
later version shader models can use assets, such as normal maps, that earlier shader model versions cannot. These
shader model-specific assets should be included in a DirectX feature level resource pack as well.
The resource mechanism is primarily focused on the texture formats supported for assets, so it supports only the
three overall feature levels. If you need separate shaders for sub-levels (dot versions) like DX9_1 vs. DX9_3, your
asset management and rendering code must handle them explicitly.
When configuring your app to support resource packs for different DirectX feature levels, you should:
Create an app subdirectory (or file version) for each DirectX feature level you will support (dxfl-dx9, dxfl-dx10,
and dxfl-dx11).
During development, place feature level specific assets in each feature level resource directory. Unlike locales
and scaling factors, you may have different rendering code branches for each feature level in your game, and if
you have textures, compiled shaders, or other assets that are only used in one or a subset of all supported
feature levels, put the corresponding assets only in the directories for the feature levels that use them. For
assets that are loaded across all feature levels, make sure that each feature level resource directory has a
version of it with the same name. For example, for a feature level independent texture named "coolsign.dds",
place the BC3-compressed version in the \dxfl-dx9 directory and the BC7-compressed version in the \dxfl-dx11
directory.
Make sure each asset (if it is available to multiple feature levels) has the same name in each directory. For
example, coolsign.dds should have the same name in both the \dxfl-dx9 and \dxfl-dx11 directories even if
the content of the file is different. In this case, you'd see them as \dxfl-dx9\coolsign.dds and \dxfl-
dx11\coolsign.dds.

Note Again, you can optionally append the feature level suffix to the file name and store them in the
same directory; for example, \textures\coolsign_dxfl-dx9.dds, \textures\coolsign_dxfl-dx11.dds.
Declare the supported DirectX feature levels when configuring your graphics resources.

D3D_FEATURE_LEVEL featureLevels[] =
{
    D3D_FEATURE_LEVEL_11_1,
    D3D_FEATURE_LEVEL_11_0,
    D3D_FEATURE_LEVEL_10_1,
    D3D_FEATURE_LEVEL_10_0,
    D3D_FEATURE_LEVEL_9_3,
    D3D_FEATURE_LEVEL_9_1
};

ComPtr<ID3D11Device> device;
ComPtr<ID3D11DeviceContext> context;

D3D11CreateDevice(
    nullptr,                  // Use the default adapter.
    D3D_DRIVER_TYPE_HARDWARE,
    0,                        // Use 0 unless it is a software device.
    creationFlags,            // Device creation flags, defined above.
    featureLevels,            // What the app will support.
    ARRAYSIZE(featureLevels),
    D3D11_SDK_VERSION,        // This should always be D3D11_SDK_VERSION.
    &device,                  // Receives the created device.
    &m_featureLevel,          // Receives the feature level of the device.
    &context                  // Receives the corresponding immediate context.
);

Use the APIs in Windows.ApplicationModel.Resources.Core to load the resources. Asset references
should be generalized (no suffix), leaving out the feature level. However, unlike language and scale, the
system does not automatically determine which feature level is optimal for a given display; that is left to you
to determine based on code logic. Once you make that determination, use the APIs to inform the OS of the
preferred feature level. The system will then be able to retrieve the correct asset based on that preference.
Here is a code sample that shows how to inform your app of the current DirectX feature level for the
platform:

// Set the current UI thread's MRT ResourceContext's DXFeatureLevel with the right DXFL.

Platform::String^ dxFeatureLevel;
switch (m_featureLevel)
{
case D3D_FEATURE_LEVEL_9_1:
case D3D_FEATURE_LEVEL_9_2:
case D3D_FEATURE_LEVEL_9_3:
    dxFeatureLevel = L"DX9";
    break;

case D3D_FEATURE_LEVEL_10_0:
case D3D_FEATURE_LEVEL_10_1:
    dxFeatureLevel = L"DX10";
    break;

default:
    dxFeatureLevel = L"DX11";
}

ResourceContext::SetGlobalQualifierValue(L"DXFeatureLevel", dxFeatureLevel);

Note In your code, load the texture directly by name (or path below the feature level directory). Do not
include either the feature level directory name or the suffix. For example, load "textures\coolsign.dds",
not "dxfl-dx11\textures\coolsign.dds" or "textures\coolsign_dxfl-dx11.dds".
Now, use the ResourceManager to locate the file that matches current DirectX feature level. The
ResourceManager returns a ResourceMap, which you query with ResourceMap::GetValue (or
ResourceMap::TryGetValue) and a supplied ResourceContext. This returns a ResourceCandidate that
most closely matches the DirectX feature level that was specified by calling SetGlobalQualifierValue.
// An explicit ResourceContext is needed to match the DirectX feature level
// for the display on which the current view is presented.
auto resourceContext = ResourceContext::GetForCurrentView();
auto mainResourceMap = ResourceManager::Current->MainResourceMap;

// For this code example, loader is a custom ref class used to load resources.
// You can use the BasicLoader class from any of the 8.1 DirectX samples similarly.

auto possibleResource = mainResourceMap->GetValue(
    L"Files/BumpPixelShader.cso",
    resourceContext
);
Platform::String^ resourceName = possibleResource->ValueAsString;

In Visual Studio 2015, select PROJECT->Store->Create App Package... and create the package.
Make sure that you enable app bundles in the package.appxmanifest manifest settings.

Related topics
Defining app resources
Packaging apps
App packager (MakeAppx.exe)
UWP programming

This section provides information about developing UWP games. Note that some of these articles are written in the
context of creating a UWP game with DirectX.

TOPIC DESCRIPTION

Audio for games Describes the use of XAudio2 and Microsoft Media
Foundation to add music and sound effects into a DirectX
game.

Input for games Learn about the different kinds of input devices for UWP
games and how to implement them.

Networking for games Explains how to develop and incorporate networking features into a DirectX game.
Audio for games

[ Updated for UWP apps on Windows 10. For Windows 8.x articles, see the archive ]
Learn how to develop and incorporate music and sounds into your DirectX game, and how to process the audio
signals to create dynamic and positional sounds.
For audio programming, we recommend using the XAudio2 library in DirectX, and we use it here. XAudio2 is a low-
level audio library that provides a signal processing and mixing foundation for games, and it supports a variety of
formats.
You can also implement simple sounds and music playback with Microsoft Media Foundation. Microsoft Media
Foundation is designed for the playback of media files and streams, both audio and video, but can also be used in
games, and is particularly useful for cinematic scenes or non-interactive components of your game.

Concepts at a glance
Here are a few audio programming concepts we use in this section.
Signals are the basic unit of sound programming, analogous to pixels in graphics. The digital signal processors
(DSPs) that process them are like the pixel shaders of game audio. They can transform signals, or combine
them, or filter them. By programming to the DSPs, you can alter your game's sound effects and music with as
little or as much complexity as you need.
Voices are the submixed composites of two or more signals. There are 3 types of XAudio2 voice objects: source,
submix, and mastering voices. Source voices operate on audio data provided by the client. Source and submix
voices send their output to one or more submix or mastering voices. Submix and mastering voices mix the
audio from all voices feeding them, and operate on the result. Mastering voices write audio data to an audio
device.
Mixing is the process of combining several discrete voices, such as the sound effects and the background audio
that are played back in a scene, into a single stream. Submixing is the process of combining several discrete
signals, such as the component sounds of an engine noise, and creating a voice.
Audio formats. Music and sound effects can be stored in a variety of digital formats for your game. There are
uncompressed formats, like WAV, and compressed formats like MP3 and OGG. The more a sample is
compressed -- typically designated by its bit rate, where the lower the bit rate is, the more lossy the
compression -- the worse fidelity it has. Fidelity can vary across compression schemes and bit rates, so
experiment with them to find what works best for your game.
Sample rate and quality. Sounds can be sampled at different rates, and sounds sampled at a lower rate have
much poorer fidelity. The sample rate for CD quality is 44.1 kHz (44100 Hz). If you don't need high fidelity for a
sound, you can choose a lower sample rate. Higher rates may be appropriate for professional audio
applications, but you probably don't need them unless your game demands professional fidelity sound.
Sound emitters (or sources). In XAudio2, sound emitters are locations that emit a sound, be it a mere blip of a
background noise or a snarling rock track played by an in-game jukebox. You specify emitters by world
coordinates.
Sound listeners. A sound listener is often the player, or perhaps an AI entity in a more advanced game, that
processes the sounds received from an emitter. You can submix that sound into the audio stream for playback to
the player, or you can use it to take a specific in-game action, like awakening an AI guard marked as a listener.
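To make the emitter/listener idea concrete, here is a minimal, hedged sketch using X3DAudio (the companion API covered later in this section); the positions are illustrative world coordinates, not values from any particular sample.

#include <x3daudio.h>

// Initialize X3DAudio for a stereo output configuration.
X3DAUDIO_HANDLE x3dInstance;
X3DAudioInitialize(SPEAKER_STEREO, X3DAUDIO_SPEED_OF_SOUND, x3dInstance);

// The listener is typically the player.
X3DAUDIO_LISTENER listener = {};
listener.Position    = { 0.0f, 0.0f, 0.0f };
listener.OrientFront = { 0.0f, 0.0f, 1.0f };
listener.OrientTop   = { 0.0f, 1.0f, 0.0f };

// The emitter is a sound source placed in world coordinates.
X3DAUDIO_EMITTER emitter = {};
emitter.Position            = { 10.0f, 0.0f, 5.0f };
emitter.ChannelCount        = 1;
emitter.CurveDistanceScaler = 1.0f;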

Design considerations
Audio is a tremendously important part of game design and development. Many gamers can recall a mediocre
game elevated to legendary status just because of a memorable soundtrack, or great voice work and sound mixing,
or overall stellar audio production. Music and sound define a game's personality, and establish the main motive
that defines the game and makes it stand apart from other similar games. The effort you spend designing and
developing your game's audio profile will be well worth it.
Positional 3D audio can add a level of immersion beyond that provided by 3D graphics. If you are developing a
complex game that simulates a world, or which demands a cinematic style, consider using 3D positional audio
techniques to really draw the player in.

DirectX audio development roadmap


XAudio2 conceptual resources
XAudio2 is the audio mixing library for DirectX, and is primarily intended for developing high performance audio
engines for games. For game developers who want to add sound effects and background music to their modern
games, XAudio2 offers an audio graph and mixing engine with low latency and support for dynamic buffers,
synchronous sample-accurate playback, and implicit source rate conversion.
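As a taste of the API before you dive into the topics below, a minimal initialization sketch (the subject of How to: Initialize XAudio2) looks roughly like this:

#include <xaudio2.h>
#include <wrl/client.h>

// Create the XAudio2 engine.
Microsoft::WRL::ComPtr<IXAudio2> xaudio2;
HRESULT hr = XAudio2Create(&xaudio2, 0, XAUDIO2_DEFAULT_PROCESSOR);

// Create a mastering voice, which wraps the audio output device.
IXAudio2MasteringVoice* masteringVoice = nullptr;
if (SUCCEEDED(hr))
{
    hr = xaudio2->CreateMasteringVoice(&masteringVoice);
}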

TOPIC DESCRIPTION

Introduction to XAudio2 This topic provides a list of the audio programming features supported by XAudio2.

Getting Started with XAudio2 This topic provides information on key XAudio2 concepts,
XAudio2 versions, and the RIFF audio format.

Common Audio Programming Concepts This topic provides an overview of common audio
concepts with which an audio developer should be
familiar.

XAudio2 Voices This topic contains an overview of XAudio2 voices, which are used to submix, operate on, and master audio data.

XAudio2 Callbacks This topic covers the XAudio2 callbacks, which are used
to prevent breaks in the audio playback.

XAudio2 Audio Graphs This topic covers the XAudio2 audio processing graphs,
which take a set of audio streams from the client as input,
process them, and deliver the final result to an audio
device.

XAudio2 Audio Effects This topic covers XAudio2 audio effects, which take
incoming audio data and perform some operation on the
data (such as a reverb effect) before passing it on.

Streaming Audio Data with XAudio2 This topic covers audio streaming with XAudio2.

X3DAudio This topic covers X3DAudio, an API used in conjunction with XAudio2 to create the illusion of a sound coming from a point in 3D space.

XAudio2 Programming Reference This section contains the complete reference for the
XAudio2 APIs.

XAudio2 "how to" resources

TOPIC DESCRIPTION

How to: Initialize XAudio2 Learn how to initialize XAudio2 for audio playback by
creating an instance of the XAudio2 engine, and creating
a mastering voice.

How to: Load Audio Data Files in XAudio2 Learn how to populate the structures required to play
audio data in XAudio2.

How to: Play a Sound with XAudio2 Learn how to play previously-loaded audio data in
XAudio2.

How to: Use Submix Voices Learn how to set groups of voices to send their output to
the same submix voice.

How to: Use Source Voice Callbacks Learn how to use XAudio2 source voice callbacks.

How to: Use Engine Callbacks Learn how to use XAudio2 engine callbacks.

How to: Build a Basic Audio Processing Graph Learn how to create an audio processing graph,
constructed from a single mastering voice and a single
source voice.

How to: Dynamically Add or Remove Voices From an Audio Graph Learn how to add or remove submix voices from a graph that has been created following the steps in How to: Build a Basic Audio Processing Graph.

How to: Create an Effect Chain Learn how to apply an effect chain to a voice to allow
custom processing of the audio data for that voice.

How to: Create an XAPO Learn how to implement IXAPO to create an XAudio2
audio processing object (XAPO).

How to: Add Run-time Parameter Support to an XAPO Learn how to add run-time parameter support to an
XAPO by implementing the IXAPOParameters interface.

How to: Use an XAPO in XAudio2 Learn how to use an effect implemented as an XAPO in an
XAudio2 effect chain.

How to: Use XAPOFX in XAudio2 Learn how to use one of the effects included in XAPOFX in
an XAudio2 effect chain.

How to: Stream a Sound from Disk Learn how to stream audio data in XAudio2 by creating a
separate thread to read an audio buffer, and to use
callbacks to control that thread.

How to: Integrate X3DAudio with XAudio2 Learn how to use X3DAudio to provide the volume and
pitch values for XAudio2 voices as well as the parameters
for the XAudio2 built-in reverb effect.

How to: Group Audio Methods as an Operation Set Learn how to use XAudio2 operation sets to make a
group of method calls take effect at the same time.

Debugging Audio Glitches in XAudio2 Learn how to set the debug logging level for XAudio2.

Media Foundation resources


Media Foundation (MF) is a media platform for streaming audio and video playback. You can use the Media
Foundation APIs to stream audio and video encoded and compressed with a variety of algorithms. It is not
designed for real-time gameplay scenarios; instead, it provides powerful tools and broad codec support for more
linear capture and presentation of audio and video components.

TOPIC DESCRIPTION

About Media Foundation This section contains general information about the Media
Foundation APIs, and the tools available to support them.

Media Foundation: Essential Concepts This topic introduces some concepts that you will need to
understand before writing a Media Foundation
application.

Media Foundation Architecture This section describes the general design of Microsoft
Media Foundation, as well as the media primitives and
processing pipeline it uses.

Audio/Video Capture This topic describes how to use Microsoft Media Foundation to perform audio and video capture.

Audio/Video Playback This topic describes how to implement audio/video playback in your app.

Supported Media Formats in Media Foundation This topic lists the media formats that Microsoft Media
Foundation supports natively. (Third parties can support
additional formats by writing custom plug-ins.)

Encoding and File Authoring This topic describes how to use Microsoft Media
Foundation to perform audio and video encoding, and
author media files.

Windows Media Codecs This topic describes how to use the features of the
Windows Media Audio and Video codecs to produce and
consume compressed data streams.

Media Foundation Programming Reference This section contains reference information for the Media
Foundation APIs.

Media Foundation SDK Samples This section lists sample apps that demonstrate how to
use Media Foundation.

Windows Runtime XAML media types


If you are using DirectX-XAML interop, you can incorporate the Windows Runtime XAML media APIs into your
Windows Store apps using DirectX with C++ for simpler game scenarios.

TOPIC DESCRIPTION

Windows.UI.Xaml.Controls.MediaElement XAML element that represents an object that contains audio, video, or both.

Audio, video, and camera Learn how to incorporate basic audio and video in your
Universal Windows Platform (UWP) app.

MediaElement Learn how to play a locally-stored media file in your UWP app.

MediaElement Learn how to stream a media file with low latency in your UWP app.

Media casting Learn how to use the Play To contract to stream media
from your UWP app to another device.

Reference
XAudio2 Introduction
XAudio2 Programming Guide
Microsoft Media Foundation overview

Note
This article is for Windows 10 developers writing Universal Windows Platform (UWP) apps. If you're
developing for Windows 8.x or Windows Phone 8.x, see the archived documentation.

Related topics
XAudio2 Programming Guide
Input for games

This section describes the different kinds of input devices that can be used in Universal Windows Platform (UWP)
games on Windows 10 and Xbox One, demonstrates their basic usage, and recommends patterns and techniques
for effective input programming in games.

Note Other kinds of input devices exist and are available to be used in UWP games, such as custom input
devices that might be genre-specific or game-specific. Such devices and their programming are not discussed
in this section. For information on the interfaces used to facilitate custom input devices, see the
Windows.Gaming.Input.Custom namespace.

Gaming input devices


Game input devices are supported in UWP games and apps for Windows 10 and Xbox One by the
Windows.Gaming.Input namespace.
Gamepads
Gamepads are the standard input device on Xbox One and a common choice for Windows gamers when they don't
favor a keyboard and mouse. They provide a variety of digital and analog controls making them suitable for almost
any kind of game and also provide tactile feedback through embedded vibration motors.
For information on how to use gamepads in your UWP game, see Gamepad and vibration.
Arcade sticks
Arcade sticks are all-digital input devices valued for reproducing the feel of stand-up arcade machines and are the
perfect input device for head-to-head fighting or other arcade-style games.
For information on how to use arcade sticks in your UWP game, see Arcade stick.
Racing wheels
Racing wheels are input devices that resemble the feel of a real racecar cockpit and are the perfect input device for
any racing game that features cars or trucks. Many racing wheels are equipped with true force feedback--that is,
they can apply actual forces on an axis of control such as the steering wheel--not just simple vibration.
For information on how to use racing wheels in your UWP game, see Racing Wheel and force feedback.
UI navigation controller
UI navigation controllers are logical input devices that exist to provide a common vocabulary for UI navigation
commands, promoting a consistent user experience across different games and physical input devices. A game's
user interface should use the UINavigationController interfaces instead of device-specific interfaces.
For information on how to use UI navigation controllers in your UWP game, see UI navigation controller.
Headsets
Headsets are audio capture and playback devices that are associated with a specific user when connected through
their input device. They're commonly used by online games for voice chat but can also be used to enhance
immersion or provide gameplay features in both online and offline games.
For information on how to use headsets in your UWP game, see Headset.
Users
Each input device and its connected headset can be associated with a specific user to link their identity to their
gameplay. The user identity is also the means by which input from a physical input device is correlated to input
from its logical UI navigation controller.
For information on how to manage users and their input devices, see Tracking users and their devices.

See Also
Windows.Gaming.Input.Custom
Gamepad and vibration

This page describes the basics of programming for Xbox One gamepads using Windows.Gaming.Input.Gamepad
and related APIs for the Universal Windows Platform (UWP).
By reading this page, you'll learn:
how to gather a list of connected gamepads and their users
how to detect that a gamepad has been added or removed
how to read input from one or more gamepads
how to send vibration and impulse commands
how gamepads behave as a navigation device

Gamepad overview
Gamepads like the Xbox Wireless Controller and Xbox Wireless Controller S are general-purpose gaming input
devices. They're the standard input device on Xbox One and a common choice for Windows gamers when they
don't favor a keyboard and mouse. Gamepads are supported in Windows 10 and Xbox UWP apps by the
Windows.Gaming.Input namespace.
Xbox One gamepads are equipped with a directional pad (or D-pad); A, B, X, Y, view, and menu buttons; left and
right thumbsticks, bumpers, and triggers; and a total of four vibration motors. Both thumbsticks provide dual
analog readings in the X and Y axes, and also act as a button when pressed inward. Each trigger provides an analog
reading that represents how far it's pulled back.

Note The Xbox Elite Wireless Controller is equipped with four additional paddle buttons on its underside.
These can be used to provide redundant access to game commands that are difficult to use together (such as
the right thumbstick together with any of the A, B, X, or Y buttons) or to provide dedicated access to additional
commands.
Note Windows.Gaming.Input.Gamepad also supports Xbox 360 gamepads, which have the same control layout as
standard Xbox One gamepads.

Vibration and impulse triggers


Xbox One gamepads provide two independent motors for strong and subtle gamepad vibration as well as two
dedicated motors for providing sharp vibration to each trigger (this unique feature is the reason that Xbox One
gamepad triggers are referred to as impulse triggers).

Note Xbox 360 gamepads are not equipped with impulse triggers.

For more information, see Vibration and impulse triggers overview.


Thumbstick deadzones
A thumbstick at rest in the center position would ideally produce the same, neutral reading in the X and Y axes
every time. However, due to mechanical forces and the sensitivity of the thumbstick, actual readings in the center
position only approximate the ideal neutral value and can vary between subsequent readings. For this reason, you
must always use a small deadzone--a range of values near the ideal center position that are ignored--to
compensate for manufacturing differences, mechanical wear, or other gamepad issues.
Larger deadzones offer a simple strategy for separating intentional input from unintentional input.
For more information, see Reading the thumbsticks.
UI navigation
In order to ease the burden of supporting the different input devices for user interface navigation and to encourage
consistency between games and devices, most physical input devices simultaneously act as a separate logical input
device called a UI navigation controller. The UI navigation controller provides a common vocabulary for UI
navigation commands across input devices.
As a UI navigation controller, gamepads map the required set of navigation commands to the left thumbstick, D-
pad, view, menu, A, and B buttons.

NAVIGATION COMMAND GAMEPAD INPUT

Up Left thumbstick up / D-pad up

Down Left thumbstick down / D-pad down

Left Left thumbstick left / D-pad left

Right Left thumbstick right / D-pad right

View View button

Menu Menu button

Accept A button

Cancel B button

Additionally, gamepads map all of the optional set of navigation commands to the remaining inputs.

NAVIGATION COMMAND GAMEPAD INPUT

Page Up Left trigger

Page Down Right trigger

Page Left Left bumper

Page Right Right bumper

Scroll Up Right thumbstick up

Scroll Down Right thumbstick down

Scroll Left Right thumbstick left

Scroll Right Right thumbstick right

Context 1 X Button

Context 2 Y Button

Context 3 Left thumbstick press

Context 4 Right thumbstick press
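As an illustrative, hedged sketch, reading the Accept command through the logical navigation controller (rather than the physical gamepad) looks like this; it assumes at least one navigation controller is connected:

using namespace Windows::Gaming::Input;

// Read the logical UI navigation controller instead of the physical gamepad.
auto navController = UINavigationController::UINavigationControllers->GetAt(0);
UINavigationReading navReading = navController->GetCurrentReading();

if (RequiredUINavigationButtons::Accept ==
    (navReading.RequiredButtons & RequiredUINavigationButtons::Accept))
{
    // The user issued the Accept command (the A button on a gamepad).
}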

Detect and track gamepads


Gamepads are managed by the system; therefore, you don't have to create or initialize them. The system provides a
list of connected gamepads and events to notify you when a gamepad is added or removed.
The gamepads list
The Gamepad class provides a static property, Gamepads, which is a read-only list of gamepads that are currently
connected. Because you might only be interested in some of the connected gamepads, it's recommended that you
maintain your own collection instead of accessing them through the Gamepads property.
The following example copies all connected gamepads into a new collection.

auto myGamepads = ref new Vector<Gamepad^>();

for (auto gamepad : Gamepad::Gamepads)
{
    // This code assumes that you're interested in all gamepads.
    myGamepads->Append(gamepad);
}

Adding and removing gamepads


When a gamepad is added or removed, the GamepadAdded and GamepadRemoved events are raised. You can
register handlers for these events to keep track of the gamepads that are currently connected.
The following example starts tracking a gamepad that's been added.

Gamepad::GamepadAdded += ref new EventHandler<Gamepad^>([=] (Platform::Object^, Gamepad^ args)
{
    // This code assumes that you're interested in all new gamepads.
    myGamepads->Append(args);
});

The following example stops tracking a gamepad that's been removed.

Gamepad::GamepadRemoved += ref new EventHandler<Gamepad^>([=] (Platform::Object^, Gamepad^ args)
{
    unsigned int indexRemoved;

    if (myGamepads->IndexOf(args, &indexRemoved))
    {
        myGamepads->RemoveAt(indexRemoved);
    }
});

Users and headsets


Each gamepad can be associated with a user account to link their identity to their gameplay, and can have a
headset attached to facilitate voice chat or in-game features. To learn more about working with users and headsets,
see Tracking users and their devices and Headset.
Reading the gamepad
After you identify the gamepad that you're interested in, you're ready to gather input from it. However, unlike some
other kinds of input that you might be used to, gamepads don't communicate state-change by raising events.
Instead, you take regular readings of their current state by polling them.
Polling the gamepad
Polling captures a snapshot of the gamepad at a precise point in time. This approach to input gathering is a
good fit for most games because their logic typically runs in a deterministic loop rather than being event-driven; it's
also typically simpler to interpret game commands from input gathered all at once than it is from many single
inputs gathered over time.
You poll a gamepad by calling GetCurrentReading; this function returns a GamepadReading that contains the state
of the gamepad.
The following example polls a gamepad for its current state.

auto gamepad = myGamepads->GetAt(0);

GamepadReading reading = gamepad->GetCurrentReading();

In addition to the gamepad state, each reading includes a timestamp that indicates precisely when the state was
retrieved. The timestamp is useful for relating to the timing of previous readings or to the timing of the game
simulation.
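For example, a small sketch (assuming you keep the previous reading around, and that the timestamp is expressed in microseconds) of relating two readings:

// Relate two consecutive polls to the game simulation. The previousReading
// variable is assumed to hold the reading taken on the prior frame.
uint64 elapsedMicroseconds = reading.Timestamp - previousReading.Timestamp;
float elapsedSeconds = elapsedMicroseconds / 1000000.0f;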
Reading the thumbsticks
Each thumbstick provides an analog reading between -1.0 and +1.0 in the X and Y axes. In the X axis, a value of -1.0
corresponds to the left-most thumbstick position; a value of +1.0 corresponds to right-most position. In the Y axis,
a value of -1.0 corresponds to the bottom-most thumbstick position; a value of +1.0 corresponds to the top-most
position. In both axes, the value is approximately 0.0 when the stick is in the center position, but it's normal for the
precise value to vary, even between subsequent readings; strategies for mitigating this variation are discussed later
in this section.
The value of the left thumbstick's X axis is read from the LeftThumbstickX property of the GamepadReading
structure; the value of the Y axis is read from the LeftThumbstickY property. The value of the right thumbstick's X axis
is read from the RightThumbstickX property; the value of the Y axis is read from the RightThumbstickY property.

float leftStickX = reading.LeftThumbstickX;   // returns a value between -1.0 and +1.0
float leftStickY = reading.LeftThumbstickY;   // returns a value between -1.0 and +1.0
float rightStickX = reading.RightThumbstickX; // returns a value between -1.0 and +1.0
float rightStickY = reading.RightThumbstickY; // returns a value between -1.0 and +1.0

When reading the thumbstick values, you'll notice that they don't reliably produce a neutral reading of 0.0 when
the thumbstick is at rest in the center position; instead, they'll produce different values near 0.0 each time the
thumbstick is moved and returned to the center position. To mitigate these variations, you can implement a small
deadzone, which is a range of values near the ideal center position that are ignored. One way to implement a
deadzone is to determine how far from center the thumbstick has moved and ignore the readings that are nearer
than some distance you choose. You can compute the distance roughly--it's not exact because thumbstick readings
are essentially polar, not planar, values--just by using the Pythagorean theorem. This produces a radial deadzone.
The following example demonstrates a basic radial deadzone using the Pythagorean theorem.
float leftStickX = reading.LeftThumbstickX; // returns a value between -1.0 and +1.0
float leftStickY = reading.LeftThumbstickY; // returns a value between -1.0 and +1.0

// choose a deadzone -- readings inside this radius are ignored.
const float deadzoneRadius = 0.1f;
const float deadzoneSquared = deadzoneRadius * deadzoneRadius;

// Pythagorean theorem -- for a right triangle, hypotenuse^2 = (opposite side)^2 + (adjacent side)^2
auto oppositeSquared = leftStickY * leftStickY;
auto adjacentSquared = leftStickX * leftStickX;

// accept and process input if true; otherwise, reject and ignore it.
if ((oppositeSquared + adjacentSquared) > deadzoneSquared)
{
    // input accepted, process it
}

Each thumbstick also acts as a button when pressed inward; for more information on reading this input, see
Reading the buttons.
Reading the triggers
The triggers are represented as floating-point values between 0.0 (fully released) and 1.0 (fully depressed). The
value of the left trigger is read from the LeftTrigger property of the GamepadReading structure; the value of the
right trigger is read from the RightTrigger property.

float leftTrigger = reading.LeftTrigger;   // returns a value between 0.0 and 1.0
float rightTrigger = reading.RightTrigger; // returns a value between 0.0 and 1.0

Reading the buttons


Each of the gamepad buttons--the four directions of the D-pad, left and right bumpers, left and right thumbstick
press, A, B, X, Y, view, and menu--provides a digital reading that indicates whether it's pressed (down) or released
(up). For efficiency, button readings aren't represented as individual boolean values; instead they're all packed into
a single bitfield that's represented by the GamepadButtons enumeration.

Note The Xbox Elite Wireless Controller is equipped with four additional paddle buttons on its underside.
These buttons are also represented in the GamepadButtons enumeration and their values are read in the same
way as the standard gamepad buttons.

The button values are read from the Buttons property of the GamepadReading structure. Because this property is a
bitfield, bitwise masking is used to isolate the value of the button that you're interested in. The button is pressed
(down) when the corresponding bit is set; otherwise it's released (up).
The following example determines whether the A button is pressed.

if (GamepadButtons::A == (reading.Buttons & GamepadButtons::A))
{
    // button A is pressed
}

The following example determines whether the A button is released.

if (GamepadButtons::None == (reading.Buttons & GamepadButtons::A))
{
    // button A is released (not pressed)
}
Sometimes you might want to determine when a button transitions from pressed to released or released to
pressed, whether multiple buttons are pressed or released, or if a set of buttons are arranged in a particular way--
some pressed, some not. For information on how to detect each of these conditions, see Detecting button
transitions and Detecting complex button arrangements.
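As a preview, here is a minimal, hedged sketch of detecting a transition for the A button by comparing the current reading against one you saved from the previous poll (previousReading is an assumed variable, not part of the API):

bool wasPressed = GamepadButtons::A == (previousReading.Buttons & GamepadButtons::A);
bool isPressed  = GamepadButtons::A == (reading.Buttons & GamepadButtons::A);

if (wasPressed && !isPressed)
{
    // The A button was just released.
}
else if (!wasPressed && isPressed)
{
    // The A button was just pressed.
}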

Run the gamepad input sample


The GamepadUWP sample (github) demonstrates how to connect to a gamepad and read its state.

Vibration and impulse triggers overview


The vibration motors inside a gamepad are for providing tactile feedback to the user. Games use this ability to
create a greater sense of immersion, to help communicate status information (such as taking damage), to signal
proximity to important objects, or for other creative uses.
Xbox One gamepads are equipped with a total of four independent vibration motors. Two are large motors located
in the gamepad body; the left motor provides rough, high-amplitude vibration, while the right motor provides
gentler, more-subtle vibration. The other two are small motors, one inside each trigger, that provide sharp bursts of
vibration directly to the user's trigger fingers; this unique ability of the Xbox One gamepad is the reason its triggers
are referred to as impulse triggers. By orchestrating these motors together, a wide range of tactile sensations can
be produced.

Using vibration and impulse


Gamepad vibration is controlled through the Vibration property of the Gamepad class. Vibration is an instance of
the GamepadVibration structure, which is made up of four floating-point values; each value represents the intensity
of one of the motors.
Although the members of the Gamepad.Vibration property can be modified directly, it's recommended that you initialize a
separate GamepadVibration instance to the values you want and then copy it into the Gamepad.Vibration property to
change the actual motor intensities all at once.
The following example demonstrates how to change the motor intensities all at once.

// get the first gamepad
Gamepad^ gamepad = Gamepad::Gamepads->GetAt(0);

// create an instance of GamepadVibration
GamepadVibration vibration;

// ... set vibration levels on the vibration struct here

// copy the GamepadVibration struct to the gamepad
gamepad->Vibration = vibration;

Using the vibration motors


The left and right vibration motors take floating point values between 0.0 (no vibration) and 1.0 (most intense
vibration). The intensity of the left motor is set by the LeftMotor property of the GamepadVibration structure; the
intensity of the right motor is set by the RightMotor property.
The following example sets the intensity of both vibration motors and activates gamepad vibration.
GamepadVibration vibration;
vibration.LeftMotor = 0.80;  // sets the intensity of the left motor to 80%
vibration.RightMotor = 0.25; // sets the intensity of the right motor to 25%
gamepad->Vibration = vibration;

Remember that these two motors are not identical, so setting these properties to the same value doesn't produce
the same vibration in one motor as in the other. For any value, the left motor produces a stronger vibration at a
lower frequency than the right motor which--for the same value--produces a gentler vibration at a higher
frequency. Even at the maximum value, the left motor can't produce the high frequencies of the right motor, nor
can the right motor produce the high forces of the left motor. Still, because the motors are rigidly connected by the
gamepad body, players don't experience the vibrations fully independently even though the motors have different
characteristics and can vibrate with different intensities. This arrangement allows for a wider, more expressive
range of sensations to be produced than if the motors were identical.
Using the impulse triggers
Each impulse trigger motor takes a floating point value between 0.0 (no vibration) and 1.0 (most intense vibration).
The intensity of the left trigger motor is set by the LeftTrigger property of the GamepadVibration structure; the
intensity of the right trigger is set by the RightTrigger property.
The following example sets the intensity of both impulse triggers and activates them.

GamepadVibration vibration;
vibration.LeftTrigger = 0.75;  // sets the intensity of the left trigger to 75%
vibration.RightTrigger = 0.50; // sets the intensity of the right trigger to 50%
gamepad->Vibration = vibration;

Unlike the others, the two vibration motors inside the triggers are identical, so they produce the same vibration in
either motor for the same value. However, because these motors are not rigidly connected in any way, players
experience the vibrations independently. This arrangement allows for fully independent sensations to be directed to
both triggers simultaneously, and helps them to convey more specific information than the motors in the gamepad
body can.

Run the gamepad vibration sample


The GamepadVibrationUWP sample (github) demonstrates how the gamepad vibration motors and impulse
triggers are used to produce a variety of effects.

See also
Windows.Gaming.Input.UINavigationController
Windows.Gaming.Input.IGameController
Arcade stick

This page describes the basics of programming for Xbox One arcade sticks using
Windows.Gaming.Input.ArcadeStick and related APIs for the Universal Windows Platform (UWP).
By reading this page, you'll learn:
how to gather a list of connected arcade sticks and their users
how to detect that an arcade stick has been added or removed
how to read input from one or more arcade sticks
how arcade sticks behave as a navigation device

Arcade stick overview


Arcade sticks are input devices valued for reproducing the feel of stand-up arcade machines and for their high-
precision digital controls. Arcade sticks are the perfect input device for head-to-head fighting and other arcade-
style games, and are suitable for any game that works well with all-digital controls. Arcade sticks are supported in
Windows 10 and Xbox One UWP apps by the Windows.Gaming.Input namespace.
Xbox One arcade sticks are equipped with an 8-way digital joystick, six action buttons, and two special buttons;
they're all-digital input devices that don't support analog controls or vibration. Xbox One arcade sticks are also
equipped with view and menu buttons used to support UI navigation but they're not intended to support
gameplay commands and can't be readily accessed as joystick buttons.
UI navigation
In order to ease the burden of supporting many different input devices for user interface navigation and to
encourage consistency between games and devices, most physical input devices simultaneously act as a separate
logical input device called a UI navigation controller. The UI navigation controller provides a common vocabulary
for UI navigation commands across input devices.
As a UI navigation controller, arcade sticks map the required set of navigation commands to the joystick and view,
menu, action 1, and action 2 buttons.

NAVIGATION COMMAND ARCADE STICK INPUT

Up Stick up

Down Stick down

Left Stick left

Right Stick right

View View button

Menu Menu button

Accept Action 1 button


Cancel Action 2 button

Arcade sticks don't map any of the optional set of navigation commands.

Detect and track arcade sticks


Arcade sticks are managed by the system; therefore, you don't have to create or initialize them. The system provides
a list of connected arcade sticks and events to notify you when an arcade stick is added or removed.
The arcade sticks list
The ArcadeStick class provides a static property, ArcadeSticks, which is a read-only list of arcade sticks that are
currently connected. Because you might only be interested in some of the connected arcade sticks, it's
recommended that you maintain your own collection instead of accessing them through the ArcadeSticks property.
The following example copies all connected arcade sticks into a new collection.

auto myArcadeSticks = ref new Vector<ArcadeStick^>();

for (auto arcadestick : ArcadeStick::ArcadeSticks)
{
    // This code assumes that you're interested in all arcade sticks.
    myArcadeSticks->Append(arcadestick);
}

Adding and removing arcade sticks


When an arcade stick is added or removed, the ArcadeStickAdded and ArcadeStickRemoved events are raised. You
can register handlers for these events to keep track of the arcade sticks that are currently connected.
The following example starts tracking an arcade stick that's been added.

ArcadeStick::ArcadeStickAdded += ref new EventHandler<ArcadeStick^>([=] (Platform::Object^, ArcadeStick^ args)
{
    // This code assumes that you're interested in all new arcade sticks.
    myArcadeSticks->Append(args);
});

The following example stops tracking an arcade stick that's been removed.

ArcadeStick::ArcadeStickRemoved += ref new EventHandler<ArcadeStick^>([=] (Platform::Object^, ArcadeStick^ args)
{
    unsigned int indexRemoved;

    if (myArcadeSticks->IndexOf(args, &indexRemoved))
    {
        myArcadeSticks->RemoveAt(indexRemoved);
    }
});

Users and headsets


Each arcade stick can be associated with a user account to link their identity to their gameplay, and can have a
headset attached to facilitate voice chat or in-game features. To learn more about working with users and headsets,
see Tracking users and their devices and Headset.
Reading the arcade stick
After you identify the arcade stick that you're interested in, you're ready to gather input from it. However, unlike
some other kinds of input that you might be used to, arcade sticks don't communicate state-change by raising
events. Instead, you take regular readings of their current state by polling them.
Polling the arcade stick
Polling captures a snapshot of the arcade stick at a precise point in time. This approach to input gathering is a good
fit for most games because their logic typically runs in a deterministic loop rather than being event-driven; it's also
typically simpler to interpret game commands from input gathered all at once than it is from many single inputs
gathered over time.
You poll an arcade stick by calling GetCurrentReading; this function returns an ArcadeStickReading that contains
the state of the arcade stick.
The following example polls an arcade stick for its current state.

auto arcadestick = myArcadeSticks->GetAt(0);

ArcadeStickReading reading = arcadestick->GetCurrentReading();

In addition to the arcade stick state, each reading includes a timestamp that indicates precisely when the state was
retrieved. The timestamp is useful for relating to the timing of previous readings or to the timing of the game
simulation.
Reading the buttons
Each of the arcade stick buttons--the four directions of the joystick, six action buttons, and two special buttons--
provides a digital reading that indicates whether it's pressed (down) or released (up). For efficiency, button readings
aren't represented as individual boolean values; instead they're all packed into a single bitfield that's represented
by the ArcadeStickButtons enumeration.

Note Arcade sticks are equipped with additional buttons used for UI navigation such as the view and menu
buttons. These buttons are not a part of the ArcadeStickButtons enumeration and can only be read by accessing
the arcade stick as a UI navigation device. For more information, see UI Navigation Device.

The button values are read from the Buttons property of the ArcadeStickReading structure. Because this property is
a bitfield, bitwise masking is used to isolate the value of the button that you're interested in. The button is pressed
(down) when the corresponding bit is set; otherwise it's released (up).
The following example determines whether the Action 1 button is pressed.

if (ArcadeStickButtons::Action1 == (reading.Buttons & ArcadeStickButtons::Action1))
{
    // Action 1 is pressed
}

The following example determines whether the Action 1 button is released.

if (ArcadeStickButtons::None == (reading.Buttons & ArcadeStickButtons::Action1))
{
    // Action 1 is released (not pressed)
}

Sometimes you might want to determine when a button transitions from pressed to released or released to
pressed, whether multiple buttons are pressed or released, or if a set of buttons are arranged in a particular way--
some pressed, some not. For information on how to detect these conditions, see Detecting button transitions and
Detecting complex button arrangements.

Run the InputInterfacing sample


The InputInterfacingUWP sample (github) demonstrates how to use arcade sticks and different kinds of input
devices in tandem, as well as how these input devices behave as UI navigation controllers.

See also
Windows.Gaming.Input.UINavigationController
Windows.Gaming.Input.IGameController
Racing wheel and force feedback

This page describes the basics of programming for Xbox One racing wheels using
Windows.Gaming.Input.RacingWheel and related APIs for the Universal Windows Platform (UWP).
By reading this page, you'll learn:
how to gather a list of connected racing wheels and their users
how to detect that a racing wheel has been added or removed
how to read input from one or more racing wheels
how to send force feedback commands
how racing wheels behave as a navigation device

Racing wheel overview


Racing wheels are input devices that resemble the feel of a real racecar cockpit. Racing wheels are the perfect input
device for both arcade-style and simulation-style racing games that feature cars or trucks. Racing wheels are
supported in Windows 10 and Xbox One UWP apps by the Windows.Gaming.Input namespace.
Xbox One racing wheels are offered at a variety of price points, generally having more and better input and force
feedback capabilities as their price points rise. All racing wheels are equipped with an analog steering wheel,
analog throttle and brake controls, and some on-wheel buttons. Some racing wheels are additionally equipped with
analog clutch and handbrake controls, pattern shifters, and force-feedback capabilities.
equipped with the same sets of features, and may also vary in their support for certain features--for example,
steering wheels might support different ranges of rotation and pattern shifters might support different numbers of
gears.
Device capabilities
Different Xbox One racing wheels offer different sets of optional device capabilities and varying levels of support
for those capabilities; this level of variation within a single kind of input device is unique among the devices
supported by the Windows.Gaming.Input APIs. Furthermore, most devices you'll encounter will support at least
some optional capabilities or other variations. Because of this, it's important to determine the capabilities of each
connected racing wheel individually and to support the full variation of capabilities that makes sense for your
game.
For more information, see Determining racing wheel capabilities.
Force feedback
Some Xbox One racing wheels offer true force-feedback--that is, they can apply actual forces on an axis of control
such as their steering wheel--not just simple vibration. Games use this ability to create a greater sense of
immersion (simulated crash damage, "road feel") and to increase the challenge of driving well.
For more information, see Force feedback overview.
UI navigation
In order to ease the burden of supporting the different input devices for user interface navigation and to encourage
consistency between games and devices, most physical input devices simultaneously act as a separate logical input
device called a UI navigation controller. The UI navigation controller provides a common vocabulary for UI
navigation commands across input devices.
Due to their unique focus on analog controls and the degree of variation between different racing wheels, they're
typically equipped with a digital D-pad and view, menu, A, B, X, and Y buttons that resemble those of a gamepad;
these buttons aren't intended to support gameplay commands and can't be readily accessed as racing wheel
buttons.
As a UI navigation controller, racing wheels map the required set of navigation commands to the D-pad and the
view, menu, A, and B buttons.

NAVIGATION COMMAND RACING WHEEL INPUT

Up D-pad up

Down D-pad down

Left D-pad left

Right D-pad right

View View button

Menu Menu button

Accept A button

Cancel B button

Additionally, some racing wheels might map some of the optional set of navigation commands to other inputs they
support, but command mappings can vary from device to device. Consider supporting these commands as well, but
make sure that these commands are not essential to navigating your game's interface.

NAVIGATION COMMAND RACING WHEEL INPUT

Page Up varies

Page Down varies

Page Left varies

Page Right varies

Scroll Up varies

Scroll Down varies

Scroll Left varies

Scroll Right varies

Context 1 X Button (commonly)

Context 2 Y Button (commonly)

Context 3 varies

Context 4 varies

Detect and track racing wheels


Racing wheels are managed by the system; therefore, you don't have to create or initialize them. The system
provides a list of connected racing wheels and events to notify you when a racing wheel is added or removed.
The racing wheels list
The RacingWheel class provides a static property, RacingWheels, which is a read-only list of racing wheels that are
currently connected. Because you might only be interested in some of the connected racing wheels, it's
recommended that you maintain your own collection instead of accessing them through the RacingWheels property.
The following example copies all connected racing wheels into a new collection.

auto myRacingWheels = ref new Vector<RacingWheel^>();

for (auto racingwheel : RacingWheel::RacingWheels)
{
    // This code assumes that you're interested in all racing wheels.
    myRacingWheels->Append(racingwheel);
}

Adding and removing racing wheels


When a racing wheel is added or removed, the RacingWheelAdded and RacingWheelRemoved events are raised.
You can register handlers for these events to keep track of the racing wheels that are currently connected.
The following example starts tracking a racing wheel that's been added.

RacingWheel::RacingWheelAdded += ref new EventHandler<RacingWheel^>([=] (Platform::Object^, RacingWheel^ args)
{
    // This code assumes that you're interested in all new racing wheels.
    myRacingWheels->Append(args);
});

The following example stops tracking a racing wheel that's been removed.

RacingWheel::RacingWheelRemoved += ref new EventHandler<RacingWheel^>([=] (Platform::Object^, RacingWheel^ args)
{
    unsigned int indexRemoved;

    if (myRacingWheels->IndexOf(args, &indexRemoved))
    {
        myRacingWheels->RemoveAt(indexRemoved);
    }
});

Users and headsets


Each racing wheel can be associated with a user account to link their identity to their gameplay, and can have a
headset attached to facilitate voice chat or in-game features. To learn more about working with users and headsets,
see Tracking users and their devices and Headset.

Reading the racing wheel


After you identify the racing wheel that you're interested in, you're ready to gather input from it. However, unlike
some other kinds of input that you might be used to, racing wheels don't communicate state-change by raising
events. Instead, you take regular readings of their current state by polling them.
Polling the racing wheel
Polling captures a snapshot of the racing wheel at a precise point in time. This approach to input gathering is a
good fit for most games because their logic typically runs in a deterministic loop rather than being event-driven; it's
also typically simpler to interpret game commands from input gathered all at once than it is from many single
inputs gathered over time.
You poll a racing wheel by calling GetCurrentReading; this function returns a RacingWheelReading that contains
the state of the racing wheel.
The following example polls a racing wheel for its current state.

auto racingwheel = myRacingWheels->GetAt(0);

RacingWheelReading reading = racingwheel->GetCurrentReading();

In addition to the racing wheel state, each reading includes a timestamp that indicates precisely when the state was
retrieved. The timestamp is useful for relating to the timing of previous readings or to the timing of the game
simulation.
Determining racing wheel capabilities
Many of the racing wheel controls are optional, and even the required controls support different variations, so you
have to determine the capabilities of each racing wheel individually before you can process the input gathered in
each reading of the racing wheel.
The optional controls are the handbrake, clutch, and pattern shifter; you can determine whether a connected racing
wheel supports these controls by reading the HasHandbrake, HasClutch, and HasPatternShifter properties of the
racing wheel, respectively. The control is supported if the value of the property is true; otherwise it's not supported.

if (racingwheel->HasHandbrake)
{
    // the handbrake is supported
}

if (racingwheel->HasClutch)
{
    // the clutch is supported
}

if (racingwheel->HasPatternShifter)
{
    // the pattern shifter is supported
}

Additionally, the controls that may vary are the steering wheel and pattern shifter. The steering wheel can vary by
the degree of physical rotation that the actual wheel can support, while the pattern shifter can vary by the number
of distinct forward gears it supports. You can determine the greatest angle of rotation the actual wheel supports by
reading the MaxWheelAngle property of the racing wheel; its value is the maximum supported physical angle in
degrees clockwise (positive), which is likewise supported in the counterclockwise direction (negative degrees).
You can determine the greatest forward gear the pattern shifter supports by reading the MaxPatternShifterGear
property of the racing wheel; its value is the highest forward gear supported, inclusive--that is, if its
value is 4, then the pattern shifter supports reverse, neutral, first, second, third, and fourth gears.
auto maxWheelDegrees = racingwheel->MaxWheelAngle;
auto maxShifterGears = racingwheel->MaxPatternShifterGear;

Finally, some racing wheels support force feedback through the steering wheel. You can determine whether a
connected racing wheel supports force feedback by reading the WheelMotor property of the racing wheel. Force
feedback is supported if WheelMotor is not null; otherwise it's not supported.

if (racingwheel->WheelMotor != nullptr)
{
    // force feedback is supported
}

For information on how to use the force feedback capability of racing wheels that support it, see Force feedback
overview.
Reading the buttons
Each of the racing wheel buttons--the four directions of the D-pad, the Previous Gear and Next Gear buttons, and
16 additional buttons--provides a digital reading that indicates whether it's pressed (down) or released (up). For
efficiency, button readings aren't represented as individual boolean values; instead they're all packed into a single
bitfield that's represented by the RacingWheelButtons enumeration.

Note Racing wheels are equipped with additional buttons used for UI navigation such as the view and menu
buttons. These buttons are not a part of the RacingWheelButtons enumeration and can only be read by accessing
the racing wheel as a UI navigation device. For more information, see UI Navigation Device.

The button values are read from the Buttons property of the RacingWheelReading structure. Because this property
is a bitfield, bitwise masking is used to isolate the value of the button that you're interested in. The button is
pressed (down) when the corresponding bit is set; otherwise it's released (up).
The following example determines whether the Next Gear button is pressed.

if (RacingWheelButtons::NextGear == (reading.Buttons & RacingWheelButtons::NextGear))
{
    // Next Gear is pressed
}

The following example determines whether the Next Gear button is released.

if (RacingWheelButtons::None == (reading.Buttons & RacingWheelButtons::NextGear))
{
    // Next Gear is released (not pressed)
}

Sometimes you might want to determine when a button transitions from pressed to released or released to
pressed, whether multiple buttons are pressed or released, or if a set of buttons are arranged in a particular way--
some pressed, some not. For information on how to detect these conditions, see Detecting button transitions and
Detecting complex button arrangements.
Reading the wheel
The steering wheel is a required control that provides an analog reading between -1.0 and +1.0. A value of -1.0
corresponds to the left-most wheel position; a value of +1.0 corresponds to the right-most position. The value of
the steering wheel is read from the Wheel property of the RacingWheelReading structure.
float wheel = reading.Wheel; // returns a value between -1.0 and +1.0.

Although wheel readings correspond to different degrees of physical rotation in the actual wheel depending on the
range of rotation supported by the physical racing wheel, you don't usually want to scale the wheel readings;
wheels that support greater degrees of rotation just provide greater precision.
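If you do need the physical angle (say, to animate an on-screen wheel), here is a small sketch of the conversion, assuming the racingwheel variable from the earlier examples:

// Convert the normalized reading to an approximate physical angle in degrees.
// Negative values indicate rotation to the left of center.
float wheelDegrees = reading.Wheel * racingwheel->MaxWheelAngle;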
Reading the throttle and brake
The throttle and brake are required controls that each provide analog readings between 0.0 (fully released) and 1.0
(fully pressed) represented as floating-point values. The value of the throttle control is read from the Throttle
property of the RacingWheelReading struct; the value of the brake control is read from the Brake property.

float throttle = reading.Throttle; // returns a value between 0.0 and 1.0
float brake = reading.Brake;       // returns a value between 0.0 and 1.0

Reading the handbrake and clutch


The handbrake and clutch are optional controls that each provide analog readings between 0.0 (fully released) and
1.0 (fully engaged) represented as floating-point values. The value of the handbrake control is read from the
Handbrake property of the RacingWheelReading struct; the value of the clutch control is read from the Clutch
property.

float handbrake = 0.0f;
float clutch = 0.0f;

if (racingwheel->HasHandbrake)
{
    handbrake = reading.Handbrake; // returns a value between 0.0 and 1.0
}

if (racingwheel->HasClutch)
{
    clutch = reading.Clutch; // returns a value between 0.0 and 1.0
}

Reading the pattern shifter


The pattern shifter is an optional control that provides a digital reading between -1 and MaxPatternShifterGear,
represented as a signed integer value. A value of -1 or 0 corresponds to the reverse or neutral gear, respectively;
increasingly positive values correspond to higher forward gears, up to MaxPatternShifterGear inclusive. The value
of the pattern shifter is read from the PatternShifterGear property of the RacingWheelReading struct.

if (racingwheel->HasPatternShifter)
{
    gear = reading.PatternShifterGear;
}

Note The pattern shifter, where supported, exists alongside the required Previous Gear and Next Gear buttons,
which also affect the current gear of the player's car. A simple strategy for unifying these inputs, where both are
present, is to ignore the pattern shifter (and clutch) when a player chooses an automatic transmission for their
car, and to ignore the Previous Gear and Next Gear buttons when a player chooses a manual transmission--but
only if their racing wheel is equipped with a pattern shifter control. You can implement a different unification
strategy if this isn't suitable for your game; a sketch of this strategy follows.
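The following is a minimal sketch of that strategy. Here, 'isManualTransmission' is a hypothetical game setting,
'gear' holds the player's current gear, and 'oldReading' is the previous frame's RacingWheelReading (see Detecting
button transitions):

if (isManualTransmission && racingwheel->HasPatternShifter)
{
    // Manual transmission with a pattern shifter: the shifter is authoritative;
    // ignore the Previous Gear and Next Gear buttons (and honor the clutch).
    gear = reading.PatternShifterGear;
}
else
{
    // Automatic transmission, or no pattern shifter: shift once per button press
    // (a released-to-pressed transition).
    auto justPressed = [&](RacingWheelButtons b)
    {
        return (b == (reading.Buttons & b)) &&
               (RacingWheelButtons::None == (oldReading.Buttons & b));
    };

    if (justPressed(RacingWheelButtons::NextGear))     { gear++; }
    if (justPressed(RacingWheelButtons::PreviousGear)) { gear--; }
}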

Run the InputInterfacing sample


The InputInterfacingUWP sample (github) demonstrates how to use racing wheels and different kinds of input
devices in tandem, as well as how these input devices behave as UI navigation controllers.

Force feedback overview


Many racing wheels have force feedback capability to provide a more immersive and challenging driving experience.
Racing wheels that support force feedback are typically equipped with a single motor that applies force to the
steering wheel along a single axis, the axis of wheel rotation. Force feedback is supported in Windows 10 and Xbox
One UWP apps by the Windows.Gaming.Input.ForceFeedback namespace.

Note The force feedback APIs are capable of supporting several axes of force, but no Xbox One racing wheel
currently supports any feedback axis other than that of wheel rotation.

Using force feedback


These sections describe the basics of programming force feedback effects for Xbox One racing wheels. Feedback is
applied using effects, which are first loaded onto the force feedback device and then can be started, paused,
resumed, and stopped in a manner similar to sound effects; however, you must first determine the feedback
capabilities of the racing wheel.
Determining force feedback capabilities
You can determine whether a connected racing wheel supports force feedback by reading the WheelMotor
property of the racing wheel. Force feedback isn't supported if WheelMotor is null; otherwise force feedback is
supported and you can proceed to determine the specific feedback capabilities of the motor, such as the axes it can
affect.

if (racingwheel->WheelMotor != nullptr)
{
    auto axes = racingwheel->WheelMotor->SupportedAxes;

    if (ForceFeedbackEffectAxes::X == (axes & ForceFeedbackEffectAxes::X))
    {
        // Force can be applied through the X axis
    }

    if (ForceFeedbackEffectAxes::Y == (axes & ForceFeedbackEffectAxes::Y))
    {
        // Force can be applied through the Y axis
    }

    if (ForceFeedbackEffectAxes::Z == (axes & ForceFeedbackEffectAxes::Z))
    {
        // Force can be applied through the Z axis
    }
}

Loading force feedback effects


Force feedback effects are loaded onto the feedback device, where they are "played" autonomously at the command
of your game. A number of basic effects are provided; custom effects can be created through a class that
implements the IForceFeedbackEffect interface.

EFFECT CLASS           EFFECT DESCRIPTION
ConditionForceEffect   An effect that applies variable force in response to the current state of a sensor within the device.
ConstantForceEffect    An effect that applies constant force along a vector.
PeriodicForceEffect    An effect that applies variable force defined by a waveform, along a vector.
RampForceEffect        An effect that applies a linearly increasing/decreasing force along a vector.

using FFLoadEffectResult = ForceFeedback::ForceFeedbackLoadEffectResult;

auto effect = ref new Windows::Gaming::Input::ForceFeedback::ConstantForceEffect();
TimeSpan time = { 10000 }; // duration, in 100-nanosecond units

effect->SetParameters(Windows::Foundation::Numerics::float3(1.0f, 0.0f, 0.0f), time);

// Here, we assume 'racingwheel' is valid and supports force feedback

IAsyncOperation<FFLoadEffectResult>^ request
    = racingwheel->WheelMotor->LoadEffectAsync(effect);

auto loadEffectTask = Concurrency::create_task(request);

loadEffectTask.then([this](FFLoadEffectResult result)
{
    if (FFLoadEffectResult::Succeeded == result)
    {
        // effect successfully loaded
    }
    else
    {
        // effect failed to load
    }
}).wait();

Using force feedback effects


Once loaded, effects can all be started, paused, resumed, and stopped synchronously by calling functions on the
WheelMotor property of the racing wheel, or individually by calling functions on the feedback effect itself. Typically,
you should load all the effects that you want to use onto the feedback device before gameplay begins and then use
their respective SetParameters functions to update the effects as gameplay progresses.

if (ForceFeedbackEffectState::Running == effect->State)
{
    effect->Stop();
}
else
{
    effect->Start();
}

Finally, you can asynchronously enable, disable, or reset the entire force feedback system on a particular racing
wheel whenever you need to; a minimal sketch follows.
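The following sketch uses the motor's TryDisableAsync and TryEnableAsync methods (TryResetAsync works the
same way); each returns an asynchronous boolean result indicating success:

// Sketch: temporarily disable force feedback (for example, while the game is
// paused), then re-enable it. Each Try* method returns IAsyncOperation<bool>.
auto motor = racingwheel->WheelMotor;

Concurrency::create_task(motor->TryDisableAsync()).then([motor](bool disabled)
{
    // Force feedback is suppressed here.
    return Concurrency::create_task(motor->TryEnableAsync());
}).then([](bool enabled)
{
    // Force feedback is active again.
});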

See also
Windows.Gaming.Input.UINavigationController Windows.Gaming.Input.IGameController
Headset
3/6/2017 1 min to read

This page describes the basics of programming for headsets using Windows.Gaming.Input.Headset and related
APIs for the Universal Windows Platform (UWP).
By reading this page, you'll learn:
How to access a headset that's connected to an input or navigation device
How to detect that a headset has been connected or disconnected

Headset overview
Headsets are audio capture and playback devices that are most often used to communicate with other players in
online games, but they can also be used in gameplay or for other creative purposes. Headsets are supported in
Windows 10 and Xbox One UWP apps by the Windows.Gaming.Input namespace.

Detect and track headsets


Headsets are managed by the system; therefore, you don't have to create or initialize them. The system provides
access to a headset through the input device it's connected to, and events to notify you when a headset is
connected or disconnected.
IGameController.Headset
All input devices in the Windows.Gaming.Input namespace implement the IGameController interface, which defines
the Headset property to be the headset currently connected to the device.
Connecting and disconnecting headsets
When a headset is connected or disconnected, the HeadsetConnected and HeadsetDisconnected events are raised.
You can register handlers for these events to keep track of whether an input device currently has a headset
connected to it.
The following example shows how to register a handler for the HeadsetConnected event.

auto inputDevice = myGamepads[0]; // or arcade stick, racing wheel

inputDevice->HeadsetConnected +=
    ref new TypedEventHandler<IGameController^, Headset^>(
        [](IGameController^ device, Headset^ headset)
{
    // enable headset capture and playback on this device
});

The following example shows how to register a handler for the HeadsetDisconnected event.

auto inputDevice = myGamepads[0]; // or arcade stick, racing wheel

inputDevice->HeadsetDisconnected +=
    ref new TypedEventHandler<IGameController^, Headset^>(
        [](IGameController^ device, Headset^ headset)
{
    // disable headset capture and playback on this device
});

Using the headset


The Headset class is made up of two strings that represent XAudio endpoint IDs--one for audio capture (recording
from the headset microphone) and one for audio rendering (playback through the headset earpiece).
The details of working with XAudio aren't discussed here; for more information, see the XAudio2 programming
guide and XAudio2 API reference.
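As a minimal sketch, the two endpoint IDs can be read from the Headset instance attached to an input device
(using the headset's CaptureDeviceId and RenderDeviceId properties) and then handed to XAudio2:

// Sketch: read the XAudio endpoint IDs from a connected headset, if any.
auto headset = inputDevice->Headset;

if (headset != nullptr)
{
    Platform::String^ captureId = headset->CaptureDeviceId; // microphone endpoint
    Platform::String^ renderId  = headset->RenderDeviceId;  // earpiece endpoint

    // Pass these endpoint IDs to XAudio2 when creating capture and render objects.
}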
UI navigation controller
3/6/2017 8 min to read

This page describes the basics of programming for UI navigation devices using
Windows.Gaming.Input.UINavigationController and related APIs for the Universal Windows Platform (UWP).
By reading this page, you'll learn:
How to gather a list of connected UI navigation devices and their users
How to detect that a navigation device has been added or removed
How to read input from one or more UI navigation devices
How gamepads and arcade sticks behave as navigation devices

UI navigation controller overview


Almost all games have at least some user interface that's separate from gameplay, even if it's just pregame menus
or in-game dialogs. Players need to be able to navigate this UI using whichever input device they've chosen, but
adding specific support for each kind of input device burdens developers and can also introduce inconsistencies
between games and input devices that confuse players. For these reasons, the UINavigationController API was
created.
UI navigation controllers are logical input devices that exist to provide a vocabulary of common UI Navigation
commands that can be supported by a variety of physical input devices. A UI navigation controller is just a
different way of looking at a physical input device; we use navigation device to refer to any physical input device
being viewed as a navigation controller. By programming against a navigation device rather than specific input
devices, developers avoid the burden of supporting different input devices and achieve consistency by default.
Because the number and variety of controls supported by each kind of input device can be so different and
because certain input devices might want to support a richer set of navigation commands, the navigation
controller interface divides the vocabulary of commands into a required set containing the most common and
essential commands, and an optional set containing convenient but nonessential commands. All navigation
devices support every command in the required set and may support all, some, or none of the commands in the
optional set.
Required set
Navigation devices must support all navigation commands in the required set; these are the directional (up, down,
left, and right), view, menu, accept, and cancel commands.
The directional commands are intended for primary XY-focus navigation between single UI elements. The view
and menu commands are intended for displaying gameplay information (often momentary, sometimes modally)
and for switching between gameplay and menu contexts, respectively. The accept and cancel commands are
intended for affirmative (yes) and negative (no) responses, respectively.
The following table summarizes these commands and their intended uses, with examples.

COMMAND   INTENDED USE
Up        XY-focus navigation up
Down      XY-focus navigation down
Left      XY-focus navigation left
Right     XY-focus navigation right
View      Display gameplay info (scoreboard, game stats, objectives, world or area map)
Menu      Primary menu / Pause (settings, status, equipment, inventory, pause)
Accept    Affirmative response (accept, advance, confirm, start, yes)
Cancel    Negative response (reject, reverse, decline, stop, no)

Optional set
Navigation devices may also support all, some, or none of the navigation commands in the optional set; these are
the paging (up, down, left, and right), scrolling (up, down, left, and right), and contextual (context 1-4) commands.
The contextual commands are explicitly intended for application-specific commands and navigation shortcuts. The
paging and scrolling commands are intended for quick navigation between pages or groups of UI elements and
for fine-grained navigation within UI elements, respectively.
The following table summarizes these commands and their intended uses.

COMMAND INTENDED USE

PageUp Jump upward (to upper/previous vertical page or group)

PageDown Jump downward (to lower/next vertical page or group)

PageLeft Jump left (to leftward/previous horizontal page or group)

PageRight Jump right (to rightward/next horizontal page or group)

ScrollUp Scroll up (within focused UI element or scrollable group)

ScrollDown Scroll down (within focused UI element or scrollable group)

ScrollLeft Scroll left (within focused UI element or scrollable group)

ScrollRight Scroll right (within focused UI element or scrollable group)

Context1 Primary context action

Context2 Secondary context action

Context3 Third context action

Context4 Fourth context action

Note Although a game is free to respond to any command with an actual function that's different from its
intended use, surprising behavior should be avoided. In particular, don't change the actual function of a
command if you need its intended use; try to assign novel functions to the command that makes the most
sense, and assign counterpart functions to counterpart commands, such as PageUp/PageDown. Finally,
consider which commands are supported by each kind of input device and which controls they're mapped to,
making sure that critical commands are accessible from every device.

Gamepad, arcade stick, and racing wheel navigation


All input devices supported by the Windows.Gaming.Input namespace are UI navigation devices.
The following table summarizes how the required set of navigation commands maps to various input devices.

NAVIGATION COMMAND   GAMEPAD INPUT                          ARCADE STICK INPUT   RACING WHEEL INPUT
Up                   Left thumbstick up / D-pad up          Stick up             D-pad up
Down                 Left thumbstick down / D-pad down      Stick down           D-pad down
Left                 Left thumbstick left / D-pad left      Stick left           D-pad left
Right                Left thumbstick right / D-pad right    Stick right          D-pad right
View                 View button                            View button          View button
Menu                 Menu button                            Menu button          Menu button
Accept               A button                               Action 1 button      A button
Cancel               B button                               Action 2 button      B button

The following table summarizes how the optional set of navigation commands maps to various input devices.

NAVIGATION COMMAND   GAMEPAD INPUT             ARCADE STICK INPUT   RACING WHEEL INPUT
PageUp               Left trigger              not supported        varies
PageDown             Right trigger             not supported        varies
PageLeft             Left bumper               not supported        varies
PageRight            Right bumper              not supported        varies
ScrollUp             Right thumbstick up       not supported        varies
ScrollDown           Right thumbstick down     not supported        varies
ScrollLeft           Right thumbstick left     not supported        varies
ScrollRight          Right thumbstick right    not supported        varies
Context1             X button                  not supported        X button (commonly)
Context2             Y button                  not supported        Y button (commonly)
Context3             Left thumbstick press     not supported        varies
Context4             Right thumbstick press    not supported        varies

Detect and track UI navigation controllers


Although UI navigation controllers are logical input devices, they are a representation of a physical device and are
managed by the system in the same way. You don't have to create or initialize them; the system provides a list of
connected UI navigation controllers and events to notify you when a UI Navigation controller is added or
removed.
The UI navigation controllers list
The UINavigationController class provides a static property, UINavigationControllers, which is a read-only list of
UI navigation devices that are currently connected. Because you might only be interested in some of the
connected navigation devices, its recommended that you maintain your own collection instead of accessing them
through the UINavigationControllers property.
The following example copies all connected UI navigation controllers into a new collection.

auto myNavigationControllers = ref new Vector<UINavigationController^>();

for (auto device : UINavigationController::UINavigationControllers)
{
    // This code assumes that you're interested in all navigation controllers.
    myNavigationControllers->Append(device);
}

Adding and removing UI navigation controllers


When a UI navigation controller is added or removed, the UINavigationControllerAdded and
UINavigationControllerRemoved events are raised. You can register an event handler for these events to keep
track of the navigation devices that are currently connected.
The following example starts tracking a UI navigation device that's been added.

UINavigationController::UINavigationControllerAdded +=
    ref new EventHandler<UINavigationController^>(
        [this](Platform::Object^, UINavigationController^ args)
{
    // This code assumes that you're interested in all new navigation controllers.
    myNavigationControllers->Append(args);
});

The following example stops tracking a UI navigation device that's been removed.

UINavigationController::UINavigationControllerRemoved +=
    ref new EventHandler<UINavigationController^>(
        [this](Platform::Object^, UINavigationController^ args)
{
    unsigned int indexRemoved;

    if (myNavigationControllers->IndexOf(args, &indexRemoved))
    {
        myNavigationControllers->RemoveAt(indexRemoved);
    }
});

Users and headsets


Each navigation device can be associated with a user account to link their identity to their input, and can have a
headset attached to facilitate voice chat or navigation features. To learn more about working with users and
headsets, see Tracking users and their devices and Headset.

Reading the UI navigation controller


After you identify the UI navigation device that you're interested in, you're ready to gather input from it. However,
unlike some other kinds of input that you might be used to, navigation devices don't communicate state-change
by raising events. Instead, you take regular readings of their current state by polling them.
Polling the UI navigation controller
Polling captures a snapshot of the navigation device at a precise point in time. This approach to input gathering is
a good fit for most games because their logic typically runs in a deterministic loop rather than being event-driven;
it's also typically simpler to interpret game commands from input gathered all at once than from many single
inputs gathered over time.
You poll a navigation device by calling UINavigationController.GetCurrentReading; this function returns a
UINavigationReading that contains the state of the navigation device.

auto navigationController = myNavigationControllers[0];

UINavigationReading reading = navigationController->GetCurrentReading();

Reading the buttons


Each of the UI navigation buttons provides a boolean reading that corresponds to whether it's pressed (down) or
released (up). For efficiency, button readings aren't represented as individual boolean values; instead, they're all
packed into one of two bitfields represented by the RequiredUINavigationButtons and
OptionalUINavigationButtons enumerations.
The buttons belonging to the required set are read from the RequiredButtons property of the UINavigationReading
structure; the buttons belonging to the optional set are read from the OptionalButtons property. Because these
properties are bitfields, bitwise masking is used to isolate the value of the button that you're interested in. The
button is pressed (down) when the corresponding bit is set; otherwise, it's released (up).
The following example determines whether the Accept button in the required set is pressed.

if (RequiredUINavigationButtons::Accept == (reading.RequiredButtons & RequiredUINavigationButtons::Accept))
{
    // Accept is pressed
}

The following example determines whether the Accept button in the required set is released.

if (RequiredUINavigationButtons::None == (reading.RequiredButtons & RequiredUINavigationButtons::Accept))
{
    // Accept is released (not pressed)
}

Be sure to use the OptionalButtons property and OptionalUINavigationButtons enumeration when reading buttons in
the optional set.
The following example determines whether the Context 1 button in the optional set is pressed.

if (OptionalUINavigationButtons::Context1 == (reading.OptionalButtons & OptionalUINavigationButtons::Context1))
{
    // Context 1 is pressed
}

Sometimes you might want to determine when a button transitions from pressed to released or released to
pressed, whether multiple buttons are pressed or released, or if a set of buttons are arranged in a particular way--
some pressed, some not. For information on how to detect these conditions, see Detecting button transitions and
Detecting complex button arrangements.

Run the UI navigation controller sample


The InputInterfacingUWP sample (github) demonstrates how the different input devices behave as UI navigation
controllers.

See also
Windows.Gaming.Input.Gamepad Windows.Gaming.Input.ArcadeStick Windows.Gaming.Input.RacingWheel
Windows.Gaming.Input.IGameController
Input practices for games
3/6/2017 4 min to read

This page describes patterns and techniques for effectively using input devices in Universal Windows Platform
(UWP) games.
By reading this page, you'll learn:
how to track players and which input and navigation devices they're currently using
how to detect button transitions (pressed-to-released, released-to-pressed)
how to detect complex button arrangements with a single test

Tracking users and their devices


All input devices are associated with a User so that their identity can be linked to their gameplay, achievements,
settings changes, and other activities. Users can sign in or sign out at will, and it's common for a different user to
sign in on an input device that remains connected to the system after the previous user has signed out. When a user
signs in or out, the IGameController.UserChanged event is raised. You can register an event handler for this event to
keep track of players and the devices they're using.
User identity is also the way that an input device is associated with its corresponding UI navigation controller.
For these reasons, player input should be tracked and correlated by using the User property of the device class
(inherited from the IGameController interface).
The UserGamepadPairingUWP (github) sample demonstrates how you can keep track of users and the devices
they're using.
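As a minimal sketch, here's how a handler for the UserChanged event might be registered (assuming at least one
gamepad is connected):

// Sketch: register for user-change notifications on the first connected gamepad.
auto gamepad = Gamepad::Gamepads->GetAt(0);

gamepad->UserChanged +=
    ref new TypedEventHandler<IGameController^, Windows::System::UserChangedEventArgs^>(
        [](IGameController^ device, Windows::System::UserChangedEventArgs^ args)
{
    // Re-associate this device's input with args->User.
});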

Detecting button transitions


Sometimes you want to know when a button is first pressed or released; that is, precisely when the button state
transitions from released to pressed or from pressed to released. To determine this, you need to remember the
previous device reading and compare the current reading against it to see what's changed.
The following example demonstrates a basic approach for remembering the previous reading; gamepads are
shown here, but the principles are the same for arcade stick, racing wheel, and UI navigation buttons.

GamepadReading newReading = {};
GamepadReading oldReading = {};

// Game::Loop represents one iteration of a typical game loop
void Game::Loop()
{
    // Move the previous newReading into oldReading before getting the next newReading.
    oldReading = newReading;
    newReading = gamepad->GetCurrentReading();

    // process device readings using buttonJustPressed/buttonJustReleased
}

Before doing anything else, Game::Loop moves the existing value of newReading (the gamepad reading from the
previous loop iteration) into oldReading, then fills newReading with a fresh gamepad reading for the current
iteration. This gives you the information you need to detect button transitions.
The following example demonstrates a basic approach for detecting button transitions.
bool buttonJustPressed(const GamepadButtons selection)
{
    bool newSelectionPressed = (selection == (newReading.Buttons & selection));
    bool oldSelectionPressed = (selection == (oldReading.Buttons & selection));

    return newSelectionPressed && !oldSelectionPressed;
}

bool buttonJustReleased(const GamepadButtons selection)
{
    bool newSelectionReleased = (GamepadButtons::None == (newReading.Buttons & selection));
    bool oldSelectionReleased = (GamepadButtons::None == (oldReading.Buttons & selection));

    return newSelectionReleased && !oldSelectionReleased;
}

These two functions first derive the boolean state of the button selection from newReading and oldReading, then
perform boolean logic to determine whether the target transition has occurred. These functions return true only if
the new reading contains the target state (pressed or released, respectively) and the old reading does not also
contain the target state; otherwise, they return false.
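A typical use of these helpers inside Game::Loop looks like this (the actions in the comments are illustrative):

// Sketch: react to button edges rather than levels inside Game::Loop.
if (buttonJustPressed(GamepadButtons::A))
{
    // Trigger a jump only on the frame A is first pressed, not while it's held.
}

if (buttonJustReleased(GamepadButtons::X))
{
    // Release a charged attack when X is let go.
}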

Detecting complex button arrangements


Each button of an input device provides a digital reading that indicates whether it's pressed (down) or released
(up). For efficiency, button readings aren't represented as individual boolean values; instead, they're all packed into
bitfields represented by device-specific enumerations such as GamepadButtons. To read specific buttons, bitwise
masking is used to isolate the values that you're interested in. Buttons are pressed (down) when their
corresponding bit is set; otherwise, they're released (up).
Recall how single buttons are determined to be pressed or released; gamepads are shown here but the principles
are the same for arcade stick, racing wheel, and UI navigation buttons.

// determines whether gamepad button A is pressed
if (GamepadButtons::A == (reading.Buttons & GamepadButtons::A))
{
    // button A is pressed
}

// determines whether gamepad button A is released
if (GamepadButtons::None == (reading.Buttons & GamepadButtons::A))
{
    // button A is released (not pressed)
}

As you can see, determining the state of a single button is straightforward, but sometimes you might want to
determine whether multiple buttons are pressed or released, or whether a set of buttons is arranged in a particular
way--some pressed, some not. Testing multiple buttons is more complex than testing single buttons--especially
with the potential of mixed button state--but there's a simple formula for these tests that applies to single and
multiple button tests alike.
The following example determines whether gamepad buttons A and B are both pressed.

if ((GamepadButtons::A | GamepadButtons::B) == (reading.Buttons & (GamepadButtons::A | GamepadButtons::B)))
{
    // buttons A and B are pressed
}

The following example determines whether gamepad buttons A and B are both released.

if (GamepadButtons::None == (reading.Buttons & (GamepadButtons::A | GamepadButtons::B)))
{
    // buttons A and B are released (not pressed)
}

The following example determines whether gamepad button A is pressed while button B is released.

if (GamepadButtons::A == (reading.Buttons & (GamepadButtons::A | GamepadButtons::B)))
{
    // button A is pressed and button B is released (button B is not pressed)
}

The formula that all five of these examples have in common is that the arrangement of buttons to be tested for is
specified by the expression on the left-hand side of the equality operator while the buttons to be considered are
selected by the masking expression on the right-hand side.
The following example demonstrates this formula more clearly by rewriting the previous example.

auto buttonArrangement = GamepadButtons::A;
auto buttonSelection = (reading.Buttons & (GamepadButtons::A | GamepadButtons::B));

if (buttonArrangement == buttonSelection)
{
    // button A is pressed and button B is released (button B is not pressed)
}

This formula can be applied to test any number of buttons in any arrangement of their states.
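For example, here's a sketch that applies the same formula to three buttons in mixed states:

// Sketch: test that A and X are pressed while B is released, in a single comparison.
auto arrangement = GamepadButtons::A | GamepadButtons::X; // bits expected to be set
auto selection = (reading.Buttons & (GamepadButtons::A | GamepadButtons::B | GamepadButtons::X));

if (arrangement == selection)
{
    // buttons A and X are pressed, and button B is released
}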
Networking for games
3/6/2017 13 min to read

[ Updated for UWP apps on Windows 10. For Windows 8.x articles, see the archive ]
Learn how to develop and incorporate networking features into your DirectX game.

Concepts at a glance
A variety of networking features can be used in your DirectX game, whether it's a simple standalone game or a
massively multiplayer game. The simplest use of networking is to store user names and game scores on a central
network server.
Networking APIs are needed in multiplayer games that use the infrastructure (client-server or internet peer-to-
peer) model, and also by ad hoc (local peer-to-peer) games. For server-based multiplayer games, a central game
server usually handles most of the game operations, and the client game app is used for input, displaying graphics,
playing audio, and other features. The speed and latency of network transfers is a concern for a satisfactory game
experience.
For peer-to-peer games, each player's app handles the input and graphics. In most cases, the game players are
located in close proximity, so network latency should be lower but is still a concern. How to discover peers and
establish connections also becomes a concern.
For single-player games, a central Web server or service is often used to store user names, game scores, and other
miscellaneous information. In these games, the speed and latency of networking transfers is less of a concern since
it doesn't directly affect game operation.
Network conditions can change at any time, so any game that uses networking APIs needs to handle network
exceptions that may occur. To learn more about handling network exceptions, see Networking basics.
Firewalls and web proxies are common and can affect the ability to use networking features. A game that uses
networking needs to be prepared to properly handle firewalls and proxies.
For mobile devices, it is important to monitor available network resources and behave accordingly when on
metered networks where roaming or data costs can be significant.
Network isolation is part of the app security model used by Windows. Windows actively discovers network
boundaries and enforces network access restrictions for network isolation. Apps must declare network isolation
capabilities in order to define the scope of network access. Without declaring these capabilities, your app will not
have access to network resources. To learn more about how Windows enforces network isolation for apps, see
How to configure network isolation capabilities.
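For example, a game that communicates over the network might declare capabilities like the following in its
Package.appxmanifest. This is a sketch using the standard UWP capability names; include only the capabilities
your game actually needs:

<!-- Sketch: network capabilities in Package.appxmanifest. -->
<Capabilities>
  <!-- Outbound access to the internet. -->
  <Capability Name="internetClient" />
  <!-- Inbound and outbound internet access (for example, peer-to-peer play). -->
  <Capability Name="internetClientServer" />
  <!-- Access to home and work networks. -->
  <Capability Name="privateNetworkClientServer" />
</Capabilities>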

Design considerations
A variety of networking APIs can be used in DirectX games. So, it is important to pick the right API. Windows
supports a variety of networking APIs that your app can use to communicate with other computers and devices
over either the Internet or private networks. Your first step is to figure out what networking features your app
needs.
The more popular network APIs for games include:
TCP and sockets - Provides a reliable connection. Use TCP for game operations that don't need security. TCP
allows the server to easily scale, so it is commonly used in games that use the infrastructure (client-server or
internet peer-to-peer) model. TCP can also be used by ad hoc (local peer-to-peer) games over Wi-Fi Direct and
Bluetooth. TCP is commonly used for game object movement, character interaction, text chat, and other
operations. The StreamSocket class provides a TCP socket that can be used in Windows Store games. The
StreamSocket class is used with related classes in the Windows::Networking::Sockets namespace.
TCP and sockets using SSL - Provides a reliable connection that prevents eavesdropping. Use TCP connections
with SSL for game operations that need security. The encryption and overhead of SSL adds a cost in latency
and performance, so it is only used when security is needed. TCP with SSL is commonly used for login,
purchasing and trading assets, game character creation and management. The StreamSocket class provides a
TCP socket that supports SSL.
UDP and sockets - Provides unreliable network transfers with low overhead. UDP is used for game operations
that require low latency and can tolerate some packet loss. This is often used for fighting games, shooting and
tracers, network audio, and voice chat. The DatagramSocket class provides a UDP socket that can be used in
Windows Store games; a minimal send sketch follows this list. The DatagramSocket class is used with related
classes in the Windows::Networking::Sockets namespace.
HTTP Client - Provides a reliable connection to HTTP servers. The most common networking scenario is to
access a web site to retrieve or store information. A simple example would be a game that uses a website to
store user information and game scores. When used with SSL for security, an HTTP client can be used for login,
purchasing, trading assets, game character creation, and management. The HttpClient class provides a
modern HTTP client API for use in Windows Store games. The HttpClient class is used with related classes in
the Windows::Web::Http namespace.
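As a minimal sketch of the DatagramSocket class mentioned above, the following sends a single UDP datagram
(the hostname, port, and payload are illustrative):

// Sketch: send one UDP datagram with DatagramSocket.
using namespace Windows::Networking;
using namespace Windows::Networking::Sockets;
using namespace Windows::Storage::Streams;

DatagramSocket^ udpSocket = ref new DatagramSocket();
auto host = ref new HostName("game.example.com"); // hypothetical game server

Concurrency::create_task(udpSocket->GetOutputStreamAsync(host, "5150"))
    .then([](IOutputStream^ stream)
{
    auto writer = ref new DataWriter(stream);
    writer->WriteString("player position update"); // game-defined payload
    return Concurrency::create_task(writer->StoreAsync());
});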

Handling network exceptions in your DirectX game


When a network exception occurs in your DirectX game, this indicates a significant problem or failure. Exceptions
can occur for many reasons when using networking APIs. Often, the exception can result from changes in network
connectivity or other networking issues with the remote host or server.
Some causes of exceptions when using networking APIs include the following:
Input from the user for a hostname or a URI contains errors and is not valid.
Name resolution failures when looking up a hostname or a URI.
Loss or change in network connectivity.
Network connection failures using sockets or the HTTP client APIs.
Network server or remote endpoint errors.
Miscellaneous networking errors.
Exceptions from network errors (for example, loss or change of connectivity, connection failures, and server
failures) can happen at any time. These errors result in exceptions being thrown. If an exception is not handled by
your app, it can cause your entire app to be terminated by the runtime.
You must write code to handle exceptions when you call most asynchronous network methods. Sometimes, when
an exception occurs, a network method can be retried as a way to resolve the problem. Other times, your app may
need to plan to continue without network connectivity using previously cached data.
Universal Windows Platform (UWP) apps generally throw a single exception. Your exception handler can retrieve
more detailed information about the cause of the exception to better understand the failure and make appropriate
decisions.
When an exception occurs in a DirectX game that is a UWP app, the HRESULT value for the cause of the error can
be retrieved. The Winerror.h include file contains a large list of possible HRESULT values that includes network
errors.
The networking APIs support different methods for retrieving this detailed information about the cause of an
exception.
A method to retrieve the HRESULT value of the error that caused the exception. The possible list of potential
HRESULT values is large and unspecified. The HRESULT value can be retrieved when using any of the
networking APIs.
A helper method that converts the HRESULT value to an enumeration value. The list of possible enumeration
values is specified and relatively small. A helper method is available for the socket classes in the
Windows::Networking::Sockets.
Exceptions in Windows.Networking.Sockets
The constructor for the HostName class used with sockets can throw an exception if the string passed is not a
valid hostname (contains characters that are not allowed in a host name). If an app gets input from the user for the
HostName for a peer connection for gaming, the constructor should be in a try/catch block. If an exception is
thrown, the app can notify the user and request a new hostname.
Add code to validate a string for a hostname from the user

// Define some variables at the class level.
Windows::Networking::HostName^ remoteHost;

bool isHostnameFromUser = false;
bool isHostnameValid = false;

///...

// The value of 'remoteHostname' is set by the user in a control as input,
// and is therefore untrusted input that could contain errors.
// If we can't create a valid hostname, we notify the user in statusText
// about the incorrect input.

String^ hostString = remoteHostname;

try
{
    remoteHost = ref new Windows::Networking::HostName(hostString);
    isHostnameValid = true;
}
catch (InvalidArgumentException^ ex)
{
    statusText->Text = "You entered a bad hostname, please re-enter a valid hostname.";
    return;
}

isHostnameFromUser = true;

// ... Continue with code to execute with a valid hostname.

The Windows.Networking.Sockets namespace has convenient helper methods and enumerations for handling
errors when using sockets. This can be useful for handling specific network exceptions differently in your app.
An error encountered on a DatagramSocket, StreamSocket, or StreamSocketListener operation results in an
exception being thrown. The cause of the exception is an error value represented as an HRESULT value. The
SocketError.GetStatus method is used to convert a network error from a socket operation to a
SocketErrorStatus enumeration value. Most of the SocketErrorStatus enumeration values correspond to an
error returned by the native Windows sockets operation. An app can filter on specific SocketErrorStatus
enumeration values to modify app behavior depending on the cause of the exception.
For parameter validation errors, an app can also use the HRESULT from the exception to learn more detailed
information about the error that caused the exception. Possible HRESULT values are listed in the Winerror.h
header file. For most parameter validation errors, the HRESULT returned is E_INVALIDARG.
Add code to handle exceptions when trying to make a stream socket connection

using namespace Windows::Networking;
using namespace Windows::Networking::Sockets;

// Define some more variables at the class level.
bool isSocketConnected = false;
bool retrySocketConnect = false;

// The number of times we have tried to connect the socket.
unsigned int retryConnectCount = 0;

// The maximum number of times to retry a connect operation.
unsigned int maxRetryConnectCount = 5;
///...

// We pass in a valid remoteHost and serviceName parameter.
// The hostname can contain a name or an IP address.
// The servicename can contain a string or a TCP port number.

StreamSocket^ socket = ref new StreamSocket();

// Save the socket, so any subsequent steps can use it.
CoreApplication::Properties->Insert("clientSocket", socket);

// Connect to the remote server.
create_task(socket->ConnectAsync(
    remoteHost,
    serviceName,
    SocketProtectionLevel::PlainSocket)).then([this](task<void> previousTask)
{
    try
    {
        // Try getting all exceptions from the continuation chain above this point.
        previousTask.get();

        isSocketConnected = true;
        // Mark the socket as connected. We do not really care about the value of the
        // property, but the mere existence of it means that we are connected.
        CoreApplication::Properties->Insert("connected", nullptr);
    }
    catch (Exception^ ex)
    {
        int hr = ex->HResult;
        SocketErrorStatus errorStatus = SocketError::GetStatus(hr);
        if (errorStatus != SocketErrorStatus::Unknown)
        {
            switch (errorStatus)
            {
            case SocketErrorStatus::HostNotFound:
                // If the hostname is from the user, this may indicate a bad input.
                // Set a flag to ask the user to re-enter the hostname.
                isHostnameValid = false;
                return;
            case SocketErrorStatus::ConnectionRefused:
                // The server might be temporarily busy.
                retrySocketConnect = true;
                return;
            case SocketErrorStatus::NetworkIsUnreachable:
                // This could be a connectivity issue.
                retrySocketConnect = true;
                break;
            case SocketErrorStatus::UnreachableHost:
                // This could be a connectivity issue.
                retrySocketConnect = true;
                break;
            case SocketErrorStatus::NetworkIsDown:
                // This could be a connectivity issue.
                retrySocketConnect = true;
                break;
            // Handle other errors.
            default:
                // The connection failed and no options are available.
                // Try to use cached data if it is available.
                // You may want to tell the user that the connect failed.
                break;
            }
        }
        else
        {
            // Received an HRESULT that is not mapped to an enum.
            // This could be a connectivity issue.
            retrySocketConnect = true;
        }
    }
});

Exceptions in Windows.Web.Http
The constructor for the Windows::Foundation::Uri class used with Windows::Web::Http::HttpClient can throw
an exception if the string passed is not a valid URI (contains characters that are not allowed in a URI). In C++, there
is no method to try to parse a string into a URI, so if an app gets input from the user for the
Windows::Foundation::Uri, the constructor should be in a try/catch block. If an exception is thrown, the app can
notify the user and request a new URI.
Your app should also check that the scheme in the URI is HTTP or HTTPS since these are the only schemes
supported by the Windows::Web::Http::HttpClient.
Add code to validate a string for a URI from the user
// Define some variables at the class level.
Windows::Foundation::Uri^ resourceUri;

bool isUriFromUser = false;
bool isUriValid = false;

///...

// The value of 'inputUri' is set by the user in a control as input,
// and is therefore untrusted input that could contain errors.
// If we can't create a valid URI, we notify the user in statusText
// about the incorrect input.

String^ uriString = inputUri;

try
{
    isUriValid = false;
    resourceUri = ref new Windows::Foundation::Uri(uriString);

    if (resourceUri->SchemeName != "http" && resourceUri->SchemeName != "https")
    {
        statusText->Text = "Only 'http' and 'https' schemes supported. Please re-enter URI";
        return;
    }
    isUriValid = true;
}
catch (InvalidArgumentException^ ex)
{
    statusText->Text = "You entered a bad URI, please re-enter URI to continue.";
    return;
}

isUriFromUser = true;

// ... Continue with code to execute with a valid URI.

The Windows::Web::Http namespace lacks a convenience function for converting error values to enumeration
values, so an app using HttpClient and other classes in this namespace needs to use the HRESULT value directly.
In apps using C++, the Platform::Exception represents an error during app execution when an exception occurs.
The Platform::Exception::HResult property returns the HRESULT assigned to the specific exception. The
Platform::Exception::Message property returns the system-provided string that is associated with the HRESULT
value. Possible HRESULT values are listed in the Winerror.h header file. An app can filter on specific HRESULT
values to modify app behavior depending on the cause of the exception.
For most parameter validation errors, the HRESULT returned is E_INVALIDARG. For some illegal method calls,
the HRESULT returned is E_ILLEGAL_METHOD_CALL.
Add code to handle exceptions when trying to use HttpClient to connect to an HTTP server

using namespace Windows::Foundation;
using namespace Windows::Web::Http;

// Define some more variables at the class level.
bool isHttpClientConnected = false;
bool retryHttpClient = false;

// The number of times we have tried to connect to the server.
unsigned int retryConnectCount = 0;

// The maximum number of times to retry a connect operation.
unsigned int maxRetryConnectCount = 5;
///...

// We pass in a valid resourceUri parameter.
// The URI must contain a scheme and a name or an IP address.

HttpClient^ httpClient = ref new HttpClient();

// Save the httpClient, so any subsequent steps can use it.
CoreApplication::Properties->Insert("httpClient", httpClient);

// Send a GET request to the HTTP server.
create_task(httpClient->GetAsync(resourceUri)).then([this](task<HttpResponseMessage^> previousTask)
{
    try
    {
        // Try getting all exceptions from the continuation chain above this point.
        HttpResponseMessage^ response = previousTask.get();

        isHttpClientConnected = true;
        // Mark the HttpClient as connected. We do not really care about the value of the
        // property, but the mere existence of it means that we are connected.
        CoreApplication::Properties->Insert("connected", nullptr);
    }
    catch (Exception^ ex)
    {
        int hr = ex->HResult;
        switch (hr)
        {
        case WININET_E_NAME_NOT_RESOLVED:
            // If the URI is from the user, this may indicate a bad input.
            // Set a flag to ask the user to re-enter the URI.
            isUriValid = false;
            return;
        case WININET_E_CANNOT_CONNECT:
            // The server might be temporarily busy.
            retryHttpClient = true;
            return;
        case WININET_E_CONNECTION_ABORTED:
            // This could be a connectivity issue.
            retryHttpClient = true;
            break;
        case WININET_E_CONNECTION_RESET:
            // This could be a connectivity issue.
            retryHttpClient = true;
            break;
        case INET_E_RESOURCE_NOT_FOUND:
            // The server cannot locate the resource specified in the URI.
            // If the URI is from the user, this may indicate a bad input.
            // Set a flag to ask the user to re-enter the URI.
            isUriValid = false;
            return;
        // Handle other errors.
        default:
            // The connection failed and no options are available.
            // Try to use cached data if it is available.
            // You may want to tell the user that the connect failed.
            break;
        }
    }
});

Related topics
Other resources
Connecting with a datagram socket
Connecting to a network resource with a stream socket
Connecting to network services
Connecting to web services
Networking basics
How to configure network isolation capabilities
How to enable loopback and debug network isolation
Reference
DatagramSocket
HttpClient
StreamSocket
Windows::Web::Http
Windows::Networking::Sockets
Samples
DatagramSocket sample
HttpClient Sample
Proximity sample
StreamSocket sample
DirectX programming
3/6/2017 1 min to read

This section provides information about developing UWP games with DirectX.

TOPIC                               DESCRIPTION
Getting started                     Introduction to DirectX programming.
Samples                             Learn DirectX through game samples.
Fundamentals                        Explains DirectX basic programming concepts.
Add features                        Describes how to add various game features into your DirectX game.
Optimization and advanced topics    Learn about optimization and other advanced topics.
DirectX: Getting started
3/6/2017 1 min to read

This section provides information to help you get started developing UWP games using DirectX.
The Project templates and tools for games topic explains how to create a DirectX game project from a template. It
introduces editing and diagnostic tools like Model Editor, Shader Designer, and Graphics Frame Analysis, which can
help you create, preview, and troubleshoot DirectX graphics.
The app object and DirectX topic explains how to use the core user interface framework with DirectX. Since DirectX
games run at a lower level in the Windows Runtime stack, they need to interoperate with the user interface
framework more fundamentally: by accessing and interoperating with the app object directly.
The Launching and resuming apps topic explains how you can define the activation experience of a UWP DirectX
game, how to save important system state and app data, and how to restore important application data when your
game resumes.

TOPIC                                    DESCRIPTION
Project templates and tools for games    Prepare your dev environment for UWP DirectX game development.
The app object and DirectX               Access and interoperate with the app object directly.
Launching and resuming apps              Launch, suspend, and resume your UWP DirectX game.
The app object and DirectX
3/6/2017 7 min to read

[ Updated for UWP apps on Windows 10. For Windows 8.x articles, see the archive ]
Universal Windows Platform (UWP) games that use DirectX don't use many of the Windows UI user interface
elements and objects. Rather, because they run at a lower level in the Windows Runtime stack, they must
interoperate with the user interface framework in a more fundamental way: by accessing and interoperating with
the app object directly. Learn when and how this interoperation occurs, and how you, as a DirectX developer, can
effectively use this model in the development of your UWP app.

The important core user interface namespaces


First, let's note the Windows Runtime namespaces that you must include (with using) in your UWP app. We get
into the details in a bit.
Windows.ApplicationModel.Core
Windows.ApplicationModel.Activation
Windows.UI.Core
Windows.System
Windows.Foundation

Note If you are not developing with DirectX, use the user interface components provided in the JavaScript- or
XAML-specific libraries and namespaces instead of the types provided in these namespaces.

The Windows Runtime app object


In your UWP app, you want to get a window and a view provider from which you can get a view and to which you
can connect your swap chain (your display buffers). You can also hook this view into the window-specific events for
your running app. To get the parent window for the app object, defined by the CoreWindow type, create a type
that implements IFrameworkViewSource, as shown in the steps and sketch below.
Here's the basic set of steps to get a window using the core user interface framework:
1. Create a type that implements IFrameworkView. This is your view.
In this type, define:
An Initialize method that takes an instance of CoreApplicationView as a parameter. You can get an
instance of this type by calling CoreApplication.CreateNewView. The app object calls it when the app
is launched.
A SetWindow method that takes an instance of CoreWindow as a parameter. You can get an instance
of this type by accessing the CoreWindow property on your new CoreApplicationView instance.
A Load method that takes a string for an entry point as the sole parameter. The app object provides the
entry point string when you call this method. This is where you set up resources. You create your device
resources here. The app object calls it when the app is launched.
A Run method that activates the CoreWindow object and starts the window event dispatcher. The app
object calls it when the app's process starts.
An Uninitialize method that cleans up the resources set up in the call to Load. The app object calls this
method when the app is closed.
2. Create a type that implements IFrameworkViewSource. This is your view provider.
In this type, define:
A method named CreateView that returns an instance of your IFrameworkView implementation, as
created in Step 1.
3. Pass an instance of the view provider to CoreApplication.Run from main.
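Here's a minimal sketch of how these pieces fit together; the class names are illustrative and the method bodies
are placeholders, not a complete renderer:

using namespace Windows::ApplicationModel::Core;
using namespace Windows::UI::Core;

// Step 1: the view.
ref class MyDXView sealed : public IFrameworkView
{
public:
    virtual void Initialize(CoreApplicationView^ applicationView) { /* register for Activated here */ }
    virtual void SetWindow(CoreWindow^ window) { /* register for window events; connect the swap chain */ }
    virtual void Load(Platform::String^ entryPoint) { /* create device resources */ }
    virtual void Run()
    {
        auto window = CoreWindow::GetForCurrentThread();
        window->Activate();

        // Start the window event dispatcher; see the CoreDispatcher discussion below
        // for choosing the right CoreProcessEventsOption for a game.
        window->Dispatcher->ProcessEvents(CoreProcessEventsOption::ProcessUntilQuit);
    }
    virtual void Uninitialize() { /* release resources created in Load */ }
};

// Step 2: the view provider.
ref class MyDXViewSource sealed : public IFrameworkViewSource
{
public:
    virtual IFrameworkView^ CreateView() { return ref new MyDXView(); }
};

// Step 3: pass the view provider to CoreApplication::Run from main.
// (MTAThread is required for DirectX apps; see ASTA considerations later in this section.)
[Platform::MTAThread]
int main(Platform::Array<Platform::String^>^)
{
    CoreApplication::Run(ref new MyDXViewSource());
    return 0;
}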
With those basics in mind, let's look at more options you have to extend this approach.

Core user interface types


Here are other core user interface types in the Windows Runtime that you might find helpful:
Windows.ApplicationModel.Core.CoreApplicationView
Windows.UI.Core.CoreWindow
Windows.UI.Core.CoreDispatcher
You can use these types to access your app's view, specifically, the bits that draw the contents of the app's parent
window, and handle the events fired for that window. The app window's process is an application single-threaded
apartment (ASTA) that is isolated and that handles all callbacks.
Your app's view is generated by the view provider for your app window, and in most cases will be implemented by
a specific framework package or the system itself, so you don't need to implement it yourself. For DirectX, you need
to implement a thin view provider, as discussed previously. There is a specific 1-to-1 relationship between the
following components and behaviors:
An app's view, which is represented by the CoreApplicationView type, and which defines the method(s) for
updating the window.
An ASTA, the attribution of which defines the threading behavior of the app. You cannot create instances of
COM STA-attributed types on an ASTA.
A view provider, which your app obtains from the system or which you implement.
A parent window, which is represented by the CoreWindow type.
Sourcing for all activation events. Both views and windows have separate activation events.
In summary, the app object provides a view provider factory. It creates a view provider and instantiates a parent
window for the app. The view provider defines the app's view for the parent window of the app. Now, let's discuss
the specifics of the view and the parent window.

CoreApplicationView behaviors and properties


CoreApplicationView represents the current app view. The app singleton creates the app view during
initialization, but the view remains dormant until it is activated. You can get the CoreWindow that displays the
view by accessing the CoreApplicationView.CoreWindow property on it, and you can handle activation and
deactivation events for the view by registering delegates with the CoreApplicationView.Activated event.

CoreWindow behaviors and properties


The parent window, which is a CoreWindow instance, is created and passed to the view provider when the app
object initializes. If the app has a window to display, it displays it; otherwise, it simply initializes the view.
CoreWindow provides a number of events specific to input and basic window behaviors. You can handle these
events by registering your own delegates with them.
You can also obtain the window event dispatcher for the window by accessing the CoreWindow.Dispatcher
property, which provides an instance of CoreDispatcher.
CoreDispatcher behaviors and properties
You can determine the threading behavior of event dispatching for a window with the CoreDispatcher type. On
this type, there's one particularly important method: the CoreDispatcher.ProcessEvents method, which starts
window event processing. Calling this method with the wrong option for your app can lead to all sorts of
unexpected event processing behaviors.

COREPROCESSEVENTSOPTION VALUE                       DESCRIPTION
CoreProcessEventsOption.ProcessOneAndAllPending     Dispatch all currently available events in the queue. If no events are pending, wait for the next new event.
CoreProcessEventsOption.ProcessOneIfPresent         Dispatch one event if it is pending in the queue. If no events are pending, don't wait for a new event to be raised, but instead return immediately.
CoreProcessEventsOption.ProcessUntilQuit            Wait for new events and dispatch all available events. Continue this behavior until the window is closed or the application calls the Close method on the CoreWindow instance.
CoreProcessEventsOption.ProcessAllIfPresent         Dispatch all currently available events in the queue. If no events are pending, return immediately.

UWP apps using DirectX should use the CoreProcessEventsOption.ProcessAllIfPresent option to prevent blocking
behaviors that might interrupt graphics updates; a sketch of such a loop follows.
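A sketch of a typical DirectX game loop using this option (assuming m_windowClosed is set by a
CoreWindow::Closed event handler, and Update and Render are your game's own methods):

// Sketch: the body of IFrameworkView::Run for a DirectX game.
void MyDXApp::Run()
{
    while (!m_windowClosed)
    {
        // Drain pending input and window events without blocking...
        CoreWindow::GetForCurrentThread()->Dispatcher->ProcessEvents(
            CoreProcessEventsOption::ProcessAllIfPresent);

        // ...then advance and draw the next frame.
        Update();
        Render();
    }
}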

ASTA considerations for DirectX devs


The app object that defines the run-time representation of your UWP DirectX app uses a threading model called
Application Single-Threaded Apartment (ASTA) to host your app's UI views. If you are developing a UWP DirectX
app, you're familiar with the properties of an ASTA, because any thread you dispatch from your UWP DirectX
app must use the Windows::System::Threading APIs, or use CoreWindow::CoreDispatcher. (You can
get the CoreWindow object for the ASTA by calling CoreWindow::GetForCurrentThread from your app.)
The most important thing for you to be aware of, as a developer of a UWP DirectX app, is that you must enable
your app thread to dispatch MTA threads by setting Platform::MTAThread on main().

[Platform::MTAThread]
int main(Platform::Array<Platform::String^>^)
{
    auto myDXAppSource = ref new MyDXAppSource(); // your view provider factory
    CoreApplication::Run(myDXAppSource);
    return 0;
}

When the app object for your UWP DirectX app activates, it creates the ASTA that will be used for the UI view. The
new ASTA thread calls into your view provider factory, to create the view provider for your app object, and as a
result, your view provider code will run on that ASTA thread.
Also, any thread that you spin off from the ASTA must be in an MTA. Be aware that any MTA threads that you spin
off can still create reentrancy issues and result in a deadlock.
If you're porting existing code to run on the ASTA thread, keep these considerations in mind:
Wait primitives, such as CoWaitForMultipleObjects, behave differently in an ASTA than in an STA.
The COM call modal loop operates differently in an ASTA. You can no longer receive unrelated calls while an
outgoing call is in progress. For example, the following behavior will create a deadlock from an ASTA (and
immediately crash the app):
1. The ASTA calls an MTA object and passes an interface pointer P1.
2. Later, the ASTA calls the same MTA object. The MTA object calls P1 before it returns to the ASTA.
3. P1 cannot enter the ASTA as it's blocked making an unrelated call. However, the MTA thread is blocked as
it tries to make the call to P1.
You can resolve this by:
Using the async pattern defined in the Parallel Patterns Library (PPLTasks.h).
Calling CoreDispatcher::ProcessEvents from your app's ASTA (the main thread of your app) as soon as
possible to allow arbitrary calls.
That said, you cannot rely on immediate delivery of unrelated calls to your app's ASTA. For more info about
async calls, read Asynchronous programming in C++.
Overall, when designing your UWP app, use the CoreDispatcher for your app's CoreWindow and
CoreDispatcher::ProcessEvents to handle all UI threads rather than trying to create and manage your MTA
threads yourself. When you need a separate thread that you cannot handle with the CoreDispatcher, use async
patterns and follow the guidance mentioned earlier to avoid reentrancy issues.
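Here is a minimal sketch of that guidance. It assumes m_dispatcher was captured from the ASTA via CoreWindow::GetForCurrentThread()->Dispatcher; ComputeSimulationStep and UpdateStatusOverlay are hypothetical stand-ins for your own work items.

#include <ppltasks.h>

using namespace concurrency;
using namespace Windows::UI::Core;

// Run long-running work on a background (MTA) thread with PPL...
create_task([this]()
{
    ComputeSimulationStep(); // hypothetical long-running work
}).then([this]()
{
    // ...then queue the UI update on the ASTA through its dispatcher,
    // rather than calling into the ASTA synchronously.
    m_dispatcher->RunAsync(
        CoreDispatcherPriority::Normal,
        ref new DispatchedHandler([this]()
        {
            UpdateStatusOverlay(); // hypothetical UI-thread work
        }));
});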
Project templates and tools for games
This topic shows you what you need to start programming DirectX games for the Universal Windows Platform
(UWP).
First, you need Visual Studio.

Get Visual Studio


Download and install Microsoft Visual Studio 2015.

DirectX game project templates: Learn about the templates for creating a UWP and DirectX game.

Visual Studio tools for game programming: An overview of DirectX-specific tools available in Visual Studio.

Graphics diagnostics tools: Learn how to get and use the graphics diagnostics features, including Graphics
Debugging, Graphics Frame Analysis, and GPU Usage, in Visual Studio.

Next steps
If you are porting an existing game, see the following topics.
Port from OpenGL ES 2.0 to DirectX 11
Port from DirectX 9 to UWP
If you are creating a new DirectX game, see the following topics.
Create a simple UWP game with DirectX
Developing Marble Maze, a Universal Windows Platform game in C++ and DirectX

DirectX game project templates
The DirectX and Universal Windows Platform (UWP) templates allow you to quickly create a project as a starting
point for your game.

Prerequisites
To create the project, you need to:
Download Microsoft Visual Studio 2015. Visual Studio 2015 has tools for graphics programming, such as
debugging tools. For an overview of DirectX graphics and gaming features and tools, see Visual Studio tools
for DirectX game development.

Choosing a template
Visual Studio 2015 includes three DirectX and UWP templates:
DirectX 11 App (Universal Windows) - The DirectX 11 App (Universal Windows) template creates a UWP
project, which renders directly to an app window using DirectX 11.
DirectX 12 App (Universal Windows) - The DirectX 12 App (Universal Windows) template creates a UWP
project, which renders directly to an app window using DirectX 12.
DirectX 11 and XAML App (Universal Windows) - The DirectX 11 and XAML App (Universal Windows) template
creates a UWP project, which renders inside a XAML control using DirectX 11. This template uses a
SwapChainPanel, so you can use XAML UI controls. This can make adding user interface elements easier, but
using the XAML template may result in lower performance.
Which template you choose depends on the performance you need and the technologies you want to use.

Template structure
The DirectX Universal Windows templates contain the following files:
pch.h and pch.cpp - Precompiled header support.
Package.appxmanifest - The properties of the deployment package for the app.
*.pfx - Certificates for the application.
External Dependencies - Links to external files the project uses.
*Main.h and *Main.cpp - Methods for managing application assets, updating application state, and rendering
the frame.
App.h and App.cpp - Main entry point for the application. Connects the app with the Windows shell and
handles application lifecycle events. These files only appear in the DirectX 11 App (Universal Windows) and
DirectX 12 App (Universal Windows) templates.
App.xaml, App.xaml.cpp, and App.xaml.h - Main entry point for the application. Connects the app with the
Windows shell and handles application lifecycle events. These files only appear in the DirectX 11 and XAML
App (Universal Windows) template.
DirectXPage.xaml, DirectXPage.xaml.cpp, and DirectXPage.xaml.h - A page that hosts a DirectX
SwapChainPanel. These files only appear in the DirectX 11 and XAML App (Universal Windows) template.
Content
Sample3DSceneRenderer.h and Sample3DSceneRenderer.cpp - A sample renderer that instantiates a
basic rendering pipeline.
SampleFpsTextRenderer.h and SampleFpsTextRenderer.cpp - Renders the current FPS value in the
bottom right corner of the screen using Direct2D and DirectWrite. These files only appear in the DirectX
11 App (Universal Windows) and DirectX 11 and XAML App (Universal Windows) templates.
SamplePixelShader.hlsl - A simple example pixel shader.
SampleVertexShader.hlsl - A simple example vertex shader.
ShaderStructures.h - Structures used to send data to the example vertex shader.
Common
StepTimer.h - A helper class for animation and simulation timing. (A usage sketch follows this list.)
DirectXHelper.h - Miscellaneous helper functions.
DeviceResources.h and DeviceResources.cpp - Provides an interface for an application that owns
DeviceResources to be notified of the device being lost or created.
d3dx12.h - Contains the D3DX12 utility library. This file only appears in the DirectX 12 App (Universal
Windows) template.
Assets - Logo and splashscreen images used by the application.
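As a usage aside, here is a hedged sketch of how a *Main.cpp update method typically drives per-frame work through StepTimer; the member names are illustrative rather than the template's exact code.

// Update the application state once per frame.
void MyGameMain::Update()
{
    // Tick invokes the callback as many times as needed to keep the
    // simulation in step with elapsed wall-clock time.
    m_timer.Tick([&]()
    {
        m_sceneRenderer->Update(m_timer);
        m_fpsTextRenderer->Update(m_timer);
    });
}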

Next steps
Now that you have a starting point, add to it to build your game development knowledge and Windows Store
game development skills.
If you are porting an existing game, see the following topics.
Port from OpenGL ES 2.0 to Direct3D 11.1
Port from DirectX 9 to Universal Windows Platform
If you are creating a new DirectX game, see the following topics.
Create a simple UWP game with DirectX
Developing Marble Maze, a Universal Windows Platform game in C++ and DirectX

Visual Studio tools for game programming
If you use Visual Studio Ultimate to develop DirectX apps, there are additional tools available for creating, editing,
previewing, and exporting image, model, and shader resources. There are also tools that you can use to convert
resources at build time and debug DirectX graphics code.
This topic gives an overview of these graphics tools.

Image Editor
Use the Image Editor to work with the kinds of rich texture and image formats that DirectX uses. The Image Editor
supports the following formats.
.png
.jpg, .jpeg, .jpe, .jfif
.dds
.gif
.bmp
.dib
.tif, .tiff
.tga
Create build customization files to convert these to .dds files at build time.
For more information, see Working with Textures and Images.

Note The Image Editor is not intended to be a replacement for a full-featured image editing app, but is
appropriate for many simple viewing and editing scenarios.

Model Editor
You can use the Model Editor to create basic 3D models from scratch, or to view and modify more-complex 3D
models from full-featured 3D modeling tools. The Model Editor supports several 3D model formats that are used
in DirectX app development. You can create build customization files to convert these to .cmo files at build time.
.fbx
.dae
.obj
Here's a screenshot of a model in the editor with lighting applied.
For more information, see Working with 3-D Models.

Note The Model Editor is not intended to be a replacement for a full-featured model editing app, but is
appropriate for many simple viewing and editing scenarios.

Shader Designer
Use the Shader Designer to create custom visual effects for your game or app even if you don't know HLSL
programming.
You create a shader visually as a graph. Each node displays a preview of the output up to that operation. Here's an
example that applies Lambert lighting with a sphere preview.

Use the Shader Editor to design, edit, and save shaders in the .dgsl format. It also exports the following formats.
.hlsl (source code)
.cso (bytecode)
.h (HLSL bytecode array)
Create build customization files to convert any of these formats to .cso files at build time.
Here is a portion of HLSL code that is exported by the Shader Editor. This is only the code for the Lambert lighting
node.

//
// Lambert lighting function
//
float3 LambertLighting(
    float3 lightNormal,
    float3 surfaceNormal,
    float3 materialAmbient,
    float3 lightAmbient,
    float3 lightColor,
    float3 pixelColor
    )
{
    // Compute the amount of contribution per light.
    float diffuseAmount = saturate(dot(lightNormal, surfaceNormal));
    float3 diffuse = diffuseAmount * lightColor * pixelColor;

    // Combine ambient with diffuse.
    return saturate((materialAmbient * lightAmbient) + diffuse);
}

For more information, see Working with Shaders.
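To use an exported .cso file at runtime, you load its bytecode and hand it to the Direct3D device. Here is a hedged sketch that uses the DX::ReadDataAsync helper provided by the DirectX project templates; the file name and member names are illustrative.

#include <ppltasks.h>

using namespace concurrency;

// Read the compiled shader bytecode, then create a pixel shader from it.
create_task(DX::ReadDataAsync(L"MyLambertShader.cso"))
    .then([this](const std::vector<byte>& fileData)
{
    DX::ThrowIfFailed(
        m_d3dDevice->CreatePixelShader(
            fileData.data(),
            fileData.size(),
            nullptr,            // no class linkage
            &m_pixelShader));   // Microsoft::WRL::ComPtr<ID3D11PixelShader>
});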

Build customizations for 3D assets


You can add build customizations to your project so that Visual Studio converts resources to usable formats. After
that, you can load the assets into your app and use them by creating and filling DirectX resources just like you
would in any other DirectX app.
To add a build customization, you right-click on the project in the Solution Explorer and select Build
Customizations.... You can add the following types of build customizations to your project.
Image Content Pipeline takes image files as input and outputs DirectDraw Surface (.dds) files.
Mesh Content Pipeline takes mesh files (such as .fbx) and outputs .cmo mesh files.
Shader Content Pipeline takes Visual Shader Graph (.dgsl) from the Visual Studio Shader Editor and outputs a
Compiled Shader Output (.cso) file.
For more information, see Using 3-D Assets in Your Game or App.

Debugging DirectX graphics


Visual Studio provides graphics-specific debugging tools. Use these tools to debug things like:
The graphics pipeline.
The event call stack.
The object table.
The device state.
Shader bugs.
Uninitialized or incorrect constant buffers and parameters.
DirectX version compatibility.
Limited Direct2D support.
Operating system and SDK requirements.
For more information, see Debugging DirectX Graphics.

Graphics diagnostics tools
With Windows 10, the graphics diagnostic tools are now available from within Windows as an optional feature. To
use the graphics diagnostic features provided in the runtime and Visual Studio to develop DirectX apps or games,
install the optional Graphics Tools feature:
1. Go to Settings, select System, select Apps & Features, and then click Manage optional features.
2. Click Add a feature.
3. In the Optional features list, select Graphics Tools and then click Install.
Graphics diagnostics features include the ability to create Direct3D debug devices (via Direct3D SDK Layers) in the
DirectX runtime, plus Graphics Debugging, Frame Analysis, and GPU Usage.
Graphics Debugging lets you trace the Direct3D calls being made by your app. Then, you can replay those calls,
inspect parameters, debug and experiment with shaders, and visualize graphics assets to diagnose rendering
issues. Logs can be taken on Windows PCs, simulators, or devices, and be played back on different hardware.
Graphics Frame Analysis in Visual Studio runs on a graphics debugging log and gathers baseline timing for the
Direct3D draw calls. It then performs a set of experiments by modifying various graphics settings and produces
a table of timing results. You can use this data to understand graphics performance issues in your app, and you
can review results of the various experiments to identify opportunities for performance improvements.
GPU Usage in Visual Studio allows you to monitor GPU use in real time. It collects and analyzes the timing data
of the workloads being handled by the CPU and GPU, so you can determine where the bottlenecks are.
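The debug device reaches your app through the device-creation flags. Here is a minimal sketch, assuming the optional Graphics Tools feature is installed (with the debug flag set, device creation fails when it isn't) and that the usual D3D11 and WRL headers come in through your precompiled header.

UINT creationFlags = D3D11_CREATE_DEVICE_BGRA_SUPPORT; // required for Direct2D interop

#if defined(_DEBUG)
// Request a Direct3D debug device via the Direct3D SDK Layers.
creationFlags |= D3D11_CREATE_DEVICE_DEBUG;
#endif

Microsoft::WRL::ComPtr<ID3D11Device> device;
Microsoft::WRL::ComPtr<ID3D11DeviceContext> context;

HRESULT hr = D3D11CreateDevice(
    nullptr,                  // use the default adapter
    D3D_DRIVER_TYPE_HARDWARE,
    0,
    creationFlags,
    nullptr, 0,               // default feature levels
    D3D11_SDK_VERSION,
    &device,
    nullptr,                  // actual feature level not needed here
    &context);                // check hr before using the device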

Related topics
Graphics Diagnostics Overview in Visual Studio
Launching and resuming apps (DirectX and C++)
Learn how to launch, suspend, and resume your Universal Windows Platform (UWP) DirectX app.

How to activate an app: This topic shows how to define the activation experience for a UWP DirectX app.

How to suspend an app: This topic shows how to save important system state and app data when the system
suspends your UWP DirectX app.

How to resume an app: This topic shows how to restore important application data when the system resumes
your UWP DirectX app.
How to activate an app (DirectX and C++)
This topic shows how to define the activation experience for a Universal Windows Platform (UWP) DirectX app.

Register the app activation event handler


First, register to handle the CoreApplicationView::Activated event, which is raised when your app is started and
initialized by the operating system.
Add this code to your implementation of the IFrameworkView::Initialize method of your view provider (named
App in the example):

void App::Initialize(CoreApplicationView^ applicationView)
{
    // Register event handlers for the app lifecycle. This example includes Activated, so that we
    // can make the CoreWindow active and start rendering on the window.
    applicationView->Activated +=
        ref new TypedEventHandler<CoreApplicationView^, IActivatedEventArgs^>(this, &App::OnActivated);

    //...
}

Activate the CoreWindow instance for the app


When your app starts, you must obtain a reference to the CoreWindow for your app. CoreWindow contains the
window event message dispatcher that your app uses to process window events. Obtain this reference in your
callback for the app activation event by calling CoreWindow::GetForCurrentThread. Once you have obtained this
reference, activate the main app window by calling CoreWindow::Activate.

void App::OnActivated(CoreApplicationView^ applicationView, IActivatedEventArgs^ args)
{
    // Run() won't start until the CoreWindow is activated.
    CoreWindow::GetForCurrentThread()->Activate();
}

Start processing event messages for the main app window


Your callbacks occur as event messages are processed by the CoreDispatcher for the app's CoreWindow. This
callback will not be invoked if you do not call CoreDispatcher::ProcessEvents from your app's main loop
(implemented in the IFrameworkView::Run method of your view provider).
// This method is called after the window becomes active.
void App::Run()
{
    while (!m_windowClosed)
    {
        if (m_windowVisible)
        {
            CoreWindow::GetForCurrentThread()->Dispatcher->ProcessEvents(CoreProcessEventsOption::ProcessAllIfPresent);

            m_main->Update();

            if (m_main->Render())
            {
                m_deviceResources->Present();
            }
        }
        else
        {
            CoreWindow::GetForCurrentThread()->Dispatcher->ProcessEvents(CoreProcessEventsOption::ProcessOneAndAllPending);
        }
    }
}

Related topics
How to suspend an app (DirectX and C++)
How to resume an app (DirectX and C++)
How to suspend an app (DirectX and C++)
This topic shows how to save important system state and app data when the system suspends your Universal
Windows Platform (UWP) DirectX app.

Register the suspending event handler


First, register to handle the CoreApplication::Suspending event, which is raised when your app is moved to a
suspended state by a user or system action.
Add this code to your implementation of the IFrameworkView::Initialize method of your view provider:

void App::Initialize(CoreApplicationView^ applicationView)
{
    //...

    CoreApplication::Suspending +=
        ref new EventHandler<SuspendingEventArgs^>(this, &App::OnSuspending);

    //...
}

Save any app data before suspending


When your app handles the CoreApplication::Suspending event, it has the opportunity to save its important
application data in the handler function. The app should use the LocalSettings storage API to save simple
application data synchronously. If you are developing a game, save any critical game state information. Don't
forget to suspend the audio processing!
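As a hedged sketch of that synchronous LocalSettings pattern (the key name and the m_currentLevel member are illustrative, not part of the sample):

using namespace Windows::Storage;
using namespace Windows::Foundation;

// Store a simple value synchronously; LocalSettings persists across termination.
ApplicationDataContainer^ localSettings = ApplicationData::Current->LocalSettings;
localSettings->Values->Insert("CurrentLevel", PropertyValue::CreateInt32(m_currentLevel));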
Now, implement the callback. Save the app data in this method.

void App::OnSuspending(Platform::Object^ sender, SuspendingEventArgs^ args)
{
    // Save app state asynchronously after requesting a deferral. Holding a deferral
    // indicates that the application is busy performing suspending operations. Be
    // aware that a deferral may not be held indefinitely. After about five seconds,
    // the app will be forced to exit.
    SuspendingDeferral^ deferral = args->SuspendingOperation->GetDeferral();

    create_task([this, deferral]()
    {
        m_deviceResources->Trim();

        // Insert your code here.

        deferral->Complete();
    });
}

This callback must complete within 5 seconds. During this callback, you must request a deferral by calling
SuspendingOperation::GetDeferral, which starts the countdown. When your app completes the save operation,
call SuspendingDeferral::Complete to tell the system that your app is now ready to be suspended. If you do not
request a deferral, or if your app takes longer than 5 seconds to save the data, your app is automatically
suspended.
This callback occurs as an event message processed by the CoreDispatcher for the app's CoreWindow. This
callback will not be invoked if you do not call CoreDispatcher::ProcessEvents from your app's main loop
(implemented in the IFrameworkView::Run method of your view provider).

// This method is called after the window becomes active.
void App::Run()
{
    while (!m_windowClosed)
    {
        if (m_windowVisible)
        {
            CoreWindow::GetForCurrentThread()->Dispatcher->ProcessEvents(CoreProcessEventsOption::ProcessAllIfPresent);

            m_main->Update();

            if (m_main->Render())
            {
                m_deviceResources->Present();
            }
        }
        else
        {
            CoreWindow::GetForCurrentThread()->Dispatcher->ProcessEvents(CoreProcessEventsOption::ProcessOneAndAllPending);
        }
    }
}

Call Trim()
Starting in Windows 8.1, all DirectX Windows Store apps must call IDXGIDevice3::Trim when suspending. This
call tells the graphics driver to release all temporary buffers allocated for the app, which reduces the chance that
the app will be terminated to reclaim memory resources while in the suspend state. This is a certification
requirement for Windows 8.1.
void App::OnSuspending(Platform::Object^ sender, SuspendingEventArgs^ args)
{
    // Save app state asynchronously after requesting a deferral. Holding a deferral
    // indicates that the application is busy performing suspending operations. Be
    // aware that a deferral may not be held indefinitely. After about five seconds,
    // the app will be forced to exit.
    SuspendingDeferral^ deferral = args->SuspendingOperation->GetDeferral();

    create_task([this, deferral]()
    {
        m_deviceResources->Trim();

        // Insert your code here.

        deferral->Complete();
    });
}

// Call this method when the app suspends. It provides a hint to the driver that the app
// is entering an idle state and that temporary buffers can be reclaimed for use by other apps.
void DX::DeviceResources::Trim()
{
    ComPtr<IDXGIDevice3> dxgiDevice;
    m_d3dDevice.As(&dxgiDevice);

    dxgiDevice->Trim();
}

Release any exclusive resources and file handles


When your app handles the CoreApplication::Suspending event, it also has the opportunity to release exclusive
resources and file handles. Explicitly releasing exclusive resources and file handles helps to ensure that other apps
can access them while your app isn't using them. When the app is activated after termination, it should reopen its
exclusive resources and file handles.

Remarks
The system suspends your app whenever the user switches to another app or to the desktop. The system resumes
your app whenever the user switches back to it. When the system resumes your app, the content of your variables
and data structures is the same as it was before the system suspended the app. The system restores the app exactly
where it left off, so that it appears to the user as if it's been running in the background.
The system attempts to keep your app and its data in memory while it's suspended. However, if the system does
not have the resources to keep your app in memory, the system will terminate your app. When the user switches
back to a suspended app that has been terminated, the system sends an Activated event, and your app should
restore its application data in its handler for the CoreApplicationView::Activated event.
The system doesn't notify an app when it's terminated, so your app must save its application data and release
exclusive resources and file handles when it's suspended, and restore them when the app is activated after
termination.

Related topics
How to resume an app (DirectX and C++)
How to activate an app (DirectX and C++)
How to resume an app (DirectX and C++)
This topic shows how to restore important application data when the system resumes your Universal Windows
Platform (UWP) DirectX app.

Register the resuming event handler


Register to handle the CoreApplication::Resuming event, which indicates that the user switched away from your
app and then back to it.
Add this code to your implementation of the IFrameworkView::Initialize method of your view provider:

// The first method is called when the IFrameworkView is being created.
void App::Initialize(CoreApplicationView^ applicationView)
{
    //...

    CoreApplication::Resuming +=
        ref new EventHandler<Platform::Object^>(this, &App::OnResuming);

    //...
}
Refresh displayed content after suspension


When your app handles the Resuming event, it has the opportunity to refresh its displayed content. Restore any
app data you saved with your handler for CoreApplication::Suspending, and restart processing. Game devs: if
you've suspended your audio engine, now's the time to restart it.

void App::OnResuming(Platform::Object^ sender, Platform::Object^ args)
{
    // Restore any data or state that was unloaded on suspend. By default, data
    // and state are persisted when resuming from suspend. Note that this event
    // does not occur if the app was previously terminated.

    // Insert your code here.
}
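Continuing the hypothetical LocalSettings example from the suspend topic, restoring a saved value might look like this sketch:

using namespace Windows::Storage;

// Read back the value saved in the suspending handler, if present.
auto values = ApplicationData::Current->LocalSettings->Values;
if (values->HasKey("CurrentLevel"))
{
    m_currentLevel = safe_cast<int>(values->Lookup("CurrentLevel"));
}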

This callback occurs as an event message processed by the CoreDispatcher for the app's CoreWindow. This
callback will not be invoked if you do not call CoreDispatcher::ProcessEvents from your app's main loop
(implemented in the IFrameworkView::Run method of your view provider).
// This method is called after the window becomes active.
void App::Run()
{
    while (!m_windowClosed)
    {
        if (m_windowVisible)
        {
            CoreWindow::GetForCurrentThread()->Dispatcher->ProcessEvents(CoreProcessEventsOption::ProcessAllIfPresent);

            m_main->Update();

            if (m_main->Render())
            {
                m_deviceResources->Present();
            }
        }
        else
        {
            CoreWindow::GetForCurrentThread()->Dispatcher->ProcessEvents(CoreProcessEventsOption::ProcessOneAndAllPending);
        }
    }
}

Remarks
The system suspends your app whenever the user switches to another app or to the desktop. The system resumes
your app whenever the user switches back to it. When the system resumes your app, the content of your variables
and data structures is the same as it was before the system suspended the app. The system restores the app exactly
where it left off, so that it appears to the user as if it's been running in the background. However, the app may have
been suspended for a significant amount of time, so it should refresh any displayed content that might have
changed while the app was suspended, and restart any rendering or audio processing threads. If you've saved any
game state data during a previous suspend event, restore it now.

Related topics
How to suspend an app (DirectX and C++)
How to activate an app (DirectX and C++)
DirectX Samples

These are some game samples developed with DirectX.

Create a simple UWP game with DirectX: Create a basic Universal Windows Platform (UWP) game with DirectX
and C++. This set of tutorials focuses on key UWP DirectX game development techniques and considerations.

Developing Marble Maze, a Universal Windows Platform game in C++ and DirectX: Create a 3D game that
works on various types of devices, like tablets, desktop PCs, and laptops.
Create a simple Universal Windows Platform (UWP)
game with DirectX
In this set of tutorials, you learn how to create a basic Universal Windows Platform (UWP) game with DirectX and
C++. We cover all the major parts of a game, including the processes for loading assets such as art and meshes,
creating a main game loop, implementing a simple rendering pipeline, and adding sound and controls.
We show you the UWP game development techniques and considerations. We don't provide a complete end-to-
end game. Rather, we focus on key UWP DirectX game development concepts, and call out Windows Runtime
specific considerations around those concepts.

Objective
To use the basic concepts and components of a UWP DirectX game, and to become more comfortable
designing UWP games with DirectX.

What you need to know before starting


Before we get started with this tutorial, you need to be familiar with these subjects.
Microsoft C++ with Component Extensions (C++/CX). This is an update to Microsoft C++ that incorporates
automatic reference counting, and is the language for developing UWP games with DirectX 11.1 or later
versions.
Basic linear algebra and Newtonian physics concepts.
Basic graphics programming terminology.
Basic Windows programming concepts.
Basic familiarity with the Direct2D and Direct3D 11 APIs.

The Windows Store Direct3D shooting game sample


This sample implements a simple first-person shooting gallery, where the player fires balls at moving targets.
Hitting each target awards a set number of points, and the player can progress through 6 levels of increasing
challenge. At the end of the levels, the points are tallied, and the player is awarded a final score.
The sample demonstrates these game concepts:
Interoperation between DirectX 11.1 and the Windows Runtime
A first-person 3D perspective and camera
Stereoscopic 3D effects
Collision detection between objects in 3D
Handling player input for mouse, touch, and Xbox 360 controller controls
Audio mixing and playback
A basic game state machine
Set up the game project: The first step in assembling your game is to set up a project in Microsoft Visual Studio
in such a way that you minimize the amount of code infrastructure work you need to do. You can save yourself a
lot of time and hassle by using the right template and configuring the project specifically for game development.
We step you through the setup and configuration of a simple game project.

Define the game's UWP app framework: The first part of coding a UWP DirectX game is building the framework
that lets the game object interact with Windows. This includes Windows Runtime properties like suspend/resume
event handling, window focus, and snapping, plus the events, interactions, and transitions for the user interface.
We go over how the sample game is structured, and how it defines the high-level state machine for the player
and system interaction.

Define the main game object: Now, we look at the details of the game sample's main object and how the rules it
implements translate into interactions with the game world.

Assemble the rendering framework: Now, it's time to look at how the sample game uses that structure and state
to display its graphics. Here, we look at how to implement a rendering framework, starting from the initialization
of the graphics device through the presentation of the graphics objects for display.

Add a user interface: You've seen how the sample game implements the main game object as well as the basic
rendering framework. Now, let's look at how the sample game provides feedback about game state to the player.
Here, you learn how you can add simple menu options and heads-up display components on top of the 3-D
graphics pipeline output.

Add controls: Now, we take a look at how the game sample implements move-look controls in a 3-D game, and
how to develop basic touch, mouse, and game controller controls.

Add sound: In this step, we examine how the shooting game sample creates an object for sound playback using
the XAudio2 APIs.

Extend the game sample: Congratulations! At this point, you understand the key components of a basic UWP
DirectX 3D game. You can set up the framework for a game, including the view provider and rendering pipeline,
and implement a basic game loop. You can also create a basic user interface overlay, and incorporate sounds and
controls. You're on your way to creating a game of your own, and here are some resources to further your
knowledge of DirectX game development.
Set up the game project
The first step in assembling your game is to set up a project in Microsoft Visual Studio in such a way that you
minimize the amount of code infrastructure work you need to do. You can save yourself a lot of time and hassle by
using the right template and configuring the project specifically for game development. We step you through the
setup and configuration of a simple game project.

Objective
To learn how to set up a Direct3D game project in Visual Studio.

Setting up the game project


You can write a game from scratch, with just a handy text editor, a few samples, and a hat full of raw brainpower.
But that probably isn't the most effective use of your time. If you're new to Universal Windows Platform (UWP)
development, why not let Visual Studio shoulder some of the burden? Here's what to do to get your project off to a
roaring start.

1. Pick the right template


A Visual Studio template is a collection of settings and code files that target a specific type of app based on the
preferred language and technology. In Microsoft Visual Studio 2015, you'll find a number of templates that can
dramatically ease game and graphics app development. If you don't use a template, you must develop much of the
basic graphics rendering and display framework yourself, which can be a bit of a chore to a new game developer.
The right template for this tutorial is the one titled DirectX 11 App (Universal Windows). In Visual Studio 2015, click
File... > New Project, and then:
1. From Templates, select Visual C++, Windows, Universal.
2. In the center pane, select DirectX 11 App (Universal Windows).
3. Give your game project a name, and click OK.
This template provides you with the basic framework for a UWP app using DirectX with C++. Go on, build and run
it with F5! Check out that powder blue screen. Take a moment and review the code that the template provides. The
template creates multiple code files containing the basic functionality for a UWP app using DirectX with C++. We
talk more about the other code files in step 3. Right now, let's quickly inspect App.h.
ref class App sealed : public Windows::ApplicationModel::Core::IFrameworkView
{
public:
    App();

    // IFrameworkView Methods.
    virtual void Initialize(Windows::ApplicationModel::Core::CoreApplicationView^ applicationView);
    virtual void SetWindow(Windows::UI::Core::CoreWindow^ window);
    virtual void Load(Platform::String^ entryPoint);
    virtual void Run();
    virtual void Uninitialize();

protected:
    // Application lifecycle event handlers.
    void OnActivated(Windows::ApplicationModel::Core::CoreApplicationView^ applicationView,
        Windows::ApplicationModel::Activation::IActivatedEventArgs^ args);
    void OnSuspending(Platform::Object^ sender, Windows::ApplicationModel::SuspendingEventArgs^ args);
    void OnResuming(Platform::Object^ sender, Platform::Object^ args);

    // Window event handlers.
    void OnWindowSizeChanged(Windows::UI::Core::CoreWindow^ sender, Windows::UI::Core::WindowSizeChangedEventArgs^ args);
    void OnVisibilityChanged(Windows::UI::Core::CoreWindow^ sender, Windows::UI::Core::VisibilityChangedEventArgs^ args);
    void OnWindowClosed(Windows::UI::Core::CoreWindow^ sender, Windows::UI::Core::CoreWindowEventArgs^ args);

    // DisplayInformation event handlers.
    void OnDpiChanged(Windows::Graphics::Display::DisplayInformation^ sender, Platform::Object^ args);
    void OnOrientationChanged(Windows::Graphics::Display::DisplayInformation^ sender, Platform::Object^ args);
    void OnDisplayContentsInvalidated(Windows::Graphics::Display::DisplayInformation^ sender, Platform::Object^ args);

private:
    std::shared_ptr<DX::DeviceResources> m_deviceResources;
    std::unique_ptr<MyAwesomeGameMain> m_main;
    bool m_windowClosed;
    bool m_windowVisible;
};

You create these 5 methods, Initialize, SetWindow, Load, Run, and Uninitialize, when implementing the
IFrameworkView interface that defines a view provider. These methods are run by the app singleton that is
created when your game is launched, and they load all your app's resources and connect the appropriate event
handlers.
Your main method is in the App.cpp source file. It looks like this:

[Platform::MTAThread]
int main(Platform::Array<Platform::String^>^)
{
    auto direct3DApplicationSource = ref new Direct3DApplicationSource();
    CoreApplication::Run(direct3DApplicationSource);
    return 0;
}

Right now, it creates an instance of the Direct3D view provider from the view provider factory
(Direct3DApplicationSource, defined in App.h), and passes it to the app singleton to run
(CoreApplication::Run). This means that the starting point for your game lives in the body of the implementation
of the IFrameworkView::Run method, in this case, App::Run. Here's the code:
void App::Run()
{
    while (!m_windowClosed)
    {
        if (m_windowVisible)
        {
            CoreWindow::GetForCurrentThread()->Dispatcher->ProcessEvents(CoreProcessEventsOption::ProcessAllIfPresent);

            m_main->Update();

            if (m_main->Render())
            {
                m_deviceResources->Present();
            }
        }
        else
        {
            CoreWindow::GetForCurrentThread()->Dispatcher->ProcessEvents(CoreProcessEventsOption::ProcessOneAndAllPending);
        }
    }
}

If the window for your game isn't closed, this dispatches all events, updates the timer, and renders and presents the
results of your graphics pipeline. We talk about this in greater detail in Defining the game's UWP framework and
Assembling the rendering pipeline. At this point, you should have a sense of the basic code structure of a UWP
DirectX game.
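For reference, the view provider factory itself is small. Here is a hedged sketch of the shape Direct3DApplicationSource typically takes; the template's generated App.h has the authoritative version.

// The factory the app singleton calls to create the IFrameworkView.
ref class Direct3DApplicationSource sealed : Windows::ApplicationModel::Core::IFrameworkViewSource
{
public:
    virtual Windows::ApplicationModel::Core::IFrameworkView^ CreateView()
    {
        // Hand the app singleton a new view provider instance.
        return ref new App();
    }
};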

2. Review and update the package.appxmanifest file


The code files aren't all there is to the template. The package.appxmanifest file contains metadata about your
project that is used for packaging and launching your game, and for submission to the Windows Store. It also
contains important info that the player's system uses to provide access to the system resources the game needs to run.
Launch the Manifest Designer by double-clicking the package.appxmanifest file in Solution Explorer. You see
this view:
For more info about the package.appxmanifest file and packaging, see Manifest Designer. For now, take a look at
the Capabilities tab and look at the options provided.

If you don't select the capabilities that your game uses, such as access to the Internet for a global high score
board, you won't be able to access the corresponding resources or features. When you create a new game, make sure that
you select the capabilities that your game needs to run!
Now, let's look at the rest of the files that come with the DirectX 11 App (Universal Windows) template.

3. Review the included libraries and headers


There are a few files we haven't looked at yet. These files provide additional tools and support common to Direct3D
game development scenarios.

StepTimer.h: Defines a high-resolution timer useful for gaming or interactive rendering apps.

Sample3DSceneRenderer.h/.cpp: Defines a basic renderer implementation that connects a Direct3D swap chain
and graphics adapter to your UWP app using DirectX.

DirectXHelper.h: Implements a single method, DX::ThrowIfFailed, that converts the error HRESULT values
returned by DirectX APIs into Windows Runtime exceptions. Use this method to put a break point for debugging
DirectX errors. (A sketch of this helper follows the table.)

pch.h/.cpp: Contains all the Windows system includes for the APIs used by a Direct3D app, including the
DirectX 11 APIs.

SamplePixelShader.hlsl: Contains the high-level shader language (HLSL) code for a very basic pixel shader.

SampleVertexShader.hlsl: Contains the high-level shader language (HLSL) code for a very basic vertex shader.
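Here is a hedged sketch of the pattern DirectXHelper.h implements; it mirrors the template's helper, but check the generated file for the authoritative version. HRESULT and FAILED come in through the Windows headers in pch.h.

namespace DX
{
    // Convert failed HRESULTs into Windows Runtime exceptions so a single
    // breakpoint here catches every DirectX API error.
    inline void ThrowIfFailed(HRESULT hr)
    {
        if (FAILED(hr))
        {
            throw Platform::Exception::CreateException(hr);
        }
    }
}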

Next steps
At this point, you can create a UWP DirectX game project and identify the components and files provided by
the DirectX 11 App (Universal Windows) template.
In the next tutorial, Defining the game's UWP framework, we work with a completed game and examine how it
uses and extends many of the concepts and components that the template provides.
Define the game's Universal Windows Platform
(UWP) app framework
The first part of coding a Universal Windows Platform (UWP) DirectX game is building the framework that
lets the game object interact with Windows. This includes Windows Runtime properties like suspend/resume
event handling, window focus, and snapping, plus the events, interactions, and transitions for the user interface.
We go over how the sample game is structured, and how it defines the high-level state machine for the player and
system interaction.

Objective
To set up the framework for a UWP DirectX game, and implement the state machine that defines the overall
game flow.

Initializing and starting the view provider


In any UWP DirectX game, you must obtain a view provider that the app singleton, the Windows Runtime object
that defines an instance of your running app, can use to access the graphics resources it needs. Through the
Windows Runtime, your app has a direct connection with the graphics interface, but you need to specify the
resources you need and how to handle them.
As we discussed in Setting up the game project, Microsoft Visual Studio 2015 provides an implementation of a
basic renderer for DirectX in the Sample3DSceneRenderer.cpp file that is available when you pick the DirectX
11 App (Universal Windows) template.
For more details about understanding and creating a view provider and renderer, see How to set up your UWP
app with C++ and DirectX to display a DirectX view.
Suffice it to say, you must provide the implementation for 5 methods that the app singleton calls:
Initialize
SetWindow
Load
Run
Uninitialize
In the DirectX 11 App (Universal Windows) template, these 5 methods are defined on the App object in App.h. Let's
take a look at the way they are implemented in this game.
The Initialize method of the view provider
void App::Initialize(
    _In_ CoreApplicationView^ applicationView
    )
{
    applicationView->Activated +=
        ref new TypedEventHandler<CoreApplicationView^, IActivatedEventArgs^>(this, &App::OnActivated);

    CoreApplication::Suspending +=
        ref new EventHandler<SuspendingEventArgs^>(this, &App::OnSuspending);

    CoreApplication::Resuming +=
        ref new EventHandler<Platform::Object^>(this, &App::OnResuming);

    m_controller = ref new MoveLookController();
    m_renderer = ref new GameRenderer();
    m_game = ref new Simple3DGame();
}

The app singleton first calls Initialize. Therefore, it is crucial that this method handles the most fundamental
behaviors of a UWP game, such as handling the activation of the main window and making sure that the game can
handle a sudden suspend (and a possible later resume) event.
When the game app is initialized, it allocates specific memory for the controller to allow the player to begin
providing input. It also creates new, uninitialized instances of the game's renderer and state machine. We discuss
the details in Defining the main game object.
At this point, the game app can handle a suspend (or resume) message, and has memory allocated for the
controller, the renderer, and the game itself. But there's no window to work with, and the game is uninitialized.
There are a few more things that need to happen!
The SetWindow method of the view provider
void App::SetWindow(
    _In_ CoreWindow^ window
    )
{
    window->PointerCursor = ref new CoreCursor(CoreCursorType::Arrow, 0);

    window->SizeChanged +=
        ref new TypedEventHandler<CoreWindow^, WindowSizeChangedEventArgs^>(this, &App::OnWindowSizeChanged);

    window->Closed +=
        ref new TypedEventHandler<CoreWindow^, CoreWindowEventArgs^>(this, &App::OnWindowClosed);

    window->VisibilityChanged +=
        ref new TypedEventHandler<CoreWindow^, VisibilityChangedEventArgs^>(this, &App::OnVisibilityChanged);

    DisplayProperties::LogicalDpiChanged +=
        ref new DisplayPropertiesEventHandler(this, &App::OnLogicalDpiChanged);

    m_controller->Initialize(window);

    m_controller->SetMoveRect(
        XMFLOAT2(0.0f, window->Bounds.Height - GameConstants::TouchRectangleSize),
        XMFLOAT2(GameConstants::TouchRectangleSize, window->Bounds.Height)
        );
    m_controller->SetFireRect(
        XMFLOAT2(window->Bounds.Width - GameConstants::TouchRectangleSize, window->Bounds.Height - GameConstants::TouchRectangleSize),
        XMFLOAT2(window->Bounds.Width, window->Bounds.Height)
        );

    m_renderer->Initialize(window, DisplayProperties::LogicalDpi);
    SetGameInfoOverlay(GameInfoOverlayState::Loading);
    ShowGameInfoOverlay();
}

Now, with a call to an implementation of SetWindow, the app singleton provides a CoreWindow object that
represents the game's main window, and makes its resources and events available to the game. Because there's a
window to work with, the game can now start adding in the basic user interface components and events: a pointer
(used by both mouse and touch controls), and the basic events for window resizing, closing, and DPI changes (if
the display device changes).
The game app also initializes the controller, because there's a window to interact with, and initializes the game
object itself. It can read input from the controller (touch, mouse, or Xbox 360 controller).
After the controller is initialized, the app defines two rectangular areas in the lower-left and lower-right corners of
the screen for the move and camera touch controls, respectively. The player uses the lower-left rectangle, defined
by the call to SetMoveRect, as a virtual control pad for moving the camera forward and backward, and side to
side. The lower-right rectangle, defined by the SetFireRect method, is used as a virtual button to fire the ammo.
It's all starting to come together.
The Load method of the view provider
void App::Load(
    Platform::String^ entryPoint
    )
{
    task<void>([this]()
    {
        m_game->Initialize(m_controller, m_renderer);

        return m_renderer->CreateGameDeviceResourcesAsync(m_game);

    }).then([this]()
    {
        // The finalize code needs to run in the same thread context
        // in which the m_renderer object was created because the D3D device context
        // can be accessed only on a single thread.
        m_renderer->FinalizeCreateGameDeviceResources();

        InitializeGameState();

        if (m_updateState == UpdateEngineState::WaitingForResources)
        {
            // In the middle of a game, so spin up the async task to load the level.
            create_task([this]()
            {
                return m_game->LoadLevelAsync();

            }).then([this]()
            {
                // The m_game object may need to deal with D3D device context work, so
                // the finalizer code needs to run in the same thread
                // context as the m_renderer object was created because the D3D
                // device context can be accessed only on a single thread.
                m_game->FinalizeLoadLevel();
                m_updateState = UpdateEngineState::ResourcesLoaded;

            }, task_continuation_context::use_current());
        }
    }, task_continuation_context::use_current());
}

After the main window is set, the app singleton calls Load. In the sample, this method uses a set of asynchronous
tasks (the syntax for which is defined in the Parallel Patterns Library) to create the game objects, load graphics
resources, and initialize the game's state machine. By using the async task pattern, the Load method completes
quickly and allows the app to start processing input. In this method, the app also displays a progress bar as the
resource files load.
We break resource loading into two separate stages, because access to the Direct3D 11 device context is restricted
to the thread the device context was created on, while access to the Direct3D 11 device for object creation is free-
threaded. The CreateGameDeviceResourcesAsync task runs on a separate thread from the completion task
(FinalizeCreateGameDeviceResources), which runs on the original thread. We use a similar pattern for loading
level resources with LoadLevelAsync and FinalizeLoadLevel.
After we create the game's objects and load the graphics resources, we initialize the game's state machine to the
starting conditions (for example: setting the initial ammo count, level number, and object positions). If the game
state indicates that the player is resuming a game, we load the current level (the level the player was on when the
game was suspended).
In the Load method, we do any necessary preparations before the game begins, like setting any starting states or
global values. If you want to pre-fetch game data or assets, this is a better place for it rather than in SetWindow
or Initialize. Use async tasks in your game for any loading, as Windows imposes restrictions on the time your
game can take before it must start processing input. If loading takes a while, because there are lots of resources, then
provide your users with a regularly updated progress bar.
When developing your own game, design your startup code around these methods. Here's a simple list of basic
suggestions for each method:
Use Initialize to allocate your main classes and connect up the basic event handlers.
Use SetWindow to create your main window and connect any window-specific events.
Use Load to handle any remaining setup, and to initiate the async creation of objects and loading of resources.
If you need to create any temporary files or data, such as procedurally generated assets, do it here too.
So, the sample game creates an instance of the game's state machine and sets it to the starting configuration. It
handles all the system and input events. It provides a window to display content in. The gameplay code is now
ready to run.
The Run method of the view provider

void App::Run()
{
    while (!m_windowClosed)
    {
        if (m_visible)
        {
            switch (m_updateState)
            {
            case UpdateEngineState::Deactivated:
            case UpdateEngineState::Snapped:
                if (!m_renderNeeded)
                {
                    // The app is not currently the active window, so just wait for events.
                    CoreWindow::GetForCurrentThread()->Dispatcher->ProcessEvents(CoreProcessEventsOption::ProcessOneAndAllPending);
                    break;
                }
                // Otherwise, fall through and do normal processing to get the rendering handled.
            default:
                CoreWindow::GetForCurrentThread()->Dispatcher->ProcessEvents(CoreProcessEventsOption::ProcessAllIfPresent);
                Update();
                m_renderer->Render();
                m_renderNeeded = false;
            }
        }
        else
        {
            CoreWindow::GetForCurrentThread()->Dispatcher->ProcessEvents(CoreProcessEventsOption::ProcessOneAndAllPending);
        }
    }
    m_game->OnSuspending();  // Exiting due to window close. Make sure to save the state.
}

Here's where we get to the play part of the game app. Having run the 3 methods and set the stage, the game app
runs the Run method, starting the fun!
In the game sample, we start a while loop that terminates when the player closes the game window. The sample
code transitions to one of two states in the game engine state machine:
The game window gets deactivated (loses focus) or snapped. When this happens, the game suspends event
processing and waits for the window to focus or unsnap.
Otherwise, the game updates its own state and renders the graphics for display.
When your game has focus, you must handle every event in the message queue as it arrives, and so you must call
CoreDispatcher::ProcessEvents with the ProcessAllIfPresent option. Other options can cause delays in
processing message events, which makes your game feel unresponsive, or result in touch behaviors that feel
sluggish and not "sticky".
Of course, when the app is not visible, suspended or snapped, we don't want it to consume any resources cycling
to dispatch messages that will never arrive. So your game must use ProcessOneAndAllPending, which blocks
until it gets an event, and then processes that event and any others that arrive in the process queue during the
processing of the first. ProcessEvents then immediately returns after the queue has been processed.
The game is running! The events that it uses to transition between game states are being dispatched and
processed. The graphics are being updated as the game loop cycles. We hope the player is having fun. But
eventually, the fun has to end...
...and we need to clean up the place. This is where Uninitialize comes in.
The Uninitialize method of the view provider

void App::Uninitialize()
{
}

In the game sample, we let the app singleton for the game clean everything up after the game is terminated. In
Windows 10, closing the app window doesn't kill the app's process, but instead writes the state of the app
singleton to memory. If anything special must happen when the system reclaims this memory, such as any special
cleanup of resources, then put the code for that cleanup in this method.
We refer back to these 5 methods in this tutorial, so keep them in mind. Now, let's look at the game engine's
overall structure and the state machines that define it.

Initializing the game engine state


Because a user can resume a UWP game app from a suspended state at any time, the app can have any number of
possible states.
The game sample can be in one of three states when it starts:
The game loop was running and was in the middle of a level.
The game loop was not running because a game had just been completed. (The high score is set.)
No game has been started, or the game was between levels. (The high score is 0.)
Obviously, in your own game, you could have more or fewer states. Again, always be aware that your UWP game
can be terminated at any time, and when it resumes, the player expects the game to behave as though they had
never stopped playing.
In the game sample, the code flow looks like this.
void App::InitializeGameState()
{
    //
    // Set up the initial state machine for handling game playing state.
    //
    if (m_game->GameActive() && m_game->LevelActive())
    {
        m_updateState = UpdateEngineState::WaitingForResources;
        // ...
    }
    else if (!m_game->GameActive() && (m_game->HighScore().totalHits > 0))
    {
        m_updateState = UpdateEngineState::WaitingForPress;
        // ...
    }
    else
    {
        m_updateState = UpdateEngineState::WaitingForResources;
        // ...
    }
    SetAction(GameInfoOverlayCommand::PleaseWait);
    ShowGameInfoOverlay();
}

Initialization is less about cold starting the app, and more about restarting the app after it has been terminated.
The sample game always saves state, which gives the appearance that the app is always running. The suspended
state is just that: the game play is suspended, but the resources of the game are still in memory. Likewise, the
resume event indicates that the sample game is picking up where it was last suspended or terminated. When the
sample game restarts after termination, it starts up normally and then determines the last known state so the
player can immediately continue playing.
The flowchart lays out the initial states and transitions for the game sample's initialization process.

Depending on the state, different options are presented to the player. If the game resumes mid-level, it appears as
paused, and the overlay presents a continue option. If the game resumed in a state where the game is completed,
it displays the high scores and an option to play a new game. Lastly, if the game resumes before a level has
started, the overlay presents a start option to the user.
The game sample doesn't distinguish between the game cold starting, that is, launching for the first
time without a prior suspend event, and the game resuming from a suspended state. This is proper design for any
UWP app.

Handling events
Our sample code registered a number of handlers for specific events in Initialize, SetWindow, and Load. You
probably guessed that these were important events, because the code sample did this work well before it got into
any game mechanics or graphics development. You're right! These events are fundamental to a proper UWP app
experience, and because a UWP app can be activated, deactivated, resized, snapped, unsnapped, suspended, or
resumed at any time, the game must register for those very events as soon as it can, and handle them in a way
that keeps the experience smooth and predictable for the player.
Here are the event handlers in the sample, and the events they handle. You can find the full code for these event
handlers in Complete code for this section.

OnActivated: Handles CoreApplicationView::Activated. The game app has been brought to the foreground, so
the main window is activated.

OnLogicalDpiChanged: Handles DisplayProperties::LogicalDpiChanged. The DPI for the main game window
has changed, and the game app adjusts its resources accordingly. Note: CoreWindow coordinates are in DIPs
(Device Independent Pixels), as in Direct2D. As a result, you must notify Direct2D of the change in DPI to
display any 2D assets or primitives correctly.

OnResuming: Handles CoreApplication::Resuming. The game app restores the game from a suspended state.

OnSuspending: Handles CoreApplication::Suspending. The game app saves its state to disk. It has 5 seconds to
save state to storage.

OnVisibilityChanged: Handles CoreWindow::VisibilityChanged. The game app has changed visibility, and has
either become visible or been made invisible by another app becoming visible.

OnWindowActivationChanged: Handles CoreWindow::Activated. The game app's main window has been
deactivated or activated, so it must remove focus and pause the game, or regain focus. In both cases, the overlay
indicates that the game is paused.

OnWindowClosed: Handles CoreWindow::Closed. The game app closes the main window and suspends the
game.

OnWindowSizeChanged: Handles CoreWindow::SizeChanged. The game app reallocates the graphics resources
and overlay to accommodate the size change, and then updates the render target.
Your own game must handle these events, because they are part of UWP app design.

Updating the game engine


Within the game loop in Run, the sample has implemented a basic state machine for handling all the major
actions the player can take. The highest level of this state machine deals with loading a game, playing a specific
level, or continuing a level after the game has been paused (by the system or the player).
In the game sample, there are 3 major states (UpdateEngineState) the game can be in:
Waiting for resources. The game loop is cycling, unable to transition until resources (specifically graphics
resources) are available. When the async task for loading resources completes, it updates the state to
ResourcesLoaded. This usually happens between levels when the level needs to load new resources from disk.
In the game sample, we simulate this behavior because the sample doesn't need any additional per-level
resources at that time.
Waiting for press. The game loop is cycling, waiting for specific user input. This input is a player action to load
a game, start a level, or continue a level. The sample code refers to these sub-states as PressResultState
enumeration values.
Dynamics. The game loop is running with the user playing. While the user is playing, the game checks for 3
conditions that it can transition on: the expiration of the set time for a level, the completion of a level by the
player, or the completion of all levels by the player.
Here's the code structure. The complete code is in Complete code for this section.
The structure of the state machine used to update the game engine

void App::Update()
{
m_controller->Update();

switch (m_updateState)
{
case UpdateEngineState::WaitingForResources:
// Waiting for initial load. Display an update once per 60 updates.
loadCount++;
if ((loadCount % 60) == 0)
{
m_loadingCount++;
SetGameInfoOverlay(m_gameInfoOverlayState);
}
break;

case UpdateEngineState::ResourcesLoaded:
switch (m_pressResult)
{
case PressResultState::LoadGame:
// ...
break;

case PressResultState::PlayLevel:
// ...
break;

case PressResultState::ContinueLevel:
// ...
break;
}
// ...
break;

case UpdateEngineState::WaitingForPress:
if (m_controller->IsPressComplete())
{

switch (m_pressResult)
{
case PressResultState::LoadGame:
// ...
break;

case PressResultState::PlayLevel:
// ...
break;

case PressResultState::ContinueLevel:
// ...
break;
}
}
break;

case UpdateEngineState::Dynamics:
if (m_controller->IsPauseRequested())
{
// ...
}
else
{
GameState runState = m_game->RunGame();
switch (runState)
{
case GameState::TimeExpired:
// ...
break;

case GameState::LevelComplete:
// ...
break;

case GameState::GameComplete:
// ...
break;
}
}

if (m_updateState == UpdateEngineState::WaitingForPress)
{
// Transitioning state, so enable waiting for the press event.
m_controller->WaitForPress(m_renderer->GameInfoOverlayUpperLeft(), m_renderer->GameInfoOverlayLowerRight());
}
if (m_updateState == UpdateEngineState::WaitingForResources)
{
// Transitioning state, so shut down the input controller until resources are loaded
m_controller->Active(false);
}

break;
}
}

Visually, the main game state machine looks like this:


We talk about the game logic itself in more detail in Defining the main game object. For now, the important
takeaway is that your game is a state machine. Each specific state must have very specific criteria to define it, and
the transitions from one state to another must be based on discrete user input or system actions (such as graphics
resource loading). When you are planning your game, draw out a diagram like the one we use, making sure you
address all possible actions the user or system can take at a high level. Games can be very complicated, and the
state machine is a powerful tool to visualize this complexity and make it very manageable.
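
If it helps to see that idea stripped of the game specifics, here's a minimal sketch of a discrete-event state machine in standard C++. The states and events are placeholders for illustration, not the sample's actual types.

enum class State { WaitingForResources, WaitingForPress, Dynamics };
enum class Event { ResourcesLoaded, PressReceived, PauseRequested };

// Every transition is driven by a discrete event (user input or a system
// action such as resource loading finishing), never by ambient polling.
State Transition(State current, Event e)
{
    switch (current)
    {
    case State::WaitingForResources:
        return (e == Event::ResourcesLoaded) ? State::WaitingForPress : current;
    case State::WaitingForPress:
        return (e == Event::PressReceived) ? State::Dynamics : current;
    case State::Dynamics:
        return (e == Event::PauseRequested) ? State::WaitingForPress : current;
    }
    return current;
}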
Of course, as you saw, there are state machines within state machines. There's one for the controller, which handles
all of the acceptable inputs the player can generate. In the diagram, a press is some form of user input. This state
machine doesn't care what it is, because it works at a higher level; it assumes that the state machine for the
controller will handle any transitions that affect movement and shooting behaviors, and the associated rendering
updates. We talk about managing input states in Adding controls.

Updating the user interface


We need to keep the player apprised of the state of the system, and allow them to change the high-level state
according to the rules of the game. For most games, this game sample included, this is done with a heads-up
display that contains representations of game state, and other play-specific info such as score, ammo, or the
number of chances remaining. We call this the overlay, because it is rendered separately from the main graphics
pipeline and placed on top of the 3D projection. In the sample game, we create this overlay using the Direct2D APIs.
We can also create this overlay using XAML, which we discuss in Extending the game sample.
There are two components to the user interface:
The heads-up display that contains the score and info about the current state of game play.
The pause bitmap, which is a black rectangle with text overlaid during the paused/suspended state of the game.
This is the game overlay. We discuss it further in Adding a user interface.
Unsurprisingly, the overlay has a state machine too. The overlay can display a level start or game over message. It
is essentially a canvas to output any info about game state that we display to the player when the game is paused
or suspended.
Here's how the game sample structures the overlay's state machine.

void App::SetGameInfoOverlay(GameInfoOverlayState state)
{
m_gameInfoOverlayState = state;
switch (state)
{

case GameInfoOverlayState::Loading:
m_renderer->InfoOverlay()->SetGameLoading(m_loadingCount);
break;

case GameInfoOverlayState::GameStats:
// ...
break;

case GameInfoOverlayState::LevelStart:
// ...
break;

case GameInfoOverlayState::GameOverCompleted:
// ...
break;

case GameInfoOverlayState::GameOverExpired:
// ...
break;

case GameInfoOverlayState::Pause:
// ...
break;
}
}

There are six state screens that the overlay displays, depending on the state of the game itself: a resources loading
screen at the start of the game, a game play screen, a level start message screen, a game over screen when all of
the levels are completed without time running out, a game over screen when time runs out, and a pause menu
screen.
Separating your user interface from your game's graphics pipeline allows you to work on it independent of the
game's graphics rendering engine and decreases the complexity of your game's code significantly.
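
One hedged way to express that separation in code is a narrow interface between the game state and the overlay renderer; the names here are illustrative, not the sample's.

// Illustrative only: the overlay sees game state through a narrow interface
// and never touches the 3D pipeline directly.
struct IGameOverlay
{
    virtual void SetState(GameInfoOverlayState state) = 0; // what to show
    virtual void Render() = 0; // composited on top of the 3D scene each frame
    virtual ~IGameOverlay() { }
};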

Next steps
This covers the basic structure of the game sample, and presents a good model for UWP game app development
with DirectX. Of course, there's more to it than this. We only walked through the skeleton of the game. Now, we
take an in-depth look at the game and its mechanics, and how those mechanics are implemented as the core game
object. We review that part in Defining the main game object.
It's also time to consider the sample game's graphics engine in greater detail. That part is covered in Assembling
the rendering pipeline.

Complete sample code for this section


App.h

//// THIS CODE AND INFORMATION IS PROVIDED "AS IS" WITHOUT WARRANTY OF
//// ANY KIND, EITHER EXPRESSED OR IMPLIED, INCLUDING BUT NOT LIMITED TO
//// THE IMPLIED WARRANTIES OF MERCHANTABILITY AND/OR FITNESS FOR A
//// PARTICULAR PURPOSE.
////
//// Copyright (c) Microsoft Corporation. All rights reserved

#pragma once

#include "Simple3DGame.h"

enum class UpdateEngineState
{
WaitingForResources,
ResourcesLoaded,
WaitingForPress,
Dynamics,
Snapped,
Suspended,
Deactivated,
};

enum class PressResultState
{
LoadGame,
PlayLevel,
ContinueLevel,
};

enum class GameInfoOverlayState
{
Loading,
GameStats,
GameOverExpired,
GameOverCompleted,
LevelStart,
Pause,
};

ref class App : public Windows::ApplicationModel::Core::IFrameworkView
{
internal:
App();

public:
// IFrameworkView Methods
virtual void Initialize(_In_ Windows::ApplicationModel::Core::CoreApplicationView^ applicationView);
virtual void SetWindow(_In_ Windows::UI::Core::CoreWindow^ window);
virtual void Load(_In_ Platform::String^ entryPoint);
virtual void Run();
virtual void Uninitialize();

private:
void InitializeGameState();

// Event Handlers
void OnSuspending(
_In_ Platform::Object^ sender,
_In_ Windows::ApplicationModel::SuspendingEventArgs^ args
);

void OnResuming(
_In_ Platform::Object^ sender,
_In_ Platform::Object^ args
);

void UpdateViewState();

void OnWindowActivationChanged(
_In_ Windows::UI::Core::CoreWindow^ sender,
_In_ Windows::UI::Core::WindowActivatedEventArgs^ args
);

void OnWindowSizeChanged(
_In_ Windows::UI::Core::CoreWindow^ sender,
_In_ Windows::UI::Core::WindowSizeChangedEventArgs^ args
);

void OnWindowClosed(
_In_ Windows::UI::Core::CoreWindow^ sender,
_In_ Windows::UI::Core::CoreWindowEventArgs^ args
);

void OnLogicalDpiChanged(
_In_ Platform::Object^ sender
);

void OnActivated(
_In_ Windows::ApplicationModel::Core::CoreApplicationView^ applicationView,
_In_ Windows::ApplicationModel::Activation::IActivatedEventArgs^ args
);

void OnVisibilityChanged(
_In_ Windows::UI::Core::CoreWindow^ sender,
_In_ Windows::UI::Core::VisibilityChangedEventArgs^ args
);

void Update();
void SetGameInfoOverlay(GameInfoOverlayState state);
void SetAction(GameInfoOverlayCommand command);
void ShowGameInfoOverlay();
void HideGameInfoOverlay();
void SetSnapped();
void HideSnapped();

bool m_windowClosed;
bool m_renderNeeded;
bool m_haveFocus;
bool m_visible;

MoveLookController^ m_controller;
GameRenderer^ m_renderer;
Simple3DGame^ m_game;

UpdateEngineState m_updateState;
UpdateEngineState m_updateStateNext;
PressResultState m_pressResult;
GameInfoOverlayState m_gameInfoOverlayState;
GameInfoOverlayCommand m_gameInfoOverlayCommand;
uint32 m_loadingCount;
};

ref class Direct3DApplicationSource : Windows::ApplicationModel::Core::IFrameworkViewSource
{
public:
virtual Windows::ApplicationModel::Core::IFrameworkView^ CreateView();
};

App.cpp

//--------------------------------------------------------------------------------------
//// THIS CODE AND INFORMATION IS PROVIDED "AS IS" WITHOUT WARRANTY OF
//// ANY KIND, EITHER EXPRESSED OR IMPLIED, INCLUDING BUT NOT LIMITED TO
//// THE IMPLIED WARRANTIES OF MERCHANTABILITY AND/OR FITNESS FOR A
//// PARTICULAR PURPOSE.
////
//// Copyright (c) Microsoft Corporation. All rights reserved

#include "pch.h"
#include "App.h"

using namespace concurrency;
using namespace DirectX;
using namespace Windows::ApplicationModel;
using namespace Windows::ApplicationModel::Activation;
using namespace Windows::ApplicationModel::Core;
using namespace Windows::Foundation;
using namespace Windows::Graphics::Display;
using namespace Windows::UI::Core;
using namespace Windows::UI::Input;
using namespace Windows::UI::ViewManagement;

App::App() :
m_windowClosed(false),
m_haveFocus(false),
m_gameInfoOverlayCommand(GameInfoOverlayCommand::None),
m_visible(true),
m_loadingCount(0),
m_updateState(UpdateEngineState::WaitingForResources)
{
}

//--------------------------------------------------------------------------------------

void App::Initialize(
_In_ CoreApplicationView^ applicationView
)
{
applicationView->Activated +=
ref new TypedEventHandler<CoreApplicationView^, IActivatedEventArgs^>(this, &App::OnActivated);

CoreApplication::Suspending +=
ref new EventHandler<SuspendingEventArgs^>(this, &App::OnSuspending);

CoreApplication::Resuming +=
ref new EventHandler<Platform::Object^>(this, &App::OnResuming);

m_controller = ref new MoveLookController();
m_renderer = ref new GameRenderer();
m_game = ref new Simple3DGame();
}

//--------------------------------------------------------------------------------------

void App::SetWindow(
_In_ CoreWindow^ window
)
{
window->PointerCursor = ref new CoreCursor(CoreCursorType::Arrow, 0);

PointerVisualizationSettings^ visualizationSettings = PointerVisualizationSettings::GetForCurrentView();
visualizationSettings->IsContactFeedbackEnabled = false;
visualizationSettings->IsBarrelButtonFeedbackEnabled = false;

window->SizeChanged +=
ref new TypedEventHandler<CoreWindow^, WindowSizeChangedEventArgs^>(this, &App::OnWindowSizeChanged);

window->Closed +=
ref new TypedEventHandler<CoreWindow^, CoreWindowEventArgs^>(this, &App::OnWindowClosed);

window->VisibilityChanged +=
ref new TypedEventHandler<CoreWindow^, VisibilityChangedEventArgs^>(this, &App::OnVisibilityChanged);

DisplayProperties::LogicalDpiChanged +=
ref new DisplayPropertiesEventHandler(this, &App::OnLogicalDpiChanged);

m_controller->Initialize(window);

m_controller->SetMoveRect(
XMFLOAT2(0.0f, window->Bounds.Height - GameConstants::TouchRectangleSize),
XMFLOAT2(GameConstants::TouchRectangleSize, window->Bounds.Height)
);
m_controller->SetFireRect(
XMFLOAT2(window->Bounds.Width - GameConstants::TouchRectangleSize, window->Bounds.Height -
GameConstants::TouchRectangleSize),
XMFLOAT2(window->Bounds.Width, window->Bounds.Height)
);

m_renderer->Initialize(window, DisplayProperties::LogicalDpi);
SetGameInfoOverlay(GameInfoOverlayState::Loading);
ShowGameInfoOverlay();
}

//--------------------------------------------------------------------------------------

void App::Load(
_In_ Platform::String^ /* entryPoint */
)
{
create_task([this]()
{
// Asynchronously initialize the game class and load the renderer device resources.
// By doing all this asynchronously, the game gets to its main loop more quickly
// and loads all the necessary resources in parallel on other threads.
m_game->Initialize(m_controller, m_renderer);

return m_renderer->CreateGameDeviceResourcesAsync(m_game);

}).then([this]()
{
// The finalize code needs to run in the same thread context
// in which the m_renderer object was created, because the D3D device context
// can ONLY be accessed on a single thread.
m_renderer->FinalizeCreateGameDeviceResources();

InitializeGameState();

if (m_updateState == UpdateEngineState::WaitingForResources)
{
// In the middle of a game, so spin up the async task to load the level.
create_task([this]()
{
return m_game->LoadLevelAsync();

}).then([this]()
{
// The m_game object may need to deal with D3D device context work, so
// again the finalize code needs to run in the same thread context
// in which the m_renderer object was created, because the D3D
// device context can ONLY be accessed on a single thread.
m_game->FinalizeLoadLevel();
m_updateState = UpdateEngineState::ResourcesLoaded;

}, task_continuation_context::use_current());
}
}, task_continuation_context::use_current());
}

//--------------------------------------------------------------------------------------

void App::Run()
{
while (!m_windowClosed)
{
if (m_visible)
{
switch (m_updateState)
{
case UpdateEngineState::Deactivated:
case UpdateEngineState::Snapped:
if (!m_renderNeeded)
{
// The App is not currently the active window, so just wait for events.
CoreWindow::GetForCurrentThread()->Dispatcher->ProcessEvents(CoreProcessEventsOption::ProcessOneAndAllPending);
break;
}
// Otherwise, fall through and do normal processing to get the rendering handled.
default:
CoreWindow::GetForCurrentThread()->Dispatcher->ProcessEvents(CoreProcessEventsOption::ProcessAllIfPresent);
Update();
m_renderer->Render();
m_renderNeeded = false;
}
}
else
{
CoreWindow::GetForCurrentThread()->Dispatcher->ProcessEvents(CoreProcessEventsOption::ProcessOneAndAllPending);
}
}
m_game->OnSuspending(); // Exiting due to window close. Make sure to save state.
}

//--------------------------------------------------------------------------------------

void App::Uninitialize()
{
}

//--------------------------------------------------------------------------------------

void App::OnWindowSizeChanged(
_In_ CoreWindow^ window,
_In_ WindowSizeChangedEventArgs^ /* args */
)
{
UpdateViewState();
m_renderer->UpdateForWindowSizeChange();

// The location of the GameInfoOverlay may have changed with the size change, so update the controller.
m_controller->SetMoveRect(
XMFLOAT2(0.0f, window->Bounds.Height - GameConstants::TouchRectangleSize),
XMFLOAT2(GameConstants::TouchRectangleSize, window->Bounds.Height)
);
m_controller->SetFireRect(
XMFLOAT2(window->Bounds.Width - GameConstants::TouchRectangleSize, window->Bounds.Height -
GameConstants::TouchRectangleSize),
XMFLOAT2(window->Bounds.Width, window->Bounds.Height)
);

if (m_updateState == UpdateEngineState::WaitingForPress)
{
m_controller->WaitForPress(m_renderer->GameInfoOverlayUpperLeft(), m_renderer->GameInfoOverlayLowerRight());
}
}

//--------------------------------------------------------------------------------------

void App::OnWindowClosed(
_In_ CoreWindow^ /* sender */,
_In_ CoreWindowEventArgs^ /* args */
)
{
m_windowClosed = true;
}

//--------------------------------------------------------------------------------------
void App::OnLogicalDpiChanged(
_In_ Platform::Object^ /* sender */
)
{
m_renderer->SetDpi(DisplayProperties::LogicalDpi);

// The GameInfoOverlay may have been recreated as a result of DPI changes, so
// regenerate the data.
SetGameInfoOverlay(m_gameInfoOverlayState);
SetAction(m_gameInfoOverlayCommand);
}

//--------------------------------------------------------------------------------------

void App::OnActivated(
_In_ CoreApplicationView^ /* applicationView */,
_In_ IActivatedEventArgs^ /* args */
)
{
CoreWindow::GetForCurrentThread()->Activated +=
ref new TypedEventHandler<CoreWindow^, WindowActivatedEventArgs^>(this, &App::OnWindowActivationChanged);
CoreWindow::GetForCurrentThread()->Activate();
}

//--------------------------------------------------------------------------------------

void App::OnVisibilityChanged(
_In_ CoreWindow^ /* sender */,
_In_ VisibilityChangedEventArgs^ args
)
{
m_visible = args->Visible;
}

//--------------------------------------------------------------------------------------

void App::InitializeGameState()
{
// Set up the initial state machine for handling the Game playing state.
if (m_game->GameActive() && m_game->LevelActive())
{
// The last time the game terminated it was in the middle
// of a level.
// We are waiting for the user to continue the game.
m_updateState = UpdateEngineState::WaitingForResources;
m_pressResult = PressResultState::ContinueLevel;
SetGameInfoOverlay(GameInfoOverlayState::Pause);
SetAction(GameInfoOverlayCommand::PleaseWait);
}
else if (!m_game->GameActive() && (m_game->HighScore().totalHits > 0))
{
// The last time the game terminated the game had been completed.
// Show the high score.
// We are waiting for the user to acknowledge the high score and start a new game.
// The level resources for the first level will be loaded later.
m_updateState = UpdateEngineState::WaitingForPress;
m_pressResult = PressResultState::LoadGame;
SetGameInfoOverlay(GameInfoOverlayState::GameStats);
m_controller->WaitForPress(m_renderer->GameInfoOverlayUpperLeft(), m_renderer->GameInfoOverlayLowerRight());
SetAction(GameInfoOverlayCommand::TapToContinue);
}
else
{
// This is either the first time the game has run or
// the last time the game terminated the level was completed.
// We are waiting for the user to begin the next level.
m_updateState = UpdateEngineState::WaitingForResources;
m_pressResult = PressResultState::PlayLevel;
SetGameInfoOverlay(GameInfoOverlayState::LevelStart);
SetAction(GameInfoOverlayCommand::PleaseWait);
}
ShowGameInfoOverlay();
}

//--------------------------------------------------------------------------------------

void App::Update()
{
static uint32 loadCount = 0;

m_controller->Update();

switch (m_updateState)
{
case UpdateEngineState::WaitingForResources:
// Waiting for the initial load. Display an update once per 60 updates.
loadCount++;
if ((loadCount % 60) == 0)
{
m_loadingCount++;
SetGameInfoOverlay(m_gameInfoOverlayState);
}
break;

case UpdateEngineState::ResourcesLoaded:
switch (m_pressResult)
{
case PressResultState::LoadGame:
SetGameInfoOverlay(GameInfoOverlayState::GameStats);
break;

case PressResultState::PlayLevel:
SetGameInfoOverlay(GameInfoOverlayState::LevelStart);
break;

case PressResultState::ContinueLevel:
SetGameInfoOverlay(GameInfoOverlayState::Pause);
break;
}
m_updateState = UpdateEngineState::WaitingForPress;
SetAction(GameInfoOverlayCommand::TapToContinue);
m_controller->WaitForPress(m_renderer->GameInfoOverlayUpperLeft(), m_renderer->GameInfoOverlayLowerRight());
ShowGameInfoOverlay();
m_renderNeeded = true;
break;

case UpdateEngineState::WaitingForPress:
if (m_controller->IsPressComplete())
{
switch (m_pressResult)
{
case PressResultState::LoadGame:
m_updateState = UpdateEngineState::WaitingForResources;
m_pressResult = PressResultState::PlayLevel;
m_controller->Active(false);
m_game->LoadGame();
SetAction(GameInfoOverlayCommand::PleaseWait);
SetGameInfoOverlay(GameInfoOverlayState::LevelStart);
ShowGameInfoOverlay();

m_game->LoadLevelAsync().then([this]()
{
m_game->FinalizeLoadLevel();
m_updateState = UpdateEngineState::ResourcesLoaded;

}, task_continuation_context::use_current());
break;
case PressResultState::PlayLevel:
m_updateState = UpdateEngineState::Dynamics;
HideGameInfoOverlay();
m_controller->Active(true);
m_game->StartLevel();
break;

case PressResultState::ContinueLevel:
m_updateState = UpdateEngineState::Dynamics;
HideGameInfoOverlay();
m_controller->Active(true);
m_game->ContinueGame();
break;
}
}
break;

case UpdateEngineState::Dynamics:
if (m_controller->IsPauseRequested())
{
m_game->PauseGame();
SetGameInfoOverlay(GameInfoOverlayState::Pause);
SetAction(GameInfoOverlayCommand::TapToContinue);
m_updateState = UpdateEngineState::WaitingForPress;
m_pressResult = PressResultState::ContinueLevel;
ShowGameInfoOverlay();
}
else
{
GameState runState = m_game->RunGame();
switch (runState)
{
case GameState::TimeExpired:
SetAction(GameInfoOverlayCommand::TapToContinue);
SetGameInfoOverlay(GameInfoOverlayState::GameOverExpired);
ShowGameInfoOverlay();
m_updateState = UpdateEngineState::WaitingForPress;
m_pressResult = PressResultState::LoadGame;
break;

case GameState::LevelComplete:
SetAction(GameInfoOverlayCommand::PleaseWait);
SetGameInfoOverlay(GameInfoOverlayState::LevelStart);
ShowGameInfoOverlay();
m_updateState = UpdateEngineState::WaitingForResources;
m_pressResult = PressResultState::PlayLevel;

m_game->LoadLevelAsync().then([this]()
{
m_game->FinalizeLoadLevel();
m_updateState = UpdateEngineState::ResourcesLoaded;

}, task_continuation_context::use_current());
break;

case GameState::GameComplete:
SetAction(GameInfoOverlayCommand::TapToContinue);
SetGameInfoOverlay(GameInfoOverlayState::GameOverCompleted);
ShowGameInfoOverlay();
m_updateState = UpdateEngineState::WaitingForPress;
m_pressResult = PressResultState::LoadGame;
break;
}
}

if (m_updateState == UpdateEngineState::WaitingForPress)
{
// Transitioning state, so enable waiting for the press event.
m_controller->WaitForPress(m_renderer->GameInfoOverlayUpperLeft(), m_renderer->GameInfoOverlayLowerRight());
}
if (m_updateState == UpdateEngineState::WaitingForResources)
{
// Transitioning state, so shut down the input controller until resources are loaded.
m_controller->Active(false);
}
break;
}
}

//--------------------------------------------------------------------------------------

void App::OnWindowActivationChanged(
_In_ Windows::UI::Core::CoreWindow^ /* sender */,
_In_ Windows::UI::Core::WindowActivatedEventArgs^ args
)
{
if (args->WindowActivationState == CoreWindowActivationState::Deactivated)
{
m_haveFocus = false;

switch (m_updateState)
{
case UpdateEngineState::Dynamics:
// From Dynamics mode, when coming out of Deactivated, rather than going directly back into game play,
// go to the paused state, waiting for user input to continue.
m_updateStateNext = UpdateEngineState::WaitingForPress;
m_pressResult = PressResultState::ContinueLevel;
SetGameInfoOverlay(GameInfoOverlayState::Pause);
ShowGameInfoOverlay();
m_game->PauseGame();
m_updateState = UpdateEngineState::Deactivated;
SetAction(GameInfoOverlayCommand::None);
m_renderNeeded = true;
break;

case UpdateEngineState::WaitingForResources:
case UpdateEngineState::WaitingForPress:
m_updateStateNext = m_updateState;
m_updateState = UpdateEngineState::Deactivated;
SetAction(GameInfoOverlayCommand::None);
ShowGameInfoOverlay();
m_renderNeeded = true;
break;
}
// Don't have focus, so shutdown input processing.
m_controller->Active(false);
}
else if (args->WindowActivationState == CoreWindowActivationState::CodeActivated
|| args->WindowActivationState == CoreWindowActivationState::PointerActivated)
{
m_haveFocus = true;

if (m_updateState == UpdateEngineState::Deactivated)
{
m_updateState = m_updateStateNext;

if (m_updateState == UpdateEngineState::WaitingForPress)
{
SetAction(GameInfoOverlayCommand::TapToContinue);
m_controller->WaitForPress(m_renderer->GameInfoOverlayUpperLeft(), m_renderer->GameInfoOverlayLowerRight());
}
else if (m_updateStateNext == UpdateEngineState::WaitingForResources)
{
SetAction(GameInfoOverlayCommand::PleaseWait);
}
}
}
}

//--------------------------------------------------------------------------------------

void App::OnSuspending(
_In_ Platform::Object^ /* sender */,
_In_ SuspendingEventArgs^ args
)
{
// Save application state.
// If your application needs time to complete a lengthy operation, it can request a deferral.
// The SuspendingOperation has a deadline time. Make sure all your operations are complete by that time!
// If the app doesn't return from this handler within five seconds, it will be terminated.
SuspendingOperation^ op = args->SuspendingOperation;
SuspendingDeferral^ deferral = op->GetDeferral();

create_task([=]()
{
switch (m_updateState)
{
case UpdateEngineState::Dynamics:
// The game is in the active game play state. Stop the game timer, pause play, and save the state.
SetAction(GameInfoOverlayCommand::None);
SetGameInfoOverlay(GameInfoOverlayState::Pause);
m_updateStateNext = UpdateEngineState::WaitingForPress;
m_pressResult = PressResultState::ContinueLevel;
m_game->PauseGame();
break;

case UpdateEngineState::WaitingForResources:
case UpdateEngineState::WaitingForPress:
m_updateStateNext = m_updateState;
break;

default:
// Any other state is transient: m_updateStateNext has already been set, so don't overwrite it.
break;
}
m_updateState = UpdateEngineState::Suspended;

m_controller->Active(false);
m_game->OnSuspending();

deferral->Complete();
});
}

//--------------------------------------------------------------------------------------

void App::OnResuming(
_In_ Platform::Object^ /* sender */,
_In_ Platform::Object^ /* args */
)
{
if (m_haveFocus)
{
m_updateState = m_updateStateNext;
}
else
{
m_updateState = UpdateEngineState::Deactivated;
}

if (m_updateState == UpdateEngineState::WaitingForPress)
{
SetAction(GameInfoOverlayCommand::TapToContinue);
m_controller->WaitForPress(m_renderer->GameInfoOverlayUpperLeft(), m_renderer->GameInfoOverlayLowerRight());
}
m_game->OnResuming();
ShowGameInfoOverlay();
m_renderNeeded = true;
}

//--------------------------------------------------------------------------------------

void App::UpdateViewState()
{
m_renderNeeded = true;

if (ApplicationView::Value == ApplicationViewState::Snapped)
{
switch (m_updateState)
{
case UpdateEngineState::Dynamics:
// From Dynamic mode, when coming out of SNAPPED layout rather than going directly back into game play,
// go to the paused state and wait for user input to continue.
m_updateStateNext = UpdateEngineState::WaitingForPress;
m_pressResult = PressResultState::ContinueLevel;
SetGameInfoOverlay(GameInfoOverlayState::Pause);
SetAction(GameInfoOverlayCommand::TapToContinue);
m_game->PauseGame();
break;

case UpdateEngineState::WaitingForResources:
case UpdateEngineState::WaitingForPress:
// Avoid corrupting the m_updateStateNext on a transition from Snapped -> Snapped.
// Otherwise, just cache the current state and return to it when leaving SNAPPED layout.

m_updateStateNext = m_updateState;
break;

default:
break;
}

m_updateState = UpdateEngineState::Snapped;
m_controller->Active(false);
HideGameInfoOverlay();
SetSnapped();
}
else if (ApplicationView::Value == ApplicationViewState::Filled ||
ApplicationView::Value == ApplicationViewState::FullScreenLandscape ||
ApplicationView::Value == ApplicationViewState::FullScreenPortrait)
{
if (m_updateState == UpdateEngineState::Snapped)
{
HideSnapped();
ShowGameInfoOverlay();
m_renderNeeded = true;

if (m_haveFocus)
{
if (m_updateStateNext == UpdateEngineState::WaitingForPress)
{
SetAction(GameInfoOverlayCommand::TapToContinue);
m_controller->WaitForPress(m_renderer->GameInfoOverlayUpperLeft(), m_renderer->GameInfoOverlayLowerRight());
}
else if (m_updateStateNext == UpdateEngineState::WaitingForResources)
{
SetAction(GameInfoOverlayCommand::PleaseWait);
}

m_updateState = m_updateStateNext;
}
else
{
m_updateState = UpdateEngineState::Deactivated;
SetAction(GameInfoOverlayCommand::None);
}
}
}
}

//--------------------------------------------------------------------------------------

void App::SetGameInfoOverlay(GameInfoOverlayState state)
{
m_gameInfoOverlayState = state;
switch (state)
{
case GameInfoOverlayState::Loading:
m_renderer->InfoOverlay()->SetGameLoading(m_loadingCount);
break;

case GameInfoOverlayState::GameStats:
m_renderer->InfoOverlay()->SetGameStats(
m_game->HighScore().levelCompleted + 1,
m_game->HighScore().totalHits,
m_game->HighScore().totalShots
);
break;

case GameInfoOverlayState::LevelStart:
m_renderer->InfoOverlay()->SetLevelStart(
m_game->LevelCompleted() + 1,
m_game->CurrentLevel()->Objective(),
m_game->CurrentLevel()->TimeLimit(),
m_game->BonusTime()
);
break;

case GameInfoOverlayState::GameOverCompleted:
m_renderer->InfoOverlay()->SetGameOver(
true,
m_game->LevelCompleted() + 1,
m_game->TotalHits(),
m_game->TotalShots(),
m_game->HighScore().totalHits
);
break;

case GameInfoOverlayState::GameOverExpired:
m_renderer->InfoOverlay()->SetGameOver(
false,
m_game->LevelCompleted(),
m_game->TotalHits(),
m_game->TotalShots(),
m_game->HighScore().totalHits
);
break;

case GameInfoOverlayState::Pause:
m_renderer->InfoOverlay()->SetPause();
break;
}
}

//--------------------------------------------------------------------------------------

void App::SetAction(GameInfoOverlayCommand command)
{
m_gameInfoOverlayCommand = command;
m_renderer->InfoOverlay()->SetAction(command);
}

//--------------------------------------------------------------------------------------

void App::ShowGameInfoOverlay()
{
m_renderer->InfoOverlay()->ShowGameInfoOverlay();
}

//--------------------------------------------------------------------------------------

void App::HideGameInfoOverlay()
{
m_renderer->InfoOverlay()->HideGameInfoOverlay();
}

//--------------------------------------------------------------------------------------

void App::SetSnapped()
{
m_renderer->InfoOverlay()->SetPause();
m_renderer->InfoOverlay()->ShowGameInfoOverlay();
}

//--------------------------------------------------------------------------------------

void App::HideSnapped()
{
m_renderer->InfoOverlay()->HideGameInfoOverlay();
SetGameInfoOverlay(m_gameInfoOverlayState);
}

//--------------------------------------------------------------------------------------

IFrameworkView^ Direct3DApplicationSource::CreateView()
{
return ref new App();
}

//--------------------------------------------------------------------------------------

[Platform::MTAThread]
int main(Platform::Array<Platform::String^>^)
{
auto direct3DApplicationSource = ref new Direct3DApplicationSource();
CoreApplication::Run(direct3DApplicationSource);
return 0;
}

//--------------------------------------------------------------------------------------
Define the main game object

[ Updated for UWP apps on Windows 10. For Windows 8.x articles, see the archive ]
At this point, we've laid out the basic framework of the sample game, and we implemented a state machine that
handles the high-level user and system behaviors. But we haven't examined the part that makes the game sample
an actual game: the rules and mechanics, and how they're implemented! Now, we look at the details of the game
sample's main object and how the rules it implements translate into interactions with the game world.

Objective
To apply the basic development techniques when implementing the rules and mechanics of a simple Universal
Windows Platform (UWP) game using DirectX.

Considering the game's flow


The majority of the game's basic structure is defined in these files:
App.cpp
Simple3DGame.cpp
In Defining the game's UWP app framework, we reviewed the game framework defined in App.cpp.
Simple3DGame.cpp provides the code for a class, Simple3DGame, which specifies the implementation of the
game play itself. Earlier, we considered the treatment of the sample game as a UWP app. Now, we look at the code
that makes it a game.
The complete code for Simple3DGame.h/.cpp is provided in Complete sample code for this section.
Let's take a look at the definition of the Simple3DGame class.

Defining the core game object


When the app singleton starts, the view provider's Initialize method creates an instance of the main game class,
the Simple3DGame object. This object contains the methods that communicate changes in game state to the
state machine defined in the app framework, or from the app to the game object itself. It also contains methods
that return info for updating the game's overlay bitmap and heads-up display, and for updating the animations
and physics (the dynamics) in the game. The code for obtaining the graphics device resources used by the game is
found in GameRenderer.cpp, which we discuss next in Assembling the rendering framework.
The code for Simple3DGame looks like this:
ref class GameRenderer;

ref class Simple3DGame
{
internal:
Simple3DGame();

void Initialize(
_In_ MoveLookController^ controller,
_In_ GameRenderer^ renderer
);

void LoadGame();
concurrency::task<void> LoadLevelAsync();
void FinalizeLoadLevel();
void StartLevel();
void PauseGame();
void ContinueGame();
GameState RunGame();

void OnSuspending();
void OnResuming();

// ... Global variable retrieval methods are defined here ...

private:
void LoadState();
void SaveState();
void SaveHighScore();
void LoadHighScore();
void InitializeAmmo();
void UpdateDynamics();

// ...
// ... Global variables are defined here.
// ...
};

First, let's review the internal methods defined on Simple3DGame.


Initialize. Sets the starting values of the global variables and initializes the game objects.
LoadGame. Initializes a new level and starts loading it.
LoadLevelAsync. Starts an async task (see the Parallel Patterns Library for more details) to initialize the level
and then invoke an async task on the renderer to load the device-specific level resources. This method runs in a
separate thread; as a result, only ID3D11Device methods (as opposed to ID3D11DeviceContext methods)
can be called from this thread. Any device context methods are called in the FinalizeLoadLevel method.
(A minimal sketch of this two-phase pattern appears after these method summaries.)
FinalizeLoadLevel. Completes any work for level loading that needs to be done on the main thread. This
includes any calls to Direct3D 11 device context (ID3D11DeviceContext) methods.
StartLevel. Starts the game play for a new level.
PauseGame. Pauses the game.
RunGame. Runs an iteration of the game loop. It's called from App::Update one time every iteration of the
game loop if the game state is Active.
OnSuspending and OnResuming. Suspends and resumes the game's audio, respectively.
And the private methods:
LoadState and SaveState. Loads and saves the current state of the game, respectively.
SaveHighScore and LoadHighScore. Saves and loads the high score across games, respectively.
InitializeAmmo. Resets the state of each sphere object used as ammunition back to its original state for the
beginning of each round.
UpdateDynamics. This is an important method, because it updates all the game objects based on canned
animation routines, physics, and control input. This is the heart of the interactivity that defines the game. We
talk about it more in the Updating the game world section.
The other public methods are property getters that return game play and overlay-specific information to the app
framework for display.
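
Here's a minimal sketch of the two-phase load pattern mentioned above, using the Parallel Patterns Library. The free functions stand in for the sample's Simple3DGame methods; only the task machinery is the real PPL API.

#include <ppltasks.h>

// Stand-ins for the sample's methods; illustrative only.
concurrency::task<void> LoadLevelAsync(); // thread-safe ID3D11Device work
void FinalizeLoadLevel();                 // main-thread ID3D11DeviceContext work

void BeginLoadLevel()
{
    // use_current() runs the continuation back on the calling thread, which
    // keeps all device context access on the single thread that owns it.
    LoadLevelAsync().then([]()
    {
        FinalizeLoadLevel();
    }, concurrency::task_continuation_context::use_current());
}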

Defining the game state variables


One function of the game object is to serve as a container for the data that defines a game session, level, or
lifetime, depending on how you define your game at a high level. In this case, the game state data is for the
lifetime of the game, initialized one time when a user launches the game.
Here's the complete set of definitions for the game object's state variables.

private:
MoveLookController^ m_controller;
GameRenderer^ m_renderer;
Camera^ m_camera;

Audio^ m_audioController;

std::vector<Sphere^> m_ammo;
uint32 m_ammoCount;
uint32 m_ammoNext;

HighScoreEntry m_topScore;
PersistentState^ m_savedState;

GameTimer^ m_timer;
bool m_gameActive;
bool m_levelActive;
int m_totalHits;
int m_totalShots;
float m_levelDuration;
float m_levelBonusTime;
float m_levelTimeRemaining;
std::vector<Level^> m_level;
uint32 m_levelCount;
uint32 m_currentLevel;

Sphere^ m_player;
std::vector<GameObject^> m_object; // object list for intersections
std::vector<GameObject^> m_renderObject; // all objects to be rendered

DirectX::XMFLOAT3 m_minBound;
DirectX::XMFLOAT3 m_maxBound;

At the top of the code example, there are four objects whose instances are updated as the game loop runs.
The MoveLookController object. This object represents the player input. (For more info about the
MoveLookController object, see Adding controls.)
The GameRenderer object. This object represents the Direct3D 11 renderer derived from the DirectXBase
class that handles all the device-specific objects and their rendering. (For more info, see Assembling the
rendering pipeline).
The Camera object. This object represents the player's first-person view of the game world. (For more info
about the Camera object, see Assembling the rendering pipeline.)
The Audio object. This object controls the audio playback for the game. (For more info about the Audio object,
see Adding sound.)
The rest of the game variables contain the lists of the primitives and their respective in-game amounts, and game
play specific data and constraints. Let's see how the sample configures these variables when the game is initialized.

Initializing and starting the game


When a player starts the game, the game object must initialize its state, create and add the overlay, set the
variables that track the player's performance, and instantiate the objects that it will use to build the levels.

void Simple3DGame::Initialize(
_In_ MoveLookController^ controller,
_In_ GameRenderer^ renderer
)
{
// This method is expected to be called as an asynchronous task.
// Make sure that you don't call rendering methods on the
// m_renderer as this would result in the D3D Context being
// used in multiple threads, which is not allowed.

m_controller = controller;
m_renderer = renderer;

m_audioController = ref new Audio;
m_audioController->CreateDeviceIndependentResources();

m_ammo = std::vector<Sphere^>(GameConstants::MaxAmmo);
m_object = std::vector<GameObject^>();
m_renderObject = std::vector<GameObject^>();
m_level = std::vector<Level^>();

m_savedState = ref new PersistentState();
m_savedState->Initialize(ApplicationData::Current->LocalSettings->Values, "Game");

m_timer = ref new GameTimer();

// Create a sphere primitive to represent the player.
// The sphere is used to handle collisions and constrain the player in the world.
// It's not rendered, so it's not added to the list of render objects.
m_player = ref new Sphere(XMFLOAT3(0.0f, -1.3f, 4.0f), 0.2f);

m_camera = ref new Camera;
m_camera->SetProjParams(XM_PI / 2, 1.0f, 0.01f, 100.0f);
m_camera->SetViewParams(
m_player->Position(), // Eye point in world coordinates.
XMFLOAT3 (0.0f, 0.7f, 0.0f), // Look at point in world coordinates.
XMFLOAT3 (0.0f, 1.0f, 0.0f) // The Up vector for the camera.
);

m_controller->Pitch(m_camera->Pitch());
m_controller->Yaw(m_camera->Yaw());

// Add the m_player object to the object list to do intersection calculations.
m_object.push_back(m_player);
m_player->Active(true);

// Instantiate the world primitive. This object maintains the geometry and
// material properties of the walls, floor, and ceiling of the enclosing world.
// The TargetId is used to identify the world objects so that the right geometry
// and textures can be associated with them later after those resources have
// been created.
GameObject^ world = ref new GameObject();
world->TargetId(GameConstants::WorldFloorId);
world->Active(true);
m_renderObject.push_back(world);

world = ref new GameObject();
world->TargetId(GameConstants::WorldCeilingId);
world->Active(true);
m_renderObject.push_back(world);

world = ref new GameObject();
world->TargetId(GameConstants::WorldWallsId);
world->Active(true);
m_renderObject.push_back(world);

// The min and max bounds define the world space of the game.
// All camera motion and dynamics are confined to this space.
m_minBound = XMFLOAT3(-4.0f, -3.0f, -6.0f);
m_maxBound = XMFLOAT3(4.0f, 3.0f, 6.0f);

// Instantiate the cylinders for use in the various game levels.
// Each cylinder has a different initial position, radius, and direction vector,
// but share a common set of material properties.
for (int a = 0; a < GameConstants::MaxCylinders; a++)
{
Cylinder^ cylinder;
switch (a)
{
case 0:
cylinder = ref new Cylinder(XMFLOAT3(-2.0f, -3.0f, 0.0f), 0.25f, XMFLOAT3(0.0f, 6.0f, 0.0f));
break;
case 1:
cylinder = ref new Cylinder(XMFLOAT3(2.0f, -3.0f, 0.0f), 0.25f, XMFLOAT3(0.0f, 6.0f, 0.0f));
break;
case 2:
cylinder = ref new Cylinder(XMFLOAT3(0.0f, -3.0f, -2.0f), 0.25f, XMFLOAT3(0.0f, 6.0f, 0.0f));
break;
case 3:
cylinder = ref new Cylinder(XMFLOAT3(-1.5f, -3.0f, -4.0f), 0.25f, XMFLOAT3(0.0f, 6.0f, 0.0f));
break;
case 4:
cylinder = ref new Cylinder(XMFLOAT3(1.5f, -3.0f, -4.0f), 0.50f, XMFLOAT3(0.0f, 6.0f, 0.0f));
break;
}
cylinder->Active(true);
m_object.push_back(cylinder);
m_renderObject.push_back(cylinder);
}

MediaReader^ mediaReader = ref new MediaReader;
auto targetHitSound = mediaReader->LoadMedia("hit.wav");

// Instantiate the targets for use in the game.
// Each target has a different initial position, size, and orientation,
// but share a common set of material properties.
// The target is defined by a position and two vectors that define both
// the plane of the target in world space and the size of the parallelogram
// based on the lengths of the vectors.
// Each target is assigned a number for identification purposes.
// The Target ID number is 1 based.
// All targets have the same material properties.
for (int a = 1; a < GameConstants::MaxTargets; a++)
{
Face^ target;
switch (a)
{
case 1:
target = ref new Face(XMFLOAT3(-2.5f, -1.0f, -1.5f), XMFLOAT3(-1.5f, -1.0f, -2.0f), XMFLOAT3(-2.5f, 1.0f, -1.5f));
break;
case 2:
target = ref new Face(XMFLOAT3(-1.0f, 1.0f, -3.0f), XMFLOAT3(0.0f, 1.0f, -3.0f), XMFLOAT3(-1.0f, 2.0f, -3.0f));
break;
case 3:
target = ref new Face(XMFLOAT3(1.5f, 0.0f, -3.0f), XMFLOAT3(2.5f, 0.0f, -2.0f), XMFLOAT3(1.5f, 2.0f, -3.0f));
break;
case 4:
target = ref new Face(XMFLOAT3(-2.5f, -1.0f, -5.5f), XMFLOAT3(-0.5f, -1.0f, -5.5f), XMFLOAT3(-2.5f, 1.0f, -5.5f));
break;
case 5:
target = ref new Face(XMFLOAT3(0.5f, -2.0f, -5.0f), XMFLOAT3(1.5f, -2.0f, -5.0f), XMFLOAT3(0.5f, 0.0f, -5.0f));
break;
case 6:
target = ref new Face(XMFLOAT3(1.5f, -2.0f, -5.5f), XMFLOAT3(2.5f, -2.0f, -5.0f), XMFLOAT3(1.5f, 0.0f, -5.5f));
break;
case 7:
target = ref new Face(XMFLOAT3(0.0f, 0.0f, 0.0f), XMFLOAT3(0.5f, 0.0f, 0.0f), XMFLOAT3(0.0f, 0.5f, 0.0f));
break;
case 8:
target = ref new Face(XMFLOAT3(0.0f, 0.0f, 0.0f), XMFLOAT3(0.5f, 0.0f, 0.0f), XMFLOAT3(0.0f, 0.5f, 0.0f));
break;
case 9:
target = ref new Face(XMFLOAT3(0.0f, 0.0f, 0.0f), XMFLOAT3(0.5f, 0.0f, 0.0f), XMFLOAT3(0.0f, 0.5f, 0.0f));
break;
}

target->Target(true);
target->TargetId(a);
target->Active(true);
target->HitSound(ref new SoundEffect());
target->HitSound()->Initialize(
m_audioController->SoundEffectEngine(),
mediaReader->GetOutputWaveFormatEx(),
targetHitSound);

m_object.push_back(target);
m_renderObject.push_back(target);
}

// Instantiate a set of spheres to be used as ammunition for the game
// and set the material properties of the spheres.
auto ammoHitSound = mediaReader->LoadMedia("bounce.wav");

for (int a = 0; a < GameConstants::MaxAmmo; a++)
{
m_ammo[a] = ref new Sphere;
m_ammo[a]->Radius(GameConstants::AmmoRadius);
m_ammo[a]->HitSound(ref new SoundEffect());
m_ammo[a]->HitSound()->Initialize(
m_audioController->SoundEffectEngine(),
mediaReader->GetOutputWaveFormatEx(),
ammoHitSound);
m_ammo[a]->Active(false);
m_renderObject.push_back(m_ammo[a]);
}

// Instantiate each of the game levels. The Level class contains methods
// that initialize the objects in the world for the given level and also
// define any motion paths for the objects in that level.

m_level.push_back(ref new Level1);
m_level.push_back(ref new Level2);
m_level.push_back(ref new Level3);
m_level.push_back(ref new Level4);
m_level.push_back(ref new Level5);
m_level.push_back(ref new Level6);
m_levelCount = static_cast<uint32>(m_level.size());

// Load the top score from disk if it exists.
LoadHighScore();

// Load the currentScore for saved state.
LoadState();

m_controller->Active(false);
}

The sample game sets up the components of the game object in this order:
1. A new audio playback object is created.
2. Arrays for the game's graphic primitives are created, including arrays for the level primitives, ammo, and
obstacles.
3. A location for saving game state data is created, named Game, and placed in the app data settings storage
location specified by ApplicationData::Current.
4. A game timer and the initial in-game overlay bitmap are created.
5. A new camera is created with a specific set of view and projection parameters.
6. The input device (the controller) is set to the same starting pitch and yaw as the camera, so the player has a 1-to-1 correspondence between the starting control position and the camera position.
7. The player object is created and set to active. We use a sphere object to detect the player's proximity to walls
and obstacles and to keep the camera from getting put in a position that might break immersion.
8. The game world primitive is created.
9. The cylinder obstacles are created.
10. The targets (Face objects) are created and numbered.
11. The ammo spheres are created.
12. The levels are created.
13. The high score is loaded.
14. Any prior saved game state is loaded.
The game now has instances of all the key components: the world, the player, the obstacles, the targets, and the
ammo spheres. It also has instances of the levels, which represent configurations of all of the above components
and their behaviors for each specific level. Let's see how the game builds the levels.

Building and loading the game's levels


Most of the heavy lifting for the level construction is done in the Level.h/.cpp file, which we won't delve into,
because it focuses on a very specific implementation. The important thing is that the code for each level is run as a
separate LevelN object. If you'd like to extend the game, you can create a Level object that takes an assigned
number as a parameter and randomly places the obstacles and targets. Or, you can have it load level configuration
data from a resource file, or even the Internet! (A minimal sketch of the first idea follows the next paragraph.)
The complete code for Level.h/.cpp is provided in Complete sample code for this section.
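
As that sketch, level layout could be generated from the level number instead of hard-coded per LevelN class. Everything below is hypothetical except the world bounds, which match the sample's m_minBound and m_maxBound values.

#include <random>
#include <vector>

// Hypothetical data-driven layout: positions for a level's targets.
struct TargetPlacement { float x, y, z; };

std::vector<TargetPlacement> MakeRandomLayout(int levelNumber, int targetCount)
{
    std::mt19937 rng(levelNumber); // seed from the level number: random but repeatable
    std::uniform_real_distribution<float> x(-4.0f, 4.0f); // world bounds taken from
    std::uniform_real_distribution<float> y(-3.0f, 3.0f); // the sample's m_minBound
    std::uniform_real_distribution<float> z(-6.0f, 6.0f); // and m_maxBound

    std::vector<TargetPlacement> layout;
    for (int i = 0; i < targetCount; i++)
    {
        layout.push_back({ x(rng), y(rng), z(rng) });
    }
    return layout;
}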

Defining the game play


At this point, we have all the components we need to assemble the game. The levels have been constructed in
memory from the primitives, and are ready for the player to start interacting with them in some fashion.
Now, the best games react instantly to player input, and provide immediate feedback. This is true for any type of
game, from twitch-action, real-time shoot-em-ups to thoughtful, turn-based strategy games.
In Defining the game's UWP framework, we looked at the overall state machine that governs the flow of the game.
Remember, the sample implements this flow as a loop inside the Run method of the App class, which itself is an
implementation of a DirectX view provider. The important state transitions must be controlled by the player, and
must provide clear feedback. Any delay in this feedback breaks the sense of immersion.
Here is a diagram representing the basic flow of the game and its high-level states.
When the sample game starts play, the game object can be in one of three states:
Waiting for resources. This state is activated when the game object is initialized or when the components of a
level are being loaded. If this state was triggered by a request to load a prior game, the game stats overlay is
displayed; if it was triggered by a request to play a level, the level start overlay is displayed. The completion of
resource loading causes the game to pass through the Resources loaded state and then transition into the
Waiting for press state.
Waiting for press. This state is activated when the game is paused, either by the player or by the system (after,
say, loading resources). When the player is ready to exit this state, the player is prompted to load a new game
state (LoadGame), start or restart the loaded level (StartLevel), or continue the current level (ContinueGame).
Dynamics. If a player's press input is completed and the resulting action is to start or continue a level, the
game object transitions into the Dynamics state. The game is played in this state, and the game world and
player objects are updated here based on animation routines and player input. This state is left when the player
triggers a pause event, either by pressing P, by taking an action that deactivates the main window, or by
completing a level or the game.
Now, let's look at specific code in the App class (see: Defining the game's UWP framework) for the Update
method that implements this state machine.

void App::Update()
{
static uint32 loadCount = 0;

m_controller->Update();

switch (m_updateState)
{
case UpdateEngineState::WaitingForResources:
// Waiting for initial load. Display an update one time per 60 updates.
loadCount++;
if ((loadCount % 60) == 0)
{
m_loadingCount++;
SetGameInfoOverlay(m_gameInfoOverlayState);
}
break;

case UpdateEngineState::ResourcesLoaded:
switch (m_pressResult)
{
case PressResultState::LoadGame:
SetGameInfoOverlay(GameInfoOverlayState::GameStats);
break;

case PressResultState::PlayLevel:
SetGameInfoOverlay(GameInfoOverlayState::LevelStart);
break;

case PressResultState::ContinueLevel:
SetGameInfoOverlay(GameInfoOverlayState::Pause);
break;
}
m_updateState = UpdateEngineState::WaitingForPress;
SetAction(GameInfoOverlayCommand::TapToContinue);
m_controller->WaitForPress(m_renderer->GameInfoOverlayUpperLeft(), m_renderer->GameInfoOverlayLowerRight());
ShowGameInfoOverlay();
m_renderNeeded = true;
break;

case UpdateEngineState::WaitingForPress:
if (m_controller->IsPressComplete())
{
switch (m_pressResult)
{
case PressResultState::LoadGame:
m_updateState = UpdateEngineState::WaitingForResources;
m_pressResult = PressResultState::PlayLevel;
m_controller->Active(false);
m_game->LoadGame();
SetAction(GameInfoOverlayCommand::PleaseWait);
SetGameInfoOverlay(GameInfoOverlayState::LevelStart);
ShowGameInfoOverlay();

m_game->LoadLevelAsync().then([this]()
{
m_game->FinalizeLoadLevel();
m_updateState = UpdateEngineState::ResourcesLoaded;

}, task_continuation_context::use_current());
break;

case PressResultState::PlayLevel:
m_updateState = UpdateEngineState::Dynamics;
HideGameInfoOverlay();
m_controller->Active(true);
m_game->StartLevel();
break;

case PressResultState::ContinueLevel:
m_updateState = UpdateEngineState::Dynamics;
HideGameInfoOverlay();
m_controller->Active(true);
m_game->ContinueGame();
break;
}
}
break;

case UpdateEngineState::Dynamics:
if (m_controller->IsPauseRequested())
{
m_game->PauseGame();
SetGameInfoOverlay(GameInfoOverlayState::Pause);
SetAction(GameInfoOverlayCommand::TapToContinue);
m_updateState = UpdateEngineState::WaitingForPress;
m_pressResult = PressResultState::ContinueLevel;
ShowGameInfoOverlay();
}
else
{
GameState runState = m_game->RunGame();
switch (runState)
{
case GameState::TimeExpired:
SetAction(GameInfoOverlayCommand::TapToContinue);
SetGameInfoOverlay(GameInfoOverlayState::GameOverExpired);
ShowGameInfoOverlay();
m_updateState = UpdateEngineState::WaitingForPress;
m_pressResult = PressResultState::LoadGame;
break;

case GameState::LevelComplete:
SetAction(GameInfoOverlayCommand::PleaseWait);
SetGameInfoOverlay(GameInfoOverlayState::LevelStart);
ShowGameInfoOverlay();
m_updateState = UpdateEngineState::WaitingForResources;
m_pressResult = PressResultState::PlayLevel;

m_game->LoadLevelAsync().then([this]()
{
m_game->FinalizeLoadLevel();
m_updateState = UpdateEngineState::ResourcesLoaded;

}, task_continuation_context::use_current());
break;

case GameState::GameComplete:
SetAction(GameInfoOverlayCommand::TapToContinue);
SetGameInfoOverlay(GameInfoOverlayState::GameOverCompleted);
ShowGameInfoOverlay();
m_updateState = UpdateEngineState::WaitingForPress;
m_pressResult = PressResultState::LoadGame;
break;
}
}

if (m_updateState == UpdateEngineState::WaitingForPress)
{
// Transitioning state, so enable waiting for the press event.
m_controller->WaitForPress(m_renderer->GameInfoOverlayUpperLeft(), m_renderer->GameInfoOverlayLowerRight());
}
if (m_updateState == UpdateEngineState::WaitingForResources)
{
// Transitioning state, so shut down the input controller until resources are loaded.
m_controller->Active(false);
}
break;
}
}

The first thing this method does is call the MoveLookController instance's own Update method, which updates
the data from the controller. This data includes the direction the player's view (the camera) is facing and the
velocity of the player's movement.
When the game is in the Dynamics state, that is, when the player is playing, the work is handled in the RunGame
method, with this call:
GameState runState = m_game->RunGame();
RunGame handles the set of data that defines the current state of the game play for the current iteration of the
game loop. It flows like this:
1. The method updates the timer that counts down the seconds until the level is completed, and tests to see if the
level's time has expired. This is one of the rules of the game: when time runs out without all the targets getting
shot, the game is over.
2. If time has run out, the method sets the TimeExpired game state, and returns to the Update method in the
previous code.
3. If time remains, the move-look controller is polled for an update to the camera position; specifically, an update
to the angle of the view normal projecting from the camera plane (where the player is looking), and the
distance that angle has moved from the previous time the controller was polled.
4. The camera is updated based on the new data from the move-look controller.
5. The dynamics, or the animations and behaviors of objects in the game world independent of player control, are
updated. In the game sample, this is the motion of the ammo spheres that have been fired, the animation of the
pillar obstacles and the movement of the targets.
6. The method checks to see if the criteria for the successful completion of a level have been met. If so, it finalizes
the score for the level and checks to see if this is the last level (of 6). If it's the last level, the method returns the
GameComplete game state; otherwise, it returns the LevelComplete game state.
7. If the level isn't complete, the method sets the game state to Active and returns.
Here's what RunGame, found in Simple3DGame.cpp, looks like in code.

GameState Simple3DGame::RunGame()
{
m_timer->Update();

m_levelTimeRemaining = m_levelDuration - m_timer->PlayingTime();

if (m_levelTimeRemaining <= 0.0f)
{
// Time expired, so the game is over.
m_levelTimeRemaining = 0.0f;
InitializeAmmo();
m_timer->Reset();
m_gameActive = false;
m_levelActive = false;
SaveState();

if (m_totalHits > m_topScore.totalHits)
{
m_topScore.totalHits = m_totalHits;
m_topScore.totalShots = m_totalShots;
m_topScore.levelCompleted = m_currentLevel;

SaveHighScore();
}
return GameState::TimeExpired;
}
else
{
// Time has not expired, so run one frame of game play.
m_player->Velocity(m_controller->Velocity());
m_camera->LookDirection(m_controller->LookDirection());

UpdateDynamics();

// Update the camera with the player position updates from the dynamics calculations.
m_camera->Eye(m_player->Position());
m_camera->LookDirection(m_controller->LookDirection());

if (m_level[m_currentLevel]->Update(m_timer->PlayingTime(), m_timer->DeltaTime(), m_levelTimeRemaining, m_object))
{
// The level has been completed.
m_levelActive = false;
InitializeAmmo();

if (m_currentLevel < m_levelCount-1)
{
// More levels to go so increment the level number.
// Actual level loading will occur in the LoadLevelAsync / FinalizeLoadLevel
// methods.
m_timer->Reset();
m_currentLevel++;
m_levelBonusTime = m_levelTimeRemaining;
SaveState();
return GameState::LevelComplete;
}
else
{
// All levels have been completed.
m_timer->Reset();
m_gameActive = false;
m_levelActive = false;
SaveState();

if (m_totalHits > m_topScore.totalHits)
{
m_topScore.totalHits = m_totalHits;
m_topScore.totalShots = m_totalShots;
m_topScore.levelCompleted = m_currentLevel;

SaveHighScore();
}
return GameState::GameComplete;
}
}
}
return GameState::Active;
}

Here's the key call: UpdateDynamics(). It's what brings the game world to life. Let's review it!

Updating the game world


A fast and fluid game experience is one where the world feels alive, where the game itself is in motion
independent of player input. Trees wave in the wind, waves crest along shorelines, machinery smokes and shines,
and alien monsters stretch and salivate. Imagine what a game would be like if everything were frozen, with the
graphics only moving when the player provided input. It'd be weird and not very, well, immersive. Immersion, for
the player, comes from the feeling of being an agent in a living, breathing world.
The game loop should always keep updating the game world and running the animation routines, be they canned
or based on physical algorithms or just plain random, except when the game is specifically paused. In the game
sample, this principle is called dynamics, and it encompasses the rise and fall of the pillar obstacles, and the
motion and physical behaviors of the ammo spheres as they are fired. It also encompasses the interaction between
objects, including collisions between the player sphere and the world, or between the ammo and the obstacles and
targets.
The code that implements these dynamics looks like this:

void Simple3DGame::UpdateDynamics()
{
float timeTotal = m_timer->PlayingTime();
float timeFrame = m_timer->DeltaTime();
bool fire = m_controller->IsFiring();
#pragma region Shoot Ammo
// Shoot ammo.
if (fire)
{
static float lastFired; // Timestamp of the last ammo fired.

if (timeTotal < lastFired)
{
// timeTotal is not guaranteed to be monotonically increasing because it is
// reset at each level.
lastFired = timeTotal - GameConstants::Physics::AutoFireDelay;
}

if (timeTotal - lastFired >= GameConstants::Physics::AutoFireDelay)
{
// Compute the ammo firing behavior.
}
}
#pragma endregion

#pragma region Animate Objects


for (uint32 i = 0; i < m_object.size(); i++)
{
if (m_object[i]->AnimatePosition())
{
// Animate targets and cylinders based on level parameters and global constants.
}
}
#pragma endregion

// If the elapsed time is too long, we slice up the time and
// handle physics over several passes.
float timeLeft = timeFrame;
float elapsedFrameTime;
while (timeLeft > 0.0f)
{
elapsedFrameTime = min(timeLeft, GameConstants::Physics::FrameLength);
timeLeft -= elapsedFrameTime;

// Update the player position.


m_player->Position(m_player->VectorPosition() + m_player->VectorVelocity() * elapsedFrameTime);

// Do m_player / object intersections.


for (uint32 a = 0; a < m_object.size(); a++)
{
if (m_object[a]->Active() && m_object[a] != m_player)
{
XMFLOAT3 contact;
XMFLOAT3 normal;

if (m_object[a]->IsTouching(m_player->Position(), m_player->Radius(), &contact, &normal))
{
XMVECTOR oneToTwo;
oneToTwo = -XMLoadFloat3(&normal);

// The player is in contact with Object


float impact;
impact = XMVectorGetX(
XMVector3Dot (oneToTwo, m_player->VectorVelocity())
);
// Make sure that the player is actually headed towards the object at grazing angles; there
// could appear to be an impact when the player is actually already hit and moving away.
if (impact > 0.0f)
{
// Compute the normal and tangential components of the player's velocity.
XMVECTOR velocityOneNormal = XMVector3Dot(oneToTwo, m_player->VectorVelocity()) * oneToTwo;
XMVECTOR velocityOneTangent = m_player->VectorVelocity() - velocityOneNormal;

// Compute the post-collision velocity.
m_player->Velocity(velocityOneTangent - velocityOneNormal);

// Fix the position so that the player is exactly one player radius from the contact point.
float distanceToMove = m_player->Radius();
m_player->Position(XMLoadFloat3(&contact) - (oneToTwo * distanceToMove));
}
}
}
}
{
// Do collision detection of the player with the bounding world.
XMFLOAT3 position = m_player->Position();
XMFLOAT3 velocity = m_player->Velocity();
float radius = m_player->Radius();

// Check for player collisions with the walls, floor, or ceiling
// and adjust the position.

float limit = m_minBound.x + radius;
if (position.x < limit)
{
position.x = limit;
velocity.x = -velocity.x * GameConstants::Physics::GroundRestitution;
}
limit = m_maxBound.x - radius;
if (position.x > limit)
{
position.x = limit;
velocity.x = -velocity.x * GameConstants::Physics::GroundRestitution;
}
limit = m_minBound.y + radius;
if (position.y < limit)
{
position.y = limit;
velocity.y = -velocity.y * GameConstants::Physics::GroundRestitution;
}
limit = m_maxBound.y - radius;
if (position.y > limit)
{
position.y = limit;
velocity.y = -velocity.y * GameConstants::Physics::GroundRestitution;
}
limit = m_minBound.z + radius;
if (position.z < limit)
{
position.z = limit;
velocity.z = -velocity.z * GameConstants::Physics::GroundRestitution;
}
limit = m_maxBound.z - radius;
if (position.z > limit)
{
position.z = limit;
velocity.z = -velocity.z * GameConstants::Physics::GroundRestitution;
}
m_player->Position(position);
m_player->Velocity(velocity);
}

// Animate the ammo.
if (m_ammoCount > 0)
{
// Check for inter-ammo collision.
#pragma region inter-ammo collision detection
if (m_ammoCount > 1)
{
for (uint32 one = 0; one < m_ammoCount; one++)
{
for (uint32 two = (one + 1); two < m_ammoCount; two++)
{
// Check for collisions between this ammo object and the other active ammo objects.
}
}
}
#pragma endregion

#pragma region Ammo-Object intersections


// Check for intersections with Objects.
for (uint32 one = 0; one < m_ammoCount; one++)
{
// Compute ammo collisions with game objects (targets, cylinders, walls).
}
#pragma endregion

#pragma region Apply Gravity and world intersection


// Apply gravity and check for collision against ground and walls.
for (uint32 i = 0; i < m_ammoCount; i++)
{
// Compute the effect of gravity on the ammo, and any ammo collisions with the world objects (walls, floor).
}
}
}
#pragma endregion
}

(This code example has been abbreviated for readability. The full working code is found in the complete code
sample at the bottom of this topic.)
This method deals with four sets of computations:
The positions of the fired ammo spheres in the world.
The animation of the pillar obstacles.
The intersection of the player and the world boundaries.
The collisions of the ammo spheres with the obstacles, the targets, other ammo spheres, and the world.
The animation of the obstacles follows looping paths defined in Animate.h/.cpp. The behavior of the ammo and any collisions
are defined by simplified physics algorithms, supplied in the previous code and parameterized by a set of global
constants for the game world, including gravity and material properties. This is all computed in the game world
coordinate space.
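To make the collision response in the code above concrete, here's a minimal, self-contained sketch of the two core ideas the loop applies: slicing a long frame into fixed-length physics sub-steps, and reflecting a velocity about a contact normal. It uses DirectXMath like the sample, but the constants and the Reflect and StepPhysics helpers are illustrative stand-ins, not the sample's actual code.

#include <DirectXMath.h>
#include <algorithm>

using namespace DirectX;

// Illustrative constants; the sample reads equivalents from GameConstants::Physics.
const float kMaxStep = 1.0f / 240.0f; // Upper bound on a single physics pass.
const float kRestitution = 0.8f;      // Fraction of speed kept on a bounce.

// Reflect a velocity about a unit-length contact normal, scaling the normal
// component by the restitution: v' = vTangent - kRestitution * vNormal.
XMVECTOR Reflect(FXMVECTOR velocity, FXMVECTOR unitNormal)
{
    XMVECTOR vNormal = XMVector3Dot(velocity, unitNormal) * unitNormal;
    XMVECTOR vTangent = velocity - vNormal;
    return vTangent - kRestitution * vNormal;
}

// Slice a (possibly long) frame time into bounded sub-steps so that
// fast-moving objects don't tunnel through thin geometry in one update.
void StepPhysics(float frameTime, XMVECTOR& position, XMVECTOR& velocity)
{
    float timeLeft = frameTime;
    while (timeLeft > 0.0f)
    {
        float dt = std::min(timeLeft, kMaxStep);
        timeLeft -= dt;
        position += velocity * dt;
        // Collision tests against the world would go here, calling
        // Reflect(velocity, contactNormal) on any impact.
    }
}

The sample's wall bounces follow this same shape: the normal component of the velocity is negated and scaled by GameConstants::Physics::GroundRestitution, while friction damps the tangential components.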
Now that we've updated all the objects in the scene and calculated any collisions, we need to use that info to draw
the corresponding visual changes. After Update completes in the current iteration of the game loop, the sample
immediately calls Render to take the updated object data and generate a new scene to present to the player.
Let's look at the render method now.

Rendering the game world's graphics


We recommend that the graphics in a game update as often as possible, which, at maximum, is every time the
main game loop iterates. As the loop iterates, the game is updated, with or without player input. This allows the
animations and behaviors that are calculated to be displayed smoothly. Imagine if we had a simple scene of water
that only moved when the player pressed a button. That would make for terribly boring visuals. A good game
looks smooth and fluid.
Recall the sample game's loop, as shown here. If the game's main window is visible, and isn't snapped or
deactivated, the game continues to update and render the results of that update.
void App::Run()
{
while (!m_windowClosed)
{
if (m_visible)
{
switch (m_updateState)
{
case UpdateEngineState::Deactivated:
case UpdateEngineState::Snapped:
if (!m_renderNeeded)
{
// The App is not currently the active window, so just wait for events.
CoreWindow::GetForCurrentThread()->Dispatcher->ProcessEvents(CoreProcessEventsOption::ProcessOneAndAllPending);
break;
}
// Otherwise, fall through and do normal processing to get the rendering handled.
default:
CoreWindow::GetForCurrentThread()->Dispatcher->ProcessEvents(CoreProcessEventsOption::ProcessAllIfPresent);
Update();
m_renderer->Render();
m_renderNeeded = false;
}
}
else
{
CoreWindow::GetForCurrentThread()->Dispatcher->ProcessEvents(CoreProcessEventsOption::ProcessOneAndAllPending);
}
}
m_game->OnSuspending(); // Exiting due to window close. Make sure to save state.
}

The method we examine now, GameRenderer::Render, draws a representation of the game state immediately after
that state is updated by Run's call to Update, which we discussed in the previous section.

void GameRenderer::Render()
{
int renderingPasses = 1;
if (m_stereoEnabled)
{
renderingPasses = 2;
}

for (int i = 0; i < renderingPasses; i++)
{
if (m_stereoEnabled && i > 0)
{
// Doing the Right Eye View
m_d3dContext->OMSetRenderTargets(1, m_renderTargetViewRight.GetAddressOf(), m_depthStencilView.Get());
m_d3dContext->ClearDepthStencilView(m_depthStencilView.Get(), D3D11_CLEAR_DEPTH, 1.0f, 0);
m_d2dContext->SetTarget(m_d2dTargetBitmapRight.Get());
}
else
{
// Doing the Mono or Left Eye View
m_d3dContext->OMSetRenderTargets(1, m_renderTargetView.GetAddressOf(), m_depthStencilView.Get());
m_d3dContext->ClearDepthStencilView(m_depthStencilView.Get(), D3D11_CLEAR_DEPTH, 1.0f, 0);
m_d2dContext->SetTarget(m_d2dTargetBitmap.Get());
}

if (m_game != nullptr && m_gameResourcesLoaded && m_levelResourcesLoaded)
{
// This section is only used after the game state has been initialized and all device
// resources needed for the game have been created and associated with the game objects.
if (m_stereoEnabled)
{
ConstantBufferChangeOnResize changesOnResize;
XMStoreFloat4x4(
&changesOnResize.projection,
XMMatrixMultiply(
XMMatrixTranspose(
i == 0 ?
m_game->GameCamera()->LeftEyeProjection() :
m_game->GameCamera()->RightEyeProjection()
),
XMMatrixTranspose(XMLoadFloat4x4(&m_rotationTransform3D))
)
);

m_d3dContext->UpdateSubresource(
m_constantBufferChangeOnResize.Get(),
0,
nullptr,
&changesOnResize,
0,
0
);
}
// Update variables that change one time per frame.

ConstantBufferChangesEveryFrame constantBufferChangesEveryFrame;
XMStoreFloat4x4(
&constantBufferChangesEveryFrame.view,
XMMatrixTranspose(m_game->GameCamera()->View())
);
m_d3dContext->UpdateSubresource(
m_constantBufferChangesEveryFrame.Get(),
0,
nullptr,
&constantBufferChangesEveryFrame,
0,
0
);

// Set up the Pipeline.

m_d3dContext->IASetInputLayout(m_vertexLayout.Get());
m_d3dContext->VSSetConstantBuffers(0, 1, m_constantBufferNeverChanges.GetAddressOf());
m_d3dContext->VSSetConstantBuffers(1, 1, m_constantBufferChangeOnResize.GetAddressOf());
m_d3dContext->VSSetConstantBuffers(2, 1, m_constantBufferChangesEveryFrame.GetAddressOf());
m_d3dContext->VSSetConstantBuffers(3, 1, m_constantBufferChangesEveryPrim.GetAddressOf());

m_d3dContext->PSSetConstantBuffers(2, 1, m_constantBufferChangesEveryFrame.GetAddressOf());
m_d3dContext->PSSetConstantBuffers(3, 1, m_constantBufferChangesEveryPrim.GetAddressOf());
m_d3dContext->PSSetSamplers(0, 1, m_samplerLinear.GetAddressOf());

// Get all the objects to render from the Game state.
auto objects = m_game->RenderObjects();
for (auto object = objects.begin(); object != objects.end(); object++)
{
(*object)->Render(m_d3dContext.Get(), m_constantBufferChangesEveryPrim.Get());
}
}
else
{
const float ClearColor[4] = {0.1f, 0.1f, 0.1f, 1.0f};

// Only need to clear the background when not rendering the full 3D scene because
// the 3D world is a fully enclosed box, and the dynamics prevents the camera from
// moving outside this space.
if (m_stereoEnabled && i > 0)
{
// Doing the Right Eye View.
m_d3dContext->ClearRenderTargetView(m_renderTargetViewRight.Get(), ClearColor);
}
else
{
// Doing the Mono or Left Eye View.
m_d3dContext->ClearRenderTargetView(m_renderTargetView.Get(), ClearColor);
}
}

m_d2dContext->BeginDraw();

// To handle the swapchain being pre-rotated, set the D2D transformation to include it.
m_d2dContext->SetTransform(m_rotationTransform2D);

if (m_game != nullptr && m_gameResourcesLoaded)
{
// This is only used after the game state has been initialized.
m_gameHud->Render(m_game, m_d2dContext.Get(), m_windowBounds);
}

if (m_gameInfoOverlay->Visible())
{
m_d2dContext->DrawBitmap(
m_gameInfoOverlay->Bitmap(),
D2D1::RectF(
(m_windowBounds.Width - GameInfoOverlayConstant::Width)/2.0f,
(m_windowBounds.Height - GameInfoOverlayConstant::Height)/2.0f,
(m_windowBounds.Width - GameInfoOverlayConstant::Width)/2.0f + GameInfoOverlayConstant::Width,
(m_windowBounds.Height - GameInfoOverlayConstant::Height)/2.0f + GameInfoOverlayConstant::Height
)
);
}

HRESULT hr = m_d2dContext->EndDraw();
if (hr != D2DERR_RECREATE_TARGET)
{
// The D2DERR_RECREATE_TARGET indicates there has been a problem with the underlying
// D3D device. All subsequent rendering will be ignored until the device is recreated.
// This error will be propagated and the appropriate D3D error will be returned from the
// swapchain->Present(...) call. At that point, the sample will recreate the device
// and all associated resources. As a result the D2DERR_RECREATE_TARGET doesn't
// need to be handled here.
DX::ThrowIfFailed(hr);
}
}
Present();
}

The complete code for this method is in Assembling the rendering framework.
This method draws the projection of the 3D world, and then draws the Direct2D overlay on top of it. When
completed, it presents the final swap chain with the combined buffers for display.
Be aware that there are two states for the sample game's Direct2D overlay: one where the game displays the game
info overlay that contains the bitmap for the pause menu, and one where the game displays the cross hairs along
with the rectangles for the touchscreen move-look controller. The score text is drawn in both states.
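The Present call at the end of Render is a small helper whose body isn't shown in this topic; it ultimately calls IDXGISwapChain::Present to flip the back buffer to the display. Here's a minimal sketch of what such a helper typically does, assuming an m_swapChain member and a HandleDeviceLost recovery helper (both illustrative, not the sample's exact code):

void GameRenderer::Present()
{
    // Present on the next vertical blank (sync interval 1), which caps
    // the frame rate at the display's refresh rate.
    HRESULT hr = m_swapChain->Present(1, 0);

    // If the GPU device was removed or reset (for example, by a driver
    // update), the device and all of its resources must be recreated.
    if (hr == DXGI_ERROR_DEVICE_REMOVED || hr == DXGI_ERROR_DEVICE_RESET)
    {
        HandleDeviceLost(); // Hypothetical recovery path.
    }
    else
    {
        DX::ThrowIfFailed(hr);
    }
}

This is also the point where the Direct3D errors mentioned in the EndDraw comment above would surface and trigger device recreation.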

Next steps
By now, you're probably curious about the actual rendering engine: how those calls to the Render methods on the
updated primitives get turned into pixels on your screen. We cover that in detail in Assembling the rendering
framework. If you're more interested in how the player controls update the game state, then check out Adding
controls.

Complete code sample for this section


Simple3DGame.h

//// THIS CODE AND INFORMATION IS PROVIDED "AS IS" WITHOUT WARRANTY OF
//// ANY KIND, EITHER EXPRESSED OR IMPLIED, INCLUDING BUT NOT LIMITED TO
//// THE IMPLIED WARRANTIES OF MERCHANTABILITY AND/OR FITNESS FOR A
//// PARTICULAR PURPOSE.
////
//// Copyright (c) Microsoft Corporation. All rights reserved

#pragma once

// Game specific Classes


#include "GameConstants.h"
#include "Audio.h"
#include "Camera.h"
#include "Level.h"
#include "GameObject.h"
#include "GameTimer.h"
#include "MoveLookController.h"
#include "PersistentState.h"
#include "Sphere.h"
#include "GameRenderer.h"

//--------------------------------------------------------------------------------------

enum class GameState
{
Waiting,
Active,
LevelComplete,
TimeExpired,
GameComplete,
};

typedef struct
{
Platform::String^ tag;
int totalHits;
int totalShots;
int levelCompleted;
} HighScoreEntry;

typedef std::vector<HighScoreEntry> HighScoreEntries;

//--------------------------------------------------------------------------------------

ref class GameRenderer;

ref class Simple3DGame
{
internal:
Simple3DGame();

void Initialize(
_In_ MoveLookController^ controller,
_In_ GameRenderer^ renderer
);

void LoadGame();
concurrency::task<void> LoadLevelAsync();
void FinalizeLoadLevel();
void StartLevel();
void PauseGame();
void ContinueGame();
GameState RunGame();

void OnSuspending();
void OnResuming();

bool IsActivePlay() { return m_timer->Active(); }
int LevelCompleted() { return m_currentLevel; };
int TotalShots() { return m_totalShots; };
int TotalHits() { return m_totalHits; };
float BonusTime() { return m_levelBonusTime; };
bool GameActive() { return m_gameActive; };
bool LevelActive() { return m_levelActive; };
HighScoreEntry HighScore() { return m_topScore; };
Level^ CurrentLevel() { return m_level[m_currentLevel]; };
float TimeRemaining() { return m_levelTimeRemaining; };
Camera^ GameCamera() { return m_camera; };
std::vector<GameObject^> RenderObjects() { return m_renderObject; };

private:
void LoadState();
void SaveState();
void SaveHighScore();
void LoadHighScore();
void InitializeAmmo();
void UpdateDynamics();

MoveLookController^ m_controller;
GameRenderer^ m_renderer;
Camera^ m_camera;

Audio^ m_audioController;

std::vector<Sphere^> m_ammo;
uint32 m_ammoCount;
uint32 m_ammoNext;

HighScoreEntry m_topScore;
PersistentState^ m_savedState;

GameTimer^ m_timer;
bool m_gameActive;
bool m_levelActive;
int m_totalHits;
int m_totalShots;
float m_levelDuration;
float m_levelBonusTime;
float m_levelTimeRemaining;
std::vector<Level^> m_level;
uint32 m_levelCount;
uint32 m_currentLevel;

Sphere^ m_player;
std::vector<GameObject^> m_object; // Object list for intersections
std::vector<GameObject^> m_renderObject; // All objects to be rendered

DirectX::XMFLOAT3 m_minBound;
DirectX::XMFLOAT3 m_maxBound;
};

Simple3DGame.cpp

//// THIS CODE AND INFORMATION IS PROVIDED "AS IS" WITHOUT WARRANTY OF
//// ANY KIND, EITHER EXPRESSED OR IMPLIED, INCLUDING BUT NOT LIMITED TO
//// THE IMPLIED WARRANTIES OF MERCHANTABILITY AND/OR FITNESS FOR A
//// PARTICULAR PURPOSE.
////
//// Copyright (c) Microsoft Corporation. All rights reserved

#include "pch.h"
#include "GameRenderer.h"
#include "DirectXSample.h"
#include "Level1.h"
#include "Level2.h"
#include "Level3.h"
#include "Level4.h"
#include "Level5.h"
#include "Level6.h"
#include "Animate.h"
#include "Sphere.h"
#include "Cylinder.h"
#include "Face.h"
#include "MediaReader.h"

using namespace concurrency;
using namespace DirectX;
using namespace Microsoft::WRL;
using namespace Windows::Storage;
using namespace Windows::UI::Core;

//----------------------------------------------------------------------

Simple3DGame::Simple3DGame():
m_ammoCount(0),
m_ammoNext(0),
m_gameActive(false),
m_levelActive(false),
m_totalHits(0),
m_totalShots(0),
m_levelBonusTime(0.0),
m_levelTimeRemaining(0.0),
m_levelCount(0),
m_currentLevel(0)
{
m_topScore.totalHits = 0;
m_topScore.totalShots = 0;
m_topScore.levelCompleted = 0;
}

//----------------------------------------------------------------------

void Simple3DGame::Initialize(
_In_ MoveLookController^ controller,
_In_ GameRenderer^ renderer
)
{
// This method is expected to be called as an asynchronous task.
// Make sure that you don't call rendering methods on the
// m_renderer, as this would result in the D3D Context being
// used in multiple threads, which is not allowed.

m_controller = controller;
m_renderer = renderer;

m_audioController = ref new Audio;
m_audioController->CreateDeviceIndependentResources();

m_ammo = std::vector<Sphere^>(GameConstants::MaxAmmo);
m_object = std::vector<GameObject^>();
m_renderObject = std::vector<GameObject^>();
m_level = std::vector<Level^>();

m_savedState = ref new PersistentState();
m_savedState->Initialize(ApplicationData::Current->LocalSettings->Values, "Game");

m_timer = ref new GameTimer();

// Create a sphere primitive to represent the player.
// The sphere will be used to handle collisions and constrain the player in the world.
// It is not rendered, so it is not added to the list of render objects.
m_player = ref new Sphere(XMFLOAT3(0.0f, -1.3f, 4.0f), 0.2f);

m_camera = ref new Camera;
m_camera->SetProjParams(XM_PI / 2, 1.0f, 0.01f, 100.0f);
m_camera->SetViewParams(
m_player->Position(), // Eye point in world coordinates.
XMFLOAT3 (0.0f, 0.7f, 0.0f), // Look at point in world coordinates.
XMFLOAT3 (0.0f, 1.0f, 0.0f) // The Up vector for the camera.
);

m_controller->Pitch(m_camera->Pitch());
m_controller->Yaw(m_camera->Yaw());

// Add the m_player object to the object list to do intersection calculations.
m_object.push_back(m_player);
m_player->Active(true);

// Instantiate the world primitive. This object maintains the geometry and
// material properties of the walls, floor, and ceiling of the enclosing world.
// The TargetId is used to identify the world objects, so that the right geometry
// and textures can be associated with them later after those resources have
// been created.
GameObject^ world = ref new GameObject();
world->TargetId(GameConstants::WorldFloorId);
world->Active(true);
m_renderObject.push_back(world);

world = ref new GameObject();
world->TargetId(GameConstants::WorldCeilingId);
world->Active(true);
m_renderObject.push_back(world);

world = ref new GameObject();
world->TargetId(GameConstants::WorldWallsId);
world->Active(true);
m_renderObject.push_back(world);

// The min and max bounds define the world space of the game.
// All camera motion and dynamics are confined to this space.
m_minBound = XMFLOAT3(-4.0f, -3.0f, -6.0f);
m_maxBound = XMFLOAT3(4.0f, 3.0f, 6.0f);

// Instantiate the cylinders for use in the various game levels.
// Each cylinder has a different initial position, radius, and direction vector,
// but they all share a common set of material properties.
for (int a = 0; a < GameConstants::MaxCylinders; a++)
{
Cylinder^ cylinder;
switch (a)
{
case 0:
cylinder = ref new Cylinder(XMFLOAT3(-2.0f, -3.0f, 0.0f), 0.25f, XMFLOAT3(0.0f, 6.0f, 0.0f));
break;
case 1:
cylinder = ref new Cylinder(XMFLOAT3(2.0f, -3.0f, 0.0f), 0.25f, XMFLOAT3(0.0f, 6.0f, 0.0f));
break;
case 2:
cylinder = ref new Cylinder(XMFLOAT3(0.0f, -3.0f, -2.0f), 0.25f, XMFLOAT3(0.0f, 6.0f, 0.0f));
break;
case 3:
cylinder = ref new Cylinder(XMFLOAT3(-1.5f, -3.0f, -4.0f), 0.25f, XMFLOAT3(0.0f, 6.0f, 0.0f));
break;
case 4:
cylinder = ref new Cylinder(XMFLOAT3(1.5f, -3.0f, -4.0f), 0.50f, XMFLOAT3(0.0f, 6.0f, 0.0f));
break;
}
cylinder->Active(true);
m_object.push_back(cylinder);
m_renderObject.push_back(cylinder);
}

MediaReader^ mediaReader = ref new MediaReader;
auto targetHitSound = mediaReader->LoadMedia("hit.wav");

// Instantiate the targets for use in the game.
// Each target has a different initial position, size, and orientation,
// but they all share a common set of material properties.
// The target is defined by a position and two vectors that define both
// the plane of the target in world space and the size of the parallelogram
// based on the lengths of the vectors.
// Each target is assigned a number for identification purposes.
// The Target ID number is 1 based.
for (int a = 1; a < GameConstants::MaxTargets; a++)
{
Face^ target;
switch (a)
{
case 1:
target = ref new Face(XMFLOAT3(-2.5f, -1.0f, -1.5f), XMFLOAT3(-1.5f, -1.0f, -2.0f), XMFLOAT3(-2.5f, 1.0f, -1.5f));
break;
case 2:
target = ref new Face(XMFLOAT3(-1.0f, 1.0f, -3.0f), XMFLOAT3(0.0f, 1.0f, -3.0f), XMFLOAT3(-1.0f, 2.0f, -3.0f));
break;
case 3:
target = ref new Face(XMFLOAT3(1.5f, 0.0f, -3.0f), XMFLOAT3(2.5f, 0.0f, -2.0f), XMFLOAT3(1.5f, 2.0f, -3.0f));
break;
case 4:
target = ref new Face(XMFLOAT3(-2.5f, -1.0f, -5.5f), XMFLOAT3(-0.5f, -1.0f, -5.5f), XMFLOAT3(-2.5f, 1.0f, -5.5f));
break;
case 5:
target = ref new Face(XMFLOAT3(0.5f, -2.0f, -5.0f), XMFLOAT3(1.5f, -2.0f, -5.0f), XMFLOAT3(0.5f, 0.0f, -5.0f));
break;
case 6:
target = ref new Face(XMFLOAT3(1.5f, -2.0f, -5.5f), XMFLOAT3(2.5f, -2.0f, -5.0f), XMFLOAT3(1.5f, 0.0f, -5.5f));
break;
case 7:
target = ref new Face(XMFLOAT3(0.0f, 0.0f, 0.0f), XMFLOAT3(0.5f, 0.0f, 0.0f), XMFLOAT3(0.0f, 0.5f, 0.0f));
break;
case 8:
target = ref new Face(XMFLOAT3(0.0f, 0.0f, 0.0f), XMFLOAT3(0.5f, 0.0f, 0.0f), XMFLOAT3(0.0f, 0.5f, 0.0f));
break;
case 9:
target = ref new Face(XMFLOAT3(0.0f, 0.0f, 0.0f), XMFLOAT3(0.5f, 0.0f, 0.0f), XMFLOAT3(0.0f, 0.5f, 0.0f));
break;
}

target->Target(true);
target->TargetId(a);
target->Active(true);
target->HitSound(ref new SoundEffect());
target->HitSound()->Initialize(
m_audioController->SoundEffectEngine(),
mediaReader->GetOutputWaveFormatEx(),
targetHitSound);

m_object.push_back(target);
m_renderObject.push_back(target);
}

// Instantiate a set of spheres to be used as ammunition for the game
// and set the material properties of the spheres.
auto ammoHitSound = mediaReader->LoadMedia("bounce.wav");

for (int a = 0; a < GameConstants::MaxAmmo; a++)
{
m_ammo[a] = ref new Sphere;
m_ammo[a]->Radius(GameConstants::AmmoRadius);
m_ammo[a]->HitSound(ref new SoundEffect());
m_ammo[a]->HitSound()->Initialize(
m_audioController->SoundEffectEngine(),
mediaReader->GetOutputWaveFormatEx(),
ammoHitSound);
m_ammo[a]->Active(false);
m_renderObject.push_back(m_ammo[a]);
}

// Instantiate each of the game levels. The Level class contains methods
// that initialize the objects in the world for the given level and also
// define any motion paths for the objects in that level.

m_level.push_back(ref new Level1);
m_level.push_back(ref new Level2);
m_level.push_back(ref new Level3);
m_level.push_back(ref new Level4);
m_level.push_back(ref new Level5);
m_level.push_back(ref new Level6);
m_levelCount = static_cast<uint32>(m_level.size());

// Load the top score from disk if it exists.
LoadHighScore();

// Load the current score from saved state.
LoadState();

m_controller->Active(false);
}

//----------------------------------------------------------------------

void Simple3DGame::LoadGame()
{
m_player->Position(XMFLOAT3 (0.0f, -1.3f, 4.0f));

m_camera->SetViewParams(
m_player->Position(), // Eye point in world coordinates.
XMFLOAT3 (0.0f, 0.7f, 0.0f), // Look at point in world coordinates.
XMFLOAT3 (0.0f, 1.0f, 0.0f) // The Up vector for the camera.
);

m_controller->Pitch(m_camera->Pitch());
m_controller->Yaw(m_camera->Yaw());
m_currentLevel = 0;
m_levelTimeRemaining = 0.0f;
m_levelBonusTime = m_levelTimeRemaining;
m_levelDuration = m_level[m_currentLevel]->TimeLimit() + m_levelBonusTime;
InitializeAmmo();
m_totalHits = 0;
m_totalShots = 0;
m_gameActive = false;
m_levelActive = false;
m_timer->Reset();
}

//----------------------------------------------------------------------

task<void> Simple3DGame::LoadLevelAsync()
{
// Initialize the level and spin up the async loading of the rendering
// resources for the level.
// This will run in a separate thread, so for Direct3D 11, only Device
// methods are allowed. Any DeviceContext method calls need to be
// done in FinalizeLoadLevel.

m_level[m_currentLevel]->Initialize(m_object);
m_levelDuration = m_level[m_currentLevel]->TimeLimit() + m_levelBonusTime;
return m_renderer->LoadLevelResourcesAsync();
}

//----------------------------------------------------------------------

void Simple3DGame::FinalizeLoadLevel()
{
// This method is called on the main thread, so Direct3D 11 DeviceContext
// method calls are allowable here.

SaveState();

// Finalize the Level loading.


m_renderer->FinalizeLoadLevelResources();
}

//----------------------------------------------------------------------

void Simple3DGame::StartLevel()
{
m_timer->Reset();
m_timer->Start();
if (m_currentLevel == 0)
{
m_gameActive = true;
}
m_levelActive = true;
m_controller->Active(true);
}

//----------------------------------------------------------------------

void Simple3DGame::PauseGame()
{
m_timer->Stop();
SaveState();
}

//----------------------------------------------------------------------

void Simple3DGame::ContinueGame()
{
m_timer->Start();
m_controller->Active(true);
}

//----------------------------------------------------------------------

GameState Simple3DGame::RunGame()
{
m_timer->Update();

m_levelTimeRemaining = m_levelDuration - m_timer->PlayingTime();

if (m_levelTimeRemaining <= 0.0f)
{
// Time expired, so the game is over.
m_levelTimeRemaining = 0.0f;
InitializeAmmo();
m_timer->Reset();
m_gameActive = false;
m_levelActive = false;
SaveState();

if (m_totalHits > m_topScore.totalHits)
{
m_topScore.totalHits = m_totalHits;
m_topScore.totalShots = m_totalShots;
m_topScore.levelCompleted = m_currentLevel;

SaveHighScore();
}
return GameState::TimeExpired;
}
else
{
// Time has not expired, so run one frame of game play.
m_player->Velocity(m_controller->Velocity());
m_camera->LookDirection(m_controller->LookDirection());

UpdateDynamics();

// Update the Camera with the player position updates from the dynamics calculations.
m_camera->Eye(m_player->Position());
m_camera->LookDirection(m_controller->LookDirection());

if (m_level[m_currentLevel]->Update(m_timer->PlayingTime(), m_timer->DeltaTime(), m_levelTimeRemaining, m_object))
{
// The level has been completed.
m_levelActive = false;
InitializeAmmo();

if (m_currentLevel < m_levelCount-1)
{
// More levels to go so increment the level number.
// Actual level loading will occur in the LoadLevelAsync / FinalizeLoadLevel
// methods.
m_timer->Reset();
m_currentLevel++;
m_levelBonusTime = m_levelTimeRemaining;
SaveState();
return GameState::LevelComplete;
}
else
{
// All levels have been completed.
m_timer->Reset();
m_gameActive = false;
m_levelActive = false;
SaveState();

if (m_totalHits > m_topScore.totalHits)
{
m_topScore.totalHits = m_totalHits;
m_topScore.totalShots = m_totalShots;
m_topScore.levelCompleted = m_currentLevel;

SaveHighScore();
}
return GameState::GameComplete;
}
}
}
return GameState::Active;
}

//----------------------------------------------------------------------

void Simple3DGame::OnSuspending()
{
m_audioController->SuspendAudio();
}

//----------------------------------------------------------------------

void Simple3DGame::OnResuming()
{
m_audioController->ResumeAudio();
}

//--------------------------------------------------------------------------------------

void Simple3DGame::UpdateDynamics()
{
float timeTotal = m_timer->PlayingTime();
float timeFrame = m_timer->DeltaTime();
bool fire = m_controller->IsFiring();

#pragma region Shoot Ammo


// Shoot ammo.
if (fire)
{
static float lastFired; // Timestamp of the last ammo fired.

if (timeTotal < lastFired)
{
// timeTotal is not guaranteed to be monotonically increasing because it is
// reset at each level.
lastFired = timeTotal - GameConstants::Physics::AutoFireDelay;
}

if (timeTotal - lastFired >= GameConstants::Physics::AutoFireDelay)
{
// Get the inverse view matrix.
XMMATRIX invView;
XMVECTOR det;
invView = XMMatrixInverse(&det, m_camera->View());

// Compute the initial velocity in world space from camera space.


XMFLOAT4 initialVelocity(0.0f, 0.0f, 15.0f, 0.0f);
m_ammo[m_ammoNext]->Velocity(XMVector4Transform(XMLoadFloat4(&initialVelocity), invView));

// Populate the position.
// Offset from the player to avoid an initial collision with the player object.
XMFLOAT4 initialPosition(0.0f, -0.15f, m_player->Radius() + GameConstants::AmmoSize, 1.0f);
m_ammo[m_ammoNext]->Position(XMVector4Transform(XMLoadFloat4(&initialPosition), invView));

// Initially not laying on the ground.


m_ammo[m_ammoNext]->OnGround(false);
m_ammo[m_ammoNext]->Active(true);

// Set the position in the array of the next Ammo to use.
// We will reuse ammo, taking the least recently used after we've hit the
// MaxAmmo in use.
m_ammoNext = (m_ammoNext + 1) % GameConstants::MaxAmmo;
m_ammoCount = min(m_ammoCount + 1, GameConstants::MaxAmmo);

lastFired = timeTotal;
m_totalShots++;
}
}
#pragma endregion

#pragma region Animate Objects


for (uint32 i = 0; i < m_object.size(); i++)
{
if (m_object[i]->AnimatePosition())
{
m_object[i]->Position(m_object[i]->AnimatePosition()->Evaluate(timeTotal));
if (m_object[i]->AnimatePosition()->IsFinished(timeTotal))
{
m_object[i]->AnimatePosition(nullptr);
}
}
}
#pragma endregion

// If the elapsed time is too long, we slice up the time and
// handle physics over several passes.
float timeLeft = timeFrame;
float elapsedFrameTime;
while (timeLeft > 0.0f)
{
elapsedFrameTime = min(timeLeft, GameConstants::Physics::FrameLength);
timeLeft -= elapsedFrameTime;

// Update the player position.


m_player->Position(m_player->VectorPosition() + m_player->VectorVelocity() * elapsedFrameTime);

// Do m_player / object intersections.


for (uint32 a = 0; a < m_object.size(); a++)
{
if (m_object[a]->Active() && m_object[a] != m_player)
{
XMFLOAT3 contact;
XMFLOAT3 normal;

if (m_object[a]->IsTouching(m_player->Position(), m_player->Radius(), &contact, &normal))
{
XMVECTOR oneToTwo;
oneToTwo = -XMLoadFloat3(&normal);

// The player is in contact with Object.


float impact;
impact = XMVectorGetX(
XMVector3Dot (oneToTwo, m_player->VectorVelocity())
);
// Make sure that the player is actually headed towards the object at grazing angles; there
// could appear to be an impact when the player is actually already hit and moving away.
if (impact > 0.0f)
{
// Compute the normal and tangential components of the player's velocity.
XMVECTOR velocityOneNormal = XMVector3Dot(oneToTwo, m_player->VectorVelocity()) * oneToTwo;
XMVECTOR velocityOneTangent = m_player->VectorVelocity() - velocityOneNormal;

// Compute the post-collision velocity.


m_player->Velocity(velocityOneTangent - velocityOneNormal);

// Fix the position so that the player is exactly one player radius from the contact point.
float distanceToMove = m_player->Radius();
m_player->Position(XMLoadFloat3(&contact) - (oneToTwo * distanceToMove));
}
}
}
}
{
// Do collision detection of the player with the bounding world.
XMFLOAT3 position = m_player->Position();
XMFLOAT3 velocity = m_player->Velocity();
float radius = m_player->Radius();

// Check for player collisions with the walls, floor, or ceiling
// and adjust the position.

float limit = m_minBound.x + radius;
if (position.x < limit)
{
position.x = limit;
velocity.x = -velocity.x * GameConstants::Physics::GroundRestitution;
}
limit = m_maxBound.x - radius;
if (position.x > limit)
{
position.x = limit;
velocity.x = -velocity.x * GameConstants::Physics::GroundRestitution;
}
limit = m_minBound.y + radius;
if (position.y < limit)
{
position.y = limit;
velocity.y = -velocity.y * GameConstants::Physics::GroundRestitution;
}
limit = m_maxBound.y - radius;
if (position.y > limit)
{
position.y = limit;
velocity.y = -velocity.y * GameConstants::Physics::GroundRestitution;
}
limit = m_minBound.z + radius;
if (position.z < limit)
{
position.z = limit;
velocity.z = -velocity.z * GameConstants::Physics::GroundRestitution;
}
limit = m_maxBound.z - radius;
if (position.z > limit)
{
position.z = limit;
velocity.z = -velocity.z * GameConstants::Physics::GroundRestitution;
}
m_player->Position(position);
m_player->Velocity(velocity);
}

// Animate the ammo.
if (m_ammoCount > 0)
{
// Check for inter-ammo collision.
#pragma region inter-ammo collision detection
if (m_ammoCount > 1)
{
for (uint32 one = 0; one < m_ammoCount; one++)
{
for (uint32 two = (one + 1); two < m_ammoCount; two++)
{
// Check for collision between instances One and Two.
// oneToTwo is the collision normal vector.
XMVECTOR oneToTwo;
oneToTwo = m_ammo[two]->VectorPosition() - m_ammo[one]->VectorPosition();
float distanceSquared;
distanceSquared = XMVectorGetX(
XMVector3LengthSq(oneToTwo)
);
if (distanceSquared < (GameConstants::AmmoSize * GameConstants::AmmoSize))
{
oneToTwo = XMVector3Normalize(oneToTwo);

// Check if the two instances are already moving away from each other.
// If so, skip the collision. This can happen when a lot of instances are
// bunched up next to each other.
float impact;
impact = XMVectorGetX(
XMVector3Dot(oneToTwo, m_ammo[one]->VectorVelocity()) -
XMVector3Dot(oneToTwo, m_ammo[two]->VectorVelocity())
);
if (impact > 0.0f)
{
// Compute the normal and tangential components of One's velocity.
XMVECTOR velocityOneNormal = (1 - GameConstants::Physics::BounceLost) *
XMVector3Dot(oneToTwo, m_ammo[one]->VectorVelocity()) * oneToTwo;
XMVECTOR velocityOneTangent = (1 - GameConstants::Physics::BounceLost) *
m_ammo[one]->VectorVelocity() - velocityOneNormal;
// Compute the normal and tangential components of Two's velocity.
XMVECTOR velocityTwoN = (1 - GameConstants::Physics::BounceLost) *
XMVector3Dot(oneToTwo, m_ammo[two]->VectorVelocity()) * oneToTwo;
XMVECTOR velocityTwoT = (1 - GameConstants::Physics::BounceLost) *
m_ammo[two]->VectorVelocity() - velocityTwoN;

// Compute the post-collision velocity.


m_ammo[one]->Velocity(velocityOneTangent - velocityOneNormal * (1 - GameConstants::Physics::BounceTransfer) +
velocityTwoN * GameConstants::Physics::BounceTransfer);
m_ammo[two]->Velocity(velocityTwoT - velocityTwoN * (1 - GameConstants::Physics::BounceTransfer) +
velocityOneNormal * GameConstants::Physics::BounceTransfer);

// Fix the positions so that the two balls are exactly GameConstants::AmmoSize apart.
float distanceToMove = (GameConstants::AmmoSize - sqrtf(distanceSquared)) * 0.5f;
m_ammo[one]->Position(m_ammo[one]->VectorPosition() - (oneToTwo * distanceToMove));
m_ammo[two]->Position(m_ammo[two]->VectorPosition() + (oneToTwo * distanceToMove));

// Flag the two instances so that they are not laying on ground.
m_ammo[one]->OnGround(false);
m_ammo[two]->OnGround(false);

m_ammo[one]->PlaySound(impact, m_player->Position());
m_ammo[two]->PlaySound(impact, m_player->Position());
}
}
}
}
}
#pragma endregion

#pragma region Ammo-Object intersections


// Check for intersections with Objects.
for (uint32 one = 0; one < m_ammoCount; one++)
{
if (m_object.size() > 0)
{
if (!m_ammo[one]->OnGround())
{
for (uint32 i = 0; i < m_object.size(); i++)
{
if (m_object[i]->Active())
{
XMFLOAT3 contact;
XMFLOAT3 normal;

if (m_object[i]->IsTouching(m_ammo[one]->Position(), GameConstants::AmmoRadius, &contact, &normal))
{
XMVECTOR oneToTwo;
oneToTwo = -XMLoadFloat3(&normal);

// Ball is in contact with the Object.


float impact;
impact = XMVectorGetX(
XMVector3Dot (oneToTwo, m_ammo[one]->VectorVelocity())
);
// Make sure that the ball is actually headed towards the object at grazing angles. There
// could appear to be an impact when the ball is actually already hit and moving away.
if (impact > 0.0f)
{
// Compute the normal and tangential components of One's velocity.
XMVECTOR velocityOneNormal = (1 - GameConstants::Physics::BounceLost) * XMVector3Dot(oneToTwo,
m_ammo[one]->VectorVelocity()) * oneToTwo;
XMVECTOR velocityOneTangent = (1 - GameConstants::Physics::BounceLost) * m_ammo[one]->VectorVelocity() -
velocityOneNormal;

// Compute the post-collision velocity.


m_ammo[one]->Velocity(velocityOneTangent - velocityOneNormal * (1 - GameConstants::Physics::BounceTransfer));

// Fix the positions so that the ball is exactly GameConstants::AmmoRadius from target.
float distanceToMove = GameConstants::AmmoSize;
m_ammo[one]->Position(XMLoadFloat3(&contact) - (oneToTwo * distanceToMove));

// Flag the Ammo as not laying on the ground and mark the object as hit if it is a target.
m_ammo[one]->OnGround(false);

// Play the sound associated with the Ammo hitting something.


m_ammo[one]->PlaySound(impact, m_player->Position());

if (m_object[i]->Target() && !m_object[i]->Hit())
{
m_object[i]->Hit(true);
m_object[i]->HitTime(timeTotal);
m_totalHits++;

// Only play target sound if it was an active hit.


m_object[i]->PlaySound(impact, m_player->Position());
}
}
}
}
}
}
}
}
#pragma endregion

#pragma region Apply Gravity and world intersection


// Apply gravity and check for collision against the ground and walls.
for (uint32 i = 0; i < m_ammoCount; i++)
{
m_ammo[i]->Position(m_ammo[i]->VectorPosition() + m_ammo[i]->VectorVelocity() * elapsedFrameTime);

XMFLOAT3 velocity = m_ammo[i]->Velocity();
XMFLOAT3 position = m_ammo[i]->Position();

velocity.x -= velocity.x * 0.1f * elapsedFrameTime;
velocity.z -= velocity.z * 0.1f * elapsedFrameTime;
// Apply gravity if the ball is not resting on the ground.
if (!m_ammo[i]->OnGround())
{
velocity.y -= GameConstants::Physics::Gravity * elapsedFrameTime;
}

// Check the bounce on the ground.


if (!m_ammo[i]->OnGround())
{
float limit = m_minBound.y + GameConstants::AmmoRadius;
if (position.y < limit)
{
// Align the ball with the ground.
position.y = limit;

// Play the sound for impact.


m_ammo[i]->PlaySound(-velocity.y, m_player->Position());

// Invert the Y velocity.


velocity.y = -velocity.y * GameConstants::Physics::GroundRestitution;

// X and Z velocity are reduced because of friction.


velocity.x *= GameConstants::Physics::Friction;
velocity.z *= GameConstants::Physics::Friction;
}
}
else
{
// Ball is resting or rolling on the ground.
// X and Z velocity are reduced because of friction.
velocity.x *= GameConstants::Physics::Friction;
velocity.z *= GameConstants::Physics::Friction;
}

// Check the bounce on the ceiling.


float limit = m_maxBound.y - GameConstants::AmmoRadius;
if (position.y > limit)
{
// Align the ball with the ceiling.
position.y = limit;

// Play the sound for impact.


m_ammo[i]->PlaySound(-velocity.y, m_player->Position());

// Invert the Y velocity.


velocity.y = -velocity.y * GameConstants::Physics::GroundRestitution;

// X and Z velocity are reduced because of friction.


velocity.x *= GameConstants::Physics::Friction;
velocity.z *= GameConstants::Physics::Friction;
}

// If the Y direction motion is below a certain threshold, flag the instance as
// laying on the ground.
limit = m_minBound.y + GameConstants::AmmoRadius;
if ((GameConstants::Physics::Gravity * (position.y - limit) + 0.5f * velocity.y * velocity.y) < GameConstants::Physics::RestThreshold)
{
// Align the ball with the ground.
position.y = limit;

// Y direction velocity becomes 0.


velocity.y = 0.0f;

// Flag it.
m_ammo[i]->OnGround(true);
}

// Check the bounce on the front and back walls.


limit = m_minBound.z + GameConstants::AmmoRadius;
if (position.z < limit)
{
// Align the ball with the wall.
position.z = limit;

// Play the sound for impact.


m_ammo[i]->PlaySound(-velocity.z, m_player->Position());

// Invert the Z velocity.


velocity.z = -velocity.z * GameConstants::Physics::GroundRestitution;
}
limit = m_maxBound.z - GameConstants::AmmoRadius;
if (position.z > limit)
{
// Align the ball with the wall.
position.z = limit;

// Play the sound for impact.


m_ammo[i]->PlaySound(-velocity.z, m_player->Position());

// Invert the Z velocity.


velocity.z = -velocity.z * GameConstants::Physics::GroundRestitution;
}

// Check the bounce on the left and right walls.


limit = m_minBound.x + GameConstants::AmmoRadius;
if (position.x < limit)
{
// Align the ball with the wall.
position.x = limit;

// Play the sound for impact.


m_ammo[i]->PlaySound(-velocity.x, m_player->Position());

// Invert the X velocity.


velocity.x = -velocity.x * GameConstants::Physics::GroundRestitution;
}
limit = m_maxBound.x - GameConstants::AmmoRadius;
if (position.x > limit)
{
// Align the ball with the wall.
position.x = limit;

m_ammo[i]->PlaySound(-velocity.x, m_player->Position());

// Invert the X velocity.


velocity.x = -velocity.x * GameConstants::Physics::GroundRestitution;
}
m_ammo[i]->Velocity(velocity);
m_ammo[i]->Position(position);
}
}
}
#pragma endregion
}

//----------------------------------------------------------------------

void Simple3DGame::SaveState()
{
// Save basic state of the game.
m_savedState->SaveBool(":GameActive", m_gameActive);
m_savedState->SaveBool(":LevelActive", m_levelActive);
m_savedState->SaveInt32(":LevelCompleted", m_currentLevel);
m_savedState->SaveInt32(":TotalShots", m_totalShots);
m_savedState->SaveInt32(":TotalHits", m_totalHits);
m_savedState->SaveSingle(":BonusRoundTime", m_levelBonusTime);
m_savedState->SaveXMFLOAT3(":PlayerPosition", m_player->Position());
m_savedState->SaveXMFLOAT3(":PlayerLookDirection", m_controller->LookDirection());

// Save the extended state of the game because it is currently in the middle of a level.
if (m_levelActive)
{
m_savedState->SaveSingle(":LevelDuration", m_levelDuration);
m_savedState->SaveSingle(":LevelPlayingTime", m_timer->PlayingTime());

m_savedState->SaveInt32(":AmmoCount", m_ammoCount);
m_savedState->SaveInt32(":AmmoNext", m_ammoNext);

const int bufferLength = 16;
char16 str[bufferLength];

for (uint32 i = 0; i < m_ammoCount; i++)
{
int len = swprintf_s(str, bufferLength, L"%d", i);
Platform::String^ string = ref new Platform::String(str, len);

m_savedState->SaveBool(Platform::String::Concat(":AmmoActive", string), m_ammo[i]->Active());


m_savedState->SaveXMFLOAT3(Platform::String::Concat(":AmmoPosition", string), m_ammo[i]->Position());
m_savedState->SaveXMFLOAT3(Platform::String::Concat(":AmmoVelocity", string), m_ammo[i]->Velocity());
}

m_savedState->SaveInt32(":ObjectCount", static_cast<int>(m_object.size()));

for (uint32 i = 0; i < m_object.size(); i++)
{
int len = swprintf_s(str, bufferLength, L"%d", i);
Platform::String^ string = ref new Platform::String(str, len);

m_savedState->SaveBool(Platform::String::Concat(":ObjectActive", string), m_object[i]->Active());


m_savedState->SaveBool(Platform::String::Concat(":ObjectTarget", string), m_object[i]->Target());
m_savedState->SaveBool(Platform::String::Concat(":ObjectTarget", string), m_object[i]->Target());
m_savedState->SaveXMFLOAT3(Platform::String::Concat(":ObjectPosition", string), m_object[i]->Position());
}

m_level[m_currentLevel]->SaveState(m_savedState);
}
}

//----------------------------------------------------------------------

void Simple3DGame::LoadState()
{
m_gameActive = m_savedState->LoadBool(":GameActive", m_gameActive);
m_levelActive = m_savedState->LoadBool(":LevelActive", m_levelActive);

if (m_gameActive)
{
// Loading from the last known state means the Game wasn't finished when it was last played,
// so set the current level.

m_totalShots = m_savedState->LoadInt32(":TotalShots", 0);


m_totalHits = m_savedState->LoadInt32(":TotalHits", 0);
m_currentLevel = m_savedState->LoadInt32(":LevelCompleted", 0);
m_levelBonusTime = m_savedState->LoadSingle(":BonusRoundTime", 0.0f);

m_levelTimeRemaining = m_levelBonusTime;

// Reload the current player position and set both the camera and the controller
// with the current Look Direction.
m_player->Position(
m_savedState->LoadXMFLOAT3(":PlayerPosition", XMFLOAT3(0.0f, 0.0f, 0.0f))
);
m_camera->Eye(m_player->Position());
m_camera->LookDirection(
m_savedState->LoadXMFLOAT3(":PlayerLookDirection", XMFLOAT3(0.0f, 0.0f, 1.0f))
);
m_controller->Pitch(m_camera->Pitch());
m_controller->Yaw(m_camera->Yaw());
}
else
{
// Initialize to the beginning.
m_currentLevel = 0;
m_levelBonusTime = 0;
}

// Initialize the state of the Update and Render engines and load the current level.
m_level[m_currentLevel]->Initialize(m_object);

m_levelDuration = m_level[m_currentLevel]->TimeLimit() + m_levelBonusTime;

if (m_gameActive)
{
if (m_levelActive)
{
// Middle of a level so restart where left off.
m_levelDuration = m_savedState->LoadSingle(":LevelDuration", 0.0f);

m_timer->Reset();
m_timer->PlayingTime(m_savedState->LoadSingle(":LevelPlayingTime", 0.0f));

m_ammoCount = m_savedState->LoadInt32(":AmmoCount", 0);

m_ammoNext = m_savedState->LoadInt32(":AmmoNext", 0);

const int bufferLength = 16;
char16 str[bufferLength];

for (uint32 i = 0; i < m_ammoCount; i++)
{
int len = swprintf_s(str, bufferLength, L"%d", i);
Platform::String^ string = ref new Platform::String(str, len);

m_ammo[i]->Active(
m_savedState->LoadBool(
Platform::String::Concat(":AmmoActive", string),
m_ammo[i]->Active()
)
);
if (m_ammo[i]->Active())
{
m_ammo[i]->OnGround(false);
}

m_ammo[i]->Position(
m_savedState->LoadXMFLOAT3(
Platform::String::Concat(":AmmoPosition", string),
m_ammo[i]->Position()
)
);

m_ammo[i]->Velocity(
m_savedState->LoadXMFLOAT3(
Platform::String::Concat(":AmmoVelocity", string),
m_ammo[i]->Velocity()
)
);
}

int storedObjectCount = 0;
storedObjectCount = m_savedState->LoadInt32(":ObjectCount", 0);

storedObjectCount = min(storedObjectCount, static_cast<int>(m_object.size()));

for (int i = 0; i < storedObjectCount; i++)
{
int len = swprintf_s(str, bufferLength, L"%d", i);
Platform::String^ string = ref new Platform::String(str, len);

m_object[i]->Active(
m_savedState->LoadBool(
Platform::String::Concat(":ObjectActive", string),
m_object[i]->Active()
)
);

m_object[i]->Target(
m_savedState->LoadBool(
Platform::String::Concat(":ObjectTarget", string),
m_object[i]->Target()
)
);

m_object[i]->Position(
m_savedState->LoadXMFLOAT3(
Platform::String::Concat(":ObjectPosition", string),
m_object[i]->Position()
)
);
}

m_level[m_currentLevel]->LoadState(m_savedState);
m_levelTimeRemaining = m_level[m_currentLevel]->TimeLimit() + m_levelBonusTime - m_timer->PlayingTime();
}
}
}

//----------------------------------------------------------------------
void Simple3DGame::SaveHighScore()
{
m_savedState->SaveInt32(":HighScore:LevelCompleted", m_topScore.levelCompleted);
m_savedState->SaveInt32(":HighScore:TotalShots", m_topScore.totalShots);
m_savedState->SaveInt32(":HighScore:TotalHits", m_topScore.totalHits);
}

//----------------------------------------------------------------------

void Simple3DGame::LoadHighScore()
{
m_topScore.levelCompleted = m_savedState->LoadInt32(":HighScore:LevelCompleted", 0);
m_topScore.totalShots = m_savedState->LoadInt32(":HighScore:TotalShots", 0);
m_topScore.totalHits = m_savedState->LoadInt32(":HighScore:TotalHits", 0);
}

//----------------------------------------------------------------------

void Simple3DGame::InitializeAmmo()
{
m_ammoCount = 0;
m_ammoNext = 0;
for (uint32 i = 0; i < GameConstants::MaxAmmo; i++)
{
m_ammo[i]->Active(false);
}
}

Level.h
//// THIS CODE AND INFORMATION IS PROVIDED "AS IS" WITHOUT WARRANTY OF
//// ANY KIND, EITHER EXPRESSED OR IMPLIED, INCLUDING BUT NOT LIMITED TO
//// THE IMPLIED WARRANTIES OF MERCHANTABILITY AND/OR FITNESS FOR A
//// PARTICULAR PURPOSE.
////
//// Copyright (c) Microsoft Corporation. All rights reserved

#pragma once

// Level:
// This is an abstract class from which all of the levels of the game are derived.
// Each level potentially overrides up to four methods:
// Initialize - (required) takes a list of objects and enables the objects that
// are active for the level as well as setting their positions and
// any animations associated with the objects.
// Update - this method is called once per time step and is expected to
// determine if the level has been completed. The Level class provides
// a 'standard' Update method which checks each object that is a target
// and disables any active targets that have been hit. It returns true
// once there are no active targets remaining.
// SaveState - method to save any Level specific state. Default is defined as
// not saving any state.
// LoadState - method to restore any Level specific state. Default is defined
// as not restoring any state.

#include "GameObject.h"
#include "PersistentState.h"

ref class Level abstract
{
internal:
virtual void Initialize(
std::vector<GameObject^> objects
) = 0;

virtual bool Update(
float time,
float elapsedTime,
float timeRemaining,
std::vector<GameObject^> objects
);

virtual void SaveState(PersistentState^ state);
virtual void LoadState(PersistentState^ state);

Platform::String^ Objective();
float TimeLimit();

protected private:
Platform::String^ m_objective;
float m_timeLimit;
};

Level.cpp
//// THIS CODE AND INFORMATION IS PROVIDED "AS IS" WITHOUT WARRANTY OF
//// ANY KIND, EITHER EXPRESSED OR IMPLIED, INCLUDING BUT NOT LIMITED TO
//// THE IMPLIED WARRANTIES OF MERCHANTABILITY AND/OR FITNESS FOR A
//// PARTICULAR PURPOSE.
////
//// Copyright (c) Microsoft Corporation. All rights reserved

#include "pch.h"
#include "Level.h"

//----------------------------------------------------------------------

bool Level::Update(
float /* time */,
float /* elapsedTime */,
float /* timeRemaining*/,
std::vector<GameObject^> objects
)
{
int left = 0;

for (auto object = objects.begin(); object != objects.end(); object++)
{
if ((*object)->Active() && (*object)->Target())
{
if ((*object)->Hit())
{
(*object)->Active(false);
}
else
{
left++;
}
}
}
return (left == 0);
}

//----------------------------------------------------------------------

void Level::SaveState(PersistentState^ /* state */)
{
}

//----------------------------------------------------------------------

void Level::LoadState(PersistentState^ /* state */)
{
}

//----------------------------------------------------------------------

Platform::String^ Level::Objective()
{
return m_objective;
}

//----------------------------------------------------------------------

float Level::TimeLimit()
{
return m_timeLimit;
}

//----------------------------------------------------------------------
Level1.h

//// THIS CODE AND INFORMATION IS PROVIDED "AS IS" WITHOUT WARRANTY OF
//// ANY KIND, EITHER EXPRESSED OR IMPLIED, INCLUDING BUT NOT LIMITED TO
//// THE IMPLIED WARRANTIES OF MERCHANTABILITY AND/OR FITNESS FOR A
//// PARTICULAR PURPOSE.
////
//// Copyright (c) Microsoft Corporation. All rights reserved

#pragma once

// Level1:
// This class defines the first level of the game. There are nine active targets.
// Each of the targets is stationary and can be hit in any order.

#include "Level.h"

ref class Level1: public Level
{
internal:
Level1();
virtual void Initialize(std::vector<GameObject^> objects) override;
};

Level1.cpp
//// THIS CODE AND INFORMATION IS PROVIDED "AS IS" WITHOUT WARRANTY OF
//// ANY KIND, EITHER EXPRESSED OR IMPLIED, INCLUDING BUT NOT LIMITED TO
//// THE IMPLIED WARRANTIES OF MERCHANTABILITY AND/OR FITNESS FOR A
//// PARTICULAR PURPOSE.
////
//// Copyright (c) Microsoft Corporation. All rights reserved

#include "pch.h"
#include "Level1.h"
#include "Face.h"

using namespace DirectX;

//----------------------------------------------------------------------

Level1::Level1()
{
m_timeLimit = 20.0f;
m_objective = "Hit each of the targets before time runs out.\nTouch to aim. Tap in right box to fire. Drag in left box to move.";
}

//----------------------------------------------------------------------

void Level1::Initialize(std::vector<GameObject^> objects)
{
XMFLOAT3 position[] =
{
XMFLOAT3(-2.5f, -1.0f, -1.5f),
XMFLOAT3(-1.0f, 1.0f, -3.0f),
XMFLOAT3( 1.5f, 0.0f, -3.0f),
XMFLOAT3(-2.5f, -1.0f, -5.5f),
XMFLOAT3( 0.5f, -2.0f, -5.0f),
XMFLOAT3( 1.5f, -2.0f, -5.5f),
XMFLOAT3( 2.0f, 0.0f, 0.0f),
XMFLOAT3( 0.0f, 0.0f, 0.0f),
XMFLOAT3(-2.0f, 0.0f, 0.0f)
};

int targetCount = 0;
for (auto object = objects.begin(); object != objects.end(); object++)
{
if (Face^ target = dynamic_cast<Face^>(*object))
{
if (targetCount < 9)
{
target->Active(true);
target->Target(true);
target->Hit(false);
target->AnimatePosition(nullptr);
target->Position(position[targetCount]);
targetCount++;
}
else
{
(*object)->Active(false);
}
}
else
{
(*object)->Active(false);
}
}
}

//----------------------------------------------------------------------
Level2.h

//// THIS CODE AND INFORMATION IS PROVIDED "AS IS" WITHOUT WARRANTY OF
//// ANY KIND, EITHER EXPRESSED OR IMPLIED, INCLUDING BUT NOT LIMITED TO
//// THE IMPLIED WARRANTIES OF MERCHANTABILITY AND/OR FITNESS FOR A
//// PARTICULAR PURPOSE.
////
//// Copyright (c) Microsoft Corporation. All rights reserved

#pragma once

// Level2:
// This class defines the second level of the game. It derives from the
// first level. In this level, the targets must be hit in numeric order.

#include "Level1.h"

ref class Level2: public Level1
{
internal:
Level2();
virtual void Initialize(std::vector<GameObject^> objects) override;

virtual bool Update(
float time,
float elapsedTime,
float timeRemaining,
std::vector<GameObject^> objects
) override;

virtual void SaveState(PersistentState^ state) override;
virtual void LoadState(PersistentState^ state) override;

private:
int m_nextId;
};

Level2.cpp

//// THIS CODE AND INFORMATION IS PROVIDED "AS IS" WITHOUT WARRANTY OF
//// ANY KIND, EITHER EXPRESSED OR IMPLIED, INCLUDING BUT NOT LIMITED TO
//// THE IMPLIED WARRANTIES OF MERCHANTABILITY AND/OR FITNESS FOR A
//// PARTICULAR PURPOSE.
////
//// Copyright (c) Microsoft Corporation. All rights reserved

#include "pch.h"
#include "Level2.h"
#include "Face.h"

//----------------------------------------------------------------------

Level2::Level2()
{
m_timeLimit = 30.0f;
m_objective = "Hit each of the targets in ORDER before time runs out.";
}

//----------------------------------------------------------------------

void Level2::Initialize(std::vector<GameObject^> objects)
{
Level1::Initialize(objects);

int targetCount = 0;
for (auto object = objects.begin(); object != objects.end(); object++)
{
if (Face^ target = dynamic_cast<Face^>(*object))
{
if (targetCount < 9)
{
target->Target(targetCount == 0 ? true : false);
targetCount++;
}
}
}
m_nextId = 1;
}

//----------------------------------------------------------------------

bool Level2::Update(
float /* time */,
float /* elapsedTime */,
float /* timeRemaining */,
std::vector<GameObject^> objects
)
{
int left = 0;
for (auto object = objects.begin(); object != objects.end(); object++)
{
if ((*object)->Active() && ((*object)->TargetId() > 0))
{
if ((*object)->Hit() && ((*object)->TargetId() == m_nextId))
{
(*object)->Active(false);
m_nextId++;
}
else
{
left++;
}
}
if ((*object)->Active() && ((*object)->TargetId() == m_nextId))
{
(*object)->Target(true);
}
}
return (left == 0);
}

//----------------------------------------------------------------------

void Level2::SaveState(PersistentState^ state)
{
state->SaveInt32(":NextTarget", m_nextId);
}

//----------------------------------------------------------------------

void Level2::LoadState(PersistentState^ state)
{
m_nextId = state->LoadInt32(":NextTarget", 1);
}

//----------------------------------------------------------------------

Level3.h
//// THIS CODE AND INFORMATION IS PROVIDED "AS IS" WITHOUT WARRANTY OF
//// ANY KIND, EITHER EXPRESSED OR IMPLIED, INCLUDING BUT NOT LIMITED TO
//// THE IMPLIED WARRANTIES OF MERCHANTABILITY AND/OR FITNESS FOR A
//// PARTICULAR PURPOSE.
////
//// Copyright (c) Microsoft Corporation. All rights reserved

#pragma once

// Level3:
// This class defines the third level of the game. In this level, each of the
// nine targets is moving along closed paths and can be hit
// in any order.

#include "Level.h"

ref class Level3: public Level
{
internal:
Level3();
virtual void Initialize(std::vector<GameObject^> objects) override;
};

Level3.cpp

//// THIS CODE AND INFORMATION IS PROVIDED "AS IS" WITHOUT WARRANTY OF
//// ANY KIND, EITHER EXPRESSED OR IMPLIED, INCLUDING BUT NOT LIMITED TO
//// THE IMPLIED WARRANTIES OF MERCHANTABILITY AND/OR FITNESS FOR A
//// PARTICULAR PURPOSE.
////
//// Copyright (c) Microsoft Corporation. All rights reserved

#include "pch.h"
#include "Level3.h"
#include "Face.h"
#include "Animate.h"

using namespace DirectX;

//----------------------------------------------------------------------

Level3::Level3()
{
m_timeLimit = 30.0f;
m_objective = "Hit each of the moving targets before time runs out.";
}

//----------------------------------------------------------------------

void Level3::Initialize(std::vector<GameObject^> objects)
{
XMFLOAT3 position[] =
{
XMFLOAT3(-2.5f, -1.0f, -1.5f),
XMFLOAT3(-1.0f, 1.0f, -3.0f),
XMFLOAT3( 1.5f, 0.0f, -5.5f),
XMFLOAT3(-2.5f, -1.0f, -5.5f),
XMFLOAT3( 0.5f, -2.0f, -5.0f),
XMFLOAT3( 1.5f, -2.0f, -5.5f),
XMFLOAT3( 0.0f, -3.6f, 0.0f),
XMFLOAT3( 0.0f, -3.6f, 0.0f),
XMFLOAT3( 0.0f, -3.6f, 0.0f)
};
XMFLOAT3 LineList1[] =
{
XMFLOAT3(-2.5f, -1.0f, -1.5f),
XMFLOAT3(-0.5f, 1.0f, 1.0f),
XMFLOAT3(-0.5f, -2.5f, 1.0f),
XMFLOAT3(-2.5f, -1.0f, -1.5f),
};
XMFLOAT3 LineList2[] =
{
XMFLOAT3(-1.0f, 1.0f, -3.0f),
XMFLOAT3(-2.0f, 2.0f, -1.5f),
XMFLOAT3(-2.0f, -2.5f, -1.5f),
XMFLOAT3( 1.5f, -2.5f, -1.5f),
XMFLOAT3( 1.5f, -2.5f, -3.0f),
XMFLOAT3(-1.0f, 1.0f, -3.0f),
};
XMFLOAT3 LineList3[] =
{
XMFLOAT3(1.5f, 0.0f, -5.5f),
XMFLOAT3(1.5f, 1.0f, -5.5f),
XMFLOAT3(1.5f, -2.5f, -5.5f),
XMFLOAT3(1.5f, 0.0f, -5.5f),
};
XMFLOAT3 LineList4[] =
{
XMFLOAT3(-2.5f, -1.0f, -5.5f),
XMFLOAT3( 1.0f, -1.0f, -5.5f),
XMFLOAT3( 1.0f, 1.0f, -5.5f),
XMFLOAT3(-2.5f, 1.0f, -5.5f),
XMFLOAT3(-2.5f, -1.0f, -5.5f),
};
XMFLOAT3 LineList5[] =
{
XMFLOAT3( 0.5f, -2.0f, -5.0f),
XMFLOAT3( 2.0f, -2.0f, -5.0f),
XMFLOAT3( 2.0f, 1.0f, -5.0f),
XMFLOAT3(-2.5f, 1.0f, -5.0f),
XMFLOAT3(-2.5f, -2.0f, -5.0f),
XMFLOAT3( 0.5f, -2.0f, -5.0f),
};
XMFLOAT3 LineList6[] =
{
XMFLOAT3( 1.5f, -2.0f, -5.5f),
XMFLOAT3(-2.5f, -2.0f, -5.5f),
XMFLOAT3(-2.5f, 1.0f, -5.5f),
XMFLOAT3( 1.5f, 1.0f, -5.5f),
XMFLOAT3( 1.5f, -2.0f, -5.5f),
};

int targetCount = 0;
for (auto object = objects.begin(); object != objects.end(); object++)
{
if (Face^ target = dynamic_cast<Face^>(*object))
{
if (targetCount < 9)
{
target->Active(true);
target->Target(true);
target->Hit(false);
target->Position(position[targetCount]);
switch (targetCount)
{
case 0:
target->AnimatePosition(ref new AnimateLineListPosition(4, LineList1, 10.0f, true));
break;
case 1:
target->AnimatePosition(ref new AnimateLineListPosition(6, LineList2, 15.0f, true));
break;
case 2:
target->AnimatePosition(ref new AnimateLineListPosition(4, LineList3, 15.0f, true));
break;
case 3:
target->AnimatePosition(ref new AnimateLineListPosition(5, LineList4, 15.0f, true));
break;
case 4:
target->AnimatePosition(ref new AnimateLineListPosition(6, LineList5, 15.0f, true));
break;
case 5:
target->AnimatePosition(ref new AnimateLineListPosition(5, LineList6, 15.0f, true));
break;
case 6:
target->AnimatePosition(
ref new AnimateCirclePosition(
XMFLOAT3(0.0f, -2.5f, 0.0f),
XMFLOAT3(0.0f, -3.6f, 0.0f),
XMFLOAT3(0.0f, 0.0f, 1.0f),
9.0f,
true,
true
)
);
break;
case 7:
target->AnimatePosition(
ref new AnimateCirclePosition(
XMFLOAT3(0.0f, -2.5f, 0.0f),
XMFLOAT3(0.0f, -3.6f, 0.0f),
XMFLOAT3(0.0f, 0.0f, 1.0f),
9.0f,
true,
true
)
);
target->AnimatePosition()->Start(3.0f);
break;
case 8:
target->AnimatePosition(
ref new AnimateCirclePosition(
XMFLOAT3(0.0f, -2.5f, 0.0f),
XMFLOAT3(0.0f, -3.6f, 0.0f),
XMFLOAT3(0.0f, 0.0f, 1.0f),
9.0f,
true,
true
)
);
target->AnimatePosition()->Start(6.0f);
break;
}
targetCount++;
}
else
{
target->Active(false);
}
}
else
{
(*object)->Active(false);
}
}
}

//----------------------------------------------------------------------

Level4.h
//// THIS CODE AND INFORMATION IS PROVIDED "AS IS" WITHOUT WARRANTY OF
//// ANY KIND, EITHER EXPRESSED OR IMPLIED, INCLUDING BUT NOT LIMITED TO
//// THE IMPLIED WARRANTIES OF MERCHANTABILITY AND/OR FITNESS FOR A
//// PARTICULAR PURPOSE.
////
//// Copyright (c) Microsoft Corporation. All rights reserved

#pragma once

// Level4:
// This class defines the fourth level of the game. It derives from the
// third level. The targets must be hit in numeric order.

#include "Level3.h"

ref class Level4: public Level3
{
internal:
Level4();
virtual void Initialize(std::vector<GameObject^> objects) override;

virtual bool Update(
float time,
float elapsedTime,
float timeRemaining,
std::vector<GameObject^> objects
) override;

virtual void SaveState(PersistentState^ state) override;
virtual void LoadState(PersistentState^ state) override;

private:
int m_nextId;
};

Level4.cpp

//// THIS CODE AND INFORMATION IS PROVIDED "AS IS" WITHOUT WARRANTY OF
//// ANY KIND, EITHER EXPRESSED OR IMPLIED, INCLUDING BUT NOT LIMITED TO
//// THE IMPLIED WARRANTIES OF MERCHANTABILITY AND/OR FITNESS FOR A
//// PARTICULAR PURPOSE.
////
//// Copyright (c) Microsoft Corporation. All rights reserved

#include "pch.h"
#include "Level4.h"
#include "Face.h"

//----------------------------------------------------------------------

Level4::Level4()
{
m_timeLimit = 30.0f;
m_objective = "Hit each of the moving targets in ORDER before time runs out.";
}

//----------------------------------------------------------------------

void Level4::Initialize(std::vector<GameObject^> objects)
{
Level3::Initialize(objects);

int targetCount = 0;
for (auto object = objects.begin(); object != objects.end(); object++)
{
if (Face^ target = dynamic_cast<Face^>(*object))
{
if (targetCount < 9)
{
target->Target(targetCount == 0 ? true : false);
targetCount++;
}
}
}
m_nextId = 1;
}

//----------------------------------------------------------------------

bool Level4::Update(
float /* time */,
float /* elapsedTime */,
float /* timeRemaining */,
std::vector<GameObject^> objects
)
{
int left = 0;
for (auto object = objects.begin(); object != objects.end(); object++)
{
if ((*object)->Active() && ((*object)->TargetId() > 0))
{
if ((*object)->Hit() && ((*object)->TargetId() == m_nextId))
{
(*object)->Active(false);
m_nextId++;
}
else
{
left++;
}
}
if ((*object)->Active() && ((*object)->TargetId() == m_nextId))
{
(*object)->Target(true);
}
}
return (left == 0);
}

//----------------------------------------------------------------------

void Level4::SaveState(PersistentState^ state)
{
state->SaveInt32(":NextTarget", m_nextId);
}

//----------------------------------------------------------------------

void Level4::LoadState(PersistentState^ state)
{
m_nextId = state->LoadInt32(":NextTarget", 1);
}

//----------------------------------------------------------------------

Level5.h
//// THIS CODE AND INFORMATION IS PROVIDED "AS IS" WITHOUT WARRANTY OF
//// ANY KIND, EITHER EXPRESSED OR IMPLIED, INCLUDING BUT NOT LIMITED TO
//// THE IMPLIED WARRANTIES OF MERCHANTABILITY AND/OR FITNESS FOR A
//// PARTICULAR PURPOSE.
////
//// Copyright (c) Microsoft Corporation. All rights reserved

#pragma once

// Level5:
// This class defines the fifth level of the game. It derives from the
// third level. This level introduces obstacles that move into place
// during game play. The targets may be hit in any order.

#include "Level3.h"

ref class Level5: public Level3
{
internal:
Level5();
virtual void Initialize(std::vector<GameObject^> objects) override;
};

Level5.cpp

//// THIS CODE AND INFORMATION IS PROVIDED "AS IS" WITHOUT WARRANTY OF
//// ANY KIND, EITHER EXPRESSED OR IMPLIED, INCLUDING BUT NOT LIMITED TO
//// THE IMPLIED WARRANTIES OF MERCHANTABILITY AND/OR FITNESS FOR A
//// PARTICULAR PURPOSE.
////
//// Copyright (c) Microsoft Corporation. All rights reserved

#include "pch.h"
#include "Level5.h"
#include "Cylinder.h"
#include "Animate.h"

using namespace DirectX;

//----------------------------------------------------------------------

Level5::Level5()
{
m_timeLimit = 30.0f;
m_objective = "Hit each of the moving targets while avoiding the obstacles before time runs out.";
}

//----------------------------------------------------------------------

void Level5::Initialize(std::vector<GameObject^> objects)
{
Level3::Initialize(objects);

XMFLOAT3 obstacleStartPosition[] =
{
XMFLOAT3(-4.5f, -3.0f, 0.0f),
XMFLOAT3(4.5f, -3.0f, 0.0f),
XMFLOAT3(0.0f, 3.01f, -2.0f),
XMFLOAT3(-1.5f, -3.0f, -6.5f),
XMFLOAT3(1.5f, -3.0f, -6.5f)
};
XMFLOAT3 obstacleEndPosition[] =
{
XMFLOAT3(-2.0f, -3.0f, 0.0f),
XMFLOAT3(2.0f, -3.0f, 0.0f),
XMFLOAT3(0.0f, -3.0f, -2.0f),
XMFLOAT3(-1.5f, -3.0f, -4.0f),
XMFLOAT3(1.5f, -3.0f, -4.0f)
};
float obstacleStartTime[] =
{
2.0f,
5.0f,
8.0f,
11.0f,
14.0f
};

int obstacleCount = 0;
for (auto object = objects.begin(); object != objects.end(); object++)
{
if (Cylinder^ obstacle = dynamic_cast<Cylinder^>(*object))
{
if (obstacleCount < 5)
{
obstacle->Active(true);
obstacle->Position(obstacleStartPosition[obstacleCount]);
obstacle->AnimatePosition(
ref new AnimateLinePosition(
obstacleStartPosition[obstacleCount],
obstacleEndPosition[obstacleCount],
10.0f,
false
)
);
obstacle->AnimatePosition()->Start(obstacleStartTime[obstacleCount]);
obstacleCount++;
}
}
}
}

//----------------------------------------------------------------------

Level6.h
//// THIS CODE AND INFORMATION IS PROVIDED "AS IS" WITHOUT WARRANTY OF
//// ANY KIND, EITHER EXPRESSED OR IMPLIED, INCLUDING BUT NOT LIMITED TO
//// THE IMPLIED WARRANTIES OF MERCHANTABILITY AND/OR FITNESS FOR A
//// PARTICULAR PURPOSE.
////
//// Copyright (c) Microsoft Corporation. All rights reserved

#pragma once

// Level6:
// This class defines the sixth and final level of the game. It derives from the
// fifth level. In this level, the targets do not disappear when they are hit.
// The target will stay highlighted for two seconds. As this is the last level,
// the only criteria for completion is time expiring.

#include "Level5.h"

ref class Level6: public Level5
{
internal:
Level6();
virtual bool Update(
float time,
float elapsedTime,
float timeRemaining,
std::vector<GameObject^> objects
) override;
};

Level6.cpp
//// THIS CODE AND INFORMATION IS PROVIDED "AS IS" WITHOUT WARRANTY OF
//// ANY KIND, EITHER EXPRESSED OR IMPLIED, INCLUDING BUT NOT LIMITED TO
//// THE IMPLIED WARRANTIES OF MERCHANTABILITY AND/OR FITNESS FOR A
//// PARTICULAR PURPOSE.
////
//// Copyright (c) Microsoft Corporation. All rights reserved

#include "pch.h"
#include "Level6.h"

//----------------------------------------------------------------------

Level6::Level6()
{
m_timeLimit = 20.0f;
m_objective = "Hit as many moving targets as possible while avoiding the obstacles before time runs out.";
}

//----------------------------------------------------------------------

bool Level6::Update(
float time,
float elapsedTime,
float timeRemaining,
std::vector<GameObject^> objects
)
{
for (auto object = objects.begin(); object != objects.end(); object++)
{
if ((*object)->Active() && (*object)->Target())
{
if ((*object)->Hit() && ((*object)->HitTime() < (time - 2.0f)))
{
(*object)->Hit(false);
}
}
}
return ((timeRemaining - elapsedTime) <= 0.0f);
}

//----------------------------------------------------------------------

Animate.h

//// THIS CODE AND INFORMATION IS PROVIDED "AS IS" WITHOUT WARRANTY OF
//// ANY KIND, EITHER EXPRESSED OR IMPLIED, INCLUDING BUT NOT LIMITED TO
//// THE IMPLIED WARRANTIES OF MERCHANTABILITY AND/OR FITNESS FOR A
//// PARTICULAR PURPOSE.
////
//// Copyright (c) Microsoft Corporation. All rights reserved

#pragma once

// Animate:
// This is an abstract class for animations. It defines a set of
// capabilities for animating XMFLOAT3 variables. An animation has the following
// characteristics:
// Start - the time for the animation to start.
// Duration - the length of time the animation is to run.
// Continuous - whether the animation loops after duration is reached or just
// stops.
// There are two query functions:
// IsActive - determines if the animation is active at time t.
// IsFinished - determines if the animation is done at time t.
// It is expected that each derived class will provide an Evaluate method for the
// specific kind of animation.
ref class Animate abstract
{
internal:
Animate();

virtual DirectX::XMFLOAT3 Evaluate (_In_ float t) = 0;

bool IsActive(_In_ float t) { return ((t >= m_startTime) && (m_continuous || (t < (m_startTime + m_duration)))); };
bool IsFinished(_In_ float t) { return (!m_continuous && (t >= (m_startTime + m_duration))); }
float Start();
void Start(_In_ float start);
float Duration();
void Duration(_In_ float duration);
bool Continuous();
void Continuous(_In_ bool continuous);

protected private:
bool m_continuous; // if true means at end cycle back to beginning
float m_startTime; // time at which animation begins
float m_duration; // for continuous, this is the duration of 1 cycle through path
};
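
// Example usage (illustrative; not part of the sample file): a per-frame
// update could drive an animated value from any Animate-derived instance,
// where totalTime comes from the game's timer:
//
//     if (animation->IsActive(totalTime))
//     {
//         object->Position(animation->Evaluate(totalTime));
//     }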

//----------------------------------------------------------------------

// AnimateLinePosition:
// This class is a specialization of Animate that defines an animation of a position vector
// along a straight line defined in world coordinates from startPosition to endPosition.

ref class AnimateLinePosition: public Animate
{
internal:
AnimateLinePosition(
_In_ DirectX::XMFLOAT3 startPosition,
_In_ DirectX::XMFLOAT3 endPosition,
_In_ float duration,
_In_ bool continuous
);
virtual DirectX::XMFLOAT3 Evaluate(_In_ float t) override;

private:
DirectX::XMFLOAT3 m_startPosition;
DirectX::XMFLOAT3 m_endPosition;
float m_length;
};

//----------------------------------------------------------------------

struct LineSegment
{
DirectX::XMFLOAT3 position;
float length;
float uStart;
float uLength;
};

// AnimateLineListPosition:
// This class is a specialization of Animate that defines an animation of a position vector
// along a set of line segments defined by a set of points. The animation along the path is
// such that the evaluation of the position along the path will be uniform independent of
// the length of each of the line segments. A continuous loop can be achieved by having the
// first and last points of the list be the same.

ref class AnimateLineListPosition: public Animate
{
internal:
AnimateLineListPosition(
_In_ unsigned int count,
_In_reads_(count) DirectX::XMFLOAT3 position[],
_In_ float duration,
_In_ bool continuous
);
virtual DirectX::XMFLOAT3 Evaluate(_In_ float t) override;

private:
unsigned int m_count;
float m_totalLength;
std::vector<LineSegment> m_segment;
};

//----------------------------------------------------------------------

// AnimateCirclePosition:
// This class is a specialization of Animate that defines an animation of a position vector
// along a circular path centered at 'center' with a starting point of 'startPosition'. The
// distance between 'center' and 'startPosition' defines the radius of the circle. The plane
// of the circle will pass through 'center' and 'startPosition' and have a normal of 'planeNormal'.
// The direction of the animation can be either clockwise or counterclockwise based
// on the 'clockwise' parameter.

ref class AnimateCirclePosition: public Animate
{
internal:
AnimateCirclePosition(
_In_ DirectX::XMFLOAT3 center,
_In_ DirectX::XMFLOAT3 startPosition,
_In_ DirectX::XMFLOAT3 planeNormal,
_In_ float duration,
_In_ bool continuous,
_In_ bool clockwise
);
virtual DirectX::XMFLOAT3 Evaluate(_In_ float t) override;

private:
DirectX::XMFLOAT4X4 m_rotationMatrix;
DirectX::XMFLOAT3 m_center;
DirectX::XMFLOAT3 m_planeNormal;
DirectX::XMFLOAT3 m_startPosition;
float m_radius;
bool m_clockwise;
};

//----------------------------------------------------------------------

Animate.cpp

//// THIS CODE AND INFORMATION IS PROVIDED "AS IS" WITHOUT WARRANTY OF
//// ANY KIND, EITHER EXPRESSED OR IMPLIED, INCLUDING BUT NOT LIMITED TO
//// THE IMPLIED WARRANTIES OF MERCHANTABILITY AND/OR FITNESS FOR A
//// PARTICULAR PURPOSE.
////
//// Copyright (c) Microsoft Corporation. All rights reserved

#include "pch.h"
#include "Animate.h"

using namespace DirectX;

//----------------------------------------------------------------------

Animate::Animate():
m_continuous(false),
m_startTime(0.0f),
m_duration(10.0f)
{
}
//----------------------------------------------------------------------

float Animate::Start()
{
return m_startTime;
}

//----------------------------------------------------------------------

void Animate::Start(_In_ float start)
{
m_startTime = start;
}

//----------------------------------------------------------------------

float Animate::Duration()
{
return m_duration;
}

//----------------------------------------------------------------------

void Animate::Duration(_In_ float duration)
{
m_duration = duration;
}

//----------------------------------------------------------------------

bool Animate::Continuous()
{
return m_continuous;
}

//----------------------------------------------------------------------

void Animate::Continuous(_In_ bool continuous)
{
m_continuous = continuous;
}

//----------------------------------------------------------------------

AnimateLinePosition::AnimateLinePosition(
_In_ XMFLOAT3 startPosition,
_In_ XMFLOAT3 endPosition,
_In_ float duration,
_In_ bool continuous)
{
m_startPosition = startPosition;
m_endPosition = endPosition;
m_duration = duration;
m_continuous = continuous;

m_length = XMVectorGetX(
XMVector3Length(XMLoadFloat3(&endPosition) - XMLoadFloat3(&startPosition))
);
}

//----------------------------------------------------------------------

XMFLOAT3 AnimateLinePosition::Evaluate(_In_ float t)
{
if (t <= m_startTime)
{
return m_startPosition;
}

if ((t >= (m_startTime + m_duration)) && !m_continuous)
{
return m_endPosition;
}

float startTime = m_startTime;
if (m_continuous)
{
// For continuous operation, move the start time forward to
// eliminate previous iterations.
startTime += ((int)((t - m_startTime) / m_duration)) * m_duration;
}

float u = (t - startTime) / m_duration;

XMFLOAT3 currentPosition;
currentPosition.x = m_startPosition.x + (m_endPosition.x - m_startPosition.x)*u;
currentPosition.y = m_startPosition.y + (m_endPosition.y - m_startPosition.y)*u;
currentPosition.z = m_startPosition.z + (m_endPosition.z - m_startPosition.z)*u;

return currentPosition;
}

//----------------------------------------------------------------------

AnimateLineListPosition::AnimateLineListPosition(
_In_ unsigned int count,
_In_reads_(count) XMFLOAT3 position[],
_In_ float duration,
_In_ bool continuous)
{
m_duration = duration;
m_continuous = continuous;
m_count = count;

std::vector<LineSegment> segment(m_count);
m_segment = segment;
m_totalLength = 0.0f;

m_segment[0].position = position[0];
for (unsigned int i = 1; i < count; i++)
{
m_segment[i].position = position[i];
m_segment[i - 1].length = XMVectorGetX(
XMVector3Length(
XMLoadFloat3(&m_segment[i].position) -
XMLoadFloat3(&m_segment[i - 1].position)
)
);
m_totalLength += m_segment[i - 1].length;
}

// Parameterize the segments to ensure uniform evaluation along the path.
float u = 0.0f;
for (unsigned int i = 0; i < (count - 1); i++)
{
m_segment[i].uStart = u;
m_segment[i].uLength = (m_segment[i].length / m_totalLength);
u += m_segment[i].uLength;
}
m_segment[count-1].uStart = 1.0f;
}

//----------------------------------------------------------------------

XMFLOAT3 AnimateLineListPosition::Evaluate(_In_ float t)
{
if (t <= m_startTime)
{
return m_segment[0].position;
}

if ((t >= (m_startTime + m_duration)) && !m_continuous)
{
return m_segment[m_count-1].position;
}

float startTime = m_startTime;
if (m_continuous)
{
// For continuous operation, move the start time forward to
// eliminate previous iterations.
startTime += ((int)((t - m_startTime) / m_duration)) * m_duration;
}

float u = (t - startTime) / m_duration;

// Find the right segment.
unsigned int i = 0;
while (u > m_segment[i + 1].uStart)
{
i++;
}

u -= m_segment[i].uStart;
u /= m_segment[i].uLength;

XMFLOAT3 currentPosition;
currentPosition.x = m_segment[i].position.x + (m_segment[i + 1].position.x - m_segment[i].position.x)*u;
currentPosition.y = m_segment[i].position.y + (m_segment[i + 1].position.y - m_segment[i].position.y)*u;
currentPosition.z = m_segment[i].position.z + (m_segment[i + 1].position.z - m_segment[i].position.z)*u;

return currentPosition;
}

//----------------------------------------------------------------------

AnimateCirclePosition::AnimateCirclePosition(
_In_ XMFLOAT3 center,
_In_ XMFLOAT3 startPosition,
_In_ XMFLOAT3 planeNormal,
_In_ float duration,
_In_ bool continuous,
_In_ bool clockwise)
{
m_center = center;
m_planeNormal = planeNormal;
m_startPosition = startPosition;
m_duration = duration;
m_continuous = continuous;
m_clockwise = clockwise;

XMVECTOR coordX = XMLoadFloat3(&m_startPosition) - XMLoadFloat3(&m_center);
m_radius = XMVectorGetX(XMVector3Length(coordX));
coordX = XMVector3Normalize(coordX);

XMVECTOR coordZ = XMLoadFloat3(&m_planeNormal);
coordZ = XMVector3Normalize(coordZ);

XMVECTOR coordY;
if (m_clockwise)
{
coordY = XMVector3Cross(coordZ, coordX);
}
else
{
coordY = XMVector3Cross(coordX, coordZ);
}

XMVECTOR vectorX = XMVectorSet(1.0f, 0.0f, 0.0f, 1.0f);
XMVECTOR vectorY = XMVectorSet(0.0f, 1.0f, 0.0f, 1.0f);
XMMATRIX mat1 = XMMatrixIdentity();
XMMATRIX mat2 = XMMatrixIdentity();

if (!XMVector3Equal(coordX, vectorX))
{
float angle;
angle = XMVectorGetX(
XMVector3AngleBetweenVectors(vectorX, coordX)
);
if ((angle * angle) > 0.025)
{
XMVECTOR axis1 = XMVector3Cross(vectorX, coordX);

mat1 = XMMatrixRotationAxis(axis1, angle);
vectorY = XMVector3TransformCoord(vectorY, mat1);
}
}
if (!XMVector3Equal(vectorY, coordY))
{
float angle;
angle = XMVectorGetX(
XMVector3AngleBetweenVectors(vectorY, coordY)
);
if ((angle * angle) > 0.025)
{
XMVECTOR axis2 = XMVector3Cross(vectorY, coordY);
mat2 = XMMatrixRotationAxis(axis2, angle);
}
}
XMStoreFloat4x4(
&m_rotationMatrix,
mat1 *
mat2 *
XMMatrixTranslation(m_center.x, m_center.y, m_center.z)
);
}

//----------------------------------------------------------------------

XMFLOAT3 AnimateCirclePosition::Evaluate(_In_ float t)
{
if (t <= m_startTime)
{
return m_startPosition;
}

if ((t >= (m_startTime + m_duration)) && !m_continuous)
{
return m_startPosition;
}

float startTime = m_startTime;
if (m_continuous)
{
// For continuous operation move the start time forward to
// eliminate previous iterations.
startTime += ((int)((t - m_startTime) / m_duration)) * m_duration;
}

float u = (t - startTime) / m_duration * XM_2PI;

XMFLOAT3 currentPosition;
currentPosition.x = m_radius * cos(u);
currentPosition.y = m_radius * sin(u);
currentPosition.z = 0.0f;
XMStoreFloat3(
&currentPosition,
XMVector3TransformCoord(
XMLoadFloat3(&currentPosition),
XMLoadFloat4x4(&m_rotationMatrix)
)
);

return currentPosition;
}

//----------------------------------------------------------------------

Note
This article is for Windows 10 developers writing Universal Windows Platform (UWP) apps. If you're
developing for Windows 8.x or Windows Phone 8.x, see the archived documentation.

Related topics
Create a simple UWP game with DirectX
Assemble the rendering framework
3/6/2017 • 97 min to read

[ Updated for UWP apps on Windows 10. For Windows 8.x articles, see the archive ]
By now, you've seen how to structure a Universal Windows Platform (UWP) game to work with the Windows
Runtime, and how to define a state machine to handle the flow of the game. Now, it's time to look at how the
sample game uses that structure and state to display its graphics. Here, we look at how to implement a rendering
framework, starting from the initialization of the graphics device through the presentation of the graphics objects
for display.

Objective
To understand how to set up a basic rendering framework to display the graphics output for a UWP DirectX
game.

Note The following code files are not discussed here, but provide classes and methods referred to in this
topic and are provided as code at the end of this topic:
Animate.h/.cpp.
BasicLoader.h/.cpp. Provides methods for loading meshes, shaders and textures, both synchronously
and asynchronously. Very useful!
MeshObject.h/.cpp, SphereMesh.h/.cpp, CylinderMesh.h/.cpp, FaceMesh.h/.cpp, and
WorldMesh.h/.cpp. Contains the definitions of the object primitives used in the game, such as the ammo
spheres, the cylinder and cone obstacles, and the walls of the shooting gallery. (GameObject.cpp, briefly
discussed in this topic, contains the method for rendering these primitives.)
Level.h/.cpp and Level[1-6].h/.cpp. Contains the configuration for each of the game's six levels,
including the success criteria and the number and position of the targets and obstacles.
TargetTexture.h/.cpp. Contains a set of methods for drawing the bitmaps used as the textures on the
targets.

These files contain code that is not specific to UWP DirectX games. But you can review them separately if you'd
like more implementation details.
This section covers three key files from the game sample (provided as code at the end of this topic):
Camera.h/.cpp
GameRenderer.h/.cpp
PrimObject.h/.cpp
Again, we assume that you understand basic 3D programming concepts like meshes, vertices, and textures. For
more info about Direct3D 11 programming in general, see Programming Guide for Direct3D 11. With that said,
let's look at the work that must be done to put our game on the screen.

An overview of the Windows Runtime and DirectX


DirectX is a fundamental part of the Windows Runtime and of the Windows 10 experience. All of Windows 10's
visuals are built on top of DirectX, and you have the same direct line to the same low-level graphics interface,
DXGI, which provides an abstraction layer for the graphics hardware and its drivers. All the Direct3D 11 APIs are
available for you to talk to DXGI directly. The result is fast, high performing graphics in your games that give you
access to all the latest graphics hardware features.
To add DirectX support to a UWP app, you create a view provider for DirectX resources by implementing the
IFrameworkViewSource and IFrameworkView interfaces. These provide a factory pattern for your view
provider type and the implementation of your DirectX view provider, respectively. The UWP singleton,
represented by the CoreApplication object, runs this implementation.
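For illustration, here's a minimal sketch of that factory pattern; the MyViewSource and MyView names are
placeholders, not types from the game sample.

ref class MyViewSource sealed : Windows::ApplicationModel::Core::IFrameworkViewSource
{
public:
    // CoreApplication calls this factory method to obtain the DirectX view.
    virtual Windows::ApplicationModel::Core::IFrameworkView^ CreateView()
    {
        return ref new MyView(); // MyView implements IFrameworkView.
    }
};

[Platform::MTAThread]
int main(Platform::Array<Platform::String^>^)
{
    // Hand the view source to the CoreApplication singleton, which runs it.
    Windows::ApplicationModel::Core::CoreApplication::Run(ref new MyViewSource());
    return 0;
}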
In Defining the game's UWP framework, we looked at how the renderer fit into the game sample's app
framework. Now, let's look at how the game renderer connects to the view and builds the graphics that define the
look of the game.

Defining the renderer


The GameRenderer type inherits from the DirectXBase renderer type, adds support for stereo 3-D, and
declares constant buffers and resources for the shaders that create and define our graphic primitives.
Here's the definition of GameRenderer.
ref class GameRenderer : public DirectXBase
{
internal:
GameRenderer();

virtual void Initialize(
_In_ Windows::UI::Core::CoreWindow^ window,
float dpi
) override;

virtual void CreateDeviceIndependentResources() override;
virtual void CreateDeviceResources() override;
virtual void UpdateForWindowSizeChange() override;
virtual void Render() override;
virtual void SetDpi(float dpi) override;

concurrency::task<void> CreateGameDeviceResourcesAsync(_In_ Simple3DGame^ game);
void FinalizeCreateGameDeviceResources();
concurrency::task<void> LoadLevelResourcesAsync();
void FinalizeLoadLevelResources();

GameInfoOverlay^ InfoOverlay() { return m_gameInfoOverlay; };

DirectX::XMFLOAT2 GameInfoOverlayUpperLeft()
{
return DirectX::XMFLOAT2(
(m_windowBounds.Width - GameInfoOverlayConstant::Width) / 2.0f,
(m_windowBounds.Height - GameInfoOverlayConstant::Height) / 2.0f
);
};
DirectX::XMFLOAT2 GameInfoOverlayLowerRight()
{
return DirectX::XMFLOAT2(
(m_windowBounds.Width - GameInfoOverlayConstant::Width) / 2.0f + GameInfoOverlayConstant::Width,
(m_windowBounds.Height - GameInfoOverlayConstant::Height) / 2.0f + GameInfoOverlayConstant::Height
);
};

protected private:
bool m_initialized;
bool m_gameResourcesLoaded;
bool m_levelResourcesLoaded;
GameInfoOverlay^ m_gameInfoOverlay;
GameHud^ m_gameHud;
Simple3DGame^ m_game;

Microsoft::WRL::ComPtr<ID3D11ShaderResourceView> m_sphereTexture;
Microsoft::WRL::ComPtr<ID3D11ShaderResourceView> m_cylinderTexture;
Microsoft::WRL::ComPtr<ID3D11ShaderResourceView> m_ceilingTexture;
Microsoft::WRL::ComPtr<ID3D11ShaderResourceView> m_floorTexture;
Microsoft::WRL::ComPtr<ID3D11ShaderResourceView> m_wallsTexture;

// Constant Buffers
Microsoft::WRL::ComPtr<ID3D11Buffer> m_constantBufferNeverChanges;
Microsoft::WRL::ComPtr<ID3D11Buffer> m_constantBufferChangeOnResize;
Microsoft::WRL::ComPtr<ID3D11Buffer> m_constantBufferChangesEveryFrame;
Microsoft::WRL::ComPtr<ID3D11Buffer> m_constantBufferChangesEveryPrim;
Microsoft::WRL::ComPtr<ID3D11SamplerState> m_samplerLinear;
Microsoft::WRL::ComPtr<ID3D11VertexShader> m_vertexShader;
Microsoft::WRL::ComPtr<ID3D11VertexShader> m_vertexShaderFlat;
Microsoft::WRL::ComPtr<ID3D11PixelShader> m_pixelShader;
Microsoft::WRL::ComPtr<ID3D11PixelShader> m_pixelShaderFlat;
Microsoft::WRL::ComPtr<ID3D11InputLayout> m_vertexLayout;
};

Because the Direct3D 11 APIs are defined as COM APIs, you must provide ComPtr references to the objects
defined by these APIs. These objects are automatically freed when their last reference goes out of scope.
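As a brief, illustrative sketch of the ComPtr pattern (this exact snippet is not from the sample; the bd descriptor
is reused from the sample's code later in this topic): passing &buffer to a creation function releases any reference
the ComPtr already holds and receives the new one, while Get() and GetAddressOf() pass the pointer without
transferring ownership.

Microsoft::WRL::ComPtr<ID3D11Buffer> buffer;
DX::ThrowIfFailed(
    m_d3dDevice->CreateBuffer(&bd, nullptr, &buffer) // &buffer receives the new COM reference.
);
m_d3dContext->VSSetConstantBuffers(0, 1, buffer.GetAddressOf()); // No ownership transfer.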
The game sample declares four specific constant buffers:
m_constantBufferNeverChanges. This constant buffer contains the lighting parameters. It's set one time
and never changes again.
m_constantBufferChangeOnResize. This constant buffer contains the projection matrix. The projection
matrix is dependent on the size and aspect ratio of the window. It's updated only when the window size
changes.
m_constantBufferChangesEveryFrame. This constant buffer contains the view matrix. This matrix is
dependent on the camera position and look direction (the normal to the projection) and changes only one
time per frame.
m_constantBufferChangesEveryPrim. This constant buffer contains the model matrix and material
properties of each primitive. The model matrix transforms vertices from local coordinates into world
coordinates. These constants are specific to each primitive and are updated for every draw call.
The whole idea of multiple constant buffers with different frequencies is to reduce the amount of data that must
be sent to the GPU per frame. Therefore, the sample separates constants into different buffers based on how
frequently they must be updated. This is a best practice for Direct3D programming, illustrated in the sketch below.
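The struct layouts in this sketch mirror the fields that the sample's code writes later in this topic (lightPosition,
lightColor, and projection); the view and worldMatrix field names are assumptions for illustration, and the real
headers may declare more fields.

struct ConstantBufferNeverChanges       // Set once: lighting parameters.
{
    DirectX::XMFLOAT4 lightPosition[4];
    DirectX::XMFLOAT4 lightColor;
};

struct ConstantBufferChangeOnResize     // Updated when the window size changes.
{
    DirectX::XMFLOAT4X4 projection;
};

struct ConstantBufferChangesEveryFrame  // Updated once per frame.
{
    DirectX::XMFLOAT4X4 view;
};

struct ConstantBufferChangesEveryPrim   // Updated for every draw call.
{
    DirectX::XMFLOAT4X4 worldMatrix;
    // ... per-primitive material properties.
};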
The renderer contains the shader objects that compute our primitives and textures: m_vertexShader and
m_pixelShader. The vertex shader processes the primitives and the basic lighting, and the pixel shader
(sometimes called a fragment shader) processes the textures and any per-pixel effects. There are two versions of
these shaders (regular and flat) for rendering different primitives. The flat versions are much simpler and don't do
specular highlights or any per-pixel lighting effects. These are used for the walls and make rendering faster on
lower-powered devices.
The renderer class contains the DirectWrite and Direct2D resources used for the overlay and the Heads Up
Display (the GameHud object). The overlay and HUD are drawn on top of the render target when projection is
complete in the graphics pipeline.
The renderer also defines the shader resource objects that hold the textures for the primitives. Some of these
textures are pre-defined (DDS textures for the walls and floor of the world as well as the ammo spheres).
Now, it's time to see how this object is created!

Initializing the renderer


The sample game calls this Initialize method as part of the CoreApplication initialization sequence in
App::SetWindow.
void GameRenderer::Initialize(
_In_ CoreWindow^ window,
float dpi
)
{
if (!m_initialized)
{
m_gameHud = ref new GameHud(
"Windows 8 Samples",
"DirectX first-person game sample"
);
m_gameInfoOverlay = ref new GameInfoOverlay();
m_initialized = true;
}

DirectXBase::Initialize(window, dpi);

// Initialize could be called multiple times as a result of an error with the hardware device
// that requires it to be reinitialized. Because the m_gameInfoOverlay variable has resources that are
// dependent on the device, it will need to be reinitialized each time with the new device information.
m_gameInfoOverlay->Initialize(m_d2dDevice.Get(), m_d2dContext.Get(), m_dwriteFactory.Get(), dpi);
}

This is a pretty straightforward method. It checks to see if the renderer has been previously initialized, and if it
hasn't, it instantiates the GameHud and GameInfoOverlay objects.
After that, the renderer initialization process runs the base implementation of Initialize provided on the
DirectXBase class it inherited from.
When the DirectXBase initialization completes, the GameInfoOverlay object is initialized. After initialization is
complete, it's time to look at the methods for creating and loading the graphics resources for the game.

Creating and loading DirectX graphics resources


The first order of business in any game is to establish a connection to our graphics interface, create the resources
we need to draw the graphics, and then set up a render target into which we can draw those graphics. In the
game sample (and in the Microsoft Visual Studio DirectX 11 App (Universal Windows) template), this process
is implemented with three methods:
CreateDeviceIndependentResources
CreateDeviceResources
CreateWindowSizeDependentResources
Now, in the game sample, we override two of these methods (CreateDeviceIndependentResources and
CreateDeviceResources) provided on the DirectXBase class implemented in the DirectX 11 App (Universal
Windows) template. For each of these override methods, we first call the DirectXBase implementations they
override, and then add more implementation details specific to the game sample. Be aware that the DirectXBase
class implementation included with the game sample has been modified from the version provided in the Visual
Studio template to include stereoscopic view support and pre-rotation of the SwapBuffer object.
CreateWindowSizeDependentResources is not overridden by the GameRenderer object. We use the
implementation of it provided in the DirectXBase class.
For more info about the DirectXBase base implementations of these methods, see How to set up your UWP
DirectX app to display a view.
The first of these overridden methods, CreateDeviceIndependentResources, calls the
GameHud::CreateDeviceIndependentResources method to create the DirectWrite text resources that use the
Segoe UI font, which is the font used by most UWP apps.
CreateDeviceIndependentResources

void GameRenderer::CreateDeviceIndependentResources()
{
DirectXBase::CreateDeviceIndependentResources();
m_gameHud->CreateDeviceIndependentResources(m_dwriteFactory.Get(), m_wicFactory.Get());
}
void GameHud::CreateDeviceIndependentResources(
_In_ IDWriteFactory* dwriteFactory,
_In_ IWICImagingFactory* wicFactory
)
{
m_dwriteFactory = dwriteFactory;
m_wicFactory = wicFactory;

DX::ThrowIfFailed(
m_dwriteFactory->CreateTextFormat(
L"Segoe UI",
nullptr,
DWRITE_FONT_WEIGHT_LIGHT,
DWRITE_FONT_STYLE_NORMAL,
DWRITE_FONT_STRETCH_NORMAL,
GameConstants::HudBodyPointSize,
L"en-us",
&m_textFormatBody
)
);
DX::ThrowIfFailed(
m_dwriteFactory->CreateTextFormat(
L"Segoe UI Symbol",
nullptr,
DWRITE_FONT_WEIGHT_LIGHT,
DWRITE_FONT_STYLE_NORMAL,
DWRITE_FONT_STRETCH_NORMAL,
GameConstants::HudBodyPointSize,
L"en-us",
&m_textFormatBodySymbol
)
);
DX::ThrowIfFailed(
m_dwriteFactory->CreateTextFormat(
L"Segoe UI Light",
nullptr,
DWRITE_FONT_WEIGHT_LIGHT,
DWRITE_FONT_STYLE_NORMAL,
DWRITE_FONT_STRETCH_NORMAL,
GameConstants::HudTitleHeaderPointSize,
L"en-us",
&m_textFormatTitleHeader
)
);
DX::ThrowIfFailed(
m_dwriteFactory->CreateTextFormat(
L"Segoe UI Light",
nullptr,
DWRITE_FONT_WEIGHT_LIGHT,
DWRITE_FONT_STYLE_NORMAL,
DWRITE_FONT_STRETCH_NORMAL,
GameConstants::HudTitleBodyPointSize,
L"en-us",
&m_textFormatTitleBody
)
);

DX::ThrowIfFailed(m_textFormatBody->SetTextAlignment(DWRITE_TEXT_ALIGNMENT_LEADING));
DX::ThrowIfFailed(m_textFormatBody->SetParagraphAlignment(DWRITE_PARAGRAPH_ALIGNMENT_NEAR));
DX::ThrowIfFailed(m_textFormatBodySymbol->SetTextAlignment(DWRITE_TEXT_ALIGNMENT_LEADING));
DX::ThrowIfFailed(m_textFormatBodySymbol->SetParagraphAlignment(DWRITE_PARAGRAPH_ALIGNMENT_NEAR));
DX::ThrowIfFailed(m_textFormatTitleHeader->SetTextAlignment(DWRITE_TEXT_ALIGNMENT_LEADING));
DX::ThrowIfFailed(m_textFormatTitleHeader->SetParagraphAlignment(DWRITE_PARAGRAPH_ALIGNMENT_NEAR));
DX::ThrowIfFailed(m_textFormatTitleBody->SetTextAlignment(DWRITE_TEXT_ALIGNMENT_LEADING));
DX::ThrowIfFailed(m_textFormatTitleBody->SetParagraphAlignment(DWRITE_PARAGRAPH_ALIGNMENT_NEAR));
}
The sample uses four text formatters: two for title header and title body text, and two for body text. These are used in
much of the overlay text.
The second method, CreateDeviceResources, loads the specific resources for the game that will be computed
on the graphics device. Let's look at the code for this method.
CreateDeviceResources

void GameRenderer::CreateDeviceResources()
{
DirectXBase::CreateDeviceResources();

m_gameHud->CreateDeviceResources(m_d2dContext.Get());

if (m_game != nullptr)
{
// The initial invocation of CreateDeviceResources occurs
// before the Game State is initialized when the device is first
// being created, so that the initial loading screen can be displayed.
// Subsequent invocations of CreateDeviceResources will be a result
// of an error with the device that requires the resources to be
// recreated. In this case, the game state is already initialized
// so the game device resources need to be recreated.

// This sample doesn't gracefully handle all the async recreation
// of resources, so an exception is thrown.
throw Platform::Exception::CreateException(
DXGI_ERROR_DEVICE_REMOVED,
"GameRenderer::CreateDeviceResources - Recreation of resources after TDR not available\n"
);
}
}
void GameHud::CreateDeviceResources(_In_ ID2D1DeviceContext* d2dContext)
{
auto location = Package::Current->InstalledLocation;
Platform::String^ path = Platform::String::Concat(location->Path, "\\");
path = Platform::String::Concat(path, "windows-sdk.png");

ComPtr<IWICBitmapDecoder> wicBitmapDecoder;
DX::ThrowIfFailed(
m_wicFactory->CreateDecoderFromFilename(
path->Data(),
nullptr,
GENERIC_READ,
WICDecodeMetadataCacheOnDemand,
&wicBitmapDecoder
)
);

ComPtr<IWICBitmapFrameDecode> wicBitmapFrame;
DX::ThrowIfFailed(
wicBitmapDecoder->GetFrame(0, &wicBitmapFrame)
);

ComPtr<IWICFormatConverter> wicFormatConverter;
DX::ThrowIfFailed(
m_wicFactory->CreateFormatConverter(&wicFormatConverter)
);

DX::ThrowIfFailed(
wicFormatConverter->Initialize(
wicBitmapFrame.Get(),
GUID_WICPixelFormat32bppPBGRA,
WICBitmapDitherTypeNone,
nullptr,
0.0,
WICBitmapPaletteTypeCustom // The BGRA format has no palette so this value is ignored.
)
);

double dpiX = 96.0f;
double dpiY = 96.0f;
DX::ThrowIfFailed(
wicFormatConverter->GetResolution(&dpiX, &dpiY)
);

// Create D2D Resources.
DX::ThrowIfFailed(
d2dContext->CreateBitmapFromWicBitmap(
wicFormatConverter.Get(),
BitmapProperties(
PixelFormat(DXGI_FORMAT_B8G8R8A8_UNORM, D2D1_ALPHA_MODE_PREMULTIPLIED),
static_cast<float>(dpiX),
static_cast<float>(dpiY)
),
&m_logoBitmap
)
);

m_logoSize = m_logoBitmap->GetSize();

DX::ThrowIfFailed(
d2dContext->CreateSolidColorBrush(
D2D1::ColorF(D2D1::ColorF::White),
&m_textBrush
)
);
}
In this example, in normal execution, the CreateDeviceResources method just calls the base class method and
then calls the GameHud::CreateDeviceResources method (also listed previously). If there's a problem later with
the underlying graphics device, it might have to be re-initialized. In this case, the CreateDeviceResources
method initiates a set of async tasks to create the game device resources. This is done through a sequence of two
methods: a call to CreateGameDeviceResourcesAsync, and then, when it completes,
FinalizeCreateGameDeviceResources.
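For context, here's a hedged sketch of how a caller might chain the two phases with PPL tasks; the m_renderer
and m_game names and the continuation context are assumptions, because the calling code isn't shown in this topic.

m_renderer->CreateGameDeviceResourcesAsync(m_game).then([this]()
{
    // Device-context work must run back on the thread that created the renderer.
    m_renderer->FinalizeCreateGameDeviceResources();
}, concurrency::task_continuation_context::use_current());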
CreateGameDeviceResourcesAsync and FinalizeCreateGameDeviceResources

task<void> GameRenderer::CreateGameDeviceResourcesAsync(_In_ Simple3DGame^ game)
{
// Set the Loading state to wait until any async resources have
// been loaded before proceeding.
m_game = game;

// NOTE: Only the m_d3dDevice is used in this method. It's expected
// not to run on the same thread that the GameRenderer was created on.
// Create methods on the m_d3dDevice are free-threaded and are safe, while any methods
// in the m_d3dContext should only be used on a single thread and handled
// in the FinalizeCreateGameDeviceResources method.

D3D11_BUFFER_DESC bd;
ZeroMemory(&bd, sizeof(bd));

// Create the constant buffers.
bd.Usage = D3D11_USAGE_DEFAULT;
bd.BindFlags = D3D11_BIND_CONSTANT_BUFFER;
bd.CPUAccessFlags = 0;
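// Constant buffer sizes must be a multiple of 16 bytes, so round each
// sizeof value up to the next 16-byte boundary.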
bd.ByteWidth = (sizeof(ConstantBufferNeverChanges) + 15) / 16 * 16;
DX::ThrowIfFailed(
m_d3dDevice->CreateBuffer(&bd, nullptr, &m_constantBufferNeverChanges)
);

bd.ByteWidth = (sizeof(ConstantBufferChangeOnResize) + 15) / 16 * 16;
DX::ThrowIfFailed(
m_d3dDevice->CreateBuffer(&bd, nullptr, &m_constantBufferChangeOnResize)
);

bd.ByteWidth = (sizeof(ConstantBufferChangesEveryFrame) + 15) / 16 * 16;
DX::ThrowIfFailed(
m_d3dDevice->CreateBuffer(&bd, nullptr, &m_constantBufferChangesEveryFrame)
);

bd.ByteWidth = (sizeof(ConstantBufferChangesEveryPrim) + 15) / 16 * 16;
DX::ThrowIfFailed(
m_d3dDevice->CreateBuffer(&bd, nullptr, &m_constantBufferChangesEveryPrim)
);

D3D11_SAMPLER_DESC sampDesc;
ZeroMemory(&sampDesc, sizeof(sampDesc));

sampDesc.Filter = D3D11_FILTER_MIN_MAG_MIP_LINEAR;
sampDesc.AddressU = D3D11_TEXTURE_ADDRESS_WRAP;
sampDesc.AddressV = D3D11_TEXTURE_ADDRESS_WRAP;
sampDesc.AddressW = D3D11_TEXTURE_ADDRESS_WRAP;
sampDesc.ComparisonFunc = D3D11_COMPARISON_NEVER;
sampDesc.MinLOD = 0;
sampDesc.MaxLOD = FLT_MAX;
DX::ThrowIfFailed(
m_d3dDevice->CreateSamplerState(&sampDesc, &m_samplerLinear)
);

// Start the async tasks to load the shaders and textures.
BasicLoader^ loader = ref new BasicLoader(m_d3dDevice.Get());

std::vector<task<void>> tasks;
uint32 numElements = ARRAYSIZE(PNTVertexLayout);
tasks.push_back(loader->LoadShaderAsync("VertexShader.cso", PNTVertexLayout, numElements, &m_vertexShader, &m_vertexLayout));
tasks.push_back(loader->LoadShaderAsync("VertexShaderFlat.cso", nullptr, numElements, &m_vertexShaderFlat, nullptr));
tasks.push_back(loader->LoadShaderAsync("PixelShader.cso", &m_pixelShader));
tasks.push_back(loader->LoadShaderAsync("PixelShaderFlat.cso", &m_pixelShaderFlat));

// Make sure any previous versions of the textures are released.
m_sphereTexture = nullptr;
m_cylinderTexture = nullptr;
m_ceilingTexture = nullptr;
m_floorTexture = nullptr;
m_wallsTexture = nullptr;

// Load game-specific textures.
tasks.push_back(loader->LoadTextureAsync("seafloor.dds", nullptr, &m_sphereTexture));
tasks.push_back(loader->LoadTextureAsync("metal_texture.dds", nullptr, &m_cylinderTexture));
tasks.push_back(loader->LoadTextureAsync("cellceiling.dds", nullptr, &m_ceilingTexture));
tasks.push_back(loader->LoadTextureAsync("cellfloor.dds", nullptr, &m_floorTexture));
tasks.push_back(loader->LoadTextureAsync("cellwall.dds", nullptr, &m_wallsTexture));
tasks.push_back(create_task([]()
{
// Simulate loading additional resources.
wait(GameConstants::InitialLoadingDelay);
}));

// Return the task group of all the async tasks for loading the shader and texture assets.
return when_all(tasks.begin(), tasks.end());
}

void GameRenderer::FinalizeCreateGameDeviceResources()
{
// All asynchronously loaded resources have completed loading.
// Now, associate all the resources with the appropriate
// Game objects.
// This method is expected to run in the same thread as the GameRenderer
// was created. All work will happen behind the "Loading ..." screen after the
// main loop has been entered.

// Initialize the constant buffer with the light positions.
// These are handled here to ensure that the d3dContext is only
// used in one thread.

ConstantBufferNeverChanges constantBufferNeverChanges;
constantBufferNeverChanges.lightPosition[0] = XMFLOAT4( 3.5f, 2.5f, 5.5f, 1.0f);
constantBufferNeverChanges.lightPosition[1] = XMFLOAT4( 3.5f, 2.5f, -5.5f, 1.0f);
constantBufferNeverChanges.lightPosition[2] = XMFLOAT4(-3.5f, 2.5f, -5.5f, 1.0f);
constantBufferNeverChanges.lightPosition[3] = XMFLOAT4( 3.5f, 2.5f, 5.5f, 1.0f);
constantBufferNeverChanges.lightColor = XMFLOAT4(0.25f, 0.25f, 0.25f, 1.0f);
m_d3dContext->UpdateSubresource(m_constantBufferNeverChanges.Get(), 0, nullptr, &constantBufferNeverChanges, 0, 0);

// For the targets, there are two unique generated textures.
// Each texture image includes the number of the texture.
// Make sure the 2-D rendering is occurring on the same thread
// as the main rendering.

TargetTexture^ textureGenerator = ref new TargetTexture(
m_d3dDevice.Get(),
m_d2dFactory.Get(),
m_dwriteFactory.Get(),
m_d2dContext.Get()
);

MeshObject^ cylinderMesh = ref new CylinderMesh(m_d3dDevice.Get(), 26);
MeshObject^ targetMesh = ref new FaceMesh(m_d3dDevice.Get());
MeshObject^ sphereMesh = ref new SphereMesh(m_d3dDevice.Get(), 26);

Material^ cylinderMaterial = ref new Material(
XMFLOAT4(0.8f, 0.8f, 0.8f, .5f),
XMFLOAT4(0.8f, 0.8f, 0.8f, .5f),
XMFLOAT4(1.0f, 1.0f, 1.0f, 1.0f),
15.0f,
m_cylinderTexture.Get(),
m_vertexShader.Get(),
m_pixelShader.Get()
);

Material^ sphereMaterial = ref new Material(
XMFLOAT4(0.8f, 0.4f, 0.0f, 1.0f),
XMFLOAT4(0.8f, 0.4f, 0.0f, 1.0f),
XMFLOAT4(1.0f, 1.0f, 1.0f, 1.0f),
50.0f,
m_sphereTexture.Get(),
m_vertexShader.Get(),
m_pixelShader.Get()
);

auto objects = m_game->RenderObjects();

// Attach the textures to the appropriate game objects.
for (auto object = objects.begin(); object != objects.end(); object++)
{
if ((*object)->TargetId() == GameConstants::WorldFloorId)
{
(*object)->NormalMaterial(
ref new Material(
XMFLOAT4(0.5f, 0.5f, 0.5f, 1.0f),
XMFLOAT4(0.8f, 0.8f, 0.8f, 1.0f),
XMFLOAT4(0.3f, 0.3f, 0.3f, 1.0f),
150.0f,
m_floorTexture.Get(),
m_vertexShaderFlat.Get(),
m_pixelShaderFlat.Get()
)
);
(*object)->Mesh(ref new WorldFloorMesh(m_d3dDevice.Get()));
}
else if ((*object)->TargetId() == GameConstants::WorldCeilingId)
{
(*object)->NormalMaterial(
ref new Material(
XMFLOAT4(0.5f, 0.5f, 0.5f, 1.0f),
XMFLOAT4(0.8f, 0.8f, 0.8f, 1.0f),
XMFLOAT4(0.3f, 0.3f, 0.3f, 1.0f),
150.0f,
m_ceilingTexture.Get(),
m_vertexShaderFlat.Get(),
m_pixelShaderFlat.Get()
)
);
(*object)->Mesh(ref new WorldCeilingMesh(m_d3dDevice.Get()));
}
else if ((*object)->TargetId() == GameConstants::WorldWallsId)
{
(*object)->NormalMaterial(
ref new Material(
XMFLOAT4(0.5f, 0.5f, 0.5f, 1.0f),
XMFLOAT4(0.8f, 0.8f, 0.8f, 1.0f),
XMFLOAT4(0.3f, 0.3f, 0.3f, 1.0f),
150.0f,
m_wallsTexture.Get(),
m_vertexShaderFlat.Get(),
m_pixelShaderFlat.Get()
)
);
(*object)->Mesh(ref new WorldWallsMesh(m_d3dDevice.Get()));
}
else if (Cylinder^ cylinder = dynamic_cast<Cylinder^>(*object))
{
cylinder->Mesh(cylinderMesh);
cylinder->NormalMaterial(cylinderMaterial);
}
else if (Face^ target = dynamic_cast<Face^>(*object))
{
const int bufferLength = 16;
char16 str[bufferLength];
int len = swprintf_s(str, bufferLength, L"%d", target->TargetId());
Platform::String^ string = ref new Platform::String(str, len);

ComPtr<ID3D11ShaderResourceView> texture;
textureGenerator->CreateTextureResourceView(string, &texture);
target->NormalMaterial(
ref new Material(
XMFLOAT4(0.8f, 0.8f, 0.8f, 0.5f),
XMFLOAT4(0.8f, 0.8f, 0.8f, 0.5f),
XMFLOAT4(0.3f, 0.3f, 0.3f, 1.0f),
5.0f,
texture.Get(),
m_vertexShader.Get(),
m_pixelShader.Get()
)
);

textureGenerator->CreateHitTextureResourceView(string, &texture);
target->HitMaterial(
ref new Material(
XMFLOAT4(0.8f, 0.8f, 0.8f, 0.5f),
XMFLOAT4(0.8f, 0.8f, 0.8f, 0.5f),
XMFLOAT4(0.3f, 0.3f, 0.3f, 1.0f),
5.0f,
texture.Get(),
m_vertexShader.Get(),
m_pixelShader.Get()
)
);

target->Mesh(targetMesh);
}
else if (Sphere^ sphere = dynamic_cast<Sphere^>(*object))
{
sphere->Mesh(sphereMesh);
sphere->NormalMaterial(sphereMaterial);
}
}

// Ensure that the camera has been initialized with the right Projection
// matrix. The camera is not created at the time the first window resize event
// occurs.
m_game->GameCamera()->SetProjParams(
XM_PI / 2, m_renderTargetSize.Width / m_renderTargetSize.Height,
0.01f,
100.0f
);

// Make sure that Projection matrix has been set in the constant buffer
// now that all the resources are loaded.
// DirectXBase is handling screen rotations directly to eliminate an unaligned
// fullscreen copy. As a result, it's necessary to post-multiply the rotationTransform3D
// matrix to the camera projection matrix.
ConstantBufferChangeOnResize changesOnResize;
XMStoreFloat4x4(
&changesOnResize.projection,
XMMatrixMultiply(
XMMatrixTranspose(m_game->GameCamera()->Projection()),
XMMatrixTranspose(XMLoadFloat4x4(&m_rotationTransform3D))
)
);

m_d3dContext->UpdateSubresource(
m_constantBufferChangeOnResize.Get(),
0,
nullptr,
&changesOnResize,
0,
0
);

m_gameResourcesLoaded = true;
}

CreateGameDeviceResourcesAsync is a method that runs as a separate set of async tasks to load the game
resources. Because it's expected to run on a separate thread, it only has access to the Direct3D 11 device methods
(those defined on ID3D11Device) and not the device context methods (the methods defined on
ID3D11DeviceContext), so it doesn't perform any rendering. The
FinalizeCreateGameDeviceResources method runs on the main thread and does have access to the Direct3D
11 device context methods.
The sequence of events for loading the game device resources proceeds as follows.
CreateGameDeviceResourcesAsync first initializes constant buffers for the primitives. Constant buffers are low-
latency, fixed-width buffers that hold the data that a shader uses during shader execution. (Think of these buffers
as passing data to the shader that is constant over the execution of the particular draw call.) In this sample, the
buffers contain the data that the shaders will use to:
Place the light sources and set their color when the renderer initializes
Compute the view matrix whenever the window is resized
Compute the projection matrix for every frame update
Compute the transformations of the primitives on every render update
Using these constants, the shaders receive the source information (vertices) and transform the vertex coordinates
and data from model space into device space. Ultimately, this data results in texel coordinates and pixels in the render target.
Next, the game renderer object creates a loader for the shaders that will perform the computation. (See
BasicLoader.cpp in the sample for the specific implementation.)
Then, CreateGameDeviceResourcesAsync initiates async tasks for loading all the texture resources into
ShaderResourceViews. These texture resources are stored in the DirectDraw Surface (DDS) textures that came
with the sample. DDS is a lossy texture format that works with DirectX Texture Compression (DXTC). We
use these textures on the walls, ceiling and floor of the world, and on the ammo spheres and pillar obstacles.
Finally, it returns a task group that contains all the async tasks created by the method. The calling function waits
for the completion of all these async tasks, and then calls FinalizeCreateGameDeviceResources.
FinalizeCreateGameDeviceResources loads the initial data into the constant buffers with a device context
method call to ID3D11DeviceContext::UpdateSubresource: m_d3dContext->UpdateSubresource. This method
creates the mesh objects for the sphere, cylinder, face, and world game objects and the associated materials. It
then walks the game object list associating the appropriate device resources with each object.
The textures for the ringed and numbered target objects are procedurally generated using the code in
TargetTexture.cpp. The renderer creates an instance of the TargetTexture type, which creates the bitmap
texture for the target objects in the game when we call the TargetTexture::CreateTextureResourceView
method. The resulting texture is composed of concentric colored rings, with a numeric value on the top. These
generated resources are associated with the appropriate target game objects.
Lastly, FinalizeCreateGameDeviceResources sets the m_gameResourcesLoaded Boolean member variable to indicate
that all resources are now loaded.
The game has the resources to display the graphics in the current window, and it can recreate those resources as
the window changes. Now, let's look at the camera used to define the player's view of the scene in that window.

Implementing the camera object


The game has the code in place to update the world in its own coordinate system (sometimes called the world
space or scene space). All objects, including the camera, are positioned and oriented in this space. In the sample
game, the camera's position along with the look vectors (the "look at" vector that points directly into the scene
from the camera, and the "look up" vector that is upwards perpendicular to it) define the camera space. The
projection parameters determine how much of that space is actually visible in the final scene; and the Field of
View (FoV), aspect ratio, and clipping planes define the projection transformation. A vertex shader does the heavy
lifting of converting from the model coordinates to device coordinates with the following algorithm (where V is a
vector and M is a matrix):
V(device) = V(model) x M(model-to-world) x M(world-to-view) x M(view-to-device) .
M(model-to-world) is a transformation matrix for model coordinates to world coordinates. This is provided by
the primitive. (We'll review this in the section on primitives, below.)
M(world-to-view) is a transformation matrix for world coordinates to view coordinates. This is provided by the
view matrix of the camera.
M(view-to-device) is a transformation matrix for view coordinates to device coordinates. This is provided by the
projection of the camera.
The shader code in VertexShader.hlsl is loaded with these vectors and matrices from the constant buffers, and
performs this transformation for every vertex.
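As a small illustrative sketch (not code from the sample), the same composition can be expressed on the CPU
with DirectXMath; vModel and the camera variable (a Camera^ instance) are assumptions for the example:

XMVECTOR vModel = XMVectorSet(1.0f, 0.0f, 0.0f, 1.0f);          // A model-space position.
XMMATRIX modelToWorld = XMMatrixTranslation(2.0f, 0.0f, -3.0f); // M(model-to-world), from the primitive.
XMMATRIX worldToView = camera->View();                          // M(world-to-view), from the camera.
XMMATRIX viewToDevice = camera->Projection();                   // M(view-to-device), from the camera.
XMVECTOR vDevice = XMVector3TransformCoord(vModel, modelToWorld * worldToView * viewToDevice);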
The Camera object defines the view and projection matrices. Let's look at how the sample game declares it.
ref class Camera
{
internal:
Camera();

// Call these from client and use Get*Matrix() to read new matrices.
// Functions to change camera matrices
void SetViewParams(_In_ DirectX::XMFLOAT3 eye, _In_ DirectX::XMFLOAT3 lookAt, _In_ DirectX::XMFLOAT3 up);
void SetProjParams(_In_ float fieldOfView, _In_ float aspectRatio, _In_ float nearPlane, _In_ float farPlane);

void LookDirection(_In_ DirectX::XMFLOAT3 lookDirection);
void Eye(_In_ DirectX::XMFLOAT3 position);

DirectX::XMMATRIX View();
DirectX::XMMATRIX Projection();
DirectX::XMMATRIX LeftEyeProjection();
DirectX::XMMATRIX RightEyeProjection();
DirectX::XMMATRIX World();
DirectX::XMFLOAT3 Eye();
DirectX::XMFLOAT3 LookAt();
DirectX::XMFLOAT3 Up();
float NearClipPlane();
float FarClipPlane();
float Pitch();
float Yaw();

protected private:
DirectX::XMFLOAT4X4 m_viewMatrix; // View matrix
DirectX::XMFLOAT4X4 m_projectionMatrix; // Projection matrix
DirectX::XMFLOAT4X4 m_projectionMatrixLeft; // Projection Matrix for Left Eye Stereo
DirectX::XMFLOAT4X4 m_projectionMatrixRight; // Projection Matrix for Right Eye Stereo

DirectX::XMFLOAT4X4 m_inverseView;

DirectX::XMFLOAT3 m_eye; // Camera eye position
DirectX::XMFLOAT3 m_lookAt; // LookAt position
DirectX::XMFLOAT3 m_up; // Up vector
float m_cameraYawAngle; // Yaw angle of camera
float m_cameraPitchAngle; // Pitch angle of camera

float m_fieldOfView; // Field of view
float m_aspectRatio; // Aspect ratio
float m_nearPlane; // Near plane
float m_farPlane; // Far plane
};

There are two 4x4 matrices that define the transformations to the view and projection coordinates,
m_viewMatrix and m_projectionMatrix. (For stereo projection, you use two projection matrices: one for each
eye's view.) They are calculated with these two methods, respectively:
SetViewParams
SetProjParams
The code for these two methods looks like this:

void Camera::SetViewParams(
_In_ XMFLOAT3 eye,
_In_ XMFLOAT3 lookAt,
_In_ XMFLOAT3 up
)
{
m_eye = eye;
m_lookAt = lookAt;
m_up = up;
// Calc the view matrix.
XMMATRIX view = XMMatrixLookAtLH(
XMLoadFloat3(&m_eye),
XMLoadFloat3(&m_lookAt),
XMLoadFloat3(&m_up)
);

XMVECTOR det;
XMMATRIX inverseView = XMMatrixInverse(&det, view);
XMStoreFloat4x4(&m_viewMatrix, view);
XMStoreFloat4x4(&m_inverseView, inverseView);

// The axis basis vectors and camera position are stored inside the
// position matrix in the four rows of the camera's world matrix.
// To figure out the yaw/pitch of the camera, we just need the Z basis vector.
XMFLOAT3 zBasis;
XMStoreFloat3(&zBasis, inverseView.r[2]);

m_cameraYawAngle = atan2f(zBasis.x, zBasis.z);

float len = sqrtf(zBasis.z * zBasis.z + zBasis.x * zBasis.x);
m_cameraPitchAngle = atan2f(zBasis.y, len);
}
//--------------------------------------------------------------------------------------
// Calculates the projection matrix based on input params
//--------------------------------------------------------------------------------------
void Camera::SetProjParams(
_In_ float fieldOfView,
_In_ float aspectRatio,
_In_ float nearPlane,
_In_ float farPlane
)
{
// Set attributes for the projection matrix
m_fieldOfView = fieldOfView;
m_aspectRatio = aspectRatio;
m_nearPlane = nearPlane;
m_farPlane = farPlane;
XMStoreFloat4x4(
&m_projectionMatrix,
XMMatrixPerspectiveFovLH(
m_fieldOfView,
m_aspectRatio,
m_nearPlane,
m_farPlane
)
);

STEREO_PARAMETERS* stereoParams = nullptr;


// Change the projection matrix.
XMStoreFloat4x4(
&m_projectionMatrixLeft,
MatrixStereoProjectionFovLH(
stereoParams,
STEREO_CHANNEL::LEFT,
m_fieldOfView,
m_aspectRatio,
m_nearPlane,
m_farPlane,
STEREO_MODE::NORMAL
)
);

XMStoreFloat4x4(
&m_projectionMatrixRight,
MatrixStereoProjectionFovLH(
stereoParams,
STEREO_CHANNEL::RIGHT,
m_fieldOfView,
m_aspectRatio,
m_nearPlane,
m_farPlane,
STEREO_MODE::NORMAL
)
);
}
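
For illustration, a hypothetical client of the Camera object might configure it like this. The names m_camera
and aspectRatio are assumptions, not taken from the sample; the projection values match the ones the sample
passes later in GameRenderer.

m_camera->SetViewParams(
XMFLOAT3(0.0f, 1.5f, -10.0f), // eye position
XMFLOAT3(0.0f, 0.0f, 0.0f), // look at position
XMFLOAT3(0.0f, 1.0f, 0.0f) // up vector
);
m_camera->SetProjParams(XM_PI / 2, aspectRatio, 0.01f, 100.0f);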

We get the resulting view and projection data by calling the View and Projection methods, respectively, on the
Camera object. These calls occur in the next step we review, the GameRenderer::Render method called in the
game loop.
Now, let's look at how the game creates the framework to draw our game graphics using the camera. This
includes defining the primitives that comprise the game world and its elements.

Defining the primitives


In the game sample code, we define and implement the primitives in two base classes and the corresponding
specializations for each primitive type.
MeshObject.h/.cpp defines the base class for all mesh objects. The SphereMesh.h/.cpp,
CylinderMesh.h/.cpp, FaceMesh.h/.cpp, and WorldMesh.h/.cpp files contain the code that populates the
constant buffers for each primitive with the vertex and vertex normal data that defines the primitive's geometry.
These code files are a good place to start if you're looking to understand how to create Direct3D primitives in
your own game app, but we won't cover them here because they're too specific to this game's implementation. For now, we
assume that the vertex buffers for each primitive have been populated, and look at how the game sample handles
those buffers to update the game itself.
The base class for objects that represent the primitives from the perspective of the game is defined in
GameObject.h/.cpp. This class, GameObject, defines the fields and methods for the common behaviors across
all primitives. Each primitive object type derives from it. Let's look at how it's defined:

ref class GameObject


{
internal:
GameObject();

// Expect these two functions to be overloaded by subclasses.


virtual bool IsTouching(
DirectX::XMFLOAT3 /* point */,
float /* radius */,
_Out_ DirectX::XMFLOAT3 *contact,
_Out_ DirectX::XMFLOAT3 *normal
)
{
*contact = DirectX::XMFLOAT3(0.0f, 0.0f, 0.0f);
*normal = DirectX::XMFLOAT3(0.0f, 0.0f, 1.0f);
return false;
};

void Render(
_In_ ID3D11DeviceContext *context,
_In_ ID3D11Buffer *primitiveConstantBuffer
);

void Active(bool active);


bool Active();
void Target(bool target);
bool Target();
void Hit(bool hit);
bool Hit();
void OnGround(bool ground);
bool OnGround();
void TargetId(int targetId);
int TargetId();
void HitTime(float t);
float HitTime();

void AnimatePosition(_In_opt_ Animate^ animate);


Animate^ AnimatePosition();

void HitSound(_In_ SoundEffect^ hitSound);


SoundEffect^ HitSound();

void PlaySound(float impactSpeed, DirectX::XMFLOAT3 eyePoint);

void Mesh(_In_ MeshObject^ mesh);

void NormalMaterial(_In_ Material^ material);


Material^ NormalMaterial();
void HitMaterial(_In_ Material^ material);
Material^ HitMaterial();

void Position(DirectX::XMFLOAT3 position);


void Position(DirectX::XMVECTOR position);
void Velocity(DirectX::XMFLOAT3 velocity);
void Velocity(DirectX::XMVECTOR velocity);
DirectX::XMMATRIX ModelMatrix();
DirectX::XMFLOAT3 Position();
DirectX::XMVECTOR VectorPosition();
DirectX::XMVECTOR VectorVelocity();
DirectX::XMFLOAT3 Velocity();

protected private:
virtual void UpdatePosition() {};
// Object Data
bool m_active;
bool m_target;
int m_targetId;
bool m_hit;
bool m_ground;

DirectX::XMFLOAT3 m_position;
DirectX::XMFLOAT3 m_velocity;
DirectX::XMFLOAT4X4 m_modelMatrix;

Material^ m_normalMaterial;
Material^ m_hitMaterial;

DirectX::XMFLOAT3 m_defaultXAxis;
DirectX::XMFLOAT3 m_defaultYAxis;
DirectX::XMFLOAT3 m_defaultZAxis;

float m_hitTime;

Animate^ m_animatePosition;
MeshObject^ m_mesh;

SoundEffect^ m_hitSound;
};

Most of the fields contain data about the state, visual properties, or position of the primitive in the game world.
There are a few methods in particular that are necessary in most games:
Mesh. Gets the mesh geometry for the primitive, which is stored in m_mesh. This geometry is defined in
MeshObject.h/.cpp.
IsTouching. This method determines if the primitive is within a specific distance of a point, and returns the
point on the surface closest to the point and the normal to the surface of the object at that point. Because the
sample is only concerned with ammo-primitive collisions, this is enough for the game's dynamics. It is not a
general purpose primitive-primitive intersection function, although it could be used as the basis for one.
AnimatePosition. Updates the movement and animation for the primitive.
UpdatePosition. Updates the position of the object in the world coordinate space.
Render. Puts the material properties of the primitive into the primitive constant buffer and then renders
(draws) the primitive geometry using the device context.
It's a good practice to create a base object type that defines the minimum set of methods for a primitive because
most games have a very large number of primitives, and the code can quickly become difficult to manage. It also
simplifies game code when the update loop can treat the primitives polymorphically, letting the objects
themselves define their own update and rendering behaviors.
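As an example, here is a minimal sketch (not from the sample) of how a new primitive type could derive from
GameObject. The hypothetical Marker type only overrides UpdatePosition, so that its model matrix tracks the
position whenever one of the Position setters is called.

ref class Marker sealed : public GameObject
{
internal:
Marker() {}

protected private:
virtual void UpdatePosition() override
{
// Rebuild the world transform so that ModelMatrix() returns
// up-to-date data on the next Render call.
DirectX::XMStoreFloat4x4(
&m_modelMatrix,
DirectX::XMMatrixTranslation(m_position.x, m_position.y, m_position.z)
);
}
};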
Let's look at the basic rendering of a primitive in the game sample.

Rendering the primitives


The primitives in the game sample use the base Render method implemented on the parent GameObject class,
shown here:

void GameObject::Render(
_In_ ID3D11DeviceContext *context,
_In_ ID3D11Buffer *primitiveConstantBuffer
)
{
if (!m_active || (m_mesh == nullptr) || (m_normalMaterial == nullptr))
{
return;
}

ConstantBufferChangesEveryPrim constantBuffer;

XMStoreFloat4x4(
&constantBuffer.worldMatrix,
XMMatrixTranspose(ModelMatrix())
);

if (m_hit && m_hitMaterial != nullptr)


{
m_hitMaterial->RenderSetup(context, &constantBuffer);
}
else
{
m_normalMaterial->RenderSetup(context, &constantBuffer);
}
context->UpdateSubresource(primitiveConstantBuffer, 0, nullptr, &constantBuffer, 0, 0);

m_mesh->Render(context);
}

The GameObject::Render method updates the primitive constant buffer with the data specific to a given
primitive. The game uses multiple constant buffers, but only needs to update these buffers one time per primitive.
Think of the constant buffers as input to the shaders that run for each primitive. Some data is static
(m_constantBufferNeverChanges); some data is constant over the frame
(m_constantBufferChangesEveryFrame), like the position of the camera; and some data is specific to the
primitive, like its color and textures (m_constantBufferChangesEveryPrim). The game renderer separates these
inputs into different constant buffers to optimize the memory bandwidth that the CPU and GPU use. This
approach also helps to minimize the amount of data the GPU needs to keep track of. Remember, the GPU has a
big queue of commands, and each time the game calls Draw, that command is queued along with the data
associated with it. When the game updates the primitive constant buffer and issues the next Draw command, the
graphics driver adds this next command and the associated data to the queue. If the game draws 100 primitives,
it could potentially have 100 copies of the constant buffer data in the queue. We want to minimize the amount of
data the game is sending to the GPU, so the game uses a separate primitive constant buffer that only contains the
updates for each primitive.
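For reference, the C++-side struct must mirror the HLSL cbuffer layout. The sketch below shows what
ConstantBufferChangesEveryPrim looks like; the actual definition lives in ConstantBuffers.h, which isn't listed
here, and the field order is an assumption based on the b3 cbuffer in ConstantBuffers.hlsli shown later.

struct ConstantBufferChangesEveryPrim
{
DirectX::XMFLOAT4X4 worldMatrix; // matrix world
DirectX::XMFLOAT4 meshColor; // float4 meshColor
DirectX::XMFLOAT4 diffuseColor; // float4 diffuseColor
DirectX::XMFLOAT4 specularColor; // float4 specularColor
float specularPower; // float specularExponent
};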
If a collision (a hit) is detected, GameObject::Render checks the current context, which indicates whether the
target has been hit by an ammo sphere. If the target has been hit, this method applies a hit material, which
reverses the colors of the rings of the target to indicate a successful hit to the player. Otherwise, it applies the
default material with the same method. In both cases, it sets the material by calling Material::RenderSetup,
which sets the appropriate constants into the constant buffer. Then, it calls
ID3D11DeviceContext::PSSetShaderResources to set the corresponding texture resource for the pixel shader,
and ID3D11DeviceContext::VSSetShader and ID3D11DeviceContext::PSSetShader to set the vertex shader
and pixel shader objects themselves, respectively.
Here's how Material::RenderSetup configures the constant buffers and assigns the shader resources. Again,
note that the constant buffer is the one used for updating changes to primitives, specifically.

Note The Material class is defined in Material.h/.cpp.

void Material::RenderSetup(
_In_ ID3D11DeviceContext* context,
_Inout_ ConstantBufferChangesEveryPrim* constantBuffer
)
{
constantBuffer->meshColor = m_meshColor;
constantBuffer->specularColor = m_specularColor;
constantBuffer->specularPower = m_specularExponent;
constantBuffer->diffuseColor = m_diffuseColor;

context->PSSetShaderResources(0, 1, m_textureRV.GetAddressOf());
context->VSSetShader(m_vertexShader.Get(), nullptr, 0);
context->PSSetShader(m_pixelShader.Get(), nullptr, 0);
}
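
For illustration, this is how the sample constructs a Material later, in FinalizeCreateGameDeviceResources.
The comments labeling each argument are assumptions based on how RenderSetup uses the corresponding fields.

Material^ cylinderMaterial = ref new Material(
XMFLOAT4(0.8f, 0.8f, 0.8f, .5f), // mesh color
XMFLOAT4(0.8f, 0.8f, 0.8f, .5f), // diffuse color
XMFLOAT4(1.0f, 1.0f, 1.0f, 1.0f), // specular color
15.0f, // specular exponent
m_cylinderTexture.Get(),
m_vertexShader.Get(),
m_pixelShader.Get()
);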

Finally, GameObject::Render calls the Render method on the underlying MeshObject object.

void MeshObject::Render(_In_ ID3D11DeviceContext *context)


{
uint32 stride = sizeof(PNTVertex);
uint32 offset = 0;

context->IASetVertexBuffers(0, 1, m_vertexBuffer.GetAddressOf(), &stride, &offset);


context->IASetIndexBuffer(m_indexBuffer.Get(), DXGI_FORMAT_R16_UINT, 0);
context->IASetPrimitiveTopology(D3D11_PRIMITIVE_TOPOLOGY_TRIANGLELIST);
context->DrawIndexed(m_indexCount, 0, 0);
}
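
The PNTVertex used for the stride above is the per-vertex layout shared by all the meshes: position, normal,
and texture coordinates. A sketch of its shape (the actual definition is in MeshObject.h, not listed here):

struct PNTVertex
{
DirectX::XMFLOAT3 position; // POSITION
DirectX::XMFLOAT3 normal; // NORMAL
DirectX::XMFLOAT2 textureCoordinate; // TEXCOORD0
};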

Now, the game sample's MeshObject::Render method queues the drawing command to execute the shaders on
the GPU using the current scene state. The vertex shader converts the geometry (vertices) from model
coordinates into device coordinates, taking into account where the camera is and the perspective
transformation. Lastly, the pixel shader renders the transformed triangles into the back buffer using the texture
set above.
This is the actual rendering process!

Creating the vertex and pixel shaders


At this point, the game sample has defined the primitives to draw and the constant buffers that define their
rendering. These constant buffers serve as the sets of parameters to the shaders that run on the graphics device.
These shader programs come in two types:
Vertex shaders perform per-vertex operations, such as vertex transformations and lighting.
Pixel (or fragment) shaders perform per-pixel operations, such as texturing and per-pixel lighting. They can
also be used to perform post-processing effects on bitmaps, such as the final render target.
The shader code is defined using High-Level Shader Language (HLSL), which, in Direct3D 11, is compiled from a
program created with a C-like syntax. (See the HLSL reference documentation for the complete syntax.) The two
principal shaders for the sample game are defined in PixelShader.hlsl and VertexShader.hlsl. (There are also two
"low power" shaders defined for low-power devices: PixelShaderFlat.hlsl and VertexShaderFlat.hlsl. These two
shaders provide reduced effects, such as a lack of specular highlights on textured surfaces.) Finally, there is an
.hlsli file that contains the format of the constant buffers, ConstantBuffers.hlsli.
ConstantBuffers.hlsli is defined like this:
Texture2D diffuseTexture : register(t0);
SamplerState linearSampler : register(s0);

cbuffer ConstantBufferNeverChanges : register(b0)


{
float4 lightPosition[4];
float4 lightColor;
}

cbuffer ConstantBufferChangeOnResize : register(b1)


{
matrix projection;
};

cbuffer ConstantBufferChangesEveryFrame : register(b2)


{
matrix view;
};

cbuffer ConstantBufferChangesEveryPrim : register (b3)


{
matrix world;
float4 meshColor;
float4 diffuseColor;
float4 specularColor;
float specularExponent;
};

struct VertextShaderInput
{
float4 position : POSITION;
float4 normal : NORMAL;
float2 textureUV : TEXCOORD0;
};

struct PixelShaderInput
{
float4 position : SV_POSITION;
float2 textureUV : TEXCOORD0;
float3 vertexToEye : TEXCOORD1;
float3 normal : TEXCOORD2;
float3 vertexToLight0 : TEXCOORD3;
float3 vertexToLight1 : TEXCOORD4;
float3 vertexToLight2 : TEXCOORD5;
float3 vertexToLight3 : TEXCOORD6;
};

struct PixelShaderFlatInput
{
float4 position : SV_POSITION;
float2 textureUV : TEXCOORD0;
float4 diffuseColor : TEXCOORD1;
};
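
Note that the register assignments in this file (b0 through b3) correspond to the slot numbers the renderer
passes to VSSetConstantBuffers and PSSetConstantBuffers. From GameRenderer::Render, shown later:

m_d3dContext->VSSetConstantBuffers(0, 1, m_constantBufferNeverChanges.GetAddressOf()); // b0
m_d3dContext->VSSetConstantBuffers(1, 1, m_constantBufferChangeOnResize.GetAddressOf()); // b1
m_d3dContext->VSSetConstantBuffers(2, 1, m_constantBufferChangesEveryFrame.GetAddressOf()); // b2
m_d3dContext->VSSetConstantBuffers(3, 1, m_constantBufferChangesEveryPrim.GetAddressOf()); // b3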

VertexShader.hlsl is defined like this:


#include "ConstantBuffers.hlsli"

PixelShaderInput main(VertextShaderInput input)


{
PixelShaderInput output = (PixelShaderInput)0;

output.position = mul(mul(mul(input.position, world), view), projection);


output.textureUV = input.textureUV;

// compute view space normal


output.normal = normalize (mul(mul(input.normal.xyz, (float3x3)world), (float3x3)view));

// Vertex pos in view space (normalize in pixel shader)


output.vertexToEye = -mul(mul(input.position, world), view).xyz;

// Compute view space vertex to light vectors (normalized)


output.vertexToLight0 = normalize(mul(lightPosition[0], view ).xyz + output.vertexToEye);
output.vertexToLight1 = normalize(mul(lightPosition[1], view ).xyz + output.vertexToEye);
output.vertexToLight2 = normalize(mul(lightPosition[2], view ).xyz + output.vertexToEye);
output.vertexToLight3 = normalize(mul(lightPosition[3], view ).xyz + output.vertexToEye);

return output;
}

The main function in VertexShader.hlsl performs the vertex transformation sequence we discussed in the
camera section. It's run one time per vertex. The resultant outputs are passed to the pixel shader code for
texturing and material effects.
PixelShader.hlsl

#include "ConstantBuffers.hlsli"

float4 main(PixelShaderInput input) : SV_Target


{
float diffuseLuminance =
max(0.0f, dot(input.normal, input.vertexToLight0)) +
max(0.0f, dot(input.normal, input.vertexToLight1)) +
max(0.0f, dot(input.normal, input.vertexToLight2)) +
max(0.0f, dot(input.normal, input.vertexToLight3));

// Normalize view space vertex-to-eye


input.vertexToEye = normalize(input.vertexToEye);

float specularLuminance =
pow(max(0.0f, dot(input.normal, normalize(input.vertexToEye + input.vertexToLight0))), specularExponent) +
pow(max(0.0f, dot(input.normal, normalize(input.vertexToEye + input.vertexToLight1))), specularExponent) +
pow(max(0.0f, dot(input.normal, normalize(input.vertexToEye + input.vertexToLight2))), specularExponent) +
pow(max(0.0f, dot(input.normal, normalize(input.vertexToEye + input.vertexToLight3))), specularExponent);

float4 specular;
specular = specularColor * specularLuminance * 0.5f;

return diffuseTexture.Sample(linearSampler, input.textureUV) * diffuseColor * diffuseLuminance * 0.5f + specular;


}

The main function in PixelShader.hlsl takes the 2-D projections of the triangle surfaces for each primitive in the
scene, and computes the color value for each pixel of the visible surfaces based on the textures and effects (in this
case, specular lighting) applied to them.
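This is a Blinn-Phong-style lighting model. Writing N for the interpolated surface normal, L(i) for the
normalized vertex-to-light vectors, V for the normalized vertex-to-eye vector, and H(i) = normalize(V + L(i))
for the half vector, the shader computes:
diffuseLuminance = sum over the four lights of max(0, N . L(i))
specularLuminance = sum over the four lights of max(0, N . H(i))^specularExponent
The returned color is then the diffuseTexture sample x diffuseColor x diffuseLuminance x 0.5 + specularColor x
specularLuminance x 0.5, matching the code above.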
Now, let's bring all these ideas (primitives, camera, and shaders) together, and see how the sample game builds
the complete rendering process.
Rendering the frame for output
We briefly discussed this method in Defining the main game object. Now, let's look at it in a little more detail.

void GameRenderer::Render()
{
int renderingPasses = 1;
if (m_stereoEnabled)
{
renderingPasses = 2;
}

for (int i = 0; i < renderingPasses; i++)


{
if (m_stereoEnabled && i > 0)
{
// Doing the Right Eye View
m_d3dContext->OMSetRenderTargets(1, m_renderTargetViewRight.GetAddressOf(), m_depthStencilView.Get());
m_d3dContext->ClearDepthStencilView(m_depthStencilView.Get(), D3D11_CLEAR_DEPTH, 1.0f, 0);
m_d2dContext->SetTarget(m_d2dTargetBitmapRight.Get());
}
else
{
// Doing the Mono or Left Eye View
m_d3dContext->OMSetRenderTargets(1, m_renderTargetView.GetAddressOf(), m_depthStencilView.Get());
m_d3dContext->ClearDepthStencilView(m_depthStencilView.Get(), D3D11_CLEAR_DEPTH, 1.0f, 0);
m_d2dContext->SetTarget(m_d2dTargetBitmap.Get());
}

if (m_game != nullptr && m_gameResourcesLoaded && m_levelResourcesLoaded)


{
// This section is only used after the game state has been initialized and all device
// resources needed for the game have been created and associated with the game objects.
if (m_stereoEnabled)
{
ConstantBufferChangeOnResize changesOnResize;
XMStoreFloat4x4(
&changesOnResize.projection,
XMMatrixMultiply(
XMMatrixTranspose(
i == 0 ?
m_game->GameCamera()->LeftEyeProjection() :
m_game->GameCamera()->RightEyeProjection()
),
XMMatrixTranspose(XMLoadFloat4x4(&m_rotationTransform3D))
)
);

m_d3dContext->UpdateSubresource(
m_constantBufferChangeOnResize.Get(),
0,
nullptr,
&changesOnResize,
0,
0
);
}
// Update variables that change one time per frame

ConstantBufferChangesEveryFrame constantBufferChangesEveryFrame;
XMStoreFloat4x4(
&constantBufferChangesEveryFrame.view,
XMMatrixTranspose(m_game->GameCamera()->View())
);
m_d3dContext->UpdateSubresource(
m_constantBufferChangesEveryFrame.Get(),
0,
nullptr,
&constantBufferChangesEveryFrame,
0,
0
);

// Setup Pipeline

m_d3dContext->IASetInputLayout(m_vertexLayout.Get());
m_d3dContext->VSSetConstantBuffers(0, 1, m_constantBufferNeverChanges.GetAddressOf());
m_d3dContext->VSSetConstantBuffers(1, 1, m_constantBufferChangeOnResize.GetAddressOf());
m_d3dContext->VSSetConstantBuffers(2, 1, m_constantBufferChangesEveryFrame.GetAddressOf());
m_d3dContext->VSSetConstantBuffers(3, 1, m_constantBufferChangesEveryPrim.GetAddressOf());

m_d3dContext->PSSetConstantBuffers(2, 1, m_constantBufferChangesEveryFrame.GetAddressOf());
m_d3dContext->PSSetConstantBuffers(3, 1, m_constantBufferChangesEveryPrim.GetAddressOf());
m_d3dContext->PSSetSamplers(0, 1, m_samplerLinear.GetAddressOf());

// Get all the objects to render from the Game state.
auto objects = m_game->RenderObjects();
for (auto object = objects.begin(); object != objects.end(); object++)
{
(*object)->Render(m_d3dContext.Get(), m_constantBufferChangesEveryPrim.Get());
}
}
else
{
const float ClearColor[4] = {0.1f, 0.1f, 0.1f, 1.0f};

// Only need to clear the background when not rendering the full 3-D scene because
// the 3-D world is a fully enclosed box and the dynamics prevents the camera from
// moving outside this space.
if (m_stereoEnabled && i > 0)
{
// Doing the Right Eye View
m_d3dContext->ClearRenderTargetView(m_renderTargetViewRight.Get(), ClearColor);
}
else
{
// Doing the Mono or Left Eye View
m_d3dContext->ClearRenderTargetView(m_renderTargetView.Get(), ClearColor);
}
}

m_d2dContext->BeginDraw();

// To handle the swapchain being pre-rotated, set the D2D transformation to include it.
m_d2dContext->SetTransform(m_rotationTransform2D);

if (m_game != nullptr && m_gameResourcesLoaded)


{
// This is only used after the game state has been initialized.
m_gameHud->Render(m_game, m_d2dContext.Get(), m_windowBounds);
}

if (m_gameInfoOverlay->Visible())
{
m_d2dContext->DrawBitmap(
m_gameInfoOverlay->Bitmap(),
D2D1::RectF(
(m_windowBounds.Width - GameInfoOverlayConstant::Width)/2.0f,
(m_windowBounds.Height - GameInfoOverlayConstant::Height)/2.0f,
(m_windowBounds.Width - GameInfoOverlayConstant::Width)/2.0f + GameInfoOverlayConstant::Width,
(m_windowBounds.Height - GameInfoOverlayConstant::Height)/2.0f + GameInfoOverlayConstant::Height
)
);
}

HRESULT hr = m_d2dContext->EndDraw();
if (hr != D2DERR_RECREATE_TARGET)
{
// The D2DERR_RECREATE_TARGET indicates there has been a problem with the underlying
// D3D device. All subsequent rendering will be ignored until the device is recreated.
// This error will be propagated and the appropriate D3D error will be returned from the
// swapchain->Present(...) call. At that point, the sample will recreate the device
// and all associated resources. As a result the D2DERR_RECREATE_TARGET doesn't
// need to be handled here.
DX::ThrowIfFailed(hr);
}
}
Present();
}

The game has all the pieces to assemble a view for output: primitives and the rules for their behavior, a camera
object to provide the player's view of the game world, and the graphics resources for drawing.
Now, let's look at the process that brings it all together.
1. If stereo 3D is enabled, set the following rendering process to run two times, one time for each eye.
2. The whole scene is enclosed in a bounding world volume, so draw every pixel (even those we don't need) to
clear the color planes of the render target. Set the depth stencil buffer to the default value.
3. Update the constant buffer for frame update data by using the camera's view matrix and data.
4. Set up the Direct3D context to use the four constant buffers that were defined earlier.
5. Call the Render method on each primitive object. This results in a Draw or DrawIndexed call on the context
to draw the geometry of each primitive. Specifically, this Draw call queues commands and data to the
GPU, as parameterized by the constant buffer data. Each draw call executes the vertex shader one time per
vertex, and then the pixel shader one time for every pixel of each triangle in the primitive. The textures are part
of the state that the pixel shader uses to do the rendering.
6. Draw the HUD and the overlay using the Direct2D context.
7. Call DirectXBase::Present.
And the game has updated the display! Altogether, this is the basic process for implementing the graphics
framework of a game. Of course, the larger your game, the more abstractions you must put in place to handle
that complexity, such as entire hierarchies of object types and animation behaviors, and more complex methods
for loading and managing assets such as meshes and textures.
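As a minimal sketch (illustrative, not the sample's actual code), the mono path of this process condenses to the
following. Member names mirror the sample's, and steps 1, 4, and 6 are omitted for brevity.

// 2. Bind and clear the render target and the depth stencil buffer.
m_d3dContext->OMSetRenderTargets(1, m_renderTargetView.GetAddressOf(), m_depthStencilView.Get());
m_d3dContext->ClearDepthStencilView(m_depthStencilView.Get(), D3D11_CLEAR_DEPTH, 1.0f, 0);

// 3. Refresh the per-frame constant buffer from the camera's view matrix.
ConstantBufferChangesEveryFrame constantBufferChangesEveryFrame;
XMStoreFloat4x4(&constantBufferChangesEveryFrame.view, XMMatrixTranspose(m_game->GameCamera()->View()));
m_d3dContext->UpdateSubresource(m_constantBufferChangesEveryFrame.Get(), 0, nullptr, &constantBufferChangesEveryFrame, 0, 0);

// 5. Let each primitive queue its own draw call.
auto objects = m_game->RenderObjects();
for (auto object = objects.begin(); object != objects.end(); object++)
{
(*object)->Render(m_d3dContext.Get(), m_constantBufferChangesEveryPrim.Get());
}

// 7. Present the back buffer to the display.
Present();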

Next steps
Moving forward, let's look at a few important parts of the game sample that we've only discussed in passing: the
user interface overlay, the input controls, and the sound.

Complete sample code for this section


Camera.h
//// THIS CODE AND INFORMATION IS PROVIDED "AS IS" WITHOUT WARRANTY OF
//// ANY KIND, EITHER EXPRESSED OR IMPLIED, INCLUDING BUT NOT LIMITED TO
//// THE IMPLIED WARRANTIES OF MERCHANTABILITY AND/OR FITNESS FOR A
//// PARTICULAR PURPOSE.
////
//// Copyright (c) Microsoft Corporation. All rights reserved

#pragma once

ref class Camera


{
internal:
Camera();

// Call these from client and use Get*Matrix() to read new matrices.
// Functions to change camera matrices.
void SetViewParams(_In_ DirectX::XMFLOAT3 eye, _In_ DirectX::XMFLOAT3 lookAt, _In_ DirectX::XMFLOAT3 up);
void SetProjParams(_In_ float fieldOfView, _In_ float aspectRatio, _In_ float nearPlane, _In_ float farPlane);

void LookDirection (_In_ DirectX::XMFLOAT3 lookDirection);


void Eye (_In_ DirectX::XMFLOAT3 position);

DirectX::XMMATRIX View();
DirectX::XMMATRIX Projection();
DirectX::XMMATRIX LeftEyeProjection();
DirectX::XMMATRIX RightEyeProjection();
DirectX::XMMATRIX World();
DirectX::XMFLOAT3 Eye();
DirectX::XMFLOAT3 LookAt();
DirectX::XMFLOAT3 Up();
float NearClipPlane();
float FarClipPlane();
float Pitch();
float Yaw();

protected private:
DirectX::XMFLOAT4X4 m_viewMatrix; // View matrix
DirectX::XMFLOAT4X4 m_projectionMatrix; // Projection matrix
DirectX::XMFLOAT4X4 m_projectionMatrixLeft; // Projection Matrix for Left Eye Stereo
DirectX::XMFLOAT4X4 m_projectionMatrixRight; // Projection Matrix for Right Eye Stereo

DirectX::XMFLOAT4X4 m_inverseView;

DirectX::XMFLOAT3 m_eye; // Camera eye position


DirectX::XMFLOAT3 m_lookAt; // LookAt position
DirectX::XMFLOAT3 m_up; // Up vector
float m_cameraYawAngle; // Yaw angle of camera
float m_cameraPitchAngle; // Pitch angle of camera

float m_fieldOfView; // Field of view


float m_aspectRatio; // Aspect ratio
float m_nearPlane; // Near plane
float m_farPlane; // Far plane
};

Camera.cpp

//// THIS CODE AND INFORMATION IS PROVIDED "AS IS" WITHOUT WARRANTY OF
//// ANY KIND, EITHER EXPRESSED OR IMPLIED, INCLUDING BUT NOT LIMITED TO
//// THE IMPLIED WARRANTIES OF MERCHANTABILITY AND/OR FITNESS FOR A
//// PARTICULAR PURPOSE.
////
//// Copyright (c) Microsoft Corporation. All rights reserved

#include "pch.h"
#include "Camera.h"
#include "StereoProjection.h"

using namespace DirectX;

#undef min // use __min instead


#undef max // use __max instead
//--------------------------------------------------------------------------------------
// Constructor
//--------------------------------------------------------------------------------------
Camera::Camera()
{
// Setup the view matrix.
SetViewParams(
XMFLOAT3(0.0f, 0.0f, 0.0f), // default eye position.
XMFLOAT3(0.0f, 0.0f, 1.0f), // default look at position.
XMFLOAT3(0.0f, 1.0f, 0.0f) // default up vector.
);

// Setup the projection matrix.


SetProjParams(XM_PI / 4, 1.0f, 1.0f, 1000.0f);
}
//--------------------------------------------------------------------------------------
void Camera::LookDirection (_In_ XMFLOAT3 lookDirection)
{
XMFLOAT3 lookAt;
lookAt.x = m_eye.x + lookDirection.x;
lookAt.y = m_eye.y + lookDirection.y;
lookAt.z = m_eye.z + lookDirection.z;

SetViewParams(m_eye, lookAt, m_up);


}
//--------------------------------------------------------------------------------------
void Camera::Eye(_In_ XMFLOAT3 eye)
{
SetViewParams(eye, m_lookAt, m_up);
}
//--------------------------------------------------------------------------------------
void Camera::SetViewParams(
_In_ XMFLOAT3 eye,
_In_ XMFLOAT3 lookAt,
_In_ XMFLOAT3 up
)
{
m_eye = eye;
m_lookAt = lookAt;
m_up = up;

// Calculate the view matrix.


XMMATRIX view = XMMatrixLookAtLH(
XMLoadFloat3(&m_eye),
XMLoadFloat3(&m_lookAt),
XMLoadFloat3(&m_up)
);

XMVECTOR det;
XMMATRIX inverseView = XMMatrixInverse(&det, view);
XMStoreFloat4x4(&m_viewMatrix, view);
XMStoreFloat4x4(&m_inverseView, inverseView);

// The axis basis vectors and camera position are stored inside the
// position matrix in the four rows of the camera's world matrix.
// To figure out the yaw/pitch of the camera, we just need the Z basis vector.
XMFLOAT3 zBasis;
XMStoreFloat3(&zBasis, inverseView.r[2]);

m_cameraYawAngle = atan2f(zBasis.x, zBasis.z);

float len = sqrtf(zBasis.z * zBasis.z + zBasis.x * zBasis.x);


m_cameraPitchAngle = atan2f(zBasis.y, len);
}
//--------------------------------------------------------------------------------------
// Calculates the projection matrix based on input params.
//--------------------------------------------------------------------------------------
void Camera::SetProjParams(
_In_ float fieldOfView,
_In_ float aspectRatio,
_In_ float nearPlane,
_In_ float farPlane
)
{
// Set attributes for the projection matrix.
m_fieldOfView = fieldOfView;
m_aspectRatio = aspectRatio;
m_nearPlane = nearPlane;
m_farPlane = farPlane;
XMStoreFloat4x4(
&m_projectionMatrix,
XMMatrixPerspectiveFovLH(
m_fieldOfView,
m_aspectRatio,
m_nearPlane,
m_farPlane
)
);

STEREO_PARAMETERS* stereoParams = nullptr;


// Change the projection matrix.
XMStoreFloat4x4(
&m_projectionMatrixLeft,
MatrixStereoProjectionFovLH(
stereoParams,
STEREO_CHANNEL::LEFT,
m_fieldOfView,
m_aspectRatio,
m_nearPlane,
m_farPlane,
STEREO_MODE::NORMAL
)
);

XMStoreFloat4x4(
&m_projectionMatrixRight,
MatrixStereoProjectionFovLH(
stereoParams,
STEREO_CHANNEL::RIGHT,
m_fieldOfView,
m_aspectRatio,
m_nearPlane,
m_farPlane,
STEREO_MODE::NORMAL
)
);
}
//--------------------------------------------------------------------------------------
DirectX::XMMATRIX Camera::View()
{
return XMLoadFloat4x4(&m_viewMatrix);
}
DirectX::XMMATRIX Camera::Projection()
{
return XMLoadFloat4x4(&m_projectionMatrix);
}
DirectX::XMMATRIX Camera::LeftEyeProjection()
{
return XMLoadFloat4x4(&m_projectionMatrixLeft);
}
DirectX::XMMATRIX Camera::RightEyeProjection()
{
return XMLoadFloat4x4(&m_projectionMatrixRight);
}
DirectX::XMMATRIX Camera::World()
{
return XMLoadFloat4x4(&m_inverseView);
}
DirectX::XMFLOAT3 Camera::Eye()
{
return m_eye;
}
DirectX::XMFLOAT3 Camera::LookAt()
{
return m_lookAt;
}
float Camera::NearClipPlane()
{
return m_nearPlane;
}
float Camera::FarClipPlane()
{
return m_farPlane;
}
float Camera::Pitch()
{
return m_cameraPitchAngle;
}
float Camera::Yaw()
{
return m_cameraYawAngle;
}

GameRenderer.h

//// THIS CODE AND INFORMATION IS PROVIDED "AS IS" WITHOUT WARRANTY OF
//// ANY KIND, EITHER EXPRESSED OR IMPLIED, INCLUDING BUT NOT LIMITED TO
//// THE IMPLIED WARRANTIES OF MERCHANTABILITY AND/OR FITNESS FOR A
//// PARTICULAR PURPOSE.
////
//// Copyright (c) Microsoft Corporation. All rights reserved

#pragma once

#include "DirectXBase.h"
#include "GameInfoOverlay.h"
#include "GameHud.h"
#include "Simple3DGame.h"

ref class Simple3DGame;


ref class GameHud;

ref class GameRenderer : public DirectXBase


{
internal:
GameRenderer();

virtual void Initialize(


_In_ Windows::UI::Core::CoreWindow^ window,
float dpi
) override;

virtual void CreateDeviceIndependentResources() override;


virtual void CreateDeviceResources() override;
virtual void UpdateForWindowSizeChange() override;
virtual void Render() override;
virtual void SetDpi(float dpi) override;

concurrency::task<void> CreateGameDeviceResourcesAsync(_In_ Simple3DGame^ game);


void FinalizeCreateGameDeviceResources();
concurrency::task<void> LoadLevelResourcesAsync();
void FinalizeLoadLevelResources();

GameInfoOverlay^ InfoOverlay() { return m_gameInfoOverlay; };

DirectX::XMFLOAT2 GameInfoOverlayUpperLeft()
{
return DirectX::XMFLOAT2(
(m_windowBounds.Width - GameInfoOverlayConstant::Width) / 2.0f,
(m_windowBounds.Height - GameInfoOverlayConstant::Height) / 2.0f
);
};
DirectX::XMFLOAT2 GameInfoOverlayLowerRight()
{
return DirectX::XMFLOAT2(
(m_windowBounds.Width - GameInfoOverlayConstant::Width) / 2.0f + GameInfoOverlayConstant::Width,
(m_windowBounds.Height - GameInfoOverlayConstant::Height) / 2.0f + GameInfoOverlayConstant::Height
);
};

protected private:
bool m_initialized;
bool m_gameResourcesLoaded;
bool m_levelResourcesLoaded;
GameInfoOverlay^ m_gameInfoOverlay;
GameHud^ m_gameHud;
Simple3DGame^ m_game;

Microsoft::WRL::ComPtr<ID3D11ShaderResourceView> m_sphereTexture;
Microsoft::WRL::ComPtr<ID3D11ShaderResourceView> m_cylinderTexture;
Microsoft::WRL::ComPtr<ID3D11ShaderResourceView> m_ceilingTexture;
Microsoft::WRL::ComPtr<ID3D11ShaderResourceView> m_floorTexture;
Microsoft::WRL::ComPtr<ID3D11ShaderResourceView> m_wallsTexture;

// Constant Buffers.
Microsoft::WRL::ComPtr<ID3D11Buffer> m_constantBufferNeverChanges;
Microsoft::WRL::ComPtr<ID3D11Buffer> m_constantBufferChangeOnResize;
Microsoft::WRL::ComPtr<ID3D11Buffer> m_constantBufferChangesEveryFrame;
Microsoft::WRL::ComPtr<ID3D11Buffer> m_constantBufferChangesEveryPrim;
Microsoft::WRL::ComPtr<ID3D11SamplerState> m_samplerLinear;
Microsoft::WRL::ComPtr<ID3D11VertexShader> m_vertexShader;
Microsoft::WRL::ComPtr<ID3D11VertexShader> m_vertexShaderFlat;
Microsoft::WRL::ComPtr<ID3D11PixelShader> m_pixelShader;
Microsoft::WRL::ComPtr<ID3D11PixelShader> m_pixelShaderFlat;
Microsoft::WRL::ComPtr<ID3D11InputLayout> m_vertexLayout;
};

GameRenderer.cpp

//// THIS CODE AND INFORMATION IS PROVIDED "AS IS" WITHOUT WARRANTY OF
//// ANY KIND, EITHER EXPRESSED OR IMPLIED, INCLUDING BUT NOT LIMITED TO
//// THE IMPLIED WARRANTIES OF MERCHANTABILITY AND/OR FITNESS FOR A
//// PARTICULAR PURPOSE.
////
//// Copyright (c) Microsoft Corporation. All rights reserved

#include "pch.h"
#include "DirectXSample.h"
#include "GameRenderer.h"
#include "ConstantBuffers.h"
#include "TargetTexture.h"
#include "BasicLoader.h"
#include "CylinderMesh.h"
#include "FaceMesh.h"
#include "SphereMesh.h"
#include "WorldMesh.h"
#include "WorldMesh.h"
#include "Face.h"
#include "Sphere.h"
#include "Cylinder.h"

using namespace concurrency;


using namespace DirectX;
using namespace Microsoft::WRL;
using namespace Windows::UI::Core;

//----------------------------------------------------------------------

GameRenderer::GameRenderer() :
m_initialized(false),
m_gameResourcesLoaded(false),
m_levelResourcesLoaded(false)
{
}

//----------------------------------------------------------------------

void GameRenderer::Initialize(
_In_ CoreWindow^ window,
float dpi
)
{
if (!m_initialized)
{
m_gameHud = ref new GameHud(
"Windows 8 Samples",
"DirectX first-person game sample"
);
m_gameInfoOverlay = ref new GameInfoOverlay();
m_initialized = true;
}

DirectXBase::Initialize(window, dpi);

// Initialize could be called multiple times as a result of an error with the hardware device
// that requires it to be reinitialized. Since the m_gameInfoOverlay has resources that are
// dependent on the device, it will need to be reinitialized each time with the new device information.
m_gameInfoOverlay->Initialize(m_d2dDevice.Get(), m_d2dContext.Get(), m_dwriteFactory.Get(), dpi);
}

//----------------------------------------------------------------------

void GameRenderer::CreateDeviceIndependentResources()
{
DirectXBase::CreateDeviceIndependentResources();
m_gameHud->CreateDeviceIndependentResources(m_dwriteFactory.Get(), m_wicFactory.Get());
}

//----------------------------------------------------------------------

void GameRenderer::CreateDeviceResources()
{
DirectXBase::CreateDeviceResources();

m_gameHud->CreateDeviceResources(m_d2dContext.Get());

if (m_game != nullptr)
{
// The initial invocation of CreateDeviceResources will occur
// before the Game State is initialized when the device is first
being created, so that the initial loading screen can be displayed.
// Subsequent invocations of CreateDeviceResources will be a result
// of an error with the Device that requires the resources to be
// recreated. In this case, the game state is already initialized
// so the game device resources need to be recreated.
// This sample doesn't gracefully handle all the async recreation
// of resources, so an exception is thrown.
throw Platform::Exception::CreateException(
DXGI_ERROR_DEVICE_REMOVED,
"GameRenderer::CreateDeviceResources - Recreation of resources after TDR not available\n"
);
}
}

//----------------------------------------------------------------------

void GameRenderer::UpdateForWindowSizeChange()
{
DirectXBase::UpdateForWindowSizeChange();

m_gameHud->UpdateForWindowSizeChange(m_windowBounds);

// Update the Projection Matrix and update the associated Constant Buffer.
if (m_game != nullptr)
{
m_game->GameCamera()->SetProjParams(
XM_PI / 2, m_renderTargetSize.Width / m_renderTargetSize.Height,
0.01f,
100.0f
);
ConstantBufferChangeOnResize changesOnResize;
XMStoreFloat4x4(
&changesOnResize.projection,
XMMatrixMultiply(
XMMatrixTranspose(m_game->GameCamera()->Projection()),
XMMatrixTranspose(XMLoadFloat4x4(&m_rotationTransform3D))
)
);

m_d3dContext->UpdateSubresource(
m_constantBufferChangeOnResize.Get(),
0,
nullptr,
&changesOnResize,
0,
0
);
}
}

//----------------------------------------------------------------------

void GameRenderer::SetDpi(float dpi)


{
DirectXBase::SetDpi(dpi);

if (m_gameInfoOverlay)
{
m_gameInfoOverlay->SetDpi(dpi);
}
}

//----------------------------------------------------------------------

task<void> GameRenderer::CreateGameDeviceResourcesAsync(_In_ Simple3DGame^ game)


{
// Set the Loading state to wait until any async resources have
// been loaded before proceeding.
m_game = game;

// NOTE: Only the m_d3dDevice is used in this method. It is expected


// to not run on the same thread as the GameRenderer was created.
// Create methods on the m_d3dDevice are free-threaded and are safe while any methods
// in the m_d3dContext should only be used on a single thread and handled
// in the FinalizeCreateGameDeviceResources method.

D3D11_BUFFER_DESC bd;
ZeroMemory(&bd, sizeof(bd));

// Create the constant buffers.


bd.Usage = D3D11_USAGE_DEFAULT;
bd.BindFlags = D3D11_BIND_CONSTANT_BUFFER;
bd.CPUAccessFlags = 0;
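// Round each buffer size up to the next multiple of 16 bytes, because D3D11
// requires constant buffer sizes to be 16-byte multiples.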
bd.ByteWidth = (sizeof(ConstantBufferNeverChanges) + 15) / 16 * 16;
DX::ThrowIfFailed(
m_d3dDevice->CreateBuffer(&bd, nullptr, &m_constantBufferNeverChanges)
);

bd.ByteWidth = (sizeof(ConstantBufferChangeOnResize) + 15) / 16 * 16;


DX::ThrowIfFailed(
m_d3dDevice->CreateBuffer(&bd, nullptr, &m_constantBufferChangeOnResize)
);

bd.ByteWidth = (sizeof(ConstantBufferChangesEveryFrame) + 15) / 16 * 16;


DX::ThrowIfFailed(
m_d3dDevice->CreateBuffer(&bd, nullptr, &m_constantBufferChangesEveryFrame)
);

bd.ByteWidth = (sizeof(ConstantBufferChangesEveryPrim) + 15) / 16 * 16;


DX::ThrowIfFailed(
m_d3dDevice->CreateBuffer(&bd, nullptr, &m_constantBufferChangesEveryPrim)
);

D3D11_SAMPLER_DESC sampDesc;
ZeroMemory(&sampDesc, sizeof(sampDesc));

sampDesc.Filter = D3D11_FILTER_MIN_MAG_MIP_LINEAR;
sampDesc.AddressU = D3D11_TEXTURE_ADDRESS_WRAP;
sampDesc.AddressV = D3D11_TEXTURE_ADDRESS_WRAP;
sampDesc.AddressW = D3D11_TEXTURE_ADDRESS_WRAP;
sampDesc.ComparisonFunc = D3D11_COMPARISON_NEVER;
sampDesc.MinLOD = 0;
sampDesc.MaxLOD = FLT_MAX;
DX::ThrowIfFailed(
m_d3dDevice->CreateSamplerState(&sampDesc, &m_samplerLinear)
);

// Start the async tasks to load the shaders and textures.


BasicLoader^ loader = ref new BasicLoader(m_d3dDevice.Get());

std::vector<task<void>> tasks;

uint32 numElements = ARRAYSIZE(PNTVertexLayout);


tasks.push_back(loader->LoadShaderAsync("VertexShader.cso", PNTVertexLayout, numElements, &m_vertexShader, &m_vertexLayout));
tasks.push_back(loader->LoadShaderAsync("VertexShaderFlat.cso", nullptr, numElements, &m_vertexShaderFlat, nullptr));
tasks.push_back(loader->LoadShaderAsync("PixelShader.cso", &m_pixelShader));
tasks.push_back(loader->LoadShaderAsync("PixelShaderFlat.cso", &m_pixelShaderFlat));

// Make sure previous versions, if any, of the textures are released.


m_sphereTexture = nullptr;
m_cylinderTexture = nullptr;
m_ceilingTexture = nullptr;
m_floorTexture = nullptr;
m_wallsTexture = nullptr;

// Load Game specific textures.


tasks.push_back(loader->LoadTextureAsync("seafloor.dds", nullptr, &m_sphereTexture));
tasks.push_back(loader->LoadTextureAsync("metal_texture.dds", nullptr, &m_cylinderTexture));
tasks.push_back(loader->LoadTextureAsync("cellceiling.dds", nullptr, &m_ceilingTexture));
tasks.push_back(loader->LoadTextureAsync("cellfloor.dds", nullptr, &m_floorTexture));
tasks.push_back(loader->LoadTextureAsync("cellwall.dds", nullptr, &m_wallsTexture));
tasks.push_back(create_task([]()
{
// Simulate loading additional resources.
wait(GameConstants::InitialLoadingDelay);
}));

// Return the task group of all the async tasks for loading the shader and texture assets.
return when_all(tasks.begin(), tasks.end());
}

//----------------------------------------------------------------------

void GameRenderer::FinalizeCreateGameDeviceResources()
{
// All asynchronously loaded resources have completed loading.
// Now, associate all the resources with the appropriate
// Game objects.
// This method is expected to run in the same thread as the GameRenderer
// was created. All work will happen behind the "Loading ..." screen after the
// main loop has been entered.

// Initialize the Constant buffer with the light positions.


// These are handled here to ensure that the d3dContext is only
// used in one thread.

ConstantBufferNeverChanges constantBufferNeverChanges;
constantBufferNeverChanges.lightPosition[0] = XMFLOAT4( 3.5f, 2.5f, 5.5f, 1.0f);
constantBufferNeverChanges.lightPosition[1] = XMFLOAT4( 3.5f, 2.5f, -5.5f, 1.0f);
constantBufferNeverChanges.lightPosition[2] = XMFLOAT4(-3.5f, 2.5f, -5.5f, 1.0f);
constantBufferNeverChanges.lightPosition[3] = XMFLOAT4(-3.5f, 2.5f, 5.5f, 1.0f);
constantBufferNeverChanges.lightColor = XMFLOAT4(0.25f, 0.25f, 0.25f, 1.0f);
m_d3dContext->UpdateSubresource(m_constantBufferNeverChanges.Get(), 0, nullptr, &constantBufferNeverChanges, 0, 0);

// For the targets, there are two unique generated textures.


// Each texture image includes the number of the texture.
// Make sure the 2D rendering is occurring on the same thread
// as the main rendering.

TargetTexture^ textureGenerator = ref new TargetTexture(


m_d3dDevice.Get(),
m_d2dFactory.Get(),
m_dwriteFactory.Get(),
m_d2dContext.Get()
);

MeshObject^ cylinderMesh = ref new CylinderMesh(m_d3dDevice.Get(), 26);


MeshObject^ targetMesh = ref new FaceMesh(m_d3dDevice.Get());
MeshObject^ sphereMesh = ref new SphereMesh(m_d3dDevice.Get(), 26);

Material^ cylinderMaterial = ref new Material(


XMFLOAT4(0.8f, 0.8f, 0.8f, .5f),
XMFLOAT4(0.8f, 0.8f, 0.8f, .5f),
XMFLOAT4(1.0f, 1.0f, 1.0f, 1.0f),
15.0f,
m_cylinderTexture.Get(),
m_vertexShader.Get(),
m_pixelShader.Get()
);

Material^ sphereMaterial = ref new Material(


XMFLOAT4(0.8f, 0.4f, 0.0f, 1.0f),
XMFLOAT4(0.8f, 0.4f, 0.0f, 1.0f),
XMFLOAT4(1.0f, 1.0f, 1.0f, 1.0f),
50.0f,
m_sphereTexture.Get(),
m_vertexShader.Get(),
m_pixelShader.Get()
);

auto objects = m_game->RenderObjects();


// Attach the textures to the appropriate game objects.
for (auto object = objects.begin(); object != objects.end(); object++)
{
if ((*object)->TargetId() == GameConstants::WorldFloorId)
{
(*object)->NormalMaterial(
ref new Material(
XMFLOAT4(0.5f, 0.5f, 0.5f, 1.0f),
XMFLOAT4(0.8f, 0.8f, 0.8f, 1.0f),
XMFLOAT4(0.3f, 0.3f, 0.3f, 1.0f),
150.0f,
m_floorTexture.Get(),
m_vertexShaderFlat.Get(),
m_pixelShaderFlat.Get()
)
);
(*object)->Mesh(ref new WorldFloorMesh(m_d3dDevice.Get()));
}
else if ((*object)->TargetId() == GameConstants::WorldCeilingId)
{
(*object)->NormalMaterial(
ref new Material(
XMFLOAT4(0.5f, 0.5f, 0.5f, 1.0f),
XMFLOAT4(0.8f, 0.8f, 0.8f, 1.0f),
XMFLOAT4(0.3f, 0.3f, 0.3f, 1.0f),
150.0f,
m_ceilingTexture.Get(),
m_vertexShaderFlat.Get(),
m_pixelShaderFlat.Get()
)
);
(*object)->Mesh(ref new WorldCeilingMesh(m_d3dDevice.Get()));
}
else if ((*object)->TargetId() == GameConstants::WorldWallsId)
{
(*object)->NormalMaterial(
ref new Material(
XMFLOAT4(0.5f, 0.5f, 0.5f, 1.0f),
XMFLOAT4(0.8f, 0.8f, 0.8f, 1.0f),
XMFLOAT4(0.3f, 0.3f, 0.3f, 1.0f),
150.0f,
m_wallsTexture.Get(),
m_vertexShaderFlat.Get(),
m_pixelShaderFlat.Get()
)
);
(*object)->Mesh(ref new WorldWallsMesh(m_d3dDevice.Get()));
}
else if (Cylinder^ cylinder = dynamic_cast<Cylinder^>(*object))
{
cylinder->Mesh(cylinderMesh);
cylinder->NormalMaterial(cylinderMaterial);
}
else if (Face^ target = dynamic_cast<Face^>(*object))
{
const int bufferLength = 16;
char16 str[bufferLength];
int len = swprintf_s(str, bufferLength, L"%d", target->TargetId());
Platform::String^ string = ref new Platform::String(str, len);

ComPtr<ID3D11ShaderResourceView> texture;
textureGenerator->CreateTextureResourceView(string, &texture);
target->NormalMaterial(
ref new Material(
XMFLOAT4(0.8f, 0.8f, 0.8f, 0.5f),
XMFLOAT4(0.8f, 0.8f, 0.8f, 0.5f),
XMFLOAT4(0.3f, 0.3f, 0.3f, 1.0f),
5.0f,
texture.Get(),
m_vertexShader.Get(),
m_pixelShader.Get()
)
);

textureGenerator->CreateHitTextureResourceView(string, &texture);
target->HitMaterial(
ref new Material(
XMFLOAT4(0.8f, 0.8f, 0.8f, 0.5f),
XMFLOAT4(0.8f, 0.8f, 0.8f, 0.5f),
XMFLOAT4(0.3f, 0.3f, 0.3f, 1.0f),
5.0f,
texture.Get(),
m_vertexShader.Get(),
m_pixelShader.Get()
)
);

target->Mesh(targetMesh);
}
else if (Sphere^ sphere = dynamic_cast<Sphere^>(*object))
{
sphere->Mesh(sphereMesh);
sphere->NormalMaterial(sphereMaterial);
}
}

// Ensure that the camera has been initialized with the right Projection
// matrix. The camera is not created at the time the first window resize event
// occurs.
m_game->GameCamera()->SetProjParams(
XM_PI / 2, m_renderTargetSize.Width / m_renderTargetSize.Height,
0.01f,
100.0f
);

// Make sure that Projection matrix has been set in the constant buffer
// now that all the resources are loaded.
// DirectXBase is handling screen rotations directly to eliminate an unaligned
// fullscreen copy. As a result, it is necessary to post multiply the rotationTransform3D
// matrix to the camera projection matrix.
ConstantBufferChangeOnResize changesOnResize;
XMStoreFloat4x4(
&changesOnResize.projection,
XMMatrixMultiply(
XMMatrixTranspose(m_game->GameCamera()->Projection()),
XMMatrixTranspose(XMLoadFloat4x4(&m_rotationTransform3D))
)
);

m_d3dContext->UpdateSubresource(
m_constantBufferChangeOnResize.Get(),
0,
nullptr,
&changesOnResize,
0,
0
);

m_gameResourcesLoaded = true;
}

//----------------------------------------------------------------------

task<void> GameRenderer::LoadLevelResourcesAsync()
{
m_levelResourcesLoaded = false;
return create_task([this]()
{
// This is where additional async loading of level specific resources
// would be done. Because there aren't any to load, just simulate
// by delaying for some time.
wait(GameConstants::LevelLoadingDelay);
});
}

//----------------------------------------------------------------------

void GameRenderer::FinalizeLoadLevelResources()
{
// After the level specific resources had been loaded, this method is
// where D3D context specific actions would be handled. This method
// runs in the same thread context as the GameRenderer was created.

m_levelResourcesLoaded = true;
}

//----------------------------------------------------------------------

void GameRenderer::Render()
{
int renderingPasses = 1;
if (m_stereoEnabled)
{
renderingPasses = 2;
}

for (int i = 0; i < renderingPasses; i++)


{
if (m_stereoEnabled && i > 0)
{
// Doing the Right Eye View.
m_d3dContext->OMSetRenderTargets(1, m_renderTargetViewRight.GetAddressOf(), m_depthStencilView.Get());
m_d3dContext->ClearDepthStencilView(m_depthStencilView.Get(), D3D11_CLEAR_DEPTH, 1.0f, 0);
m_d2dContext->SetTarget(m_d2dTargetBitmapRight.Get());
}
else
{
// Doing the Mono or Left Eye View.
m_d3dContext->OMSetRenderTargets(1, m_renderTargetView.GetAddressOf(), m_depthStencilView.Get());
m_d3dContext->ClearDepthStencilView(m_depthStencilView.Get(), D3D11_CLEAR_DEPTH, 1.0f, 0);
m_d2dContext->SetTarget(m_d2dTargetBitmap.Get());
}

if (m_game != nullptr && m_gameResourcesLoaded && m_levelResourcesLoaded)


{
// This section is only used after the game state has been initialized and all device
// resources needed for the game have been created and associated with the game objects.
if (m_stereoEnabled)
{
ConstantBufferChangeOnResize changesOnResize;
XMStoreFloat4x4(
&changesOnResize.projection,
XMMatrixMultiply(
XMMatrixTranspose(
i == 0 ?
m_game->GameCamera()->LeftEyeProjection() :
m_game->GameCamera()->RightEyeProjection()
),
XMMatrixTranspose(XMLoadFloat4x4(&m_rotationTransform3D))
)
);

m_d3dContext->UpdateSubresource(
m_constantBufferChangeOnResize.Get(),
0,
nullptr,
&changesOnResize,
0,
0
);
}
// Update variables that change one time per frame.

ConstantBufferChangesEveryFrame constantBufferChangesEveryFrame;
XMStoreFloat4x4(
&constantBufferChangesEveryFrame.view,
XMMatrixTranspose(m_game->GameCamera()->View())
);
m_d3dContext->UpdateSubresource(
m_constantBufferChangesEveryFrame.Get(),
0,
nullptr,
&constantBufferChangesEveryFrame,
0,
0
);

// Set up Pipeline.

m_d3dContext->IASetInputLayout(m_vertexLayout.Get());
m_d3dContext->VSSetConstantBuffers(0, 1, m_constantBufferNeverChanges.GetAddressOf());
m_d3dContext->VSSetConstantBuffers(1, 1, m_constantBufferChangeOnResize.GetAddressOf());
m_d3dContext->VSSetConstantBuffers(2, 1, m_constantBufferChangesEveryFrame.GetAddressOf());
m_d3dContext->VSSetConstantBuffers(3, 1, m_constantBufferChangesEveryPrim.GetAddressOf());

m_d3dContext->PSSetConstantBuffers(2, 1, m_constantBufferChangesEveryFrame.GetAddressOf());
m_d3dContext->PSSetConstantBuffers(3, 1, m_constantBufferChangesEveryPrim.GetAddressOf());
m_d3dContext->PSSetSamplers(0, 1, m_samplerLinear.GetAddressOf());

// Get all the objects to render from the Game state.


auto objects = m_game->RenderObjects();
for (auto object = objects.begin(); object != objects.end(); object++)
{
(*object)->Render(m_d3dContext.Get(), m_constantBufferChangesEveryPrim.Get());
}
}
else
{
const float ClearColor[4] = {0.1f, 0.1f, 0.1f, 1.0f};

// Only need to clear the background when not rendering the full 3-D scene because
// the 3-D world is a fully enclosed box and the dynamics prevents the camera from
// moving outside this space.
if (m_stereoEnabled && i > 0)
{
// Doing the Right Eye View.
m_d3dContext->ClearRenderTargetView(m_renderTargetViewRight.Get(), ClearColor);
}
else
{
// Doing the Mono or Left Eye View.
m_d3dContext->ClearRenderTargetView(m_renderTargetView.Get(), ClearColor);
}
}

m_d2dContext->BeginDraw();

// To handle the swapchain being pre-rotated, set the D2D transformation to include it.
m_d2dContext->SetTransform(m_rotationTransform2D);

if (m_game != nullptr && m_gameResourcesLoaded)


{
// This is only used after the game state has been initialized.
m_gameHud->Render(m_game, m_d2dContext.Get(), m_windowBounds);
}

if (m_gameInfoOverlay->Visible())
{
m_d2dContext->DrawBitmap(
m_gameInfoOverlay->Bitmap(),
D2D1::RectF(
(m_windowBounds.Width - GameInfoOverlayConstant::Width)/2.0f,
(m_windowBounds.Height - GameInfoOverlayConstant::Height)/2.0f,
(m_windowBounds.Width - GameInfoOverlayConstant::Width)/2.0f + GameInfoOverlayConstant::Width,
(m_windowBounds.Height - GameInfoOverlayConstant::Height)/2.0f + GameInfoOverlayConstant::Height
)
);
}

HRESULT hr = m_d2dContext->EndDraw();
if (hr != D2DERR_RECREATE_TARGET)
{
// The D2DERR_RECREATE_TARGET indicates there has been a problem with the underlying
// D3D device. All subsequent rendering will be ignored until the device is recreated.
// This error will be propagated and the appropriate D3D error will be returned from the
// swapchain->Present(...) call. At that point, the sample will recreate the device
// and all associated resources. As a result, the D2DERR_RECREATE_TARGET doesn't
// need to be handled here.
DX::ThrowIfFailed(hr);
}
}
Present();
}

GameObject.h

//// THIS CODE AND INFORMATION IS PROVIDED "AS IS" WITHOUT WARRANTY OF
//// ANY KIND, EITHER EXPRESSED OR IMPLIED, INCLUDING BUT NOT LIMITED TO
//// THE IMPLIED WARRANTIES OF MERCHANTABILITY AND/OR FITNESS FOR A
//// PARTICULAR PURPOSE.
////
//// Copyright (c) Microsoft Corporation. All rights reserved

#pragma once

#include "MeshObject.h"
#include "SoundEffect.h"
#include "Animate.h"
#include "Material.h"

ref class GameObject


{
internal:
GameObject();

// Expect these two functions to be overloaded by subclasses.


virtual bool IsTouching(
DirectX::XMFLOAT3 /* point */,
float /* radius */,
_Out_ DirectX::XMFLOAT3 *contact,
_Out_ DirectX::XMFLOAT3 *normal
)
{
*contact = DirectX::XMFLOAT3(0.0f, 0.0f, 0.0f);
*normal = DirectX::XMFLOAT3(0.0f, 0.0f, 1.0f);
return false;
};

void Render(
_In_ ID3D11DeviceContext *context,
_In_ ID3D11Buffer *primitiveConstantBuffer
);

void Active(bool active);


bool Active();
void Target(bool target);
bool Target();
void Hit(bool hit);
bool Hit();
void OnGround(bool ground);
bool OnGround();
void TargetId(int targetId);
int TargetId();
void HitTime(float t);
float HitTime();

void AnimatePosition(_In_opt_ Animate^ animate);


Animate^ AnimatePosition();

void HitSound(_In_ SoundEffect^ hitSound);


SoundEffect^ HitSound();

void PlaySound(float impactSpeed, DirectX::XMFLOAT3 eyePoint);

void Mesh(_In_ MeshObject^ mesh);

void NormalMaterial(_In_ Material^ material);


Material^ NormalMaterial();
void HitMaterial(_In_ Material^ material);
Material^ HitMaterial();

void Position(DirectX::XMFLOAT3 position);


void Position(DirectX::XMVECTOR position);
void Velocity(DirectX::XMFLOAT3 velocity);
void Velocity(DirectX::XMVECTOR velocity);
DirectX::XMMATRIX ModelMatrix();
DirectX::XMFLOAT3 Position();
DirectX::XMVECTOR VectorPosition();
DirectX::XMVECTOR VectorVelocity();
DirectX::XMFLOAT3 Velocity();

protected private:
virtual void UpdatePosition() {};
// Object Data.
bool m_active;
bool m_target;
int m_targetId;
bool m_hit;
bool m_ground;

DirectX::XMFLOAT3 m_position;
DirectX::XMFLOAT3 m_velocity;
DirectX::XMFLOAT4X4 m_modelMatrix;

Material^ m_normalMaterial;
Material^ m_hitMaterial;

DirectX::XMFLOAT3 m_defaultXAxis;
DirectX::XMFLOAT3 m_defaultYAxis;
DirectX::XMFLOAT3 m_defaultZAxis;

float m_hitTime;

Animate^ m_animatePosition;
MeshObject^ m_mesh;

SoundEffect^ m_hitSound;
};
__forceinline void GameObject::Active(bool active)
{
m_active = active;
}

__forceinline bool GameObject::Active()


{
return m_active;
}

__forceinline void GameObject::Target(bool target)


{
m_target = target;
}

__forceinline bool GameObject::Target()


{
return m_target;
}

__forceinline void GameObject::Hit(bool hit)


{
m_hit = hit;
}

__forceinline bool GameObject::Hit()


{
return m_hit;
}

__forceinline void GameObject::OnGround(bool ground)


{
m_ground = ground;
}

__forceinline bool GameObject::OnGround()


{
return m_ground;
}

__forceinline void GameObject::TargetId(int targetId)


{
m_targetId = targetId;
}

__forceinline int GameObject::TargetId()


{
return m_targetId;
}

__forceinline void GameObject::HitTime(float t)


{
m_hitTime = t;
}

__forceinline float GameObject::HitTime()


{
return m_hitTime;
}

__forceinline void GameObject::Position(DirectX::XMFLOAT3 position)


{
m_position = position;
// Update any internal states that are dependent on the position.
// UpdatePosition is a virtual function that is specific to the derived class.
UpdatePosition();
}

__forceinline void GameObject::Position(DirectX::XMVECTOR position)


{
XMStoreFloat3(&m_position, position);
// Update any internal states that are dependent on the position.
// UpdatePosition is a virtual function that is specific to the derived class.
UpdatePosition();
}

__forceinline DirectX::XMFLOAT3 GameObject::Position()


{
return m_position;
}

__forceinline DirectX::XMVECTOR GameObject::VectorPosition()


{
return DirectX::XMLoadFloat3(&m_position);
}

__forceinline void GameObject::Velocity(DirectX::XMFLOAT3 velocity)


{
m_velocity = velocity;
}

__forceinline void GameObject::Velocity(DirectX::XMVECTOR velocity)


{
XMStoreFloat3(&m_velocity, velocity);
}

__forceinline DirectX::XMFLOAT3 GameObject::Velocity()


{
return m_velocity;
}

__forceinline DirectX::XMVECTOR GameObject::VectorVelocity()


{
return DirectX::XMLoadFloat3(&m_velocity);
}

__forceinline void GameObject::AnimatePosition(_In_opt_ Animate^ animate)


{
m_animatePosition = animate;
}

__forceinline Animate^ GameObject::AnimatePosition()


{
return m_animatePosition;
}

__forceinline void GameObject::NormalMaterial(_In_ Material^ material)


{
m_normalMaterial = material;
}

__forceinline Material^ GameObject::NormalMaterial()


{
return m_normalMaterial;
}

__forceinline void GameObject::HitMaterial(_In_ Material^ material)


{
m_hitMaterial = material;
}

__forceinline Material^ GameObject::HitMaterial()


{
return m_hitMaterial;
}

__forceinline void GameObject::Mesh(_In_ MeshObject^ mesh)


{
m_mesh = mesh;
}

__forceinline void GameObject::HitSound(_In_ SoundEffect^ hitSound)


{
m_hitSound = hitSound;
}

__forceinline SoundEffect^ GameObject::HitSound()


{
return m_hitSound;
}

__forceinline DirectX::XMMATRIX GameObject::ModelMatrix()


{
return DirectX::XMLoadFloat4x4(&m_modelMatrix);
}

GameObject.cpp

//// THIS CODE AND INFORMATION IS PROVIDED "AS IS" WITHOUT WARRANTY OF
//// ANY KIND, EITHER EXPRESSED OR IMPLIED, INCLUDING BUT NOT LIMITED TO
//// THE IMPLIED WARRANTIES OF MERCHANTABILITY AND/OR FITNESS FOR A
//// PARTICULAR PURPOSE.
////
//// Copyright (c) Microsoft Corporation. All rights reserved

#include "pch.h"
#include "GameObject.h"
#include "ConstantBuffers.h"
#include "GameConstants.h"

using namespace DirectX;

//--------------------------------------------------------------------------------

GameObject::GameObject() :
m_normalMaterial(nullptr),
m_hitMaterial(nullptr)
{
m_active = false;
m_target = false;
m_targetId = 0;
m_hit = false;
m_ground = true;

m_position = XMFLOAT3(0.0f, 0.0f, 0.0f);
m_velocity = XMFLOAT3(0.0f, 0.0f, 0.0f);
m_defaultXAxis = XMFLOAT3(1.0f, 0.0f, 0.0f);
m_defaultYAxis = XMFLOAT3(0.0f, 1.0f, 0.0f);
m_defaultZAxis = XMFLOAT3(0.0f, 0.0f, 1.0f);
XMStoreFloat4x4(&m_modelMatrix, XMMatrixIdentity());

m_hitTime = 0.0f;

m_animatePosition = nullptr;
}

//----------------------------------------------------------------------

void GameObject::Render(
_In_ ID3D11DeviceContext *context,
_In_ ID3D11Buffer *primitiveConstantBuffer
)
{
if (!m_active || (m_mesh == nullptr) || (m_normalMaterial == nullptr))
{
return;
}

ConstantBufferChangesEveryPrim constantBuffer;

XMStoreFloat4x4(
&constantBuffer.worldMatrix,
XMMatrixTranspose(ModelMatrix())
);

if (m_hit && m_hitMaterial != nullptr)
{
m_hitMaterial->RenderSetup(context, &constantBuffer);
}
else
{
m_normalMaterial->RenderSetup(context, &constantBuffer);
}
context->UpdateSubresource(primitiveConstantBuffer, 0, nullptr, &constantBuffer, 0, 0);

m_mesh->Render(context);
}

//----------------------------------------------------------------------

void GameObject::PlaySound(float impactSpeed, XMFLOAT3 eyePoint)
{
if (m_hitSound != nullptr)
{
// Determine the sound volume adjustment based on velocity.
float adjustment;
if (impactSpeed < GameConstants::Sound::MinVelocity)
{
adjustment = 0.0f; // Too soft. Don't play sound.
}
else
{
adjustment = min(1.0f, impactSpeed / GameConstants::Sound::MaxVelocity);
adjustment = GameConstants::Sound::MinAdjustment + adjustment * (1.0f - GameConstants::Sound::MinAdjustment);
}

// Compute the distance to the eye point to adjust the volume based on that distance.
XMVECTOR cameraToPosition = XMLoadFloat3(&eyePoint) - VectorPosition();
float distToPositionSquared = XMVectorGetX(XMVector3LengthSq(cameraToPosition));

float volume = min(1.0f / distToPositionSquared, 1.0f);

// Scale the volume: the sound is proportional to how hard the ball is hitting.
volume = adjustment * volume;

m_hitSound->PlaySound(volume);
}
}
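
To put numbers to the volume computation: assuming, purely for illustration, GameConstants::Sound::MinAdjustment = 0.2 and GameConstants::Sound::MaxVelocity = 20 (the real values are defined in GameConstants.h), an impact speed of 10 yields adjustment = 0.2 + (10 / 20) * 0.8 = 0.6. A ball two units from the camera contributes a distance term of min(1 / 4, 1) = 0.25, so the hit sound plays at volume 0.6 * 0.25 = 0.15.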

Animate.h

//// THIS CODE AND INFORMATION IS PROVIDED "AS IS" WITHOUT WARRANTY OF
//// ANY KIND, EITHER EXPRESSED OR IMPLIED, INCLUDING BUT NOT LIMITED TO
//// THE IMPLIED WARRANTIES OF MERCHANTABILITY AND/OR FITNESS FOR A
//// PARTICULAR PURPOSE.
////
//// Copyright (c) Microsoft Corporation. All rights reserved

#pragma once

// Animate:
// This is an abstract class for animations. It defines a set of
// capabilities for animating XMFLOAT3 variables. An animation has the following
// characteristics:
// Start - the time for the animation to start.
// Duration - the length of time the animation is to run.
// Continuous - whether the animation loops after duration is reached or just
// stops.
// There are two query functions:
// IsActive - determines if the animation is active at time t.
// IsFinished - determines if the animation is done at time t.
// It is expected that each derived class will provide an Evaluate method for the
// specific kind of animation.

ref class Animate abstract
{
internal:
Animate();

virtual DirectX::XMFLOAT3 Evaluate (_In_ float t) = 0;

bool IsActive(_In_ float t) { return ((t >= m_startTime) && (m_continuous || (t < (m_startTime + m_duration)))); };
bool IsFinished(_In_ float t) { return (!m_continuous && (t >= (m_startTime + m_duration))); }
float Start();
void Start(_In_ float start);
float Duration();
void Duration(_In_ float duration);
bool Continuous();
void Continuous(_In_ bool continuous);

protected private:
bool m_continuous; // if true means at end cycle back to beginning
float m_startTime; // time at which animation begins
float m_duration; // for continuous, this is the duration of 1 cycle through path
};

//----------------------------------------------------------------------

// AnimateLinePosition:
// This class is a specialization of Animate that defines an animation of a position vector
// along a straight line defined in world coordinates from startPosition to endPosition.

ref class AnimateLinePosition: public Animate
{
internal:
AnimateLinePosition(
_In_ DirectX::XMFLOAT3 startPosition,
_In_ DirectX::XMFLOAT3 endPosition,
_In_ float duration,
_In_ bool continuous
);
virtual DirectX::XMFLOAT3 Evaluate(_In_ float t) override;

private:
DirectX::XMFLOAT3 m_startPosition;
DirectX::XMFLOAT3 m_endPosition;
float m_length;
};

//----------------------------------------------------------------------

struct LineSegment
{
DirectX::XMFLOAT3 position;
float length;
float uStart;
float uLength;
};

// AnimateLineListPosition:
// This class is a specialization of Animate that defines an animation of a position vector
// along a set of line segments defined by a set of points. The animation along the path is
// such that the evaluation of the position along the path will be uniform independent of
// the length of each of the line segments. A continuous loop can be achieved by having the
// first and last points of the list be the same.

ref class AnimateLineListPosition: public Animate
{
internal:
AnimateLineListPosition(
_In_ unsigned int count,
_In_reads_(count) DirectX::XMFLOAT3 position[],
_In_ float duration,
_In_ bool continuous
);
virtual DirectX::XMFLOAT3 Evaluate(_In_ float t) override;

private:
unsigned int m_count;
float m_totalLength;
std::vector<LineSegment> m_segment;
};

//----------------------------------------------------------------------

// AnimateCirclePosition:
// This class is a specialization of Animate that defines an animation of a position vector
// along a circular path centered at 'center' with a starting point of 'startPosition'. The
// distance between 'center' and 'startPosition' defines the radius of the circle. The plane
// of the circle will pass through 'center' and 'startPosition' and have a normal of 'planeNormal'.
// The direction of the animation can be either clockwise or counterclockwise based
// on the 'clockwise' parameter.

ref class AnimateCirclePosition: public Animate
{
internal:
AnimateCirclePosition(
_In_ DirectX::XMFLOAT3 center,
_In_ DirectX::XMFLOAT3 startPosition,
_In_ DirectX::XMFLOAT3 planeNormal,
_In_ float duration,
_In_ bool continuous,
_In_ bool clockwise
);
virtual DirectX::XMFLOAT3 Evaluate(_In_ float t) override;

private:
DirectX::XMFLOAT4X4 m_rotationMatrix;
DirectX::XMFLOAT3 m_center;
DirectX::XMFLOAT3 m_planeNormal;
DirectX::XMFLOAT3 m_startPosition;
float m_radius;
bool m_clockwise;
};

//----------------------------------------------------------------------
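
Taken together, a level animates an object by attaching one of these Animate specializations and sampling it once per time step. A brief sketch (the object handle, timer value, and path constants are illustrative, not from the sample):

// Attach a looping circular path: orbit the origin at radius 2, in the plane
// normal to +Y, one revolution every 10 seconds.
object->AnimatePosition(
    ref new AnimateCirclePosition(
        DirectX::XMFLOAT3(0.0f, 0.0f, 0.0f),   // center
        DirectX::XMFLOAT3(2.0f, 0.0f, 0.0f),   // start position => radius 2
        DirectX::XMFLOAT3(0.0f, 1.0f, 0.0f),   // plane normal
        10.0f,                                 // duration of one cycle
        true,                                  // continuous
        true                                   // clockwise
        )
    );

// Each time step, sample the path; release non-continuous animations
// once they report completion.
if (object->AnimatePosition() != nullptr)
{
    object->Position(object->AnimatePosition()->Evaluate(totalTime));
    if (object->AnimatePosition()->IsFinished(totalTime))
    {
        object->AnimatePosition(nullptr);
    }
}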

Animate.cpp

//// THIS CODE AND INFORMATION IS PROVIDED "AS IS" WITHOUT WARRANTY OF
//// ANY KIND, EITHER EXPRESSED OR IMPLIED, INCLUDING BUT NOT LIMITED TO
//// THE IMPLIED WARRANTIES OF MERCHANTABILITY AND/OR FITNESS FOR A
//// PARTICULAR PURPOSE.
////
//// Copyright (c) Microsoft Corporation. All rights reserved

#include "pch.h"
#include "Animate.h"

using namespace DirectX;

//----------------------------------------------------------------------

Animate::Animate():
m_continuous(false),
m_startTime(0.0f),
m_duration(10.0f)
{
}

//----------------------------------------------------------------------

float Animate::Start()
{
return m_startTime;
}

//----------------------------------------------------------------------

void Animate::Start(_In_ float start)
{
m_startTime = start;
}

//----------------------------------------------------------------------

float Animate::Duration()
{
return m_duration;
}

//----------------------------------------------------------------------

void Animate::Duration(_In_ float duration)
{
m_duration = duration;
}

//----------------------------------------------------------------------

bool Animate::Continuous()
{
return m_continuous;
}

//----------------------------------------------------------------------

void Animate::Continuous(_In_ bool continuous)
{
m_continuous = continuous;
}

//----------------------------------------------------------------------

AnimateLinePosition::AnimateLinePosition(
_In_ XMFLOAT3 startPosition,
_In_ XMFLOAT3 endPosition,
_In_ float duration,
_In_ bool continuous)
{
m_startPosition = startPosition;
m_endPosition = endPosition;
m_duration = duration;
m_continuous = continuous;

m_length = XMVectorGetX(
XMVector3Length(XMLoadFloat3(&endPosition) - XMLoadFloat3(&startPosition))
);
}

//----------------------------------------------------------------------

XMFLOAT3 AnimateLinePosition::Evaluate(_In_ float t)
{
if (t <= m_startTime)
{
return m_startPosition;
}

if ((t >= (m_startTime + m_duration)) && !m_continuous)
{
return m_endPosition;
}

float startTime = m_startTime;
if (m_continuous)
{
// For continuous operation, move the start time forward to
// eliminate previous iterations.
startTime += ((int)((t - m_startTime) / m_duration)) * m_duration;
}

float u = (t - startTime) / m_duration;
XMFLOAT3 currentPosition;
currentPosition.x = m_startPosition.x + (m_endPosition.x - m_startPosition.x)*u;
currentPosition.y = m_startPosition.y + (m_endPosition.y - m_startPosition.y)*u;
currentPosition.z = m_startPosition.z + (m_endPosition.z - m_startPosition.z)*u;

return currentPosition;
}

//----------------------------------------------------------------------

AnimateLineListPosition::AnimateLineListPosition(
_In_ unsigned int count,
_In_reads_(count) XMFLOAT3 position[],
_In_ float duration,
_In_ bool continuous)
{
m_duration = duration;
m_continuous = continuous;
m_count = count;

std::vector<LineSegment> segment(m_count);
m_segment = segment;
m_totalLength = 0.0f;

m_segment[0].position = position[0];
for (unsigned int i = 1; i < count; i++)
{
m_segment[i].position = position[i];
m_segment[i - 1].length = XMVectorGetX(
XMVector3Length(
XMLoadFloat3(&m_segment[i].position) -
XMLoadFloat3(&m_segment[i - 1].position)
)
);
m_totalLength += m_segment[i - 1].length;
}

// Parameterize the segments to ensure uniform evaluation along the path.
float u = 0.0f;
for (unsigned int i = 0; i < (count - 1); i++)
{
m_segment[i].uStart = u;
m_segment[i].uLength = (m_segment[i].length / m_totalLength);
u += m_segment[i].uLength;
}
m_segment[count-1].uStart = 1.0f;
}

//----------------------------------------------------------------------

XMFLOAT3 AnimateLineListPosition::Evaluate(_In_ float t)
{
if (t <= m_startTime)
{
return m_segment[0].position;
}

if ((t >= (m_startTime + m_duration)) && !m_continuous)
{
return m_segment[m_count-1].position;
}

float startTime = m_startTime;
if (m_continuous)
{
// For continuous operation, move the start time forward to
// eliminate previous iterations.
startTime += ((int)((t - m_startTime) / m_duration)) * m_duration;
}

float u = (t - startTime) / m_duration;

// Find the right segment.
unsigned int i = 0;
while (u > m_segment[i + 1].uStart)
{
i++;
}

u -= m_segment[i].uStart;
u /= m_segment[i].uLength;

XMFLOAT3 currentPosition;
currentPosition.x = m_segment[i].position.x + (m_segment[i + 1].position.x - m_segment[i].position.x)*u;
currentPosition.y = m_segment[i].position.y + (m_segment[i + 1].position.y - m_segment[i].position.y)*u;
currentPosition.z = m_segment[i].position.z + (m_segment[i + 1].position.z - m_segment[i].position.z)*u;

return currentPosition;
}

//----------------------------------------------------------------------

AnimateCirclePosition::AnimateCirclePosition(
_In_ XMFLOAT3 center,
_In_ XMFLOAT3 startPosition,
_In_ XMFLOAT3 planeNormal,
_In_ float duration,
_In_ bool continuous,
_In_ bool clockwise)
{
m_center = center;
m_planeNormal = planeNormal;
m_startPosition = startPosition;
m_duration = duration;
m_continuous = continuous;
m_clockwise = clockwise;

XMVECTOR coordX = XMLoadFloat3(&m_startPosition) - XMLoadFloat3(&m_center);
m_radius = XMVectorGetX(XMVector3Length(coordX));

// XMVector3Normalize returns the normalized vector, so the result must be assigned.
coordX = XMVector3Normalize(coordX);
XMVECTOR coordZ = XMLoadFloat3(&m_planeNormal);
coordZ = XMVector3Normalize(coordZ);

XMVECTOR coordY;
if (m_clockwise)
{
coordY = XMVector3Cross(coordZ, coordX);
}
else
{
coordY = XMVector3Cross(coordX, coordZ);
}

XMVECTOR vectorX = XMVectorSet(1.0f, 0.0f, 0.0f, 1.0f);
XMVECTOR vectorY = XMVectorSet(0.0f, 1.0f, 0.0f, 1.0f);
XMMATRIX mat1 = XMMatrixIdentity();
XMMATRIX mat2 = XMMatrixIdentity();

if (!XMVector3Equal(coordX, vectorX))
{
float angle;
angle = XMVectorGetX(
XMVector3AngleBetweenVectors(vectorX, coordX)
);
if ((angle * angle) > 0.025)
{
XMVECTOR axis1 = XMVector3Cross(vectorX, coordX);

mat1 = XMMatrixRotationAxis(axis1, angle);
vectorY = XMVector3TransformCoord(vectorY, mat1);
}
}
if (!XMVector3Equal(vectorY, coordY))
{
float angle;
angle = XMVectorGetX(
XMVector3AngleBetweenVectors(vectorY, coordY)
);
if ((angle * angle) > 0.025)
{
XMVECTOR axis2 = XMVector3Cross(vectorY, coordY);
mat2 = XMMatrixRotationAxis(axis2, angle);
}
}
XMStoreFloat4x4(
&m_rotationMatrix,
mat1 *
mat2 *
XMMatrixTranslation(m_center.x, m_center.y, m_center.z)
);
}

//----------------------------------------------------------------------

XMFLOAT3 AnimateCirclePosition::Evaluate(_In_ float t)
{
if (t <= m_startTime)
{
return m_startPosition;
}

if ((t >= (m_startTime + m_duration)) && !m_continuous)
{
return m_startPosition;
}

float startTime = m_startTime;
if (m_continuous)
{
// For continuous operation, move the start time forward to
// eliminate previous iterations.
startTime += ((int)((t - m_startTime) / m_duration)) * m_duration;
}

float u = (t - startTime) / m_duration * XM_2PI;

XMFLOAT3 currentPosition;
currentPosition.x = m_radius * cos(u);
currentPosition.y = m_radius * sin(u);
currentPosition.z = 0.0f;

XMStoreFloat3(
&currentPosition,
XMVector3TransformCoord(
XMLoadFloat3(&currentPosition),
XMLoadFloat4x4(&m_rotationMatrix)
)
);

return currentPosition;
}

//----------------------------------------------------------------------
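
One detail worth tracing in the Evaluate methods above is the continuous-mode wrap-around. With m_startTime = 0 and m_duration = 10, a query at t = 25 advances startTime by ((int)(25 / 10)) * 10 = 20, so u = (25 - 20) / 10 = 0.5: the path is sampled halfway through its third cycle instead of extrapolating past the end.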

BasicLoader.h

//// THIS CODE AND INFORMATION IS PROVIDED "AS IS" WITHOUT WARRANTY OF
//// ANY KIND, EITHER EXPRESSED OR IMPLIED, INCLUDING BUT NOT LIMITED TO
//// THE IMPLIED WARRANTIES OF MERCHANTABILITY AND/OR FITNESS FOR A
//// PARTICULAR PURPOSE.
////
//// Copyright (c) Microsoft Corporation. All rights reserved

#pragma once

#include "BasicReaderWriter.h"

// A simple loader class that provides support for loading shaders, textures,
// and meshes from files on disk. Provides synchronous and asynchronous methods.
ref class BasicLoader
{
internal:
BasicLoader(
_In_ ID3D11Device* d3dDevice,
_In_opt_ IWICImagingFactory2* wicFactory = nullptr
);

void LoadTexture(
_In_ Platform::String^ filename,
_Out_opt_ ID3D11Texture2D** texture,
_Out_opt_ ID3D11ShaderResourceView** textureView
);

concurrency::task<void> LoadTextureAsync(
_In_ Platform::String^ filename,
_Out_opt_ ID3D11Texture2D** texture,
_Out_opt_ ID3D11ShaderResourceView** textureView
);

void LoadShader(
_In_ Platform::String^ filename,
_In_reads_opt_(layoutDescNumElements) D3D11_INPUT_ELEMENT_DESC layoutDesc[],
_In_ uint32 layoutDescNumElements,
_Out_ ID3D11VertexShader** shader,
_Out_opt_ ID3D11InputLayout** layout
);

concurrency::task<void> LoadShaderAsync(
_In_ Platform::String^ filename,
_In_reads_opt_(layoutDescNumElements) D3D11_INPUT_ELEMENT_DESC layoutDesc[],
_In_ uint32 layoutDescNumElements,
_Out_ ID3D11VertexShader** shader,
_Out_opt_ ID3D11InputLayout** layout
);

void LoadShader(
_In_ Platform::String^ filename,
_Out_ ID3D11PixelShader** shader
);

concurrency::task<void> LoadShaderAsync(
_In_ Platform::String^ filename,
_Out_ ID3D11PixelShader** shader
);

void LoadShader(
_In_ Platform::String^ filename,
_Out_ ID3D11ComputeShader** shader
);

concurrency::task<void> LoadShaderAsync(
_In_ Platform::String^ filename,
_Out_ ID3D11ComputeShader** shader
);

void LoadShader(
_In_ Platform::String^ filename,
_Out_ ID3D11GeometryShader** shader
);

concurrency::task<void> LoadShaderAsync(
_In_ Platform::String^ filename,
_Out_ ID3D11GeometryShader** shader
);

void LoadShader(
_In_ Platform::String^ filename,
_In_reads_opt_(numEntries) const D3D11_SO_DECLARATION_ENTRY* streamOutDeclaration,
_In_ uint32 numEntries,
_In_reads_opt_(numStrides) const uint32* bufferStrides,
_In_ uint32 numStrides,
_In_ uint32 rasterizedStream,
_Out_ ID3D11GeometryShader** shader
);

concurrency::task<void> LoadShaderAsync(
_In_ Platform::String^ filename,
_In_reads_opt_(numEntries) const D3D11_SO_DECLARATION_ENTRY* streamOutDeclaration,
_In_ uint32 numEntries,
_In_reads_opt_(numStrides) const uint32* bufferStrides,
_In_ uint32 numStrides,
_In_ uint32 rasterizedStream,
_Out_ ID3D11GeometryShader** shader
);

void LoadShader(
_In_ Platform::String^ filename,
_Out_ ID3D11HullShader** shader
);

concurrency::task<void> LoadShaderAsync(
_In_ Platform::String^ filename,
_Out_ ID3D11HullShader** shader
);
void LoadShader(
_In_ Platform::String^ filename,
_Out_ ID3D11DomainShader** shader
);

concurrency::task<void> LoadShaderAsync(
_In_ Platform::String^ filename,
_Out_ ID3D11DomainShader** shader
);

void LoadMesh(
_In_ Platform::String^ filename,
_Out_ ID3D11Buffer** vertexBuffer,
_Out_ ID3D11Buffer** indexBuffer,
_Out_opt_ uint32* vertexCount,
_Out_opt_ uint32* indexCount
);

concurrency::task<void> LoadMeshAsync(
_In_ Platform::String^ filename,
_Out_ ID3D11Buffer** vertexBuffer,
_Out_ ID3D11Buffer** indexBuffer,
_Out_opt_ uint32* vertexCount,
_Out_opt_ uint32* indexCount
);

private:
Microsoft::WRL::ComPtr<ID3D11Device> m_d3dDevice;
Microsoft::WRL::ComPtr<IWICImagingFactory2> m_wicFactory;
BasicReaderWriter^ m_basicReaderWriter;

template <class DeviceChildType>
inline void SetDebugName(
_In_ DeviceChildType* object,
_In_ Platform::String^ name
);

Platform::String^ GetExtension(
_In_ Platform::String^ filename
);

void CreateTexture(
_In_ bool decodeAsDDS,
_In_reads_bytes_(dataSize) byte* data,
_In_ uint32 dataSize,
_Out_opt_ ID3D11Texture2D** texture,
_Out_opt_ ID3D11ShaderResourceView** textureView,
_In_opt_ Platform::String^ debugName
);

void CreateInputLayout(
_In_reads_bytes_(bytecodeSize) byte* bytecode,
_In_ uint32 bytecodeSize,
_In_reads_opt_(layoutDescNumElements) D3D11_INPUT_ELEMENT_DESC* layoutDesc,
_In_ uint32 layoutDescNumElements,
_Out_ ID3D11InputLayout** layout
);

void CreateMesh(
_In_ byte* meshData,
_Out_ ID3D11Buffer** vertexBuffer,
_Out_ ID3D11Buffer** indexBuffer,
_Out_opt_ uint32* vertexCount,
_Out_opt_ uint32* indexCount,
_In_opt_ Platform::String^ debugName
);
};
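
Before the implementation, a sketch of how a renderer typically drives BasicLoader (the file names and member variables are illustrative; the out-parameters passed to the Async methods must remain valid until the returned tasks complete):

auto loader = ref new BasicLoader(m_d3dDevice.Get());

// Synchronous load, suitable for small assets during initialization:
loader->LoadTexture("seafloor.dds", &m_texture, &m_textureView);

// Asynchronous loads, joined so rendering starts only when both finish:
auto shaderTask = loader->LoadShaderAsync(
    "VertexShader.cso", nullptr, 0, &m_vertexShader, &m_inputLayout);
auto meshTask = loader->LoadMeshAsync(
    "sphere.vbo", &m_vertexBuffer, &m_indexBuffer, nullptr, nullptr);

(shaderTask && meshTask).then([this]()
{
    m_loadingComplete = true;
});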
BasicLoader.cpp

//// THIS CODE AND INFORMATION IS PROVIDED "AS IS" WITHOUT WARRANTY OF
//// ANY KIND, EITHER EXPRESSED OR IMPLIED, INCLUDING BUT NOT LIMITED TO
//// THE IMPLIED WARRANTIES OF MERCHANTABILITY AND/OR FITNESS FOR A
//// PARTICULAR PURPOSE.
////
//// Copyright (c) Microsoft Corporation. All rights reserved

#include "pch.h"
#include "BasicLoader.h"
#include "BasicShapes.h"
#include "DDSTextureLoader.h"
#include "DirectXSample.h"
#include <memory>

using namespace Microsoft::WRL;
using namespace Windows::Storage;
using namespace Windows::Storage::Streams;
using namespace Windows::Foundation;
using namespace Windows::ApplicationModel;
using namespace std;
using namespace concurrency;

BasicLoader::BasicLoader(
_In_ ID3D11Device* d3dDevice,
_In_opt_ IWICImagingFactory2* wicFactory
):
m_d3dDevice(d3dDevice),
m_wicFactory(wicFactory)
{
// Create a new BasicReaderWriter to do raw file I/O.
m_basicReaderWriter = ref new BasicReaderWriter();
}

template <class DeviceChildType>
inline void BasicLoader::SetDebugName(
_In_ DeviceChildType* object,
_In_ Platform::String^ name
)
{
#if defined(_DEBUG)
// Only assign debug names in debug builds.

char nameString[1024];
int nameStringLength = WideCharToMultiByte(
CP_ACP,
0,
name->Data(),
-1,
nameString,
1024,
nullptr,
nullptr
);

if (nameStringLength == 0)
{
char defaultNameString[] = "BasicLoaderObject";
DX::ThrowIfFailed(
object->SetPrivateData(
WKPDID_D3DDebugObjectName,
sizeof(defaultNameString) - 1,
defaultNameString
)
);
}
else
{
DX::ThrowIfFailed(
object->SetPrivateData(
WKPDID_D3DDebugObjectName,
nameStringLength - 1,
nameString
)
);
}
#endif
}

Platform::String^ BasicLoader::GetExtension(
_In_ Platform::String^ filename
)
{
int lastDotIndex = -1;
for (int i = filename->Length() - 1; i >= 0 && lastDotIndex == -1; i--)
{
if (*(filename->Data() + i) == '.')
{
lastDotIndex = i;
}
}
if (lastDotIndex != -1)
{
std::unique_ptr<wchar_t[]> extension(new wchar_t[filename->Length() - lastDotIndex]);
for (unsigned int i = 0; i < filename->Length() - lastDotIndex; i++)
{
extension[i] = tolower(*(filename->Data() + lastDotIndex + 1 + i));
}
return ref new Platform::String(extension.get());
}
return "";
}

void BasicLoader::CreateTexture(
_In_ bool decodeAsDDS,
_In_reads_bytes_(dataSize) byte* data,
_In_ uint32 dataSize,
_Out_opt_ ID3D11Texture2D** texture,
_Out_opt_ ID3D11ShaderResourceView** textureView,
_In_opt_ Platform::String^ debugName
)
{
ComPtr<ID3D11ShaderResourceView> shaderResourceView;
ComPtr<ID3D11Texture2D> texture2D;

if (decodeAsDDS)
{
ComPtr<ID3D11Resource> resource;

if (textureView == nullptr)
{
CreateDDSTextureFromMemory(
m_d3dDevice.Get(),
data,
dataSize,
&resource,
nullptr
);
}
else
{
CreateDDSTextureFromMemory(
m_d3dDevice.Get(),
data,
dataSize,
&resource,
&shaderResourceView
);
}

DX::ThrowIfFailed(
resource.As(&texture2D)
);
}
else
{
if (m_wicFactory.Get() == nullptr)
{
// A WIC factory object is required in order to load texture
// assets stored in non-DDS formats. If BasicLoader was not
// initialized with one, create one as needed.
DX::ThrowIfFailed(
CoCreateInstance(
CLSID_WICImagingFactory,
nullptr,
CLSCTX_INPROC_SERVER,
IID_PPV_ARGS(&m_wicFactory)
)
);
}

ComPtr<IWICStream> stream;
DX::ThrowIfFailed(
m_wicFactory->CreateStream(&stream)
);

DX::ThrowIfFailed(
stream->InitializeFromMemory(
data,
dataSize
)
);

ComPtr<IWICBitmapDecoder> bitmapDecoder;
DX::ThrowIfFailed(
m_wicFactory->CreateDecoderFromStream(
stream.Get(),
nullptr,
WICDecodeMetadataCacheOnDemand,
&bitmapDecoder
)
);

ComPtr<IWICBitmapFrameDecode> bitmapFrame;
DX::ThrowIfFailed(
bitmapDecoder->GetFrame(0, &bitmapFrame)
);

ComPtr<IWICFormatConverter> formatConverter;
DX::ThrowIfFailed(
m_wicFactory->CreateFormatConverter(&formatConverter)
);

DX::ThrowIfFailed(
formatConverter->Initialize(
bitmapFrame.Get(),
GUID_WICPixelFormat32bppPBGRA,
WICBitmapDitherTypeNone,
nullptr,
0.0,
WICBitmapPaletteTypeCustom
)
);

uint32 width;
uint32 height;
DX::ThrowIfFailed(
bitmapFrame->GetSize(&width, &height)
);

std::unique_ptr<byte[]> bitmapPixels(new byte[width * height * 4]);

DX::ThrowIfFailed(
formatConverter->CopyPixels(
nullptr,
width * 4,
width * height * 4,
bitmapPixels.get()
)
);

D3D11_SUBRESOURCE_DATA initialData;
ZeroMemory(&initialData, sizeof(initialData));
initialData.pSysMem = bitmapPixels.get();
initialData.SysMemPitch = width * 4;
initialData.SysMemSlicePitch = 0;

CD3D11_TEXTURE2D_DESC textureDesc(
DXGI_FORMAT_B8G8R8A8_UNORM,
width,
height,
1,
1
);

DX::ThrowIfFailed(
m_d3dDevice->CreateTexture2D(
&textureDesc,
&initialData,
&texture2D
)
);

if (textureView != nullptr)
{
CD3D11_SHADER_RESOURCE_VIEW_DESC shaderResourceViewDesc(
texture2D.Get(),
D3D11_SRV_DIMENSION_TEXTURE2D
);

DX::ThrowIfFailed(
m_d3dDevice->CreateShaderResourceView(
texture2D.Get(),
&shaderResourceViewDesc,
&shaderResourceView
)
);
}
}

SetDebugName(texture2D.Get(), debugName);

if (texture != nullptr)
{
*texture = texture2D.Detach();
}
if (textureView != nullptr)
{
*textureView = shaderResourceView.Detach();
}
}

void BasicLoader::CreateInputLayout(
_In_reads_bytes_(bytecodeSize) byte* bytecode,
_In_ uint32 bytecodeSize,
_In_reads_opt_(layoutDescNumElements) D3D11_INPUT_ELEMENT_DESC* layoutDesc,
_In_ uint32 layoutDescNumElements,
_Out_ ID3D11InputLayout** layout
)
{
if (layoutDesc == nullptr)
{
// If no input layout is specified, use the BasicVertex layout.
const D3D11_INPUT_ELEMENT_DESC basicVertexLayoutDesc[] =
{
{ "POSITION", 0, DXGI_FORMAT_R32G32B32_FLOAT, 0, 0, D3D11_INPUT_PER_VERTEX_DATA, 0 },
{ "NORMAL", 0, DXGI_FORMAT_R32G32B32_FLOAT, 0, 12, D3D11_INPUT_PER_VERTEX_DATA, 0 },
{ "TEXCOORD", 0, DXGI_FORMAT_R32G32_FLOAT, 0, 24, D3D11_INPUT_PER_VERTEX_DATA, 0 },
};

DX::ThrowIfFailed(
m_d3dDevice->CreateInputLayout(
basicVertexLayoutDesc,
ARRAYSIZE(basicVertexLayoutDesc),
bytecode,
bytecodeSize,
layout
)
);
}
else
{
DX::ThrowIfFailed(
m_d3dDevice->CreateInputLayout(
layoutDesc,
layoutDescNumElements,
bytecode,
bytecodeSize,
layout
)
);
}
}

void BasicLoader::CreateMesh(
_In_ byte* meshData,
_Out_ ID3D11Buffer** vertexBuffer,
_Out_ ID3D11Buffer** indexBuffer,
_Out_opt_ uint32* vertexCount,
_Out_opt_ uint32* indexCount,
_In_opt_ Platform::String^ debugName
)
{
// The first 4 bytes of the BasicMesh format define the number of vertices in the mesh.
uint32 numVertices = *reinterpret_cast<uint32*>(meshData);

// The following 4 bytes define the number of indices in the mesh.
uint32 numIndices = *reinterpret_cast<uint32*>(meshData + sizeof(uint32));

// The next segment of the BasicMesh format contains the vertices of the mesh.
BasicVertex* vertices = reinterpret_cast<BasicVertex*>(meshData + sizeof(uint32) * 2);

// The last segment of the BasicMesh format contains the indices of the mesh.
uint16* indices = reinterpret_cast<uint16*>(meshData + sizeof(uint32) * 2 + sizeof(BasicVertex) * numVertices);

// Create the vertex and index buffers with the mesh data.

D3D11_SUBRESOURCE_DATA vertexBufferData = {0};
vertexBufferData.pSysMem = vertices;
vertexBufferData.SysMemPitch = 0;
vertexBufferData.SysMemSlicePitch = 0;
CD3D11_BUFFER_DESC vertexBufferDesc(numVertices * sizeof(BasicVertex), D3D11_BIND_VERTEX_BUFFER);
DX::ThrowIfFailed(
m_d3dDevice->CreateBuffer(
&vertexBufferDesc,
&vertexBufferData,
vertexBuffer
)
);

D3D11_SUBRESOURCE_DATA indexBufferData = {0};
indexBufferData.pSysMem = indices;
indexBufferData.SysMemPitch = 0;
indexBufferData.SysMemSlicePitch = 0;
CD3D11_BUFFER_DESC indexBufferDesc(numIndices * sizeof(uint16), D3D11_BIND_INDEX_BUFFER);
DX::ThrowIfFailed(
m_d3dDevice->CreateBuffer(
&indexBufferDesc,
&indexBufferData,
indexBuffer
)
);

SetDebugName(*vertexBuffer, Platform::String::Concat(debugName, "_VertexBuffer"));
SetDebugName(*indexBuffer, Platform::String::Concat(debugName, "_IndexBuffer"));

if (vertexCount != nullptr)
{
*vertexCount = numVertices;
}
if (indexCount != nullptr)
{
*indexCount = numIndices;
}
}

void BasicLoader::LoadTexture(
_In_ Platform::String^ filename,
_Out_opt_ ID3D11Texture2D** texture,
_Out_opt_ ID3D11ShaderResourceView** textureView
)
{
Platform::Array<byte>^ textureData = m_basicReaderWriter->ReadData(filename);

CreateTexture(
GetExtension(filename) == "dds",
textureData->Data,
textureData->Length,
texture,
textureView,
filename
);
}

task<void> BasicLoader::LoadTextureAsync(
_In_ Platform::String^ filename,
_Out_opt_ ID3D11Texture2D** texture,
_Out_opt_ ID3D11ShaderResourceView** textureView
)
{
return m_basicReaderWriter->ReadDataAsync(filename).then([=](const Platform::Array<byte>^ textureData)
{
CreateTexture(
GetExtension(filename) == "dds",
textureData->Data,
textureData->Length,
texture,
textureView,
filename
);
});
}

void BasicLoader::LoadShader(
_In_ Platform::String^ filename,
_In_reads_opt_(layoutDescNumElements) D3D11_INPUT_ELEMENT_DESC layoutDesc[],
_In_ uint32 layoutDescNumElements,
_Out_ ID3D11VertexShader** shader,
_Out_opt_ ID3D11InputLayout** layout
)
{
Platform::Array<byte>^ bytecode = m_basicReaderWriter->ReadData(filename);

DX::ThrowIfFailed(
m_d3dDevice->CreateVertexShader(
bytecode->Data,
bytecode->Length,
nullptr,
shader
)
);

SetDebugName(*shader, filename);

if (layout != nullptr)
{
CreateInputLayout(
bytecode->Data,
bytecode->Length,
layoutDesc,
layoutDescNumElements,
layout
);

SetDebugName(*layout, filename);
}
}

task<void> BasicLoader::LoadShaderAsync(
_In_ Platform::String^ filename,
_In_reads_opt_(layoutDescNumElements) D3D11_INPUT_ELEMENT_DESC layoutDesc[],
_In_ uint32 layoutDescNumElements,
_Out_ ID3D11VertexShader** shader,
_Out_opt_ ID3D11InputLayout** layout
)
{
// This method assumes that the lifetime of input arguments may be shorter
// than the duration of this task. In order to ensure accurate results, a
// copy of all arguments passed by pointer must be made. The method then
// ensures that the lifetime of the copied data exceeds that of the task.

// Create copies of the layoutDesc array as well as the SemanticName strings,
// both of which are pointers to data whose lifetimes may be shorter than that
// of this method's task.
shared_ptr<vector<D3D11_INPUT_ELEMENT_DESC>> layoutDescCopy;
shared_ptr<vector<string>> layoutDescSemanticNamesCopy;
if (layoutDesc != nullptr)
{
layoutDescCopy.reset(
new vector<D3D11_INPUT_ELEMENT_DESC>(
layoutDesc,
layoutDesc + layoutDescNumElements
)
);

layoutDescSemanticNamesCopy.reset(
new vector<string>(layoutDescNumElements)
);

for (uint32 i = 0; i < layoutDescNumElements; i++)
{
layoutDescSemanticNamesCopy->at(i).assign(layoutDesc[i].SemanticName);
}
}

return m_basicReaderWriter->ReadDataAsync(filename).then([=](const Platform::Array<byte>^ bytecode)
{
DX::ThrowIfFailed(
m_d3dDevice->CreateVertexShader(
bytecode->Data,
bytecode->Length,
nullptr,
shader
)
);

SetDebugName(*shader, filename);

if (layout != nullptr)
{
if (layoutDesc != nullptr)
{
// Reassign the SemanticName elements of the layoutDesc array copy to point
// to the corresponding copied strings. Performing the assignment inside the
// lambda body ensures that the lambda will take a reference to the shared_ptr
// that holds the data. This will guarantee that the data is still valid when
// CreateInputLayout is called.
for (uint32 i = 0; i < layoutDescNumElements; i++)
{
layoutDescCopy->at(i).SemanticName = layoutDescSemanticNamesCopy->at(i).c_str();
}
}

CreateInputLayout(
bytecode->Data,
bytecode->Length,
layoutDesc == nullptr ? nullptr : layoutDescCopy->data(),
layoutDescNumElements,
layout
);

SetDebugName(*layout, filename);
}
});
}

void BasicLoader::LoadShader(
_In_ Platform::String^ filename,
_Out_ ID3D11PixelShader** shader
)
{
Platform::Array<byte>^ bytecode = m_basicReaderWriter->ReadData(filename);

DX::ThrowIfFailed(
m_d3dDevice->CreatePixelShader(
bytecode->Data,
bytecode->Length,
nullptr,
shader
)
);

SetDebugName(*shader, filename);
}

task<void> BasicLoader::LoadShaderAsync(
_In_ Platform::String^ filename,
_Out_ ID3D11PixelShader** shader
)
{
return m_basicReaderWriter->ReadDataAsync(filename).then([=](const Platform::Array<byte>^ bytecode)
{
DX::ThrowIfFailed(
m_d3dDevice->CreatePixelShader(
bytecode->Data,
bytecode->Length,
nullptr,
shader
)
);

SetDebugName(*shader, filename);
});
}

void BasicLoader::LoadShader(
_In_ Platform::String^ filename,
_Out_ ID3D11ComputeShader** shader
)
{
Platform::Array<byte>^ bytecode = m_basicReaderWriter->ReadData(filename);

DX::ThrowIfFailed(
m_d3dDevice->CreateComputeShader(
bytecode->Data,
bytecode->Length,
nullptr,
shader
)
);

SetDebugName(*shader, filename);
}

task<void> BasicLoader::LoadShaderAsync(
_In_ Platform::String^ filename,
_Out_ ID3D11ComputeShader** shader
)
{
return m_basicReaderWriter->ReadDataAsync(filename).then([=](const Platform::Array<byte>^ bytecode)
{
DX::ThrowIfFailed(
m_d3dDevice->CreateComputeShader(
bytecode->Data,
bytecode->Length,
nullptr,
shader
)
);

SetDebugName(*shader, filename);
});
}

void BasicLoader::LoadShader(
_In_ Platform::String^ filename,
_Out_ ID3D11GeometryShader** shader
)
{
Platform::Array<byte>^ bytecode = m_basicReaderWriter->ReadData(filename);

DX::ThrowIfFailed(
m_d3dDevice->CreateGeometryShader(
bytecode->Data,
bytecode->Length,
nullptr,
shader
)
);

SetDebugName(*shader, filename);
}

task<void> BasicLoader::LoadShaderAsync(
_In_ Platform::String^ filename,
_Out_ ID3D11GeometryShader** shader
)
{
return m_basicReaderWriter->ReadDataAsync(filename).then([=](const Platform::Array<byte>^ bytecode)
{
DX::ThrowIfFailed(
m_d3dDevice->CreateGeometryShader(
bytecode->Data,
bytecode->Length,
nullptr,
shader
)
);

SetDebugName(*shader, filename);
});
}

void BasicLoader::LoadShader(
_In_ Platform::String^ filename,
_In_reads_opt_(numEntries) const D3D11_SO_DECLARATION_ENTRY* streamOutDeclaration,
_In_ uint32 numEntries,
_In_reads_opt_(numStrides) const uint32* bufferStrides,
_In_ uint32 numStrides,
_In_ uint32 rasterizedStream,
_Out_ ID3D11GeometryShader** shader
)
{
Platform::Array<byte>^ bytecode = m_basicReaderWriter->ReadData(filename);

DX::ThrowIfFailed(
m_d3dDevice->CreateGeometryShaderWithStreamOutput(
bytecode->Data,
bytecode->Length,
streamOutDeclaration,
numEntries,
bufferStrides,
numStrides,
rasterizedStream,
nullptr,
shader
)
);

SetDebugName(*shader, filename);
}

task<void> BasicLoader::LoadShaderAsync(
_In_ Platform::String^ filename,
_In_reads_opt_(numEntries) const D3D11_SO_DECLARATION_ENTRY* streamOutDeclaration,
_In_ uint32 numEntries,
_In_reads_opt_(numStrides) const uint32* bufferStrides,
_In_ uint32 numStrides,
_In_ uint32 rasterizedStream,
_Out_ ID3D11GeometryShader** shader
)
{
// This method assumes that the lifetime of input arguments may be shorter
// than the duration of this task. In order to ensure accurate results, a
// copy of all arguments passed by pointer must be made. The method then
// ensures that the lifetime of the copied data exceeds that of the task.

// Create copies of the streamOutDeclaration array as well as the SemanticName
// strings, both of which are pointers to data whose lifetimes may be shorter
// than that of this method's task.
shared_ptr<vector<D3D11_SO_DECLARATION_ENTRY>> streamOutDeclarationCopy;
shared_ptr<vector<string>> streamOutDeclarationSemanticNamesCopy;
if (streamOutDeclaration != nullptr)
{
streamOutDeclarationCopy.reset(
new vector<D3D11_SO_DECLARATION_ENTRY>(
streamOutDeclaration,
streamOutDeclaration + numEntries
)
);

streamOutDeclarationSemanticNamesCopy.reset(
new vector<string>(numEntries)
);

for (uint32 i = 0; i < numEntries; i++)
{
streamOutDeclarationSemanticNamesCopy->at(i).assign(streamOutDeclaration[i].SemanticName);
}
}

// Create a copy of the bufferStrides array, which is a pointer to data
// whose lifetime may be shorter than that of this method's task.
shared_ptr<vector<uint32>> bufferStridesCopy;
if (bufferStrides != nullptr)
{
bufferStridesCopy.reset(
new vector<uint32>(
bufferStrides,
bufferStrides + numStrides
)
);
}

return m_basicReaderWriter->ReadDataAsync(filename).then([=](const Platform::Array<byte>^ bytecode)
{
if (streamOutDeclaration != nullptr)
{
// Reassign the SemanticName elements of the streamOutDeclaration array copy to
// point to the corresponding copied strings. Performing the assignment inside the
// lambda body ensures that the lambda will take a reference to the shared_ptr
// that holds the data. This will guarantee that the data is still valid when
// CreateGeometryShaderWithStreamOutput is called.
for (uint32 i = 0; i < numEntries; i++)
{
streamOutDeclarationCopy->at(i).SemanticName = streamOutDeclarationSemanticNamesCopy->at(i).c_str();
}
}

DX::ThrowIfFailed(
m_d3dDevice->CreateGeometryShaderWithStreamOutput(
bytecode->Data,
bytecode->Length,
streamOutDeclaration == nullptr ? nullptr : streamOutDeclarationCopy->data(),
numEntries,
bufferStrides == nullptr ? nullptr : bufferStridesCopy->data(),
numStrides,
rasterizedStream,
nullptr,
shader
)
);

SetDebugName(*shader, filename);
});
}

void BasicLoader::LoadShader(
_In_ Platform::String^ filename,
_Out_ ID3D11HullShader** shader
)
{
Platform::Array<byte>^ bytecode = m_basicReaderWriter->ReadData(filename);

DX::ThrowIfFailed(
m_d3dDevice->CreateHullShader(
bytecode->Data,
bytecode->Length,
nullptr,
shader
)
);

SetDebugName(*shader, filename);
}

task<void> BasicLoader::LoadShaderAsync(
_In_ Platform::String^ filename,
_Out_ ID3D11HullShader** shader
)
{
return m_basicReaderWriter->ReadDataAsync(filename).then([=](const Platform::Array<byte>^ bytecode)
{
DX::ThrowIfFailed(
m_d3dDevice->CreateHullShader(
bytecode->Data,
bytecode->Length,
nullptr,
shader
)
);

SetDebugName(*shader, filename);
});
}

void BasicLoader::LoadShader(
_In_ Platform::String^ filename,
_Out_ ID3D11DomainShader** shader
)
{
Platform::Array<byte>^ bytecode = m_basicReaderWriter->ReadData(filename);

DX::ThrowIfFailed(
m_d3dDevice->CreateDomainShader(
bytecode->Data,
bytecode->Length,
nullptr,
shader
)
);

SetDebugName(*shader, filename);
}

task<void> BasicLoader::LoadShaderAsync(
_In_ Platform::String^ filename,
_Out_ ID3D11DomainShader** shader
)
{
return m_basicReaderWriter->ReadDataAsync(filename).then([=](const Platform::Array<byte>^ bytecode)
{
DX::ThrowIfFailed(
m_d3dDevice->CreateDomainShader(
bytecode->Data,
bytecode->Length,
nullptr,
shader
)
);

SetDebugName(*shader, filename);
});
}

void BasicLoader::LoadMesh(
_In_ Platform::String^ filename,
_Out_ ID3D11Buffer** vertexBuffer,
_Out_ ID3D11Buffer** indexBuffer,
_Out_opt_ uint32* vertexCount,
_Out_opt_ uint32* indexCount
)
{
Platform::Array<byte>^ meshData = m_basicReaderWriter->ReadData(filename);

CreateMesh(
meshData->Data,
vertexBuffer,
indexBuffer,
vertexCount,
indexCount,
filename
);
}

task<void> BasicLoader::LoadMeshAsync(
_In_ Platform::String^ filename,
_Out_ ID3D11Buffer** vertexBuffer,
_Out_ ID3D11Buffer** indexBuffer,
_Out_opt_ uint32* vertexCount,
_Out_opt_ uint32* indexCount
)
{
return m_basicReaderWriter->ReadDataAsync(filename).then([=](const Platform::Array<byte>^ meshData)
{
CreateMesh(
meshData->Data,
vertexBuffer,
indexBuffer,
vertexCount,
indexCount,
filename
);
});
}
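
A pattern that recurs throughout the asynchronous methods above deserves emphasis: any caller-owned pointer captured by a .then continuation is deep-copied into shared_ptr-managed storage before the task is created, because the caller's buffers may be gone by the time the continuation runs. The idiom reduced to its essentials (ReadSomethingAsync and the int payload are placeholders, not part of the sample):

concurrency::task<void> ProcessAsync(_In_reads_(count) const int* values, uint32 count)
{
    // Copy the caller-owned data immediately, while the pointer is still valid.
    auto valuesCopy = std::make_shared<std::vector<int>>(values, values + count);

    // The lambda captures the shared_ptr by value, so the copied data lives
    // at least as long as the continuation, whenever it runs.
    return ReadSomethingAsync().then([valuesCopy]()
    {
        for (int value : *valuesCopy)
        {
            // ... consume value ...
        }
    });
}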

MeshObject.h
//// THIS CODE AND INFORMATION IS PROVIDED "AS IS" WITHOUT WARRANTY OF
//// ANY KIND, EITHER EXPRESSED OR IMPLIED, INCLUDING BUT NOT LIMITED TO
//// THE IMPLIED WARRANTIES OF MERCHANTABILITY AND/OR FITNESS FOR A
//// PARTICULAR PURPOSE.
////
//// Copyright (c) Microsoft Corporation. All rights reserved

#pragma once

// MeshObject:
// This class is the generic (abstract) representation of a D3D11 indexed triangle
// list. Each of the derived classes is just the constructor for the specific
// geometry primitive. This abstract class does not place any requirements on
// the format of the geometry directly.
// The primary method of the MeshObject is Render. The default implementation
// just sets the IndexBuffer, VertexBuffer, and topology to a TriangleList and
// makes a DrawIndexed call on the context. It assumes all other state has
// already been set on the context.

ref class MeshObject abstract
{
internal:
MeshObject();

virtual void Render(_In_ ID3D11DeviceContext *context);

protected private:
Microsoft::WRL::ComPtr<ID3D11Buffer> m_vertexBuffer;
Microsoft::WRL::ComPtr<ID3D11Buffer> m_indexBuffer;
int m_vertexCount;
int m_indexCount;
};

MeshObject.cpp
//// THIS CODE AND INFORMATION IS PROVIDED "AS IS" WITHOUT WARRANTY OF
//// ANY KIND, EITHER EXPRESSED OR IMPLIED, INCLUDING BUT NOT LIMITED TO
//// THE IMPLIED WARRANTIES OF MERCHANTABILITY AND/OR FITNESS FOR A
//// PARTICULAR PURPOSE.
////
//// Copyright (c) Microsoft Corporation. All rights reserved

#include "pch.h"
#include "MeshObject.h"
#include "DirectXSample.h"
#include "ConstantBuffers.h"

using namespace Microsoft::WRL;
using namespace DirectX;

MeshObject::MeshObject():
m_vertexCount(0),
m_indexCount(0)
{
}

//--------------------------------------------------------------------------------

void MeshObject::Render(_In_ ID3D11DeviceContext *context)
{
uint32 stride = sizeof(PNTVertex);
uint32 offset = 0;

context->IASetVertexBuffers(0, 1, m_vertexBuffer.GetAddressOf(), &stride, &offset);
context->IASetIndexBuffer(m_indexBuffer.Get(), DXGI_FORMAT_R16_UINT, 0);
context->IASetPrimitiveTopology(D3D11_PRIMITIVE_TOPOLOGY_TRIANGLELIST);
context->DrawIndexed(m_indexCount, 0, 0);
}

//--------------------------------------------------------------------------------

SphereMesh.h

//// THIS CODE AND INFORMATION IS PROVIDED "AS IS" WITHOUT WARRANTY OF
//// ANY KIND, EITHER EXPRESSED OR IMPLIED, INCLUDING BUT NOT LIMITED TO
//// THE IMPLIED WARRANTIES OF MERCHANTABILITY AND/OR FITNESS FOR A
//// PARTICULAR PURPOSE.
////
//// Copyright (c) Microsoft Corporation. All rights reserved

#pragma once

// SphereMesh:
// This class derives from MeshObject and creates an ID3D11Buffer of
// vertices and indices to represent a canonical sphere that is
// positioned at the origin with a radius of 1.0.

#include "MeshObject.h"

ref class SphereMesh: public MeshObject
{
internal:
SphereMesh(_In_ ID3D11Device *device, uint32 segments);
};

SphereMesh.cpp

//// THIS CODE AND INFORMATION IS PROVIDED "AS IS" WITHOUT WARRANTY OF
//// ANY KIND, EITHER EXPRESSED OR IMPLIED, INCLUDING BUT NOT LIMITED TO
//// THE IMPLIED WARRANTIES OF MERCHANTABILITY AND/OR FITNESS FOR A
//// PARTICULAR PURPOSE.
////
//// Copyright (c) Microsoft Corporation. All rights reserved

#include "pch.h"
#include "SphereMesh.h"
#include "DirectXSample.h"
#include "ConstantBuffers.h"

using namespace Microsoft::WRL;
using namespace DirectX;

SphereMesh::SphereMesh(_In_ ID3D11Device *device, uint32 segments)
{
D3D11_BUFFER_DESC bd = {0};
D3D11_SUBRESOURCE_DATA initData = {0};

uint32 slices = segments / 2;
uint32 numVertices = (slices + 1) * (segments + 1) + 1;
uint32 numIndices = slices * segments * 3 * 2;

std::vector<PNTVertex> point(numVertices);
std::vector<uint16> index(numIndices);

// To make the texture look right on the top and bottom of the sphere,
// each slice will have 'segments + 1' vertices. The top and bottom
// vertices will all be coincident, but have different U texture coordinates.
uint32 p = 0;
for (uint32 a = 0; a <= slices; a++)
{
float angle1 = static_cast<float>(a) / static_cast<float>(slices) * XM_PI;
float z = cos(angle1);
float r = sin(angle1);
for (uint32 b = 0; b <= segments; b++)
{
float angle2 = static_cast<float>(b) / static_cast<float>(segments) * XM_2PI;
point[p].position = XMFLOAT3(r * cos(angle2), r * sin(angle2), z);
point[p].normal = point[p].position;
point[p].textureCoordinate = XMFLOAT2((1.0f-z) / 2.0f, static_cast<float>(b) / static_cast<float>(segments));
p++;
}
}
m_vertexCount = p;

p = 0;
for (uint16 a = 0; a < slices; a++)
{
uint16 p1 = a * (segments + 1);
uint16 p2 = (a + 1) * (segments + 1);

// Generate two triangles for each segment around the slice.
for (uint16 b = 0; b < segments; b++)
{
if (a < (slices - 1))
{
// For all but the bottom slice, add the triangle with one
// vertex in the a slice and two vertices in the a + 1 slice.
// Skip it for the bottom slice since the triangle would be
// degenerate as all the vertices in the bottom slice are coincident.
index[p] = b + p1;
index[p+1] = b + p2;
index[p+2] = b + p2 + 1;
p = p + 3;
}
if (a > 0)
{
// For all but the top slice, add the triangle with two
// vertices in the a slice and one vertex in the a + 1 slice.
// Skip it for the top slice since the triangle would be
// degenerate as all the vertices in the top slice are coincident.
index[p] = b + p1;
index[p+1] = b + p2 + 1;
index[p+2] = b + p1 + 1;
p = p + 3;
}
}
}
m_indexCount = p;

bd.Usage = D3D11_USAGE_DEFAULT;
bd.ByteWidth = sizeof(PNTVertex) * m_vertexCount;
bd.BindFlags = D3D11_BIND_VERTEX_BUFFER;
bd.CPUAccessFlags = 0;
initData.pSysMem = point.data();
DX::ThrowIfFailed(
device->CreateBuffer(&bd, &initData, &m_vertexBuffer)
);

bd.Usage = D3D11_USAGE_DEFAULT;
bd.ByteWidth = sizeof(uint16) * m_indexCount;
bd.BindFlags = D3D11_BIND_INDEX_BUFFER;
bd.CPUAccessFlags = 0;
initData.pSysMem = index.data();
DX::ThrowIfFailed(
device->CreateBuffer(&bd, &initData, &m_indexBuffer)
);
}
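
As a check on the buffer sizing: with segments = 16 (so slices = 8), numVertices allocates 9 * 17 + 1 = 154 entries, of which the vertex loop writes 9 * 17 = 153, and numIndices allocates 8 * 16 * 6 = 768 entries. Because the degenerate cap triangles are skipped, the index loop emits 16 * (2 * 8 - 2) = 224 triangles, or 672 indices. The allocations are upper bounds; m_vertexCount and m_indexCount record what was actually written, and those counts are what size the D3D buffers.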

CylinderMesh.h

//// THIS CODE AND INFORMATION IS PROVIDED "AS IS" WITHOUT WARRANTY OF
//// ANY KIND, EITHER EXPRESSED OR IMPLIED, INCLUDING BUT NOT LIMITED TO
//// THE IMPLIED WARRANTIES OF MERCHANTABILITY AND/OR FITNESS FOR A
//// PARTICULAR PURPOSE.
////
//// Copyright (c) Microsoft Corporation. All rights reserved

#pragma once

// CylinderMesh:
// This class derives from MeshObject and creates an ID3D11Buffer of
// vertices and indices to represent a canonical cylinder (capped at
// both ends) that is positioned at the origin with a radius of 1.0,
// a height of 1.0 and with its axis in the +Z direction.

#include "MeshObject.h"

ref class CylinderMesh: public MeshObject
{
internal:
CylinderMesh(_In_ ID3D11Device *device, uint32 segments);
};

CylinderMesh.cpp

//// THIS CODE AND INFORMATION IS PROVIDED "AS IS" WITHOUT WARRANTY OF
//// ANY KIND, EITHER EXPRESSED OR IMPLIED, INCLUDING BUT NOT LIMITED TO
//// THE IMPLIED WARRANTIES OF MERCHANTABILITY AND/OR FITNESS FOR A
//// PARTICULAR PURPOSE.
////
//// Copyright (c) Microsoft Corporation. All rights reserved

#include "pch.h"
#include "CylinderMesh.h"
#include "CylinderMesh.h"
#include "DirectXSample.h"
#include "ConstantBuffers.h"

using namespace Microsoft::WRL;
using namespace DirectX;

CylinderMesh::CylinderMesh(_In_ ID3D11Device *device, uint32 segments)
{
D3D11_BUFFER_DESC bd = {0};
D3D11_SUBRESOURCE_DATA initData = {0};

uint32 numVertices = 6 * (segments + 1) + 1;
uint32 numIndices = 3 * segments * 3 * 2;

std::vector<PNTVertex> point(numVertices);
std::vector<uint16> index(numIndices);

uint32 p = 0;
// Top center point (multiple points for texture coordinates).
for (uint32 a = 0; a <= segments; a++)
{
point[p].position = XMFLOAT3(0.0f, 0.0f, 1.0f);
point[p].normal = XMFLOAT3(0.0f, 0.0f, 1.0f);
point[p].textureCoordinate = XMFLOAT2(static_cast<float>(a) / static_cast<float>(segments), 0.0f);
p++;
}
// Top edge of cylinder: Normals point up for lighting of top surface.
for (uint32 a = 0; a <= segments; a++)
{
float angle = static_cast<float>(a) / static_cast<float>(segments) * XM_2PI;
point[p].position = XMFLOAT3(cos(angle), sin(angle), 1.0f);
point[p].normal = XMFLOAT3(0.0f, 0.0f, 1.0f);
point[p].textureCoordinate = XMFLOAT2(static_cast<float>(a) / static_cast<float>(segments), 0.0f);
p++;
}
// Top edge of cylinder: Normals point out for lighting of the side surface.
for (uint32 a = 0; a <= segments; a++)
{
float angle = static_cast<float>(a) / static_cast<float>(segments) * XM_2PI;
point[p].position = XMFLOAT3(cos(angle), sin(angle), 1.0f);
point[p].normal = XMFLOAT3(cos(angle), sin(angle), 0.0f);
point[p].textureCoordinate = XMFLOAT2(static_cast<float>(a) / static_cast<float>(segments), 0.0f);
p++;
}
// Bottom edge of cylinder: Normals point out for lighting of the side surface.
for (uint32 a = 0; a <= segments; a++)
{
float angle = static_cast<float>(a) / static_cast<float>(segments) * XM_2PI;
point[p].position = XMFLOAT3(cos(angle), sin(angle), 0.0f);
point[p].normal = XMFLOAT3(cos(angle), sin(angle), 0.0f);
point[p].textureCoordinate = XMFLOAT2(static_cast<float>(a) / static_cast<float>(segments), 1.0f);
p++;
}
// Bottom edge of cylinder: Normals point down for lighting of the bottom surface.
for (uint32 a = 0; a <= segments; a++)
{
float angle = static_cast<float>(a) / static_cast<float>(segments) * XM_2PI;
point[p].position = XMFLOAT3(cos(angle), sin(angle), 0.0f);
point[p].normal = XMFLOAT3(0.0f, 0.0f, -1.0f);
point[p].textureCoordinate = XMFLOAT2(static_cast<float>(a) / static_cast<float>(segments), 1.0f);
p++;
}
// Bottom center of cylinder: Normals point down for lighting on the bottom surface.
for (uint32 a = 0; a <= segments; a++)
{
point[p].position = XMFLOAT3(0.0f, 0.0f, 0.0f);
point[p].normal = XMFLOAT3(0.0f, 0.0f, -1.0f);
point[p].textureCoordinate = XMFLOAT2(static_cast<float>(a) / static_cast<float>(segments), 1.0f);
p++;
}
m_vertexCount = p;

p = 0;
for (uint16 a = 0; a < 6; a += 2)
{
uint16 p1 = a*(segments + 1);
uint16 p2 = (a+1)*(segments + 1);
for (uint16 b = 0; b < segments; b++)
{
if (a < 4)
{
index[p] = b + p1;
index[p+1] = b + p2;
index[p+2] = b + p2 + 1;
p = p + 3;
}
if (a > 0)
{
index[p] = b + p1;
index[p+1] = b + p2 + 1;
index[p+2] = b + p1 + 1;
p = p + 3;
}
}
}
m_indexCount = p;

bd.Usage = D3D11_USAGE_DEFAULT;
bd.ByteWidth = sizeof(PNTVertex) * m_vertexCount;
bd.BindFlags = D3D11_BIND_VERTEX_BUFFER;
bd.CPUAccessFlags = 0;
initData.pSysMem = point.data();
DX::ThrowIfFailed(
device->CreateBuffer(&bd, &initData, &m_vertexBuffer)
);

bd.Usage = D3D11_USAGE_DEFAULT;
bd.ByteWidth = sizeof(uint16) * m_indexCount;
bd.BindFlags = D3D11_BIND_INDEX_BUFFER;
bd.CPUAccessFlags = 0;
initData.pSysMem = index.data();
DX::ThrowIfFailed(
device->CreateBuffer(&bd, &initData, &m_indexBuffer)
);
}

FaceMesh.h
//// THIS CODE AND INFORMATION IS PROVIDED "AS IS" WITHOUT WARRANTY OF
//// ANY KIND, EITHER EXPRESSED OR IMPLIED, INCLUDING BUT NOT LIMITED TO
//// THE IMPLIED WARRANTIES OF MERCHANTABILITY AND/OR FITNESS FOR A
//// PARTICULAR PURPOSE.
////
//// Copyright (c) Microsoft Corporation. All rights reserved

#pragma once

// FaceMesh:
// This class derives from MeshObject and creates an ID3D11Buffer of
// vertices and indices to represent a canonical face defined as a
// rectangle at the origin extending 1 unit in the +X and
// 1 unit in the +Y direction.
// The face is defined to be two sided, so it is visible from either
// side.

#include "MeshObject.h"

ref class FaceMesh: public MeshObject
{
internal:
FaceMesh(_In_ ID3D11Device *device);
};

FaceMesh.cpp
//// THIS CODE AND INFORMATION IS PROVIDED "AS IS" WITHOUT WARRANTY OF
//// ANY KIND, EITHER EXPRESSED OR IMPLIED, INCLUDING BUT NOT LIMITED TO
//// THE IMPLIED WARRANTIES OF MERCHANTABILITY AND/OR FITNESS FOR A
//// PARTICULAR PURPOSE.
////
//// Copyright (c) Microsoft Corporation. All rights reserved

#include "pch.h"
#include "FaceMesh.h"
#include "DirectXSample.h"
#include "ConstantBuffers.h"

using namespace Microsoft::WRL;
using namespace DirectX;

FaceMesh::FaceMesh(_In_ ID3D11Device *device)
{
D3D11_BUFFER_DESC bd = {0};
D3D11_SUBRESOURCE_DATA initData = {0};

PNTVertex target_vertices[] =
{
{XMFLOAT3(0.0f, 0.0f, 0.0f), XMFLOAT3(0.0f, 0.0f, 1.0f), XMFLOAT2(1.0f, 1.0f)},
{XMFLOAT3(1.0f, 0.0f, 0.0f), XMFLOAT3(0.0f, 0.0f, 1.0f), XMFLOAT2(0.0f, 1.0f)},
{XMFLOAT3(1.0f, 1.0f, 0.0f), XMFLOAT3(0.0f, 0.0f, 1.0f), XMFLOAT2(0.0f, 0.0f)},
{XMFLOAT3(0.0f, 1.0f, 0.0f), XMFLOAT3(0.0f, 0.0f, 1.0f), XMFLOAT2(1.0f, 0.0f)}
};
WORD target_indices[] =
{
0, 1, 2,
0, 2, 3,
0, 2, 1,
0, 3, 2
};

m_vertexCount = 4;
m_indexCount = 12;

bd.Usage = D3D11_USAGE_DEFAULT;
bd.ByteWidth = sizeof(PNTVertex) * m_vertexCount;
bd.BindFlags = D3D11_BIND_VERTEX_BUFFER;
bd.CPUAccessFlags = 0;
initData.pSysMem = target_vertices;
DX::ThrowIfFailed(
device->CreateBuffer(&bd, &initData, &m_vertexBuffer)
);

bd.Usage = D3D11_USAGE_DEFAULT;
bd.ByteWidth = sizeof(WORD) * m_indexCount;
bd.BindFlags = D3D11_BIND_INDEX_BUFFER;
bd.CPUAccessFlags = 0;
initData.pSysMem = target_indices;
DX::ThrowIfFailed(
device->CreateBuffer(&bd, &initData, &m_indexBuffer)
);
}

WorldMesh.h
//// THIS CODE AND INFORMATION IS PROVIDED "AS IS" WITHOUT WARRANTY OF
//// ANY KIND, EITHER EXPRESSED OR IMPLIED, INCLUDING BUT NOT LIMITED TO
//// THE IMPLIED WARRANTIES OF MERCHANTABILITY AND/OR FITNESS FOR A
//// PARTICULAR PURPOSE.
////
//// Copyright (c) Microsoft Corporation. All rights reserved

#pragma once

#include "MeshObject.h"

// WorldCeilingMesh:
// This class derives from MeshObject and creates an ID3D11Buffer of
// vertices and indices to represent the ceiling of the bounding cube
// of the world.
// The vertices are defined by a position, a normal and a single
// 2D texture coordinate.

ref class WorldCeilingMesh: public MeshObject
{
internal:
WorldCeilingMesh(_In_ ID3D11Device *device);
};

// WorldFloorMesh:
// This class derives from MeshObject and creates an ID3D11Buffer of
// vertices and indices to represent the floor of the bounding cube
// of the world.

ref class WorldFloorMesh: public MeshObject
{
internal:
WorldFloorMesh(_In_ ID3D11Device *device);
};

// WorldWallsMesh:
// This class derives from MeshObject and creates an ID3D11Buffer of
// vertices and indices to represent the walls of the bounding cube
// of the world.

ref class WorldWallsMesh: public MeshObject
{
internal:
WorldWallsMesh(_In_ ID3D11Device *device);
};

WorldMesh.cpp

//// THIS CODE AND INFORMATION IS PROVIDED "AS IS" WITHOUT WARRANTY OF
//// ANY KIND, EITHER EXPRESSED OR IMPLIED, INCLUDING BUT NOT LIMITED TO
//// THE IMPLIED WARRANTIES OF MERCHANTABILITY AND/OR FITNESS FOR A
//// PARTICULAR PURPOSE.
////
//// Copyright (c) Microsoft Corporation. All rights reserved

#include "pch.h"
#include "WorldMesh.h"
#include "DirectXSample.h"
#include "ConstantBuffers.h"

using namespace Microsoft::WRL;
using namespace DirectX;

WorldCeilingMesh::WorldCeilingMesh(_In_ ID3D11Device *device)
{
PNTVertex cellVertices[] =
{
// CEILING
{XMFLOAT3(-4.0f, 3.0f, -6.0f), XMFLOAT3(0.0f, -1.0f, 0.0f), XMFLOAT2(-0.15f, 0.0f)},
{XMFLOAT3( 4.0f, 3.0f, -6.0f), XMFLOAT3(0.0f, -1.0f, 0.0f), XMFLOAT2( 1.25f, 0.0f)},
{XMFLOAT3(-4.0f, 3.0f, 6.0f), XMFLOAT3(0.0f, -1.0f, 0.0f), XMFLOAT2(-0.15f, 2.1f)},
{XMFLOAT3( 4.0f, 3.0f, 6.0f), XMFLOAT3(0.0f, -1.0f, 0.0f), XMFLOAT2( 1.25f, 2.1f)},
};

WORD cellIndices[] = {
0, 1, 2,
1, 3, 2,
};

m_vertexCount = 4;
m_indexCount = 6;

D3D11_BUFFER_DESC bd = {0};
D3D11_SUBRESOURCE_DATA initData = {0};

bd.Usage = D3D11_USAGE_DEFAULT;
bd.ByteWidth = sizeof(PNTVertex) * m_vertexCount;
bd.BindFlags = D3D11_BIND_VERTEX_BUFFER;
bd.CPUAccessFlags = 0;
initData.pSysMem = cellVertices;
DX::ThrowIfFailed(
device->CreateBuffer(&bd, &initData, &m_vertexBuffer)
);

bd.Usage = D3D11_USAGE_DEFAULT;
bd.ByteWidth = sizeof(WORD) * m_indexCount;
bd.BindFlags = D3D11_BIND_INDEX_BUFFER;
bd.CPUAccessFlags = 0;
initData.pSysMem = cellIndices;
DX::ThrowIfFailed(
device->CreateBuffer(&bd, &initData, &m_indexBuffer)
);
}

WorldFloorMesh::WorldFloorMesh(_In_ ID3D11Device *device)
{
PNTVertex cellVertices[] =
{
// FLOOR
{XMFLOAT3(-4.0f, -3.0f, 6.0f), XMFLOAT3(0.0f, 1.0f, 0.0f), XMFLOAT2(0.0f, 0.0f)},
{XMFLOAT3( 4.0f, -3.0f, 6.0f), XMFLOAT3(0.0f, 1.0f, 0.0f), XMFLOAT2(1.0f, 0.0f)},
{XMFLOAT3(-4.0f, -3.0f, -6.0f), XMFLOAT3(0.0f, 1.0f, 0.0f), XMFLOAT2(0.0f, 1.5f)},
{XMFLOAT3( 4.0f, -3.0f, -6.0f), XMFLOAT3(0.0f, 1.0f, 0.0f), XMFLOAT2(1.0f, 1.5f)},
};

WORD cellIndices[] = {
0, 1, 2,
1, 3, 2,
};

m_vertexCount = 4;
m_indexCount = 6;

D3D11_BUFFER_DESC bd = {0};
D3D11_SUBRESOURCE_DATA initData = {0};

bd.Usage = D3D11_USAGE_DEFAULT;
bd.ByteWidth = sizeof(PNTVertex) * m_vertexCount;
bd.BindFlags = D3D11_BIND_VERTEX_BUFFER;
bd.CPUAccessFlags = 0;
initData.pSysMem = cellVertices;
DX::ThrowIfFailed(
device->CreateBuffer(&bd, &initData, &m_vertexBuffer)
);

bd.Usage = D3D11_USAGE_DEFAULT;
bd.ByteWidth = sizeof(WORD) * m_indexCount;
bd.BindFlags = D3D11_BIND_INDEX_BUFFER;
bd.CPUAccessFlags = 0;
initData.pSysMem = cellIndices;
DX::ThrowIfFailed(
device->CreateBuffer(&bd, &initData, &m_indexBuffer)
);
}

WorldWallsMesh::WorldWallsMesh(_In_ ID3D11Device *device)
{
PNTVertex cellVertices[] =
{
// WALL
{XMFLOAT3(-4.0f, 3.0f, 6.0f), XMFLOAT3(0.0f, 0.0f, -1.0f), XMFLOAT2(0.0f, 0.0f)},
{XMFLOAT3( 4.0f, 3.0f, 6.0f), XMFLOAT3(0.0f, 0.0f, -1.0f), XMFLOAT2(2.0f, 0.0f)},
{XMFLOAT3(-4.0f, -3.0f, 6.0f), XMFLOAT3(0.0f, 0.0f, -1.0f), XMFLOAT2(0.0f, 1.5f)},
{XMFLOAT3( 4.0f, -3.0f, 6.0f), XMFLOAT3(0.0f, 0.0f, -1.0f), XMFLOAT2(2.0f, 1.5f)},
// WALL
{XMFLOAT3(4.0f, 3.0f, 6.0f), XMFLOAT3(-1.0f, 0.0f, 0.0f), XMFLOAT2(0.0f, 0.0f)},
{XMFLOAT3(4.0f, 3.0f, -6.0f), XMFLOAT3(-1.0f, 0.0f, 0.0f), XMFLOAT2(3.0f, 0.0f)},
{XMFLOAT3(4.0f, -3.0f, 6.0f), XMFLOAT3(-1.0f, 0.0f, 0.0f), XMFLOAT2(0.0f, 1.5f)},
{XMFLOAT3(4.0f, -3.0f, -6.0f), XMFLOAT3(-1.0f, 0.0f, 0.0f), XMFLOAT2(3.0f, 1.5f)},
// WALL
{XMFLOAT3( 4.0f, 3.0f, -6.0f), XMFLOAT3(0.0f, 0.0f, 1.0f), XMFLOAT2(0.0f, 0.0f)},
{XMFLOAT3(-4.0f, 3.0f, -6.0f), XMFLOAT3(0.0f, 0.0f, 1.0f), XMFLOAT2(2.0f, 0.0f)},
{XMFLOAT3( 4.0f, -3.0f, -6.0f), XMFLOAT3(0.0f, 0.0f, 1.0f), XMFLOAT2(0.0f, 1.5f)},
{XMFLOAT3(-4.0f, -3.0f, -6.0f), XMFLOAT3(0.0f, 0.0f, 1.0f), XMFLOAT2(2.0f, 1.5f)},
// WALL
{XMFLOAT3(-4.0f, 3.0f, -6.0f), XMFLOAT3(1.0f, 0.0f, 0.0f), XMFLOAT2(0.0f, 0.0f)},
{XMFLOAT3(-4.0f, 3.0f, 6.0f), XMFLOAT3(1.0f, 0.0f, 0.0f), XMFLOAT2(3.0f, 0.0f)},
{XMFLOAT3(-4.0f, -3.0f, -6.0f), XMFLOAT3(1.0f, 0.0f, 0.0f), XMFLOAT2(0.0f, 1.5f)},
{XMFLOAT3(-4.0f, -3.0f, 6.0f), XMFLOAT3(1.0f, 0.0f, 0.0f), XMFLOAT2(3.0f, 1.5f)},
};

WORD cellIndices[] = {
0, 1, 2,
1, 3, 2,
4, 5, 6,
5, 7, 6,
8, 9, 10,
9, 11, 10,
12, 13, 14,
13, 15, 14,
};

m_vertexCount = 16;
m_indexCount = 24;

D3D11_BUFFER_DESC bd = {0};
D3D11_SUBRESOURCE_DATA initData = {0};

bd.Usage = D3D11_USAGE_DEFAULT;
bd.ByteWidth = sizeof(PNTVertex) * m_vertexCount;
bd.BindFlags = D3D11_BIND_VERTEX_BUFFER;
bd.CPUAccessFlags = 0;
initData.pSysMem = cellVertices;
DX::ThrowIfFailed(
device->CreateBuffer(&bd, &initData, &m_vertexBuffer)
);

bd.Usage = D3D11_USAGE_DEFAULT;
bd.ByteWidth = sizeof(WORD) * m_indexCount;
bd.BindFlags = D3D11_BIND_INDEX_BUFFER;
bd.CPUAccessFlags = 0;
initData.pSysMem = cellIndices;
DX::ThrowIfFailed(
device->CreateBuffer(&bd, &initData, &m_indexBuffer)
);
}

Level.h

//// THIS CODE AND INFORMATION IS PROVIDED "AS IS" WITHOUT WARRANTY OF
//// ANY KIND, EITHER EXPRESSED OR IMPLIED, INCLUDING BUT NOT LIMITED TO
//// THE IMPLIED WARRANTIES OF MERCHANTABILITY AND/OR FITNESS FOR A
//// PARTICULAR PURPOSE.
////
//// Copyright (c) Microsoft Corporation. All rights reserved

#pragma once

// Level:
// This is an abstract class from which all of the levels of the game are derived.
// Each level potentially overrides up to four methods:
// Initialize - (required) takes a list of objects and enables the objects that
// are active for the level as well as setting their positions and
// any animations associated with the objects.
// Update - this method is called once per time step and is expected to
// determine if the level has been completed. The Level class provides
// a 'standard' Update method which checks each object that is a target
// and disables any active targets that have been hit. It returns true
// once there are no active targets remaining.
// SaveState - method to save any Level specific state. Default is defined as
// not saving any state.
// LoadState - method to restore any Level specific state. Default is defined
// as not restoring any state.

#include "GameObject.h"
#include "PersistentState.h"

ref class Level abstract
{
internal:
virtual void Initialize(
std::vector<GameObject^> objects
) = 0;

virtual bool Update(
float time,
float elapsedTime,
float timeRemaining,
std::vector<GameObject^> objects
);

virtual void SaveState(PersistentState^ state);

virtual void LoadState(PersistentState^ state);

Platform::String^ Objective();
float TimeLimit();

protected private:
Platform::String^ m_objective;
float m_timeLimit;
};

Level.cpp
//// THIS CODE AND INFORMATION IS PROVIDED "AS IS" WITHOUT WARRANTY OF
//// ANY KIND, EITHER EXPRESSED OR IMPLIED, INCLUDING BUT NOT LIMITED TO
//// THE IMPLIED WARRANTIES OF MERCHANTABILITY AND/OR FITNESS FOR A
//// PARTICULAR PURPOSE.
////
//// Copyright (c) Microsoft Corporation. All rights reserved

#include "pch.h"
#include "Level.h"

//----------------------------------------------------------------------

bool Level::Update(
float /* time */,
float /* elapsedTime */,
float /* timeRemaining */,
std::vector<GameObject^> objects
)
{
int left = 0;

for (auto object = objects.begin(); object != objects.end(); object++)
{
if ((*object)->Active() && (*object)->Target())
{
if ((*object)->Hit())
{
(*object)->Active(false);
}
else
{
left++;
}
}
}
return (left == 0);
}

//----------------------------------------------------------------------

void Level::SaveState(PersistentState^ /* state */)
{
}

//----------------------------------------------------------------------

void Level::LoadState(PersistentState^ /* state */)
{
}

//----------------------------------------------------------------------

Platform::String^ Level::Objective()
{
return m_objective;
}

//----------------------------------------------------------------------

float Level::TimeLimit()
{
return m_timeLimit;
}

//----------------------------------------------------------------------
Level1.h

//// THIS CODE AND INFORMATION IS PROVIDED "AS IS" WITHOUT WARRANTY OF
//// ANY KIND, EITHER EXPRESSED OR IMPLIED, INCLUDING BUT NOT LIMITED TO
//// THE IMPLIED WARRANTIES OF MERCHANTABILITY AND/OR FITNESS FOR A
//// PARTICULAR PURPOSE.
////
//// Copyright (c) Microsoft Corporation. All rights reserved

#pragma once

// Level1:
// This class defines the first level of the game. There are nine active targets.
// Each of the targets is stationary and can be hit in any order.

#include "Level.h"

ref class Level1: public Level
{
internal:
Level1();
virtual void Initialize(std::vector<GameObject^> objects) override;
};

Level1.cpp
//// THIS CODE AND INFORMATION IS PROVIDED "AS IS" WITHOUT WARRANTY OF
//// ANY KIND, EITHER EXPRESSED OR IMPLIED, INCLUDING BUT NOT LIMITED TO
//// THE IMPLIED WARRANTIES OF MERCHANTABILITY AND/OR FITNESS FOR A
//// PARTICULAR PURPOSE.
////
//// Copyright (c) Microsoft Corporation. All rights reserved

#include "pch.h"
#include "Level1.h"
#include "Face.h"

using namespace DirectX;

//----------------------------------------------------------------------

Level1::Level1()
{
m_timeLimit = 20.0f;
m_objective = "Hit each of the targets before time runs out.\nTouch to aim. Tap in right box to fire. Drag in left box to move.";
}

//----------------------------------------------------------------------

void Level1::Initialize(std::vector<GameObject^> objects)
{
XMFLOAT3 position[] =
{
XMFLOAT3(-2.5f, -1.0f, -1.5f),
XMFLOAT3(-1.0f, 1.0f, -3.0f),
XMFLOAT3( 1.5f, 0.0f, -3.0f),
XMFLOAT3(-2.5f, -1.0f, -5.5f),
XMFLOAT3( 0.5f, -2.0f, -5.0f),
XMFLOAT3( 1.5f, -2.0f, -5.5f),
XMFLOAT3( 2.0f, 0.0f, 0.0f),
XMFLOAT3( 0.0f, 0.0f, 0.0f),
XMFLOAT3(-2.0f, 0.0f, 0.0f)
};

int targetCount = 0;
for (auto object = objects.begin(); object != objects.end(); object++)
{
if (Face^ target = dynamic_cast<Face^>(*object))
{
if (targetCount < 9)
{
target->Active(true);
target->Target(true);
target->Hit(false);
target->AnimatePosition(nullptr);
target->Position(position[targetCount]);
targetCount++;
}
else
{
(*object)->Active(false);
}
}
else
{
(*object)->Active(false);
}
}
}

//----------------------------------------------------------------------
Level2.h

//// THIS CODE AND INFORMATION IS PROVIDED "AS IS" WITHOUT WARRANTY OF
//// ANY KIND, EITHER EXPRESSED OR IMPLIED, INCLUDING BUT NOT LIMITED TO
//// THE IMPLIED WARRANTIES OF MERCHANTABILITY AND/OR FITNESS FOR A
//// PARTICULAR PURPOSE.
////
//// Copyright (c) Microsoft Corporation. All rights reserved

#pragma once

// Level2:
// This class defines the second level of the game. It derives from the
// first level. In this level, the targets must be hit in numeric order.

#include "Level1.h"

ref class Level2: public Level1
{
internal:
Level2();
virtual void Initialize(std::vector<GameObject^> objects) override;

virtual bool Update(
float time,
float elapsedTime,
float timeRemaining,
std::vector<GameObject^> objects
) override;

virtual void SaveState(PersistentState^ state) override;

virtual void LoadState(PersistentState^ state) override;

private:
int m_nextId;
};

Level2.cpp

//// THIS CODE AND INFORMATION IS PROVIDED "AS IS" WITHOUT WARRANTY OF
//// ANY KIND, EITHER EXPRESSED OR IMPLIED, INCLUDING BUT NOT LIMITED TO
//// THE IMPLIED WARRANTIES OF MERCHANTABILITY AND/OR FITNESS FOR A
//// PARTICULAR PURPOSE.
////
//// Copyright (c) Microsoft Corporation. All rights reserved

#include "pch.h"
#include "Level2.h"
#include "Face.h"

//----------------------------------------------------------------------

Level2::Level2()
{
m_timeLimit = 30.0f;
m_objective = "Hit each of the targets in ORDER before time runs out.";
}

//----------------------------------------------------------------------

void Level2::Initialize(std::vector<GameObject^> objects)
{
Level1::Initialize(objects);

int targetCount = 0;
for (auto object = objects.begin(); object != objects.end(); object++)
{
if (Face^ target = dynamic_cast<Face^>(*object))
{
if (targetCount < 9)
{
target->Target(targetCount == 0 ? true : false);
targetCount++;
}
}
}
m_nextId = 1;
}

//----------------------------------------------------------------------

bool Level2::Update(
float /* time */,
float /* elapsedTime */,
float /* timeRemaining */,
std::vector<GameObject^> objects
)
{
int left = 0;
for (auto object = objects.begin(); object != objects.end(); object++)
{
if ((*object)->Active() && ((*object)->TargetId() > 0))
{
if ((*object)->Hit() && ((*object)->TargetId() == m_nextId))
{
(*object)->Active(false);
m_nextId++;
}
else
{
left++;
}
}
if ((*object)->Active() && ((*object)->TargetId() == m_nextId))
{
(*object)->Target(true);
}
}
return (left == 0);
}

//----------------------------------------------------------------------

void Level2::SaveState(PersistentState^ state)
{
state->SaveInt32(":NextTarget", m_nextId);
}

//----------------------------------------------------------------------

void Level2::LoadState(PersistentState^ state)
{
m_nextId = state->LoadInt32(":NextTarget", 1);
}

//----------------------------------------------------------------------

Level3.h
//// THIS CODE AND INFORMATION IS PROVIDED "AS IS" WITHOUT WARRANTY OF
//// ANY KIND, EITHER EXPRESSED OR IMPLIED, INCLUDING BUT NOT LIMITED TO
//// THE IMPLIED WARRANTIES OF MERCHANTABILITY AND/OR FITNESS FOR A
//// PARTICULAR PURPOSE.
////
//// Copyright (c) Microsoft Corporation. All rights reserved

#pragma once

// Level3:
// This class defines the third level of the game. In this level, each of the
// nine targets is moving along closed paths and can be hit
// in any order.

#include "Level.h"

ref class Level3: public Level
{
internal:
Level3();
virtual void Initialize(std::vector<GameObject^> objects) override;
};

Level3.cpp

//// THIS CODE AND INFORMATION IS PROVIDED "AS IS" WITHOUT WARRANTY OF
//// ANY KIND, EITHER EXPRESSED OR IMPLIED, INCLUDING BUT NOT LIMITED TO
//// THE IMPLIED WARRANTIES OF MERCHANTABILITY AND/OR FITNESS FOR A
//// PARTICULAR PURPOSE.
////
//// Copyright (c) Microsoft Corporation. All rights reserved

#include "pch.h"
#include "Level3.h"
#include "Face.h"
#include "Animate.h"

using namespace DirectX;

//----------------------------------------------------------------------

Level3::Level3()
{
m_timeLimit = 30.0f;
m_objective = "Hit each of the moving targets before time runs out.";
}

//----------------------------------------------------------------------

void Level3::Initialize(std::vector<GameObject^> objects)
{
XMFLOAT3 position[] =
{
XMFLOAT3(-2.5f, -1.0f, -1.5f),
XMFLOAT3(-1.0f, 1.0f, -3.0f),
XMFLOAT3( 1.5f, 0.0f, -5.5f),
XMFLOAT3(-2.5f, -1.0f, -5.5f),
XMFLOAT3( 0.5f, -2.0f, -5.0f),
XMFLOAT3( 1.5f, -2.0f, -5.5f),
XMFLOAT3( 0.0f, -3.6f, 0.0f),
XMFLOAT3( 0.0f, -3.6f, 0.0f),
XMFLOAT3( 0.0f, -3.6f, 0.0f)
};
XMFLOAT3 LineList1[] =
{
XMFLOAT3(-2.5f, -1.0f, -1.5f),
XMFLOAT3(-0.5f, 1.0f, 1.0f),
XMFLOAT3(-0.5f, -2.5f, 1.0f),
XMFLOAT3(-2.5f, -1.0f, -1.5f),
};
XMFLOAT3 LineList2[] =
{
XMFLOAT3(-1.0f, 1.0f, -3.0f),
XMFLOAT3(-2.0f, 2.0f, -1.5f),
XMFLOAT3(-2.0f, -2.5f, -1.5f),
XMFLOAT3( 1.5f, -2.5f, -1.5f),
XMFLOAT3( 1.5f, -2.5f, -3.0f),
XMFLOAT3(-1.0f, 1.0f, -3.0f),
};
XMFLOAT3 LineList3[] =
{
XMFLOAT3(1.5f, 0.0f, -5.5f),
XMFLOAT3(1.5f, 1.0f, -5.5f),
XMFLOAT3(1.5f, -2.5f, -5.5f),
XMFLOAT3(1.5f, 0.0f, -5.5f),
};
XMFLOAT3 LineList4[] =
{
XMFLOAT3(-2.5f, -1.0f, -5.5f),
XMFLOAT3( 1.0f, -1.0f, -5.5f),
XMFLOAT3( 1.0f, 1.0f, -5.5f),
XMFLOAT3(-2.5f, 1.0f, -5.5f),
XMFLOAT3(-2.5f, -1.0f, -5.5f),
};
XMFLOAT3 LineList5[] =
{
XMFLOAT3( 0.5f, -2.0f, -5.0f),
XMFLOAT3( 2.0f, -2.0f, -5.0f),
XMFLOAT3( 2.0f, 1.0f, -5.0f),
XMFLOAT3(-2.5f, 1.0f, -5.0f),
XMFLOAT3(-2.5f, -2.0f, -5.0f),
XMFLOAT3( 0.5f, -2.0f, -5.0f),
};
XMFLOAT3 LineList6[] =
{
XMFLOAT3( 1.5f, -2.0f, -5.5f),
XMFLOAT3(-2.5f, -2.0f, -5.5f),
XMFLOAT3(-2.5f, 1.0f, -5.5f),
XMFLOAT3( 1.5f, 1.0f, -5.5f),
XMFLOAT3( 1.5f, -2.0f, -5.5f),
};

int targetCount = 0;
for (auto object = objects.begin(); object != objects.end(); object++)
{
if (Face^ target = dynamic_cast<Face^>(*object))
{
if (targetCount < 9)
{
target->Active(true);
target->Target(true);
target->Hit(false);
target->Position(position[targetCount]);
switch (targetCount)
{
case 0:
target->AnimatePosition(ref new AnimateLineListPosition(4, LineList1, 10.0f, true));
break;
case 1:
target->AnimatePosition(ref new AnimateLineListPosition(6, LineList2, 15.0f, true));
break;
case 2:
target->AnimatePosition(ref new AnimateLineListPosition(4, LineList3, 15.0f, true));
break;
case 3:
target->AnimatePosition(ref new AnimateLineListPosition(5, LineList4, 15.0f, true));
break;
case 4:
target->AnimatePosition(ref new AnimateLineListPosition(6, LineList5, 15.0f, true));
break;
case 5:
target->AnimatePosition(ref new AnimateLineListPosition(5, LineList6, 15.0f, true));
break;
case 6:
target->AnimatePosition(
ref new AnimateCirclePosition(
XMFLOAT3(0.0f, -2.5f, 0.0f),
XMFLOAT3(0.0f, -3.6f, 0.0f),
XMFLOAT3(0.0f, 0.0f, 1.0f),
9.0f,
true,
true
)
);
break;
case 7:
target->AnimatePosition(
ref new AnimateCirclePosition(
XMFLOAT3(0.0f, -2.5f, 0.0f),
XMFLOAT3(0.0f, -3.6f, 0.0f),
XMFLOAT3(0.0f, 0.0f, 1.0f),
9.0f,
true,
true
)
);
target->AnimatePosition()->Start(3.0f);
break;
case 8:
target->AnimatePosition(
ref new AnimateCirclePosition(
XMFLOAT3(0.0f, -2.5f, 0.0f),
XMFLOAT3(0.0f, -3.6f, 0.0f),
XMFLOAT3(0.0f, 0.0f, 1.0f),
9.0f,
true,
true
)
);
target->AnimatePosition()->Start(6.0f);
break;
}
targetCount++;
}
else
{
target->Active(false);
}
}
else
{
(*object)->Active(false);
}
}
}

//----------------------------------------------------------------------

Level4.h
//// THIS CODE AND INFORMATION IS PROVIDED "AS IS" WITHOUT WARRANTY OF
//// ANY KIND, EITHER EXPRESSED OR IMPLIED, INCLUDING BUT NOT LIMITED TO
//// THE IMPLIED WARRANTIES OF MERCHANTABILITY AND/OR FITNESS FOR A
//// PARTICULAR PURPOSE.
////
//// Copyright (c) Microsoft Corporation. All rights reserved

#pragma once

// Level4:
// This class defines the fourth level of the game. It derives from the
// third level. The targets must be hit in numeric order.

#include "Level3.h"

ref class Level4: public Level3
{
internal:
Level4();
virtual void Initialize(std::vector<GameObject^> objects) override;

virtual bool Update(
float time,
float elapsedTime,
float timeRemaining,
std::vector<GameObject^> objects
) override;

virtual void SaveState(PersistentState^ state) override;

virtual void LoadState(PersistentState^ state) override;

private:
int m_nextId;
};

Level4.cpp

//// THIS CODE AND INFORMATION IS PROVIDED "AS IS" WITHOUT WARRANTY OF
//// ANY KIND, EITHER EXPRESSED OR IMPLIED, INCLUDING BUT NOT LIMITED TO
//// THE IMPLIED WARRANTIES OF MERCHANTABILITY AND/OR FITNESS FOR A
//// PARTICULAR PURPOSE.
////
//// Copyright (c) Microsoft Corporation. All rights reserved

#include "pch.h"
#include "Level4.h"
#include "Face.h"

//----------------------------------------------------------------------

Level4::Level4()
{
m_timeLimit = 30.0f;
m_objective = "Hit each of the moving targets in ORDER before time runs out.";
}

//----------------------------------------------------------------------

void Level4::Initialize(std::vector<GameObject^> objects)
{
Level3::Initialize(objects);

int targetCount = 0;
for (auto object = objects.begin(); object != objects.end(); object++)
{
if (Face^ target = dynamic_cast<Face^>(*object))
{
if (targetCount < 9)
{
target->Target(targetCount == 0 ? true : false);
targetCount++;
}
}
}
m_nextId = 1;
}

//----------------------------------------------------------------------

bool Level4::Update(
float /* time */,
float /* elapsedTime */,
float /* timeRemaining */,
std::vector<GameObject^> objects
)
{
int left = 0;
for (auto object = objects.begin(); object != objects.end(); object++)
{
if ((*object)->Active() && ((*object)->TargetId() > 0))
{
if ((*object)->Hit() && ((*object)->TargetId() == m_nextId))
{
(*object)->Active(false);
m_nextId++;
}
else
{
left++;
}
}
if ((*object)->Active() && ((*object)->TargetId() == m_nextId))
{
(*object)->Target(true);
}
}
return (left == 0);
}

//----------------------------------------------------------------------

void Level4::SaveState(PersistentState^ state)
{
state->SaveInt32(":NextTarget", m_nextId);
}

//----------------------------------------------------------------------

void Level4::LoadState(PersistentState^ state)
{
m_nextId = state->LoadInt32(":NextTarget", 1);
}

//----------------------------------------------------------------------

Level5.h
//// THIS CODE AND INFORMATION IS PROVIDED "AS IS" WITHOUT WARRANTY OF
//// ANY KIND, EITHER EXPRESSED OR IMPLIED, INCLUDING BUT NOT LIMITED TO
//// THE IMPLIED WARRANTIES OF MERCHANTABILITY AND/OR FITNESS FOR A
//// PARTICULAR PURPOSE.
////
//// Copyright (c) Microsoft Corporation. All rights reserved

#pragma once

// Level5:
// This class defines the fifth level of the game. It derives from the
// third level. This level introduces obstacles that move into place
// during game play. The targets may be hit in any order.

#include "Level3.h"

ref class Level5: public Level3
{
internal:
Level5();
virtual void Initialize(std::vector<GameObject^> objects) override;
};

Level5.cpp

//// THIS CODE AND INFORMATION IS PROVIDED "AS IS" WITHOUT WARRANTY OF
//// ANY KIND, EITHER EXPRESSED OR IMPLIED, INCLUDING BUT NOT LIMITED TO
//// THE IMPLIED WARRANTIES OF MERCHANTABILITY AND/OR FITNESS FOR A
//// PARTICULAR PURPOSE.
////
//// Copyright (c) Microsoft Corporation. All rights reserved

#include "pch.h"
#include "Level5.h"
#include "Cylinder.h"
#include "Animate.h"

using namespace DirectX;

//----------------------------------------------------------------------

Level5::Level5()
{
m_timeLimit = 30.0f;
m_objective = "Hit each of the moving targets while avoiding the obstacles before time runs out.";
}

//----------------------------------------------------------------------

void Level5::Initialize(std::vector<GameObject^> objects)
{
Level3::Initialize(objects);

XMFLOAT3 obstacleStartPosition[] =
{
XMFLOAT3(-4.5f, -3.0f, 0.0f),
XMFLOAT3(4.5f, -3.0f, 0.0f),
XMFLOAT3(0.0f, 3.01f, -2.0f),
XMFLOAT3(-1.5f, -3.0f, -6.5f),
XMFLOAT3(1.5f, -3.0f, -6.5f)
};
XMFLOAT3 obstacleEndPosition[] =
{
XMFLOAT3(-2.0f, -3.0f, 0.0f),
XMFLOAT3(2.0f, -3.0f, 0.0f),
XMFLOAT3(0.0f, -3.0f, -2.0f),
XMFLOAT3(-1.5f, -3.0f, -4.0f),
XMFLOAT3(1.5f, -3.0f, -4.0f)
};
float obstacleStartTime[] =
{
2.0f,
5.0f,
8.0f,
11.0f,
14.0f
};

int obstacleCount = 0;
for (auto object = objects.begin(); object != objects.end(); object++)
{
if (Cylinder^ obstacle = dynamic_cast<Cylinder^>(*object))
{
if (obstacleCount < 5)
{
obstacle->Active(true);
obstacle->Position(obstacleStartPosition[obstacleCount]);
obstacle->AnimatePosition(
ref new AnimateLinePosition(
obstacleStartPosition[obstacleCount],
obstacleEndPosition[obstacleCount],
10.0,
false
)
);
obstacle->AnimatePosition()->Start(obstacleStartTime[obstacleCount]);
obstacleCount ++;
}
}
}
}

//----------------------------------------------------------------------

Level6.h
//// THIS CODE AND INFORMATION IS PROVIDED "AS IS" WITHOUT WARRANTY OF
//// ANY KIND, EITHER EXPRESSED OR IMPLIED, INCLUDING BUT NOT LIMITED TO
//// THE IMPLIED WARRANTIES OF MERCHANTABILITY AND/OR FITNESS FOR A
//// PARTICULAR PURPOSE.
////
//// Copyright (c) Microsoft Corporation. All rights reserved

#pragma once

// Level6:
// This class defines the sixth and final level of the game. It derives from the
// fifth level. In this level, the targets do not disappear when they are hit.
// The target will stay highlighted for two seconds. As this is the last level,
// the only criterion for completion is time expiring.

#include "Level5.h"

ref class Level6: public Level5
{
internal:
Level6();
virtual bool Update(
float time,
float elapsedTime,
float timeRemaining,
std::vector<GameObject^> objects
) override;
};

Level6.cpp
//// THIS CODE AND INFORMATION IS PROVIDED "AS IS" WITHOUT WARRANTY OF
//// ANY KIND, EITHER EXPRESSED OR IMPLIED, INCLUDING BUT NOT LIMITED TO
//// THE IMPLIED WARRANTIES OF MERCHANTABILITY AND/OR FITNESS FOR A
//// PARTICULAR PURPOSE.
////
//// Copyright (c) Microsoft Corporation. All rights reserved

#include "pch.h"
#include "Level6.h"

//----------------------------------------------------------------------

Level6::Level6()
{
m_timeLimit = 20.0f;
m_objective = "Hit as many moving targets as possible while avoiding the obstacles before time runs out.";
}

//----------------------------------------------------------------------

bool Level6::Update(
float time,
float elapsedTime,
float timeRemaining,
std::vector<GameObject^> objects
)
{
for (auto object = objects.begin(); object != objects.end(); object++)
{
if ((*object)->Active() && (*object)->Target())
{
if ((*object)->Hit() && ((*object)->HitTime() < (time - 2.0f)))
{
(*object)->Hit(false);
}
}
}
return ((timeRemaining - elapsedTime) <= 0.0f);
}

//----------------------------------------------------------------------

TargetTexture.h
//// THIS CODE AND INFORMATION IS PROVIDED "AS IS" WITHOUT WARRANTY OF
//// ANY KIND, EITHER EXPRESSED OR IMPLIED, INCLUDING BUT NOT LIMITED TO
//// THE IMPLIED WARRANTIES OF MERCHANTABILITY AND/OR FITNESS FOR A
//// PARTICULAR PURPOSE.
////
//// Copyright (c) Microsoft Corporation. All rights reserved

#pragma once

// TargetTexture:
// This is a helper class to procedurally generate textures for game
// targets. There are two versions of the textures: one for when the target
// is hit, and one for when it is not.
// The class creates the necessary resources to draw the texture into
// an off screen resource at initialization time.

ref class TargetTexture
{
internal:
TargetTexture(
_In_ ID3D11Device1* d3dDevice,
_In_ ID2D1Factory1* d2dFactory,
_In_ IDWriteFactory1* dwriteFactory,
_In_ ID2D1DeviceContext* d2dContext
);

void CreateTextureResourceView(
_In_ Platform::String^ name,
_Out_ ID3D11ShaderResourceView** textureResourceView
);
void CreateHitTextureResourceView(
_In_ Platform::String^ name,
_Out_ ID3D11ShaderResourceView** textureResourceView
);

protected private:
Microsoft::WRL::ComPtr<ID3D11Device1> m_d3dDevice;
Microsoft::WRL::ComPtr<ID2D1Factory1> m_d2dFactory;
Microsoft::WRL::ComPtr<ID2D1DeviceContext> m_d2dContext;
Microsoft::WRL::ComPtr<IDWriteFactory1> m_dwriteFactory;

Microsoft::WRL::ComPtr<ID2D1SolidColorBrush> m_redBrush;
Microsoft::WRL::ComPtr<ID2D1SolidColorBrush> m_blueBrush;
Microsoft::WRL::ComPtr<ID2D1SolidColorBrush> m_greenBrush;
Microsoft::WRL::ComPtr<ID2D1SolidColorBrush> m_whiteBrush;
Microsoft::WRL::ComPtr<ID2D1SolidColorBrush> m_blackBrush;
Microsoft::WRL::ComPtr<ID2D1SolidColorBrush> m_yellowBrush;
Microsoft::WRL::ComPtr<ID2D1SolidColorBrush> m_clearBrush;

Microsoft::WRL::ComPtr<ID2D1EllipseGeometry> m_circleGeometry1;
Microsoft::WRL::ComPtr<ID2D1EllipseGeometry> m_circleGeometry2;
Microsoft::WRL::ComPtr<ID2D1EllipseGeometry> m_circleGeometry3;
Microsoft::WRL::ComPtr<ID2D1EllipseGeometry> m_circleGeometry4;
Microsoft::WRL::ComPtr<ID2D1EllipseGeometry> m_circleGeometry5;

Microsoft::WRL::ComPtr<IDWriteTextFormat> m_textFormat;
};

TargetTexture.cpp

//// THIS CODE AND INFORMATION IS PROVIDED "AS IS" WITHOUT WARRANTY OF
//// ANY KIND, EITHER EXPRESSED OR IMPLIED, INCLUDING BUT NOT LIMITED TO
//// THE IMPLIED WARRANTIES OF MERCHANTABILITY AND/OR FITNESS FOR A
//// PARTICULAR PURPOSE.
////
//// Copyright (c) Microsoft Corporation. All rights reserved
#include "pch.h"
#include "TargetTexture.h"
#include "DirectXSample.h"

using namespace Microsoft::WRL;


using namespace Windows::Graphics::Display;

TargetTexture::TargetTexture(
_In_ ID3D11Device1* d3dDevice,
_In_ ID2D1Factory1* d2dFactory,
_In_ IDWriteFactory1* dwriteFactory,
_In_ ID2D1DeviceContext* d2dContext
)
{
m_d3dDevice = d3dDevice;
m_d2dFactory = d2dFactory;
m_dwriteFactory = dwriteFactory;
m_d2dContext = d2dContext;

DX::ThrowIfFailed(
m_d2dContext->CreateSolidColorBrush(
D2D1::ColorF(D2D1::ColorF::Red, 1.f),
&m_redBrush
)
);

DX::ThrowIfFailed(
m_d2dContext->CreateSolidColorBrush(
D2D1::ColorF(D2D1::ColorF::CornflowerBlue, 1.0f),
&m_blueBrush
)
);

DX::ThrowIfFailed(
m_d2dContext->CreateSolidColorBrush(
D2D1::ColorF(D2D1::ColorF::Green, 1.f),
&m_greenBrush
)
);

DX::ThrowIfFailed(
m_d2dContext->CreateSolidColorBrush(
D2D1::ColorF(D2D1::ColorF::White, 1.f),
&m_whiteBrush
)
);

DX::ThrowIfFailed(
m_d2dContext->CreateSolidColorBrush(
D2D1::ColorF(D2D1::ColorF::Black, 1.f),
&m_blackBrush
)
);

DX::ThrowIfFailed(
m_d2dContext->CreateSolidColorBrush(
D2D1::ColorF(D2D1::ColorF::Yellow, 1.f),
&m_yellowBrush
)
);

DX::ThrowIfFailed(
m_d2dContext->CreateSolidColorBrush(
D2D1::ColorF(D2D1::ColorF::White, 0.0f),
&m_clearBrush
)
);
DX::ThrowIfFailed(
m_d2dFactory->CreateEllipseGeometry(
D2D1::Ellipse(
D2D1::Point2F(256.0f, 256.0f),
50.0f,
50.0f
),
&m_circleGeometry1)
);

DX::ThrowIfFailed(
m_d2dFactory->CreateEllipseGeometry(
D2D1::Ellipse(
D2D1::Point2F(256.0f, 256.0f),
100.0f,
100.0f
),
&m_circleGeometry2)
);

DX::ThrowIfFailed(
m_d2dFactory->CreateEllipseGeometry(
D2D1::Ellipse(
D2D1::Point2F(256.0f, 256.0f),
150.0f,
150.0f
),
&m_circleGeometry3)
);

DX::ThrowIfFailed(
m_d2dFactory->CreateEllipseGeometry(
D2D1::Ellipse(
D2D1::Point2F(256.0f, 256.0f),
200.0f,
200.0f
),
&m_circleGeometry4)
);

DX::ThrowIfFailed(
m_d2dFactory->CreateEllipseGeometry(
D2D1::Ellipse(
D2D1::Point2F(256.0f, 256.0f),
250.0f,
250.0f
),
&m_circleGeometry5)
);

DX::ThrowIfFailed(
m_dwriteFactory->CreateTextFormat(
L"Segoe UI",
nullptr,
DWRITE_FONT_WEIGHT_LIGHT,
DWRITE_FONT_STYLE_NORMAL,
DWRITE_FONT_STRETCH_NORMAL,
425, // fontsize
L"en-US", // locale
&m_textFormat
)
);

// Center the text horizontally.
DX::ThrowIfFailed(
m_textFormat->SetTextAlignment(DWRITE_TEXT_ALIGNMENT_CENTER)
);

// Center the text vertically.
DX::ThrowIfFailed(
m_textFormat->SetParagraphAlignment(DWRITE_PARAGRAPH_ALIGNMENT_CENTER)
);
}
//----------------------------------------------------------------------
void TargetTexture::CreateTextureResourceView(
_In_ Platform::String^ name,
_Out_ ID3D11ShaderResourceView** textureResourceView
)
{
// Allocate an offscreen D3D surface for D2D to render our 2D content into
D3D11_TEXTURE2D_DESC texDesc;
texDesc.ArraySize = 1;
texDesc.BindFlags = D3D11_BIND_RENDER_TARGET | D3D11_BIND_SHADER_RESOURCE;
texDesc.CPUAccessFlags = 0;
texDesc.Format = DXGI_FORMAT_B8G8R8A8_UNORM;
texDesc.Height = 512;
texDesc.Width = 512;
texDesc.MipLevels = 1;
texDesc.MiscFlags = 0;
texDesc.SampleDesc.Count = 1;
texDesc.SampleDesc.Quality = 0;
texDesc.Usage = D3D11_USAGE_DEFAULT;

ComPtr<ID3D11Texture2D> offscreenTexture;
DX::ThrowIfFailed(
m_d3dDevice->CreateTexture2D(&texDesc, nullptr, &offscreenTexture)
);

// Convert the Direct2D texture into a Shader Resource View.
ComPtr<ID3D11ShaderResourceView> texture;
DX::ThrowIfFailed(
m_d3dDevice->CreateShaderResourceView(offscreenTexture.Get(), nullptr, &texture)
);
#if defined(_DEBUG)
{
char debugName[100];
int l = sprintf_s(debugName, sizeof(debugName) / sizeof(debugName[0]), "Simple3DGame Target %ls", name->Data());
DX::ThrowIfFailed(
texture->SetPrivateData(WKPDID_D3DDebugObjectName, l, debugName)
);
}
#endif

ComPtr<IDXGISurface> dxgiSurface;
DX::ThrowIfFailed(
offscreenTexture.As(&dxgiSurface)
);

// Create a D2D render target which can draw into our offscreen D3D
// surface. Given that we use a constant size for the texture, we
// fix the DPI at 96.

D2D1_BITMAP_PROPERTIES1 properties;
properties.pixelFormat.format = DXGI_FORMAT_B8G8R8A8_UNORM;
properties.pixelFormat.alphaMode = D2D1_ALPHA_MODE_PREMULTIPLIED;
properties.dpiX = 96;
properties.dpiY = 96;
properties.bitmapOptions = D2D1_BITMAP_OPTIONS_TARGET | D2D1_BITMAP_OPTIONS_CANNOT_DRAW;
properties.colorContext = nullptr;

ComPtr<ID2D1Bitmap1> renderTarget;
DX::ThrowIfFailed(
m_d2dContext->CreateBitmapFromDxgiSurface(
dxgiSurface.Get(),
&properties,
&renderTarget
)
);

m_d2dContext->SetTarget(renderTarget.Get());
float saveDpiX;
float saveDpiY;

m_d2dContext->GetDpi(&saveDpiX, &saveDpiY);
m_d2dContext->SetDpi(96.0f, 96.0f);

m_d2dContext->BeginDraw();
m_d2dContext->SetTransform(D2D1::Matrix3x2F::Identity());

D2D1_SIZE_F renderTargetSize = renderTarget->GetSize();

m_d2dContext->Clear(D2D1::ColorF(D2D1::ColorF::White));
m_d2dContext->FillGeometry(m_circleGeometry5.Get(), m_redBrush.Get());
m_d2dContext->FillGeometry(m_circleGeometry4.Get(), m_blueBrush.Get());
m_d2dContext->FillGeometry(m_circleGeometry3.Get(), m_greenBrush.Get());
m_d2dContext->FillGeometry(m_circleGeometry2.Get(), m_yellowBrush.Get());
m_d2dContext->FillGeometry(m_circleGeometry1.Get(), m_blackBrush.Get());
m_d2dContext->DrawText(
name->Data(),
name->Length(),
m_textFormat.Get(),
D2D1::RectF(0, 0, renderTargetSize.width, renderTargetSize.height),
m_whiteBrush.Get()
);

// We ignore D2DERR_RECREATE_TARGET here. This error indicates that the device
// is lost. It will be handled during the next call to Present.
HRESULT hr = m_d2dContext->EndDraw();
if (hr != D2DERR_RECREATE_TARGET)
{
DX::ThrowIfFailed(hr);
}

m_d2dContext->SetTarget(nullptr);
m_d2dContext->SetDpi(saveDpiX, saveDpiY);

*textureResourceView = texture.Detach();
}
//----------------------------------------------------------------------
void TargetTexture::CreateHitTextureResourceView(
_In_ Platform::String^ name,
_Out_ ID3D11ShaderResourceView** textureResourceView
)
{
// Allocate an offscreen D3D surface for D2D to render our 2D content into
D3D11_TEXTURE2D_DESC texDesc;
texDesc.ArraySize = 1;
texDesc.BindFlags = D3D11_BIND_RENDER_TARGET | D3D11_BIND_SHADER_RESOURCE;
texDesc.CPUAccessFlags = 0;
texDesc.Format = DXGI_FORMAT_B8G8R8A8_UNORM;
texDesc.Height = 512;
texDesc.Width = 512;
texDesc.MipLevels = 1;
texDesc.MiscFlags = 0;
texDesc.SampleDesc.Count = 1;
texDesc.SampleDesc.Quality = 0;
texDesc.Usage = D3D11_USAGE_DEFAULT;

ComPtr<ID3D11Texture2D> offscreenTexture;
DX::ThrowIfFailed(
m_d3dDevice->CreateTexture2D(&texDesc, nullptr, &offscreenTexture)
);

// Convert the Direct2D texture into a Shader Resource View.
ComPtr<ID3D11ShaderResourceView> texture;
DX::ThrowIfFailed(
m_d3dDevice->CreateShaderResourceView(offscreenTexture.Get(), nullptr, &texture)
);
#if defined(_DEBUG)
{
char debugName[100];
int l = sprintf_s(debugName, sizeof(debugName) / sizeof(debugName[0]), "Simple3DGame HitTarget %ls", name->Data());
DX::ThrowIfFailed(
texture->SetPrivateData(WKPDID_D3DDebugObjectName, l, debugName)
);
}
#endif

ComPtr<IDXGISurface> dxgiSurface;
DX::ThrowIfFailed(
offscreenTexture.As(&dxgiSurface)
);

// Create a D2D render target which can draw into our offscreen D3D
// surface. Given that we use a constant size for the texture, we
// fix the DPI at 96.

D2D1_BITMAP_PROPERTIES1 properties;
properties.pixelFormat.format = DXGI_FORMAT_B8G8R8A8_UNORM;
properties.pixelFormat.alphaMode = D2D1_ALPHA_MODE_PREMULTIPLIED;
properties.dpiX = 96;
properties.dpiY = 96;
properties.bitmapOptions = D2D1_BITMAP_OPTIONS_TARGET | D2D1_BITMAP_OPTIONS_CANNOT_DRAW;
properties.colorContext = nullptr;

ComPtr<ID2D1Bitmap1> renderTarget;
DX::ThrowIfFailed(
m_d2dContext->CreateBitmapFromDxgiSurface(
dxgiSurface.Get(),
&properties,
&renderTarget
)
);

m_d2dContext->SetTarget(renderTarget.Get());
float saveDpiX;
float saveDpiY;

m_d2dContext->GetDpi(&saveDpiX, &saveDpiY);
m_d2dContext->SetDpi(96.0f, 96.0f);

m_d2dContext->BeginDraw();
m_d2dContext->SetTransform(D2D1::Matrix3x2F::Identity());

D2D1_SIZE_F renderTargetSize = renderTarget->GetSize();

m_d2dContext->Clear(D2D1::ColorF(D2D1::ColorF::Black));
m_d2dContext->FillGeometry(m_circleGeometry5.Get(), m_yellowBrush.Get());
m_d2dContext->FillGeometry(m_circleGeometry4.Get(), m_greenBrush.Get());
m_d2dContext->FillGeometry(m_circleGeometry3.Get(), m_blueBrush.Get());
m_d2dContext->FillGeometry(m_circleGeometry2.Get(), m_redBrush.Get());
m_d2dContext->FillGeometry(m_circleGeometry1.Get(), m_whiteBrush.Get());
m_d2dContext->DrawText(
name->Data(),
name->Length(),
m_textFormat.Get(),
D2D1::RectF(0, 0, renderTargetSize.width, renderTargetSize.height),
m_blackBrush.Get()
);

// We ignore D2DERR_RECREATE_TARGET here. This error indicates that the device
// is lost. It will be handled during the next call to Present.
HRESULT hr = m_d2dContext->EndDraw();
if (hr != D2DERR_RECREATE_TARGET)
{
DX::ThrowIfFailed(hr);
}

m_d2dContext->SetTarget(nullptr);
m_d2dContext->SetDpi(saveDpiX, saveDpiY);

*textureResourceView = texture.Detach();
}
//----------------------------------------------------------------------

Material.h

//// THIS CODE AND INFORMATION IS PROVIDED "AS IS" WITHOUT WARRANTY OF
//// ANY KIND, EITHER EXPRESSED OR IMPLIED, INCLUDING BUT NOT LIMITED TO
//// THE IMPLIED WARRANTIES OF MERCHANTABILITY AND/OR FITNESS FOR A
//// PARTICULAR PURPOSE.
////
//// Copyright (c) Microsoft Corporation. All rights reserved

#pragma once

// Material:
// This class maintains the properties that represent how an object will
// look when it is rendered. This includes the color of the object, the
// texture used to render the object, and the vertex and pixel shader that
// should be used for rendering.
// The RenderSetup method sets the appropriate values into the constantBuffer
// and calls the appropriate D3D11 context methods to set up the rendering pipeline
// in the graphics hardware.

#include "ConstantBuffers.h"

ref class Material
{
internal:
Material(
DirectX::XMFLOAT4 meshColor,
DirectX::XMFLOAT4 diffuseColor,
DirectX::XMFLOAT4 specularColor,
float specularExponent,
_In_ ID3D11ShaderResourceView* textureResourceView,
_In_ ID3D11VertexShader* vertexShader,
_In_ ID3D11PixelShader* pixelShader
);

void RenderSetup(
_In_ ID3D11DeviceContext* context,
_Inout_ ConstantBufferChangesEveryPrim* constantBuffer
);

void SetTexture(_In_ ID3D11ShaderResourceView* textureResourceView)
{
m_textureRV = textureResourceView;
}

protected private:
DirectX::XMFLOAT4 m_meshColor;
DirectX::XMFLOAT4 m_diffuseColor;
DirectX::XMFLOAT4 m_hitColor;
DirectX::XMFLOAT4 m_specularColor;
float m_specularExponent;

Microsoft::WRL::ComPtr<ID3D11VertexShader> m_vertexShader;
Microsoft::WRL::ComPtr<ID3D11PixelShader> m_pixelShader;
Microsoft::WRL::ComPtr<ID3D11ShaderResourceView> m_textureRV;
};
Material.cpp

//// THIS CODE AND INFORMATION IS PROVIDED "AS IS" WITHOUT WARRANTY OF
//// ANY KIND, EITHER EXPRESSED OR IMPLIED, INCLUDING BUT NOT LIMITED TO
//// THE IMPLIED WARRANTIES OF MERCHANTABILITY AND/OR FITNESS FOR A
//// PARTICULAR PURPOSE.
////
//// Copyright (c) Microsoft Corporation. All rights reserved

#include "pch.h"
#include "Material.h"
#include "GameConstants.h"

using namespace DirectX;

//--------------------------------------------------------------------------------

Material::Material(
XMFLOAT4 meshColor,
XMFLOAT4 diffuseColor,
XMFLOAT4 specularColor,
float specularExponent,
_In_ ID3D11ShaderResourceView* textureResourceView,
_In_ ID3D11VertexShader* vertexShader,
_In_ ID3D11PixelShader* pixelShader
)
{
m_meshColor = meshColor;
m_diffuseColor = diffuseColor;
m_specularColor = specularColor;
m_specularExponent = specularExponent;

m_vertexShader = vertexShader;
m_pixelShader = pixelShader;
m_textureRV = textureResourceView;
}

//--------------------------------------------------------------------------------

void Material::RenderSetup(
_In_ ID3D11DeviceContext* context,
_Inout_ ConstantBufferChangesEveryPrim* constantBuffer
)
{
constantBuffer->meshColor = m_meshColor;
constantBuffer->specularColor = m_specularColor;
constantBuffer->specularPower = m_specularExponent;
constantBuffer->diffuseColor = m_diffuseColor;

context->PSSetShaderResources(0, 1, m_textureRV.GetAddressOf());
context->VSSetShader(m_vertexShader.Get(), nullptr, 0);
context->PSSetShader(m_pixelShader.Get(), nullptr, 0);
}

Note
This article is for Windows 10 developers writing Universal Windows Platform (UWP) apps. If you're developing
for Windows 8.x or Windows Phone 8.x, see the archived documentation.

Related topics
Create a simple UWP game with DirectX
Add a user interface

[ Updated for UWP apps on Windows 10. For Windows 8.x articles, see the archive ]
You've seen how the sample game implements the main game object as well as the basic rendering framework.
Now, let's look at how the sample game provides feedback about game state to the player. Here, you learn how
you can add simple menu options and heads-up display components on top of the 3-D graphics pipeline output.

Objective
To add basic user interface graphics and behaviors to a Universal Windows Platform (UWP) DirectX game.

The user interface overlay


While there are many ways to display text and user interface elements in a DirectX game, we are going to focus on
one: Direct2D (with DirectWrite for the text elements).
First, let's be clear about what Direct2D is not. It's not specifically designed for user interfaces or layouts, like HTML
or XAML. It doesn't provide user interface components, like list boxes or buttons, and it doesn't provide layout
components, like divs, tables, or grids.
Direct2D is a set of 2-D drawing APIs used to draw pixel-based primitives and effects. When starting out with
Direct2D, keep it simple. Complex layouts and interface behaviors need time and planning. If your game requires a
complex user interface to play, like those found in simulation and strategy games, consider XAML instead.
(For info about developing a user interface with XAML in a UWP DirectX game, see Extending the game sample.)
In this game sample, we have two major UI components: the heads-up display for the score and in-game controls;
and an overlay used to display game state text and options (such as pause info and level start options).
Using Direct2D for a heads-up display
This is the in-game heads-up display for the game sample without the game visuals. It's simple and uncluttered,
allowing the player to focus on navigating the 3-D world and shooting the targets. A good interface or heads-up
display must never impede the player's ability to process and react to the events in the game.
As you can see, the overlay consists of basic primitives: two intersecting line segments for the cross hairs, and two
rectangles for the move-look controller. In the upper-right corner, DirectWrite text informs the player of the
current number of successful hits, the number of shots the player has made, the time remaining in the level, and
the current level number. The in-game heads-up display state of the overlay is drawn in the Render method of the
GameHud class, and is coded like this:

void GameHud::Render(
_In_ Simple3DGame^ game,
_In_ ID2D1DeviceContext* d2dContext,
_In_ Windows::Foundation::Rect windowBounds
)
{
if (m_showTitle)
{
d2dContext->DrawBitmap(
m_logoBitmap.Get(),
D2D1::RectF(
GameConstants::Margin,
GameConstants::Margin,
m_logoSize.width + GameConstants::Margin,
m_logoSize.height + GameConstants::Margin
)
);
d2dContext->DrawTextLayout(
Point2F(m_logoSize.width + 2.0f * GameConstants::Margin, GameConstants::Margin),
m_titleHeaderLayout.Get(),
m_textBrush.Get()
);
d2dContext->DrawTextLayout(
Point2F(GameConstants::Margin, m_titleBodyVerticalOffset),
m_titleBodyLayout.Get(),
m_textBrush.Get()
);
}

if (game != nullptr)
{
// This section is only used after the game state has been initialized.
static const int bufferLength = 256;
static char16 wsbuffer[bufferLength];
int length = swprintf_s(
wsbuffer,
bufferLength,
L"Hits:\t%10d\nShots:\t%10d\nTime:\t%8.1f",
game->TotalHits(),
game->TotalShots(),
game->TimeRemaining()
);

d2dContext->DrawText(
wsbuffer,
length,
m_textFormatBody.Get(),
D2D1::RectF(
windowBounds.Width - GameConstants::HudRightOffset,
GameConstants::HudTopOffset,
windowBounds.Width,
GameConstants::HudTopOffset + (GameConstants::HudBodyPointSize + GameConstants::Margin) * 3
),
m_textBrush.Get()
);

// Using the Unicode characters starting at 0x2780 (➀) for the consecutive levels of the game.
// For completed levels, start with 0x278A (➊) (This is 0x2780 + 10).
uint32 levelCharacter[6];
for (uint32 i = 0; i < 6; i++)
{
levelCharacter[i] = 0x2780 + i + ((static_cast<uint32>(game->LevelCompleted()) == i) ? 10 : 0);
}
length = swprintf_s(
wsbuffer,
bufferLength,
L"%lc %lc %lc %lc %lc %lc",
levelCharacter[0],
levelCharacter[1],
levelCharacter[2],
levelCharacter[3],
levelCharacter[4],
levelCharacter[5]
);
d2dContext->DrawText(
wsbuffer,
length,
m_textFormatBodySymbol.Get(),
D2D1::RectF(
windowBounds.Width - GameConstants::HudRightOffset,
GameConstants::HudTopOffset + (GameConstants::HudBodyPointSize + GameConstants::Margin) * 3 + GameConstants::Margin,
windowBounds.Width,
GameConstants::HudTopOffset + (GameConstants::HudBodyPointSize + GameConstants::Margin) * 4
),
m_textBrush.Get()
);

if (game->IsActivePlay())
{
// Draw a rectangle for the touch input for the move control.
d2dContext->DrawRectangle(
D2D1::RectF(
0.0f,
windowBounds.Height - GameConstants::TouchRectangleSize,
GameConstants::TouchRectangleSize,
windowBounds.Height
),
m_textBrush.Get()
);
// Draw a rectangle for the touch input for the fire control.
d2dContext->DrawRectangle(
D2D1::RectF(
windowBounds.Width - GameConstants::TouchRectangleSize,
windowBounds.Height - GameConstants::TouchRectangleSize,
windowBounds.Width,
windowBounds.Height
),
m_textBrush.Get()
);

// Draw the cross hairs.
d2dContext->DrawLine(
D2D1::Point2F(windowBounds.Width / 2.0f - GameConstants::CrossHairHalfSize, windowBounds.Height / 2.0f),
D2D1::Point2F(windowBounds.Width / 2.0f + GameConstants::CrossHairHalfSize, windowBounds.Height / 2.0f),
m_textBrush.Get(),
3.0f
);
d2dContext->DrawLine(
D2D1::Point2F(windowBounds.Width / 2.0f, windowBounds.Height / 2.0f - GameConstants::CrossHairHalfSize),
D2D1::Point2F(windowBounds.Width / 2.0f, windowBounds.Height / 2.0f + GameConstants::CrossHairHalfSize),
m_textBrush.Get(),
3.0f
);
}
}
}

In this code, the Direct2D render target established for the overlay is updated to reflect the changes in the number
of hits, the time remaining, and the level number. The rectangles are drawn with calls to DrawRectangle, and the cross
hairs are drawn with a pair of calls to DrawLine.

Note You probably noticed the call to GameHud::Render takes a Windows::Foundation::Rect parameter,
which contains the size of the main window rectangle. This demonstrates an essential part of UI programming:
obtaining the size of the window in a measurement called DIPs (device-independent pixels), where a DIP is defined
as 1/96 of an inch. Direct2D scales the drawing units to actual pixels when the drawing occurs, and it does so
by using the Windows dots per inch (DPI) setting. Similarly, when you draw text using DirectWrite, you specify
DIPs rather than points for the size of the font. DIPs are expressed as floating point numbers.
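
To make the conversion concrete, here is a minimal sketch of the scaling Direct2D performs (the helper
functions below are illustrative only; they are not part of the game sample):

inline float PixelsToDips(float pixels, float dpi)
{
    // A DIP is 1/96 of an inch, so scale raw pixels by 96 / DPI.
    return pixels * 96.0f / dpi;
}

inline float DipsToPixels(float dips, float dpi)
{
    // Invert the scale to go from DIPs back to raw pixels.
    return dips * dpi / 96.0f;
}

For example, at 144 DPI (150 percent display scaling), a 768-pixel-wide window measures 768 * 96 / 144 = 512 DIPs.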

Displaying game state information with an overlay


Besides the heads-up display, the game sample has an overlay that represents five game states, all of which
feature a large black rectangle primitive with text for the player to read. (Be aware that the move-look controller
rectangles are not drawn, because they are not active in these states.) These overlay states are:
The game start overlay. We show this when the player starts the game. It contains the high score across
game sessions.
The pause state.

The level start state. We show this when the player starts a new level.

The game over state. We show this when the player fails a level.
The game stat display state. We show this when the player wins. It contains the final score the player has
achieved.

Let's look at how we initialize and draw the overlay for these five states.
Initializing and drawing the overlay
The five explicit states have some things in common: one, they all use a black rectangle in the center of the screen
as their background; two, the displayed text is either title text or body text; and three, the text uses the Segoe UI
font and is drawn on top of the black rectangle. As a result, the resources they need and the methods that
implement them are very similar.
The game sample has four methods (GameInfoOverlay::Initialize, GameInfoOverlay::SetDpi,
GameInfoOverlay::RecreateDirectXResources, and GameInfoOverlay::RecreateDpiDependentResources)
that it uses to initialize, set the dots per inch, recreate the DirectWrite resources (the text elements), and construct
this overlay for display, respectively. This is the code for these four methods:
void GameInfoOverlay::Initialize(
_In_ ID2D1Device* d2dDevice,
_In_ ID2D1DeviceContext* d2dContext,
_In_ IDWriteFactory* dwriteFactory,
_In_ float dpi)
{
m_initialized = true;

m_dwriteFactory = dwriteFactory;
m_dpi = dpi;
m_d2dDevice = d2dDevice;
m_d2dContext = d2dContext;

ComPtr<ID2D1Factory> factory;
d2dDevice->GetFactory(&factory);

DX::ThrowIfFailed(
factory.As(&m_d2dFactory)
);

RecreateDirectXResources();
}

void GameInfoOverlay::SetDpi(float dpi)
{
if (m_initialized)
{
if (dpi != m_dpi)
{
m_dpi = dpi;
RecreateDpiDependentResources();
}
}
}

void GameInfoOverlay::RecreateDirectXResources()
{
if (!m_initialized)
{
return;
}

// Create D2D Resources.
DX::ThrowIfFailed(
m_dwriteFactory->CreateTextFormat(
L"Segoe UI",
nullptr,
DWRITE_FONT_WEIGHT_MEDIUM,
DWRITE_FONT_STYLE_NORMAL,
DWRITE_FONT_STRETCH_NORMAL,
32, // font size
L"en-us", // locale
&m_textFormatTitle
)
);

DX::ThrowIfFailed(
m_dwriteFactory->CreateTextFormat(
L"Segoe UI",
nullptr,
DWRITE_FONT_WEIGHT_LIGHT,
DWRITE_FONT_STYLE_NORMAL,
DWRITE_FONT_STRETCH_NORMAL,
24, // font size
L"en-us", // locale
&m_textFormatBody
)
);

DX::ThrowIfFailed(
m_textFormatTitle->SetTextAlignment(DWRITE_TEXT_ALIGNMENT_CENTER)
);
DX::ThrowIfFailed(
m_textFormatTitle->SetParagraphAlignment(DWRITE_PARAGRAPH_ALIGNMENT_NEAR)
);
DX::ThrowIfFailed(
m_textFormatBody->SetTextAlignment(DWRITE_TEXT_ALIGNMENT_LEADING)
);
DX::ThrowIfFailed(
m_textFormatBody->SetParagraphAlignment(DWRITE_PARAGRAPH_ALIGNMENT_NEAR)
);

DX::ThrowIfFailed(
m_d2dContext->CreateSolidColorBrush(
D2D1::ColorF(D2D1::ColorF::White),
&m_textBrush
)
);
DX::ThrowIfFailed(
m_d2dContext->CreateSolidColorBrush(
D2D1::ColorF(D2D1::ColorF::Black),
&m_backgroundBrush
)
);
DX::ThrowIfFailed(
m_d2dContext->CreateSolidColorBrush(
D2D1::ColorF(0xdb7100, 1.0f),
&m_actionBrush
)
);

RecreateDpiDependentResources();
}
void GameInfoOverlay::RecreateDpiDependentResources()
{
m_levelBitmap = nullptr;

// Create a D2D bitmap to be used for Game Info Overlay when waiting to
// start a level or to display game statistics.
D2D1_BITMAP_PROPERTIES1 properties;
properties.pixelFormat.format = DXGI_FORMAT_B8G8R8A8_UNORM;
properties.pixelFormat.alphaMode = D2D1_ALPHA_MODE_PREMULTIPLIED;
properties.dpiX = m_dpi;
properties.dpiY = m_dpi;
properties.bitmapOptions = D2D1_BITMAP_OPTIONS_TARGET;
properties.colorContext = nullptr;
DX::ThrowIfFailed(
m_d2dContext->CreateBitmap(
D2D1::SizeU(
static_cast<UINT32>(GameInfoOverlayConstant::Width * m_dpi / 96.0f),
static_cast<UINT32>(GameInfoOverlayConstant::Height * m_dpi / 96.0f)
),
nullptr,
0,
&properties,
&m_levelBitmap
)
);
m_d2dContext->SetTarget(m_levelBitmap.Get());
m_d2dContext->BeginDraw();
m_d2dContext->SetTransform(D2D1::Matrix3x2F::Identity());
m_d2dContext->Clear(D2D1::ColorF(D2D1::ColorF::Black));
HRESULT hr = m_d2dContext->EndDraw();
if (hr != D2DERR_RECREATE_TARGET)
{
// The D2DERR_RECREATE_TARGET indicates there has been a problem with the underlying
// D3D device. All subsequent rendering will be ignored until the device is recreated.
// This error will be propagated and the appropriate D3D error will be returned from the
// swapchain->Present(...) call. At that point, the sample will recreate the device
// and all associated resources. As a result, the D2DERR_RECREATE_TARGET doesn't
// need to be handled here.
DX::ThrowIfFailed(hr);
}
}

The Initialize method stores references to the ID2D1Device and the ID2D1DeviceContext that the overlay object
itself draws into, obtains an ID2D1Factory from the device, and sets the m_dwriteFactory field to the
provided IDWriteFactory reference. It also records the current DPI. Then, it calls RecreateDirectXResources
to assemble and draw the overlay.
RecreateDirectXResources uses the DirectWrite factory object to create the text formats for the title and
body text strings that will be displayed on the overlay. It creates a white brush to draw the text, a black brush to
draw the background, and an orange brush to draw action messages. Then, it calls
RecreateDpiDependentResources to prepare a bitmap to draw the text on by calling
ID2D1DeviceContext::CreateBitmap. Lastly, RecreateDpiDependentResources sets the render target for the
Direct2D device context to the bitmap and clears it, which then sets each pixel in the bitmap to the color black.
Now, all the overlay needs is some text to display!
Representing game state in the overlay
Each of the five overlay states in the game sample has a corresponding method on the GameInfoOverlay object.
These methods draw a variation of the overlay to communicate explicit info to the player about the game itself.
This communication is, of course, represented as two strings: a title string, and a body string. Because the sample
already configured the resources and layout for this info in the RecreateDirectXResources method, it only needs
to provide the overlay state-specific strings.
Now, in the definition of the GameInfoOverlay class, the sample declared three rectangular areas that correspond
to specific regions of the overlay, as shown here:

static const D2D1_RECT_F titleRectangle = D2D1::RectF(50.0f, 50.0f, GameInfoOverlayConstant::Width - 50.0f, 100.0f);
static const D2D1_RECT_F bodyRectangle = D2D1::RectF(50.0f, 110.0f, GameInfoOverlayConstant::Width - 50.0f, GameInfoOverlayConstant::Height - 50.0f);
static const D2D1_RECT_F actionRectangle = D2D1::RectF(50.0f, GameInfoOverlayConstant::Height - 45.0f, GameInfoOverlayConstant::Width - 50.0f, GameInfoOverlayConstant::Height - 5.0f);

These areas each have a specific purpose:
titleRectangle is where the title text is drawn.
bodyRectangle is where the body text is drawn.
actionRectangle is where the text that informs the player to take a specific action is drawn. (It's in the bottom
left of the overlay bitmap.)
With these areas in mind, let's look at one of the state-specific methods, GameInfoOverlay::SetGameStats, and
see how the overlay is drawn.
void GameInfoOverlay::SetGameStats(int maxLevel, int hitCount, int shotCount)
{
int length;
Platform::String^ string;

m_d2dContext->SetTarget(m_levelBitmap.Get());
m_d2dContext->BeginDraw();
m_d2dContext->SetTransform(D2D1::Matrix3x2F::Identity());
m_d2dContext->FillRectangle(&titleRectangle, m_backgroundBrush.Get());
m_d2dContext->FillRectangle(&bodyRectangle, m_backgroundBrush.Get());
string = "High Score";

m_d2dContext->DrawText(
string->Data(),
string->Length(),
m_textFormatTitle.Get(),
titleRectangle,
m_textBrush.Get()
);
length = swprintf_s(
wsbuffer,
bufferLength,
L"Levels Completed %d\nTotal Points %d\nTotal Shots %d",
maxLevel,
hitCount,
shotCount
);
string = ref new Platform::String(wsbuffer, length);
m_d2dContext->DrawText(
string->Data(),
string->Length(),
m_textFormatBody.Get(),
bodyRectangle,
m_textBrush.Get()
);
HRESULT hr = m_d2dContext->EndDraw();
if (hr != D2DERR_RECREATE_TARGET)
{
// The D2DERR_RECREATE_TARGET indicates there has been a problem with the underlying
// D3D device. All subsequent rendering will be ignored until the device is recreated.
// This error will be propagated and the appropriate D3D error will be returned from the
// swapchain->Present(...) call. At that point, the sample will recreate the device
// and all associated resources. As a result, the D2DERR_RECREATE_TARGET doesn't
// need to be handled here.
DX::ThrowIfFailed(hr);
}
}

Using the Direct2D device context that the GameInfoOverlay object initialized and configured using Initialize
and RecreateDirectXResources, this method fills the title and body rectangles with black using the background
brush. It draws the text for the "High Score" string to the title rectangle and a string containing the updated game
state information to the body rectangle, using the white text brush.
The action rectangle is updated by a subsequent call to GameInfoOverlay::SetAction from a method on the
DirectXApp object, which provides the game state info needed by SetAction to determine the right message to
the player (such as "Tap to continue").
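SetAction itself isn't listed in this section, but a minimal sketch of such a method, following the same drawing
pattern as SetGameStats, might look like this. (The GameInfoOverlayCommand enumeration and its values are
assumptions made for illustration only.)

void GameInfoOverlay::SetAction(GameInfoOverlayCommand action)
{
    m_d2dContext->SetTarget(m_levelBitmap.Get());
    m_d2dContext->BeginDraw();
    m_d2dContext->SetTransform(D2D1::Matrix3x2F::Identity());

    // Clear only the action area; the title and body text stay as drawn.
    m_d2dContext->FillRectangle(&actionRectangle, m_backgroundBrush.Get());

    Platform::String^ string = "";
    switch (action)
    {
    case GameInfoOverlayCommand::TapToContinue:    // assumed value
        string = "Tap to continue";
        break;
    case GameInfoOverlayCommand::PleaseWait:       // assumed value
        string = "Level loading. Please wait ...";
        break;
    case GameInfoOverlayCommand::None:             // assumed value
        break;
    }

    if (string->Length() > 0)
    {
        // Draw the action message with the orange brush created in
        // RecreateDirectXResources so it stands out from the body text.
        m_d2dContext->DrawText(
            string->Data(),
            string->Length(),
            m_textFormatBody.Get(),
            actionRectangle,
            m_actionBrush.Get()
            );
    }

    // As elsewhere, D2DERR_RECREATE_TARGET is ignored here; device loss is
    // handled when the swap chain is presented.
    HRESULT hr = m_d2dContext->EndDraw();
    if (hr != D2DERR_RECREATE_TARGET)
    {
        DX::ThrowIfFailed(hr);
    }
}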
The overlay for any given state is chosen in the SetGameInfoOverlay method on DirectXApp, like this:
void DirectXApp::SetGameInfoOverlay(GameInfoOverlayState state)
{
m_gameInfoOverlayState = state;
switch (state)
{
case GameInfoOverlayState::Loading:
m_renderer->InfoOverlay()->SetGameLoading(m_loadingCount);
break;

case GameInfoOverlayState::GameStats:
m_renderer->InfoOverlay()->SetGameStats(
m_game->HighScore().levelCompleted + 1,
m_game->HighScore().totalHits,
m_game->HighScore().totalShots
);
break;

case GameInfoOverlayState::LevelStart:
m_renderer->InfoOverlay()->SetLevelStart(
m_game->LevelCompleted() + 1,
m_game->CurrentLevel()->Objective(),
m_game->CurrentLevel()->TimeLimit(),
m_game->BonusTime()
);
break;

case GameInfoOverlayState::GameOverCompleted:
m_renderer->InfoOverlay()->SetGameOver(
true,
m_game->LevelCompleted() + 1,
m_game->TotalHits(),
m_game->TotalShots(),
m_game->HighScore().totalHits
);
break;

case GameInfoOverlayState::GameOverExpired:
m_renderer->InfoOverlay()->SetGameOver(
false,
m_game->LevelCompleted(),
m_game->TotalHits(),
m_game->TotalShots(),
m_game->HighScore().totalHits
);
break;

case GameInfoOverlayState::Pause:
m_renderer->InfoOverlay()->SetPause();
break;
}
}

And now the game sample has a way to communicate text info to the player based on game state.
Next steps
In the next topic, Adding controls, we look at how the player interacts with the game sample, and how input
changes game state.
Complete sample code for this section
GameHud.h
//// THIS CODE AND INFORMATION IS PROVIDED "AS IS" WITHOUT WARRANTY OF
//// ANY KIND, EITHER EXPRESSED OR IMPLIED, INCLUDING BUT NOT LIMITED TO
//// THE IMPLIED WARRANTIES OF MERCHANTABILITY AND/OR FITNESS FOR A
//// PARTICULAR PURPOSE.
////
//// Copyright (c) Microsoft Corporation. All rights reserved

#pragma once

#include "Simple3DGame.h"
#include "DirectXSample.h"

ref class Simple3DGame;

ref class GameHud
{
internal:
GameHud(
_In_ Platform::String^ titleHeader,
_In_ Platform::String^ titleBody
);

void CreateDeviceIndependentResources(
_In_ IDWriteFactory* dwriteFactory,
_In_ IWICImagingFactory* wicFactory
);

void CreateDeviceResources(_In_ ID2D1DeviceContext* d2dContext);

void UpdateForWindowSizeChange(_In_ Windows::Foundation::Rect windowBounds);
void Render(
_In_ Simple3DGame^ game,
_In_ ID2D1DeviceContext* d2dContext,
_In_ Windows::Foundation::Rect windowBounds
);

private:
Microsoft::WRL::ComPtr<IDWriteFactory> m_dwriteFactory;
Microsoft::WRL::ComPtr<IWICImagingFactory> m_wicFactory;

Microsoft::WRL::ComPtr<ID2D1SolidColorBrush> m_textBrush;
Microsoft::WRL::ComPtr<IDWriteTextFormat> m_textFormatBody;
Microsoft::WRL::ComPtr<IDWriteTextFormat> m_textFormatBodySymbol;

Microsoft::WRL::ComPtr<IDWriteTextFormat> m_textFormatTitleHeader;
Microsoft::WRL::ComPtr<IDWriteTextFormat> m_textFormatTitleBody;
Microsoft::WRL::ComPtr<ID2D1Bitmap> m_logoBitmap;
Microsoft::WRL::ComPtr<IDWriteTextLayout> m_titleHeaderLayout;
Microsoft::WRL::ComPtr<IDWriteTextLayout> m_titleBodyLayout;

bool m_showTitle;
Platform::String^ m_titleHeader;
Platform::String^ m_titleBody;

float m_titleBodyVerticalOffset;
D2D1_SIZE_F m_logoSize;
D2D1_SIZE_F m_maxTitleSize;
};

GameHud.cpp

//// THIS CODE AND INFORMATION IS PROVIDED "AS IS" WITHOUT WARRANTY OF
//// ANY KIND, EITHER EXPRESSED OR IMPLIED, INCLUDING BUT NOT LIMITED TO
//// THE IMPLIED WARRANTIES OF MERCHANTABILITY AND/OR FITNESS FOR A
//// PARTICULAR PURPOSE.
////
//// Copyright (c) Microsoft Corporation. All rights reserved
#include "pch.h"
#include "GameHud.h"
#include "GameConstants.h"

using namespace Microsoft::WRL;


using namespace Windows::UI::Core;
using namespace Windows::ApplicationModel;
using namespace Windows::Foundation;
using namespace Windows::Storage;
using namespace Windows::UI::ViewManagement;
using namespace Windows::Graphics::Display;
using namespace D2D1;

//----------------------------------------------------------------------

GameHud::GameHud(
_In_ Platform::String^ titleHeader,
_In_ Platform::String^ titleBody
)
{
m_titleHeader = titleHeader;
m_titleBody = titleBody;

m_showTitle = true;
m_titleBodyVerticalOffset = GameConstants::Margin;
m_logoSize = D2D1::SizeF(0.0f, 0.0f);
}

//----------------------------------------------------------------------

void GameHud::CreateDeviceIndependentResources(
_In_ IDWriteFactory* dwriteFactory,
_In_ IWICImagingFactory* wicFactory
)
{
m_dwriteFactory = dwriteFactory;
m_wicFactory = wicFactory;

DX::ThrowIfFailed(
m_dwriteFactory->CreateTextFormat(
L"Segoe UI",
nullptr,
DWRITE_FONT_WEIGHT_LIGHT,
DWRITE_FONT_STYLE_NORMAL,
DWRITE_FONT_STRETCH_NORMAL,
GameConstants::HudBodyPointSize,
L"en-us",
&m_textFormatBody
)
);
DX::ThrowIfFailed(
m_dwriteFactory->CreateTextFormat(
L"Segoe UI Symbol",
nullptr,
DWRITE_FONT_WEIGHT_LIGHT,
DWRITE_FONT_STYLE_NORMAL,
DWRITE_FONT_STRETCH_NORMAL,
GameConstants::HudBodyPointSize,
L"en-us",
&m_textFormatBodySymbol
)
);
DX::ThrowIfFailed(
m_dwriteFactory->CreateTextFormat(
L"Segoe UI Light",
nullptr,
DWRITE_FONT_WEIGHT_LIGHT,
DWRITE_FONT_STYLE_NORMAL,
DWRITE_FONT_STRETCH_NORMAL,
GameConstants::HudTitleHeaderPointSize,
L"en-us",
&m_textFormatTitleHeader
)
);
DX::ThrowIfFailed(
m_dwriteFactory->CreateTextFormat(
L"Segoe UI Light",
nullptr,
DWRITE_FONT_WEIGHT_LIGHT,
DWRITE_FONT_STYLE_NORMAL,
DWRITE_FONT_STRETCH_NORMAL,
GameConstants::HudTitleBodyPointSize,
L"en-us",
&m_textFormatTitleBody
)
);

DX::ThrowIfFailed(m_textFormatBody->SetTextAlignment(DWRITE_TEXT_ALIGNMENT_LEADING));
DX::ThrowIfFailed(m_textFormatBody->SetParagraphAlignment(DWRITE_PARAGRAPH_ALIGNMENT_NEAR));
DX::ThrowIfFailed(m_textFormatBodySymbol->SetTextAlignment(DWRITE_TEXT_ALIGNMENT_LEADING));
DX::ThrowIfFailed(m_textFormatBodySymbol->SetParagraphAlignment(DWRITE_PARAGRAPH_ALIGNMENT_NEAR));
DX::ThrowIfFailed(m_textFormatTitleHeader->SetTextAlignment(DWRITE_TEXT_ALIGNMENT_LEADING));
DX::ThrowIfFailed(m_textFormatTitleHeader->SetParagraphAlignment(DWRITE_PARAGRAPH_ALIGNMENT_NEAR));
DX::ThrowIfFailed(m_textFormatTitleBody->SetTextAlignment(DWRITE_TEXT_ALIGNMENT_LEADING));
DX::ThrowIfFailed(m_textFormatTitleBody->SetParagraphAlignment(DWRITE_PARAGRAPH_ALIGNMENT_NEAR));
}

//----------------------------------------------------------------------

void GameHud::CreateDeviceResources(_In_ ID2D1DeviceContext* d2dContext)
{
auto location = Package::Current->InstalledLocation;
Platform::String^ path = Platform::String::Concat(location->Path, "\\");
path = Platform::String::Concat(path, "windows-sdk.png");

ComPtr<IWICBitmapDecoder> wicBitmapDecoder;
DX::ThrowIfFailed(
m_wicFactory->CreateDecoderFromFilename(
path->Data(),
nullptr,
GENERIC_READ,
WICDecodeMetadataCacheOnDemand,
&wicBitmapDecoder
)
);

ComPtr<IWICBitmapFrameDecode> wicBitmapFrame;
DX::ThrowIfFailed(
wicBitmapDecoder->GetFrame(0, &wicBitmapFrame)
);

ComPtr<IWICFormatConverter> wicFormatConverter;
DX::ThrowIfFailed(
m_wicFactory->CreateFormatConverter(&wicFormatConverter)
);

DX::ThrowIfFailed(
wicFormatConverter->Initialize(
wicBitmapFrame.Get(),
GUID_WICPixelFormat32bppPBGRA,
WICBitmapDitherTypeNone,
nullptr,
0.0,
WICBitmapPaletteTypeCustom // The BGRA format has no palette, so this value is ignored.
)
);
double dpiX = 96.0f;
double dpiY = 96.0f;
DX::ThrowIfFailed(
wicFormatConverter->GetResolution(&dpiX, &dpiY)
);

// Create D2D resources.
DX::ThrowIfFailed(
d2dContext->CreateBitmapFromWicBitmap(
wicFormatConverter.Get(),
BitmapProperties(
PixelFormat(DXGI_FORMAT_B8G8R8A8_UNORM, D2D1_ALPHA_MODE_PREMULTIPLIED),
static_cast<float>(dpiX),
static_cast<float>(dpiY)
),
&m_logoBitmap
)
);

m_logoSize = m_logoBitmap->GetSize();

DX::ThrowIfFailed(
d2dContext->CreateSolidColorBrush(
D2D1::ColorF(D2D1::ColorF::White),
&m_textBrush
)
);
}

//----------------------------------------------------------------------

void GameHud::UpdateForWindowSizeChange(_In_ Windows::Foundation::Rect windowBounds)
{
m_maxTitleSize.width = windowBounds.Width - GameConstants::HudSafeWidth;
m_maxTitleSize.height = windowBounds.Height;

float headerWidth = m_maxTitleSize.width - (m_logoSize.width + 2 * GameConstants::Margin);

if (headerWidth > 0)
{
// Only resize the text layout for the Title area when there is enough space.
m_showTitle = true;

DX::ThrowIfFailed(
m_dwriteFactory->CreateTextLayout(
m_titleHeader->Data(),
m_titleHeader->Length(),
m_textFormatTitleHeader.Get(),
headerWidth,
m_maxTitleSize.height,
&m_titleHeaderLayout
)
);

DWRITE_TEXT_METRICS metrics = {0};
DX::ThrowIfFailed(
m_titleHeaderLayout->GetMetrics(&metrics)
);

// Compute the vertical size of the laid out header and logo. This could change
// based on the window size and the layout of the text. In some cases, the text
// may wrap.
m_titleBodyVerticalOffset = max(m_logoSize.height + GameConstants::Margin * 2, metrics.height + 2 * GameConstants::Margin);

DX::ThrowIfFailed(
m_dwriteFactory->CreateTextLayout(
m_titleBody->Data(),
m_titleBody->Length(),
m_textFormatTitleBody.Get(),
m_maxTitleSize.width,
m_maxTitleSize.height - m_titleBodyVerticalOffset,
&m_titleBodyLayout
)
);
}
else
{
// Not enough horizontal space for the titles, so just turn it off.
m_showTitle = false;
}
}

//----------------------------------------------------------------------

void GameHud::Render(
_In_ Simple3DGame^ game,
_In_ ID2D1DeviceContext* d2dContext,
_In_ Windows::Foundation::Rect windowBounds
)
{
if (m_showTitle)
{
d2dContext->DrawBitmap(
m_logoBitmap.Get(),
D2D1::RectF(
GameConstants::Margin,
GameConstants::Margin,
m_logoSize.width + GameConstants::Margin,
m_logoSize.height + GameConstants::Margin
)
);
d2dContext->DrawTextLayout(
Point2F(m_logoSize.width + 2.0f * GameConstants::Margin, GameConstants::Margin),
m_titleHeaderLayout.Get(),
m_textBrush.Get()
);
d2dContext->DrawTextLayout(
Point2F(GameConstants::Margin, m_titleBodyVerticalOffset),
m_titleBodyLayout.Get(),
m_textBrush.Get()
);
}

if (game != nullptr)
{
// This section is only used after the game state has been initialized.
static const int bufferLength = 256;
static char16 wsbuffer[bufferLength];
int length = swprintf_s(
wsbuffer,
bufferLength,
L"Hits:\t%10d\nShots:\t%10d\nTime:\t%8.1f",
game->TotalHits(),
game->TotalShots(),
game->TimeRemaining()
);

d2dContext->DrawText(
wsbuffer,
length,
m_textFormatBody.Get(),
D2D1::RectF(
windowBounds.Width - GameConstants::HudRightOffset,
GameConstants::HudTopOffset,
windowBounds.Width,
GameConstants::HudTopOffset + (GameConstants::HudBodyPointSize + GameConstants::Margin) * 3
),
m_textBrush.Get()
);

// Using the Unicode characters starting at 0x2780 (➀) for the consecutive levels of the game.
// For completed levels, start with 0x278A (➊) (this is 0x2780 + 10).
uint32 levelCharacter[6];
for (uint32 i = 0; i < 6; i++)
{
levelCharacter[i] = 0x2780 + i + ((static_cast<uint32>(game->LevelCompleted()) > i) ? 10 : 0);
}
length = swprintf_s(
wsbuffer,
bufferLength,
L"%lc %lc %lc %lc %lc %lc",
levelCharacter[0],
levelCharacter[1],
levelCharacter[2],
levelCharacter[3],
levelCharacter[4],
levelCharacter[5]
);
d2dContext->DrawText(
wsbuffer,
length,
m_textFormatBodySymbol.Get(),
D2D1::RectF(
windowBounds.Width - GameConstants::HudRightOffset,
GameConstants::HudTopOffset + (GameConstants::HudBodyPointSize + GameConstants::Margin) * 3 + GameConstants::Margin,
windowBounds.Width,
GameConstants::HudTopOffset + (GameConstants::HudBodyPointSize + GameConstants::Margin) * 4
),
m_textBrush.Get()
);

if (game->IsActivePlay())
{
// Draw a rectangle for the touch input for the move control.
d2dContext->DrawRectangle(
D2D1::RectF(
0.0f,
windowBounds.Height - GameConstants::TouchRectangleSize,
GameConstants::TouchRectangleSize,
windowBounds.Height
),
m_textBrush.Get()
);
// Draw a rectangle for the touch input for the fire control.
d2dContext->DrawRectangle(
D2D1::RectF(
windowBounds.Width - GameConstants::TouchRectangleSize,
windowBounds.Height - GameConstants::TouchRectangleSize,
windowBounds.Width,
windowBounds.Height
),
m_textBrush.Get()
);

// Draw the cross hairs.
d2dContext->DrawLine(
D2D1::Point2F(windowBounds.Width / 2.0f - GameConstants::CrossHairHalfSize, windowBounds.Height / 2.0f),
D2D1::Point2F(windowBounds.Width / 2.0f + GameConstants::CrossHairHalfSize, windowBounds.Height / 2.0f),
m_textBrush.Get(),
3.0f
);
d2dContext->DrawLine(
D2D1::Point2F(windowBounds.Width / 2.0f, windowBounds.Height / 2.0f - GameConstants::CrossHairHalfSize),
D2D1::Point2F(windowBounds.Width / 2.0f, windowBounds.Height / 2.0f + GameConstants::CrossHairHalfSize),
m_textBrush.Get(),
3.0f
);
}
}
}

GameInfoOverlay.h

//// THIS CODE AND INFORMATION IS PROVIDED "AS IS" WITHOUT WARRANTY OF
//// ANY KIND, EITHER EXPRESSED OR IMPLIED, INCLUDING BUT NOT LIMITED TO
//// THE IMPLIED WARRANTIES OF MERCHANTABILITY AND/OR FITNESS FOR A
//// PARTICULAR PURPOSE.
////
//// Copyright (c) Microsoft Corporation. All rights reserved

#pragma once

namespace GameInfoOverlayConstant
{
static const float Width = 750.0f;
static const float Height = 380.0f;
};

enum class GameInfoOverlayCommand
{
None,
TapToContinue,
PleaseWait,
PlayAgain,
};

ref class GameInfoOverlay
{
internal:
GameInfoOverlay();

void Initialize(
_In_ ID2D1Device* d2dDevice,
_In_ ID2D1DeviceContext* d2dContext,
_In_ IDWriteFactory* dwriteFactory,
_In_ float dpi
);

void RecreateDirectXResources();
void SetDpi(float dpi);

void SetGameLoading(uint32 dots);
void SetGameStats(int maxLevel, int hitCount, int shotCount);
void SetGameOver(bool win, int maxLevel, int hitCount, int shotCount, int highScore);
void SetLevelStart(int level, Platform::String^ objective, float timeLimit, float bonusTime);
void SetPause();
void SetAction(GameInfoOverlayCommand action);
void HideGameInfoOverlay() { m_visible = false; };
void ShowGameInfoOverlay() { m_visible = true; };
bool Visible() { return m_visible; };
ID2D1Bitmap1* Bitmap() { return m_levelBitmap.Get(); }

private:
void RecreateDpiDependentResources();

bool m_initialized;
float m_dpi;
bool m_visible;

Microsoft::WRL::ComPtr<ID2D1Factory1> m_d2dFactory;
Microsoft::WRL::ComPtr<ID2D1Device> m_d2dDevice;
Microsoft::WRL::ComPtr<ID2D1DeviceContext> m_d2dContext;
Microsoft::WRL::ComPtr<IDWriteFactory> m_dwriteFactory;
Microsoft::WRL::ComPtr<ID2D1Bitmap1> m_levelBitmap;
Microsoft::WRL::ComPtr<IDWriteTextFormat> m_textFormatTitle;
Microsoft::WRL::ComPtr<IDWriteTextFormat> m_textFormatBody;
Microsoft::WRL::ComPtr<ID2D1SolidColorBrush> m_textBrush;
Microsoft::WRL::ComPtr<ID2D1SolidColorBrush> m_backgroundBrush;
Microsoft::WRL::ComPtr<ID2D1SolidColorBrush> m_actionBrush;
};

GameInfoOverlay.cpp

//// THIS CODE AND INFORMATION IS PROVIDED "AS IS" WITHOUT WARRANTY OF
//// ANY KIND, EITHER EXPRESSED OR IMPLIED, INCLUDING BUT NOT LIMITED TO
//// THE IMPLIED WARRANTIES OF MERCHANTABILITY AND/OR FITNESS FOR A
//// PARTICULAR PURPOSE.
////
//// Copyright (c) Microsoft Corporation. All rights reserved

#include "pch.h"
#include "GameInfoOverlay.h"
#include "DirectXSample.h"

using namespace Windows::UI::Core;
using namespace Windows::Foundation;
using namespace Microsoft::WRL;
using namespace Windows::UI::ViewManagement;
using namespace Windows::Graphics::Display;
using namespace D2D1;

static const D2D1_RECT_F titleRectangle = D2D1::RectF(50.0f, 50.0f, GameInfoOverlayConstant::Width - 50.0f, 100.0f);
static const D2D1_RECT_F bodyRectangle = D2D1::RectF(50.0f, 110.0f, GameInfoOverlayConstant::Width - 50.0f, GameInfoOverlayConstant::Height - 50.0f);
static const D2D1_RECT_F actionRectangle = D2D1::RectF(50.0f, GameInfoOverlayConstant::Height - 45.0f, GameInfoOverlayConstant::Width - 50.0f, GameInfoOverlayConstant::Height - 5.0f);
static const int bufferLength = 1000;
static char16 wsbuffer[bufferLength];

GameInfoOverlay::GameInfoOverlay():
m_initialized(false),
m_visible(false)
{
}
//----------------------------------------------------------------------
void GameInfoOverlay::Initialize(
_In_ ID2D1Device* d2dDevice,
_In_ ID2D1DeviceContext* d2dContext,
_In_ IDWriteFactory* dwriteFactory,
_In_ float dpi)
{
m_initialized = true;

m_dwriteFactory = dwriteFactory;
m_dpi = dpi;
m_d2dDevice = d2dDevice;
m_d2dContext = d2dContext;

ComPtr<ID2D1Factory> factory;
d2dDevice->GetFactory(&factory);

DX::ThrowIfFailed(
factory.As(&m_d2dFactory)
);

RecreateDirectXResources();
}
//----------------------------------------------------------------------
void GameInfoOverlay::SetDpi(float dpi)
{
if (m_initialized)
{
if (dpi != m_dpi)
{
m_dpi = dpi;
RecreateDpiDependentResources();
}
}
}
//----------------------------------------------------------------------
void GameInfoOverlay::RecreateDirectXResources()
{
if (!m_initialized)
{
return;
}

// Create D2D resources.
DX::ThrowIfFailed(
m_dwriteFactory->CreateTextFormat(
L"Segoe UI",
nullptr,
DWRITE_FONT_WEIGHT_MEDIUM,
DWRITE_FONT_STYLE_NORMAL,
DWRITE_FONT_STRETCH_NORMAL,
32, // font size
L"en-us", // locale
&m_textFormatTitle
)
);

DX::ThrowIfFailed(
m_dwriteFactory->CreateTextFormat(
L"Segoe UI",
nullptr,
DWRITE_FONT_WEIGHT_LIGHT,
DWRITE_FONT_STYLE_NORMAL,
DWRITE_FONT_STRETCH_NORMAL,
24, // font size
L"en-us", // locale
&m_textFormatBody
)
);

DX::ThrowIfFailed(
m_textFormatTitle->SetTextAlignment(DWRITE_TEXT_ALIGNMENT_CENTER)
);
DX::ThrowIfFailed(
m_textFormatTitle->SetParagraphAlignment(DWRITE_PARAGRAPH_ALIGNMENT_NEAR)
);
DX::ThrowIfFailed(
m_textFormatBody->SetTextAlignment(DWRITE_TEXT_ALIGNMENT_LEADING)
);
DX::ThrowIfFailed(
m_textFormatBody->SetParagraphAlignment(DWRITE_PARAGRAPH_ALIGNMENT_NEAR)
);

DX::ThrowIfFailed(
m_d2dContext->CreateSolidColorBrush(
D2D1::ColorF(D2D1::ColorF::White),
&m_textBrush
)
);
DX::ThrowIfFailed(
m_d2dContext->CreateSolidColorBrush(
D2D1::ColorF(D2D1::ColorF::Black),
&m_backgroundBrush
)
);
DX::ThrowIfFailed(
m_d2dContext->CreateSolidColorBrush(
D2D1::ColorF(0xdb7100, 1.0f),
&m_actionBrush
)
);

RecreateDpiDependentResources();
}
//----------------------------------------------------------------------
void GameInfoOverlay::RecreateDpiDependentResources()
{
m_levelBitmap = nullptr;

// Create a D2D bitmap to be used for Game Info Overlay when waiting to
// start a level or when displaying game statistics.
D2D1_BITMAP_PROPERTIES1 properties;
properties.pixelFormat.format = DXGI_FORMAT_B8G8R8A8_UNORM;
properties.pixelFormat.alphaMode = D2D1_ALPHA_MODE_PREMULTIPLIED;
properties.dpiX = m_dpi;
properties.dpiY = m_dpi;
properties.bitmapOptions = D2D1_BITMAP_OPTIONS_TARGET;
properties.colorContext = nullptr;
DX::ThrowIfFailed(
m_d2dContext->CreateBitmap(
D2D1::SizeU(
static_cast<UINT32>(GameInfoOverlayConstant::Width * m_dpi / 96.0f),
static_cast<UINT32>(GameInfoOverlayConstant::Height * m_dpi / 96.0f)
),
nullptr,
0,
&properties,
&m_levelBitmap
)
);
m_d2dContext->SetTarget(m_levelBitmap.Get());
m_d2dContext->BeginDraw();
m_d2dContext->SetTransform(D2D1::Matrix3x2F::Identity());
m_d2dContext->Clear(D2D1::ColorF(D2D1::ColorF::Black));
HRESULT hr = m_d2dContext->EndDraw();
if (hr != D2DERR_RECREATE_TARGET)
{
// The D2DERR_RECREATE_TARGET indicates there has been a problem with the underlying
// D3D device. All subsequent rendering will be ignored until the device is recreated.
// This error will be propagated and the appropriate D3D error will be returned from the
// swapchain->Present(...) call. At that point, the sample will recreate the device
// and all associated resources. As a result, the D2DERR_RECREATE_TARGET doesn't
// need to be handled here.
DX::ThrowIfFailed(hr);
}
}
//----------------------------------------------------------------------
void GameInfoOverlay::SetGameLoading(uint32 dots)
{
int length;
Platform::String^ string = "Loading Resources";

m_d2dContext->SetTarget(m_levelBitmap.Get());
m_d2dContext->BeginDraw();
m_d2dContext->SetTransform(D2D1::Matrix3x2F::Identity());
m_d2dContext->FillRectangle(&titleRectangle, m_backgroundBrush.Get());
m_d2dContext->FillRectangle(&bodyRectangle, m_backgroundBrush.Get());
m_d2dContext->FillRectangle(&actionRectangle, m_backgroundBrush.Get());

m_d2dContext->DrawText(
string->Data(),
string->Length(),
m_textFormatTitle.Get(),
titleRectangle,
m_textBrush.Get()
);

dots = dots % 10;

for (length = 0; length < 25; length++)
{
wsbuffer[length] = L' ';
}
for (uint32 i = 0; i < dots; i++)
{
wsbuffer[length++] = 0x25CF; // This is a Dot character in the font.
wsbuffer[length++] = L' ';
wsbuffer[length++] = L' ';
wsbuffer[length++] = L' ';
}

m_d2dContext->DrawText(
wsbuffer,
length,
m_textFormatBody.Get(),
bodyRectangle,
m_actionBrush.Get()
);

HRESULT hr = m_d2dContext->EndDraw();
if (hr != D2DERR_RECREATE_TARGET)
{
// The D2DERR_RECREATE_TARGET indicates there has been a problem with the underlying
// D3D device. All subsequent rendering will be ignored until the device is recreated.
// This error will be propagated and the appropriate D3D error will be returned from the
// swapchain->Present(...) call. At that point, the sample will recreate the device
// and all associated resources. As a result, the D2DERR_RECREATE_TARGET doesn't
// need to be handled here.
DX::ThrowIfFailed(hr);
}
}
//----------------------------------------------------------------------
void GameInfoOverlay::SetGameStats(int maxLevel, int hitCount, int shotCount)
{
int length;
Platform::String^ string;

m_d2dContext->SetTarget(m_levelBitmap.Get());
m_d2dContext->BeginDraw();
m_d2dContext->SetTransform(D2D1::Matrix3x2F::Identity());
m_d2dContext->FillRectangle(&titleRectangle, m_backgroundBrush.Get());
m_d2dContext->FillRectangle(&bodyRectangle, m_backgroundBrush.Get());
string = "High Score";

m_d2dContext->DrawText(
string->Data(),
string->Length(),
m_textFormatTitle.Get(),
titleRectangle,
m_textBrush.Get()
);
length = swprintf_s(
wsbuffer,
bufferLength,
L"Levels Completed %d\nTotal Points %d\nTotal Shots %d",
maxLevel,
hitCount,
shotCount
);
string = ref new Platform::String(wsbuffer, length);
m_d2dContext->DrawText(
string->Data(),
string->Length(),
m_textFormatBody.Get(),
bodyRectangle,
m_textBrush.Get()
);
HRESULT hr = m_d2dContext->EndDraw();
if (hr != D2DERR_RECREATE_TARGET)
{
// The D2DERR_RECREATE_TARGET indicates there has been a problem with the underlying
// D3D device. All subsequent rendering will be ignored until the device is recreated.
// This error will be propagated and the appropriate D3D error will be returned from the
// swapchain->Present(...) call. At that point, the sample will recreate the device
// and all associated resources. As a result, the D2DERR_RECREATE_TARGET doesn't
// need to be handled here.
DX::ThrowIfFailed(hr);
}
}
//----------------------------------------------------------------------
void GameInfoOverlay::SetGameOver(bool win, int maxLevel, int hitCount, int shotCount, int highScore)
{
int length;
Platform::String^ string;

m_d2dContext->SetTarget(m_levelBitmap.Get());
m_d2dContext->BeginDraw();
m_d2dContext->SetTransform(D2D1::Matrix3x2F::Identity());
m_d2dContext->FillRectangle(&titleRectangle, m_backgroundBrush.Get());
m_d2dContext->FillRectangle(&bodyRectangle, m_backgroundBrush.Get());
if (win)
{
string = "You WON!";
}
else
{
string = "Game Over";
}
m_d2dContext->DrawText(
string->Data(),
string->Length(),
m_textFormatTitle.Get(),
titleRectangle,
m_textBrush.Get()
);
length = swprintf_s(
wsbuffer,
bufferLength,
L"Levels Completed %d\nTotal Points %d\nTotal Shots %d\n\nHigh Score %d\n",
maxLevel,
hitCount,
shotCount,
highScore
);
m_d2dContext->DrawText(
wsbuffer,
length,
m_textFormatBody.Get(),
bodyRectangle,
m_textBrush.Get()
);
HRESULT hr = m_d2dContext->EndDraw();
if (hr != D2DERR_RECREATE_TARGET)
{
// The D2DERR_RECREATE_TARGET indicates there has been a problem with the underlying
// D3D device. All subsequent rendering will be ignored until the device is recreated.
// This error will be propagated and the appropriate D3D error will be returned from the
// swapchain->Present(...) call. At that point, the sample will recreate the device
// and all associated resources. As a result, the D2DERR_RECREATE_TARGET doesn't
// need to be handled here.
DX::ThrowIfFailed(hr);
}
}
//----------------------------------------------------------------------
void GameInfoOverlay::SetLevelStart(int level, Platform::String^ objective, float timeLimit, float bonusTime)
{
int length;
Platform::String^ string;

m_d2dContext->SetTarget(m_levelBitmap.Get());
m_d2dContext->BeginDraw();
m_d2dContext->SetTransform(D2D1::Matrix3x2F::Identity());
m_d2dContext->FillRectangle(&titleRectangle, m_backgroundBrush.Get());
m_d2dContext->FillRectangle(&bodyRectangle, m_backgroundBrush.Get());
length = swprintf_s(wsbuffer, bufferLength, L"Level %d", level);
m_d2dContext->DrawText(
wsbuffer,
length,
m_textFormatTitle.Get(),
titleRectangle,
m_textBrush.Get()
);

if (bonusTime > 0.0f)
{
length = swprintf_s(
wsbuffer,
bufferLength,
L"Objective: %s\nTime Limit: %6.1f sec\nBonus Time: %6.1f sec\n",
objective->Data(),
timeLimit,
bonusTime
);
}
else
{
length = swprintf_s(
wsbuffer,
bufferLength,
L"Objective: %s\nTime Limit: %6.1f sec\n",
objective->Data(),
timeLimit
);
}
string = ref new Platform::String(wsbuffer, length);
m_d2dContext->DrawText(
string->Data(),
string->Length(),
m_textFormatBody.Get(),
bodyRectangle,
m_textBrush.Get()
);
HRESULT hr = m_d2dContext->EndDraw();
if (hr != D2DERR_RECREATE_TARGET)
{
// The D2DERR_RECREATE_TARGET indicates there has been a problem with the underlying
// D3D device. All subsequent rendering will be ignored until the device is recreated.
// This error will be propagated and the appropriate D3D error will be returned from the
// swapchain->Present(...) call. At that point, the sample will recreate the device
// and all associated resources. As a result, the D2DERR_RECREATE_TARGET doesn't
// need to be handled here.
DX::ThrowIfFailed(hr);
}
}
//----------------------------------------------------------------------
void GameInfoOverlay::SetPause()
{
Platform::String^ string;

m_d2dContext->SetTarget(m_levelBitmap.Get());
m_d2dContext->BeginDraw();
m_d2dContext->SetTransform(D2D1::Matrix3x2F::Identity());
m_d2dContext->FillRectangle(&titleRectangle, m_backgroundBrush.Get());
m_d2dContext->FillRectangle(&bodyRectangle, m_backgroundBrush.Get());
string = "Game Paused";

m_d2dContext->DrawText(
string->Data(),
string->Length(),
m_textFormatTitle.Get(),
bodyRectangle,
m_textBrush.Get()
);
HRESULT hr = m_d2dContext->EndDraw();
if (hr != D2DERR_RECREATE_TARGET)
{
// The D2DERR_RECREATE_TARGET indicates there has been a problem with the underlying
// D3D device. All subsequent rendering will be ignored until the device is recreated.
// This error will be propagated and the appropriate D3D error will be returned from the
// swapchain->Present(...) call. At that point, the sample will recreate the device
// and all associated resources. As a result, the D2DERR_RECREATE_TARGET doesn't
// need to be handled here.
DX::ThrowIfFailed(hr);
}
}
//----------------------------------------------------------------------
void GameInfoOverlay::SetAction(GameInfoOverlayCommand action)
{
Platform::String^ string;

m_d2dContext->SetTarget(m_levelBitmap.Get());
m_d2dContext->BeginDraw();
m_d2dContext->SetTransform(D2D1::Matrix3x2F::Identity());
m_d2dContext->FillRectangle(&actionRectangle, m_backgroundBrush.Get());

switch (action)
{
case GameInfoOverlayCommand::PlayAgain:
string = "Tap to play again ...";
break;
case GameInfoOverlayCommand::PleaseWait:
string = "Level loading, please wait ...";
break;
case GameInfoOverlayCommand::TapToContinue:
string = "Tap to continue ...";
break;
default:
string = "";
break;
}
if (action != GameInfoOverlayCommand::None)
{
m_d2dContext->DrawText(
string->Data(),
string->Length(),
m_textFormatBody.Get(),
actionRectangle,
m_actionBrush.Get()
);
}
HRESULT hr = m_d2dContext->EndDraw();
if (hr != D2DERR_RECREATE_TARGET)
{
// The D2DERR_RECREATE_TARGET indicates there has been a problem with the underlying
// D3D device. All subsequent rendering will be ignored until the device is recreated.
// This error will be propagated and the appropriate D3D error will be returned from the
// swapchain->Present(...) call. At that point, the sample will recreate the device
// and all associated resources. As a result, the D2DERR_RECREATE_TARGET doesn't
// need to be handled here.
DX::ThrowIfFailed(hr);
}
}

Related topics
Create a simple UWP game with DirectX
Add controls

[ Updated for UWP apps on Windows 10. For Windows 8.x articles, see the archive ]
Now, we take a look at how the game sample implements move-look controls in a 3-D game, and how to develop
basic touch, mouse, and game controller controls.

Objective
To implement mouse/keyboard, touch, and Xbox controller controls in a Universal Windows Platform (UWP)
game with DirectX.

UWP game apps and controls
A good UWP game supports a broad variety of interfaces. A potential player might have Windows 10 on a tablet
with no physical buttons, or a media PC with an Xbox controller attached, or the latest desktop gaming rig with a
high-performance mouse and gaming keyboard. Your game should support all of these devices if the game
design allows it.
This sample supports all three. It's a simple first-person shooting game, and the move-look controls that are
standard for this genre are easily implemented for all three types of input.
For more info about controls, and move-look controls specifically, see Move-look controls for games and Touch
controls for games.

Common control behaviors
Touch controls and mouse/keyboard controls have a very similar core implementation. In a UWP app, a pointer is
simply a point on the screen. You can move it by sliding the mouse or sliding your finger on the touch screen. As
a result, you can register for a single set of events, and not worry about whether the player is using a mouse or a
touch screen to move and press the pointer.
When the MoveLookController class in the game sample is initialized, it registers for four pointer-specific
events and one mouse-specific event:
CoreWindow::PointerPressed. The left or right mouse button was pressed (and held), or the touch surface
was touched.
CoreWindow::PointerMoved. The mouse moved, or a drag action was made on the touch surface.
CoreWindow::PointerReleased. The left mouse button was released, or the object contacting the touch
surface was lifted.
CoreWindow::PointerExited. The pointer moved out of the main window.
Windows::Devices::Input::MouseDevice::MouseMoved. The mouse moved a certain distance. Be aware that we are only
interested in mouse movement delta values, and not the current x-y position.
void MoveLookController::Initialize(
_In_ CoreWindow^ window
)
{
window->PointerPressed +=
ref new TypedEventHandler<CoreWindow^, PointerEventArgs^>(this, &MoveLookController::OnPointerPressed);

window->PointerMoved +=
ref new TypedEventHandler<CoreWindow^, PointerEventArgs^>(this, &MoveLookController::OnPointerMoved);

window->PointerReleased +=
ref new TypedEventHandler<CoreWindow^, PointerEventArgs^>(this, &MoveLookController::OnPointerReleased);

window->PointerExited +=
ref new TypedEventHandler<CoreWindow^, PointerEventArgs^>(this, &MoveLookController::OnPointerExited);

window->KeyDown +=
ref new TypedEventHandler<CoreWindow^, KeyEventArgs^>(this, &MoveLookController::OnKeyDown);

window->KeyUp +=
ref new TypedEventHandler<CoreWindow^, KeyEventArgs^>(this, &MoveLookController::OnKeyUp);

// A separate handler for relative mouse movement events.
Windows::Devices::Input::MouseDevice::GetForCurrentView()->MouseMoved +=
ref new TypedEventHandler<MouseDevice^, MouseEventArgs^>(this, &MoveLookController::OnMouseMoved);

ResetState();
m_state = MoveLookControllerState::None;

m_pitch = 0.0f;
m_yaw = 0.0f;
}

The Xbox controller is handled separately, using the XInput APIs. We talk about the implementation of game
controller controls in a bit.
In the game sample, the MoveLookController class has three controller-specific states, regardless of the control
type:
None. This is the initialized state for the controller. The game is not anticipating any controller input.
WaitForInput. The game is paused and is waiting for the player to continue.
Active. The game is running, processing player input.
The Active state is the state when the player is actively playing the game. During this state, the
MoveLookController instance is processing input events from all enabled input devices and interpreting the
player's intentions based on the aggregated event data. As a result, it updates the velocity and look direction (the
view plane normal) of the player's view and shares the updated data with the game after Update is called from
the game loop.
Be aware that the player can take more than one action at the same time. For example, he or she could be firing
spheres while moving the camera. All of these inputs are tracked in the Active state, with different pointer IDs
corresponding to different pointer actions. This is necessary because from a player's perspective, a pointer event
in the firing rectangle is different from one in the move rectangle or in the rest of the screen.
When a PointerPressed event is received, the MoveLookController obtains the pointer ID value created by the
window. The pointer ID represents a specific type of input. For example, on a multi-touch device, there may be
several different active inputs at the same time. The IDs are used to keep track of which input the player is using.
If one event is in the move rectangle of the touch screen, a pointer ID is assigned to track any pointer events in
the move rectangle. Other pointer events in the fire rectangle are tracked separately, with a separate pointer ID. (We
talk about this some more in the section on touch controls.)
Input from the mouse has yet another ID and is also handled separately.
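In the game sample, this routing is done with separate member variables (m_movePointerID, m_lookPointerID, and
m_firePointerID) plus matching "in use" flags. A generic sketch of the same idea, with illustrative names only,
might look like this:

// A minimal sketch of pointer-ID ownership; not the sample's exact code.
struct PointerOwner
{
    bool inUse = false;   // Is a pointer currently captured by this control?
    uint32 pointerID = 0; // The ID of the capturing pointer.

    bool TryCapture(uint32 id)
    {
        if (inUse)
        {
            return false; // Another pointer already owns this control.
        }
        inUse = true;
        pointerID = id;
        return true;
    }

    bool Owns(uint32 id) { return inUse && pointerID == id; }
    void Release() { inUse = false; pointerID = 0; }
};

On PointerPressed, the control whose screen region contains the point calls TryCapture; on PointerMoved and
PointerReleased, each control tests Owns to decide whether the event belongs to it.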
After the pointer events have been mapped to a specific game action, it's time to update the data the
MoveLookController object shares with the main game loop.
When called, the Update method in the game sample processes the input and updates the velocity and look
direction variables (m_velocity and m_lookdirection), which the game loop then retrieves by calling the public
Velocity and LookDirection methods on the MoveLookController instance.

void MoveLookController::Update()
{
UpdateGameController();

if (m_moveInUse)
{
// Move control.
XMFLOAT2 pointerDelta;
pointerDelta.x = m_movePointerPosition.x - m_moveFirstDown.x;
pointerDelta.y = m_movePointerPosition.y - m_moveFirstDown.y;

// Figure out the command from the virtual joystick.
XMFLOAT3 commandDirection = XMFLOAT3(0.0f, 0.0f, 0.0f);
if (fabsf(pointerDelta.x) > 16.0f) // Leave a 32 pixel-wide dead spot for being still.
m_moveCommand.x -= pointerDelta.x / fabsf(pointerDelta.x);

if (fabsf(pointerDelta.y) > 16.0f)
m_moveCommand.y -= pointerDelta.y / fabsf(pointerDelta.y);
}

// Poll our state bits set by the keyboard input events.
if (m_forward)
{
m_moveCommand.y += 1.0f;
}
if (m_back)
{
m_moveCommand.y -= 1.0f;
}
if (m_left)
{
m_moveCommand.x += 1.0f;
}
if (m_right)
{
m_moveCommand.x -= 1.0f;
}
if (m_up)
{
m_moveCommand.z += 1.0f;
}
if (m_down)
{
m_moveCommand.z -= 1.0f;
}

// Make sure that 45deg cases are not faster.
if (fabsf(m_moveCommand.x) > 0.1f ||
fabsf(m_moveCommand.y) > 0.1f ||
fabsf(m_moveCommand.z) > 0.1f)
{
XMStoreFloat3(&m_moveCommand, XMVector3Normalize(XMLoadFloat3(&m_moveCommand)));
}

// Rotate command to align with our direction (world coordinates).
XMFLOAT3 wCommand;
wCommand.x = m_moveCommand.x * cosf(m_yaw) - m_moveCommand.y * sinf(m_yaw);
wCommand.y = m_moveCommand.x * sinf(m_yaw) + m_moveCommand.y * cosf(m_yaw);
wCommand.z = m_moveCommand.z;

// Scale for sensitivity adjustment.
// Our velocity is based on the command; y is up.
m_velocity.x = -wCommand.x * MOVEMENT_GAIN;
m_velocity.z = wCommand.y * MOVEMENT_GAIN;
m_velocity.y = wCommand.z * MOVEMENT_GAIN;

// Clear movement input accumulator for use during next frame.
m_moveCommand = XMFLOAT3(0.0f, 0.0f, 0.0f);
}
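
The Update method itself doesn't move the camera; the game loop applies the values it retrieves. A hypothetical
consumption site might look like this (m_controller, m_eyePosition, and frameTime are illustrative names, not the
sample's exact code):

void UpdateCamera(float frameTime)
{
    m_controller->Update();

    XMFLOAT3 velocity = m_controller->Velocity();           // Camera-relative velocity.
    XMFLOAT3 lookDirection = m_controller->LookDirection(); // The view plane normal.

    // Advance the eye point by one frame's worth of velocity.
    m_eyePosition.x += velocity.x * frameTime;
    m_eyePosition.y += velocity.y * frameTime;
    m_eyePosition.z += velocity.z * frameTime;

    // The view matrix is then rebuilt from the new eye point and lookDirection.
}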

The game loop can test to see if the player is firing by calling the IsFiring method on the MoveLookController
instance. The MoveLookController checks to see if the player has pressed the fire button on one of the three
input types.

bool MoveLookController::IsFiring()
{
if (m_state == MoveLookControllerState::Active)
{
if (m_autoFire)
{
return (m_fireInUse || (m_mouseInUse && m_mouseLeftInUse) || m_xinputTriggerInUse);
}
else
{
if (m_firePressed)
{
m_firePressed = false;
return true;
}
}
}
return false;
}

If the player moves the pointer outside the main window of the game, or presses the pause button (the P key or
the Xbox controller start button), the game must be paused. The MoveLookController registers the press, and
informs the game loop when the loop calls the IsPauseRequested method. At that point, if IsPauseRequested returns
true, the game loop calls WaitForPress on the MoveLookController to move the controller into the
WaitForInput state. Then, the MoveLookController waits for the player to select one of the menu items to load,
continue, or exit the game, and stops processing gameplay input events until it returns to the Active state.
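A sketch of that handshake from the game loop's side might look like this (m_controller, ShowPauseMenu, and
HidePauseMenu are assumed names for illustration):

if (m_controller->IsPauseRequested())
{
    ShowPauseMenu();              // Stop gameplay updates and show the pause UI.
    m_controller->WaitForPress(); // The controller moves to the WaitForInput state.
}

// Later, while paused:
if (m_controller->IsPressComplete())
{
    HidePauseMenu();
    m_controller->Active(true);   // Resume processing gameplay input.
}
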
See the complete code sample for this section.
Now, let's look at the implementation of each of the three control types in a little more detail.

Implementing relative mouse controls
If mouse movement is detected, we want to use that movement to determine the new pitch and yaw of the camera.
We do that by implementing relative mouse controls, where we handle the relative distance the mouse has
moved (the delta between the start of the movement and the stop) as opposed to recording the absolute x-y
pixel coordinates of the motion.
To do that, we obtain the changes in the X (the horizontal motion) and the Y (the vertical motion) coordinates by
examining the MouseDelta::X and MouseDelta::Y fields on the
Windows::Devices::Input::MouseEventArgs::MouseDelta argument object returned by the MouseMoved
event.
void MoveLookController::OnMouseMoved(
_In_ MouseDevice^ /* mouseDevice */,
_In_ MouseEventArgs^ args
)
{
// Handle Mouse Input via dedicated relative movement handler.

switch (m_state)
{
case MoveLookControllerState::Active:
XMFLOAT2 mouseDelta;
mouseDelta.x = static_cast<float>(args->MouseDelta.X);
mouseDelta.y = static_cast<float>(args->MouseDelta.Y);

XMFLOAT2 rotationDelta;
rotationDelta.x = mouseDelta.x * ROTATION_GAIN; // scale for control sensitivity
rotationDelta.y = mouseDelta.y * ROTATION_GAIN;

// Update our orientation based on the command.
m_pitch -= rotationDelta.y;
m_yaw += rotationDelta.x;

// Limit pitch to straight up or straight down.
float limit = XM_PI / 2.0f - 0.01f;
m_pitch = __max(-limit, m_pitch);
m_pitch = __min(+limit, m_pitch);

// Keep longitude in same range by wrapping.
if (m_yaw > XM_PI)
{
m_yaw -= XM_PI * 2.0f;
}
else if (m_yaw < -XM_PI)
{
m_yaw += XM_PI * 2.0f;
}
break;
}
}

Implementing touch controls
Touch controls are the trickiest to develop, because they are the most complex and require the most fine-tuning
to be effective. In the game sample, a rectangle in the lower left quadrant of the screen is used as a directional
pad, where sliding your thumb left and right in this space slides the camera left and right, and sliding your thumb
up and down moves the camera forward and backward. A rectangle in the lower right quadrant of the screen can
be pressed to fire the spheres. Aiming (pitch and yaw) is controlled by sliding your finger on the parts of the
screen not reserved for moving and firing; as your finger moves, the camera (with fixed cross hairs) moves
similarly.
The move and fire rectangles are created by two methods in the sample code:

void SetMoveRect(
_In_ DirectX::XMFLOAT2 upperLeft,
_In_ DirectX::XMFLOAT2 lowerRight
);
void SetFireRect(
_In_ DirectX::XMFLOAT2 upperLeft,
_In_ DirectX::XMFLOAT2 lowerRight
);

We treat touch device pointer events for the other regions of the screen as look commands. If the screen is
resized, these rectangles must be computed again (and redrawn).
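For example, a window size changed handler might rebuild both rectangles as fixed-size squares anchored in the
lower corners. This is only a sketch; touchRectangleSize stands in for a constant such as
GameConstants::TouchRectangleSize, and m_controller is an assumed name.

void OnWindowSizeChanged(Windows::Foundation::Rect bounds)
{
    float size = touchRectangleSize;

    // Move control: a square anchored to the lower-left corner.
    m_controller->SetMoveRect(
        XMFLOAT2(0.0f, bounds.Height - size),
        XMFLOAT2(size, bounds.Height)
        );

    // Fire control: a square anchored to the lower-right corner.
    m_controller->SetFireRect(
        XMFLOAT2(bounds.Width - size, bounds.Height - size),
        XMFLOAT2(bounds.Width, bounds.Height)
        );
}
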
If a touch device pointer event is raised in one of these regions and the game state is set to Active, it's assigned a
pointer ID, as we discussed earlier.

void MoveLookController::OnPointerPressed(
_In_ CoreWindow^ sender,
_In_ PointerEventArgs^ args
)
{
PointerPoint^ point = args->CurrentPoint;
UINT32 pointerID = point->PointerId;
Point pointerPosition = point->Position;
PointerPointProperties^ pointProperties = point->Properties;
auto pointerDevice = point->PointerDevice;
auto pointerDeviceType = pointerDevice->PointerDeviceType;

XMFLOAT2 position = XMFLOAT2(pointerPosition.X, pointerPosition.Y); // convert to allow math

switch (m_state)
{
case MoveLookControllerState::WaitForInput:
// ...
// Game is paused, wait for click inside the game window.
// ...
break;

case MoveLookControllerState::Active:
switch (pointerDeviceType)
{
case Windows::Devices::Input::PointerDeviceType::Touch:
// Check to see if this pointer is in the move control.
if (position.x > m_moveUpperLeft.x &&
position.x < m_moveLowerRight.x &&
position.y > m_moveUpperLeft.y &&
position.y < m_moveLowerRight.y)
{
if (!m_moveInUse) // If no pointer is in this control yet:
{
// Process a DPad touch down event.
m_moveFirstDown = position; // Save the location of the initial contact.
m_movePointerID = pointerID; // Store the pointer using this control.
m_moveInUse = true;
}
}
// Check to see if this pointer is in the fire control.
else if (position.x > m_fireUpperLeft.x &&
position.x < m_fireLowerRight.x &&
position.y > m_fireUpperLeft.y &&
position.y < m_fireLowerRight.y)
{
if (!m_fireInUse)
{
m_fireLastPoint = position;
m_firePointerID = pointerID;
m_fireInUse = true;
}
}
else
{
if (!m_lookInUse) // If no pointer is in this control yet:
{
m_lookLastPoint = position; // Save the pointer for a later move.
m_lookPointerID = pointerID; // Store the pointer using this control.
m_lookLastDelta.x = m_lookLastDelta.y = 0; // These are for smoothing.
m_lookInUse = true;
}
}
}
break;

default:
// ...
// Handle mouse input here.
// ...
break;
}
break;
}
return;
}

If a PointerPressed event has occurred in one of the three control regions, the move rectangle, the fire rectangle,
or the rest of the screen (the look control), the MoveLookController assigns the pointer ID for the pointer that
fired the event to a specific variable that corresponds to the region of the screen the event was fired in. For
example, if the event occurred in the move rectangle, the m_movePointerID variable is set to the pointer ID that
fired the event. A Boolean "in use" variable (m_moveInUse, in the example) is also set to indicate that the control
has not been released yet.
Now, let's look at how the game sample handles the PointerMoved touch screen event.
void MoveLookController::OnPointerMoved(
_In_ CoreWindow^ sender,
_In_ PointerEventArgs^ args
)
{
PointerPoint^ point = args->CurrentPoint;
UINT32 pointerID = point->PointerId;
Point pointerPosition = point->Position;
PointerPointProperties^ pointProperties = point->Properties;
auto pointerDevice = point->PointerDevice;
auto pointerDeviceType = pointerDevice->PointerDeviceType;

XMFLOAT2 position = XMFLOAT2(pointerPosition.X, pointerPosition.Y); // Convert to allow math.

switch (m_state)
{
case MoveLookControllerState::Active:
// Decide which control this pointer is operating.
if (pointerID == m_movePointerID) // This is the move pointer.
{
m_movePointerPosition = position; // Save the current position.
}
else if (pointerID == m_lookPointerID) // This is the look pointer.
{
// Look control
XMFLOAT2 pointerDelta;
pointerDelta.x = position.x - m_lookLastPoint.x; // How far did the pointer move.
pointerDelta.y = position.y - m_lookLastPoint.y;

XMFLOAT2 rotationDelta;
rotationDelta.x = pointerDelta.x * ROTATION_GAIN; // Scale for control sensitivity.
rotationDelta.y = pointerDelta.y * ROTATION_GAIN;
m_lookLastPoint = position; // Save for the next time through.

// Update our orientation based on the command.
m_pitch -= rotationDelta.y;
m_yaw += rotationDelta.x;

// Limit pitch to straight up or straight down.
m_pitch = __max(-XM_PI / 2.0f, m_pitch);
m_pitch = __min(+XM_PI / 2.0f, m_pitch);
}
else if (pointerID == m_firePointerID)
{
m_fireLastPoint = position;
}
else if (pointerID == m_mousePointerID)
{
// ...
}
break;
}
}

The MoveLookController checks the pointer ID to determine where the event occurred, and takes one of the
following actions:
If the PointerMoved event occurred in the move or fire rectangle, update the pointer position for the
controller.
If the PointerMoved event occurred somewhere in the rest of the screen (defined as the look controls),
calculate the change in pitch and yaw of the look direction vector.
Lastly, let's look at how the game sample handles the PointerReleased touch screen event.
void MoveLookController::OnPointerReleased(
_In_ CoreWindow^ sender,
_In_ PointerEventArgs^ args
)
{
PointerPoint^ point = args->CurrentPoint;
UINT32 pointerID = point->PointerId;
Point pointerPosition = point->Position;
PointerPointProperties^ pointProperties = point->Properties;

XMFLOAT2 position = XMFLOAT2(pointerPosition.X, pointerPosition.Y); // Convert to allow math.


switch (m_state)
{
case MoveLookControllerState::WaitForInput:
if (m_buttonInUse && (pointerID == m_buttonPointerID))
{
m_buttonInUse = false;
m_buttonPressed = true;
}
break;

case MoveLookControllerState::Active:
if (pointerID == m_movePointerID)
{
m_velocity = XMFLOAT3(0, 0, 0); // Stop on release.
m_moveInUse = false;
m_movePointerID = 0;
}
else if (pointerID == m_lookPointerID)
{
m_lookInUse = false;
m_lookPointerID = 0;
}
else if (pointerID == m_firePointerID)
{
m_fireInUse = false;
m_firePointerID = 0;
}
else if (pointerID == m_mousePointerID)
{
// ...
}
break;
}
}

If the ID of the pointer that fired the PointerReleased event is the ID of the previously recorded move pointer,
the MoveLookController sets the velocity to 0 because the player has stopped touching the move rectangle. If it
didn't set the velocity to 0, the player would keep moving! If you want to implement some form of inertia, this is
where you add the method that begins returning the velocity to 0 over future calls to Update from the game
loop.
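A minimal inertia sketch (not part of the sample): instead of zeroing m_velocity on release, decay it a little on
each frame until it is negligible. The damping factor here is an assumed tuning value.

// Called each frame from Update() while no move pointer is captured.
void MoveLookController::DecayVelocity()
{
    const float damping = 0.9f; // Per-frame decay factor; tune to taste.

    m_velocity.x *= damping;
    m_velocity.y *= damping;
    m_velocity.z *= damping;

    // Snap to zero once the residual motion is imperceptible.
    if (fabsf(m_velocity.x) < 0.001f) { m_velocity.x = 0.0f; }
    if (fabsf(m_velocity.y) < 0.001f) { m_velocity.y = 0.0f; }
    if (fabsf(m_velocity.z) < 0.001f) { m_velocity.z = 0.0f; }
}
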
Otherwise, if the PointerReleased event fired in the fire rectangle or the look region, the MoveLookController
resets the specific pointer IDs.
That's the basics of how touch screen controls are implemented in the game sample. Let's move on to mouse and
keyboard controls.

Implementing mouse and keyboard controls
The game sample implements these mouse and keyboard controls:
The W, S, A, and D keys move the player view forward, backward, left, and right, respectively. Pressing X and
the space bar move the view up and down, respectively.
Pressing the P key pauses the game.
Moving the mouse puts the player in control of the rotation (the pitch and yaw) of the camera view.
Clicking the left button fires a sphere.
To use the keyboard, the game sample registers for two extra events: CoreWindow::KeyUp and
CoreWindow::KeyDown, which handle the press and the release of a key, respectively.

window->KeyDown +=
ref new TypedEventHandler<CoreWindow^, KeyEventArgs^>(this, &MoveLookController::OnKeyDown);

window->KeyUp +=
ref new TypedEventHandler<CoreWindow^, KeyEventArgs^>(this, &MoveLookController::OnKeyUp);
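
The handlers themselves simply set or clear the keyboard state bits that Update polls. They aren't shown in this
excerpt, so here is a sketch consistent with the m_forward, m_back, m_left, m_right, m_up, m_down, and m_pause
members declared in MoveLookController.h; the sample's exact code may differ.

void MoveLookController::OnKeyDown(
    _In_ CoreWindow^ /* sender */,
    _In_ KeyEventArgs^ args
    )
{
    Windows::System::VirtualKey key = args->VirtualKey;

    // Map the movement keys to the state bits polled by Update().
    if (key == Windows::System::VirtualKey::W) m_forward = true;
    if (key == Windows::System::VirtualKey::S) m_back = true;
    if (key == Windows::System::VirtualKey::A) m_left = true;
    if (key == Windows::System::VirtualKey::D) m_right = true;
    if (key == Windows::System::VirtualKey::X) m_up = true;
    if (key == Windows::System::VirtualKey::Space) m_down = true;
}

void MoveLookController::OnKeyUp(
    _In_ CoreWindow^ /* sender */,
    _In_ KeyEventArgs^ args
    )
{
    Windows::System::VirtualKey key = args->VirtualKey;

    if (key == Windows::System::VirtualKey::W) m_forward = false;
    if (key == Windows::System::VirtualKey::S) m_back = false;
    if (key == Windows::System::VirtualKey::A) m_left = false;
    if (key == Windows::System::VirtualKey::D) m_right = false;
    if (key == Windows::System::VirtualKey::X) m_up = false;
    if (key == Windows::System::VirtualKey::Space) m_down = false;

    // Pause takes effect when the P key is released.
    if (key == Windows::System::VirtualKey::P) m_pause = true;
}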

The mouse is treated a little differently from the touch controls, even though it uses a pointer. Obviously, it
doesn't use the move and fire rectangles, as that would be very cumbersome for the player: how could they press
the move and fire controls at the same time? As noted earlier, the MoveLookController engages the
look controls whenever the mouse is moved, and engages the fire controls when the left mouse button is
pressed, as shown here.

void MoveLookController::OnPointerPressed(
_In_ CoreWindow^ /* sender */,
_In_ PointerEventArgs^ args
)
{
PointerPoint^ point = args->CurrentPoint;
uint32 pointerID = point->PointerId;
Point pointerPosition = point->Position;
PointerPointProperties^ pointProperties = point->Properties;
auto pointerDevice = point->PointerDevice;
auto pointerDeviceType = pointerDevice->PointerDeviceType;

XMFLOAT2 position = XMFLOAT2(pointerPosition.X, pointerPosition.Y); // convert to allow math

switch (m_state)
{
case MoveLookControllerState::WaitForInput:
if (position.x > m_buttonUpperLeft.x &&
position.x < m_buttonLowerRight.x &&
position.y > m_buttonUpperLeft.y &&
position.y < m_buttonLowerRight.y)
{
// Wait until the button is released before setting the variable.
m_buttonPointerID = pointerID;
m_buttonInUse = true;

}
break;

case MoveLookControllerState::Active:
switch (pointerDeviceType)
{
case Windows::Devices::Input::PointerDeviceType::Touch:
// Check to see if this pointer is in the move control.
if (position.x > m_moveUpperLeft.x &&
position.x < m_moveLowerRight.x &&
position.y > m_moveUpperLeft.y &&
position.y < m_moveLowerRight.y)
{
if (!m_moveInUse) // If no pointer is in this control yet:
{
// Process a DPad touch down event.
m_moveFirstDown = position; // Save the location of the initial contact.
m_movePointerID = pointerID; // Store the pointer using this control.
m_moveInUse = true;
}
}
// Check to see if this pointer is in the fire control.
else if (position.x > m_fireUpperLeft.x &&
position.x < m_fireLowerRight.x &&
position.y > m_fireUpperLeft.y &&
position.y < m_fireLowerRight.y)
{
if (!m_fireInUse)
{
m_fireLastPoint = position;
m_firePointerID = pointerID;
m_fireInUse = true;
if (!m_autoFire)
{
m_firePressed = true;
}
}
}
else
{
if (!m_lookInUse) // If no pointer is in this control yet:
{
m_lookLastPoint = position; // Save the pointer for a later move.
m_lookPointerID = pointerID; // Store the pointer using this control.
m_lookLastDelta.x = m_lookLastDelta.y = 0; // These are for smoothing.
m_lookInUse = true;
}
}
break;

default:
bool rightButton = pointProperties->IsRightButtonPressed;
bool leftButton = pointProperties->IsLeftButtonPressed;

if (!m_autoFire && (!m_mouseLeftInUse && leftButton))
{
m_firePressed = true;
}

if (!m_mouseInUse)
{
m_mouseInUse = true;
m_mouseLastPoint = position;
m_mousePointerID = pointerID;
m_mouseLeftInUse = leftButton;
m_mouseRightInUse = rightButton;
m_lookLastDelta.x = m_lookLastDelta.y = 0; // These are for smoothing.
}
else
{

}
break;
}

break;
}

return;
}

Now, let's look at how the game sample handles the PointerReleased mouse event.
void MoveLookController::OnPointerReleased(
_In_ CoreWindow^ /* sender */,
_In_ PointerEventArgs^ args
)
{
PointerPoint^ point = args->CurrentPoint;
uint32 pointerID = point->PointerId;
Point pointerPosition = point->Position;
PointerPointProperties^ pointProperties = point->Properties;

XMFLOAT2 position = XMFLOAT2(pointerPosition.X, pointerPosition.Y); // Convert to allow math.


switch (m_state)
{
case MoveLookControllerState::WaitForInput:
if (m_buttonInUse && (pointerID == m_buttonPointerID))
{
m_buttonInUse = false;
m_buttonPressed = true;
}
break;

case MoveLookControllerState::Active:
if (pointerID == m_movePointerID)
{
m_velocity = XMFLOAT3(0, 0, 0); // Stop on release.
m_moveInUse = false;
m_movePointerID = 0;
}
else if (pointerID == m_lookPointerID)
{
m_lookInUse = false;
m_lookPointerID = 0;
}
else if (pointerID == m_firePointerID)
{
m_fireInUse = false;
m_firePointerID = 0;
}
else if (pointerID == m_mousePointerID)
{
bool rightButton = pointProperties->IsRightButtonPressed;
bool leftButton = pointProperties->IsLeftButtonPressed;

m_mouseInUse = false;

// Don't clear the mouse pointer ID so that Move events still result in Look changes.
// m_mousePointerID = 0;
m_mouseLeftInUse = leftButton;
m_mouseRightInUse = rightButton;
}
break;
}
}

When the player stops pressing one of the mouse buttons, the input is complete: the spheres stop firing. But,
because look is always enabled, the game continues to use the same mouse pointer to track the ongoing look
events.
Now, let's look at the last of the control types: the Xbox controller. It's handled separately from the touch and mouse
controls, because it doesn't use the pointer object.

Implementing Xbox controller controls
In the game sample, Xbox controller support is added by calls to the XInput APIs, a set of APIs designed
to simplify programming for game controllers. We use the Xbox controller's left analog stick
for player movement, the right analog stick for the look controls, and the right trigger to fire. We use the start
button to pause and resume the game.
The Update method on the MoveLookController instance immediately checks to see if a game controller is
connected, and then checks the controller state.

void MoveLookController::UpdateGameController()
{
if (!m_isControllerConnected)
{
// Check for controller connection by trying to get the capabilities.
DWORD capsResult = XInputGetCapabilities(0, XINPUT_FLAG_GAMEPAD, &m_xinputCaps);
if (capsResult != ERROR_SUCCESS)
{
return;
}
// Device is connected.
m_isControllerConnected = true;
m_xinputStartButtonInUse = false;
m_xinputTriggerInUse = false;
}

DWORD stateResult = XInputGetState(0, &m_xinputState);


if (stateResult != ERROR_SUCCESS)
{
// Device is no longer connected.
m_isControllerConnected = false;
}

switch (m_state)
{
case MoveLookControllerState::WaitForInput:
if (m_xinputState.Gamepad.wButtons & XINPUT_GAMEPAD_START)
{
m_xinputStartButtonInUse = true;
}
else if (m_xinputStartButtonInUse)
{
// Trigger one time only on button release.
m_xinputStartButtonInUse = false;
m_buttonPressed = true;
}
break;

case MoveLookControllerState::Active:
if (m_xinputState.Gamepad.wButtons & XINPUT_GAMEPAD_START)
{
m_xinputStartButtonInUse = true;
}
else if (m_xinputStartButtonInUse)
{
// Trigger one time only on button release.
m_xinputStartButtonInUse = false;
m_pausePressed = true;
}
// Use the left thumbstick on the Xbox controller to control
// the eye point position (the move control).
// The controller input goes from -32767 to 32767. We will normalize
// this from -1 to 1 and keep a dead spot in the middle to avoid drift.

if (m_xinputState.Gamepad.sThumbLX > XINPUT_GAMEPAD_LEFT_THUMB_DEADZONE ||
m_xinputState.Gamepad.sThumbLX < -XINPUT_GAMEPAD_LEFT_THUMB_DEADZONE)
{
float x = (float)m_xinputState.Gamepad.sThumbLX/32767.0f;
m_moveCommand.x -= x / fabsf(x);
}

if (m_xinputState.Gamepad.sThumbLY > XINPUT_GAMEPAD_LEFT_THUMB_DEADZONE ||
m_xinputState.Gamepad.sThumbLY < -XINPUT_GAMEPAD_LEFT_THUMB_DEADZONE)
{
float y = (float)m_xinputState.Gamepad.sThumbLY/32767.0f;
m_moveCommand.y += y / fabsf(y);
}

// Use the right thumbstick on the Xbox controller to control
// the look control.
// The controller input goes from -32767 to 32767. We will normalize
// this from -1 to 1 and keep a dead spot in the middle to avoid drift.
XMFLOAT2 pointerDelta;
if (m_xinputState.Gamepad.sThumbRX > XINPUT_GAMEPAD_RIGHT_THUMB_DEADZONE ||
m_xinputState.Gamepad.sThumbRX < -XINPUT_GAMEPAD_RIGHT_THUMB_DEADZONE)
{
pointerDelta.x = (float)m_xinputState.Gamepad.sThumbRX/32767.0f;
pointerDelta.x = pointerDelta.x * pointerDelta.x * pointerDelta.x;
}
else
{
pointerDelta.x = 0.0f;
}
if (m_xinputState.Gamepad.sThumbRY > XINPUT_GAMEPAD_RIGHT_THUMB_DEADZONE ||
m_xinputState.Gamepad.sThumbRY < -XINPUT_GAMEPAD_RIGHT_THUMB_DEADZONE)
{
pointerDelta.y = (float)m_xinputState.Gamepad.sThumbRY/32767.0f;
pointerDelta.y = pointerDelta.y * pointerDelta.y * pointerDelta.y;
}
else
{
pointerDelta.y = 0.0f;
}

XMFLOAT2 rotationDelta;
rotationDelta.x = pointerDelta.x * 0.08f; // Scale for control sensitivity.
rotationDelta.y = pointerDelta.y * 0.08f;

// Update our orientation based on the command.
m_pitch += rotationDelta.y;
m_yaw += rotationDelta.x;

// Limit pitch to straight up or straight down.
m_pitch = __max(-XM_PI / 2.0f, m_pitch);
m_pitch = __min(+XM_PI / 2.0f, m_pitch);

// Check the state of the right trigger. This is used to indicate fire control.

if (m_xinputState.Gamepad.bRightTrigger > XINPUT_GAMEPAD_TRIGGER_THRESHOLD)
{
if (!m_autoFire && !m_xinputTriggerInUse)
{
m_firePressed = true;
}
m_xinputTriggerInUse = true;
}
else
{
m_xinputTriggerInUse = false;
}
break;
}
}

If the game controller is in the Active state, this method checks to see if the user moved the left analog stick in a
specific direction. But the movement on the stick in a specific direction must register as larger than the radius of
the dead zone; otherwise, nothing will happen. This dead zone radius is necessary to prevent "drifting," which is
when the controller picks up minute movements from the player's thumb as it rests on the stick. If we don't have
this dead zone, the player can get annoyed very quickly, as the controls feel very fidgety.
The Update method then performs the same check on the right stick, to see if the player has changed the
direction the camera is looking, as long as the movement on the stick is longer than another dead zone radius.
Update computes the new pitch and yaw, and then checks to see if the user pressed the right analog trigger, our
fire button.
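The dead-zone test and normalization can be factored into a small helper, sketched here as a standalone function.
The XINPUT_GAMEPAD_LEFT_THUMB_DEADZONE and XINPUT_GAMEPAD_RIGHT_THUMB_DEADZONE values are real XInput constants;
the helper itself is illustrative, not part of the sample.

// Returns 0.0f inside the dead zone; otherwise the raw SHORT thumbstick
// value normalized to the range [-1, 1].
float NormalizeThumbstick(SHORT rawValue, SHORT deadZone)
{
    if (rawValue > -deadZone && rawValue < deadZone)
    {
        return 0.0f; // Treat the stick as centered.
    }
    return static_cast<float>(rawValue) / 32767.0f;
}

// Example use for the look control:
// float lookX = NormalizeThumbstick(m_xinputState.Gamepad.sThumbRX,
//                                   XINPUT_GAMEPAD_RIGHT_THUMB_DEADZONE);
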
And that's how this sample implements a full set of control options. Again, remember that a good UWP app
supports a range of control options, so players with different form factors and devices can play in the way they
prefer!

Next steps
We've reviewed every major component of a UWP DirectX game except one: audio! Music and sound effects are
important to any game, so let's discuss adding sound!

Complete sample code for this section

MoveLookController.h

//// THIS CODE AND INFORMATION IS PROVIDED "AS IS" WITHOUT WARRANTY OF
//// ANY KIND, EITHER EXPRESSED OR IMPLIED, INCLUDING BUT NOT LIMITED TO
//// THE IMPLIED WARRANTIES OF MERCHANTABILITY AND/OR FITNESS FOR A
//// PARTICULAR PURPOSE.
////
//// Copyright (c) Microsoft Corporation. All rights reserved

#pragma once

// Uncomment to print debug tracing.
// #define MOVELOOKCONTROLLER_TRACE 1

enum class MoveLookControllerState
{
None,
WaitForInput,
Active,
};

ref class MoveLookController
{
internal:
MoveLookController();

void Initialize(
_In_ Windows::UI::Core::CoreWindow^ window
);
void SetMoveRect(
_In_ DirectX::XMFLOAT2 upperLeft,
_In_ DirectX::XMFLOAT2 lowerRight
);
void SetFireRect(
_In_ DirectX::XMFLOAT2 upperLeft,
_In_ DirectX::XMFLOAT2 lowerRight
);
void WaitForPress(
_In_ DirectX::XMFLOAT2 UpperLeft,
_In_ DirectX::XMFLOAT2 LowerRight
);
void WaitForPress();

void Update();
bool IsFiring();
bool IsPressComplete();
bool IsPauseRequested();

DirectX::XMFLOAT3 Velocity();
DirectX::XMFLOAT3 LookDirection();
float Pitch();
void Pitch(_In_ float pitch);
float Yaw();
void Yaw(_In_ float yaw);
bool Active();
void Active(_In_ bool active);

bool AutoFire();
void AutoFire(_In_ bool AutoFire);

protected:
void OnPointerPressed(
_In_ Windows::UI::Core::CoreWindow^ sender,
_In_ Windows::UI::Core::PointerEventArgs^ args
);
void OnPointerMoved(
_In_ Windows::UI::Core::CoreWindow^ sender,
_In_ Windows::UI::Core::PointerEventArgs^ args
);
void OnPointerReleased(
_In_ Windows::UI::Core::CoreWindow^ sender,
_In_ Windows::UI::Core::PointerEventArgs^ args
);
void OnPointerExited(
_In_ Windows::UI::Core::CoreWindow^ sender,
_In_ Windows::UI::Core::PointerEventArgs^ args
);
void OnKeyDown(
_In_ Windows::UI::Core::CoreWindow^ sender,
_In_ Windows::UI::Core::KeyEventArgs^ args
);
void OnKeyUp(
_In_ Windows::UI::Core::CoreWindow^ sender,
_In_ Windows::UI::Core::KeyEventArgs^ args
);

void OnMouseMoved(
_In_ Windows::Devices::Input::MouseDevice^ mouseDevice,
_In_ Windows::Devices::Input::MouseEventArgs^ args
);

#ifdef MOVELOOKCONTROLLER_TRACE
void DebugTrace(const wchar_t *format, ...);
#endif

private:
// Properties of the controller object
MoveLookControllerState m_state;
DirectX::XMFLOAT3 m_velocity; // How far we move it this frame
float m_pitch;
float m_yaw; // Orientation euler angles in radians

// Properties of the Move control
DirectX::XMFLOAT2 m_moveUpperLeft; // Bounding box where this control will activate
DirectX::XMFLOAT2 m_moveLowerRight;
bool m_moveInUse; // The move control is in use.
uint32 m_movePointerID; // The id of the pointer in this control.
DirectX::XMFLOAT2 m_moveFirstDown; // The point where the initial contact occurred.
DirectX::XMFLOAT2 m_movePointerPosition; // The point where the move pointer is currently located.
DirectX::XMFLOAT3 m_moveCommand; // The net command from the move control.

// Properties of the Look control
bool m_lookInUse; // The look control is in use.
uint32 m_lookPointerID; // The id of the pointer in this control.
DirectX::XMFLOAT2 m_lookLastPoint; // The last point (from last frame)
DirectX::XMFLOAT2 m_lookLastDelta; // for smoothing.

// Properties of the Fire control
bool m_autoFire;
bool m_firePressed;
DirectX::XMFLOAT2 m_fireUpperLeft; // The bounding box where this control will activate.
DirectX::XMFLOAT2 m_fireLowerRight;
bool m_fireInUse; // The fire control is in use.
UINT32 m_firePointerID; // The id of the pointer in this control.
DirectX::XMFLOAT2 m_fireLastPoint; // The last fire position.

// Properties of the Mouse control. This is a combination of Look and Fire.
bool m_mouseInUse;
uint32 m_mousePointerID;
DirectX::XMFLOAT2 m_mouseLastPoint;
bool m_mouseLeftInUse;
bool m_mouseRightInUse;

bool m_buttonInUse;
uint32 m_buttonPointerID;
DirectX::XMFLOAT2 m_buttonUpperLeft;
DirectX::XMFLOAT2 m_buttonLowerRight;
bool m_buttonPressed;
bool m_pausePressed;

// Xbox input related members
bool m_isControllerConnected; // Do we have a controller connected?
XINPUT_CAPABILITIES m_xinputCaps; // The capabilities of the controller.
XINPUT_STATE m_xinputState; // The current state of the controller.
bool m_xinputStartButtonInUse;
bool m_xinputTriggerInUse;

// Input states for the keyboard
bool m_forward;
bool m_back; // States for movement
bool m_left;
bool m_right;
bool m_up;
bool m_down;
bool m_pause;

private:
void ResetState();
void UpdateGameController();
};

MoveLookController.cpp

//// THIS CODE AND INFORMATION IS PROVIDED "AS IS" WITHOUT WARRANTY OF
//// ANY KIND, EITHER EXPRESSED OR IMPLIED, INCLUDING BUT NOT LIMITED TO
//// THE IMPLIED WARRANTIES OF MERCHANTABILITY AND/OR FITNESS FOR A
//// PARTICULAR PURPOSE.
////
//// Copyright (c) Microsoft Corporation. All rights reserved

#include "pch.h"
#include "MoveLookController.h"
#include "DirectXSample.h"

using namespace Windows::UI::Core;


using namespace Windows::UI::Input;
using namespace Windows::UI;
using namespace Windows::Foundation;
using namespace Microsoft::WRL;
using namespace DirectX;
using namespace Windows::Devices::Input;
using namespace Windows::System;

#define ROTATION_GAIN 0.008f // The sensitivity adjustment for the look controller.
#define MOVEMENT_GAIN 2.f // The sensitivity adjustment for the move controller.

// A basic move/look controller, such as in an FPS:
// horizontal (x-z plane) movement via the left virtual joystick or the WASD keys,
// steering and orientation via the look control (touch drag or mouse movement).

//----------------------------------------------------------------------

MoveLookController::MoveLookController():
m_autoFire(true),
m_isControllerConnected(false)
{
}

//----------------------------------------------------------------------
// Set up the controls supported by this controller.

void MoveLookController::Initialize(
_In_ CoreWindow^ window
)
{
window->PointerPressed +=
ref new TypedEventHandler<CoreWindow^, PointerEventArgs^>(this, &MoveLookController::OnPointerPressed);

window->PointerMoved +=
ref new TypedEventHandler<CoreWindow^, PointerEventArgs^>(this, &MoveLookController::OnPointerMoved);

window->PointerReleased +=
ref new TypedEventHandler<CoreWindow^, PointerEventArgs^>(this, &MoveLookController::OnPointerReleased);

window->PointerExited +=
ref new TypedEventHandler<CoreWindow^, PointerEventArgs^>(this, &MoveLookController::OnPointerExited);

window->KeyDown +=
ref new TypedEventHandler<CoreWindow^, KeyEventArgs^>(this, &MoveLookController::OnKeyDown);

window->KeyUp +=
ref new TypedEventHandler<CoreWindow^, KeyEventArgs^>(this, &MoveLookController::OnKeyUp);

// A separate handler for relative mouse movement events.


Windows::Devices::Input::MouseDevice::GetForCurrentView()->MouseMoved +=
ref new TypedEventHandler<MouseDevice^, MouseEventArgs^>(this, &MoveLookController::OnMouseMoved);

ResetState();
m_state = MoveLookControllerState::None;

m_pitch = 0.0f;
m_yaw = 0.0f;
}

//----------------------------------------------------------------------

bool MoveLookController::IsPauseRequested()
{
switch (m_state)
{
case MoveLookControllerState::Active:
UpdateGameController();
if (m_pausePressed)
{
m_pausePressed = false;
return true;
}
else
{
return false;
}
break;
}
return false;
}

//----------------------------------------------------------------------

bool MoveLookController::IsFiring()
{
if (m_state == MoveLookControllerState::Active)
{
if (m_autoFire)
{
return (m_fireInUse || (m_mouseInUse && m_mouseLeftInUse) || m_xinputTriggerInUse);
}
else
{
if (m_firePressed)
{
m_firePressed = false;
return true;
}
}
}
return false;
}

//----------------------------------------------------------------------

bool MoveLookController::IsPressComplete()
{
switch (m_state)
{
case MoveLookControllerState::WaitForInput:
UpdateGameController();
if (m_buttonPressed)
{

m_buttonPressed = false;
return true;
}
else
{
return false;
}
break;
}

return false;
}

//----------------------------------------------------------------------

void MoveLookController::OnPointerPressed(
_In_ CoreWindow^ /* sender */,
_In_ PointerEventArgs^ args
)
{
PointerPoint^ point = args->CurrentPoint;
uint32 pointerID = point->PointerId;
Point pointerPosition = point->Position;
PointerPointProperties^ pointProperties = point->Properties;
auto pointerDevice = point->PointerDevice;
auto pointerDeviceType = pointerDevice->PointerDeviceType;

XMFLOAT2 position = XMFLOAT2(pointerPosition.X, pointerPosition.Y); // Convert to allow math.

switch (m_state)
{
case MoveLookControllerState::WaitForInput:
if (position.x > m_buttonUpperLeft.x &&
position.x < m_buttonLowerRight.x &&
position.y > m_buttonUpperLeft.y &&
position.y < m_buttonLowerRight.y)
{
// Wait until button released before setting variable.
m_buttonPointerID = pointerID;
m_buttonInUse = true;

}
break;

case MoveLookControllerState::Active:
switch (pointerDeviceType)
{
case Windows::Devices::Input::PointerDeviceType::Touch:
// Check to see if this pointer is in the move control.
if (position.x > m_moveUpperLeft.x &&
position.x < m_moveLowerRight.x &&
position.y > m_moveUpperLeft.y &&
position.y < m_moveLowerRight.y)
{
if (!m_moveInUse) // If no pointer is in this control yet:
{
// Process a DPad touch down event.
m_moveFirstDown = position; // Save the location of the initial contact.
m_movePointerID = pointerID; // Store the pointer using this control.
m_moveInUse = true;
}
}
// Check to see if this pointer is in the fire control.
else if (position.x > m_fireUpperLeft.x &&
position.x < m_fireLowerRight.x &&
position.y > m_fireUpperLeft.y &&
position.y < m_fireLowerRight.y)
{
if (!m_fireInUse)
{
m_fireLastPoint = position;
m_firePointerID = pointerID;
m_fireInUse = true;
if (!m_autoFire)
{
m_firePressed = true;
}
}
}
else
{
if (!m_lookInUse) // If no pointer is in this control yet:
{
m_lookLastPoint = position; // Save the point for a later move.
m_lookPointerID = pointerID; // Store the pointer using this control.
m_lookLastDelta.x = m_lookLastDelta.y = 0; // These are for smoothing.
m_lookInUse = true;
}
}
break;

default:
bool rightButton = pointProperties->IsRightButtonPressed;
bool leftButton = pointProperties->IsLeftButtonPressed;

if (!m_autoFire && (!m_mouseLeftInUse && leftButton))
{
m_firePressed = true;
}

if (!m_mouseInUse)
{
m_mouseInUse = true;
m_mouseLastPoint = position;
m_mousePointerID = pointerID;
m_mouseLeftInUse = leftButton;
m_mouseRightInUse = rightButton;
m_lookLastDelta.x = m_lookLastDelta.y = 0; // These are for smoothing.
}
else
{

}
break;
}

break;
}

return;
}

//----------------------------------------------------------------------

void MoveLookController::OnPointerMoved(
_In_ CoreWindow^ /* sender */,
_In_ PointerEventArgs^ args
)
{
PointerPoint^ point = args->CurrentPoint;
uint32 pointerID = point->PointerId;
Point pointerPosition = point->Position;
PointerPointProperties^ pointProperties = point->Properties;
auto pointerDevice = point->PointerDevice;

XMFLOAT2 position = XMFLOAT2(pointerPosition.X, pointerPosition.Y); // Convert to allow math.

switch (m_state)
{
case MoveLookControllerState::Active:
// Decide which control this pointer is operating.
if (pointerID == m_movePointerID) // This is the move pointer.
{
m_movePointerPosition = position; // Save the current position.
}
else if (pointerID == m_lookPointerID) // This is the look pointer.
{
// Look control
XMFLOAT2 pointerDelta;
pointerDelta.x = position.x - m_lookLastPoint.x; // How far did the pointer move?
pointerDelta.y = position.y - m_lookLastPoint.y;

XMFLOAT2 rotationDelta;
rotationDelta.x = pointerDelta.x * ROTATION_GAIN; // Scale for control sensitivity.
rotationDelta.y = pointerDelta.y * ROTATION_GAIN;
m_lookLastPoint = position; // Save for the next time through.

// Update our orientation based on the command.


m_pitch -= rotationDelta.y;
m_yaw += rotationDelta.x;

// Limit pitch to straight up or straight down.


float limit = XM_PI / 2.0f - 0.01f;
m_pitch = __max(-limit, m_pitch);
m_pitch = __min(+limit, m_pitch);
// Keep longitude in the same range by wrapping.
if (m_yaw > XM_PI)
{
m_yaw -= XM_PI * 2.0f;
}
else if (m_yaw < -XM_PI)
{
m_yaw += XM_PI * 2.0f;
}
}
else if (pointerID == m_firePointerID)
{
m_fireLastPoint = position;
}
else if (pointerID == m_mousePointerID)
{
m_mouseLeftInUse = pointProperties->IsLeftButtonPressed;
m_mouseRightInUse = pointProperties->IsRightButtonPressed;
m_mouseLastPoint = position; // save for next time through

// Handle mouse movement via a separate relative mouse movement handler (OnMouseMoved).
}
break;
}
}

//----------------------------------------------------------------------

void MoveLookController::OnMouseMoved(
_In_ MouseDevice^ /* mouseDevice */,
_In_ MouseEventArgs^ args
)
{
// Handle Mouse Input via dedicated relative movement handler.

switch (m_state)
{
case MoveLookControllerState::Active:
XMFLOAT2 mouseDelta;
mouseDelta.x = static_cast<float>(args->MouseDelta.X);
mouseDelta.y = static_cast<float>(args->MouseDelta.Y);

XMFLOAT2 rotationDelta;
rotationDelta.x = mouseDelta.x * ROTATION_GAIN; // Scale for control sensitivity.
rotationDelta.y = mouseDelta.y * ROTATION_GAIN;

// Update our orientation based on the command.


m_pitch -= rotationDelta.y;
m_yaw += rotationDelta.x;

// Limit pitch to straight up or straight down.


float limit = XM_PI / 2.0f - 0.01f;
m_pitch = __max(-limit, m_pitch);
m_pitch = __min(+limit, m_pitch);

// Keep longitude in same range by wrapping.


if (m_yaw > XM_PI)
{
m_yaw -= XM_PI * 2.0f;
}
else if (m_yaw < -XM_PI)
{
m_yaw += XM_PI * 2.0f;
}
break;
}
}
//----------------------------------------------------------------------

void MoveLookController::OnPointerReleased(
_In_ CoreWindow^ /* sender */,
_In_ PointerEventArgs^ args
)
{
PointerPoint^ point = args->CurrentPoint;
uint32 pointerID = point->PointerId;
Point pointerPosition = point->Position;
PointerPointProperties^ pointProperties = point->Properties;

XMFLOAT2 position = XMFLOAT2(pointerPosition.X, pointerPosition.Y); // Convert to allow math.


switch (m_state)
{
case MoveLookControllerState::WaitForInput:
if (m_buttonInUse && (pointerID == m_buttonPointerID))
{
m_buttonInUse = false;
m_buttonPressed = true;
}
break;

case MoveLookControllerState::Active:
if (pointerID == m_movePointerID)
{
m_velocity = XMFLOAT3(0, 0, 0); // Stop on release.
m_moveInUse = false;
m_movePointerID = 0;
}
else if (pointerID == m_lookPointerID)
{
m_lookInUse = false;
m_lookPointerID = 0;
}
else if (pointerID == m_firePointerID)
{
m_fireInUse = false;
m_firePointerID = 0;
}
else if (pointerID == m_mousePointerID)
{
bool rightButton = pointProperties->IsRightButtonPressed;
bool leftButton = pointProperties->IsLeftButtonPressed;

m_mouseInUse = false;

// Don't clear the mouse pointer ID so that Move events still result in Look changes.
// m_mousePointerID = 0;
m_mouseLeftInUse = leftButton;
m_mouseRightInUse = rightButton;
}
break;
}
}

//----------------------------------------------------------------------

void MoveLookController::OnPointerExited(
_In_ CoreWindow^ /* sender */,
_In_ PointerEventArgs^ args
)
{
PointerPoint^ point = args->CurrentPoint;
uint32 pointerID = point->PointerId;
Point pointerPosition = point->Position;
PointerPointProperties^ pointProperties = point->Properties;

XMFLOAT2 position = XMFLOAT2(pointerPosition.X, pointerPosition.Y); // Convert to allow math.

switch (m_state)
{
case MoveLookControllerState::WaitForInput:
if (m_buttonInUse && (pointerID == m_buttonPointerID))
{
m_buttonInUse = false;
m_buttonPressed = false;
}
break;

case MoveLookControllerState::Active:
if (pointerID == m_movePointerID)
{
m_velocity = XMFLOAT3(0, 0, 0); // Stop on release.
m_moveInUse = false;
m_movePointerID = 0;
}
else if (pointerID == m_lookPointerID)
{
m_lookInUse = false;
m_lookPointerID = 0;
}
else if (pointerID == m_firePointerID)
{
m_fireInUse = false;
m_firePointerID = 0;
}
else if (pointerID == m_mousePointerID)
{
m_mouseInUse = false;
m_mousePointerID = 0;
m_mouseLeftInUse = false;
m_mouseRightInUse = false;
}
break;
}
}

//----------------------------------------------------------------------

void MoveLookController::OnKeyDown(
_In_ CoreWindow^ /* sender */,
_In_ KeyEventArgs^ args
)
{
Windows::System::VirtualKey Key;
Key = args->VirtualKey;

// Figure out the command from the keyboard.


if (Key == VirtualKey::W) // forward
m_forward = true;
if (Key == VirtualKey::S) // back
m_back = true;
if (Key == VirtualKey::A) // left
m_left = true;
if (Key == VirtualKey::D) // right
m_right = true;
if (Key == VirtualKey::Space) // up
m_up = true;
if (Key == VirtualKey::X) // down
m_down = true;
if (Key == VirtualKey::P) // Pause
m_pause = true;
}

//----------------------------------------------------------------------

void MoveLookController::OnKeyUp(
_In_ CoreWindow^ /* sender */,
_In_ KeyEventArgs^ args
)
{
Windows::System::VirtualKey Key;
Key = args->VirtualKey;

// Figure out the command from the keyboard.


if (Key == VirtualKey::W) // Forward
m_forward = false;
if (Key == VirtualKey::S) // Back
m_back = false;
if (Key == VirtualKey::A) // Left
m_left = false;
if (Key == VirtualKey::D) // Right
m_right = false;
if (Key == VirtualKey::Space) // Up
m_up = false;
if (Key == VirtualKey::X) // Down
m_down = false;
if (Key == VirtualKey::P)
{
if (m_pause)
{
// Trigger pause only one time on button release.
m_pausePressed = true;
m_pause = false;
}
}
}

//----------------------------------------------------------------------

void MoveLookController::ResetState()
{
// Reset the state of the controller.
// Disable any active pointer IDs to stop all interaction.
m_buttonPressed = false;
m_pausePressed = false;
m_buttonInUse = false;
m_moveInUse = false;
m_lookInUse = false;
m_fireInUse = false;
m_mouseInUse = false;
m_mouseLeftInUse = false;
m_mouseRightInUse = false;
m_movePointerID = 0;
m_lookPointerID = 0;
m_firePointerID = 0;
m_mousePointerID = 0;
m_velocity = XMFLOAT3(0.0f, 0.0f, 0.0f);

m_xinputStartButtonInUse = false;
m_xinputTriggerInUse = false;

m_moveCommand = XMFLOAT3(0.0f, 0.0f, 0.0f);


m_forward = false;
m_back = false;
m_left = false;
m_right = false;
m_up = false;
m_down = false;
m_pause = false;
}

//----------------------------------------------------------------------

void MoveLookController::SetMoveRect (
_In_ XMFLOAT2 upperLeft,
_In_ XMFLOAT2 lowerRight
)
{
m_moveUpperLeft = upperLeft;
m_moveLowerRight = lowerRight;
}

//----------------------------------------------------------------------

void MoveLookController::SetFireRect (
_In_ XMFLOAT2 upperLeft,
_In_ XMFLOAT2 lowerRight
)
{
m_fireUpperLeft = upperLeft;
m_fireLowerRight = lowerRight;
}

//----------------------------------------------------------------------

void MoveLookController::WaitForPress(
_In_ XMFLOAT2 upperLeft,
_In_ XMFLOAT2 lowerRight
)
{

ResetState();
m_state = MoveLookControllerState::WaitForInput;
m_buttonUpperLeft = upperLeft;
m_buttonLowerRight = lowerRight;

// Turn on the mouse cursor.


CoreWindow::GetForCurrentThread()->PointerCursor = ref new CoreCursor(CoreCursorType::Arrow, 0);
}

//----------------------------------------------------------------------

void MoveLookController::WaitForPress()
{
ResetState();
m_state = MoveLookControllerState::WaitForInput;
m_buttonUpperLeft.x = 0.0f;
m_buttonUpperLeft.y = 0.0f;
m_buttonLowerRight.x = 0.0f;
m_buttonLowerRight.y = 0.0f;

// Turn on the mouse cursor.


CoreWindow::GetForCurrentThread()->PointerCursor = ref new CoreCursor(CoreCursorType::Arrow, 0);
}

//----------------------------------------------------------------------

XMFLOAT3 MoveLookController::Velocity()
{
return m_velocity;
}

//----------------------------------------------------------------------

XMFLOAT3 MoveLookController::LookDirection()
{
XMFLOAT3 lookDirection;

float r = cosf(m_pitch); // In the plane


lookDirection.y = sinf(m_pitch); // Vertical
lookDirection.z = r * cosf(m_yaw); // Fwd-back
lookDirection.x = r * sinf(m_yaw); // Left-right

return lookDirection;
}

//----------------------------------------------------------------------

float MoveLookController::Pitch()
{
return m_pitch;
}

//----------------------------------------------------------------------

void MoveLookController::Pitch(_In_ float pitch)
{
m_pitch = pitch;
}

//----------------------------------------------------------------------

float MoveLookController::Yaw()
{
return m_yaw;
}

//----------------------------------------------------------------------

void MoveLookController::Yaw(_In_ float yaw)
{
m_yaw = yaw;
}

//----------------------------------------------------------------------

void MoveLookController::Active(_In_ bool active)
{
ResetState();

if (active)
{
m_state = MoveLookControllerState::Active;
// Turn the mouse cursor off (hidden).
CoreWindow::GetForCurrentThread()->PointerCursor = nullptr;
}
else
{
m_state = MoveLookControllerState::None;
// Turn the mouse cursor on.
auto window = CoreWindow::GetForCurrentThread();
if (window)
{
// Protect case where there isn't a window associated with the current thread.
// This happens on initialization.
window->PointerCursor = ref new CoreCursor(CoreCursorType::Arrow, 0);
}
}
}

//----------------------------------------------------------------------

bool MoveLookController::Active()
{
if (m_state == MoveLookControllerState::Active)
{
return true;
}
else
{
return false;
}
}

//----------------------------------------------------------------------

void MoveLookController::AutoFire(_In_ bool autoFire)
{
m_autoFire = autoFire;
}

//----------------------------------------------------------------------

bool MoveLookController::AutoFire()
{
return m_autoFire;
}

//----------------------------------------------------------------------

void MoveLookController::Update()
{
UpdateGameController();

if (m_moveInUse)
{
// Move control.
XMFLOAT2 pointerDelta;

pointerDelta.x = m_movePointerPosition.x - m_moveFirstDown.x;
pointerDelta.y = m_movePointerPosition.y - m_moveFirstDown.y;

// Figure out the command from the virtual joystick.


XMFLOAT3 commandDirection = XMFLOAT3(0.0f, 0.0f, 0.0f);
if (fabsf(pointerDelta.x) > 16.0f) // leave 32 pixel-wide dead spot for being still
m_moveCommand.x -= pointerDelta.x/fabsf(pointerDelta.x);

if (fabsf(pointerDelta.y) > 16.0f)
m_moveCommand.y -= pointerDelta.y/fabsf(pointerDelta.y);
}

// Poll our state bits set by the keyboard input events.


if (m_forward)
{
m_moveCommand.y += 1.0f;
}
if (m_back)
{
m_moveCommand.y -= 1.0f;
}
if (m_left)
{
m_moveCommand.x += 1.0f;
}
if (m_right)
{
m_moveCommand.x -= 1.0f;
}
if (m_up)
{
m_moveCommand.z += 1.0f;
}
if (m_down)
{
m_moveCommand.z -= 1.0f;
}

// Make sure that 45 deg cases are not faster.


if (fabsf(m_moveCommand.x) > 0.1f ||
fabsf(m_moveCommand.y) > 0.1f ||
fabsf(m_moveCommand.z) > 0.1f)
{
XMStoreFloat3(&m_moveCommand, XMVector3Normalize(XMLoadFloat3(&m_moveCommand)));
}

// Rotate command to align with our direction (world coordinates).


XMFLOAT3 wCommand;
wCommand.x = m_moveCommand.x * cosf(m_yaw) - m_moveCommand.y * sinf(m_yaw);
wCommand.y = m_moveCommand.x * sinf(m_yaw) + m_moveCommand.y * cosf(m_yaw);
wCommand.z = m_moveCommand.z;

// Scale for sensitivity adjustment.


// Our velocity is based on the command, y is up.
m_velocity.x = -wCommand.x * MOVEMENT_GAIN;
m_velocity.z = wCommand.y * MOVEMENT_GAIN;
m_velocity.y = wCommand.z * MOVEMENT_GAIN;

// Clear the movement input accumulator for use during next frame.
m_moveCommand = XMFLOAT3(0.0f, 0.0f, 0.0f);
}

//----------------------------------------------------------------------

void MoveLookController::UpdateGameController()
{
if (!m_isControllerConnected)
{
// Check for controller connection by trying to get the capabilities.
DWORD capsResult = XInputGetCapabilities(0, XINPUT_FLAG_GAMEPAD, &m_xinputCaps);
if (capsResult != ERROR_SUCCESS)
{
return;
}
// The device is connected.
m_isControllerConnected = true;
m_xinputStartButtonInUse = false;
m_xinputTriggerInUse = false;
}

DWORD stateResult = XInputGetState(0, &m_xinputState);


if (stateResult != ERROR_SUCCESS)
{
// The device is no longer connected.
m_isControllerConnected = false;
}

switch (m_state)
{
case MoveLookControllerState::WaitForInput:
if (m_xinputState.Gamepad.wButtons & XINPUT_GAMEPAD_START)
{
m_xinputStartButtonInUse = true;
}
else if (m_xinputStartButtonInUse)
{
// Trigger one time only on button release.
m_xinputStartButtonInUse = false;
m_buttonPressed = true;
}
break;

case MoveLookControllerState::Active:
if (m_xinputState.Gamepad.wButtons & XINPUT_GAMEPAD_START)
{
m_xinputStartButtonInUse = true;
}
else if (m_xinputStartButtonInUse)
{
// Trigger one time only on button release.
m_xinputStartButtonInUse = false;
m_pausePressed = true;
}
// Use the left thumbstick on the Xbox controller to control
// the eye point position (the move control).
// The controller input goes from -32767 to 32767. We will normalize
// this from -1 to 1 and keep a dead spot in the middle to avoid drift.

if (m_xinputState.Gamepad.sThumbLX > XINPUT_GAMEPAD_LEFT_THUMB_DEADZONE ||
m_xinputState.Gamepad.sThumbLX < -XINPUT_GAMEPAD_LEFT_THUMB_DEADZONE)
{
float x = (float)m_xinputState.Gamepad.sThumbLX/32767.0f;
m_moveCommand.x -= x / fabsf(x);
}

if (m_xinputState.Gamepad.sThumbLY > XINPUT_GAMEPAD_LEFT_THUMB_DEADZONE ||
m_xinputState.Gamepad.sThumbLY < -XINPUT_GAMEPAD_LEFT_THUMB_DEADZONE)
{
float y = (float)m_xinputState.Gamepad.sThumbLY/32767.0f;
m_moveCommand.y += y / fabsf(y);
}

// Use the right thumbstick on the Xbox controller to control
// the look control.
// The controller input goes from -32767 to 32767. We will normalize
// this from -1 to 1 and keep a dead spot in the middle to avoid drift.
XMFLOAT2 pointerDelta;
if (m_xinputState.Gamepad.sThumbRX > XINPUT_GAMEPAD_RIGHT_THUMB_DEADZONE ||
m_xinputState.Gamepad.sThumbRX < -XINPUT_GAMEPAD_RIGHT_THUMB_DEADZONE)
{
pointerDelta.x = (float)m_xinputState.Gamepad.sThumbRX/32767.0f;
pointerDelta.x = pointerDelta.x * pointerDelta.x * pointerDelta.x;
}
else
{
pointerDelta.x = 0.0f;
}
if (m_xinputState.Gamepad.sThumbRY > XINPUT_GAMEPAD_RIGHT_THUMB_DEADZONE ||
m_xinputState.Gamepad.sThumbRY < -XINPUT_GAMEPAD_RIGHT_THUMB_DEADZONE)
{
pointerDelta.y = (float)m_xinputState.Gamepad.sThumbRY/32767.0f;
pointerDelta.y = pointerDelta.y * pointerDelta.y * pointerDelta.y;
}
else
{
pointerDelta.y = 0.0f;
}

XMFLOAT2 rotationDelta;
rotationDelta.x = pointerDelta.x * 0.08f; // scale for control sensitivity
rotationDelta.y = pointerDelta.y * 0.08f;

// Update our orientation based on the command.


m_pitch += rotationDelta.y;
m_yaw += rotationDelta.x;

// Limit pitch to straight up or straight down.


m_pitch = __max(-XM_PI / 2.0f, m_pitch);
m_pitch = __min(+XM_PI / 2.0f, m_pitch);

// Check the state of the right trigger. This is used for the fire control.

if (m_xinputState.Gamepad.bRightTrigger > XINPUT_GAMEPAD_TRIGGER_THRESHOLD)
{
if (!m_autoFire && !m_xinputTriggerInUse)
{
m_firePressed = true;
}
m_xinputTriggerInUse = true;
}
else
{
m_xinputTriggerInUse = false;
}
break;
}
}

Note
This article is for Windows 10 developers writing Universal Windows Platform (UWP) apps. If you're
developing for Windows 8.x or Windows Phone 8.x, see the archived documentation.

Related topics
Create a simple UWP game with DirectX
Add sound
3/6/2017 8 min to read

[ Updated for UWP apps on Windows 10. For Windows 8.x articles, see the archive ]
In this step, we examine how the shooting game sample creates an object for sound playback using the XAudio2
APIs.

Objective
To add sound output using XAudio2.
In the game sample, the audio objects and behaviors are defined in three files:
Audio.h/.cpp. This code file defines the Audio object, which contains the XAudio2 resources for sound
playback. It also defines the method for suspending and resuming audio playback if the game is paused or
deactivated.
MediaReader.h/.cpp. This code defines the methods for reading audio .wav files from local storage.
SoundEffect.h/.cpp. This code defines an object for in-game sound playback.

Defining the audio engine


When the game sample starts, it creates an Audio object that allocates the audio resources for the game. The code
that declares this object looks like this:

ref class Audio
{
public:
Audio();

void Initialize();
void CreateDeviceIndependentResources();
IXAudio2* MusicEngine();
IXAudio2* SoundEffectEngine();
void SuspendAudio();
void ResumeAudio();

protected:
bool m_audioAvailable;
Microsoft::WRL::ComPtr<IXAudio2> m_musicEngine;
Microsoft::WRL::ComPtr<IXAudio2> m_soundEffectEngine;
IXAudio2MasteringVoice* m_musicMasteringVoice;
IXAudio2MasteringVoice* m_soundEffectMasteringVoice;
};

The Audio::MusicEngine and Audio::SoundEffectEngine methods return references to IXAudio2 objects that
define the mastering voice for each audio type. A mastering voice is the audio device used for playback. Sound
data buffers cannot be submitted directly to mastering voices, but data submitted to other types of voices must be
directed to a mastering voice to be heard.
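
For example, a source voice created from one of these engines is routed, by default, to that engine's
mastering voice, which is what makes its output audible. Here's a minimal sketch, assuming an initialized
Audio object named audio and a populated WAVEFORMATEX named format (both names are ours, not the sample's):

IXAudio2SourceVoice* sourceVoice = nullptr;

// With no explicit send list, XAudio2 routes the new source voice
// to the engine's mastering voice.
DX::ThrowIfFailed(
    audio->SoundEffectEngine()->CreateSourceVoice(&sourceVoice, &format)
);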

Initializing the audio resources


The sample initializes the IXAudio2 objects for the music and sound effect engines with calls to XAudio2Create.
After the engines have been instantiated, it creates a mastering voice for each with calls to
IXAudio2::CreateMasteringVoice, as here:
void Audio::CreateDeviceIndependentResources()
{
UINT32 flags = 0;

DX::ThrowIfFailed(
XAudio2Create(&m_musicEngine, flags)
);

HRESULT hr = m_musicEngine->CreateMasteringVoice(&m_musicMasteringVoice);
if (FAILED(hr))
{
// Unable to create an audio device
m_audioAvailable = false;
return;
}

DX::ThrowIfFailed(
XAudio2Create(&m_soundEffectEngine, flags)
);

DX::ThrowIfFailed(
m_soundEffectEngine->CreateMasteringVoice(&m_soundEffectMasteringVoice)
);

m_audioAvailable = true;
}

As a music or sound effect audio file is loaded, this method calls IXAudio2::CreateSourceVoice on the XAudio2
engine, which creates an instance of a source voice for playback. We look at the code for this as soon as we finish
reviewing how the game sample loads audio files.

Reading an audio file


In the game sample, the code for reading audio format files is defined in MediaReader.cpp. The specific method
that reads in an encoded .wav audio file, MediaReader::LoadMedia, looks like this:

Platform::Array<byte>^ MediaReader::LoadMedia(_In_ Platform::String^ filename)
{
DX::ThrowIfFailed(
MFStartup(MF_VERSION)
);

ComPtr<IMFSourceReader> reader;
DX::ThrowIfFailed(
MFCreateSourceReaderFromURL(
Platform::String::Concat(m_installedLocationPath, filename)->Data(),
nullptr,
&reader
)
);

// Set the decoded output format as PCM


// XAudio2 on Windows can process PCM and ADPCM-encoded buffers.
// When using MediaFoundation, this sample always decodes into PCM.
Microsoft::WRL::ComPtr<IMFMediaType> mediaType;
DX::ThrowIfFailed(
MFCreateMediaType(&mediaType)
);

DX::ThrowIfFailed(
mediaType->SetGUID(MF_MT_MAJOR_TYPE, MFMediaType_Audio)
);
DX::ThrowIfFailed(
mediaType->SetGUID(MF_MT_SUBTYPE, MFAudioFormat_PCM)
);

DX::ThrowIfFailed(
reader->SetCurrentMediaType(MF_SOURCE_READER_FIRST_AUDIO_STREAM, 0, mediaType.Get())
);

// Get the complete WAVEFORMAT from the Media Type.


Microsoft::WRL::ComPtr<IMFMediaType> outputMediaType;
DX::ThrowIfFailed(
reader->GetCurrentMediaType(MF_SOURCE_READER_FIRST_AUDIO_STREAM, &outputMediaType)
);

UINT32 size = 0;
WAVEFORMATEX* waveFormat;
DX::ThrowIfFailed(
MFCreateWaveFormatExFromMFMediaType(outputMediaType.Get(), &waveFormat, &size)
);

CopyMemory(&m_waveFormat, waveFormat, sizeof(m_waveFormat));


CoTaskMemFree(waveFormat);

// Get the total length of the stream in bytes.


PROPVARIANT propVariant;
DX::ThrowIfFailed(
reader->GetPresentationAttribute(MF_SOURCE_READER_MEDIASOURCE, MF_PD_DURATION, &propVariant)
);
LONGLONG duration = propVariant.uhVal.QuadPart;
unsigned int maxStreamLengthInBytes;

double durationInSeconds = (duration / static_cast<double>(10000 * 1000));


maxStreamLengthInBytes = static_cast<unsigned int>(durationInSeconds * m_waveFormat.nAvgBytesPerSec);

// Make the length a multiple of 4 bytes.


maxStreamLengthInBytes = (maxStreamLengthInBytes + 3) / 4 * 4;

Platform::Array<byte>^ fileData = ref new Platform::Array<byte>(maxStreamLengthInBytes);

ComPtr<IMFSample> sample;
ComPtr<IMFMediaBuffer> mediaBuffer;
DWORD flags = 0;

int positionInData = 0;
bool done = false;
while (!done)
{
DX::ThrowIfFailed(
reader->ReadSample(MF_SOURCE_READER_FIRST_AUDIO_STREAM, 0, nullptr, &flags, nullptr, &sample)
);

if (sample != nullptr)
{
DX::ThrowIfFailed(
sample->ConvertToContiguousBuffer(&mediaBuffer)
);

BYTE *audioData = nullptr;


DWORD sampleBufferLength = 0;
DX::ThrowIfFailed(
mediaBuffer->Lock(&audioData, nullptr, &sampleBufferLength)
);

for (DWORD i = 0; i < sampleBufferLength; i++)
{
fileData[positionInData++] = audioData[i];
}
}
if (flags & MF_SOURCE_READERF_ENDOFSTREAM)
{
done = true;
}
}

// Fix up the array size to match the actual length.


Platform::Array<byte>^ realfileData = ref new Platform::Array<byte>((positionInData + 3) / 4 * 4);
memcpy(realfileData->Data, fileData->Data, positionInData);
return realfileData;
}

This method uses the Media Foundation APIs to read in the .wav audio file as a Pulse Code Modulation (PCM)
buffer.
1. Creates a media source reader (IMFSourceReader) object by calling MFCreateSourceReaderFromURL.
2. Creates a media type (IMFMediaType) for the decoding of the audio file by calling MFCreateMediaType. This
method specifies that the decoded output is PCM audio, which is an audio type that XAudio2 can use.
3. Sets the decoded output media type for the reader by calling IMFSourceReader::SetCurrentMediaType.
4. Creates a WAVEFORMATEX buffer and copies the results of a call to
MFCreateWaveFormatExFromMFMediaType on the IMFMediaType object. This formats
the buffer that holds the audio file after it is loaded.
5. Gets the duration of the audio stream by calling IMFSourceReader::GetPresentationAttribute
and then converts the duration to bytes (see the sketch after this list).
6. Reads the audio file in as a stream by calling IMFSourceReader::ReadSample.
7. Copies the contents of the audio sample buffer into an array returned by the method.
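
The duration-to-bytes conversion in step 5 is easy to miss in the listing, so here is the same arithmetic
pulled out into a small sketch. The helper name StreamLengthInBytes is ours, not the sample's; Media
Foundation reports the duration in 100-nanosecond units.

unsigned int StreamLengthInBytes(LONGLONG duration, const WAVEFORMATEX& waveFormat)
{
    // 10,000,000 hundred-nanosecond units per second.
    double durationInSeconds = duration / 10000000.0;

    // The wave format supplies the decoded stream's average byte rate.
    unsigned int lengthInBytes =
        static_cast<unsigned int>(durationInSeconds * waveFormat.nAvgBytesPerSec);

    // Round up to a multiple of 4 bytes, as MediaReader::LoadMedia does.
    return (lengthInBytes + 3) / 4 * 4;
}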
The most important thing in SoundEffect::Initialize is the creation of the source voice object, m_sourceVoice,
from the mastering voice. We use the source voice for the actual play back of the sound data buffer obtained from
MediaReader::LoadMedia.
The sample game calls this method when it initializes the SoundEffect object, like this:

void SoundEffect::Initialize(
_In_ IXAudio2 *masteringEngine,
_In_ WAVEFORMATEX *sourceFormat,
_In_ Platform::Array<byte>^ soundData)
{
m_soundData = soundData;

if (masteringEngine == nullptr)
{
// Audio is not available, so return.
m_audioAvailable = false;
return;
}

// Create and reuse a single source voice for the single sound effect in this sample.
DX::ThrowIfFailed(
masteringEngine->CreateSourceVoice(
&m_sourceVoice,
sourceFormat
)
);
m_audioAvailable = true;
}

This method is passed the results of calls to Audio::SoundEffectEngine (or Audio::MusicEngine),
MediaReader::GetOutputWaveFormatEx, and the buffer returned by a call to MediaReader::LoadMedia, as
seen here.
MediaReader^ mediaReader = ref new MediaReader;
auto targetHitSound = mediaReader->LoadMedia("hit.wav");

myTarget->HitSound(ref new SoundEffect());


myTarget->HitSound()->Initialize(
m_audioController->SoundEffectEngine(),
mediaReader->GetOutputWaveFormatEx(),
targetHitSound);

SoundEffect::Initialize is called from the Simple3DGame::Initialize method that initializes the main game
object.
Now that the sample game has an audio file in memory, let's see how it plays it back during game play!

Playing back an audio file
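
SoundEffect::PlaySound stops any sound effect that is currently playing, queues the in-memory sound data
buffer, and then starts the source voice: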


void SoundEffect::PlaySound(_In_ float volume)
{
XAUDIO2_BUFFER buffer = {0};
XAUDIO2_VOICE_STATE state = {0};

if (!m_audioAvailable)
{
// Audio is not available, so just return.
return;
}

// Interrupt sound effect if currently playing.


DX::ThrowIfFailed(
m_sourceVoice->Stop()
);
DX::ThrowIfFailed(
m_sourceVoice->FlushSourceBuffers()
);

// Queue in-memory buffer for playback and start the voice.


buffer.AudioBytes = m_soundData->Length;
buffer.pAudioData = m_soundData->Data;
buffer.Flags = XAUDIO2_END_OF_STREAM;

m_sourceVoice->SetVolume(volume);
DX::ThrowIfFailed(
m_sourceVoice->SubmitSourceBuffer(&buffer)
);
DX::ThrowIfFailed(
m_sourceVoice->Start()
);
}

To play the sound, this method uses the source voice object m_sourceVoice to start the playback of the sound
data buffer m_soundData. It creates an XAUDIO2_BUFFER, to which it provides a reference to the sound data
buffer, and then submits it with a call to IXAudio2SourceVoice::SubmitSourceBuffer. With the sound data
queued up, SoundEffect::PlaySound starts playback by calling IXAudio2SourceVoice::Start.
Now, whenever a collision between the ammo and a target occurs, a call to SoundEffect::PlaySound causes a
noise to play.
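
For instance, the collision-handling code can trigger playback with a single call on the target's sound
effect. A hypothetical sketch (the ammoHitTarget flag and the volume value are illustrative, not the
sample's actual names; myTarget is the object initialized earlier):

if (ammoHitTarget)
{
    // PlaySound interrupts the effect if it is already playing, then
    // restarts it at the requested volume.
    myTarget->HitSound()->PlaySound(1.0f);
}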

Next steps
That was a whirlwind tour of Universal Windows Platform (UWP) DirectX game development! At this point, you
have an idea of what you need to do to make your own UWP game a great experience. Remember, your
game can be played on a wide variety of Windows 10 devices and platforms, so design your components (your
graphics, your controls, your user interface, and your audio) for as wide a set of configurations as you can!
For more info about ways to modify the game sample provided in these documents, see Extending the game
sample.

Complete sample code for this section


Audio.h

//// THIS CODE AND INFORMATION IS PROVIDED "AS IS" WITHOUT WARRANTY OF
//// ANY KIND, EITHER EXPRESSED OR IMPLIED, INCLUDING BUT NOT LIMITED TO
//// THE IMPLIED WARRANTIES OF MERCHANTABILITY AND/OR FITNESS FOR A
//// PARTICULAR PURPOSE.
////
//// Copyright (c) Microsoft Corporation. All rights reserved

#pragma once

ref class Audio
{
public:
Audio();

void Initialize();
void CreateDeviceIndependentResources();
IXAudio2* MusicEngine();
IXAudio2* SoundEffectEngine();
void SuspendAudio();
void ResumeAudio();

protected:
bool m_audioAvailable;
Microsoft::WRL::ComPtr<IXAudio2> m_musicEngine;
Microsoft::WRL::ComPtr<IXAudio2> m_soundEffectEngine;
IXAudio2MasteringVoice* m_musicMasteringVoice;
IXAudio2MasteringVoice* m_soundEffectMasteringVoice;
};

Audio.cpp

//// THIS CODE AND INFORMATION IS PROVIDED "AS IS" WITHOUT WARRANTY OF
//// ANY KIND, EITHER EXPRESSED OR IMPLIED, INCLUDING BUT NOT LIMITED TO
//// THE IMPLIED WARRANTIES OF MERCHANTABILITY AND/OR FITNESS FOR A
//// PARTICULAR PURPOSE.
////
//// Copyright (c) Microsoft Corporation. All rights reserved

#include "pch.h"
#include "Audio.h"
#include "DirectXSample.h"

using namespace Microsoft::WRL;


using namespace Windows::Foundation;
using namespace Windows::UI::Core;
using namespace Windows::Graphics::Display;

Audio::Audio():
m_audioAvailable(false)
{
}
void Audio::Initialize()
{
}

void Audio::CreateDeviceIndependentResources()
{
UINT32 flags = 0;

DX::ThrowIfFailed(
XAudio2Create(&m_musicEngine, flags)
);

#if defined(_DEBUG)
XAUDIO2_DEBUG_CONFIGURATION debugConfiguration = {0};
debugConfiguration.BreakMask = XAUDIO2_LOG_ERRORS;
debugConfiguration.TraceMask = XAUDIO2_LOG_ERRORS;
m_musicEngine->SetDebugConfiguration(&debugConfiguration);
#endif

HRESULT hr = m_musicEngine->CreateMasteringVoice(&m_musicMasteringVoice);
if (FAILED(hr))
{
// Unable to create an audio device
m_audioAvailable = false;
return;
}

DX::ThrowIfFailed(
XAudio2Create(&m_soundEffectEngine, flags)
);

#if defined(_DEBUG)
m_soundEffectEngine->SetDebugConfiguration(&debugConfiguration);
#endif

DX::ThrowIfFailed(
m_soundEffectEngine->CreateMasteringVoice(&m_soundEffectMasteringVoice)
);

m_audioAvailable = true;
}

IXAudio2* Audio::MusicEngine()
{
return m_musicEngine.Get();
}

IXAudio2* Audio::SoundEffectEngine()
{
return m_soundEffectEngine.Get();
}

void Audio::SuspendAudio()
{
if (m_audioAvailable)
{
m_musicEngine->StopEngine();
m_soundEffectEngine->StopEngine();
}
}

void Audio::ResumeAudio()
{
if (m_audioAvailable)
{
DX::ThrowIfFailed(m_musicEngine->StartEngine());
DX::ThrowIfFailed(m_soundEffectEngine->StartEngine());
}
}

SoundEffect.h

//// THIS CODE AND INFORMATION IS PROVIDED "AS IS" WITHOUT WARRANTY OF
//// ANY KIND, EITHER EXPRESSED OR IMPLIED, INCLUDING BUT NOT LIMITED TO
//// THE IMPLIED WARRANTIES OF MERCHANTABILITY AND/OR FITNESS FOR A
//// PARTICULAR PURPOSE.
////
//// Copyright (c) Microsoft Corporation. All rights reserved

#pragma once

ref class SoundEffect
{
public:
SoundEffect();

void Initialize(
_In_ IXAudio2* masteringEngine,
_In_ WAVEFORMATEX* sourceFormat,
_In_ Platform::Array<byte>^ soundData
);

void PlaySound(_In_ float volume);

protected:
bool m_audioAvailable;
IXAudio2SourceVoice* m_sourceVoice;
Platform::Array<byte>^ m_soundData;
};

SoundEffect.cpp

//// THIS CODE AND INFORMATION IS PROVIDED "AS IS" WITHOUT WARRANTY OF
//// ANY KIND, EITHER EXPRESSED OR IMPLIED, INCLUDING BUT NOT LIMITED TO
//// THE IMPLIED WARRANTIES OF MERCHANTABILITY AND/OR FITNESS FOR A
//// PARTICULAR PURPOSE.
////
//// Copyright (c) Microsoft Corporation. All rights reserved

#include "pch.h"
#include "SoundEffect.h"
#include "DirectXSample.h"

SoundEffect::SoundEffect():
m_audioAvailable(false)
{
}
//----------------------------------------------------------------------
void SoundEffect::Initialize(
_In_ IXAudio2 *masteringEngine,
_In_ WAVEFORMATEX *sourceFormat,
_In_ Platform::Array<byte>^ soundData)
{
m_soundData = soundData;

if (masteringEngine == nullptr)
{
// Audio is not available, so return.
m_audioAvailable = false;
return;
}

// Create and reuse a single source voice for the single sound effect in this sample.
DX::ThrowIfFailed(
masteringEngine->CreateSourceVoice(
&m_sourceVoice,
sourceFormat
)
);
m_audioAvailable = true;
}
//----------------------------------------------------------------------
void SoundEffect::PlaySound(_In_ float volume)
{
XAUDIO2_BUFFER buffer = {0};
XAUDIO2_VOICE_STATE state = {0};

if (!m_audioAvailable)
{
// Audio is not available, so just return.
return;
}

// Interrupt sound effect if currently playing.


DX::ThrowIfFailed(
m_sourceVoice->Stop()
);
DX::ThrowIfFailed(
m_sourceVoice->FlushSourceBuffers()
);

// Queue in-memory buffer for playback and start the voice.


buffer.AudioBytes = m_soundData->Length;
buffer.pAudioData = m_soundData->Data;
buffer.Flags = XAUDIO2_END_OF_STREAM;

m_sourceVoice->SetVolume(volume);
DX::ThrowIfFailed(
m_sourceVoice->SubmitSourceBuffer(&buffer)
);
DX::ThrowIfFailed(
m_sourceVoice->Start()
);
}

Extend the game sample
3/6/2017 19 min to read

[ Updated for UWP apps on Windows 10. For Windows 8.x articles, see the archive ]
Congratulations! At this point, you understand the key components of a basic Universal Windows Platform (UWP)
DirectX 3D game. You can set up the framework for a game, including the view provider and rendering pipeline,
and implement a basic game loop. You can also create a basic user interface overlay, and incorporate sounds and
controls. You're on your way to creating a game of your own, and here are some resources to further your
knowledge of DirectX game development.
DirectX Graphics and Gaming
Direct3D 11 Overview
Direct3D 11 Reference

Extending the game sample: using XAML for the overlay


One alternative that we didn't discuss in depth is the use of XAML instead of Direct2D for the overlay. XAML has
many benefits over Direct2D for drawing user interface elements, and the most important one is that it makes
incorporating the Windows 10 look and feel into your DirectX game more convenient. Many of the common
elements, styles, and behaviors that define a UWP app are tightly integrated into the XAML model, making it far
less work for a game developer to implement. If your own game design has a complicated user interface, consider
using XAML instead of Direct2D.
So, what is the difference between the implementation of a user interface with Direct2D, and implementing that
same interface with XAML?
You define the overlay in a XAML file, *.xaml, rather than as a collection of Direct2D primitives and DirectWrite
strings manually placed and written to a Direct2D target buffer. If you understand XAML, you'll find it much
easier to create and configure more complicated overlays, especially if you use Visual Studio's XAML editing
tools.
The user interface elements come from standardized elements that are part of the Windows Runtime XAML
APIs, including Windows::UI::Xaml and Windows::UI::Xaml::Controls. The code that handles the behavior of
the XAML user interface elements is defined in a codebehind file, MainPage.xaml.cpp.
XAML, as a tightly integrated Windows Runtime component, naturally handles resize and view state change
events, transforming the overlay accordingly, so you don't have to manually specify how to redraw the
overlay's components.
The swap chain is not directly attached to a Windows::UI::Core::CoreWindow object, or at least you don't
have to do this. Instead, a DirectX app that incorporates XAML associates a swap chain when a new
SwapChainBackgroundPanel object is constructed. The SwapChainBackgroundPanel object is set as the
Content property of the current window object created at launch by the app singleton, and the window is
passed to Simple3DGame::Initialize as a CoreWindow object.
You declare the XAML for the SwapChainBackgroundPanel like this in the MainPage.xaml file:
<Page
x:Name="DXMainPage"
x:Class="Simple3DGameXaml.MainPage"
xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
xmlns:d="http://schemas.microsoft.com/expression/blend/2008"
xmlns:mc="http://schemas.openxmlformats.org/markup-compatibility/2006"
mc:Ignorable="d"
d:DesignWidth="1366"
d:DesignHeight="768">

<SwapChainBackgroundPanel x:Name="DXSwapChainPanel">

<!-- ... XAML user controls and elements -->

</SwapChainBackgroundPanel>
</Page>
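
The application's launch handler then wires these pieces together: it creates the MainPage, sets it as the
window content, hooks up the window and display events, and kicks off asynchronous loading of the game's
resources: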

void App::OnLaunched(LaunchActivatedEventArgs^ /* args */)
{
Suspending += ref new SuspendingEventHandler(this, &App::OnSuspending);
Resuming += ref new EventHandler<Object^>(this, &App::OnResuming);

m_mainPage = ref new MainPage();


m_mainPage->SetApp(this);

Window::Current->Content = m_mainPage;
Window::Current->Activated += ref new WindowActivatedEventHandler(this, &App::OnWindowActivationChanged);
Window::Current->Activate();

m_controller = ref new MoveLookController();


m_renderer = ref new GameRenderer();
m_game = ref new Simple3DGame();

auto window = Window::Current->CoreWindow;

window->SizeChanged +=
ref new TypedEventHandler<CoreWindow^, WindowSizeChangedEventArgs^>(this, &App::OnWindowSizeChanged);

window->VisibilityChanged +=
ref new TypedEventHandler<CoreWindow^, VisibilityChangedEventArgs^>(this, &App::OnVisibilityChanged);

DisplayProperties::LogicalDpiChanged +=
ref new DisplayPropertiesEventHandler(this, &App::OnLogicalDpiChanged);

m_controller->Initialize(window);
m_controller->AutoFire(false);

m_controller->SetMoveRect(
XMFLOAT2(0.0f, window->Bounds.Height - GameConstants::TouchRectangleSize),
XMFLOAT2(GameConstants::TouchRectangleSize, window->Bounds.Height)
);
m_controller->SetFireRect(
XMFLOAT2(window->Bounds.Width - GameConstants::TouchRectangleSize, window->Bounds.Height -
GameConstants::TouchRectangleSize),
XMFLOAT2(window->Bounds.Width, window->Bounds.Height)
);

m_renderer->Initialize(window, m_mainPage->GetSwapChainBackgroundPanel(), DisplayProperties::LogicalDpi);


SetGameInfoOverlay(GameInfoOverlayState::Loading);
SetAction(GameInfoOverlayCommand::None);
ShowGameInfoOverlay();

m_onRenderingEventToken = CompositionTarget::Rendering::add(ref new EventHandler<Object^>(this, &App::OnRendering));


m_renderNeeded = true;
create_task([this]()
{
// Asynchronously initialize the game class and load the renderer device resources.
// This way, the game gets to its main loop faster
// and in parallel all the necessary resources are loaded on other threads.
m_game->Initialize(m_controller, m_renderer);

return m_renderer->CreateGameDeviceResourcesAsync(m_game);

}).then([this]()
{
// The finalize code needs to run in the same thread context
// as the m_renderer object was created because the D3D device context
// can be accessed only on a single thread.
m_renderer->FinalizeCreateGameDeviceResources();

InitializeLicense();
InitializeGameState();

if (m_updateState == UpdateEngineState::WaitingForResources)
{
// In the middle of a game, so spin up the async task to load the level.
create_task([this]()
{
return m_game->LoadLevelAsync();

}).then([this]()
{
// The m_game object may need to deal with D3D device context work so
// again the finalize code needs to run in the same thread
// context as the m_renderer object was created because the D3D
// device context can be accessed only on a single thread.
m_game->FinalizeLoadLevel();
m_updateState = UpdateEngineState::ResourcesLoaded;

}, task_continuation_context::use_current());
}
}, task_continuation_context::use_current());
}

To attach the configured swap chain to the SwapChainBackgroundPanel panel instance defined by your XAML,
you must obtain a pointer to the underlying native ISwapChainBackgroundPanel interface implementation and
call SetSwapChain on it, passing it your configured swap chain. From a method derived from
DirectXBase::CreateWindowSizeDependentResources specifically for DirectX/XAML interop:
ComPtr<IDXGIDevice1> dxgiDevice;
DX::ThrowIfFailed(
m_d3dDevice.As(&dxgiDevice)
);

// Next, get the associated adapter from the DXGI Device.


ComPtr<IDXGIAdapter> dxgiAdapter;
DX::ThrowIfFailed(
dxgiDevice->GetAdapter(&dxgiAdapter)
);

// Next, get the parent factory from the DXGI adapter.


ComPtr<IDXGIFactory2> dxgiFactory;
DX::ThrowIfFailed(
dxgiAdapter->GetParent(IID_PPV_ARGS(&dxgiFactory))
);

// Create the swap chain and then associate it with the SwapChainBackgroundPanel.
DX::ThrowIfFailed(
dxgiFactory->CreateSwapChainForComposition(
m_d3dDevice.Get(),
&swapChainDesc,
nullptr,
&m_swapChain
)
);

ComPtr<ISwapChainBackgroundPanelNative> dxRootPanelAsNative;

// Set the swap chain on the SwapChainBackgroundPanel.


reinterpret_cast<IUnknown*>(m_swapChainPanel)->QueryInterface(__uuidof(ISwapChainBackgroundPanelNative),
(void**)&dxRootPanelAsNative);

DX::ThrowIfFailed(
dxRootPanelAsNative->SetSwapChain(m_swapChain.Get())
);

DX::ThrowIfFailed(
dxgiDevice->SetMaximumFrameLatency(1)
);

For more info about this process, see DirectX and XAML interop.

Complete code for the XAML game sample codebehinds


Here's the complete code for the codebehinds found in the XAML version of the Direct3D 11.1 shooting game
sample.
(Unlike the version of the game sample discussed in the rest of these topics, the XAML version defines its
framework in the App.xaml.cpp and MainPage.xaml.cpp files, instead of DirectXApp.cpp and
GameInfoOverlay.cpp, respectively.)
App.xaml.h

//// THIS CODE AND INFORMATION IS PROVIDED "AS IS" WITHOUT WARRANTY OF
//// ANY KIND, EITHER EXPRESSED OR IMPLIED, INCLUDING BUT NOT LIMITED TO
//// THE IMPLIED WARRANTIES OF MERCHANTABILITY AND/OR FITNESS FOR A
//// PARTICULAR PURPOSE.
////
//// Copyright (c) Microsoft Corporation. All rights reserved

#pragma once

#include "MainPage.xaml.h"
#include "Simple3DGame.h"
#include "App.g.h"

namespace Simple3DGameXaml
{
private enum class UpdateEngineState
{
Uninitialized,
WaitingForResources,
WaitingForPress,
Dynamics,
Snapped,
Suspended,
Deactivated,
};

private enum class PressResultState
{
LoadGame,
PlayLevel,
ContinueLevel,
};

private enum class GameInfoOverlayState
{
GameStats,
GameOverExpired,
GameOverCompleted,
LevelStart,
Pause,
Snapped,
};

public ref class App sealed
{
public:
App();

virtual void OnLaunched(Windows::ApplicationModel::Activation::LaunchActivatedEventArgs^ pArgs);

void PauseRequested() { if (m_updateState == UpdateEngineState::Dynamics) m_pauseRequested = true; };


void PressComplete() { if (m_updateState == UpdateEngineState::WaitingForPress) m_pressComplete = true; };
void ResetGame();

private:
~App();

void OnSuspending(
_In_ Platform::Object^ sender,
_In_ Windows::ApplicationModel::SuspendingEventArgs^ args
);
void OnResuming(
_In_ Platform::Object^ sender,
_In_ Platform::Object^ args
);

void OnViewStateChanged(
_In_ Windows::UI::ViewManagement::ApplicationView^ view,
_In_ Windows::UI::ViewManagement::ApplicationViewStateChangedEventArgs^ args
);

void OnWindowActivationChanged(
_In_ Platform::Object^ sender,
_In_ Windows::UI::Core::WindowActivatedEventArgs^ args
);

void OnWindowSizeChanged(
_In_ Windows::UI::Core::CoreWindow^ sender,
_In_ Windows::UI::Core::WindowSizeChangedEventArgs^ args
);

void OnLogicalDpiChanged(
_In_ Platform::Object^ sender
);

void OnRendering(
_In_ Object^ sender,
_In_ Object^ args
);

void InitializeGameState();
void Update();
void SetGameInfoOverlay(GameInfoOverlayState state);
void SetAction (GameInfoOverlayCommand command);
void ShowGameInfoOverlay();
void HideGameInfoOverlay();
void SetSnapped();
void HideSnapped();

Windows::Foundation::EventRegistrationToken m_eventToken;
bool m_pauseRequested;
bool m_pressComplete;
bool m_renderNeeded;
bool m_haveFocus;

MainPage^ m_mainPage;
Simple3DGame^ m_game;
MoveLookController^ m_controller; // Controller to handle user input

UpdateEngineState m_updateState;
UpdateEngineState m_updateStateNext;
PressResultState m_pressResult;
GameInfoOverlayState m_gameInfoOverlayState;
};
}

App.xaml.cpp

//// THIS CODE AND INFORMATION IS PROVIDED "AS IS" WITHOUT WARRANTY OF
//// ANY KIND, EITHER EXPRESSED OR IMPLIED, INCLUDING BUT NOT LIMITED TO
//// THE IMPLIED WARRANTIES OF MERCHANTABILITY AND/OR FITNESS FOR A
//// PARTICULAR PURPOSE.
////
//// Copyright (c) Microsoft Corporation. All rights reserved

#include "pch.h"
#include "App.xaml.h"

using namespace Simple3DGameXaml;

using namespace Platform;


using namespace Windows::ApplicationModel::Activation;
using namespace Windows::ApplicationModel;
using namespace Windows::ApplicationModel::Core;
using namespace Windows::UI::Core;
using namespace Windows::UI::ViewManagement;
using namespace Windows::Foundation;
using namespace Windows::Foundation::Collections;
using namespace Windows::UI::Xaml;
using namespace Windows::UI::Xaml::Controls;
using namespace Windows::UI::Xaml::Controls::Primitives;
using namespace Windows::UI::Xaml::Data;
using namespace Windows::UI::Xaml::Input;
using namespace Windows::UI::Xaml::Media;
using namespace Windows::UI::Xaml::Navigation;
using namespace Windows::UI::Xaml::Media::Animation;
using namespace Windows::Graphics::Display;

//----------------------------------------------------------------------
App::App():
m_pauseRequested(false),
m_pressComplete(false),
m_renderNeeded(false),
m_haveFocus(false)
{
InitializeComponent();
}
//----------------------------------------------------------------------
App::~App()
{
CompositionTarget::Rendering::remove(m_eventToken);
}
//----------------------------------------------------------------------
void App::OnLaunched(Windows::ApplicationModel::Activation::LaunchActivatedEventArgs^ args)
{
m_mainPage = ref new MainPage(this);

Window::Current->Content = m_mainPage;
Window::Current->Activated += ref new WindowActivatedEventHandler(this, &App::OnWindowActivationChanged);
Window::Current->Activate();

// Create the game and pass to window and root panel for swap chain setup.
m_controller = ref new MoveLookController();
m_controller->Initialize(Window::Current->CoreWindow);

m_game = ref new Simple3DGame();


m_game->Initialize(Window::Current->CoreWindow, m_mainPage, DisplayProperties::LogicalDpi, m_controller);

m_eventToken = CompositionTarget::Rendering::add(ref new EventHandler<Object^>(this, &App::OnRendering));

ApplicationView::GetForCurrentView()->ViewStateChanged +=
ref new TypedEventHandler<ApplicationView^, ApplicationViewStateChangedEventArgs^>(
this,
&App::OnViewStateChanged
);

CoreApplication::Suspending += ref new EventHandler<SuspendingEventArgs^>(this, &App::OnSuspending);


CoreApplication::Resuming += ref new EventHandler<Object^>(this, &App::OnResuming);

Window::Current->CoreWindow->SizeChanged +=
ref new TypedEventHandler<CoreWindow^, WindowSizeChangedEventArgs^>(this, &App::OnWindowSizeChanged);

DisplayProperties::LogicalDpiChanged +=
ref new DisplayPropertiesEventHandler(this, &App::OnLogicalDpiChanged);

InitializeGameState();
}
//----------------------------------------------------------------------
void App::OnRendering(
_In_ Object^ sender,
_In_ Object^ args
)
{
Update();
if (m_updateState == UpdateEngineState::Dynamics || m_renderNeeded)
{
m_game->Render();
m_renderNeeded = false;
}
}
//--------------------------------------------------------------------------------------
void App::OnWindowSizeChanged(
_In_ CoreWindow^ sender,
_In_ WindowSizeChangedEventArgs^ args
)
{
m_renderNeeded = true;
m_game->UpdateForWindowSizeChange();
}
//--------------------------------------------------------------------------------------
void App::OnLogicalDpiChanged(
_In_ Object^ sender
)
{
m_game->SetDpi(DisplayProperties::LogicalDpi);
}
//--------------------------------------------------------------------------------------
void App::InitializeGameState()
{
//
// Set up the initial state machine for handling the Game playing state.
//
if (m_game->GameActive() && m_game->LevelActive())
{
// The last time the game terminated it was in the middle
// of a level.
// We are waiting for the user to continue the game.
m_updateState = UpdateEngineState::WaitingForResources;
m_pressResult = PressResultState::ContinueLevel;
SetGameInfoOverlay(GameInfoOverlayState::Pause);
}
else if (!m_game->GameActive() && (m_game->HighScore().totalHits > 0))
{
// The last time the game terminated the game had been completed.
// Show the high score.
// We are waiting for the user to start a new game.
m_updateState = UpdateEngineState::WaitingForResources;
m_pressResult = PressResultState::LoadGame;
SetGameInfoOverlay(GameInfoOverlayState::GameStats);
}
else
{
// This is either the first time the game has run or
// the last time the game terminated the level was completed.
// We are waiting for the user to begin the next level.
m_updateState = UpdateEngineState::WaitingForResources;
m_pressResult = PressResultState::PlayLevel;
SetGameInfoOverlay(GameInfoOverlayState::LevelStart);
}
SetAction(GameInfoOverlayCommand::PleaseWait);
ShowGameInfoOverlay();
}
//--------------------------------------------------------------------------------------
void App::Update()
{
m_controller->Update();

switch (m_updateState)
{
case UpdateEngineState::WaitingForResources:
if (m_game->IsResourceLoadingComplete())
{
switch (m_pressResult)
{
case PressResultState::LoadGame:
SetGameInfoOverlay(GameInfoOverlayState::GameStats);
break;

case PressResultState::PlayLevel:
SetGameInfoOverlay(GameInfoOverlayState::LevelStart);
break;

case PressResultState::ContinueLevel:
SetGameInfoOverlay(GameInfoOverlayState::Pause);
break;
}
m_updateState = UpdateEngineState::WaitingForPress;
SetAction(GameInfoOverlayCommand::TapToContinue);
m_controller->WaitForPress();
ShowGameInfoOverlay();
m_renderNeeded = true;
}
break;

case UpdateEngineState::WaitingForPress:
if (m_controller->IsPressComplete() || m_pressComplete)
{
m_pressComplete = false;

switch (m_pressResult)
{
case PressResultState::LoadGame:
m_updateState = UpdateEngineState::WaitingForResources;
m_pressResult = PressResultState::PlayLevel;
m_controller->Active(false);
m_game->LoadGame();
SetAction(GameInfoOverlayCommand::PleaseWait);
SetGameInfoOverlay(GameInfoOverlayState::LevelStart);
ShowGameInfoOverlay();
break;

case PressResultState::PlayLevel:
m_updateState = UpdateEngineState::Dynamics;
HideGameInfoOverlay();
m_controller->Active(true);
m_game->StartLevel();
break;

case PressResultState::ContinueLevel:
m_updateState = UpdateEngineState::Dynamics;
HideGameInfoOverlay();
m_controller->Active(true);
m_game->ContinueGame();
break;
}
}
break;

case UpdateEngineState::Dynamics:
if (m_controller->IsPauseRequested() || m_pauseRequested)
{
m_pauseRequested = false;

m_game->PauseGame();
SetGameInfoOverlay(GameInfoOverlayState::Pause);
SetAction(GameInfoOverlayCommand::TapToContinue);
m_updateState = UpdateEngineState::WaitingForPress;
m_pressResult = PressResultState::ContinueLevel;
ShowGameInfoOverlay();
}
else
{
GameState runState = m_game->RunGame();
switch (runState)
{
case GameState::TimeExpired:
SetAction(GameInfoOverlayCommand::TapToContinue);
SetGameInfoOverlay(GameInfoOverlayState::GameOverExpired);
ShowGameInfoOverlay();
m_updateState = UpdateEngineState::WaitingForPress;
m_pressResult = PressResultState::LoadGame;
break;

case GameState::LevelComplete:
SetAction(GameInfoOverlayCommand::PleaseWait);
SetGameInfoOverlay(GameInfoOverlayState::LevelStart);
ShowGameInfoOverlay();
m_updateState = UpdateEngineState::WaitingForResources;
m_pressResult = PressResultState::PlayLevel;
break;

case GameState::GameComplete:
SetAction(GameInfoOverlayCommand::TapToContinue);
SetGameInfoOverlay(GameInfoOverlayState::GameOverCompleted);
ShowGameInfoOverlay();
m_updateState = UpdateEngineState::WaitingForPress;
m_pressResult = PressResultState::LoadGame;
break;
}
}

if (m_updateState == UpdateEngineState::WaitingForPress)
{
// Transitioning state, so enable waiting for the press event
m_controller->WaitForPress();
}
break;
}
}
//--------------------------------------------------------------------------------------
void App::OnWindowActivationChanged(
_In_ Platform::Object^ sender,
_In_ Windows::UI::Core::WindowActivatedEventArgs^ args
)
{
if (args->WindowActivationState == CoreWindowActivationState::Deactivated)
{
m_haveFocus = false;

switch (m_updateState)
{
case UpdateEngineState::Dynamics:
// From Dynamic mode, when coming out of Deactivated rather than going directly back into game play
// go to the paused state waiting for user input to continue.
m_updateStateNext = UpdateEngineState::WaitingForPress;
m_pressResult = PressResultState::ContinueLevel;
SetGameInfoOverlay(GameInfoOverlayState::Pause);
ShowGameInfoOverlay();
m_game->PauseGame();
m_updateState = UpdateEngineState::Deactivated;
SetAction(GameInfoOverlayCommand::None);
m_renderNeeded = true;
break;

case UpdateEngineState::WaitingForResources:
case UpdateEngineState::WaitingForPress:
m_updateStateNext = m_updateState;
m_updateState = UpdateEngineState::Deactivated;
SetAction(GameInfoOverlayCommand::None);
ShowGameInfoOverlay();
m_renderNeeded = true;
break;
}
}
else if (args->WindowActivationState == CoreWindowActivationState::CodeActivated
|| args->WindowActivationState == CoreWindowActivationState::PointerActivated)
{
m_haveFocus = true;

if (m_updateState == UpdateEngineState::Deactivated)
{
m_updateState = m_updateStateNext;

if (m_updateState == UpdateEngineState::WaitingForPress)
{
SetAction(GameInfoOverlayCommand::TapToContinue);
m_controller->WaitForPress();
}
else if (m_updateStateNext == UpdateEngineState::WaitingForResources)
{
SetAction(GameInfoOverlayCommand::PleaseWait);
}
}
}
}
//--------------------------------------------------------------------------------------
void App::OnSuspending(
_In_ Platform::Object^ sender,
_In_ SuspendingEventArgs^ args
)
{
// Save application state.
// If your application needs time to complete a lengthy operation, it can request a deferral.
// The SuspendingOperation has a deadline time. Make sure all your operations are complete by that time!
// If the app doesn't return from this handler within five seconds, it will be terminated.
SuspendingOperation^ op = args->SuspendingOperation;
SuspendingDeferral^ deferral = op->GetDeferral();

switch (m_updateState)
{
case UpdateEngineState::Dynamics:
// Game is in the active game play state, Stop Game Timer and Pause play and save state.
SetAction(GameInfoOverlayCommand::None);
SetGameInfoOverlay(GameInfoOverlayState::Pause);
m_updateStateNext = UpdateEngineState::WaitingForPress;
m_pressResult = PressResultState::ContinueLevel;
m_game->PauseGame();
break;

case UpdateEngineState::WaitingForResources:
case UpdateEngineState::WaitingForPress:
m_updateStateNext = m_updateState;
break;

default:
// Any other state don't save as next state as they are transient states and have already set m_updateStateNext
break;
}
m_updateState = UpdateEngineState::Suspended;

m_controller->Active(false);
m_game->OnSuspending();

deferral->Complete();
}
//--------------------------------------------------------------------------------------
void App::OnResuming(
_In_ Platform::Object^ sender,
_In_ Platform::Object^ args
)
{
if (m_haveFocus)
{
m_updateState = m_updateStateNext;
}
else
{
m_updateState = UpdateEngineState::Deactivated;
}
if (m_updateState == UpdateEngineState::WaitingForPress)
{
SetAction(GameInfoOverlayCommand::TapToContinue);
m_controller->WaitForPress();
}
m_game->OnResuming();
ShowGameInfoOverlay();
m_renderNeeded = true;
}
//--------------------------------------------------------------------------------------
void App::OnViewStateChanged(
_In_ ApplicationView^ view,
_In_ ApplicationViewStateChangedEventArgs^ args
)
{
m_renderNeeded = true;

if (args->ViewState == ApplicationViewState::Snapped)
{
switch (m_updateState)
{
case UpdateEngineState::Dynamics:
// From Dynamic mode, when coming out of SNAPPED layout rather than going directly back into game play
// go to the paused state waiting for user input to continue.
m_updateStateNext = UpdateEngineState::WaitingForPress;
m_pressResult = PressResultState::ContinueLevel;
SetGameInfoOverlay(GameInfoOverlayState::Pause);
SetAction(GameInfoOverlayCommand::TapToContinue);
m_game->PauseGame();
break;

case UpdateEngineState::WaitingForResources:
case UpdateEngineState::WaitingForPress:
// Avoid corrupting the m_updateStateNext on a transition from Snapped -> Snapped.
// Otherwise, just cache the current state and return to it when leaving SNAPPED layout.

m_updateStateNext = m_updateState;
break;

default:
break;
}

m_updateState = UpdateEngineState::Snapped;
m_controller->Active(false);
HideGameInfoOverlay();
SetSnapped();
}
else if (args->ViewState == ApplicationViewState::Filled ||
args->ViewState == ApplicationViewState::FullScreenLandscape ||
args->ViewState == ApplicationViewState::FullScreenPortrait)
{
if (m_updateState == UpdateEngineState::Snapped)
{

HideSnapped();
ShowGameInfoOverlay();
m_renderNeeded = true;

if (m_haveFocus)
{
if (m_updateStateNext == UpdateEngineState::WaitingForPress)
{
SetAction(GameInfoOverlayCommand::TapToContinue);
m_controller->WaitForPress();
}
else if (m_updateStateNext == UpdateEngineState::WaitingForResources)
{
SetAction(GameInfoOverlayCommand::PleaseWait);
}

m_updateState = m_updateStateNext;
}
else
{
m_updateState = UpdateEngineState::Deactivated;
SetAction(GameInfoOverlayCommand::None);
}
}
}
}
//--------------------------------------------------------------------------------------
void App::SetGameInfoOverlay(GameInfoOverlayState state)
{
m_gameInfoOverlayState = state;
switch (state)
{
case GameInfoOverlayState::GameStats:
m_mainPage->SetGameStats(
m_game->HighScore().levelCompleted + 1,
m_game->HighScore().totalHits,
m_game->HighScore().totalShots
);
break;

case GameInfoOverlayState::LevelStart:
m_mainPage->SetLevelStart(
m_game->LevelCompleted() + 1,
m_game->CurrentLevel()->Objective(),
m_game->CurrentLevel()->TimeLimit(),
m_game->BonusTime()
);
break;

case GameInfoOverlayState::GameOverCompleted:
m_mainPage->SetGameOver(
true,
m_game->LevelCompleted() + 1,
m_game->TotalHits(),
m_game->TotalShots(),
m_game->HighScore().totalHits
);
break;

case GameInfoOverlayState::GameOverExpired:
m_mainPage->SetGameOver(
false,
m_game->LevelCompleted(),
m_game->TotalHits(),
m_game->TotalShots(),
m_game->HighScore().totalHits
);
break;

case GameInfoOverlayState::Pause:
m_mainPage->SetPause(
m_game->LevelCompleted() + 1,
m_game->TotalHits(),
m_game->TotalShots(),
m_game->TimeRemaining()
);
break;
}
}
//--------------------------------------------------------------------------------------
void App::SetAction(GameInfoOverlayCommand command)
{
m_mainPage->SetAction(command);
}
//--------------------------------------------------------------------------------------
void App::ShowGameInfoOverlay()
{
m_mainPage->ShowGameInfoOverlay();
}
//--------------------------------------------------------------------------------------
void App::HideGameInfoOverlay()
{
m_mainPage->HideGameInfoOverlay();
}
//--------------------------------------------------------------------------------------
void App::SetSnapped()
{
m_mainPage->SetSnapped();
}
//--------------------------------------------------------------------------------------
void App::HideSnapped()
{
m_mainPage->HideSnapped();
}
//--------------------------------------------------------------------------------------
void App::ResetGame()
{
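// Restart the game from the beginning: reload the game in the
// background and show the level-start overlay while loading completes.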
m_updateState = UpdateEngineState::WaitingForResources;
m_pressResult = PressResultState::PlayLevel;
m_controller->Active(false);
m_game->LoadGame();
SetAction(GameInfoOverlayCommand::PleaseWait);
SetGameInfoOverlay(GameInfoOverlayState::LevelStart);
ShowGameInfoOverlay();
m_renderNeeded = true;
}

MainPage.xaml

<SwapChainBackgroundPanel
x:Name="DXSwapChainPanel"
x:Class="Simple3DGameXaml.MainPage"
xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
xmlns:d="http://schemas.microsoft.com/expression/blend/2008"
xmlns:mc="http://schemas.openxmlformats.org/markup-compatibility/2006"
mc:Ignorable="d"
d:DesignWidth="1366"
d:DesignHeight="768">
<UserControl x:Name="LayoutControl" Background="Transparent">
<Grid x:Name="LayoutRoot">
<VisualStateManager.VisualStateGroups>
<VisualStateGroup x:Name="GameInfoOverlayStates">
<VisualState x:Name="NormalState">
<Storyboard>
<DoubleAnimation Storyboard.TargetName="GameInfoOverlay" Storyboard.TargetProperty="(UIElement.Opacity)"
Duration="00:00:00.25" To="0">
<DoubleAnimation.EasingFunction>
<CubicEase EasingMode="EaseIn" />
</DoubleAnimation.EasingFunction>
</DoubleAnimation>

<ObjectAnimationUsingKeyFrames Storyboard.TargetProperty="Visibility" Storyboard.TargetName="GameInfoOverlay">
<DiscreteObjectKeyFrame KeyTime="0:0:1" Value="Collapsed" />
</ObjectAnimationUsingKeyFrames>
</Storyboard>
</VisualState>

<VisualState x:Name="GameInfoOverlayState">
<VisualState x:Name="GameInfoOverlayState">
<Storyboard>
<DoubleAnimation Storyboard.TargetName="GameInfoOverlay" Storyboard.TargetProperty="(UIElement.Opacity)"
Duration="00:00:00.25" To="1">
<DoubleAnimation.EasingFunction>
<CubicEase EasingMode="EaseIn" />
</DoubleAnimation.EasingFunction>
</DoubleAnimation>

<ObjectAnimationUsingKeyFrames Storyboard.TargetProperty="Visibility" Storyboard.TargetName="GameInfoOverlay">
<DiscreteObjectKeyFrame KeyTime="0:0:0" Value="Visible" />
</ObjectAnimationUsingKeyFrames>
</Storyboard>
</VisualState>

<VisualState x:Name="SnappedState">
<Storyboard>
<DoubleAnimation Storyboard.TargetName="SDKHeader" Storyboard.TargetProperty="(UIElement.Opacity)" Duration="00:00:00.25"
To="0">
<DoubleAnimation.EasingFunction>
<CubicEase EasingMode="EaseIn" />
</DoubleAnimation.EasingFunction>
</DoubleAnimation>
<DoubleAnimation Storyboard.TargetName="FullscreenView" Storyboard.TargetProperty="(UIElement.Opacity)"
Duration="00:00:00.25" To="0">
<DoubleAnimation.EasingFunction>
<CubicEase EasingMode="EaseIn" />
</DoubleAnimation.EasingFunction>
</DoubleAnimation>
<DoubleAnimation Storyboard.TargetName="SnappedView" Storyboard.TargetProperty="(UIElement.Opacity)" Duration="00:00:00.25"
To="1">
<DoubleAnimation.EasingFunction>
<CubicEase EasingMode="EaseIn" />
</DoubleAnimation.EasingFunction>
</DoubleAnimation>

<ObjectAnimationUsingKeyFrames Storyboard.TargetProperty="Visibility" Storyboard.TargetName="SDKHeader">
<DiscreteObjectKeyFrame KeyTime="0:0:1" Value="Collapsed" />
</ObjectAnimationUsingKeyFrames>
<ObjectAnimationUsingKeyFrames Storyboard.TargetProperty="Visibility" Storyboard.TargetName="FullscreenView">
<DiscreteObjectKeyFrame KeyTime="0:0:1" Value="Collapsed" />
</ObjectAnimationUsingKeyFrames>
<ObjectAnimationUsingKeyFrames Storyboard.TargetProperty="Visibility" Storyboard.TargetName="SnappedView">
<DiscreteObjectKeyFrame KeyTime="0:0:0" Value="Visible" />
</ObjectAnimationUsingKeyFrames>
</Storyboard>
</VisualState>

<VisualState x:Name="UnsnappedState">
<Storyboard>
<DoubleAnimation Storyboard.TargetName="SDKHeader" Storyboard.TargetProperty="(UIElement.Opacity)" Duration="00:00:00.25"
To="1">
<DoubleAnimation.EasingFunction>
<CubicEase EasingMode="EaseIn" />
</DoubleAnimation.EasingFunction>
</DoubleAnimation>
<DoubleAnimation Storyboard.TargetName="FullscreenView" Storyboard.TargetProperty="(UIElement.Opacity)"
Duration="00:00:00.25" To="1">
<DoubleAnimation.EasingFunction>
<CubicEase EasingMode="EaseIn" />
</DoubleAnimation.EasingFunction>
</DoubleAnimation>
<DoubleAnimation Storyboard.TargetName="SnappedView" Storyboard.TargetProperty="(UIElement.Opacity)" Duration="00:00:00.25"
To="0">
<DoubleAnimation.EasingFunction>
<CubicEase EasingMode="EaseIn" />
</DoubleAnimation.EasingFunction>
</DoubleAnimation>

<ObjectAnimationUsingKeyFrames Storyboard.TargetProperty="Visibility" Storyboard.TargetName="SDKHeader">
<DiscreteObjectKeyFrame KeyTime="0:0:0" Value="Visible" />
</ObjectAnimationUsingKeyFrames>
<ObjectAnimationUsingKeyFrames Storyboard.TargetProperty="Visibility" Storyboard.TargetName="FullscreenView">
<DiscreteObjectKeyFrame KeyTime="0:0:0" Value="Visible" />
</ObjectAnimationUsingKeyFrames>
<ObjectAnimationUsingKeyFrames Storyboard.TargetProperty="Visibility" Storyboard.TargetName="SnappedView">
<DiscreteObjectKeyFrame KeyTime="0:0:1" Value="Collapsed" />
</ObjectAnimationUsingKeyFrames>
</Storyboard>
</VisualState>
</VisualStateGroup>
</VisualStateManager.VisualStateGroups>
<Grid x:Name="ContentRoot">
<Grid.RowDefinitions>
<RowDefinition Height="1*"/>
<RowDefinition Height="2*"/>
<RowDefinition Height="1*"/>
</Grid.RowDefinitions>

<!-- Sample Overlay Title -->
<StackPanel x:Name="SDKHeader" Grid.Row="0">
<StackPanel Orientation="Horizontal">
<Image Source="windows-sdk.png"/>
<TextBlock Text="Windows 8 SDK Samples" VerticalAlignment="Bottom" Style="{StaticResource OverlayTitleStyle}"
TextWrapping="Wrap"/>
</StackPanel>
<TextBlock x:Name="FeatureName" Text="UWP DirectX/XAML first-person game sample" Style="{StaticResource OverlayH1Style}"
TextWrapping="Wrap"/>
</StackPanel>
<!-- End Sample Overlay Title -->

<Grid Grid.Row="1" x:Name="SnappedView" Background="{StaticResource PageBackgroundBrush}" Visibility="Collapsed">


<Grid.RowDefinitions>
<RowDefinition Height="2*"/>
<RowDefinition Height="5*"/>
<RowDefinition Height="1*"/>
</Grid.RowDefinitions>

<!-- Title of the Game Info Overlay -->
<StackPanel Grid.Row="1" Orientation="Horizontal" HorizontalAlignment="Center">
<TextBlock
Text="Game Paused"
Style="{StaticResource TitleStyle}"/>
</StackPanel>
</Grid>

<Grid Grid.Row="1" x:Name="FullscreenView">


<Grid.ColumnDefinitions>
<ColumnDefinition Width="1*"/>
<ColumnDefinition Width="2*"/>
<ColumnDefinition Width="1*"/>
</Grid.ColumnDefinitions>

<!-- Center of the outer Grid. This is the Center 50% of the screen -->
<Grid x:Name="GameInfoOverlay" Grid.Column="1" Background="{StaticResource PageBackgroundBrush}" Visibility="Collapsed"
Tapped="OnGameInfoOverlayTapped">
<Grid.RowDefinitions>
<RowDefinition Height="2*"/>
<RowDefinition Height="5*"/>
<RowDefinition Height="1*"/>
</Grid.RowDefinitions>

<!-- Title of the Game Info Overlay -->
<StackPanel Grid.Row="0" Orientation="Horizontal" HorizontalAlignment="Center">
<TextBlock x:Name="GameInfoOverlayTitle"
Text="Title"
Style="{StaticResource TitleStyle}"/>
</StackPanel>
<!-- Body1: Game Statistics -->
<Grid x:Name="Stats" Grid.Row="1" Visibility="Collapsed">
<Grid.ColumnDefinitions>
<ColumnDefinition Width="Auto"/>
<ColumnDefinition Width="*"/>
</Grid.ColumnDefinitions>
<Grid.RowDefinitions>
<RowDefinition Height="Auto"/>
<RowDefinition Height="Auto"/>
<RowDefinition Height="Auto"/>
<RowDefinition Height="*"/>
</Grid.RowDefinitions>

<StackPanel Grid.Column="0" Grid.Row="0" Orientation="Horizontal">


<TextBlock
Text="Levels Completed"
Style="{StaticResource H1Style}"/>
</StackPanel>
<StackPanel Grid.Column="0" Grid.Row="1" Orientation="Horizontal">
<TextBlock
Text="Total Points"
Style="{StaticResource H1Style}"/>
</StackPanel>
<StackPanel Grid.Column="0" Grid.Row="2" Orientation="Horizontal">
<TextBlock
Text="Total Shots"
Style="{StaticResource H1Style}"/>
</StackPanel>
<StackPanel x:Name="HighScoreTitle" Grid.Column="0" Grid.Row="3" Orientation="Horizontal" Visibility="Visible">
<TextBlock
Text="High Score"
Style="{StaticResource H1StyleSpace}"/>
</StackPanel>
<StackPanel Grid.Column="1" Grid.Row="0" Orientation="Horizontal">
<TextBlock x:Name="LevelsCompleted"
Text="1"
Style="{StaticResource H1Style}"/>
</StackPanel>
<StackPanel Grid.Column="1" Grid.Row="1" Orientation="Horizontal">
<TextBlock x:Name="TotalPoints"
Text="9"
Style="{StaticResource H1Style}"/>
</StackPanel>
<StackPanel Grid.Column="1" Grid.Row="2" Orientation="Horizontal">
<TextBlock x:Name="TotalShots"
Text="25"
Style="{StaticResource H1Style}"/>
</StackPanel>
<StackPanel x:Name="HighScoreData" Grid.Column="1" Grid.Row="3" Orientation="Horizontal">
<TextBlock x:Name="HighScore"
Text="120"
Style="{StaticResource H1StyleSpace}"/>
</StackPanel>
</Grid>

<!-- Body2: Level Start -->
<Grid x:Name="LevelStart" Grid.Row="1" Visibility="Visible">
<Grid.ColumnDefinitions>
<ColumnDefinition Width="Auto"/>
<ColumnDefinition Width="*"/>
</Grid.ColumnDefinitions>
<Grid.RowDefinitions>
<RowDefinition Height="Auto"/>
<RowDefinition Height="Auto"/>
<RowDefinition Height="*"/>
</Grid.RowDefinitions>

<StackPanel Grid.Column="0" Grid.Row="0" Orientation="Horizontal">


<TextBlock
<TextBlock
Text="Objective"
Style="{StaticResource H1Style}"/>
</StackPanel>
<StackPanel Grid.Column="0" Grid.Row="1" Orientation="Horizontal">
<TextBlock
Text="Time Limit"
Style="{StaticResource H1Style}"/>
</StackPanel>
<StackPanel x:Name="BonusTimeTitle" Grid.Column="0" Grid.Row="2" Orientation="Horizontal">
<TextBlock
Text="Bonus Time"
Style="{StaticResource H1Style}"/>
</StackPanel>
<Grid Grid.Column="1" Grid.Row="0">
<TextBlock x:Name="Objective"
Text="Objective Text - replaced before it is displayed"
TextWrapping="Wrap"
Style="{StaticResource H1Style}"/>
</Grid>
<StackPanel Grid.Column="1" Grid.Row="1" Orientation="Horizontal">
<TextBlock x:Name="TimeLimit"
Text="30 sec"
Style="{StaticResource H1Style}"/>
</StackPanel>
<StackPanel x:Name="BonusTimeData" Grid.Column="1" Grid.Row="2" Orientation="Horizontal">
<TextBlock x:Name="BonusTime"
Text="20 sec"
Style="{StaticResource H1Style}"/>
</StackPanel>
</Grid>

<!-- Body3: Pause -->
<Grid x:Name="PauseData" Grid.Row="1" Visibility="Visible">
<Grid.ColumnDefinitions>
<ColumnDefinition Width="Auto"/>
<ColumnDefinition Width="*"/>
</Grid.ColumnDefinitions>
<Grid.RowDefinitions>
<RowDefinition Height="Auto"/>
<RowDefinition Height="Auto"/>
<RowDefinition Height="Auto"/>
<RowDefinition Height="*"/>
</Grid.RowDefinitions>

<StackPanel Grid.Column="0" Grid.Row="0" Orientation="Horizontal">


<TextBlock
Text="Level"
Style="{StaticResource H1Style}"/>
</StackPanel>
<StackPanel Grid.Column="0" Grid.Row="1" Orientation="Horizontal">
<TextBlock
Text="Hits"
Style="{StaticResource H1Style}"/>
</StackPanel>
<StackPanel Grid.Column="0" Grid.Row="2" Orientation="Horizontal">
<TextBlock
Text="Shots"
Style="{StaticResource H1Style}"/>
</StackPanel>
<StackPanel Grid.Column="0" Grid.Row="3" Orientation="Horizontal">
<TextBlock
Text="Time"
Style="{StaticResource H1Style}"/>
</StackPanel>
<StackPanel Grid.Column="1" Grid.Row="0" Orientation="Horizontal">
<TextBlock x:Name="PauseLevel"
Text="1"
TextWrapping="Wrap"
Style="{StaticResource H1Style}"/>
Style="{StaticResource H1Style}"/>
</StackPanel>
<StackPanel Grid.Column="1" Grid.Row="1" Orientation="Horizontal">
<TextBlock x:Name="PauseHits"
Text="0"
Style="{StaticResource H1Style}"/>
</StackPanel>
<StackPanel Grid.Column="1" Grid.Row="2" Orientation="Horizontal">
<TextBlock x:Name="PauseShots"
Text="0"
Style="{StaticResource H1Style}"/>
</StackPanel>
<StackPanel Grid.Column="1" Grid.Row="3" Orientation="Horizontal">
<TextBlock x:Name="PauseTimeRemaining"
Text="20.0 sec"
Style="{StaticResource H1Style}"/>
</StackPanel>
</Grid>

<!-- Footer of Game Info Overlay. There are several options -->
<StackPanel x:Name="TapToContinue" Grid.Row="2" Orientation="Horizontal" Visibility="Collapsed">
<TextBlock
Text="Tap to continue ..."
Style="{StaticResource H3Style}"
TextWrapping="Wrap"/>
</StackPanel>
<StackPanel x:Name="PleaseWait" Grid.Row="2" Orientation="Horizontal" Visibility="Collapsed">
<TextBlock
Text="Level Loading Please Wait ..."
Style="{StaticResource H3Style}"
TextWrapping="Wrap"/>
</StackPanel>
<StackPanel x:Name="PlayAgain" Grid.Row="2" Orientation="Horizontal" Visibility="Collapsed">
<TextBlock
Text="Tap to play again ..."
Style="{StaticResource H3Style}"
TextWrapping="Wrap"/>
</StackPanel>
</Grid>
</Grid>
</Grid>
<AppBar x:Name="GameAppBar" Height="88" VerticalAlignment="Bottom">
<Grid>
<Grid.ColumnDefinitions>
<ColumnDefinition Width="*"/>
<ColumnDefinition Width="Auto"/>
</Grid.ColumnDefinitions>
<StackPanel Grid.Column="0" Orientation="Horizontal" HorizontalAlignment="Left" VerticalAlignment="Top">
<Button x:Name="Reset" Tag="Reset" Style="{StaticResource ResetButtonStyle}" Click="OnResetButtonClicked"/>
</StackPanel>
<StackPanel Grid.Column="1" Orientation="Horizontal" HorizontalAlignment="Right" VerticalAlignment="Top">
<Button x:Name="Pause" Tag="Pause" Style="{StaticResource PauseButtonStyle}" Click="OnPauseButtonClicked"/>
<Button x:Name="Play" Tag="Play" Style="{StaticResource PlayButtonStyle}" Click="OnPlayButtonClicked"/>
</StackPanel>
</Grid>
</AppBar>
</Grid>
</UserControl>
</SwapChainBackgroundPanel>

MainPage.xaml.h
//// THIS CODE AND INFORMATION IS PROVIDED "AS IS" WITHOUT WARRANTY OF
//// ANY KIND, EITHER EXPRESSED OR IMPLIED, INCLUDING BUT NOT LIMITED TO
//// THE IMPLIED WARRANTIES OF MERCHANTABILITY AND/OR FITNESS FOR A
//// PARTICULAR PURPOSE.
////
//// Copyright (c) Microsoft Corporation. All rights reserved

#pragma once

#include "MainPage.g.h"

namespace Simple3DGameXaml
{
ref class App;

public enum class GameInfoOverlayCommand
{
None,
TapToContinue,
PleaseWait,
PlayAgain,
};

public ref class MainPage sealed
{
public:
MainPage(App^ app);

void SetGameStats(int maxLevel, int hitCount, int shotCount);
void SetGameOver(bool win, int maxLevel, int hitCount, int shotCount, int highScore);
void SetLevelStart(int level, Platform::String^ objective, float timeLimit, float bonusTime);
void SetPause(int level, int hitCount, int shotCount, float timeRemaining);
void SetSnapped();
void HideSnapped();
void SetAction(GameInfoOverlayCommand action);
void HideGameInfoOverlay();
void ShowGameInfoOverlay();

protected:
void OnPauseButtonClicked(Object^ sender, Windows::UI::Xaml::RoutedEventArgs^ e);
void OnPlayButtonClicked(Object^ sender, Windows::UI::Xaml::RoutedEventArgs^ e);
void OnResetButtonClicked(Object^ sender, Windows::UI::Xaml::RoutedEventArgs^ e);
void OnGameInfoOverlayTapped(Object^ sender, Windows::UI::Xaml::Input::TappedRoutedEventArgs^ args);

private:
App^ m_app;
};
}

MainPage.xaml.cpp code-behind

//// THIS CODE AND INFORMATION IS PROVIDED "AS IS" WITHOUT WARRANTY OF
//// ANY KIND, EITHER EXPRESSED OR IMPLIED, INCLUDING BUT NOT LIMITED TO
//// THE IMPLIED WARRANTIES OF MERCHANTABILITY AND/OR FITNESS FOR A
//// PARTICULAR PURPOSE.
////
//// Copyright (c) Microsoft Corporation. All rights reserved

#include "pch.h"
#include "App.xaml.h"
#include "MainPage.xaml.h"

using namespace Simple3DGameXaml;

using namespace Platform;
using namespace Windows::Foundation;
using namespace Windows::Foundation::Collections;
using namespace Windows::Graphics::Display;
using namespace Windows::UI::ViewManagement;
using namespace Windows::UI::Xaml;
using namespace Windows::UI::Xaml::Controls;
using namespace Windows::UI::Xaml::Controls::Primitives;
using namespace Windows::UI::Xaml::Data;
using namespace Windows::UI::Xaml::Input;
using namespace Windows::UI::Xaml::Media;
using namespace Windows::UI::Xaml::Navigation;

//----------------------------------------------------------------------
MainPage::MainPage(App^ app)
{
InitializeComponent();

m_app = app;
}
//----------------------------------------------------------------------
void MainPage::HideGameInfoOverlay()
{
VisualStateManager::GoToState(this->LayoutControl, ref new String(L"NormalState"), true);
}
//----------------------------------------------------------------------
void MainPage::ShowGameInfoOverlay()
{
VisualStateManager::GoToState(this->LayoutControl, ref new String(L"GameInfoOverlayState"), true);
}
//----------------------------------------------------------------------
void MainPage::SetAction(GameInfoOverlayCommand action)
{
// Enable only one of the four possible commands at the bottom of the
// Game Info Overlay.

PlayAgain->Visibility = ::Visibility::Collapsed;
PleaseWait->Visibility = ::Visibility::Collapsed;
TapToContinue->Visibility = ::Visibility::Collapsed;

switch (action)
{
case GameInfoOverlayCommand::PlayAgain:
PlayAgain->Visibility = ::Visibility::Visible;
break;
case GameInfoOverlayCommand::PleaseWait:
PleaseWait->Visibility = ::Visibility::Visible;
break;
case GameInfoOverlayCommand::TapToContinue:
TapToContinue->Visibility = ::Visibility::Visible;
break;
case GameInfoOverlayCommand::None:
break;
}
}
//----------------------------------------------------------------------
void MainPage::SetGameStats(
int maxLevel,
int hitCount,
int shotCount
)
{
GameInfoOverlayTitle->Text = "Game Statistics";
Stats->Visibility = ::Visibility::Visible;
LevelStart->Visibility = ::Visibility::Collapsed;
PauseData->Visibility = ::Visibility::Collapsed;

static const int bufferLength = 20;
static char16 wsbuffer[bufferLength];

int length = swprintf_s(wsbuffer, bufferLength, L"%d", maxLevel);
LevelsCompleted->Text = ref new Platform::String(wsbuffer, length);

length = swprintf_s(wsbuffer, bufferLength, L"%d", hitCount);
TotalPoints->Text = ref new Platform::String(wsbuffer, length);

length = swprintf_s(wsbuffer, bufferLength, L"%d", shotCount);
TotalShots->Text = ref new Platform::String(wsbuffer, length);

// High Score is not used for showing Game Statistics
HighScoreTitle->Visibility = ::Visibility::Collapsed;
HighScoreData->Visibility = ::Visibility::Collapsed;
}
//----------------------------------------------------------------------
void MainPage::SetGameOver(
bool win,
int maxLevel,
int hitCount,
int shotCount,
int highScore
)
{
if (win)
{
GameInfoOverlayTitle->Text = "You Won!";
}
else
{
GameInfoOverlayTitle->Text = "Game Over";
}
Stats->Visibility = ::Visibility::Visible;
LevelStart->Visibility = ::Visibility::Collapsed;
PauseData->Visibility = ::Visibility::Collapsed;

static const int bufferLength = 20;
static char16 wsbuffer[bufferLength];

int length = swprintf_s(wsbuffer, bufferLength, L"%d", maxLevel);
LevelsCompleted->Text = ref new Platform::String(wsbuffer, length);

length = swprintf_s(wsbuffer, bufferLength, L"%d", hitCount);
TotalPoints->Text = ref new Platform::String(wsbuffer, length);

length = swprintf_s(wsbuffer, bufferLength, L"%d", shotCount);
TotalShots->Text = ref new Platform::String(wsbuffer, length);

// Show High Score
HighScoreTitle->Visibility = ::Visibility::Visible;
HighScoreData->Visibility = ::Visibility::Visible;
length = swprintf_s(wsbuffer, bufferLength, L"%d", highScore);
HighScore->Text = ref new Platform::String(wsbuffer, length);
}
//----------------------------------------------------------------------
void MainPage::SetLevelStart(
int level,
Platform::String^ objective,
float timeLimit,
float bonusTime
)
{
static const int bufferLength = 20;
static char16 wsbuffer[bufferLength];

int length = swprintf_s(wsbuffer, bufferLength, L"Level %d", level);
GameInfoOverlayTitle->Text = ref new Platform::String(wsbuffer, length);

Stats->Visibility = ::Visibility::Collapsed;
LevelStart->Visibility = ::Visibility::Visible;
PauseData->Visibility = ::Visibility::Collapsed;
Objective->Text = objective;

length = swprintf_s(wsbuffer, bufferLength, L"%6.1f sec", timeLimit);
TimeLimit->Text = ref new Platform::String(wsbuffer, length);

if (bonusTime > 0.0)
{
BonusTimeTitle->Visibility = ::Visibility::Visible;
BonusTimeData->Visibility = ::Visibility::Visible;
length = swprintf_s(wsbuffer, bufferLength, L"%6.1f sec", bonusTime);
BonusTime->Text = ref new Platform::String(wsbuffer, length);
}
else
{
BonusTimeTitle->Visibility = ::Visibility::Collapsed;
BonusTimeData->Visibility = ::Visibility::Collapsed;
}
}
//----------------------------------------------------------------------
void MainPage::SetPause(int level, int hitCount, int shotCount, float timeRemaining)
{
GameInfoOverlayTitle->Text = "Paused";
Stats->Visibility = ::Visibility::Collapsed;
LevelStart->Visibility = ::Visibility::Collapsed;
PauseData->Visibility = ::Visibility::Visible;

static const int bufferLength = 20;
static char16 wsbuffer[bufferLength];

int length = swprintf_s(wsbuffer, bufferLength, L"%d", level);
PauseLevel->Text = ref new Platform::String(wsbuffer, length);

length = swprintf_s(wsbuffer, bufferLength, L"%d", hitCount);
PauseHits->Text = ref new Platform::String(wsbuffer, length);

length = swprintf_s(wsbuffer, bufferLength, L"%d", shotCount);
PauseShots->Text = ref new Platform::String(wsbuffer, length);

length = swprintf_s(wsbuffer, bufferLength, L"%6.1f sec", timeRemaining);
PauseTimeRemaining->Text = ref new Platform::String(wsbuffer, length);
}
//----------------------------------------------------------------------
void MainPage::SetSnapped()
{
VisualStateManager::GoToState(this->LayoutControl, ref new String(L"SnappedState"), true);
}
//----------------------------------------------------------------------
void MainPage::HideSnapped()
{
VisualStateManager::GoToState(this->LayoutControl, ref new String(L"UnsnappedState"), true);
}
//----------------------------------------------------------------------
void MainPage::OnGameInfoOverlayTapped(Object^ sender, TappedRoutedEventArgs^ args)
{
m_app->PressComplete();
}
//----------------------------------------------------------------------
void MainPage::OnPauseButtonClicked(Object^ sender, RoutedEventArgs^ args)
{
m_app->PauseRequested();
}
//----------------------------------------------------------------------
void MainPage::OnPlayButtonClicked(Object^ sender, RoutedEventArgs^ args)
{
m_app->PressComplete();
}
//----------------------------------------------------------------------
void MainPage::OnResetButtonClicked(Object^ sender, RoutedEventArgs^ args)
{
m_app->ResetGame();
}
//----------------------------------------------------------------------

To download a version of the sample game that uses XAML for the overlay, go to the Direct3D shooting game
sample (XAML).
Developing Marble Maze, a UWP game in C++ and DirectX

[ Updated for UWP apps on Windows 10. For Windows 8.x articles, see the archive ]
This section of the documentation describes how to use DirectX and Visual C++ to create a 3-D Universal
Windows Platform (UWP) game. This documentation shows how to create a 3-D game named Marble Maze that
embraces new form factors such as tablets and also works on traditional desktop and laptop PCs.

Note To download the Marble Maze source code, see DirectX Marble Maze game sample.
Important Marble Maze illustrates design patterns that we consider to be best practices for creating UWP
games. You can adapt many of the implementation details to fit your own practices and the unique
requirements of the game you are developing. Feel free to use different techniques or libraries when those
better suit your needs. (However, always ensure that your code passes the Windows App Certification Kit.)
When we consider a Marble Maze implementation to be essential for successful game development, we
emphasize it in this documentation.

Introducing Marble Maze


We chose Marble Maze because it is relatively basic, but still demonstrates the breadth of features that are found
in most games. It shows how to use graphics, input handling, and audio. It also demonstrates game mechanics
such as rules and goals.
Marble Maze resembles the table-top labyrinth game that is typically constructed from a box that contains holes
and a steel or glass marble. The goal of Marble Maze is the same as the table-top version: tilt the maze to guide
the marble from the start to the end of the maze in as little time as possible, without letting the marble fall into
any of the holes. Marble Maze adds the concept of checkpoints. If the marble falls into a hole, the game is restarted
at the last checkpoint location that the marble passed over.
Marble Maze offers multiple ways for a user to interact with the game board. If you have a touch-enabled or
accelerometer-enabled device, you can use those devices to move the game board. You can also use an Xbox 360
controller or a mouse to control game play.
Prerequisites
Windows 10
Microsoft Visual Studio 2015
C++ programming knowledge
Familiarity with DirectX and DirectX terminology
Basic knowledge of COM

Who should read this?


If you're interested in creating 3-D games or other graphics-intensive applications for Windows 10, this is for you.
We hope you use the principles and practices that this documentation outlines to create your own UWP game. A
background or strong interest in C++ and DirectX programming will help you get the most out of this
documentation. If you don't have experience with DirectX, you can still benefit if you have experience with similar
3-D graphics programming environments.
The document Walkthrough: create a simple UWP game with DirectX describes another sample that implements a
basic 3-D shooting game by using DirectX and C++.

What this documentation covers


This documentation teaches how to:
Use the Windows Runtime API and DirectX to create a UWP game.
Use Direct3D and Direct2D to work with visual content such as models, textures, vertex and pixel shaders, and
2-D overlays.
Integrate input mechanisms such as touch, accelerometer, and the Xbox 360 controller.
Use XAudio2 to incorporate music and sound effects.

What this documentation does not cover


This documentation does not cover the following aspects of game development. These aspects are followed by
additional resources that cover them.
3-D game design principles.
C++ or DirectX programming basics.
How to design resources such as textures, models, or audio.
How to troubleshoot behavior or performance issues in your game.
How to prepare your game for use in other parts of the world.
How to certify and publish your game to the Windows Store.
Marble Maze also uses the DirectXMath library to work with 3-D geometry and perform physics calculations, such
as collisions. DirectXMath is not covered in-depth in this section. For more info about DirectXMath, see
DirectXMath Programming Guide. For details about how Marble Maze uses DirectXMath, refer to the source code.
Although Marble Maze provides many reusable components, it is not a complete game development framework.
When we consider a Marble Maze component to be reusable in your game, we emphasize it in the documentation.

Next steps
We recommend that you start with Marble Maze sample fundamentals to learn about the Marble Maze structure
and some of the coding and style guidelines that the Marble Maze source code follows. The following list
outlines the documents in this section so that you can more easily refer to them.

Related Topics

Marble Maze sample fundamentals: Provides an overview of the game structure and some of the code and style guidelines that the source code follows.

Marble Maze application structure: Describes how the Marble Maze application code is structured and how the structure of a DirectX UWP app differs from that of a traditional desktop application.

Adding visual content to the Marble Maze sample: Describes some of the key practices to keep in mind when you work with Direct3D and Direct2D. Also describes how Marble Maze applies these practices for visual content.

Adding input and interactivity to the Marble Maze sample: Describes how Marble Maze works with accelerometer, touch, and Xbox 360 controller devices to enable users to navigate menus and interact with the game board. Also describes some of the best practices to keep in mind when you work with input.

Adding audio to the Marble Maze sample: Describes how Marble Maze works with audio to add music and sound effects to the game experience.

Marble Maze sample fundamentals

[ Updated for UWP apps on Windows 10. For Windows 8.x articles, see the archive ]
This document describes the fundamental characteristics of the Marble Maze project, for example, how it uses
Visual C++ in the Windows Runtime environment, how it is created and structured, and how it is built. The
document also describes several of the conventions that are used in the code.

Note The sample code that corresponds to this document is found in the DirectX Marble Maze game sample.

Here are some of the key points that this document discusses for when you plan and develop your Universal
Windows Platform (UWP) game.
Use the DirectX 11 App (Universal Windows) template in a C++ application to create your DirectX UWP
game. Use Visual Studio to build a UWP app project as you would build a standard project.
The Windows Runtime provides classes and interfaces so that you can develop UWP apps in a more modern,
object-oriented manner.
Use object references with the hat (^) symbol to manage the lifetime of Windows Runtime variables,
Microsoft::WRL::ComPtr to manage the lifetime of COM objects, and std::shared_ptr or std::unique_ptr to
manage the lifetime of all other heap-allocated C++ objects.
In most cases, use exception handling, instead of result codes, to deal with unexpected errors.
Use SAL annotations together with code analysis tools to help discover errors in your app.

Creating the Visual Studio project


If you've downloaded and extracted the sample, you can open the MarbleMaze.sln solution file in Visual Studio, and
you'll have the code in front of you. You can also view the source on the DirectX Marble Maze game sample MSDN
Samples Gallery page by selecting the Browse Code tab.
When we created the Visual Studio project for Marble Maze, we started with an existing project. However, if you do
not already have an existing project that provides the basic functionality that your DirectX UWP game requires, we
recommend that you create a project based on the Visual Studio DirectX 11 App (Universal Windows) template
because it provides a basic working 3-D application.
One important project setting in the DirectX 11 App (Universal Windows) template is the /ZW option, which
enables the program to use the Windows Runtime language extensions. This option is enabled by default when you
use the Visual Studio template.

Caution The /ZW option is not compatible with options such as /clr. In the case of /clr, this means that you
cannot target both the .NET Framework and the Windows Runtime from the same Visual C++ project.

Every UWP app that you acquire from the Windows Store comes in the form of an app package. An app package
contains a package manifest, which contains information about your app. For example, you can specify the
capabilities (that is, the required access to protected system resources or user data) of your app. If you determine
that your app requires certain capabilities, use the package manifest to declare the required capabilities. The
manifest also lets you specify project properties such as supported device rotations, tile images, and the splash
screen. For more info about app packages, see Packaging apps.
Building, deploying, and running the game
Build a UWP app project as you would build a standard project. (On the menu bar, choose Build, Build Solution.)
The build step compiles the code and also packages it for use as a UWP app.
After you build the project, you must deploy it. (On the menu bar, choose Build, Deploy Solution.) Visual Studio
also deploys the project when you run the game from the debugger.
After you deploy the project, pick the Marble Maze tile to run the game. Alternatively, from Visual Studio, on the
menu bar, choose Debug, Start Debugging.
Controlling the game
You can use touch, the accelerometer, the Xbox 360 controller, or the mouse to control Marble Maze.
Use the directional pad on the controller to change the active menu item.
Use touch, the A button, the Start button, or the mouse to pick a menu item.
Use touch, the accelerometer, the left thumbstick, or the mouse to tilt the maze.
Use touch, the A button, the Start button, or the mouse to close menus such as the high score table.
Use the Start button or the P key to pause or resume the game.
Use the Back button on the controller or the Home key on the keyboard to restart the game.
When the high-score table is visible, use the Back button or Home key to clear all scores.

Code conventions
The Windows Runtime is a programming interface that you can use to create UWP apps that run only in a special
application environment. Such apps use authorized functions, data types, and devices, and are distributed from the
Windows Store. At the lowest level, the Windows Runtime consists of an Application Binary Interface (ABI). The ABI
is a low-level binary contract that makes Windows Runtime APIs accessible to multiple programming languages
such as JavaScript, the .NET languages, and Visual C++.

In order to call Windows Runtime APIs from JavaScript and .NET, those languages require projections that are
specific to each language environment. When you call a Windows Runtime API from JavaScript or .NET, you are
invoking the projection, which in turn calls the underlying ABI function. Although you can call the ABI functions
directly in C++, Microsoft provides projections for C++ as well, because they make it much simpler to consume the
Windows Runtime APIs, while still maintaining high performance. Microsoft also provides language extensions to
Visual C++ that specifically support the Windows Runtime projections. Many of these language extensions
resemble the syntax for the C++/CLI language. However, instead of targeting the common language runtime (CLR),
native apps use this syntax to target the Windows Runtime.

The object reference, or hat (^), modifier is an important part of this new syntax because it enables the automatic
deletion of runtime objects by means of reference counting. Instead of calling methods such as AddRef and
Release to manage the lifetime of a Windows Runtime object, the runtime deletes the object when no other
component references it, for example, when it leaves scope or you set all references to nullptr. Another important
part of using Visual C++ to create UWP apps is the ref new keyword. Use ref new instead of new to create
reference-counted Windows Runtime objects. For more info, see Type System (C++/CX).
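
For example (a minimal illustration, not taken from the Marble Maze source):

// Create a reference-counted Windows Runtime object with ref new.
Windows::Foundation::Uri^ uri =
    ref new Windows::Foundation::Uri(L"http://www.contoso.com");

// No AddRef/Release calls are needed. Setting the last remaining ^
// reference to nullptr (or letting it leave scope) deletes the object.
uri = nullptr;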

Important
You only have to use ^ and ref new when you create Windows Runtime objects or create Windows Runtime
components. You can use the standard C++ syntax when you write core application code that does not use the
Windows Runtime.

Marble Maze uses ^ together with Microsoft::WRL::ComPtr to manage heap-allocated objects and minimize
memory leaks. We recommend that you use ^ to manage the lifetime of Windows Runtime variables, ComPtr to
manage the lifetime of COM variables (such as when you use DirectX), and std::shared_ptr or std::unique_ptr
to manage the lifetime of all other heap-allocated C++ objects.
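
As a quick illustration of this division of responsibility (a minimal sketch, not code from the sample; error handling is abbreviated):

#include <d3d11.h>
#include <memory>
#include <vector>
#include <wrl/client.h>

void LifetimeExample()
{
    // COM object: ComPtr calls AddRef and Release for you.
    Microsoft::WRL::ComPtr<ID3D11Device> device;
    if (FAILED(D3D11CreateDevice(
            nullptr, D3D_DRIVER_TYPE_HARDWARE, nullptr, 0,
            nullptr, 0, D3D11_SDK_VERSION,
            &device, nullptr, nullptr)))
    {
        return;
    }

    // Any other heap-allocated C++ object: std::unique_ptr deletes it
    // automatically (use std::shared_ptr for shared ownership).
    auto vertices = std::make_unique<std::vector<float>>();
}   // Both device and vertices are released here, at the end of the scope.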
For more info about the language extensions that are available to a C++ UWP app, see Visual C++ Language
Reference (C++/CX).
Error handling
Marble Maze uses exception handling as the primary way to deal with unexpected errors. Although game code
traditionally uses logging or error codes, such as HRESULT values, to indicate errors, exception handling has two
main advantages. First, it can make the code easier to read and maintain. From a code perspective, exception
handling is a more efficient way to propagate an error to a routine that can handle that error. The use of error
codes typically requires each function to explicitly propagate errors. A second advantage is that you can configure
the Visual Studio debugger to break when an exception occurs so that you can stop immediately at the location and
context of the error. The Windows Runtime also uses exception handling extensively. Therefore, by using exception
handling in your code, you can combine all error handling into one model.
We recommend that you use the following conventions in your error handling model:
Use exceptions to communicate unexpected errors.
Do not use exceptions to control the flow of code.
Catch only the exceptions that you can safely handle and recover from. Otherwise, do not catch the exception
and allow the app to terminate.
When you call a DirectX routine that returns HRESULT, use the DX::ThrowIfFailed function. This function is
defined in DirectXSample.h. ThrowIfFailed throws an exception if the provided HRESULT is an error code.
For example, E_POINTER causes ThrowIfFailed to throw Platform::NullReferenceException.
When you use ThrowIfFailed, put the DirectX call on a separate line to help improve code readability, as
shown in the following example.

// Identify the physical adapter (GPU or card) this device is running on.
ComPtr<IDXGIAdapter> dxgiAdapter;
DX::ThrowIfFailed(
dxgiDevice->GetAdapter(&dxgiAdapter)
);
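
For reference, a ThrowIfFailed-style helper amounts to little more than the following (a sketch; the actual helper in DirectXSample.h may differ in detail):

namespace DX
{
    inline void ThrowIfFailed(HRESULT hr)
    {
        if (FAILED(hr))
        {
            // Map the failure HRESULT to the corresponding Windows Runtime
            // exception; for example, E_POINTER produces
            // Platform::NullReferenceException.
            throw Platform::Exception::CreateException(hr);
        }
    }
}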

Although we recommend that you avoid the use of HRESULT values for unexpected errors, it is more important
to avoid the use of exception handling to control the flow of code. Therefore, prefer an HRESULT return value in
situations where the result is expected and is used to control the flow of code.
SAL annotations
Use SAL annotations together with code analysis tools to help discover errors in your app.
By using Microsoft source-code annotation language (SAL), you can annotate, or describe, how a function uses its
parameters. SAL annotations also describe return values. SAL annotations work with the C/C++ Code Analysis tool
to discover possible defects in C and C++ source code. Common coding errors reported by the tool include buffer
overruns, uninitialized memory, null pointer dereferences, and memory and resource leaks.
Consider the BasicLoader::LoadMesh method, which is declared in BasicLoader.h. This method uses _In_ to
specify that filename is an input parameter (and therefore will only be read from), _Out_ to specify that vertexBuffer
and indexBuffer are output parameters (and therefore will only be written to), and _Out_opt_ to specify that
vertexCount and indexCount are optional output parameters (and might be written to). Because vertexCount and
indexCount are optional output parameters, they are allowed to be nullptr. The C/C++ Code Analysis tool
examines calls to this method to ensure that the parameters it passes meet these criteria.
void LoadMesh(
_In_ Platform::String^ filename,
_Out_ ID3D11Buffer** vertexBuffer,
_Out_ ID3D11Buffer** indexBuffer,
_Out_opt_ uint32* vertexCount,
_Out_opt_ uint32* indexCount
);
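
A call that satisfies these annotations might look like the following (illustrative only; loader is assumed to be a BasicLoader^, and the file name is hypothetical):

Microsoft::WRL::ComPtr<ID3D11Buffer> vertexBuffer;
Microsoft::WRL::ComPtr<ID3D11Buffer> indexBuffer;
uint32 vertexCount = 0;
uint32 indexCount = 0;

loader->LoadMesh(
    L"example.sdkmesh",   // hypothetical file name
    &vertexBuffer,
    &indexBuffer,
    &vertexCount,         // optional (_Out_opt_); nullptr is also allowed
    &indexCount
    );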

To perform code analysis on your app, on the menu bar, choose Build, Run Code Analysis on Solution. For more
info about code analysis, see Analyzing C/C++ Code Quality by Using Code Analysis.
The complete list of available annotations is defined in sal.h. For more info, see SAL Annotations.

Next steps
Read Marble Maze application structure for information about how the Marble Maze application code is structured
and how the structure of a DirectX UWP app differs from that of a traditional desktop application.

Related topics
Marble Maze application structure
Developing Marble Maze, a UWP game in C++ and DirectX
Marble Maze application structure
3/6/2017 15 min to read Edit on GitHub

[ Updated for UWP apps on Windows 10. For Windows 8.x articles, see the archive ]
The structure of a DirectX Universal Windows Platform (UWP) app differs from that of a traditional desktop
application. Instead of working with handle types such as HWND and functions such as CreateWindow, the
Windows Runtime provides interfaces such as Windows::UI::Core::ICoreWindow so that you can develop UWP
apps in a more modern, object-oriented manner. This section of the documentation shows how the Marble Maze
application code is structured.

Note The sample code that corresponds to this document is found in the DirectX Marble Maze game sample.

Here are some of the key points that this document discusses for when you structure your game code:
In the initialization phase, set up runtime and library components that your game uses. Also load game-specific
resources.
UWP apps must start processing events within 5 seconds of launch. Therefore, load only essential resources
when you load your app. Games should load large resources in the background and display a progress screen.
In the game loop, respond to Windows events, read user input, update scene objects, and render the scene.
Use event handlers to respond to window events. (These replace the window messages from desktop Windows
applications.)
Use a state machine to control the flow and order of the game logic.

File organization
Some of the components in Marble Maze can be reused with any game with little or no modification. For your
own game, you can adapt the organization and ideas that these files provide. The following list briefly describes
the important source code files.

Audio.h, Audio.cpp: Defines the Audio class, which manages audio resources.
BasicLoader.h, BasicLoader.cpp: Defines the BasicLoader class, which provides utility methods that help you load textures, meshes, and shaders.
BasicMath.h: Defines structures and functions that help you work with vector and matrix data and computations. Many of these functions are compatible with HLSL shader types.
BasicReaderWriter.h, BasicReaderWriter.cpp: Defines the BasicReaderWriter class, which uses the Windows Runtime to read and write file data in a UWP app.
BasicShapes.h, BasicShapes.cpp: Defines the BasicShapes class, which provides utility methods for creating basic shapes such as cubes and spheres. (These files are not used by the Marble Maze implementation.)
BasicTimer.h, BasicTimer.cpp: Defines the BasicTimer class, which provides an easy way to get total and elapsed times.
Camera.h, Camera.cpp: Defines the Camera class, which provides the position and orientation of a camera.
Collision.h, Collision.cpp: Manages collision info between the marble and other objects, such as the maze.
DDSTextureLoader.h, DDSTextureLoader.cpp: Defines the CreateDDSTextureFromMemory function, which loads textures that are in .dds format from a memory buffer.
DirectXApp.h, DirectXApp.cpp: Defines the DirectXApp and DirectXAppSource classes, which encapsulate the view (window, thread, and events) of the app.
DirectXBase.h, DirectXBase.cpp: Defines the DirectXBase class, which provides infrastructure that is common to many DirectX UWP apps.
DirectXSample.h: Defines utility functions that can be used by DirectX UWP apps.
LoadScreen.h, LoadScreen.cpp: Defines the LoadScreen class, which displays a loading screen during app initialization.
MarbleMaze.h, MarbleMaze.cpp: Defines the MarbleMaze class, which manages game-specific resources and defines much of the game logic.
MediaStreamer.h, MediaStreamer.cpp: Defines the MediaStreamer class, which uses Media Foundation to help the game manage audio resources.
PersistentState.h, PersistentState.cpp: Defines the PersistentState class, which reads and writes primitive data types from and to a backing store.
Physics.h, Physics.cpp: Defines the Physics class, which implements the physics simulation between the marble and the maze.
Primitives.h: Defines geometric types that are used by the game.
SampleOverlay.h, SampleOverlay.cpp: Defines the SampleOverlay class, which provides common 2-D and user-interface data and operations.
SDKMesh.h, SDKMesh.cpp: Defines the SDKMesh class, which loads and renders meshes that are in SDK Mesh (.sdkmesh) format.
UserInterface.h, UserInterface.cpp: Defines functionality that's related to the user interface, such as the menu system and the high score table.

Design-time versus run-time resource formats


When you can, use run-time formats instead of design-time formats to more efficiently load game resources.
A design-time format is the format you use when you design your resource. Typically, 3-D designers work with
design-time formats. Some design-time formats are also text-based so that you can modify them in any text-
based editor. Design-time formats can be verbose and contain more information than your game requires. A run-
time format is the binary format that is read by your game. Run-time formats are typically more compact and
more efficient to load than the corresponding design-time formats. This is why the majority of games use run-
time assets at run time.
Although your game can directly read a design-time format, there are several benefits to using a separate run-
time format. Because run-time formats are often more compact, they require less disk space and require less time
to transfer over a network. Also, run-time formats are often represented as memory-mapped data structures.
Therefore, they can be loaded into memory much faster than, for example, an XML-based text file. Finally, because
separate run-time formats are typically binary-encoded, they are more difficult for the end-user to modify.
HLSL shaders are one example of resources that use different design-time and run-time formats. Marble Maze
uses .hlsl as the design-time format, and .cso as the run-time format. A .hlsl file holds source code for the shader; a
.cso file holds the corresponding shader byte code. When you convert .hlsl files offline and provide .cso files with
your game, you avoid the need to convert HLSL source files to byte code when your game loads.
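
At run time, loading the precompiled byte code is then a plain file read. A minimal sketch, assuming the BasicReaderWriter class described in the file table above (the file name is hypothetical):

// Read compiled shader byte code (.cso) that was produced at build time.
BasicReaderWriter^ reader = ref new BasicReaderWriter();
Platform::Array<byte>^ shaderByteCode = reader->ReadData(L"example.cso");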
For instructional reasons, the Marble Maze project includes both the design-time format and the run-time format
for many resources, but you only have to maintain the design-time formats in the source project for your own
game because you can convert them to run-time formats when you need them. This documentation shows how to
convert the design-time formats to the run-time formats.

Application life cycle


Marble Maze follows the life cycle of a typical UWP app. For more info about the life cycle of a UWP app, see App
lifecycle.
When a UWP game initializes, it typically initializes runtime components such as Direct3D, Direct2D, and any input,
audio, or physics libraries that it uses. It also loads game-specific resources that are required before the game
begins. This initialization occurs one time during a game session.
After initialization, games typically run the game loop. In this loop, games typically perform four actions: process
Windows events, collect input, update scene objects, and render the scene. When the game updates the scene, it
can apply the current input state to the scene objects and simulate physical events, such as object collisions. The
game can also perform other activities such as playing sound effects or sending data over the network. When the
game renders the scene, it captures the current state of the scene and draws it to the display device. The following
sections describe these activities in greater detail.
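
In skeletal form, one pass through such a loop looks like the following (a schematic sketch; the class and method names are placeholders rather than types from the Marble Maze source):

class GameSketch
{
public:
    void Tick()
    {
        // 1. Process pending window events without blocking.
        m_window->Dispatcher->ProcessEvents(
            Windows::UI::Core::CoreProcessEventsOption::ProcessAllIfPresent);

        ReadInput();      // 2. Poll touch, accelerometer, and controller state.
        UpdateScene();    // 3. Apply input, run physics, detect collisions.
        RenderScene();    // 4. Draw and present the current scene state.
    }

private:
    void ReadInput()   { /* elided */ }
    void UpdateScene() { /* elided */ }
    void RenderScene() { /* elided */ }

    Windows::UI::Core::CoreWindow^ m_window;
};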

Adding to the template


The DirectX 11 App (Universal Windows) template creates a core window that you can render to with Direct3D.
The template also includes the DeviceResources class that creates all of the Direct3D device resources needed for
rendering 3D content in a UWP app. The AppMain class creates the MarbleMaze class object, starts the loading
of resources, loops to update the timer, and calls the MarbleMaze render method each frame. The
CreateWindowSizeDependentResources, Update, and Render methods for this class call the corresponding
methods in the MarbleMaze class. The following example shows where the AppMain constructor creates the
MarbleMaze class object. The device resources class is passed to the class so it can use the Direct3D objects for
rendering.

m_marbleMaze = std::unique_ptr<MarbleMaze>(new MarbleMaze(m_deviceResources));
m_marbleMaze->CreateWindowSizeDependentResources();

The AppMain class also starts loading the deferred resources for the game. See the next section for more detail.
The DirectXPage constructor sets up the event handlers, creates the DeviceResources class, and creates the
AppMain class.
When the handlers for these events are called, they pass the input to the MarbleMaze class.
Loading game assets in the background
To ensure that your game can respond to window events within 5 seconds after it is launched, we recommend that
you load your game assets asynchronously, or in the background. As assets load in the background, your game
can respond to window events.

Note You can also display the main menu when it is ready, and allow the remaining assets to continue loading
in the background. If the user selects an option from the menu before all resources are loaded, you can
indicate that scene resources are continuing to load by displaying a progress bar, for example.

Even if your game contains relatively few game assets, it is good practice to load them asynchronously for two
reasons. One reason is that it is difficult to guarantee that all of your resources will load quickly on all devices and
all configurations. Also, by incorporating asynchronous loading early, your code is ready to scale as you add
functionality.
Asynchronous asset loading begins with the AppMain::Load method. This method uses the task Class
(Concurrency Runtime) class to load game assets in the background.

task<void>([=]()
{
    m_marbleMaze->LoadDeferredResources();
});

The MarbleMaze class defines the m_deferredResourcesReady flag to indicate that asynchronous loading is
complete. The MarbleMaze::LoadDeferredResources method loads the game resources and then sets this flag.
The update (MarbleMaze::Update) and render (MarbleMaze::Render) phases of the app check this flag. When
this flag is set, the game continues as normal. If the flag is not yet set, the game shows the loading screen.
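As a minimal sketch of that check (the body here is a placeholder, not the sample's actual Update method):

void MarbleMaze::Update(float timeTotal, float timeDelta)
{
    if (!m_deferredResourcesReady)
    {
        // Assets are still loading; show the loading screen instead.
        return;
    }
    // ... normal per-frame game logic ...
}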
For more information about asynchronous programming for UWP apps, see Asynchronous programming in C++.

Tip If you're writing game code that is part of a Windows Runtime C++ Library (in other words, a DLL),
consider whether to read Creating Asynchronous Operations in C++ for Windows Store Apps to learn how to
create asynchronous operations that can be consumed by apps and other libraries.

The game loop


The DirectXPage::OnRendering method runs the main game loop. This method is called every frame.
To help separate view and window code from game-specific code, we implemented the DirectXApp::Run method
to forward update and render calls to the MarbleMaze object. The DirectXPage::OnRendering method also
defines the game timer, which is used for animation and physics simulation.
The following example shows the DirectXPage::OnRendering method, which includes the main game loop. The
game loop updates the total time and frame time variables, and then updates and renders the scene. It also
makes sure that content is rendered only when the window is visible.
void DirectXPage::OnRendering(Object^ sender, Object^ args)
{
    if (m_windowVisible)
    {
        m_main->Update();

        if (m_main->Render())
        {
            m_deviceResources->Present();
        }
    }
}

The state machine


Games typically contain a state machine (also known as a finite state machine, or FSM) to control the flow and
order of the game logic. A state machine contains a given number of states and the ability to transition among
them. A state machine typically starts from an initial state, transitions to one or more intermediate states, and
possibly ends at a terminal state.
A game loop often uses a state machine so that it can perform the logic that is specific to the current game state.
Marble Maze defines the GameState enumeration, which defines each possible state of the game.

enum class GameState
{
    Initial,
    MainMenu,
    HighScoreDisplay,
    PreGameCountdown,
    InGameActive,
    InGamePaused,
    PostGameResults,
};

The MainMenu state, for example, defines that the main menu appears, and that the game is not active.
Conversely, the InGameActive state defines that the game is active, and that the menu does not appear. The
MarbleMaze class defines the m_gameState member variable to hold the active game state.
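A centralized transition helper keeps state changes in one place. The following is one possible shape for such a helper; the body is an assumption, and the sample's actual SetGameState method lives in the source code.

void MarbleMaze::SetGameState(GameState nextState)
{
    // Perform any exit and entry work for the transition here (for example,
    // showing or hiding UI elements), then record the new state for the
    // Update and Render methods to act on.
    m_gameState = nextState;
}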
The MarbleMaze::Update and MarbleMaze::Render methods use the switch statement to perform logic for the
current state. The following example shows what this switch statement might look like for the
MarbleMaze::Update method (details are removed to illustrate the structure).
switch (m_gameState)
{
case GameState::MainMenu:
    // Do something with the main menu.
    break;

case GameState::HighScoreDisplay:
    // Do something with the high-score table.
    break;

case GameState::PostGameResults:
    // Do something with the game results.
    break;

case GameState::InGamePaused:
    // Handle the paused state.
    break;
}

When game logic or rendering depends on a specific game state, we emphasize it in this documentation.

Handling app and window events


The Windows Runtime provides an object-oriented event-handling system so that you can more easily manage
Windows messages. To consume an event in an application, you must provide an event handler, or event-handling
method, that responds to the event. You must also register the event handler with the event source. This process is
often referred to as event wiring.
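As a minimal sketch of event wiring in C++/CX (the handler name is an assumption; Marble Maze performs its registration in the DirectXPage constructor):

// Here, window is the app's CoreWindow.
window->PointerPressed +=
    ref new TypedEventHandler<CoreWindow^, PointerEventArgs^>(
        this, &DirectXPage::OnPointerPressed);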
Supporting suspend, resume, and restart
Marble Maze is suspended when the user switches away from it or when Windows enters a low power state. The
game is resumed when the user moves it to the foreground or when Windows comes out of a low power state.
Generally, you don't close apps. Windows can terminate the app when it's in the suspended state and Windows
requires the resources, such as memory, that the app is using. Windows notifies an app when it is about to be
suspended or resumed, but it doesn't notify the app when it's about to be terminated. Therefore, your app must be
able to save, at the point when Windows notifies your app that it is about to be suspended, any data that would
be required to restore the current user state when the app is restarted. If your app has significant user state that is
expensive to save, you may also need to save state regularly, even before your app receives the suspend
notification. Marble Maze responds to suspend and resume notifications for two reasons:
1. When the app is suspended, the game saves the current game state and pauses audio playback. When the app
is resumed, the game resumes audio playback.
2. When the app is closed and later restarted, the game resumes from its previous state.
Marble Maze performs the following tasks to support suspend and resume:
It saves its state to persistent storage at key points in the game, such as when the user reaches a checkpoint.
It responds to suspend notifications by saving its state to persistent storage.
It responds to resume notifications by loading its state from persistent storage. It also loads the previous state
during startup.
To support suspend and resume, Marble Maze defines the PersistentState class. (See PersistentState.h and
PersistentState.cpp). This class uses the Windows::Foundation::Collections::IPropertySet interface to read and
write properties. The PersistentState class provides methods that read and write primitive data types (such as
bool, int, float, XMFLOAT3, and Platform::String), from and to a backing store.
ref class PersistentState
{
public:
    void Initialize(
        _In_ Windows::Foundation::Collections::IPropertySet^ settingsValues,
        _In_ Platform::String^ key
        );

    void SaveBool(Platform::String^ key, bool value);
    void SaveInt32(Platform::String^ key, int value);
    void SaveSingle(Platform::String^ key, float value);
    void SaveXMFLOAT3(Platform::String^ key, DirectX::XMFLOAT3 value);
    void SaveString(Platform::String^ key, Platform::String^ string);

    bool LoadBool(Platform::String^ key, bool defaultValue);
    int LoadInt32(Platform::String^ key, int defaultValue);
    float LoadSingle(Platform::String^ key, float defaultValue);
    DirectX::XMFLOAT3 LoadXMFLOAT3(Platform::String^ key, DirectX::XMFLOAT3 defaultValue);
    Platform::String^ LoadString(Platform::String^ key, Platform::String^ defaultValue);

private:
    Platform::String^ m_keyName;
    Windows::Foundation::Collections::IPropertySet^ m_settingsValues;
};

The MarbleMaze class holds a PersistentState object. The MarbleMaze constructor initializes this object and
provides the local application data store as the backing data store.

m_persistentState = ref new PersistentState();
m_persistentState->Initialize(ApplicationData::Current->LocalSettings->Values, "MarbleMaze");
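A single value can then be saved and restored with a matching pair of calls. (This usage is a sketch; the key that is passed to Initialize presumably acts as a prefix, so the calls below would read and write a value named "MarbleMaze:Checkpoint".)

m_persistentState->SaveInt32(":Checkpoint", 3);
int checkpoint = m_persistentState->LoadInt32(":Checkpoint", 0);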

Marble Maze saves its state when the marble passes over a checkpoint or the goal (in the MarbleMaze::Update
method), and when the window loses focus (in the MarbleMaze::OnFocusChange method). If your game holds a
large amount of state data, we recommend that you occasionally save state to persistent storage in a similar
manner because you only have a few seconds to respond to the suspend notification. Therefore, when your app
receives a suspend notification, it only has to save the state data that has changed.
To respond to suspend and resume notifications, the DirectXPage class defines the SaveInternalState and
LoadInternalState methods that are called on suspend and resume. The MarbleMaze::OnSuspending method
handles the suspend event and the MarbleMaze::OnResuming method handles the resume event.
The MarbleMaze::OnSuspending method saves game state and suspends audio.

void MarbleMaze::OnSuspending()
{
    SaveState();
    m_audio.SuspendAudio();
}

The MarbleMaze::SaveState method saves game state values such as the current position and velocity of the
marble, the most recent checkpoint, and the high-score table.
void MarbleMaze::SaveState()
{
    m_persistentState->SaveXMFLOAT3(":Position", m_physics.GetPosition());
    m_persistentState->SaveXMFLOAT3(":Velocity", m_physics.GetVelocity());
    m_persistentState->SaveSingle(":ElapsedTime", m_inGameStopwatchTimer.GetElapsedTime());

    m_persistentState->SaveInt32(":GameState", static_cast<int>(m_gameState));
    m_persistentState->SaveInt32(":Checkpoint", static_cast<int>(m_currentCheckpoint));

    int i = 0;
    HighScoreEntries entries = m_highScoreTable.GetEntries();
    const int bufferLength = 16;
    char16 str[bufferLength];

    m_persistentState->SaveInt32(":ScoreCount", static_cast<int>(entries.size()));
    for (auto iter = entries.begin(); iter != entries.end(); ++iter)
    {
        int len = swprintf_s(str, bufferLength, L"%d", i++);
        Platform::String^ string = ref new Platform::String(str, len);

        m_persistentState->SaveSingle(Platform::String::Concat(":ScoreTime", string), iter->elapsedTime);
        m_persistentState->SaveString(Platform::String::Concat(":ScoreTag", string), iter->tag);
    }
}

When the game resumes, it only has to resume audio. It doesn't have to load state from persistent storage
because the state is already loaded in memory.
How the game suspends and resumes audio is explained in the document Adding audio to the Marble Maze
sample.
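The resume path therefore mirrors MarbleMaze::OnSuspending. The following sketch is based on the description above; the audio method name (ResumeAudio) is an assumption.

void MarbleMaze::OnResuming()
{
    m_audio.ResumeAudio();
}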
To support restart, the MarbleMaze::Initialize method, which is called during startup, calls the
MarbleMaze::LoadState method. The MarbleMaze::LoadState method reads and applies the state to the game
objects. This method also sets the current game state to paused if the game was paused or active when it was
suspended. We pause the game so that the user is not surprised by unexpected activity. It also moves to the main
menu if the game was not in a gameplay state when it was suspended.
void MarbleMaze::LoadState()
{
    XMFLOAT3 position = m_persistentState->LoadXMFLOAT3(":Position", m_physics.GetPosition());
    XMFLOAT3 velocity = m_persistentState->LoadXMFLOAT3(":Velocity", m_physics.GetVelocity());
    float elapsedTime = m_persistentState->LoadSingle(":ElapsedTime", 0.0f);

    int gameState = m_persistentState->LoadInt32(":GameState", static_cast<int>(m_gameState));
    int currentCheckpoint = m_persistentState->LoadInt32(":Checkpoint", static_cast<int>(m_currentCheckpoint));

    switch (static_cast<GameState>(gameState))
    {
    case GameState::Initial:
        break;

    case GameState::MainMenu:
    case GameState::HighScoreDisplay:
    case GameState::PreGameCountdown:
    case GameState::PostGameResults:
        SetGameState(GameState::MainMenu);
        break;

    case GameState::InGameActive:
    case GameState::InGamePaused:
        m_inGameStopwatchTimer.SetVisible(true);
        m_inGameStopwatchTimer.SetElapsedTime(elapsedTime);
        m_physics.SetPosition(position);
        m_physics.SetVelocity(velocity);
        m_currentCheckpoint = currentCheckpoint;
        SetGameState(GameState::InGamePaused);
        break;
    }

    int count = m_persistentState->LoadInt32(":ScoreCount", 0);

    const int bufferLength = 16;
    char16 str[bufferLength];

    for (int i = 0; i < count; i++)
    {
        HighScoreEntry entry;
        int len = swprintf_s(str, bufferLength, L"%d", i);
        Platform::String^ string = ref new Platform::String(str, len);

        entry.elapsedTime = m_persistentState->LoadSingle(Platform::String::Concat(":ScoreTime", string), 0.0f);
        entry.tag = m_persistentState->LoadString(Platform::String::Concat(":ScoreTag", string), L"");
        m_highScoreTable.AddScoreToTable(entry);
    }
}

Important Marble Maze doesn't distinguish between cold starting (that is, starting for the first time without a
prior suspend event) and resuming from a suspended state. This is the recommended design for all UWP apps.

For more examples that demonstrate how to store and retrieve settings and files from the local application data
store, see Quickstart: Local application data. For more info about application data, see Store and retrieve settings
and other app data.

Next steps
Read Adding visual content to the Marble Maze sample for information about some of the key practices to keep in
mind when you work with visual resources.

Related topics
Adding visual content to the Marble Maze sample
Marble Maze sample fundamentals
Developing Marble Maze, a UWP game in C++ and DirectX
Adding visual content to the Marble Maze sample

[ Updated for UWP apps on Windows 10. For Windows 8.x articles, see the archive ]
This document describes how the Marble Maze game uses Direct3D and Direct2D in the Universal Windows
Platform (UWP) app environment so that you can learn the patterns and adapt them when you work with your
own game content. To learn how visual game components fit in the overall application structure of Marble Maze,
see Marble Maze application structure.
We followed these basic steps as we developed the visual aspects of Marble Maze:
1. Create a basic framework that initializes the Direct3D and Direct2D environments.
2. Use image and model editing programs to design the 2-D and 3-D assets that appear in the game.
3. Ensure that 2-D and 3-D assets properly load and appear in the game.
4. Integrate vertex and pixel shaders that enhance the visual quality of the game assets.
5. Integrate game logic, such as animation and user input.
We also focused first on adding 3-D assets and then on 2-D assets. For example, we focused on core game logic
before we added the menu system and timer.
We also needed to iterate through some of these steps multiple times during the development process. For
example, as we made changes to the mesh and marble models, we also had to change some of the shader code
that supports those models.

Note The sample code that corresponds to this document is found in the DirectX Marble Maze game sample.

Here are some of the key points that this document discusses for when you work with DirectX and visual game
content, namely, when you initialize the DirectX graphics libraries, load scene resources, and update and render the
scene:
Adding game content typically involves many steps. These steps also often require iteration. Game developers
often focus first on adding 3-D game content and then on adding 2-D content.
Reach more customers and give them all a great experience by supporting the greatest range of graphics
hardware possible.
Cleanly separate design-time and run-time formats. Structure your design-time assets to maximize flexibility
and enable rapid iterations on content. Format and compress your assets to load and render as efficiently as
possible at run time.
You create the Direct3D and Direct2D devices in a UWP app much like you do in a classic Windows desktop
app. One important difference is how the swap chain is associated with the output window.
When you design your game, ensure that the mesh format that you choose supports your key scenarios. For
example, if your game requires collision, make sure that you can obtain collision data from your meshes.
Separate game logic from rendering logic by first updating all scene objects before you render them.
You typically draw your 3-D scene objects, and then any 2-D objects that appear in front of the scene.
Synchronize drawing to the vertical blank to ensure that your game does not spend time drawing frames that
will never actually be shown on the display.

Getting started with DirectX graphics


When we planned the Marble Maze Universal Windows Platform (UWP) game, we chose C++ and Direct3D 11.1
because they are the best choices for creating 3-D games that require maximum control over rendering and high
performance. DirectX 11.1 supports hardware from DirectX 9 to DirectX 11, and therefore can help you reach more
customers more efficiently because you don't have to rewrite code for each of the earlier DirectX versions.
Marble Maze uses Direct3D 11.1 to render the 3-D game assets, namely the marble and the maze. Marble Maze
also uses Direct2D, DirectWrite, and Windows Imaging Component (WIC) to draw the 2-D game assets, such as the
menus and the timer. Finally, Marble Maze uses XAML to provide an app bar and allows you to add XAML controls.
Game development requires planning. If you are new to DirectX graphics, we recommend that you read Creating a
DirectX game to familiarize yourself with the basic concepts of creating a UWP DirectX game. As you read this
document and work through the Marble Maze source code, you can refer to the following resources for more in-
depth information about DirectX graphics.
Direct3D 11 Graphics Describes Direct3D 11, a powerful, hardware-accelerated 3-D graphics API for rendering
3-D geometry on the Windows platform.
Direct2D Describes Direct2D, a hardware-accelerated, 2-D graphics API that provides high performance and
high-quality rendering for 2-D geometry, bitmaps, and text.
DirectWrite Describes DirectWrite, which supports high-quality text rendering.
Windows Imaging Component Describes WIC, an extensible platform that provides low-level API for digital
images.
Feature levels
Direct3D 11 introduces a paradigm named feature levels. A feature level is a well-defined set of GPU functionality.
Use feature levels to target your game to run on earlier versions of Direct3D hardware. Marble Maze supports
feature level 9.1 because it requires no advanced features from the higher levels. We recommend that you support
the greatest range of hardware possible and scale your game content so that your customers that have either high
or low-end computers all have a great experience. For more information about feature levels, see Direct3D 11 on
Downlevel Hardware.
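If your game does use features from higher levels, branch on the level that the device actually provides. The following is a sketch of that pattern; m_featureLevel is the value that D3D11CreateDevice returns later in this document, and the branch bodies are placeholders.

if (m_featureLevel >= D3D_FEATURE_LEVEL_10_0)
{
    // Enable effects that require shader model 4.0 or later.
}
else
{
    // Fall back to feature level 9 techniques.
}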

Initializing Direct3D and Direct2D


A device represents the display adapter. You create the Direct3D and Direct2D devices in a UWP app much like you
do in a classic Windows desktop app. The main difference is how you connect the Direct3D swap chain to the
windowing system.
The DirectX 11 and XAML App (Universal Windows) template factors out some generic operating system and 3-D rendering
functions from the game-specific functions. The DeviceResources class is a foundation for managing Direct3D
and Direct2D. This class handles general infrastructure, and not game-specific assets. Marble Maze defines the
MarbleMaze class to handle game-specific assets, which has a reference to a DeviceResources object to give it
access to Direct3D and Direct2D.
During initialization, the DeviceResources::Initialize method creates device-independent resources and the
Direct3D and Direct2D devices.

// Initialize the Direct3D resources required to run.
void DeviceResources::Initialize(CoreWindow^ window, float dpi)
{
    m_window = window;

    CreateDeviceIndependentResources();
    CreateDeviceResources();
    CreateWindowSizeDependentResources();
    SetDpi(dpi);
}

The DeviceResources class separates this functionality so that it can more easily respond when the environment
changes. For example, it calls the CreateWindowSizeDependentResources method when the window size
changes.
Initializing the Direct2D, DirectWrite, and WIC factories
The DeviceResources::CreateDeviceIndependentResources method creates the factories for Direct2D,
DirectWrite, and WIC. In DirectX graphics, factories are the starting points for creating graphics resources. Marble
Maze specifies D2D1_FACTORY_TYPE_SINGLE_THREADED because it performs all drawing on the main thread.

// These are the resources required independent of hardware.
void DeviceResources::CreateDeviceIndependentResources()
{
    D2D1_FACTORY_OPTIONS options;
    ZeroMemory(&options, sizeof(D2D1_FACTORY_OPTIONS));

#if defined(_DEBUG)
    // If the project is in a debug build, enable Direct2D debugging via SDK Layers.
    options.debugLevel = D2D1_DEBUG_LEVEL_INFORMATION;
#endif

    DX::ThrowIfFailed(
        D2D1CreateFactory(
            D2D1_FACTORY_TYPE_SINGLE_THREADED,
            __uuidof(ID2D1Factory1),
            &options,
            &m_d2dFactory
            )
        );

    DX::ThrowIfFailed(
        DWriteCreateFactory(
            DWRITE_FACTORY_TYPE_SHARED,
            __uuidof(IDWriteFactory),
            &m_dwriteFactory
            )
        );

    DX::ThrowIfFailed(
        CoCreateInstance(
            CLSID_WICImagingFactory,
            nullptr,
            CLSCTX_INPROC_SERVER,
            IID_PPV_ARGS(&m_wicFactory)
            )
        );
}

Creating the Direct3D and Direct2D devices


The DeviceResources::CreateDeviceResources method calls D3D11CreateDevice to create the device object
that represents the Direct3D display adapter. Because Marble Maze supports feature level 9.1 and above, the
DeviceResources::CreateDeviceResources method specifies levels 9.1 through 11.1 in the array of D3D_FEATURE_LEVEL values.
Direct3D walks the list in order and gives the app the first feature level that is available. Therefore the
D3D_FEATURE_LEVEL array entries are listed from highest to lowest so that the app will get the highest level
feature level available. The DeviceResources::CreateDeviceResources method obtains the Direct3D 11.1 device
by querying the Direct3D 11 device that's returned from D3D11CreateDevice.
// This array defines the set of DirectX hardware feature levels this app will support.
// Note the ordering should be preserved.
// Don't forget to declare your application's minimum required feature level in its
// description. All applications are assumed to support 9.1 unless otherwise stated.
D3D_FEATURE_LEVEL featureLevels[] =
{
    D3D_FEATURE_LEVEL_11_1,
    D3D_FEATURE_LEVEL_11_0,
    D3D_FEATURE_LEVEL_10_1,
    D3D_FEATURE_LEVEL_10_0,
    D3D_FEATURE_LEVEL_9_3,
    D3D_FEATURE_LEVEL_9_2,
    D3D_FEATURE_LEVEL_9_1
};

// Create the DX11 API device object, and get a corresponding context.
ComPtr<ID3D11Device> device;
ComPtr<ID3D11DeviceContext> context;
DX::ThrowIfFailed(
    D3D11CreateDevice(
        nullptr,                  // Specify null to use the default adapter.
        D3D_DRIVER_TYPE_HARDWARE,
        0,                        // Leave as 0 unless it is a software device.
        creationFlags,            // Optionally, set debug and Direct2D compatibility flags.
        featureLevels,            // A list of feature levels that this app can support.
        ARRAYSIZE(featureLevels), // The number of entries in the above list.
        D3D11_SDK_VERSION,        // Always set this to D3D11_SDK_VERSION.
        &device,                  // Returns the Direct3D device created.
        &m_featureLevel,          // Returns the feature level of the device created.
        &context                  // Returns the device immediate context.
        )
    );

// Get the Direct3D 11.1 device by querying the Direct3D 11 device.
DX::ThrowIfFailed(
    device.As(&m_d3dDevice)
    );

The DeviceResources::CreateDeviceResources method then creates the Direct2D device. Direct2D uses
Microsoft DirectX Graphics Infrastructure (DXGI) to interoperate with Direct3D. DXGI enables video memory
surfaces to be shared between graphics runtimes. Marble Maze uses the underlying DXGI device from the Direct3D
device to create the Direct2D device from the Direct2D factory.

// Obtain the underlying DXGI device of the Direct3D 11.1 device.
DX::ThrowIfFailed(
    m_d3dDevice.As(&dxgiDevice)
    );

// Obtain the Direct2D device for 2-D rendering.
DX::ThrowIfFailed(
    m_d2dFactory->CreateDevice(dxgiDevice.Get(), &m_d2dDevice)
    );

// And get its corresponding device context object.
DX::ThrowIfFailed(
    m_d2dDevice->CreateDeviceContext(
        D2D1_DEVICE_CONTEXT_OPTIONS_NONE,
        &m_d2dContext
        )
    );

For more information about DXGI and interoperability between Direct2D and Direct3D, see DXGI Overview and
Direct2D and Direct3D Interoperability Overview.
Associating Direct3D with the view
The DeviceResources::CreateWindowSizeDependentResources method creates the graphics resources that
depend on a given window size such as the swap chain and Direct3D and Direct2D render targets. One important
way that a DirectX UWP app differs from a desktop app is how the swap chain is associated with the output
window. A swap chain is responsible for displaying the buffer to which the device renders on the monitor. The
document Marble Maze application structure describes how the windowing system for a UWP app differs from a
desktop app. Because a UWP app does not work with HWND objects, Marble Maze must use the
IDXGIFactory2::CreateSwapChainForCoreWindow method to associate the device output to the view. The
following example shows the part of the DeviceResources::CreateWindowSizeDependentResources method
that creates the swap chain.

// Obtain the final swap chain for this window from the DXGI factory.
DX::ThrowIfFailed(
    dxgiFactory->CreateSwapChainForCoreWindow(
        m_d3dDevice.Get(),
        reinterpret_cast<IUnknown*>(m_window),
        &swapChainDesc,
        nullptr, // Allow on all displays.
        &m_swapChain
        )
    );

To minimize power consumption, which is important to do on battery-powered devices such as laptops and
tablets, the DeviceResources::CreateWindowSizeDependentResources method calls the
IDXGIDevice1::SetMaximumFrameLatency method to ensure that the game is rendered only after the vertical
blank. Synchronizing with the vertical blank is described in greater detail in the section Presenting the scene in this
document.

// Ensure that DXGI does not queue more than one frame at a time. This both reduces
// latency and ensures that the application will only render after each VSync, minimizing
// power consumption.
DX::ThrowIfFailed(
    dxgiDevice->SetMaximumFrameLatency(1)
    );

The DeviceResources::CreateWindowSizeDependentResources method initializes graphics resources in a way
that works for most games.
Note The term view has a different meaning in the Windows Runtime than it has in Direct3D. In the Windows
Runtime, a view refers to the collection of user interface settings for an app, including the display area and the
input behaviors, plus the thread it uses for processing. You specify the configuration and settings you need
when you create a view. The process of setting up the app view is described in Marble Maze application
structure. In Direct3D, the term view has multiple meanings. First, a resource view defines the subresources
that a resource can access. For example, when a texture object is associated with a shader resource view, that
shader can later access the texture. One advantage of a resource view is that you can interpret data in different
ways at different stages in the rendering pipeline. For more information about resource views, see Texture
Views (Direct3D 10). When used in the context of a view transform or view transform matrix, view refers to the
location and orientation of the camera. A view transform relocates objects in the world around the camera's
position and orientation. For more information about view transforms, see View Transform (Direct3D 9). How
Marble Maze uses resource and matrix views is described in greater detail in this topic.

Loading scene resources


Marble Maze uses the BasicLoader class, which is declared in BasicLoader.h, to load textures and shaders. Marble
Maze uses the SDKMesh class to load the 3-D meshes for the maze and the marble.
To ensure a responsive app, Marble Maze loads scene resources asynchronously, or in the background. As assets
load in the background, your game can respond to window events. This process is explained in greater detail in
Loading game assets in the background in this guide.
Loading the 2-D overlay and user interface
In Marble Maze, the overlay is the image that appears at the top of the screen. The overlay always appears in front
of the scene. In Marble Maze, the overlay contains the Windows logo and the text string "DirectX Marble Maze
game sample". The management of the overlay is performed by the SampleOverlay class, which is defined in
SampleOverlay.h. Although we use the overlay as part of the Direct3D samples, you can adapt this code to display
any image that appears in front of your scene.
One important aspect of the overlay is that, because its contents do not change, the SampleOverlay class draws,
or caches, its contents to an ID2D1Bitmap1 object during initialization. At draw time, the SampleOverlay class
only has to draw the bitmap to the screen. In this way, expensive routines such as text drawing do not have to be
performed for every frame.
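As a minimal sketch of the caching idea (the member names here are assumptions, not the sample's actual implementation): the overlay content is rendered once into an ID2D1Bitmap1 during initialization, so each frame pays only for a cheap bitmap draw.

void SampleOverlay::Render()
{
    m_d2dContext->BeginDraw();

    // The bitmap was filled with the overlay content during initialization.
    m_d2dContext->DrawBitmap(m_overlayBitmap.Get());

    m_d2dContext->EndDraw();
}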
The user interface (UI) consists of 2-D components, such as menus and heads-up displays (HUDs), which appear in
front of your scene. Marble Maze defines the following UI elements:
Menu items that enable the user to start the game or view high scores.
A timer that counts down for three seconds before play begins.
A timer that tracks the elapsed play time.
A table that lists the fastest finish times.
Text that reads "Paused" when the game is paused.
Marble Maze defines game-specific UI elements in UserInterface.h. Marble Maze defines the ElementBase class as
a base type for all UI elements. The ElementBase class defines attributes such as the size, position, alignment, and
visibility of a UI element. It also controls how elements are updated and rendered.
class ElementBase
{
public:
    virtual void Initialize() { }
    virtual void Update(float timeTotal, float timeDelta) { }
    virtual void Render() { }

    void SetAlignment(AlignType horizontal, AlignType vertical);
    virtual void SetContainer(const D2D1_RECT_F& container);
    void SetVisible(bool visible);

    D2D1_RECT_F GetBounds();

    bool IsVisible() const { return m_visible; }

protected:
    ElementBase();

    virtual void CalculateSize() { }

    Alignment m_alignment;
    D2D1_RECT_F m_container;
    D2D1_SIZE_F m_size;
    bool m_visible;
};

By providing a common base class for UI elements, the UserInterface class, which manages the user interface,
need only hold a collection of ElementBase objects, which simplifies UI management and provides a user
interface manager that is reusable. Marble Maze defines types that derive from ElementBase that implement
game-specific behaviors. For example, HighScoreTable defines the behavior for the high score table. For more
info about these types, refer to the source code.

Note Because XAML enables you to more easily create complex user interfaces, like those found in simulation
and strategy games, consider whether to use XAML to define your UI. For info about how to develop a user
interface in XAML in a DirectX UWP game, see Extend the game sample (Windows). This document refers to
the DirectX 3-D shooting game sample.

Loading shaders
Marble Maze uses the BasicLoader::LoadShader method to load a shader from a file.
Shaders are the fundamental unit of GPU programming in games today. Nearly all 3-D graphics processing is
driven through shaders, whether it is model transformation and scene lighting, or more complex geometry
processing, from character skinning to tessellation. For more information about the shader programming model,
see HLSL.
Marble Maze uses vertex and pixel shaders. A vertex shader always operates on one input vertex and produces one
vertex as output. A pixel shader takes numeric values, texture data, interpolated per-vertex values, and other data
to produce a pixel color as output. Because a shader transforms one element at a time, graphics hardware that
provides multiple shader pipelines can process sets of elements in parallel. The number of parallel pipelines that
are available to the GPU can be vastly greater than the number that is available to the CPU. Therefore, even basic
shaders can greatly improve throughput.
The MarbleMaze::LoadDeferredResources method loads one vertex shader and one pixel shader after it loads
the overlay. The design-time versions of these shaders are defined in BasicVertexShader.hlsl and
BasicPixelShader.hlsl, respectively. Marble Maze applies these shaders to both the ball and the maze during the
rendering phase.
The Marble Maze project includes both .hlsl (the design-time format) and .cso (the run-time format) versions of the
shader files. At build time, Visual Studio uses the fxc.exe effect-compiler to compile your .hlsl source file into a .cso
binary shader. For more information about the effect-compiler tool, see Effect-Compiler Tool.
The vertex shader uses the supplied model, view and projection matrices to transform the input geometry. Position
data from the input geometry is transformed and output twice: once in screen space, which is necessary for
rendering, and again in world space to enable the pixel shader to perform lighting calculations. The surface normal
vector is transformed to world space, which is also used by the pixel shader for lighting. The texture coordinates
are passed through unchanged to the pixel shader.

sPSInput main(sVSInput input)
{
    sPSInput output;
    float4 temp = float4(input.pos, 1.0f);
    temp = mul(temp, model);
    output.worldPos = temp.xyz / temp.w;
    temp = mul(temp, view);
    temp = mul(temp, projection);
    output.pos = temp;
    output.tex = input.tex;
    output.norm = mul(float4(input.norm, 0.0f), model).xyz;
    return output;
}

The pixel shader receives the output of the vertex shader as input. This shader performs lighting calculations to
mimic a soft-edged spotlight that hovers over the maze and is aligned with the position of the marble. Lighting is
strongest for surfaces that point directly toward the light. The diffuse component tapers off to zero as the surface
normal becomes perpendicular to the light, and the ambient term diminishes as the normal points away from the
light. Points closer to the marble (and therefore closer to the center of the spotlight) are lit more strongly.
However, lighting is modulated for points underneath the marble to simulate a soft shadow. In a real environment,
an object like the white marble would diffusely reflect the spotlight onto other objects in the scene. This is
approximated for the surfaces that are in view of the bright half of the marble. The additional illumination is
attenuated by the relative angle and distance to the marble. The resulting pixel color is a composition of the
sampled texture with the result of the lighting calculations.
float4 main(sPSInput input) : SV_TARGET
{
    float3 lightDirection = float3(0, 0, -1);
    float3 ambientColor = float3(0.43, 0.31, 0.24);
    float3 lightColor = 1 - ambientColor;
    float spotRadius = 50;

    // Basic ambient (Ka) and diffuse (Kd) lighting from above.
    float3 N = normalize(input.norm);
    float NdotL = dot(N, lightDirection);
    float Ka = saturate(NdotL + 1);
    float Kd = saturate(NdotL);

    // Spotlight.
    float3 vec = input.worldPos - marblePosition;
    float dist2D = sqrt(dot(vec.xy, vec.xy));
    Kd = Kd * saturate(spotRadius / dist2D);

    // Shadowing from ball.
    if (input.worldPos.z > marblePosition.z)
        Kd = Kd * saturate(dist2D / (marbleRadius * 1.5));

    // Diffuse reflection of light off ball.
    float dist3D = sqrt(dot(vec, vec));
    float3 V = normalize(vec);
    Kd += saturate(dot(-V, N)) * saturate(dot(V, lightDirection))
        * saturate(marbleRadius / dist3D);

    // Final composite.
    float4 diffuseTexture = Texture.Sample(Sampler, input.tex);
    float3 color = diffuseTexture.rgb * ((ambientColor * Ka) + (lightColor * Kd));
    return float4(color * lightStrength, diffuseTexture.a);
}

Caution The compiled pixel shader contains 32 arithmetic instructions and 1 texture instruction. This shader
should perform well on desktop computers and higher-end tablets. However, a lower-end computer might not
be able to process this shader and still provide an interactive frame rate. Consider the typical hardware of your
target audience and design your shaders to meet the capabilities of that hardware.

The MarbleMaze::LoadDeferredResources method uses the BasicLoader::LoadShader method to load the
shaders. The following example loads the vertex shader. The run-time format for this shader is
BasicVertexShader.cso. The m_vertexShader member variable is an ID3D11VertexShader object.

loader->LoadShader(
    L"BasicVertexShader.cso",
    layoutDesc,
    ARRAYSIZE(layoutDesc),
    &m_vertexShader,
    &m_inputLayout
    );

The m_inputLayout member variable is an ID3D11InputLayout object. The input-layout object encapsulates the
input state of the input assembler (IA) stage. One job of the IA stage is to make shaders more efficient by using
system-generated values, also known as semantics, to process only those primitives or vertices that have not
already been processed. Use the ID3D11Device::CreateInputLayout method to create an input-layout from an
array of input-element descriptions. The array contains one or more input elements; each input element describes
one vertex-data element from one vertex buffer. The entire set of input-element descriptions describes all of the
vertex-data elements from all of the vertex buffers that will be bound to the IA stage. The following example shows
the layout description that Marble Maze uses. The layout description describes a vertex buffer that contains four
vertex-data elements. The important parts of each entry in the array are the semantic name, data format, and byte
offset. For example, the POSITION element specifies the vertex position in object space. It starts at byte offset 0
and contains three floating-point components (for a total of 12 bytes). The NORMAL element specifies the normal
vector. It starts at byte offset 12 because it appears directly after POSITION in the layout, which requires 12 bytes.
The NORMAL element also contains three 32-bit floating-point components.

D3D11_INPUT_ELEMENT_DESC layoutDesc[] =
{
    { "POSITION", 0, DXGI_FORMAT_R32G32B32_FLOAT, 0,  0, D3D11_INPUT_PER_VERTEX_DATA, 0 },
    { "NORMAL",   0, DXGI_FORMAT_R32G32B32_FLOAT, 0, 12, D3D11_INPUT_PER_VERTEX_DATA, 0 },
    { "TEXCOORD", 0, DXGI_FORMAT_R32G32_FLOAT,    0, 24, D3D11_INPUT_PER_VERTEX_DATA, 0 },
    { "TANGENT",  0, DXGI_FORMAT_R32G32B32_FLOAT, 0, 32, D3D11_INPUT_PER_VERTEX_DATA, 0 },
};
m_vertexStride = 44; // You must set this to match the size of layoutDesc above.

Compare the input layout with the sVSInput structure that is defined by the vertex shader, as shown in the
following example. The sVSInput structure defines the POSITION, NORMAL, and TEXCOORD0 elements. The
DirectX runtime maps each element in the layout to the input structure that is defined by the shader.

struct sVSInput
{
    float3 pos : POSITION;
    float3 norm : NORMAL;
    float2 tex : TEXCOORD0;
};

struct sPSInput
{
    float4 pos : SV_POSITION;
    float3 norm : NORMAL;
    float2 tex : TEXCOORD0;
    float3 worldPos : TEXCOORD1;
};

sPSInput main(sVSInput input)
{
    sPSInput output;
    float4 temp = float4(input.pos, 1.0f);
    temp = mul(temp, model);
    output.worldPos = temp.xyz / temp.w;
    temp = mul(temp, view);
    temp = mul(temp, projection);
    output.pos = temp;
    output.tex = input.tex;
    output.norm = mul(float4(input.norm, 0.0f), model).xyz;
    return output;
}

The document Semantics describes each of the available semantics in greater detail.

Note In a layout, you can specify additional components that a given shader does not use, which enables
multiple shaders to share the same layout. For example, the TANGENT element is not used by the shader. You can use the TANGENT
element if you want to experiment with techniques such as normal mapping. By using normal mapping, also
known as bump mapping, you can create the effect of bumps on the surfaces of objects. For more information
about bump mapping, see Bump Mapping (Direct3D 9).

For more information about the input assembly stage state, see Input-Assembler Stage and Getting Started with
the Input-Assembler Stage.
The process of using the vertex and pixel shaders to render the scene is described in the section Rendering the
scene later in this document.
Creating the constant buffer
A Direct3D buffer groups a collection of data. A constant buffer is a kind of buffer that you can use to pass data to
shaders. Marble Maze uses a constant buffer to hold the model (or world), view, and projection matrices for the
active scene object.
The following example shows how the MarbleMaze::LoadDeferredResources method creates a constant buffer
that will later hold matrix data. The example creates a D3D11_BUFFER_DESC structure that uses the
D3D11_BIND_CONSTANT_BUFFER flag to specify usage as a constant buffer. This example then passes that
structure to the ID3D11Device::CreateBuffer method. The m_constantBuffer variable is an ID3D11Buffer
object.

// Create the constant buffer for updating model and camera data.
D3D11_BUFFER_DESC constantBufferDesc = {0};
constantBufferDesc.ByteWidth = ((sizeof(ConstantBuffer) + 15) / 16) * 16; // Multiple of 16 bytes
constantBufferDesc.Usage = D3D11_USAGE_DEFAULT;
constantBufferDesc.BindFlags = D3D11_BIND_CONSTANT_BUFFER;
constantBufferDesc.CPUAccessFlags = 0;
constantBufferDesc.MiscFlags = 0;
// This will not be used as a structured buffer, so this parameter is ignored.
constantBufferDesc.StructureByteStride = 0;

DX::ThrowIfFailed(
    m_d3dDevice->CreateBuffer(
        &constantBufferDesc,
        nullptr, // Leave the buffer uninitialized.
        &m_constantBuffer
        )
    );

The MarbleMaze::Update method later updates ConstantBuffer objects, one for the maze and one for the
marble. The MarbleMaze::Render method then binds each ConstantBuffer object to the constant buffer before
each object is rendered. The following example shows the ConstantBuffer structure, which is in MarbleMaze.h.

// Describes the constant buffer that draws the meshes.
struct ConstantBuffer
{
    float4x4 model;
    float4x4 view;
    float4x4 projection;

    float3 marblePosition;
    float marbleRadius;
    float lightStrength;
};

To better understand how constant buffers map to shader code, compare the ConstantBuffer structure to the
ConstantBuffer cbuffer that is defined by the vertex shader in BasicVertexShader.hlsl:

cbuffer ConstantBuffer : register(b0)
{
    matrix model;
    matrix view;
    matrix projection;
    float3 marblePosition;
    float marbleRadius;
    float lightStrength;
};

The layout of the ConstantBuffer structure matches the cbuffer object. The cbuffer variable specifies register b0,
which means that the constant buffer data is stored in register 0. The MarbleMaze::Render method specifies
register 0 when it activates the constant buffer. This process is described in greater detail later in this document.
For more information about constant buffers, see Introduction to Buffers in Direct3D 11. For more information
about the register keyword, see register.
Loading meshes
Marble Maze uses SDK-Mesh as the run-time format because this format provides a basic way to load mesh data
for sample applications. For production use, you should use a mesh format that meets the specific requirements of
your game.
The MarbleMaze::LoadDeferredResources method loads mesh data after it loads the vertex and pixel shaders. A
mesh is a collection of vertex data that often includes information such as positions, normal data, colors, materials,
and texture coordinates. Meshes are typically created in 3-D authoring software and maintained in files that are
separate from application code. The marble and the maze are two examples of meshes that the game uses.
Marble Maze uses the SDKMesh class to manage meshes. This class is declared in SDKMesh.h. SDKMesh provides
methods to load, render, and destroy mesh data.

Important Marble Maze uses the SDK-Mesh format and provides the SDKMesh class for illustration only.
Although the SDK-Mesh format is useful for learning, and for creating prototypes, it is a very basic format that
might not meet the requirements of most game development. We recommend that you use a mesh format
that meets the specific requirements of your game.

The following example shows how the MarbleMaze::LoadDeferredResources method uses the
SDKMesh::Create method to load mesh data for the maze and for the ball.

// Load the meshes.
DX::ThrowIfFailed(
    m_mazeMesh.Create(
        m_d3dDevice.Get(),
        L"Media\\Models\\maze1.sdkmesh",
        false
        )
    );

DX::ThrowIfFailed(
    m_marbleMesh.Create(
        m_d3dDevice.Get(),
        L"Media\\Models\\marble2.sdkmesh",
        false
        )
    );

Loading collision data


Although this section does not focus on how Marble Maze implements the physics simulation between the marble
and the maze, note that mesh geometry for the physics system is read when the meshes are loaded.
// Extract mesh geometry for the physics system.
DX::ThrowIfFailed(
    ExtractTrianglesFromMesh(
        m_mazeMesh,
        "Mesh_walls",
        m_collision.m_wallTriList
        )
    );

DX::ThrowIfFailed(
    ExtractTrianglesFromMesh(
        m_mazeMesh,
        "Mesh_Floor",
        m_collision.m_groundTriList
        )
    );

DX::ThrowIfFailed(
    ExtractTrianglesFromMesh(
        m_mazeMesh,
        "Mesh_floorSides",
        m_collision.m_floorTriList
        )
    );

m_physics.SetCollision(&m_collision);
float radius = m_marbleMesh.GetMeshBoundingBoxExtents(0).x / 2;
m_physics.SetRadius(radius);

The way that you load collision data largely depends on the run-time format that you use. For more information
about how Marble Maze loads the collision geometry from an SDK-Mesh file, see the
MarbleMaze::ExtractTrianglesFromMesh method in the source code.
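For orientation, the calls above suggest a helper with roughly the following shape. This declaration is an assumption for illustration only; see the sample source for the real signature, and note that the triangle-list type is a placeholder.

HRESULT ExtractTrianglesFromMesh(
    SDKMesh& mesh,          // Loaded SDK-Mesh to read geometry from.
    const char* meshName,   // Named sub-mesh, such as "Mesh_walls".
    TriangleList& triangles // Receives one entry per triangle.
    );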

Updating game state


Marble Maze separates game logic from rendering logic by first updating all scene objects before rendering them.
The document Marble Maze application structure describes the main game loop. Updating the scene, which is part
of the game loop, happens after Windows events and input are processed and before the scene is rendered. The
MarbleMaze::Update method handles the update of the UI and the game.
Updating the user interface
The MarbleMaze::Update method calls the UserInterface::Update method to update the state of the UI.

UserInterface::GetInstance().Update(timeTotal, timeDelta);

The UserInterface::Update method updates each element in the UI collection.

void UserInterface::Update(float timeTotal, float timeDelta)
{
    for (auto iter = m_elements.begin(); iter != m_elements.end(); ++iter)
    {
        (*iter)->Update(timeTotal, timeDelta);
    }
}

Classes that derive from ElementBase implement the Update method to perform specific behaviors. For
example, the StopwatchTimer::Update method updates the elapsed time by the provided amount and updates
the text that it later displays.
void StopwatchTimer::Update(float timeTotal, float timeDelta)
{
    if (m_active)
    {
        m_elapsedTime += timeDelta;

        WCHAR buffer[16];
        GetFormattedTime(buffer);
        SetText(buffer);
    }

    TextElement::Update(timeTotal, timeDelta);
}

Updating the scene


The MarbleMaze::Update method updates the game based on the current state machine state. When the game is
in the active state, Marble Maze updates the camera to follow the marble, updates the view matrix part of the
constant buffers, and updates the physics simulation.
The following example shows how the MarbleMaze::Update method updates the position of the camera. Marble
Maze uses the m_resetCamera variable to flag that the camera must be reset to be located directly above the
marble. The camera is reset when the game starts or the marble falls through the maze. When the main menu or
high score display screen is active, the camera is set at a constant location. Otherwise, Marble Maze uses the
timeDelta parameter to interpolate the position of the camera between its current and target positions. The target
position is slightly above and in front of the marble. Using the elapsed frame time enables the camera to gradually
follow, or chase, the marble.

static float eyeDistance = 200.0f;
static float3 eyePosition = float3(0, 0, 0);

// Gradually move the camera above the marble.
float3 targetEyePosition = marblePosition - (eyeDistance * float3(g.x, g.y, g.z));
if (m_resetCamera)
{
    eyePosition = targetEyePosition;
    m_resetCamera = false;
}
else
{
    eyePosition = eyePosition + ((targetEyePosition - eyePosition) * min(1, timeDelta * 8));
}

// Look at the marble.
if ((m_gameState == GameState::MainMenu) || (m_gameState == GameState::HighScoreDisplay))
{
    // Override camera position for menus.
    eyePosition = marblePosition + float3(75.0f, -150.0f, -75.0f);
    m_camera->SetViewParameters(eyePosition, marblePosition, float3(0.0f, 0.0f, -1.0f));
}
else
{
    m_camera->SetViewParameters(eyePosition, marblePosition, float3(0.0f, 1.0f, 0.0f));
}

The following example shows how the MarbleMaze::Update method updates the constant buffers for the marble
and the maze. The maze's model, or world, matrix always remains the identity matrix. Except for the main diagonal,
whose elements are all ones, the identity matrix is a square matrix composed of zeros. The marble's model matrix
is based on its position matrix times its rotation matrix. The mul and translation functions are defined in
BasicMath.h.
// Update the model matrices based on the simulation.
m_mazeConstantBufferData.model = identity();
m_marbleConstantBufferData.model = mul(
    translation(marblePosition.x, marblePosition.y, marblePosition.z),
    marbleRotationMatrix
    );

// Update the view matrix based on the camera.
float4x4 view;
m_camera->GetViewMatrix(&view);
m_mazeConstantBufferData.view = view;
m_marbleConstantBufferData.view = view;

For information about how the MarbleMaze::Update method reads user input and simulates the motion of the
marble, see Adding input and interactivity to the Marble Maze sample.

Rendering the scene


When a scene is rendered, these steps are typically included.
1. Set the current render target and depth-stencil buffer.
2. Clear the render and stencil views.
3. Prepare the vertex and pixel shaders for drawing.
4. Render the 3-D objects in the scene.
5. Render any 2-D object that you want to appear in front of the scene.
6. Present the rendered image to the monitor.
The MarbleMaze::Render method binds the render target and depth stencil views, clears those views, draws the
scene, and then draws the overlay.
Preparing the render targets
Before you render your scene, you must set the current render target and depth-stencil buffer. If your scene is not
guaranteed to draw over every pixel on the screen, also clear the render and stencil views. Marble Maze clears the
render and stencil views on every frame to ensure that there are no visible artifacts from the previous frame.
The following example shows how the MarbleMaze::Render method calls the
ID3D11DeviceContext::OMSetRenderTargets method to set the render target and the depth-stencil buffer as
the current ones. The m_renderTargetView member variable, an ID3D11RenderTargetView object, and the
m_depthStencilView member variable, an ID3D11DepthStencilView object, are defined and initialized by the
DirectXBase class.
// Bind the render targets.
m_d3dContext->OMSetRenderTargets(
    1,
    m_renderTargetView.GetAddressOf(),
    m_depthStencilView.Get()
    );

// Clear the render target and depth stencil to default values.
const float clearColor[4] = { 0.0f, 0.0f, 0.0f, 1.0f };

m_d3dContext->ClearRenderTargetView(
    m_renderTargetView.Get(),
    clearColor
    );

m_d3dContext->ClearDepthStencilView(
    m_depthStencilView.Get(),
    D3D11_CLEAR_DEPTH,
    1.0f,
    0
    );

The ID3D11RenderTargetView and ID3D11DepthStencilView interfaces support the texture view mechanism
that is provided by Direct3D 10 and later. For more information about texture views, see Texture Views (Direct3D
10). The OMSetRenderTargets method prepares the output-merger stage of the Direct3D pipeline. For more
information about the output-merger stage, see Output-Merger Stage.
Preparing the vertex and pixel shaders
Before you render the scene objects, perform the following steps to prepare the vertex and pixel shaders for
drawing:
1. Set the shader input layout as the current layout.
2. Set the vertex and pixel shaders as the current shaders.
3. Update any constant buffers with data that you have to pass to the shaders.

Important Marble Maze uses one pair of vertex and pixel shaders for all 3-D objects. If your game uses more
than one pair of shaders, you must perform these steps each time you draw objects that use different shaders.
To reduce the overhead that is associated with changing the shader state, we recommend that you group
render calls for all objects that use the same shaders.
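One way to implement that grouping is to keep a container of objects per shader pair, as in the following sketch. The container and its contents are assumptions, not part of the sample.

for (auto& group : m_objectsByShader)
{
    // Switch shader state once per group rather than once per object.
    m_d3dContext->IASetInputLayout(group.inputLayout.Get());
    m_d3dContext->VSSetShader(group.vertexShader.Get(), nullptr, 0);
    m_d3dContext->PSSetShader(group.pixelShader.Get(), nullptr, 0);

    for (auto& object : group.objects)
    {
        object->Render();
    }
}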

The section Loading shaders in this document describes how the input layout is created when the vertex shader is
created. The following example shows how the MarbleMaze::Render method uses the
ID3D11DeviceContext::IASetInputLayout method to set this layout as the current layout.

m_d3dContext->IASetInputLayout(m_inputLayout.Get());

The following example shows how the MarbleMaze::Render method uses the
ID3D11DeviceContext::VSSetShader and ID3D11DeviceContext::PSSetShader methods to set the vertex and
pixel shaders as the current shaders, respectively.
// Set the vertex shader stage state.
m_d3dContext->VSSetShader(
    m_vertexShader.Get(), // Use this vertex shader.
    nullptr,              // Don't use shader linkage.
    0                     // Don't use shader linkage.
    );

// Set the pixel shader stage state.
m_d3dContext->PSSetShader(
    m_pixelShader.Get(),  // Use this pixel shader.
    nullptr,              // Don't use shader linkage.
    0                     // Don't use shader linkage.
    );

m_d3dContext->PSSetSamplers(
    0,                       // Starting at the first sampler slot
    1,                       // set one sampler binding
    m_sampler.GetAddressOf() // to use this sampler.
    );

After the MarbleMaze::Render sets the shaders and their input layout, it uses the
ID3D11DeviceContext::UpdateSubresource method to update the constant buffer with the model, view, and
projection matrices for the maze. The UpdateSubresource method copies the matrix data from CPU memory to
GPU memory. Recall that the model and view components of the ConstantBuffer structure are updated in the
MarbleMaze::Update method. The MarbleMaze::Render method then calls the
ID3D11DeviceContext::VSSetConstantBuffers and ID3D11DeviceContext::PSSetConstantBuffers methods
to set this constant buffer as the current one.

// Update the constant buffer with the new data.
m_d3dContext->UpdateSubresource(
    m_constantBuffer.Get(),
    0,
    nullptr,
    &m_mazeConstantBufferData,
    0,
    0
    );

m_d3dContext->VSSetConstantBuffers(
    0,                              // Starting at the first constant buffer slot
    1,                              // set one constant buffer binding
    m_constantBuffer.GetAddressOf() // to use this buffer.
    );

m_d3dContext->PSSetConstantBuffers(
    0,                              // Starting at the first constant buffer slot
    1,                              // set one constant buffer binding
    m_constantBuffer.GetAddressOf() // to use this buffer.
    );

The MarbleMaze::Render method performs similar steps to prepare the marble to be rendered.
Rendering the maze and the marble
After you activate the current shaders, you can draw your scene objects. The MarbleMaze::Render method calls
the SDKMesh::Render method to render the maze mesh.

m_mazeMesh.Render(m_d3dContext.Get(), 0, INVALID_SAMPLER_SLOT, INVALID_SAMPLER_SLOT);

The MarbleMaze::Render method performs similar steps to render the marble.


As mentioned earlier in this document, the SDKMesh class is provided for demonstration purposes, but we do not
recommend it for use in a production-quality game. However, notice that the SDKMesh::RenderMesh method,
which is called by SDKMesh::Render, uses the ID3D11DeviceContext::IASetVertexBuffers and
ID3D11DeviceContext::IASetIndexBuffer methods to set the current vertex and index buffers that define the
mesh, and the ID3D11DeviceContext::DrawIndexed method to draw the buffers. For more information about
how to work with vertex and index buffers, see Introduction to Buffers in Direct3D 11.
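Schematically, that draw sequence looks like the following sketch; the buffer variables, index format, and count are assumptions for illustration.

UINT stride = m_vertexStride;
UINT offset = 0;
m_d3dContext->IASetVertexBuffers(0, 1, vertexBuffer.GetAddressOf(), &stride, &offset);
m_d3dContext->IASetIndexBuffer(indexBuffer.Get(), DXGI_FORMAT_R16_UINT, 0);
m_d3dContext->IASetPrimitiveTopology(D3D11_PRIMITIVE_TOPOLOGY_TRIANGLELIST);
m_d3dContext->DrawIndexed(indexCount, 0, 0);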
Drawing the user interface and overlay
After drawing 3-D scene objects, Marble Maze draws the 2-D UI elements that appear in front of the scene.
The MarbleMaze::Render method ends by drawing the user interface and the overlay.

// Draw the user interface and the overlay.
UserInterface::GetInstance().Render();

m_sampleOverlay->Render();

The UserInterface::Render method uses an ID2D1DeviceContext object to draw the UI elements. This method
sets the drawing state, draws all active UI elements, and then restores the previous drawing state.

void UserInterface::Render()
{
    m_d2dContext->SaveDrawingState(m_stateBlock.Get());
    m_d2dContext->BeginDraw();
    m_d2dContext->SetTransform(D2D1::Matrix3x2F::Identity());
    m_d2dContext->SetTextAntialiasMode(D2D1_TEXT_ANTIALIAS_MODE_GRAYSCALE);

    for (auto iter = m_elements.begin(); iter != m_elements.end(); ++iter)
    {
        if ((*iter)->IsVisible())
            (*iter)->Render();
    }

    m_d2dContext->EndDraw();
    m_d2dContext->RestoreDrawingState(m_stateBlock.Get());
}

The SampleOverlay::Render method uses a similar technique to draw the overlay bitmap.
Presenting the scene
After drawing all 2-D and 3-D scene objects, Marble Maze presents the rendered image to the monitor. It
synchronizes drawing to the vertical blank to ensure that time is not spent drawing frames that will never
actually be shown on the display. Marble Maze also handles device changes when it presents the scene.
After the MarbleMaze::Render method returns, the game loop calls the MarbleMaze::Present method to send
the rendered image to the monitor or display. The MarbleMaze class does not override the DirectXBase::Present
method. The DirectXBase::Present method calls IDXGISwapChain1::Present to perform the present operation,
as shown in the following example:
// The application may optionally specify "dirty" or "scroll" rects
// to improve efficiency in certain scenarios.
// In this sample, however, we do not utilize those features.
DXGI_PRESENT_PARAMETERS parameters = {0};
parameters.DirtyRectsCount = 0;
parameters.pDirtyRects = nullptr;
parameters.pScrollRect = nullptr;
parameters.pScrollOffset = nullptr;

// The first argument instructs DXGI to block until VSync, putting the
// application to sleep until the next VSync.
// This ensures we don't waste any cycles rendering frames that will
// never be displayed to the screen.
HRESULT hr = m_swapChain->Present1(1, 0, &parameters);

In this example, m_swapChain is an IDXGISwapChain1 object. The initialization of this object is described in the
section Initializing Direct3D and Direct2D in this document.
The first parameter to IDXGISwapChain1::Present1, SyncInterval, specifies the number of vertical blanks to wait
before presenting the frame. Marble Maze specifies 1 so that it waits until the next vertical blank. A vertical blank is
the time between when one frame finishes drawing to the monitor and the next frame begins.
The IDXGISwapChain1::Present1 method can return an error code that indicates that the device was removed or
otherwise failed. In this case, Marble Maze reinitializes the device.

// Reinitialize the renderer if the device was disconnected
// or the driver was upgraded.
if (hr == DXGI_ERROR_DEVICE_REMOVED || hr == DXGI_ERROR_DEVICE_RESET)
{
    Initialize(m_window, m_dpi);
}
else
{
    DX::ThrowIfFailed(hr);
}

Next steps
Read Adding input and interactivity to the Marble Maze sample for information about some of the key practices to
keep in mind when you work with input devices. This document discusses how Marble Maze supports touch,
accelerometer, Xbox 360 controller, and mouse input.

Related topics
Adding input and interactivity to the Marble Maze sample
Marble Maze application structure
Developing Marble Maze, a UWP game in C++ and DirectX
Adding input and interactivity to the Marble Maze sample
3/6/2017

[ Updated for UWP apps on Windows 10. For Windows 8.x articles, see the archive ]
Universal Windows Platform (UWP) app games run on a wide variety of devices, such as desktop computers,
laptops, and tablets. A device can have a wide variety of input and control mechanisms. Support multiple input
devices to enable your game to accommodate a wider range of preferences and abilities among your customers.
This document describes the key practices to keep in mind when you work with input devices and shows how
Marble Maze applies these practices.

Note The sample code that corresponds to this document is found in the DirectX Marble Maze game sample.

Here are some of the key points that this document discusses for when you work with input in your game:
When possible, support multiple input devices to enable your game to accommodate a wider range of
preferences and abilities among your customers. Although game controller and sensor usage is optional, we
strongly recommend it to enhance the player experience. We designed the game controller and sensor API to
help you more easily integrate these input devices.
To initialize touch, you must register for window events such as when the pointer is activated, released, and
moved. To initialize the accelerometer, create a Windows::Devices::Sensors::Accelerometer object when you
initialize the application. The Xbox 360 controller doesn't require initialization.
For single-player games, consider whether to combine input from all possible Xbox 360 controllers. This way,
you don't have to track what input comes from which controller.
Process Windows events before you process input devices.
The Xbox 360 controller and the accelerometer support polling. That is, you can poll for data when you need it.
For touch, record touch events in data structures that are available to your input processing code.
Consider whether to normalize input values to a common format. Doing so can simplify how input is
interpreted by other components of your game, such as physics simulation, and can make it easier to write
games that work on different screen resolutions.

Input devices supported by Marble Maze


Marble Maze supports Xbox 360 common controller devices, mouse, and touch to select menu items, and the Xbox
360 controller, mouse, touch, and the accelerometer to control game play. Marble Maze uses the XInput API to poll
the controller for input. Touch enables applications to track and respond to fingertip input. An accelerometer is a
sensor that measures the force that is applied along the x, y, and z axes. By using the Windows Runtime, you can
poll the current state of the accelerometer device, as well as receive touch events through the Windows Runtime
event-handling mechanism.

Note This document uses touch to refer to both touch and mouse input and pointer to refer to any device that
uses pointer events. Because touch and the mouse use standard pointer events, you can use either device to
select menu items and control game play.
Note The package manifest sets Landscape as the supported rotation for the game to prevent the orientation
from changing when you rotate the device to roll the marble.
Initializing input devices
The Xbox 360 controller does not require initialization. To initialize touch, you must register for windowing events
such as when the pointer is activated (for example, your user presses the mouse button or touches the screen),
released, and moved. To initialize the accelerometer, you have to create a
Windows::Devices::Sensors::Accelerometer object when you initialize the application.
The following example shows how the DirectXPage constructor registers for the
Windows::UI::Core::CoreIndependentInputSource::PointerPressed,
Windows::UI::Core::CoreIndependentInputSource::PointerReleased, and
Windows::UI::Core::CoreIndependentInputSource::PointerMoved pointer events for the SwapChainPanel.
These events are registered during app initialization and before the game loop.
These events are handled in a separate thread that invokes the event handlers.
For more information about how the application is initialized, see Marble Maze application structure.

coreInput->PointerPressed += ref new TypedEventHandler<Object^, PointerEventArgs^>(this, &DirectXPage::OnPointerPressed);


coreInput->PointerMoved += ref new TypedEventHandler<Object^, PointerEventArgs^>(this, &DirectXPage::OnPointerMoved);
coreInput->PointerReleased += ref new TypedEventHandler<Object^, PointerEventArgs^>(this, &DirectXPage::OnPointerReleased);

The MarbleMaze class also creates a std::map object to hold touch events. The key for this map object is a value
that uniquely identifies the input pointer. Each value is the distance between the corresponding touch point and
the center of the screen. Marble Maze later uses these values to calculate the amount by which the maze is tilted.

typedef std::map<int, XMFLOAT2> TouchMap;
TouchMap m_touches;

The MarbleMaze class holds an Accelerometer object.

Windows::Devices::Sensors::Accelerometer^ m_accelerometer;

The Accelerometer object is initialized in the MarbleMaze::Initialize method, as shown in the following example.
The Windows::Devices::Sensors::Accelerometer::GetDefault method returns an instance of the default
accelerometer. If there is no default accelerometer, Accelerometer::GetDefault returns nullptr and the value of
m_accelerometer remains nullptr.

// Returns accelerometer ref if there is one; nullptr otherwise.
m_accelerometer = Windows::Devices::Sensors::Accelerometer::GetDefault();

Navigating the menus

You can use the mouse, touch, or the Xbox 360 controller to navigate the menus, as follows:
Use the directional pad to change the active menu item.
Use touch, the A button, or the Start button to pick a menu item or close the current menu, such as the high-score table.
Use the Start button to pause or resume the game.
Click on a menu item with the mouse to choose that action.
Tracking Xbox 360 controller input
To track Xbox 360 controller input, the MarbleMaze::Update method defines an array of buttons that define the
input behaviors. XInput provides only the current state of the controller. Therefore, MarbleMaze::Update also
defines two arrays that track, for each possible Xbox 360 controller, whether each button was pressed during the
previous frame and whether each button is currently pressed.

// This array contains the constants from XINPUT that map to the
// particular buttons that are used by the game.
// The index of the array is used to associate the state of that button in
// the wasButtonDown, isButtonDown, and combinedButtonPressed variables.
static const WORD buttons[] = {
    XINPUT_GAMEPAD_A,
    XINPUT_GAMEPAD_START,
    XINPUT_GAMEPAD_DPAD_UP,
    XINPUT_GAMEPAD_DPAD_DOWN,
    XINPUT_GAMEPAD_DPAD_LEFT,
    XINPUT_GAMEPAD_DPAD_RIGHT,
    XINPUT_GAMEPAD_BACK,
};
static const int buttonCount = ARRAYSIZE(buttons);

static bool wasButtonDown[XUSER_MAX_COUNT][buttonCount] = { false, };
bool isButtonDown[XUSER_MAX_COUNT][buttonCount] = { false, };

You can connect up to four Xbox 360 controllers to a Windows device. To avoid having to figure out which
controller is the active one, the MarbleMaze::Update method combines input across all controllers.

bool combinedButtonPressed[buttonCount] = { false, };

If your game supports more than one player, you have to track input for each player separately.
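A hypothetical sketch of that approach follows; it reuses the buttons, buttonCount, and wasButtonDown arrays shown above, and the PlayerInput type is illustrative, not part of the Marble Maze sources.

// Sketch: track button presses per player instead of combining them.
struct PlayerInput
{
    bool isConnected;
    bool buttonPressed[buttonCount]; // true only on the frame the button went down
};

PlayerInput players[XUSER_MAX_COUNT] = {};
for (DWORD userIndex = 0; userIndex < XUSER_MAX_COUNT; ++userIndex)
{
    XINPUT_STATE inputState = {0};
    players[userIndex].isConnected =
        (XInputGetState(userIndex, &inputState) == ERROR_SUCCESS);
    if (!players[userIndex].isConnected)
        continue;

    for (int i = 0; i < buttonCount; ++i)
    {
        bool isDown = (inputState.Gamepad.wButtons & buttons[i]) == buttons[i];
        players[userIndex].buttonPressed[i] = isDown && !wasButtonDown[userIndex][i];
    }
}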
In a loop, the MarbleMaze::Update method polls each controller for input and reads the state of each button.

// Account for input on any connected controller.
XINPUT_STATE inputState = {0};
for (DWORD userIndex = 0; userIndex < XUSER_MAX_COUNT; ++userIndex)
{
    DWORD result = XInputGetState(userIndex, &inputState);
    if (result != ERROR_SUCCESS)
        continue;

    SHORT thumbLeftX = inputState.Gamepad.sThumbLX;
    if (abs(thumbLeftX) < XINPUT_GAMEPAD_LEFT_THUMB_DEADZONE)
        thumbLeftX = 0;

    SHORT thumbLeftY = inputState.Gamepad.sThumbLY;
    if (abs(thumbLeftY) < XINPUT_GAMEPAD_LEFT_THUMB_DEADZONE)
        thumbLeftY = 0;

    combinedTiltX += static_cast<float>(thumbLeftX) / 32768.0f;
    combinedTiltY += static_cast<float>(thumbLeftY) / 32768.0f;

    for (int i = 0; i < buttonCount; ++i)
        isButtonDown[userIndex][i] = (inputState.Gamepad.wButtons & buttons[i]) == buttons[i];
}

After the MarbleMaze::Update method polls for input, it updates the combined input array. The combined input
array tracks only which buttons are pressed but were not previously pressed. This enables the game to perform an
action only at the time a button is initially pressed, and not when the button is held.
bool combinedButtonPressed[buttonCount] = { false, };
for (int i = 0; i < buttonCount; ++i)
{
    for (DWORD userIndex = 0; userIndex < XUSER_MAX_COUNT; ++userIndex)
    {
        bool pressed = !wasButtonDown[userIndex][i] && isButtonDown[userIndex][i];
        combinedButtonPressed[i] = combinedButtonPressed[i] || pressed;
    }
}

After the MarbleMaze::Update method collects button input, it performs any actions that must happen. For
example, when the Start button (XINPUT_GAMEPAD_START) is pressed, the game state changes from active to
paused or from paused to active.

// Check whether the user paused or resumed the game.
// XINPUT_GAMEPAD_START
if (combinedButtonPressed[1] || m_pauseKeyPressed)
{
    m_pauseKeyPressed = false;
    if (m_gameState == GameState::InGameActive)
        SetGameState(GameState::InGamePaused);
    else if (m_gameState == GameState::InGamePaused)
        SetGameState(GameState::InGameActive);
}

If the main menu is active, the active menu item changes when the directional pad is pressed up or down. If the
user chooses the current selection, the appropriate UI element is marked as being chosen.
// Handle menu navigation.

// XINPUT_GAMEPAD_A or XINPUT_GAMEPAD_START
bool chooseSelection = (combinedButtonPressed[0] || combinedButtonPressed[1]);

// XINPUT_GAMEPAD_DPAD_UP
bool moveUp = combinedButtonPressed[2];

// XINPUT_GAMEPAD_DPAD_DOWN
bool moveDown = combinedButtonPressed[3];

switch (m_gameState)
{
case GameState::MainMenu:
    if (chooseSelection)
    {
        m_audio.PlaySoundEffect(MenuSelectedEvent);

        if (m_startGameButton.GetSelected())
            m_startGameButton.SetPressed(true);

        if (m_highScoreButton.GetSelected())
            m_highScoreButton.SetPressed(true);
    }
    if (moveUp || moveDown)
    {
        m_startGameButton.SetSelected(!m_startGameButton.GetSelected());
        m_highScoreButton.SetSelected(!m_startGameButton.GetSelected());

        m_audio.PlaySoundEffect(MenuChangeEvent);
    }
    break;

case GameState::HighScoreDisplay:
    if (chooseSelection || anyPoints)
        SetGameState(GameState::MainMenu);
    break;

case GameState::PostGameResults:
    if (chooseSelection || anyPoints)
        SetGameState(GameState::HighScoreDisplay);
    break;

case GameState::InGamePaused:
    if (m_pausedText.IsPressed())
    {
        m_pausedText.SetPressed(false);
        SetGameState(GameState::InGameActive);
    }
    break;
}

After the MarbleMaze::Update method processes controller input, it saves the current input state for the next
frame.

// Update the button state for the next frame.
memcpy(wasButtonDown, isButtonDown, sizeof(wasButtonDown));

Tracking touch and mouse input


For touch and mouse input, a menu item is chosen when the user touches or clicks it. The following example
shows how the MarbleMaze::Update method processes pointer input to select menu items. The m_pointQueue
member variable tracks the locations where the user touched or clicked on the screen. The way in which Marble
Maze collects pointer input is described in greater detail later in this document in the section Processing pointer
input.

// Check whether the user chose a button from the UI.
bool anyPoints = !m_pointQueue.empty();
while (!m_pointQueue.empty())
{
    UserInterface::GetInstance().HitTest(m_pointQueue.front());
    m_pointQueue.pop();
}

The UserInterface::HitTest method determines whether the provided point is located in the bounds of any UI
element. Any UI elements that pass this test are marked as being touched. This method uses the PointInRect
helper function to determine whether the provided point is located in the bounds of each UI element.

void UserInterface::HitTest(D2D1_POINT_2F point)
{
    for (auto iter = m_elements.begin(); iter != m_elements.end(); ++iter)
    {
        if (!(*iter)->IsVisible())
            continue;

        TextButton* textButton = dynamic_cast<TextButton*>(*iter);
        if (textButton != nullptr)
        {
            D2D1_RECT_F bounds = (*iter)->GetBounds();
            textButton->SetPressed(PointInRect(point, bounds));
        }
    }
}
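The PointInRect helper is not shown in this document; a plausible minimal implementation looks like the following, though the helper in the Marble Maze sources may differ.

// Sketch: returns true if point lies inside rect (inclusive of the edges).
inline bool PointInRect(D2D1_POINT_2F point, D2D1_RECT_F rect)
{
    return point.x >= rect.left && point.x <= rect.right &&
           point.y >= rect.top  && point.y <= rect.bottom;
}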

Updating the game state


After the MarbleMaze::Update method processes controller and touch input, it updates the game state if any
button was pressed.

// Update the game state if the user chose a menu option.
if (m_startGameButton.IsPressed())
{
    SetGameState(GameState::PreGameCountdown);
    m_startGameButton.SetPressed(false);
}
if (m_highScoreButton.IsPressed())
{
    SetGameState(GameState::HighScoreDisplay);
    m_highScoreButton.SetPressed(false);
}

Controlling game play


The game loop and the MarbleMaze::Update method work together to update the state of game objects. If your
game accepts input from multiple devices, you can accumulate the input from all devices into one set of variables
so that you can write code that's easier to maintain. The MarbleMaze::Update method defines one set of
variables that accumulates movement from all devices.

float combinedTiltX = 0.0f;
float combinedTiltY = 0.0f;

The input mechanism can vary from one input device to another. For example, pointer input is handled by using
the Windows Runtime event-handling model. Conversely, you poll for input data from the Xbox 360 controller
when you need it. We recommend that you always follow the input mechanism that is prescribed for a given
device. This section describes how Marble Maze reads input from each device, how it updates the combined input
values, and how it uses the combined input values to update the state of the game.
Processing pointer input
When you work with pointer input, call the Windows::UI::Core::CoreDispatcher::ProcessEvents method to
process window events. Call this method in your game loop before you update or render the scene. Marble Maze
passes CoreProcessEventsOption::ProcessAllIfPresent to this method to process all queued events, and then
immediately return. After events are processed, Marble Maze renders and presents the next frame.

// Process windowing events.
CoreWindow::GetForCurrentThread()->Dispatcher->ProcessEvents(CoreProcessEventsOption::ProcessAllIfPresent);

The Windows Runtime calls the registered handler for each event that occurred. The DirectXApp class registers
for events and forwards pointer information to the MarbleMaze class.

void DirectXApp::OnPointerPressed(
    _In_ Windows::UI::Core::CoreWindow^ sender,
    _In_ Windows::UI::Core::PointerEventArgs^ args
    )
{
    m_renderer->AddTouch(args->CurrentPoint->PointerId, args->CurrentPoint->Position);
}

void DirectXApp::OnPointerReleased(
    _In_ Windows::UI::Core::CoreWindow^ sender,
    _In_ Windows::UI::Core::PointerEventArgs^ args
    )
{
    m_renderer->RemoveTouch(args->CurrentPoint->PointerId);
}

void DirectXApp::OnPointerMoved(
    _In_ Windows::UI::Core::CoreWindow^ sender,
    _In_ Windows::UI::Core::PointerEventArgs^ args
    )
{
    m_renderer->UpdateTouch(args->CurrentPoint->PointerId, args->CurrentPoint->Position);
}

The MarbleMaze class reacts to pointer events by updating the map object that holds touch events. The
MarbleMaze::AddTouch method is called when the pointer is first pressed, for example, when the user initially
touches the screen on a touch-enabled device. The MarbleMaze::UpdateTouch method is called when the pointer
position moves. The MarbleMaze::RemoveTouch method is called when the pointer is released, for example,
when the user stops touching the screen.
void MarbleMaze::AddTouch(int id, Windows::Foundation::Point point)
{
    m_touches[id] = PointToTouch(point, m_windowBounds);
    m_pointQueue.push(D2D1::Point2F(point.X, point.Y));
}

void MarbleMaze::UpdateTouch(int id, Windows::Foundation::Point point)
{
    if (m_touches.find(id) != m_touches.end())
        m_touches[id] = PointToTouch(point, m_windowBounds);
}

void MarbleMaze::RemoveTouch(int id)
{
    m_touches.erase(id);
}

The PointToTouch function translates the current pointer position so that the origin is in the center of the screen,
and then scales the coordinates so that they range approximately between -1.0 and +1.0. This makes it easier to
calculate the tilt of the maze in a consistent way across different input methods.

inline XMFLOAT2 PointToTouch(Windows::Foundation::Point point, Windows::Foundation::Rect bounds)
{
    float touchRadius = min(bounds.Width, bounds.Height);
    float dx = (point.X - (bounds.Width / 2.0f)) / touchRadius;
    float dy = ((bounds.Height / 2.0f) - point.Y) / touchRadius;

    return XMFLOAT2(dx, dy);
}

The MarbleMaze::Update method updates the combined input values by incrementing the tilt factor by a
constant scaling value. This scaling value was determined by experimenting with several different values.

// Account for touch input.
const float touchScalingFactor = 2.0f;
for (TouchMap::const_iterator iter = m_touches.cbegin(); iter != m_touches.cend(); ++iter)
{
    combinedTiltX += iter->second.x * touchScalingFactor;
    combinedTiltY += iter->second.y * touchScalingFactor;
}

Processing accelerometer input


To process accelerometer input, the MarbleMaze::Update method calls the
Windows::Devices::Sensors::Accelerometer::GetCurrentReading method. This method returns a
Windows::Devices::Sensors::AccelerometerReading object, which represents an accelerometer reading. The
Windows::Devices::Sensors::AccelerometerReading::AccelerationX and
Windows::Devices::Sensors::AccelerometerReading::AccelerationY properties hold the g-force acceleration
along the x and y axes, respectively.
The following example shows how the MarbleMaze::Update method polls the accelerometer and updates the
combined input values. As you tilt the device, gravity causes the marble to move faster.
// Account for sensors.
const float accelerometerScalingFactor = 3.5f;
if (m_accelerometer != nullptr)
{
    Windows::Devices::Sensors::AccelerometerReading^ reading =
        m_accelerometer->GetCurrentReading();

    if (reading != nullptr)
    {
        combinedTiltX += static_cast<float>(reading->AccelerationX) * accelerometerScalingFactor;
        combinedTiltY += static_cast<float>(reading->AccelerationY) * accelerometerScalingFactor;
    }
}

Because you cannot be sure that an accelerometer is present on the user's computer, always ensure that you have
a valid Accelerometer object before you poll the accelerometer.
Processing Xbox 360 controller input
The following example shows how the MarbleMaze::Update method reads from the Xbox 360 controller and
updates the combined input values. The MarbleMaze::Update method uses a for loop to enable input to be
received from any connected controller. The XInputGetState method fills an XINPUT_STATE object with current
state of the controller. The combinedTiltX and combinedTiltY values are updated according to the x and y
values of the left thumbstick.

// Account for input on any connected controller.
XINPUT_STATE inputState = {0};
for (DWORD userIndex = 0; userIndex < XUSER_MAX_COUNT; ++userIndex)
{
    DWORD result = XInputGetState(userIndex, &inputState);
    if (result != ERROR_SUCCESS)
        continue;

    SHORT thumbLeftX = inputState.Gamepad.sThumbLX;
    if (abs(thumbLeftX) < XINPUT_GAMEPAD_LEFT_THUMB_DEADZONE)
        thumbLeftX = 0;

    SHORT thumbLeftY = inputState.Gamepad.sThumbLY;
    if (abs(thumbLeftY) < XINPUT_GAMEPAD_LEFT_THUMB_DEADZONE)
        thumbLeftY = 0;

    combinedTiltX += static_cast<float>(thumbLeftX) / 32768.0f;
    combinedTiltY += static_cast<float>(thumbLeftY) / 32768.0f;

    for (int i = 0; i < buttonCount; ++i)
        isButtonDown[userIndex][i] = (inputState.Gamepad.wButtons & buttons[i]) == buttons[i];
}

XInput defines the XINPUT_GAMEPAD_LEFT_THUMB_DEADZONE constant for the left thumbstick. This is an
appropriate dead zone threshold for most games.

Important When you work with the Xbox 360 controller, always account for the dead zone. The dead zone
refers to the variance among gamepads in their sensitivity to initial movement. In some controllers, a small
movement may generate no reading, but in others it may generate a measurable reading. To account for this
in your game, create a zone of non-movement for initial thumbstick movement. For more information about
the dead zone, see Getting Started With XInput.

Applying input to the game state


Devices report input values in different ways. For example, pointer input might be in screen coordinates, and
controller input might be in a completely different format. One challenge with combining input from multiple
devices into one set of input values is normalization, or converting values to a common format. Marble Maze
normalizes values by scaling them to the range [-1.0, 1.0]. To normalize Xbox 360 controller input, Marble Maze
divides the input values by 32768 because thumbstick input values always fall between -32768 and 32767. The
PointToTouch function, which is previously described in this section, achieves a similar result by converting
screen coordinates to normalized values that range approximately between -1.0 and +1.0.

Tip Even if your application uses one input method, we recommend that you always normalize input values.
Doing so can simplify how input is interpreted by other components of your game, such as physics simulation,
and makes it easier to write games that work on different screen resolutions.

After the MarbleMaze::Update method processes input, it creates a vector that represents the effect of the tilt of
the maze on the marble. The following example shows how Marble Maze uses the XMVector3Normalize
function to create a normalized gravity vector. The maxTilt variable constrains the amount by which the maze tilts
and prevents the maze from tilting on its side.

const float maxTilt = 1.0f / 8.0f;

XMVECTOR gravity = XMVectorSet(combinedTiltX * maxTilt, combinedTiltY * maxTilt, 1.0f, 0.0f);
gravity = XMVector3Normalize(gravity);

To complete the update of scene objects, Marble Maze passes the updated gravity vector to the physics simulation,
updates the physics simulation for the time that has elapsed since the previous frame, and updates the position
and orientation of the marble. If the marble has fallen through the maze, the MarbleMaze::Update method
places the marble back at the last checkpoint that the marble touched and resets the state of the physics
simulation.

XMFLOAT3 g;
XMStoreFloat3(&g, gravity);
m_physics.SetGravity(g);

// Only update physics when gameplay is active.
m_physics.UpdatePhysicsSimulation(timeDelta);

// Check whether the marble fell off of the maze.
const float fadeOutDepth = 0.0f;
const float resetDepth = 80.0f;
if (marblePosition.z >= fadeOutDepth)
{
    m_targetLightStrength = 0.0f;
}
if (marblePosition.z >= resetDepth)
{
    // Reset marble.
    memcpy(&marblePosition, &m_checkpoints[m_currentCheckpoint], sizeof(XMFLOAT3));
    oldMarblePosition = marblePosition;
    m_physics.SetPosition((const XMFLOAT3&)marblePosition);
    m_physics.SetVelocity(XMFLOAT3(0, 0, 0));
    m_lightStrength = 0.0f;
    m_targetLightStrength = 1.0f;

    m_resetCamera = true;
    m_resetMarbleRotation = true;
    m_audio.PlaySoundEffect(FallingEvent);
}

This section does not describe how the physics simulation works. For details about that, see Physics.h and
Physics.cpp in the Marble Maze sources.

Next steps
Read Adding audio to the Marble Maze sample for information about some of the key practices to keep in mind
when you work with audio. The document discusses how Marble Maze uses Microsoft Media Foundation and
XAudio2 to load, mix, and play audio resources.

Related topics
Adding audio to the Marble Maze sample
Adding visual content to the Marble Maze sample
Developing Marble Maze, a UWP game in C++ and DirectX
Adding audio to the Marble Maze sample
3/6/2017

[ Updated for UWP apps on Windows 10. For Windows 8.x articles, see the archive ]
This document describes the key practices to consider when you work with audio and shows how Marble Maze
applies these practices. Marble Maze uses Microsoft Media Foundation to load audio resources from file, and
XAudio2 to mix and play audio and to apply effects to audio.
Marble Maze plays music in the background, and also uses game-play sounds to indicate game events, such as
when the marble hits a wall. An important part of the implementation is that Marble Maze uses a reverb, or echo,
effect to simulate the sound of a marble when it bounces. The reverb effect implementation causes echoes to reach
you more quickly and loudly in small rooms; echoes are quieter and reach you more slowly in larger rooms.

Note The sample code that corresponds to this document is found in the DirectX Marble Maze game sample.

Here are some of the key points that this document discusses for when you work with audio in your game:
Consider using Media Foundation to decode audio assets and XAudio2 to play audio. However, if you have an
existing audio asset-loading mechanism that works in a Universal Windows Platform (UWP) app, you can use it.
An audio graph contains one source voice for each active sound, zero or more submix voices, and one
mastering voice. Source voices can feed into submix voices and/or the mastering voice. Submix voices feed into
other submix voices or the mastering voice.
If your background music files are large, consider streaming your music into smaller buffers so that less
memory is used.
If it makes sense to do so, pause audio playback when the app loses focus or visibility, or is suspended. Resume
playback when your app regains focus, becomes visible, or is resumed.
Set audio categories to reflect the role of each sound. For example, you typically use
AudioCategory_GameMedia for game background audio and AudioCategory_GameEffects for sound
effects.
Handle device changes, including headphones, by releasing and recreating all audio resources and interfaces.
Consider whether to compress audio files when minimizing disk space and streaming costs is a requirement.
Otherwise, you can leave audio uncompressed so that it loads faster.

Introducing XAudio2 and Microsoft Media Foundation


XAudio2 is a low-level audio library for Windows that specifically supports game audio. It provides a digital signal
processing (DSP) and audio-graph engine for games. XAudio2 expands on its predecessors, DirectSound and
XAudio, by supporting computing trends such as SIMD floating-point architectures and HD audio. It also supports
the more complex sound processing demands of today's games.
The document XAudio2 Key Concepts explains the key concepts for using XAudio2. In brief, the concepts are:
The IXAudio2 interface is the core of the XAudio2 engine. Marble Maze uses this interface to create voices and
to receive notification when the output device changes or fails.
A voice processes, adjusts, and plays audio data.
A source voice is a collection of audio channels (mono, 5.1, and so on) and represents one stream of audio data.
In XAudio2, a source voice is where audio processing begins. Typically, sound data is loaded from an external
source, such as a file or a network, and is sent to a source voice. Marble Maze uses Media Foundation to load
sound data from files. Media Foundation is introduced later in this document.
A submix voice processes audio data. This processing can include changing the audio stream or combining
multiple streams into one. Marble Maze uses submixes to create the reverb effect.
A mastering voice combines data from source and submix voices and sends that data to the audio hardware.
An audio graph contains one source voice for each active sound, zero or more submix voices, and only one
mastering voice.
A callback informs client code that some event has occurred in a voice or in an engine object. By using
callbacks, you can reuse memory when XAudio2 is finished with a buffer, react when the audio device changes
(for example, when you connect or disconnect headphones), and more. The Handling headphones and device
changes section later in this document explains how Marble Maze uses this mechanism to handle device
changes.
Marble Maze uses two audio engines (in other words, two IXAudio2 objects) to process audio. One engine
processes the background music, and the other engine processes game-play sounds.
Marble Maze must also create one mastering voice for each engine. Recall that a mastering voice combines
audio streams into one stream and sends that stream to the audio hardware. The background music stream, a
source voice, outputs data to a mastering voice and to two submix voices. The submix voices perform the reverb
effect.
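Condensed into code, that setup looks roughly like the following sketch. It uses the member names that appear elsewhere in this document; the actual creation calls appear in the sections that follow.

// Sketch: one engine and mastering voice for music, another pair for effects.
DX::ThrowIfFailed(XAudio2Create(&m_musicEngine));
DX::ThrowIfFailed(XAudio2Create(&m_soundEffectEngine));

DX::ThrowIfFailed(
    m_musicEngine->CreateMasteringVoice(
        &m_musicMasteringVoice, 2, 48000, 0, nullptr, nullptr,
        AudioCategory_GameMedia)
    );
DX::ThrowIfFailed(
    m_soundEffectEngine->CreateMasteringVoice(
        &m_soundEffectMasteringVoice, 2, 48000, 0, nullptr, nullptr,
        AudioCategory_GameEffects)
    );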
Media Foundation is a multimedia library that supports many audio and video formats. XAudio2 and Media
Foundation complement each other. Marble Maze uses Media Foundation to load audio assets from file and uses
XAudio2 to play audio. You don't have to use Media Foundation to load audio assets. If you have an existing audio
asset loading mechanism that works in Universal Windows Platform (UWP) apps, use it.
For more information about XAudio2, see Programming Guide. For more information about Media Foundation,
see Microsoft Media Foundation.

Initializing audio resources


Marble Maze uses a Windows Media Audio (.wma) file for the background music, and WAV (.wav) files for game
play sounds. These formats are supported by Media Foundation. Although the .wav file format is natively
supported by XAudio2, a game has to parse the file format manually to fill out the appropriate XAudio2 data
structures. Marble Maze uses Media Foundation to more easily work with .wav files. For the complete list of the
media formats that are supported by Media Foundation, see Supported Media Formats in Media Foundation.
Marble Maze does not use separate design-time and run-time audio formats, and does not use XAudio2 ADPCM
compression support. For more information about ADPCM compression in XAudio2, see ADPCM Overview.
The Audio::CreateResources method, which is called from
MarbleMaze::CreateDeviceIndependentResources, loads the audio streams from file, initializes the XAudio2
engine objects, and creates the source, submix, and mastering voices.
Creating the XAudio2 engines
Recall that Marble Maze creates one IXAudio2 object to represent each audio engine that it uses. To create an
audio engine, call the XAudio2Create function. The following example shows how Marble Maze creates the audio
engine that processes background music.

DX::ThrowIfFailed(
XAudio2Create(&m_musicEngine)
);

Marble Maze performs a similar step to create the audio engine that plays game-play sounds.
How to work with the IXAudio2 interface in a UWP app differs from a desktop app in two ways. First, you don't
have to call CoInitializeEx before you call XAudio2Create. In addition, IXAudio2 no longer supports device
enumeration. For information about how to enumerate audio devices, see Enumerating devices.
Creating the mastering voices
The following example shows how the Audio::CreateResources method creates the mastering voice for the
background music. The call to IXAudio2::CreateMasteringVoice specifies two input channels, which simplifies
the logic for the reverb effect, and a sample rate of 48000 Hz. (Passing XAUDIO2_DEFAULT_SAMPLERATE instead
tells the audio engine to use the sample rate that is specified in the Sound Control Panel.) In this example,
m_musicMasteringVoice is an IXAudio2MasteringVoice object.

// This sample plays the equivalent of background music, which we tag on the
// mastering voice as AudioCategory_GameMedia. In ordinary usage, if we were
// playing the music track with no effects, we could route it entirely through
// Media Foundation. Here, we are using XAudio2 to apply a reverb effect to the
// music, so we use Media Foundation to decode the data then we feed it through
// the XAudio2 pipeline as a separate Mastering Voice, so that we can tag it
// as Game Media. We default the mastering voice to 2 channels to simplify
// the reverb logic.
DX::ThrowIfFailed(
    m_musicEngine->CreateMasteringVoice(
        &m_musicMasteringVoice,
        2,
        48000,
        0,
        nullptr,
        nullptr,
        AudioCategory_GameMedia
        )
    );

The Audio::CreateResources method performs a similar step to create the mastering voice for the game play
sounds, except that it specifies AudioCategory_GameEffects for the StreamCategory parameter, which is the
default. Marble Maze specifies AudioCategory_GameMedia for background music so that users can listen to
music from a different application as they play the game. When a music app is playing, Windows mutes any voices
that are created by the AudioCategory_GameMedia option. The user still hears game-play sounds because they
are created by the AudioCategory_GameEffects option. For more info about audio categories, see
AUDIO_STREAM_CATEGORY enumeration.
Creating the reverb effect
For each voice, you can use XAudio2 to create sequences of effects that process audio. Such a sequence is known
as an effect chain. Use effect chains when you want to apply one or more effects to a voice. Effect chains can be
destructive; that is, each effect in the chain can overwrite the audio buffer. This property is important because
XAudio2 makes no guarantee that output buffers are initialized with silence. Effect objects are represented in
XAudio2 by cross-platform audio processing objects (XAPO). For more information about XAPO, see XAPO
Overview.
When you create an effect chain, follow these steps:
1. Create the effect object.
2. Populate an XAUDIO2_EFFECT_DESCRIPTOR structure with effect data.
3. Populate an XAUDIO2_EFFECT_CHAIN structure with data.
4. Apply the effect chain to a voice.
5. Populate an effect parameter structure and apply it to the effect.
6. Disable or enable the effect whenever appropriate.
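Taken together, those steps look roughly like the following sketch. The local names are illustrative, and engine stands for an IXAudio2 object; the actual Audio::CreateReverb code is shown in fragments below.

// Sketch of the six steps; requires <xaudio2.h>, <xaudio2fx.h>, <wrl/client.h>.
Microsoft::WRL::ComPtr<IUnknown> reverbXAPO;
DX::ThrowIfFailed(XAudio2CreateReverb(&reverbXAPO));    // 1. Create the effect object.

XAUDIO2_EFFECT_DESCRIPTOR descriptor = {};              // 2. Describe the effect.
descriptor.InitialState = false;                        //    Start disabled.
descriptor.OutputChannels = 2;
descriptor.pEffect = reverbXAPO.Get();

XAUDIO2_EFFECT_CHAIN chain = {};                        // 3. Build the chain.
chain.EffectCount = 1;
chain.pEffectDescriptors = &descriptor;

IXAudio2SubmixVoice* submix = nullptr;                  // 4. Apply it to a voice.
DX::ThrowIfFailed(
    engine->CreateSubmixVoice(&submix, 2, 48000, 0, 0, nullptr, &chain)
    );

XAUDIO2FX_REVERB_PARAMETERS params = {};                // 5. Set effect parameters.
// Fill in the reverb parameters (see the defaults later in this section)
// before applying them.
DX::ThrowIfFailed(submix->SetEffectParameters(0, &params, sizeof(params)));

DX::ThrowIfFailed(submix->EnableEffect(0));             // 6. Enable when appropriate.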
The Audio class defines the CreateReverb method to create the effect chain that implements reverb. This method
calls the XAudio2CreateReverb function to create the reverb effect object (an XAPO), and then creates an
IXAudio2SubmixVoice object, which acts as the submix voice for the reverb effect.
DX::ThrowIfFailed(
XAudio2CreateReverb(&soundEffectXAPO)
);

The XAUDIO2_EFFECT_DESCRIPTOR structure contains information about an XAPO for use in an effect chain, for
example, the target number of output channels. The Audio::CreateReverb method creates an
XAUDIO2_EFFECT_DESCRIPTOR object that is set to the disabled state, uses two output channels, and references
the IXAudio2SubmixVoice object for the reverb effect. The XAUDIO2_EFFECT_DESCRIPTOR object starts in the
disabled state because the game must set parameters before the effect starts modifying game sounds. Marble
Maze uses two output channels to simplify the logic for the reverb effect.

soundEffectdescriptor.InitialState = false;
soundEffectdescriptor.OutputChannels = 2;
soundEffectdescriptor.pEffect = soundEffectXAPO.Get();

If your effect chain has multiple effects, each effect requires an object. The XAUDIO2_EFFECT_CHAIN structure
holds the array of XAUDIO2_EFFECT_DESCRIPTOR objects that participate in the effect. The following example
shows how the Audio::CreateReverb method specifies the one effect to implement reverb.

soundEffectChain.EffectCount = 1;
soundEffectChain.pEffectDescriptors = &soundEffectdescriptor;

The Audio::CreateReverb method calls the IXAudio2::CreateSubmixVoice method to create the submix voice
for the effect. It specifies the XAUDIO2_EFFECT_CHAIN object for the pEffectChain parameter to associate the
effect chain with the voice. Marble Maze also specifies two output channels and a sample rate of 48 kilohertz. We
chose this sample rate because it represented a balance between audio quality and the amount of required CPU
processing. A greater sample rate would have required more CPU processing without having a noticeable quality
benefit.

DX::ThrowIfFailed(
engine->CreateSubmixVoice(newSubmix, 2, 48000, 0, 0, nullptr, &soundEffectChain)
);

Tip If you want to attach an existing effect chain to an existing submix voice, or you want to replace the current
effect chain, use the IXAudio2Voice::SetEffectChain method.

The Audio::CreateReverb method calls IXAudio2Voice::SetEffectParameters to set additional
parameters that are associated with the effect. This method takes a parameter structure that is specific to the effect.
An XAUDIO2FX_REVERB_PARAMETERS object, which contains the effect parameters for reverb, is initialized in
the Audio::Initialize method because every reverb effect shares the same parameters. The following example
shows how the Audio::Initialize method initializes the reverb parameters for near-field reverb.
m_reverbParametersSmall.ReflectionsDelay = XAUDIO2FX_REVERB_DEFAULT_REFLECTIONS_DELAY;
m_reverbParametersSmall.ReverbDelay = XAUDIO2FX_REVERB_DEFAULT_REVERB_DELAY;
m_reverbParametersSmall.RearDelay = XAUDIO2FX_REVERB_DEFAULT_REAR_DELAY;
m_reverbParametersSmall.PositionLeft = XAUDIO2FX_REVERB_DEFAULT_POSITION;
m_reverbParametersSmall.PositionRight = XAUDIO2FX_REVERB_DEFAULT_POSITION;
m_reverbParametersSmall.PositionMatrixLeft = XAUDIO2FX_REVERB_DEFAULT_POSITION_MATRIX;
m_reverbParametersSmall.PositionMatrixRight = XAUDIO2FX_REVERB_DEFAULT_POSITION_MATRIX;
m_reverbParametersSmall.EarlyDiffusion = 4;
m_reverbParametersSmall.LateDiffusion = 15;
m_reverbParametersSmall.LowEQGain = XAUDIO2FX_REVERB_DEFAULT_LOW_EQ_GAIN;
m_reverbParametersSmall.LowEQCutoff = XAUDIO2FX_REVERB_DEFAULT_LOW_EQ_CUTOFF;
m_reverbParametersSmall.HighEQGain = XAUDIO2FX_REVERB_DEFAULT_HIGH_EQ_GAIN;
m_reverbParametersSmall.HighEQCutoff = XAUDIO2FX_REVERB_DEFAULT_HIGH_EQ_CUTOFF;
m_reverbParametersSmall.RoomFilterFreq = XAUDIO2FX_REVERB_DEFAULT_ROOM_FILTER_FREQ;
m_reverbParametersSmall.RoomFilterMain = XAUDIO2FX_REVERB_DEFAULT_ROOM_FILTER_MAIN;
m_reverbParametersSmall.RoomFilterHF = XAUDIO2FX_REVERB_DEFAULT_ROOM_FILTER_HF;
m_reverbParametersSmall.ReflectionsGain = XAUDIO2FX_REVERB_DEFAULT_REFLECTIONS_GAIN;
m_reverbParametersSmall.ReverbGain = XAUDIO2FX_REVERB_DEFAULT_REVERB_GAIN;
m_reverbParametersSmall.DecayTime = XAUDIO2FX_REVERB_DEFAULT_DECAY_TIME;
m_reverbParametersSmall.Density = XAUDIO2FX_REVERB_DEFAULT_DENSITY;
m_reverbParametersSmall.RoomSize = XAUDIO2FX_REVERB_DEFAULT_ROOM_SIZE;
m_reverbParametersSmall.WetDryMix = XAUDIO2FX_REVERB_DEFAULT_WET_DRY_MIX;
m_reverbParametersSmall.DisableLateField = TRUE;

This example uses the default values for most of the reverb parameters, but it sets DisableLateField to TRUE to
specify near-field reverb, EarlyDiffusion to 4 to simulate flat near surfaces, and LateDiffusion to 15 to simulate
very diffuse distant surfaces. Flat near surfaces cause echoes to reach you more quickly and loudly; diffuse distant
surfaces cause echoes to be quieter and reach you more slowly. You can experiment with reverb values to get the
desired effect in your game or use the ReverbConvertI3DL2ToNative function to use industry-standard I3DL2
(Interactive 3D Audio Rendering Guidelines Level 2.0) parameters.
The following example shows how Audio::CreateReverb sets the reverb parameters. The parameters parameter
is an XAUDIO2FX_REVERB_PARAMETERS object.

DX::ThrowIfFailed(
(*newSubmix)->SetEffectParameters(0, parameters, sizeof(m_reverbParametersSmall))
);

The Audio::CreateReverb method finishes by enabling the effect if the enableEffect flag is set and by setting its
volume and output matrix. This part sets the volume to full (1.0) and then specifies the volume matrix to be silence
for both left and right inputs and left and right output speakers. We do this because other code later cross-fades
between the two reverbs (simulating the transition from being near a wall to being in a large room), or mutes both
reverbs if required. When the reverb path is later unmuted, the game sets a matrix of {1.0f, 0.0f, 0.0f, 1.0f} to route
left reverb output to the left input of the mastering voice and right reverb output to the right input of the
mastering voice.
if (enableEffect)
{
    DX::ThrowIfFailed(
        (*newSubmix)->EnableEffect(0)
        );
}

DX::ThrowIfFailed(
    (*newSubmix)->SetVolume(1.0f)
    );

float outputMatrix[4] = {0, 0, 0, 0};
DX::ThrowIfFailed(
    (*newSubmix)->SetOutputMatrix(masteringVoice, 2, 2, outputMatrix)
    );
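Later, unmuting the reverb path amounts to replacing that silent matrix with the identity routing described above; a minimal sketch:

// Sketch: route left reverb output to the left mastering input, right to right.
float unmuted[4] = { 1.0f, 0.0f, 0.0f, 1.0f };
DX::ThrowIfFailed(
    (*newSubmix)->SetOutputMatrix(masteringVoice, 2, 2, unmuted)
    );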

Marble Maze calls the CreateReverb method four times: twice for the background music and twice for the
game-play sounds. The following shows how Marble Maze calls the CreateReverb method for the
background music.

CreateReverb(
m_musicEngine,
m_musicMasteringVoice,
&m_reverbParametersSmall,
&m_musicReverbVoiceSmallRoom,
true
);
CreateReverb(
m_musicEngine,
m_musicMasteringVoice,
&m_reverbParametersLarge,
&m_musicReverbVoiceLargeRoom,
true
);

For a list of possible sources of effects for use with XAudio2, see XAudio2 Audio Effects.
Loading audio data from file
Marble Maze defines the MediaStreamer class, which uses Media Foundation to load audio resources from file.
Marble Maze uses one MediaStreamer object to load each audio file.
Marble Maze calls the MediaStreamer::Initialize method to initialize each audio stream. Here's how the
Audio::CreateResources method calls MediaStreamer::Initialize to initialize the audio stream for the
background music:

// Media Foundation is a convenient way to get both file I/O and format decode for
// audio assets. You can replace the streamer in this sample with your own file I/O
// and decode routines.
m_musicStreamer.Initialize(L"Media\\Audio\\background.wma");

The MediaStreamer::Initialize method starts by calling the MFStartup function to initialize Media Foundation.

DX::ThrowIfFailed(
MFStartup(MF_VERSION)
);

MediaStreamer::Initialize then calls MFCreateSourceReaderFromURL to create an IMFSourceReader object.
An IMFSourceReader object reads media data from the file that is specified by url.
DX::ThrowIfFailed(
MFCreateSourceReaderFromURL(url, nullptr, &m_reader)
);

The MediaStreamer::Initialize method then creates an IMFMediaType object to describe the format of the
audio stream. An audio format has two types: a major type and a subtype. The major type defines the overall
format of the media, such as video, audio, script, and so on. The subtype defines the format, such as PCM, ADPCM,
or WMA. The MediaStreamer::Initialize method uses the IMFMediaType::SetGUID method to specify the
major type as audio (MFMediaType_Audio) and the subtype as uncompressed PCM audio
(MFAudioFormat_PCM). The IMFSourceReader::SetCurrentMediaType method associates the media type with
the stream reader.

// Set the decoded output format as PCM.
// XAudio2 on Windows can process PCM and ADPCM-encoded buffers.
// When this sample uses Media Foundation, it always decodes into PCM.

DX::ThrowIfFailed(
MFCreateMediaType(&mediaType)
);

DX::ThrowIfFailed(
mediaType->SetGUID(MF_MT_MAJOR_TYPE, MFMediaType_Audio)
);

DX::ThrowIfFailed(
mediaType->SetGUID(MF_MT_SUBTYPE, MFAudioFormat_PCM)
);

DX::ThrowIfFailed(
m_reader->SetCurrentMediaType(MF_SOURCE_READER_FIRST_AUDIO_STREAM, 0, mediaType.Get())
);

The MediaStreamer::Initialize method then obtains the complete output media format from Media Foundation
and calls the MFCreateWaveFormatExFromMFMediaType function to convert the Media Foundation audio
media type to a WAVEFORMATEX structure. The WAVEFORMATEX structure defines the format of waveform-
audio data. Marble Maze uses this structure to create the source voices and to apply the low-pass filter to the
marble rolling sound.

// Get the complete WAVEFORMAT from the Media Type.
DX::ThrowIfFailed(
m_reader->GetCurrentMediaType(MF_SOURCE_READER_FIRST_AUDIO_STREAM, &outputMediaType)
);

uint32 formatSize = 0;
WAVEFORMATEX* waveFormat;
DX::ThrowIfFailed(
MFCreateWaveFormatExFromMFMediaType(outputMediaType.Get(), &waveFormat, &formatSize)
);
CopyMemory(&m_waveFormat, waveFormat, sizeof(m_waveFormat));
CoTaskMemFree(waveFormat);

Important The MFCreateWaveFormatExFromMFMediaType function uses CoTaskMemAlloc to allocate
the WAVEFORMATEX object. Therefore, make sure that you call CoTaskMemFree when you are finished
using this object.

The MediaStreamer::Initialize method finishes by computing the length of the stream,
m_maxStreamLengthInBytes, in bytes. To do so, it calls the
IMFSourceReader::GetPresentationAttribute method to get the duration of the audio
stream in 100-nanosecond units, converts the duration to seconds, and then multiplies by the average data
transfer rate in bytes per second. Marble Maze later uses this value to allocate the buffer that holds each game-play
sound.

// Get the total length of the stream, in bytes.
PROPVARIANT var;
DX::ThrowIfFailed(
    m_reader->GetPresentationAttribute(MF_SOURCE_READER_MEDIASOURCE, MF_PD_DURATION, &var)
    );
LONGLONG duration = var.uhVal.QuadPart;
// The duration is in 100ns units; convert the value to seconds.
double durationInSeconds = (duration / static_cast<double>(10000000));
m_maxStreamLengthInBytes =
    static_cast<unsigned int>(durationInSeconds * m_waveFormat.nAvgBytesPerSec);

// Round up the buffer size to the nearest four bytes.
m_maxStreamLengthInBytes = (m_maxStreamLengthInBytes + 3) / 4 * 4;

Creating the source voices


Marble Maze creates one XAudio2 source voice for each of its game sounds and for the background music. The
Audio class defines an IXAudio2SourceVoice object for the background music and an array of
SoundEffectData objects to hold the game play sounds. The SoundEffectData structure holds the
IXAudio2SourceVoice object for an effect and also defines other effect-related data, such as the audio buffer.
Audio.h defines the SoundEvent enumeration. Marble Maze uses this enumeration to identify each game play
sound. The Audio class also uses this enumeration to index the array of SoundEffectData objects.

enum SoundEvent
{
    RollingEvent      = 0,
    FallingEvent      = 1,
    CollisionEvent    = 2,
    CheckpointEvent   = 3,
    MenuChangeEvent   = 4,
    MenuSelectedEvent = 5,
    LastSoundEvent,
};

The following table shows the relationship between each of these values, the file that contains the associated
sound data, and a brief description of what each sound represents. The audio files are located in the \Media\Audio
folder.

SOUNDEVENT VALUE    FILE NAME        DESCRIPTION
RollingEvent        MarbleRoll.wav   Played as the marble rolls.
FallingEvent        MarbleFall.wav   Played when the marble falls off the maze.
CollisionEvent      MarbleHit.wav    Played when the marble collides with the maze.
CheckpointEvent     Checkpoint.wav   Played when the marble passes over a checkpoint.
MenuChangeEvent     MenuChange.wav   Played when the game user changes the current menu item.
MenuSelectedEvent   MenuSelect.wav   Played when the game user selects a menu item.

The following example shows how the Audio::CreateResources method creates the source voice for the
background music. The IXAudio2::CreateSourceVoice method creates and configures a source voice. It takes a
WAVEFORMATEX structure that defines the format of the audio buffers that are sent to the voice. As mentioned
previously, Marble Maze uses the PCM format. The XAUDIO2_SEND_DESCRIPTOR structure defines the target
destination voice from another voice and specifies whether a filter should be used. Marble Maze calls the
Audio::SetSoundEffectFilter function to use the filters to change the sound of the ball as it rolls. The
XAUDIO2_VOICE_SENDS structure defines the set of voices to receive data from a single output voice. Marble
Maze sends data from the source voice to the mastering voice (for the dry, or unaltered, portion of a playing
sound) and to the two submix voices that implement the wet, or reverberant, portion of a playing sound.

XAUDIO2_SEND_DESCRIPTOR descriptors[3];
descriptors[0].pOutputVoice = m_soundEffectMasteringVoice;
descriptors[0].Flags = 0;
descriptors[1].pOutputVoice = m_soundEffectReverbVoiceSmallRoom;
descriptors[1].Flags = 0;
descriptors[2].pOutputVoice = m_soundEffectReverbVoiceLargeRoom;
descriptors[2].Flags = 0;
XAUDIO2_VOICE_SENDS sends = {0};
sends.SendCount = 3;
sends.pSends = descriptors;

// The rolling sound can have pitch shifting and a low-pass filter.
if (sound == RollingEvent)
{
    DX::ThrowIfFailed(
        m_soundEffectEngine->CreateSourceVoice(
            &m_soundEffects[sound].m_soundEffectSourceVoice,
            &(soundEffectStream.GetOutputWaveFormatEx()),
            XAUDIO2_VOICE_USEFILTER,
            2.0f,
            &m_voiceContext,
            &sends)
        );
}
else
{
    DX::ThrowIfFailed(
        m_soundEffectEngine->CreateSourceVoice(
            &m_soundEffects[sound].m_soundEffectSourceVoice,
            &(soundEffectStream.GetOutputWaveFormatEx()),
            0,
            1.0f,
            &m_voiceContext,
            &sends)
        );
}

Playing background music


A source voice is created in the stopped state. Marble Maze starts the background music in the game loop. The first
call to MarbleMaze::Update calls Audio::Start to start the background music.
if (!m_audio.m_isAudioStarted)
{
m_audio.Start();
}

The Audio::Start method calls IXAudio2SourceVoice::Start to start to process the source voice for the
background music.

void Audio::Start()
{
    if (m_engineExperiencedCriticalError)
    {
        return;
    }

    HRESULT hr = m_musicSourceVoice->Start(0);

    if (SUCCEEDED(hr))
    {
        m_isAudioStarted = true;
    }
    else
    {
        m_engineExperiencedCriticalError = true;
    }
}

The source voice passes that audio data to the next stage of the audio graph. In the case of Marble Maze, the next
stage contains two submix voices that apply the two reverb effects to the audio. One submix voice applies a close
late-field reverb; the second applies a far late-field reverb. The amount that each submix voice contributes to the
final mix is determined by the size and shape of the room. The near-field reverb contributes more when the ball is
near a wall or in a small room, and the late-field reverb contributes more when the ball is in a large space. This
technique produces a more realistic echo effect as the marble moves through the maze. To learn more about how
Marble Maze implements this effect, see Audio::SetRoomSize and Physics::CalculateCurrentRoomSize in the
Marble Maze source code.
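The cross-fade itself can be as simple as scaling the two submix volumes in opposite directions. The following is a hypothetical sketch of that idea, not the actual Audio::SetRoomSize implementation.

// Sketch: cross-fade the two reverb submixes by normalized room size.
void SetReverbMix(
    float roomSize, float minRoom, float maxRoom,
    IXAudio2SubmixVoice* smallRoomVoice,
    IXAudio2SubmixVoice* largeRoomVoice
    )
{
    // 0.0 in the smallest room, 1.0 in the largest.
    float t = (roomSize - minRoom) / (maxRoom - minRoom);
    if (t < 0.0f) t = 0.0f;
    if (t > 1.0f) t = 1.0f;

    smallRoomVoice->SetVolume(1.0f - t); // near reverb dominates in small rooms
    largeRoomVoice->SetVolume(t);        // far reverb dominates in large rooms
}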

Note In a game in which most room sizes are relatively the same, you can use a more basic reverb model. For
example, you can use one reverb setting for all rooms or you can create a predefined reverb setting for each
room.

The Audio::CreateResources method uses Media Foundation to load the background music. At this point,
however, the source voice does not have audio data to work with. In addition, because the background music
loops, the source voice must be regularly updated with data so that the music continues to play. To keep the
source voice filled with data, the game loop updates the audio buffers every frame. The MarbleMaze::Render
method calls Audio::Render to process the background music audio buffer. The Audio::Render method defines an array
of three audio buffers, m_audioBuffers. Each buffer holds 64 KB (65536 bytes) of data. The loop reads data from
the Media Foundation object and writes that data to the source voice until the source voice has three queued
buffers.

Caution Although Marble Maze uses a 64 KB buffer to hold music data, you may need to use a larger or
smaller buffer. This amount depends on the requirements of your game.
void Audio::Render()
{
    if (m_engineExperiencedCriticalError)
    {
        m_engineExperiencedCriticalError = false;
        ReleaseResources();
        Initialize();
        CreateResources();
        Start();
        if (m_engineExperiencedCriticalError)
        {
            return;
        }
    }

    try
    {
        bool streamComplete;
        XAUDIO2_VOICE_STATE state;
        uint32 bufferLength;
        XAUDIO2_BUFFER buf = {0};

        // Use MediaStreamer to stream the buffers.
        m_musicSourceVoice->GetState(&state);
        while (state.BuffersQueued <= MAX_BUFFER_COUNT - 1)
        {
            streamComplete = m_musicStreamer.GetNextBuffer(
                m_audioBuffers[m_currentBuffer],
                STREAMING_BUFFER_SIZE,
                &bufferLength
                );

            if (bufferLength > 0)
            {
                buf.AudioBytes = bufferLength;
                buf.pAudioData = m_audioBuffers[m_currentBuffer];
                buf.Flags = (streamComplete) ? XAUDIO2_END_OF_STREAM : 0;
                buf.pContext = 0;
                DX::ThrowIfFailed(
                    m_musicSourceVoice->SubmitSourceBuffer(&buf)
                    );

                m_currentBuffer++;
                m_currentBuffer %= MAX_BUFFER_COUNT;
            }

            if (streamComplete)
            {
                // Loop the stream.
                m_musicStreamer.Restart();
                break;
            }

            m_musicSourceVoice->GetState(&state);
        }
    }
    catch (...)
    {
        m_engineExperiencedCriticalError = true;
    }
}

The loop also handles the case in which the Media Foundation object reaches the end of the stream. In this case, it
calls the MediaStreamer::Restart method to reset the position of the audio source.
void MediaStreamer::Restart()
{
    if (m_reader == nullptr)
    {
        return;
    }

    PROPVARIANT var = {0};
    var.vt = VT_I8;

    DX::ThrowIfFailed(
        m_reader->SetCurrentPosition(GUID_NULL, var)
        );
}

To implement audio looping for a single buffer (or for an entire sound that is fully loaded into memory), you can
set the LoopCount field to XAUDIO2_LOOP_INFINITE when you initialize the sound. Marble Maze uses this
technique to play the rolling sound for the marble.

if (sound == RollingEvent)
{
    m_soundEffects[sound].m_audioBuffer.LoopCount = XAUDIO2_LOOP_INFINITE;
}

However, for the background music, Marble Maze manages the buffers directly so that it can better control the
amount of memory that is used. When your music files are large, you can stream the music data into smaller
buffers. Doing so can help balance memory use against how frequently the game must process and stream
audio data.

Tip If your game has a low or varying frame rate, processing audio on the main thread can produce
unexpected pauses or pops in the audio because the audio engine has insufficient buffered audio data to work
with. If your game is sensitive to this issue, consider processing audio on a separate thread that does not
perform rendering. This approach is especially useful on computers that have multiple processors because
your game can use idle processors.
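One way to structure such a thread is sketched below, assuming an Audio object with a Render method like the one shown earlier; Marble Maze itself does not do this.

// Sketch: pump audio on a dedicated thread. Requires <atomic>, <chrono>, <thread>.
std::atomic<bool> audioThreadRunning(true);

std::thread audioThread([&]
{
    while (audioThreadRunning)
    {
        m_audio.Render(); // keep the source voice's buffer queue full
        std::this_thread::sleep_for(std::chrono::milliseconds(10));
    }
});

// At shutdown, stop the loop and wait for the thread to exit.
audioThreadRunning = false;
audioThread.join();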

Reacting to game events


The Audio class provides methods such as PlaySoundEffect, IsSoundEffectStarted, StopSoundEffect,
SetSoundEffectVolume, SetSoundEffectPitch, and SetSoundEffectFilter to enable the game to control when
sounds play and stop, and to control sound properties such as volume and pitch. For example, if the marble falls
off the maze, the MarbleMaze::Update method calls the Audio::PlaySoundEffect method to play the FallingEvent
sound.

m_audio.PlaySoundEffect(FallingEvent);

The Audio::PlaySoundEffect method calls the IXAudio2SourceVoice::Start method to begin playback of the
sound. If the IXAudio2SourceVoice::Start method has already been called, it is not started again.
Audio::PlaySoundEffect then performs custom logic for certain sounds.
void Audio::PlaySoundEffect(SoundEvent sound)
{
XAUDIO2_BUFFER buf = {0};
XAUDIO2_VOICE_STATE state = {0};

if (m_engineExperiencedCriticalError) {
// If there's an error, then we'll recreate the engine on the next
// render pass.
return;
}

SoundEffectData* soundEffect = &m_soundEffects[sound];


HRESULT hr = soundEffect->m_soundEffectSourceVoice->Start();
if (FAILED(hr))
{
m_engineExperiencedCriticalError = true;
return;
}

// For one-off voices, submit a new buffer if there's none queued up,
// and allow up to two collisions to be queued up.
if(sound != RollingEvent)
{
XAUDIO2_VOICE_STATE state = {0};
soundEffect->m_soundEffectSourceVoice->GetState(&state, XAUDIO2_VOICE_NOSAMPLESPLAYED);
if (state.BuffersQueued == 0)
{
soundEffect->m_soundEffectSourceVoice->SubmitSourceBuffer(&soundEffect->m_audioBuffer);
}
else if (state.BuffersQueued < 2 && sound == CollisionEvent)
{
soundEffect->m_soundEffectSourceVoice->SubmitSourceBuffer(&soundEffect->m_audioBuffer);
}

// For the menu clicks, we want to stop the voice and replay the click right away.
// Note that stopping and then flushing could cause a glitch due to the
// waveform not being at a zero-crossing, but due to the nature of the sound
// (fast and 'clicky'), we don't mind.
if (state.BuffersQueued > 0 && sound == MenuChangeEvent)
{
soundEffect->m_soundEffectSourceVoice->Stop();
soundEffect->m_soundEffectSourceVoice->FlushSourceBuffers();
soundEffect->m_soundEffectSourceVoice->SubmitSourceBuffer(&soundEffect->m_audioBuffer);
soundEffect->m_soundEffectSourceVoice->Start();
}
}

m_soundEffects[sound].m_soundEffectStarted = true;
}

For sounds other than rolling, the Audio::PlaySoundEffect method calls IXAudio2SourceVoice::GetState to
determine the number of buffers that the source voice is playing. It calls
IXAudio2SourceVoice::SubmitSourceBuffer to add the audio data for the sound to the voice's input queue if
no buffers are active. The Audio::PlaySoundEffect method also enables the collision sound to be played two
times in sequence. This occurs, for example, when the marble collides with a corner of the maze.
As already described, the Audio class uses the XAUDIO2_LOOP_INFINITE flag when it initializes the sound for the
rolling event. The sound starts looped playback the first time that Audio::PlaySoundEffect is called for this event.
To simplify the playback logic for the rolling sound, Marble Maze mutes the sound instead of stopping it. As the
marble changes velocity, Marble Maze changes the pitch and volume of the sound to give it a more realistic effect.
The following shows how the MarbleMaze::Update method updates the pitch and volume of the marble as its
velocity changes and how it mutes the sound by setting its volume to zero when the marble stops.
// Play the roll sound only if the marble is actually rolling.
if (ci.isRollingOnFloor && volume > 0)
{
if (!m_audio.IsSoundEffectStarted(RollingEvent))
{
m_audio.PlaySoundEffect(RollingEvent);
}

// Update the volume and pitch by the velocity.
m_audio.SetSoundEffectVolume(RollingEvent, volume);
m_audio.SetSoundEffectPitch(RollingEvent, pitch);

// The rolling sound has content up to about 8000 Hz, so we linearly
// ramp up the low-pass filter cutoff the faster we go.
// We also reduce the Q-value of the filter, starting with a
// relatively broad cutoff and getting progressively tighter.
m_audio.SetSoundEffectFilter(
RollingEvent,
600.0f + 8000.0f * volume,
XAUDIO2_MAX_FILTER_ONEOVERQ - volume*volume
);
}
else
{
m_audio.SetSoundEffectVolume(RollingEvent, 0);
}

Reacting to suspend and resume events


The document Marble Maze application structure describes how Marble Maze supports suspend and resume.
When the game is suspended, the game pauses the audio. When the game resumes, the game resumes the audio
where it left off. We do so to follow the best practice of not using resources when you know they're not needed.
The Audio::SuspendAudio method is called when the game is suspended. This method calls the
IXAudio2::StopEngine method to stop all audio. Although IXAudio2::StopEngine stops all audio output
immediately, it preserves the audio graph and its effect parameters (for example, the reverb effect that's applied
when the marble bounces).

// Uses the IXAudio2::StopEngine method to stop all audio immediately.
// It leaves the audio graph untouched, which preserves all effect parameters,
// effect histories (like reverb), voice states, pending buffers, cursor
// positions, and so on. When the engines are restarted, the resulting audio
// sounds as if it had never been stopped, except for the period of silence.
void Audio::SuspendAudio()
{
if (m_engineExperiencedCriticalError)
{
return;
}

if (m_isAudioStarted)
{
m_musicEngine->StopEngine();
m_soundEffectEngine->StopEngine();
}
m_isAudioStarted = false;
}

The Audio::ResumeAudio method is called when the game is resumed. This method uses the
IXAudio2::StartEngine method to restart the audio. Because the call to IXAudio2::StopEngine preserves the
audio graph and its effect parameters, the audio output resumes where it left off.

// Restarts the audio streams. A call to this method must match a previous call
// to SuspendAudio. This method causes audio to continue where it left off.
// If there is a problem with the restart, the m_engineExperiencedCriticalError
// flag is set. The next call to Render will recreate all the resources and
// reset the audio pipeline.
void Audio::ResumeAudio()
{
if (m_engineExperiencedCriticalError)
{
return;
}

HRESULT hr = m_musicEngine->StartEngine();
HRESULT hr2 = m_soundEffectEngine->StartEngine();

if (FAILED(hr) || FAILED(hr2))
{
m_engineExperiencedCriticalError = true;
}
}

Handling headphones and device changes


Marble Maze uses engine callbacks to handle XAudio2 engine failures, such as when the audio device changes. A
likely cause of a device change is when the game user connects or disconnects the headphones. We recommend
that you implement the engine callback that handles device changes. Otherwise, your game will stop playing
sound when the user plugs in or removes headphones, until the game is restarted.
Audio.h defines the AudioEngineCallbacks class. This class implements the IXAudio2EngineCallback interface.

class AudioEngineCallbacks : public IXAudio2EngineCallback
{
private:
Audio* m_audio;

public:
AudioEngineCallbacks(){};
void Initialize(Audio* audio);

// Called by XAudio2 just before an audio processing pass begins.
void _stdcall OnProcessingPassStart(){};

// Called just after an audio processing pass ends.
void _stdcall OnProcessingPassEnd(){};

// Called when a critical system error causes XAudio2
// to be closed and restarted. The error code is given in Error.
void _stdcall OnCriticalError(HRESULT Error);
};

The IXAudio2EngineCallback interface enables your code to be notified when audio processing events occur
and when the engine encounters a critical error. To register for callbacks, Marble Maze calls the
IXAudio2::RegisterForCallbacks method after it creates the IXAudio2 object for the music engine.

m_musicEngineCallback.Initialize(this);
m_musicEngine->RegisterForCallbacks(&m_musicEngineCallback);

Marble Maze does not require notification when audio processing starts or ends. Therefore, it implements the
IXAudio2EngineCallback::OnProcessingPassStart and IXAudio2EngineCallback::OnProcessingPassEnd
methods to do nothing. For the IXAudio2EngineCallback::OnCriticalError method, Marble Maze calls the
SetEngineExperiencedCriticalError method, which sets the m_engineExperiencedCriticalError flag.

// Called when a critical system error causes XAudio2
// to be closed and restarted. The error code is given in Error.
void _stdcall AudioEngineCallbacks::OnCriticalError(HRESULT Error)
{
m_audio->SetEngineExperiencedCriticalError();
}

// This flag can be used to tell when the audio system
// is experiencing critical errors.
// XAudio2 gives a critical error when the user unplugs
// the headphones and a new speaker configuration is generated.
void SetEngineExperiencedCriticalError()
{
m_engineExperiencedCriticalError = true;
}

When a critical error occurs, audio processing stops and all additional calls to XAudio2 fail. To recover from this
situation, you must release the XAudio2 instance and create a new one. The Audio::Render method, which is
called from the game loop every frame, first checks the m_engineExperiencedCriticalError flag. If this flag is set,
it clears the flag, releases the current XAudio2 instance, re-creates the audio resources, and then restarts the
background music.

if (m_engineExperiencedCriticalError)
{
m_engineExperiencedCriticalError = false;
ReleaseResources();
Initialize();
CreateResources();
Start();
if (m_engineExperiencedCriticalError)
{
return;
}
}

Marble Maze also uses the m_engineExperiencedCriticalError flag to guard against calling into XAudio2 when
no audio device is available. For example, the MarbleMaze::Update method does not process audio for rolling or
collision events when this flag is set. The app attempts to repair the audio engine every frame if it is required;
however, the m_engineExperiencedCriticalError flag might always be set if the computer does not have an
audio device or the headphones are unplugged and there is no other available audio device.

Caution As a rule, do not perform blocking operations in the body of an engine callback. Doing so can cause
performance issues. Marble Maze sets a flag in the OnCriticalError callback and later handles the error during
the regular audio processing phase. For more information about XAudio2 callbacks, see XAudio2 Callbacks.

Related topics
Adding input and interactivity to the Marble Maze sample
Developing Marble Maze, a UWP game in C++ and DirectX
Fundamentals of DirectX programming

This section provides information about the basic blocks of DirectX programming.
2D graphics for DirectX games topic explains the use of 2D bitmap graphics and effects, and how to use them in
your game using a combination of elements from both Direct2D and Direct3D libraries.
Direct3D graphics learning guide topic provides an overview of the concepts that Direct3D uses.
Basic 3D graphics for DirectX games topic explains the fundamental concepts of 3D graphics using a five-part
tutorial that introduces the Direct3D concept and API. This tutorial shows you how to initialize Direct3D interfaces
using Windows Runtime, apply per-vertex shader operations, set up the geometry, rasterize the scene, and cull
hidden surfaces.
Load resources in your DirectX game topic guides you through the basic steps for loading meshes (models),
textures (bitmaps), and compiled shader objects from local storage or other data streams for use in your UWP
game.

TOPIC                                 DESCRIPTION

2D graphics for DirectX games         Create 2D graphics using DirectX.

Direct3D graphics learning guide      Understand Direct3D graphics concepts.

Basic 3D graphics for DirectX games   Create basic 3D DirectX graphics.

Load resources in your DirectX game   Load meshes in your DirectX game.

2D graphics for DirectX games

We discuss the use of 2D bitmap graphics and effects, and how to use them in your game.
2D graphics are a subset of 3D graphics that deal with 2D primitives or bitmaps. More generally, they don't use a z-
coordinate in the way a 3D game might, since the game play is usually confined to the x-y plane. They sometimes
use 3D graphics techniques to create their visual components, and they are generally simpler to develop. If you are
new to gaming, a 2D game is a great place to start, and 2D graphics development can be a good place for you to
get a handle on DirectX.
You can develop 2D gaming graphics in DirectX using either Direct2D or Direct3D, or some combination. Many of
the more useful classes for 2D game development are in Direct3D, such as the Sprite class. Direct2D is a set of APIs
that primarily target user interfaces and apps that require support for drawing primitives (such as circles, lines, and
flat polygon shapes). With that in mind, it still provides a powerful and performant set of classes and methods for
creating game graphics as well, especially when creating game overlays, interfaces, and heads-up displays (HUDs) -
- or for creating a variety of 2D games, from simple to reasonably detailed. The most effective approach when
creating 2D games, though, is to use elements from both libraries, and that's the way we will approach 2D graphics
development in this topic.

Concepts at a glance
Before the advent of modern 3D graphics and the hardware that supports it, games were primarily 2D, and many of
their graphics techniques involved moving blocks of memory around -- usually arrays of color data that would be
translated or transformed to pixels on the screen in a 1:1 fashion.
In DirectX, 2D graphics are part of the 3D pipeline. There is a much greater variety of screen resolutions and
graphics hardware available, and your 2D graphics engine must be able to support them without a significant
change in fidelity.
Here are a few of the basic concepts you should be familiar with when starting 2D graphics development.
Pixels and raster coordinates. A pixel is a single point on a raster display, and has its own (x, y) coordinate pair
indicating its location on the display. (The term "pixel" is often used interchangeably between the physical pixels
that comprise the display and the addressable memory elements used to hold the color and alpha values of the
pixels before they are sent to the display.) The raster is treated by APIs as a rectangular grid of pixel elements,
which often has a 1:1 correspondence with the physical pixel grid of a display. Raster coordinate systems start
from the upper left, with the pixel at (0, 0) in the upper leftmost corner of the grid.
Bitmap graphics (sometimes called raster graphics) are graphic elements represented as a rectangular grid of
pixel values. Sprites -- computed pixel arrays managed independently of the raster -- are one type of bitmap
graphic, commonly used for the active characters or background-independent animated objects in a game. The
various frames of animation for a sprite are represented as collections of bitmaps called "sheets" or "batches."
Backgrounds are larger bitmap objects that are the same resolution or greater than that of the screen raster, and
often serve as the backdrop(s) for a game's playfield.
Vector graphics are graphics that use geometric primitives, such as points, lines, circles, and polygons to define
2D objects. They are represented not as arrays of pixels, but as the mathematical equations that define them in a
2D space. They do not necessarily have a 1:1 correspondence with the pixel grid of the display, and must be
transformed from the coordinate system that you rendered them in into the raster coordinate system of the
display.
Translation is when you take a point or vertex and calculate its new location in the same coordinate system.
Scaling is when you enlarge or shrink an object by a specified scale factor. With a vector image, you shrink and
enlarge its component vertices; with a bitmap, you enlarge the pixel elements or diminish them. With bitmap
images, you lose pixel data when the image shrinks, and you enlarge the individual pixels when the image is
scaled closer. For the latter, you can use pixel color interpolation operations, like bilinear filtering, to smooth out
the harsh color boundaries between the enlarged pixels.
Rotation is when you rotate an object about a specified axis or axes. With a vector image, the vertices of the
geometry are multiplied against a rotation matrix to obtain the rotated vertex; with a bitmap image, different
algorithms can be employed, each with a lesser or greater degree of fidelity in the results. As with scaling and
translation, there are APIs specifically for rotation operations.
Transformation is when you take one point or vertex in one coordinate system and calculate its corresponding
point or vertex in another coordinate system. This includes translation, scaling, and rotation, as well as other
coordinate calculation operations. (A short Direct2D sketch of composing these operations follows this list.)
Clipping is when you remove portions of bitmaps or geometry that are not within the viewable area of the
display, or are hidden by objects with higher view priority.
The frame buffer is an area in memory -- often in the memory of the graphics hardware itself -- that contains
the final raster map that you will draw to the screen. The swap chain is a collection of buffers, where you draw in
a back buffer and, when the image is ready, you "swap" it to the front and display it.
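
The following sketch, which is not part of this article's samples, shows how the translation, scaling, and rotation
operations described above compose in code, using the Direct2D Matrix3x2F helper from d2d1helper.h; the target
pointer is assumed to be any valid ID2D1RenderTarget.

#include <d2d1.h>
#include <d2d1helper.h>

// Scale by 2x, rotate 45 degrees about the origin, then translate 100
// pixels right and 50 pixels down. Matrix3x2F composes left to right,
// so the leftmost transform is applied to the geometry first.
void ApplySpriteTransform(ID2D1RenderTarget* target)
{
    D2D1::Matrix3x2F transform =
        D2D1::Matrix3x2F::Scale(2.0f, 2.0f) *
        D2D1::Matrix3x2F::Rotation(45.0f, D2D1::Point2F(0.0f, 0.0f)) *
        D2D1::Matrix3x2F::Translation(100.0f, 50.0f);

    target->SetTransform(transform);
}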

Design considerations
2D graphics development is a great way to get accustomed to developing with Direct3D, and will allow you to
spend more time on other critical aspects of game development: audio, controls, and the game mechanics.
Always draw to a back buffer. Drawing directly to your frame buffer means that your image will be displayed when
the signal for display is received (usually every 1/60th of second), even if your drawing operation hasn't completed!
Design your graphics engine to support a good selection of resolutions, from 1024x600 to 1920x1080 (or higher).
Your audience will thank you if you support their LCD monitor's native resolution, especially with 2D graphics.
Great artwork will be your greatest asset, when it comes to visuals. While your bitmap graphics may lack the punch
of 3D photorealistic visuals using the latest shader model features, great high-resolution artwork can often convey
as much or more style and personality -- and with far less of a performance penalty.

Reference
Direct2D overview
Direct2D quickstart
Direct2D and Direct3D interoperability overview

Note
This article is for Windows 10 developers writing Universal Windows Platform (UWP) apps. If you're developing
for Windows 8.x or Windows Phone 8.x, see the archived documentation.
Basic 3D graphics for DirectX games

We show how to use DirectX programming to implement the fundamental concepts of 3D graphics.
Objective: Learn to program a 3D graphics app.

Prerequisites
We assume that you are familiar with C++. You also need basic experience with graphics programming concepts.
Total time to complete: 30 minutes.

Where to go from here


Here, we talk about how to develop 3D graphics with DirectX and C++/CX. This five-part tutorial introduces you to
the Direct3D API and the concepts and code that are also used in many of the other DirectX samples. These parts
build upon each other, from configuring DirectX for your UWP C++ app to texturing primitives and adding effects.

Note This tutorial uses a right-handed coordinate system with column vectors. Many DirectX samples and
apps use a left-handed coordinate system with row vectors. For a more complete graphics math solution and
one that supports a left-handed coordinate system with row vectors, consider using DirectXMath. For more
info, see Using DirectXMath with Direct3D.

We show you how to:


Initialize Direct3D interfaces by using the Windows Runtime
Apply per-vertex shader operations
Set up the geometry
Rasterize the scene (flattening the 3D scene to a 2D projection)
Cull the hidden surfaces

Note
This article is for Windows 10 developers writing Universal Windows Platform (UWP) apps. If you're
developing for Windows 8.x or Windows Phone 8.x, see the archived documentation.

Next, we create a Direct3D device, swap chain, and render-target view, and present the rendered image to the
display.
Quickstart: setting up DirectX resources and displaying an image

Related topics
Direct3D 11 Graphics
DXGI
HLSL
Set up DirectX resources and display an image

Here, we show you how to create a Direct3D device, swap chain, and render-target view, and how to present the
rendered image to the display.
Objective: To set up DirectX resources in a C++ Universal Windows Platform (UWP) app and to display a solid
color.

Prerequisites
We assume that you are familiar with C++. You also need basic experience with graphics programming concepts.
Time to complete: 20 minutes.

Instructions
1. Declaring Direct3D interface variables with ComPtr
We declare Direct3D interface variables with the ComPtr smart pointer template from the Windows Runtime C++
Template Library (WRL) so that we can manage the lifetime of those variables in an exception-safe manner. We can
then use those variables to access the ComPtr class and its members. For example:

ComPtr<ID3D11RenderTargetView> m_renderTargetView;
m_d3dDeviceContext->OMSetRenderTargets(
1,
m_renderTargetView.GetAddressOf(),
nullptr // Use no depth stencil.
);

If you declare ID3D11RenderTargetView with ComPtr, you can then use ComPtr's GetAddressOf method to get
the address of the pointer to ID3D11RenderTargetView (ID3D11RenderTargetView**) to pass to
ID3D11DeviceContext::OMSetRenderTargets. OMSetRenderTargets binds the render target to the output-
merger stage to specify the render target as the output target.
After the sample app is started, it initializes and loads, and is then ready to run.
2. Creating the Direct3D device
To use the Direct3D API to render a scene, we must first create a Direct3D device that represents the display
adapter. To create the Direct3D device, we call the D3D11CreateDevice function. We specify levels 9.1 through
11.1 in the array of D3D_FEATURE_LEVEL values. Direct3D walks the array in order and returns the highest
supported feature level. So, to get the highest feature level available, we list the D3D_FEATURE_LEVEL array
entries from highest to lowest. We pass the D3D11_CREATE_DEVICE_BGRA_SUPPORT flag to the Flags
parameter to make Direct3D resources interoperate with Direct2D. If we use the debug build, we also pass the
D3D11_CREATE_DEVICE_DEBUG flag. For more info about debugging apps, see Using the debug layer to debug
apps.
We obtain the Direct3D 11.1 device (ID3D11Device1) and device context (ID3D11DeviceContext1) by querying
the Direct3D 11 device and device context that are returned from D3D11CreateDevice.
// First, create the Direct3D device.

// This flag is required in order to enable compatibility with Direct2D.


UINT creationFlags = D3D11_CREATE_DEVICE_BGRA_SUPPORT;

#if defined(_DEBUG)
// If the project is in a debug build, enable debugging via SDK Layers with this flag.
creationFlags |= D3D11_CREATE_DEVICE_DEBUG;
#endif

// This array defines the ordering of feature levels that D3D should attempt to create.
D3D_FEATURE_LEVEL featureLevels[] =
{
D3D_FEATURE_LEVEL_11_1,
D3D_FEATURE_LEVEL_11_0,
D3D_FEATURE_LEVEL_10_1,
D3D_FEATURE_LEVEL_10_0,
D3D_FEATURE_LEVEL_9_3,
D3D_FEATURE_LEVEL_9_1
};

ComPtr<ID3D11Device> d3dDevice;
ComPtr<ID3D11DeviceContext> d3dDeviceContext;
DX::ThrowIfFailed(
D3D11CreateDevice(
nullptr, // Specify nullptr to use the default adapter.
D3D_DRIVER_TYPE_HARDWARE,
nullptr, // leave as nullptr if hardware is used
creationFlags, // optionally set debug and Direct2D compatibility flags
featureLevels,
ARRAYSIZE(featureLevels),
D3D11_SDK_VERSION, // always set this to D3D11_SDK_VERSION
&d3dDevice,
nullptr,
&d3dDeviceContext
)
);

// Retrieve the Direct3D 11.1 interfaces.


DX::ThrowIfFailed(
d3dDevice.As(&m_d3dDevice)
);

DX::ThrowIfFailed(
d3dDeviceContext.As(&m_d3dDeviceContext)
);

3. Creating the swap chain


Next, we create a swap chain that the device uses for rendering and display. We declare and initialize a
DXGI_SWAP_CHAIN_DESC1 structure to describe the swap chain. Then, we set up the swap chain as flip-model
(that is, a swap chain that has the DXGI_SWAP_EFFECT_FLIP_SEQUENTIAL value set in the SwapEffect member)
and set the Format member to DXGI_FORMAT_B8G8R8A8_UNORM. We set the Count member of the
DXGI_SAMPLE_DESC structure that the SampleDesc member specifies to 1 and the Quality member of
DXGI_SAMPLE_DESC to zero because flip-model doesn't support multiple sample antialiasing (MSAA). We set the
BufferCount member to 2 so the swap chain can use a front buffer to present to the display device and a back
buffer that serves as the render target.
We obtain the underlying DXGI device by querying the Direct3D 11.1 device. To minimize power consumption,
which is important to do on battery-powered devices such as laptops and tablets, we call the
IDXGIDevice1::SetMaximumFrameLatency method with 1 as the maximum number of back buffer frames that
DXGI can queue. This ensures that the app is rendered only after the vertical blank.
To finally create the swap chain, we need to get the parent factory from the DXGI device. We call
IDXGIDevice::GetAdapter to get the adapter for the device, and then call IDXGIObject::GetParent on the
adapter to get the parent factory (IDXGIFactory2). To create the swap chain, we call
IDXGIFactory2::CreateSwapChainForCoreWindow with the swap-chain descriptor and the app's core window.
// If the swap chain does not exist, create it.
DXGI_SWAP_CHAIN_DESC1 swapChainDesc = {0};

swapChainDesc.Stereo = false;
swapChainDesc.BufferUsage = DXGI_USAGE_RENDER_TARGET_OUTPUT;
swapChainDesc.Scaling = DXGI_SCALING_NONE;
swapChainDesc.Flags = 0;

// Use automatic sizing.


swapChainDesc.Width = 0;
swapChainDesc.Height = 0;

// This is the most common swap chain format.


swapChainDesc.Format = DXGI_FORMAT_B8G8R8A8_UNORM;

// Don't use multi-sampling.


swapChainDesc.SampleDesc.Count = 1;
swapChainDesc.SampleDesc.Quality = 0;

// Use two buffers to enable the flip effect.


swapChainDesc.BufferCount = 2;

// We recommend using this swap effect for all applications.


swapChainDesc.SwapEffect = DXGI_SWAP_EFFECT_FLIP_SEQUENTIAL;

// Once the swap chain description is configured, it must be
// created on the same adapter as the existing D3D Device.

// First, retrieve the underlying DXGI Device from the D3D Device.
ComPtr<IDXGIDevice2> dxgiDevice;
DX::ThrowIfFailed(
m_d3dDevice.As(&dxgiDevice)
);

// Ensure that DXGI does not queue more than one frame at a time. This both reduces
// latency and ensures that the application will only render after each VSync, minimizing
// power consumption.
DX::ThrowIfFailed(
dxgiDevice->SetMaximumFrameLatency(1)
);

// Next, get the parent factory from the DXGI Device.


ComPtr<IDXGIAdapter> dxgiAdapter;
DX::ThrowIfFailed(
dxgiDevice->GetAdapter(&dxgiAdapter)
);

ComPtr<IDXGIFactory2> dxgiFactory;
DX::ThrowIfFailed(
dxgiAdapter->GetParent(IID_PPV_ARGS(&dxgiFactory))
);

// Finally, create the swap chain.


CoreWindow^ window = m_window.Get();
DX::ThrowIfFailed(
dxgiFactory->CreateSwapChainForCoreWindow(
m_d3dDevice.Get(),
reinterpret_cast<IUnknown*>(window),
&swapChainDesc,
nullptr, // Allow on all displays.
&m_swapChain
)
);

4. Creating the render-target view


To render graphics to the window, we need to create a render-target view. We call IDXGISwapChain::GetBuffer
to get the swap chains back buffer to use when we create the render-target view. We specify the back buffer as a
2D texture (ID3D11Texture2D). To create the render-target view, we call
ID3D11Device::CreateRenderTargetView with the swap chains back buffer. We must specify to draw to the
entire core window by specifying the view port (D3D11_VIEWPORT) as the full size of the swap chain's back
buffer. We use the view port in a call to ID3D11DeviceContext::RSSetViewports to bind the view port to the
rasterizer stage of the pipeline. The rasterizer stage converts vector information into a raster image. In this case, we
don't require a conversion because we are just displaying a solid color.

// Once the swap chain is created, create a render target view. This will
// allow Direct3D to render graphics to the window.

ComPtr<ID3D11Texture2D> backBuffer;
DX::ThrowIfFailed(
m_swapChain->GetBuffer(0, IID_PPV_ARGS(&backBuffer))
);

DX::ThrowIfFailed(
m_d3dDevice->CreateRenderTargetView(
backBuffer.Get(),
nullptr,
&m_renderTargetView
)
);

// After the render target view is created, specify that the viewport,
// which describes what portion of the window to draw to, should cover
// the entire window.

D3D11_TEXTURE2D_DESC backBufferDesc = {0};


backBuffer->GetDesc(&backBufferDesc);

D3D11_VIEWPORT viewport;
viewport.TopLeftX = 0.0f;
viewport.TopLeftY = 0.0f;
viewport.Width = static_cast<float>(backBufferDesc.Width);
viewport.Height = static_cast<float>(backBufferDesc.Height);
viewport.MinDepth = D3D11_MIN_DEPTH;
viewport.MaxDepth = D3D11_MAX_DEPTH;

m_d3dDeviceContext->RSSetViewports(1, &viewport);

5. Presenting the rendered image


We enter an endless loop to continually render and display the scene.
In this loop, we call:
1. ID3D11DeviceContext::OMSetRenderTargets to specify the render target as the output target.
2. ID3D11DeviceContext::ClearRenderTargetView to clear the render target to a solid color.
3. IDXGISwapChain::Present to present the rendered image to the window.
Because we previously set the maximum frame latency to 1, Windows generally throttles the render loop to the
screen refresh rate, typically around 60 Hz. It does this by making the app sleep when the app calls Present, until
the screen is refreshed.
// Enter the render loop. Note that Windows Store apps should never exit.
while (true)
{
// Process events incoming to the window.
m_window->Dispatcher->ProcessEvents(CoreProcessEventsOption::ProcessAllIfPresent);

// Specify the render target we created as the output target.


m_d3dDeviceContext->OMSetRenderTargets(
1,
m_renderTargetView.GetAddressOf(),
nullptr // Use no depth stencil.
);

// Clear the render target to a solid color.


const float clearColor[4] = { 0.071f, 0.04f, 0.561f, 1.0f };
m_d3dDeviceContext->ClearRenderTargetView(
m_renderTargetView.Get(),
clearColor
);

// Present the rendered image to the window. Because the maximum frame latency is set to 1,
// the render loop will generally be throttled to the screen refresh rate, typically around
// 60 Hz, by sleeping the application on Present until the screen is refreshed.
DX::ThrowIfFailed(
m_swapChain->Present(1, 0)
);
}

6. Resizing the app window and the swap chain's buffers


If the size of the app window changes, the app must resize the swap chain's buffers, recreate the render-target
view, and then present the resized rendered image. To resize the swap chain's buffers, we call
IDXGISwapChain::ResizeBuffers. In this call, we leave the number of buffers and the format of the buffers
unchanged (we keep the BufferCount parameter at 2 and the NewFormat parameter at
DXGI_FORMAT_B8G8R8A8_UNORM). We make the swap chain's back buffer the same size as the resized
window. After we resize the swap chain's buffers, we create the new render target and present the new
rendered image as we did when we initialized the app.

// If the swap chain already exists, resize it.


DX::ThrowIfFailed(
m_swapChain->ResizeBuffers(
2,
0,
0,
DXGI_FORMAT_B8G8R8A8_UNORM,
0
)
);
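
One detail the snippet above takes for granted: IDXGISwapChain::ResizeBuffers fails with
DXGI_ERROR_INVALID_CALL if any references to the swap chain's back buffers are still outstanding, so release the
old render-target view before you resize. A minimal sketch of that step:

// Release all references to the swap chain's buffers before resizing.
m_d3dDeviceContext->OMSetRenderTargets(0, nullptr, nullptr);
m_renderTargetView = nullptr;
m_d3dDeviceContext->Flush();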

Summary and next steps


We created a Direct3D device, swap chain, and render-target view, and presented the rendered image to the
display.
Next, we also draw a triangle on the display.
Creating shaders and drawing primitives
Create shaders and drawing primitives

Here, we show you how to use HLSL source files to compile and create shaders that you can then use to draw
primitives on the display.
We create and draw a yellow triangle by using vertex and pixel shaders. After we create the Direct3D device, the
swap chain, and the render-target view, we read data from binary shader object files on the disk.
Objective: To create shaders and to draw primitives.

Prerequisites
We assume that you are familiar with C++. You also need basic experience with graphics programming concepts.
We also assume that you went through Quickstart: setting up DirectX resources and displaying an image.
Time to complete: 20 minutes.

Instructions
1. Compiling HLSL source files
Microsoft Visual Studio uses the fxc.exe HLSL code compiler to compile the .hlsl source files
(SimpleVertexShader.hlsl and SimplePixelShader.hlsl) into .cso binary shader object files (SimpleVertexShader.cso
and SimplePixelShader.cso). For more info about the HLSL code compiler, see Effect-Compiler Tool. For more info
about compiling shader code, see Compiling Shaders.
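
If you ever need to compile a shader outside of the Visual Studio build, an fxc command line of roughly the
following shape does the same job (illustrative only; vs_4_0_level_9_1 is one reasonable target profile for apps
that support feature level 9.1):

fxc /T vs_4_0_level_9_1 /E SimpleVertexShader /Fo SimpleVertexShader.cso SimpleVertexShader.hlsl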
Here is the code in SimpleVertexShader.hlsl:

struct VertexShaderInput
{
float2 pos : POSITION;
};

struct PixelShaderInput
{
float4 pos : SV_POSITION;
};

PixelShaderInput SimpleVertexShader(VertexShaderInput input)
{
PixelShaderInput vertexShaderOutput;

// For this lesson, set the vertex depth value to 0.5, so it is guaranteed to be drawn.
vertexShaderOutput.pos = float4(input.pos, 0.5f, 1.0f);

return vertexShaderOutput;
}

Here is the code in SimplePixelShader.hlsl:


struct PixelShaderInput
{
float4 pos : SV_POSITION;
};

float4 SimplePixelShader(PixelShaderInput input) : SV_TARGET
{
// Draw the entire triangle yellow.
return float4(1.0f, 1.0f, 0.0f, 1.0f);
}

2. Reading data from disk


We use the DX::ReadDataAsync function from DirectXHelper.h in the DirectX 11 App (Universal Windows) template
to asynchronously read data from a file on the disk.
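For reference, here is a hedged sketch of what such a helper can look like; the version that ships in
DirectXHelper.h may differ in details. It reads a file from the app's installed package asynchronously and
completes with the raw bytes.

#include <ppltasks.h>
#include <vector>

// Sketch of a ReadDataAsync-style helper built on the Windows Runtime
// storage APIs and the Parallel Patterns Library (PPL).
inline Concurrency::task<std::vector<byte>> ReadDataAsync(Platform::String^ filename)
{
    using namespace Windows::Storage;
    using namespace Concurrency;

    auto folder = Windows::ApplicationModel::Package::Current->InstalledLocation;

    return create_task(folder->GetFileAsync(filename)).then([](StorageFile^ file)
    {
        return FileIO::ReadBufferAsync(file);
    }).then([](Streams::IBuffer^ fileBuffer) -> std::vector<byte>
    {
        std::vector<byte> returnBuffer(fileBuffer->Length);
        Streams::DataReader::FromBuffer(fileBuffer)->ReadBytes(
            Platform::ArrayReference<byte>(returnBuffer.data(), fileBuffer->Length));
        return returnBuffer;
    });
}
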
3. Creating vertex and pixel shaders
We read data from the SimpleVertexShader.cso file and assign the data to the vertexShaderBytecode byte array.
We call ID3D11Device::CreateVertexShader with the byte array to create the vertex shader
(ID3D11VertexShader). We set the vertex depth value to 0.5 in the SimpleVertexShader.hlsl source to guarantee
that our triangle is drawn. We populate an array of D3D11_INPUT_ELEMENT_DESC structures to describe the
layout of the vertex shader code and then call ID3D11Device::CreateInputLayout to create the layout. The array
has one layout element that defines the vertex position. We read data from the SimplePixelShader.cso file and
assign the data to the pixelShaderBytecode byte array. We call ID3D11Device::CreatePixelShader with the byte
array to create the pixel shader (ID3D11PixelShader). We set the pixel color to (1, 1, 0, 1) in the
SimplePixelShader.hlsl source to make our triangle yellow. You can change the color by changing this value.
We create vertex and index buffers that define a simple triangle. To do this, we first define the triangle, next
describe the vertex and index buffers (D3D11_BUFFER_DESC and D3D11_SUBRESOURCE_DATA) using the
triangle definition, and finally call ID3D11Device::CreateBuffer once for each buffer.

auto loadVSTask = DX::ReadDataAsync(L"SimpleVertexShader.cso");


auto loadPSTask = DX::ReadDataAsync(L"SimplePixelShader.cso");

// Load the raw vertex shader bytecode from disk and create a vertex shader with it.
auto createVSTask = loadVSTask.then([this](const std::vector<byte>& vertexShaderBytecode) {

ComPtr<ID3D11VertexShader> vertexShader;
DX::ThrowIfFailed(
m_d3dDevice->CreateVertexShader(
vertexShaderBytecode.data(),
vertexShaderBytecode.size(),
nullptr,
&vertexShader
)
);

// Create an input layout that matches the layout defined in the vertex shader code.
// For this lesson, this is simply a DirectX::XMFLOAT2 vector defining the vertex position.
const D3D11_INPUT_ELEMENT_DESC basicVertexLayoutDesc[] =
{
{ "POSITION", 0, DXGI_FORMAT_R32G32_FLOAT, 0, 0, D3D11_INPUT_PER_VERTEX_DATA, 0 },
};

ComPtr<ID3D11InputLayout> inputLayout;
DX::ThrowIfFailed(
m_d3dDevice->CreateInputLayout(
basicVertexLayoutDesc,
ARRAYSIZE(basicVertexLayoutDesc),
vertexShaderBytecode.data(),
vertexShaderBytecode.size(),
&inputLayout
)
);
});

// Load the raw pixel shader bytecode from disk and create a pixel shader with it.
auto createPSTask = loadPSTask.then([this](const std::vector<byte>& pixelShaderBytecode) {
ComPtr<ID3D11PixelShader> pixelShader;
DX::ThrowIfFailed(
m_d3dDevice->CreatePixelShader(
pixelShaderBytecode.data(),
pixelShaderBytecode.size(),
nullptr,
&pixelShader
)
);
});

// Create vertex and index buffers that define a simple triangle.


auto createTriangleTask = (createPSTask && createVSTask).then([this] () {

DirectX::XMFLOAT2 triangleVertices[] =
{
DirectX::XMFLOAT2(-0.5f, -0.5f),
DirectX::XMFLOAT2( 0.0f,  0.5f),
DirectX::XMFLOAT2( 0.5f, -0.5f),
};

unsigned short triangleIndices[] =


{
0, 1, 2,
};

D3D11_BUFFER_DESC vertexBufferDesc = {0};
vertexBufferDesc.ByteWidth = sizeof(DirectX::XMFLOAT2) * ARRAYSIZE(triangleVertices);
vertexBufferDesc.Usage = D3D11_USAGE_DEFAULT;
vertexBufferDesc.BindFlags = D3D11_BIND_VERTEX_BUFFER;
vertexBufferDesc.CPUAccessFlags = 0;
vertexBufferDesc.MiscFlags = 0;
vertexBufferDesc.StructureByteStride = 0;

D3D11_SUBRESOURCE_DATA vertexBufferData;
vertexBufferData.pSysMem = triangleVertices;
vertexBufferData.SysMemPitch = 0;
vertexBufferData.SysMemSlicePitch = 0;

ComPtr<ID3D11Buffer> vertexBuffer;
DX::ThrowIfFailed(
m_d3dDevice->CreateBuffer(
&vertexBufferDesc,
&vertexBufferData,
&vertexBuffer
)
);

D3D11_BUFFER_DESC indexBufferDesc;
indexBufferDesc.ByteWidth = sizeof(unsigned short) * ARRAYSIZE(triangleIndices);
indexBufferDesc.Usage = D3D11_USAGE_DEFAULT;
indexBufferDesc.BindFlags = D3D11_BIND_INDEX_BUFFER;
indexBufferDesc.CPUAccessFlags = 0;
indexBufferDesc.MiscFlags = 0;
indexBufferDesc.StructureByteStride = 0;

D3D11_SUBRESOURCE_DATA indexBufferData;
indexBufferData.pSysMem = triangleIndices;
indexBufferData.SysMemPitch = 0;
indexBufferData.SysMemSlicePitch = 0;

ComPtr<ID3D11Buffer> indexBuffer;
DX::ThrowIfFailed(
m_d3dDevice->CreateBuffer(
&indexBufferDesc,
&indexBufferData,
&indexBuffer
)
);
});

We use the vertex and pixel shaders, the vertex shader layout, and the vertex and index buffers to draw a yellow
triangle.
4. Drawing the triangle and presenting the rendered image
We enter an endless loop to continually render and display the scene. We call
ID3D11DeviceContext::OMSetRenderTargets to specify the render target as the output target. We call
ID3D11DeviceContext::ClearRenderTargetView with { 0.071f, 0.04f, 0.561f, 1.0f } to clear the render target to a
solid blue color.
In the endless loop, we draw a yellow triangle on the blue surface.
To draw a yellow triangle
1. First, we call ID3D11DeviceContext::IASetInputLayout to describe how vertex buffer data is streamed into
the input-assembler stage.
2. Next, we call ID3D11DeviceContext::IASetVertexBuffers and ID3D11DeviceContext::IASetIndexBuffer to
bind the vertex and index buffers to the input-assembler stage.
3. Next, we call ID3D11DeviceContext::IASetPrimitiveTopology with the
D3D11_PRIMITIVE_TOPOLOGY_TRIANGLELIST value to specify for the input-assembler stage to interpret
the vertex data as a list of triangles.
4. Next, we call ID3D11DeviceContext::VSSetShader to initialize the vertex shader stage with the vertex shader
code and ID3D11DeviceContext::PSSetShader to initialize the pixel shader stage with the pixel shader code.
5. Finally, we call ID3D11DeviceContext::DrawIndexed to draw the triangle and submit it to the rendering
pipeline.
We call IDXGISwapChain::Present to present the rendered image to the window.
// Specify the render target we created as the output target.
m_d3dDeviceContext->OMSetRenderTargets(
1,
m_renderTargetView.GetAddressOf(),
nullptr // Use no depth stencil.
);

// Clear the render target to a solid color.


const float clearColor[4] = { 0.071f, 0.04f, 0.561f, 1.0f };
m_d3dDeviceContext->ClearRenderTargetView(
m_renderTargetView.Get(),
clearColor
);

m_d3dDeviceContext->IASetInputLayout(inputLayout.Get());

// Set the vertex and index buffers, and specify the way they define geometry.
UINT stride = sizeof(DirectX::XMFLOAT2);
UINT offset = 0;
m_d3dDeviceContext->IASetVertexBuffers(
0,
1,
vertexBuffer.GetAddressOf(),
&stride,
&offset
);

m_d3dDeviceContext->IASetIndexBuffer(
indexBuffer.Get(),
DXGI_FORMAT_R16_UINT,
0
);

m_d3dDeviceContext->IASetPrimitiveTopology(D3D11_PRIMITIVE_TOPOLOGY_TRIANGLELIST);

// Set the vertex and pixel shader stage state.


m_d3dDeviceContext->VSSetShader(
vertexShader.Get(),
nullptr,
0
);

m_d3dDeviceContext->PSSetShader(
pixelShader.Get(),
nullptr,
0
);

// Draw the triangle.


m_d3dDeviceContext->DrawIndexed(
ARRAYSIZE(triangleIndices),
0,
0
);

// Present the rendered image to the window. Because the maximum frame latency is set to 1,
// the render loop will generally be throttled to the screen refresh rate, typically around
// 60 Hz, by sleeping the application on Present until the screen is refreshed.
DX::ThrowIfFailed(
m_swapChain->Present(1, 0)
);

Summary and next steps


We created and drew a yellow triangle by using vertex and pixel shaders.
Next, we create an orbiting 3D cube and apply lighting effects to it.
Using depth and effects on primitives
Use depth and effects on primitives

Here, we show you how to use depth, perspective, color, and other effects on primitives.
Objective: To create a 3D object and apply basic vertex lighting and coloring to it.

Prerequisites
We assume that you are familiar with C++. You also need basic experience with graphics programming concepts.
We also assume that you went through Quickstart: setting up DirectX resources and displaying an image and
Creating shaders and drawing primitives.
Time to complete: 20 minutes.

Instructions
1. Defining cube variables
First, we need to define the SimpleCubeVertex and ConstantBuffer structures for the cube. These structures
specify the vertex positions and colors for the cube and how the cube will be viewed. We declare
ID3D11DepthStencilView and ID3D11Buffer with ComPtr and declare an instance of ConstantBuffer.

struct SimpleCubeVertex
{
DirectX::XMFLOAT3 pos; // Position
DirectX::XMFLOAT3 color; // Color
};

struct ConstantBuffer
{
DirectX::XMFLOAT4X4 model;
DirectX::XMFLOAT4X4 view;
DirectX::XMFLOAT4X4 projection;
};

// This class defines the application as a whole.


ref class Direct3DTutorialFrameworkView : public IFrameworkView
{
private:
Platform::Agile<CoreWindow> m_window;
ComPtr<IDXGISwapChain1> m_swapChain;
ComPtr<ID3D11Device1> m_d3dDevice;
ComPtr<ID3D11DeviceContext1> m_d3dDeviceContext;
ComPtr<ID3D11RenderTargetView> m_renderTargetView;
ComPtr<ID3D11DepthStencilView> m_depthStencilView;
ComPtr<ID3D11Buffer> m_constantBuffer;
ConstantBuffer m_constantBufferData;

2. Creating a depth stencil view


In addition to creating the render-target view, we also create a depth-stencil view. The depth-stencil view enables
Direct3D to efficiently render objects closer to the camera in front of objects further from the camera. Before we
can create a view to a depth-stencil buffer, we must create the depth-stencil buffer. We populate a
D3D11_TEXTURE2D_DESC to describe the depth-stencil buffer and then call ID3D11Device::CreateTexture2D
to create the depth-stencil buffer. To create the depth-stencil view, we populate a
D3D11_DEPTH_STENCIL_VIEW_DESC to describe the depth-stencil view and pass the depth-stencil view
description and the depth-stencil buffer to ID3D11Device::CreateDepthStencilView.

// Once the render target view is created, create a depth stencil view. This
// allows Direct3D to efficiently render objects closer to the camera in front
// of objects further from the camera.

D3D11_TEXTURE2D_DESC backBufferDesc = {0};


backBuffer->GetDesc(&backBufferDesc);

D3D11_TEXTURE2D_DESC depthStencilDesc;
depthStencilDesc.Width = backBufferDesc.Width;
depthStencilDesc.Height = backBufferDesc.Height;
depthStencilDesc.MipLevels = 1;
depthStencilDesc.ArraySize = 1;
depthStencilDesc.Format = DXGI_FORMAT_D24_UNORM_S8_UINT;
depthStencilDesc.SampleDesc.Count = 1;
depthStencilDesc.SampleDesc.Quality = 0;
depthStencilDesc.Usage = D3D11_USAGE_DEFAULT;
depthStencilDesc.BindFlags = D3D11_BIND_DEPTH_STENCIL;
depthStencilDesc.CPUAccessFlags = 0;
depthStencilDesc.MiscFlags = 0;
ComPtr<ID3D11Texture2D> depthStencil;
DX::ThrowIfFailed(
m_d3dDevice->CreateTexture2D(
&depthStencilDesc,
nullptr,
&depthStencil
)
);

D3D11_DEPTH_STENCIL_VIEW_DESC depthStencilViewDesc;
depthStencilViewDesc.Format = depthStencilDesc.Format;
depthStencilViewDesc.ViewDimension = D3D11_DSV_DIMENSION_TEXTURE2D;
depthStencilViewDesc.Flags = 0;
depthStencilViewDesc.Texture2D.MipSlice = 0;
DX::ThrowIfFailed(
m_d3dDevice->CreateDepthStencilView(
depthStencil.Get(),
&depthStencilViewDesc,
&m_depthStencilView
)
);

3. Updating perspective with the window


We update the perspective projection parameters for the constant buffer depending on the window dimensions.
We fix the parameters to a 70-degree field of view with a depth range of 0.01 to 100.
// Finally, update the constant buffer perspective projection parameters
// to account for the size of the application window. In this sample,
// the parameters are fixed to a 70-degree field of view, with a depth
// range of 0.01 to 100. For a generalized camera class, see Lesson 5.

float xScale = 1.42814801f;
float yScale = 1.42814801f;
if (backBufferDesc.Width > backBufferDesc.Height)
{
xScale = yScale *
static_cast<float>(backBufferDesc.Height) /
static_cast<float>(backBufferDesc.Width);
}
else
{
yScale = xScale *
static_cast<float>(backBufferDesc.Width) /
static_cast<float>(backBufferDesc.Height);
}

m_constantBufferData.projection = DirectX::XMFLOAT4X4(
xScale, 0.0f, 0.0f, 0.0f,
0.0f, yScale, 0.0f, 0.0f,
0.0f, 0.0f, -1.0f, -0.01f,
0.0f, 0.0f, -1.0f, 0.0f
);
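
For reference, the constant 1.42814801 is the cotangent of half the field of view: 1 / tan(70° / 2) = 1 / tan(35°)
≈ 1.42814801. If you change the field of view, recompute the scale factors the same way.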

4. Creating vertex and pixel shaders with color elements


In this app, we create more complex vertex and pixel shaders than what we described in the previous tutorial,
Creating shaders and drawing primitives. The app's vertex shader transforms each vertex position into projection
space and passes the vertex color through to the pixel shader.
The app's array of D3D11_INPUT_ELEMENT_DESC structures that describe the layout of the vertex shader code
has two layout elements: one element defines the vertex position and the other element defines the color.
We create vertex, index, and constant buffers to define an orbiting cube.
To define an orbiting cube
1. First, we define the cube. We assign each vertex a color in addition to a position. This allows the pixel shader to
color each face differently so that the faces can be distinguished.
2. Next, we describe the vertex and index buffers (D3D11_BUFFER_DESC and D3D11_SUBRESOURCE_DATA)
using the cube definition. We call ID3D11Device::CreateBuffer once for each buffer.
3. Next, we create a constant buffer (D3D11_BUFFER_DESC) for passing model, view, and projection matrices to
the vertex shader. We can later use the constant buffer to rotate the cube and apply a perspective projection to
it. We call ID3D11Device::CreateBuffer to create the constant buffer.
4. Next, we specify the view transform that corresponds to a camera position of X = 0, Y = 1, Z = 2.
5. Finally, we declare a degree variable that we will use to animate the cube by rotating it every frame.

auto loadVSTask = DX::ReadDataAsync(L"SimpleVertexShader.cso");


auto loadPSTask = DX::ReadDataAsync(L"SimplePixelShader.cso");

auto createVSTask = loadVSTask.then([this](const std::vector<byte>& vertexShaderBytecode) {


ComPtr<ID3D11VertexShader> vertexShader;
DX::ThrowIfFailed(
m_d3dDevice->CreateVertexShader(
vertexShaderBytecode.data(),
vertexShaderBytecode.size(),
nullptr,
&vertexShader
)
);

// Create an input layout that matches the layout defined in the vertex shader code.
// For this lesson, this is simply a DirectX::XMFLOAT3 vector defining the vertex position, and
// a DirectX::XMFLOAT3 vector defining the vertex color.
const D3D11_INPUT_ELEMENT_DESC basicVertexLayoutDesc[] =
{
{ "POSITION", 0, DXGI_FORMAT_R32G32B32_FLOAT, 0, 0, D3D11_INPUT_PER_VERTEX_DATA, 0 },
{ "COLOR", 0, DXGI_FORMAT_R32G32B32_FLOAT, 0, 12, D3D11_INPUT_PER_VERTEX_DATA, 0 },
};

ComPtr<ID3D11InputLayout> inputLayout;
DX::ThrowIfFailed(
m_d3dDevice->CreateInputLayout(
basicVertexLayoutDesc,
ARRAYSIZE(basicVertexLayoutDesc),
vertexShaderBytecode.data(),
vertexShaderBytecode.size(),
&inputLayout
)
);
});

// Load the raw pixel shader bytecode from disk and create a pixel shader with it.
auto createPSTask = loadPSTask.then([this](const std::vector<byte>& pixelShaderBytecode) {
ComPtr<ID3D11PixelShader> pixelShader;
DX::ThrowIfFailed(
m_d3dDevice->CreatePixelShader(
pixelShaderBytecode.data(),
pixelShaderBytecode.size(),
nullptr,
&pixelShader
)
);
});

// Create vertex and index buffers that define a simple unit cube.
auto createCubeTask = (createPSTask && createVSTask).then([this] () {

// In the array below, which will be used to initialize the cube vertex buffers,
// each vertex is assigned a color in addition to a position. This will allow
// the pixel shader to color each face differently, enabling them to be distinguished.
SimpleCubeVertex cubeVertices[] =
{
{ DirectX::XMFLOAT3(-0.5f,  0.5f, -0.5f), DirectX::XMFLOAT3(0.0f, 1.0f, 0.0f) }, // +Y (top face)
{ DirectX::XMFLOAT3( 0.5f,  0.5f, -0.5f), DirectX::XMFLOAT3(1.0f, 1.0f, 0.0f) },
{ DirectX::XMFLOAT3( 0.5f,  0.5f,  0.5f), DirectX::XMFLOAT3(1.0f, 1.0f, 1.0f) },
{ DirectX::XMFLOAT3(-0.5f,  0.5f,  0.5f), DirectX::XMFLOAT3(0.0f, 1.0f, 1.0f) },

{ DirectX::XMFLOAT3(-0.5f, -0.5f,  0.5f), DirectX::XMFLOAT3(0.0f, 0.0f, 1.0f) }, // -Y (bottom face)
{ DirectX::XMFLOAT3( 0.5f, -0.5f,  0.5f), DirectX::XMFLOAT3(1.0f, 0.0f, 1.0f) },
{ DirectX::XMFLOAT3( 0.5f, -0.5f, -0.5f), DirectX::XMFLOAT3(1.0f, 0.0f, 0.0f) },
{ DirectX::XMFLOAT3(-0.5f, -0.5f, -0.5f), DirectX::XMFLOAT3(0.0f, 0.0f, 0.0f) },
};

unsigned short cubeIndices[] =
{
0, 1, 2,
0, 2, 3,

4, 5, 6,
4, 6, 7,

3, 2, 5,
3, 5, 4,

2, 1, 6,
2, 6, 5,

1, 7, 6,
1, 0, 7,

0, 3, 4,
0, 4, 7
};

D3D11_BUFFER_DESC vertexBufferDesc = {0};


vertexBufferDesc.ByteWidth = sizeof(SimpleCubeVertex) * ARRAYSIZE(cubeVertices);
vertexBufferDesc.Usage = D3D11_USAGE_DEFAULT;
vertexBufferDesc.BindFlags = D3D11_BIND_VERTEX_BUFFER;
vertexBufferDesc.CPUAccessFlags = 0;
vertexBufferDesc.MiscFlags = 0;
vertexBufferDesc.StructureByteStride = 0;

D3D11_SUBRESOURCE_DATA vertexBufferData;
vertexBufferData.pSysMem = cubeVertices;
vertexBufferData.SysMemPitch = 0;
vertexBufferData.SysMemSlicePitch = 0;

ComPtr<ID3D11Buffer> vertexBuffer;
DX::ThrowIfFailed(
m_d3dDevice->CreateBuffer(
&vertexBufferDesc,
&vertexBufferData,
&vertexBuffer
)
);

D3D11_BUFFER_DESC indexBufferDesc;
indexBufferDesc.ByteWidth = sizeof(unsigned short) * ARRAYSIZE(cubeIndices);
indexBufferDesc.Usage = D3D11_USAGE_DEFAULT;
indexBufferDesc.BindFlags = D3D11_BIND_INDEX_BUFFER;
indexBufferDesc.CPUAccessFlags = 0;
indexBufferDesc.MiscFlags = 0;
indexBufferDesc.StructureByteStride = 0;

D3D11_SUBRESOURCE_DATA indexBufferData;
indexBufferData.pSysMem = cubeIndices;
indexBufferData.SysMemPitch = 0;
indexBufferData.SysMemSlicePitch = 0;

ComPtr<ID3D11Buffer> indexBuffer;
DX::ThrowIfFailed(
m_d3dDevice->CreateBuffer(
&indexBufferDesc,
&indexBufferData,
&indexBuffer
)
);

// Create a constant buffer for passing model, view, and projection matrices
// to the vertex shader. This will allow us to rotate the cube and apply
// a perspective projection to it.

D3D11_BUFFER_DESC constantBufferDesc = {0};


constantBufferDesc.ByteWidth = sizeof(m_constantBufferData);
constantBufferDesc.Usage = D3D11_USAGE_DEFAULT;
constantBufferDesc.BindFlags = D3D11_BIND_CONSTANT_BUFFER;
constantBufferDesc.CPUAccessFlags = 0;
constantBufferDesc.MiscFlags = 0;
constantBufferDesc.StructureByteStride = 0;
DX::ThrowIfFailed(
m_d3dDevice->CreateBuffer(
&constantBufferDesc,
nullptr,
&m_constantBuffer
)
);

// Specify the view transform corresponding to a camera position of


// X = 0, Y = 1, Z = 2. For a generalized camera class, see Lesson 5.

m_constantBufferData.view = DirectX::XMFLOAT4X4(
-1.00000000f, 0.00000000f, 0.00000000f, 0.00000000f,
0.00000000f, 0.89442718f, 0.44721359f, 0.00000000f,
0.00000000f, 0.44721359f, -0.89442718f, -2.23606800f,
0.00000000f, 0.00000000f, 0.00000000f, 1.00000000f
);

});

// This value will be used to animate the cube by rotating it every frame.
float degree = 0.0f;

5. Rotating and drawing the cube and presenting the rendered image
We enter an endless loop to continually render and display the scene. We call the rotationY inline function
(BasicMath.h) with a rotation amount to set values that will rotate the cube's model matrix around the Y axis. We
then call ID3D11DeviceContext::UpdateSubresource to update the constant buffer and rotate the cube model.
We call ID3D11DeviceContext::OMSetRenderTargets to specify the render target as the output target. In this
OMSetRenderTargets call, we pass the depth-stencil view. We call
ID3D11DeviceContext::ClearRenderTargetView to clear the render target to a solid blue color and call
ID3D11DeviceContext::ClearDepthStencilView to clear the depth buffer.
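
BasicMath.h is not reproduced in this article. As a hedged sketch, a rotationY helper that returns the
DirectX::XMFLOAT4X4 this tutorial's constant buffer expects can look like the following; the shipping version may
differ in storage order or sign conventions.

#include <DirectXMath.h>
#include <cmath>

// Build a rotation about the Y axis from an angle given in degrees.
inline DirectX::XMFLOAT4X4 rotationY(float degrees)
{
    const float radians = degrees * 3.14159265f / 180.0f;
    const float s = sinf(radians);
    const float c = cosf(radians);

    return DirectX::XMFLOAT4X4(
         c,    0.0f, s,    0.0f,
         0.0f, 1.0f, 0.0f, 0.0f,
        -s,    0.0f, c,    0.0f,
         0.0f, 0.0f, 0.0f, 1.0f);
}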
In the endless loop, we also draw the cube on the blue surface.
To draw the cube
1. First, we call ID3D11DeviceContext::IASetInputLayout to describe how vertex buffer data is streamed into
the input-assembler stage.
2. Next, we call ID3D11DeviceContext::IASetVertexBuffers and ID3D11DeviceContext::IASetIndexBuffer to
bind the vertex and index buffers to the input-assembler stage.
3. Next, we call ID3D11DeviceContext::IASetPrimitiveTopology with the
D3D11_PRIMITIVE_TOPOLOGY_TRIANGLELIST value to specify for the input-assembler stage to interpret
the vertex data as a list of triangles.
4. Next, we call ID3D11DeviceContext::VSSetShader to initialize the vertex shader stage with the vertex shader
code and ID3D11DeviceContext::PSSetShader to initialize the pixel shader stage with the pixel shader code.
5. Next, we call ID3D11DeviceContext::VSSetConstantBuffers to set the constant buffer that is used by the
vertex shader pipeline stage.
6. Finally, we call ID3D11DeviceContext::DrawIndexed to draw the cube and submit it to the rendering
pipeline.
We call IDXGISwapChain::Present to present the rendered image to the window.

// Update the constant buffer to rotate the cube model.


m_constantBufferData.model = rotationY(-degree);
degree += 1.0f;

m_d3dDeviceContext->UpdateSubresource(
m_constantBuffer.Get(),
0,
nullptr,
&m_constantBufferData,
0,
0
);

// Specify the render target and depth stencil we created as the output target.
m_d3dDeviceContext->OMSetRenderTargets(
1,
m_renderTargetView.GetAddressOf(),
m_depthStencilView.Get()
);

// Clear the render target to a solid color, and reset the depth stencil.
const float clearColor[4] = { 0.071f, 0.04f, 0.561f, 1.0f };
m_d3dDeviceContext->ClearRenderTargetView(
m_renderTargetView.Get(),
clearColor
);

m_d3dDeviceContext->ClearDepthStencilView(
m_depthStencilView.Get(),
D3D11_CLEAR_DEPTH,
1.0f,
0
);

m_d3dDeviceContext->IASetInputLayout(inputLayout.Get());

// Set the vertex and index buffers, and specify the way they define geometry.
UINT stride = sizeof(SimpleCubeVertex);
UINT offset = 0;
m_d3dDeviceContext->IASetVertexBuffers(
0,
1,
vertexBuffer.GetAddressOf(),
&stride,
&offset
);

m_d3dDeviceContext->IASetIndexBuffer(
indexBuffer.Get(),
DXGI_FORMAT_R16_UINT,
0
);

m_d3dDeviceContext->IASetPrimitiveTopology(D3D11_PRIMITIVE_TOPOLOGY_TRIANGLELIST);

// Set the vertex and pixel shader stage state.


m_d3dDeviceContext->VSSetShader(
vertexShader.Get(),
nullptr,
0
);

m_d3dDeviceContext->VSSetConstantBuffers(
0,
1,
m_constantBuffer.GetAddressOf()
);

m_d3dDeviceContext->PSSetShader(
pixelShader.Get(),
nullptr,
0
);

// Draw the cube.


m_d3dDeviceContext->DrawIndexed(
ARRAYSIZE(cubeIndices),
0,
0
);

// Present the rendered image to the window. Because the maximum frame latency is set to 1,
// the render loop will generally be throttled to the screen refresh rate, typically around
// 60 Hz, by sleeping the application on Present until the screen is refreshed.
DX::ThrowIfFailed(
m_swapChain->Present(1, 0)
);

Summary and next steps


We used depth, perspective, color, and other effects on primitives.
Next, we apply textures to primitives.
Applying textures to primitives
Apply textures to primitives

Here, we load raw texture data and apply that data to a 3D primitive by using the cube that we created in Using
depth and effects on primitives. We also introduce a simple dot-product lighting model, where the cube surfaces
are lighter or darker based on their distance and angle relative to a light source.
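In such a dot-product (Lambertian diffuse) model, the lit intensity of a surface is proportional to max(0, N · L),
where N is the unit surface normal and L is the unit direction from the surface toward the light; faces angled away
from the light darken smoothly to black.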
Objective: To apply textures to primitives.

Prerequisites
We assume that you are familiar with C++. You also need basic experience with graphics programming concepts.
We also assume that you went through Quickstart: setting up DirectX resources and displaying an image, Creating
shaders and drawing primitives, and Using depth and effects on primitives.
Time to complete: 20 minutes.

Instructions
1. Defining variables for a textured cube
First, we need to define the BasicVertex and ConstantBuffer structures for the textured cube. These structures
specify the vertex positions, orientations, and textures for the cube and how the cube will be viewed. Otherwise, we
declare variables similarly to the previous tutorial, Using depth and effects on primitives.

struct BasicVertex
{
DirectX::XMFLOAT3 pos; // Position
DirectX::XMFLOAT3 norm; // Surface normal vector
DirectX::XMFLOAT2 tex; // Texture coordinate
};

struct ConstantBuffer
{
DirectX::XMFLOAT4X4 model;
DirectX::XMFLOAT4X4 view;
DirectX::XMFLOAT4X4 projection;
};

// This class defines the application as a whole.
ref class Direct3DTutorialFrameworkView : public IFrameworkView
{
private:
Platform::Agile<CoreWindow> m_window;
ComPtr<IDXGISwapChain1> m_swapChain;
ComPtr<ID3D11Device1> m_d3dDevice;
ComPtr<ID3D11DeviceContext1> m_d3dDeviceContext;
ComPtr<ID3D11RenderTargetView> m_renderTargetView;
ComPtr<ID3D11DepthStencilView> m_depthStencilView;
ComPtr<ID3D11Buffer> m_constantBuffer;
ConstantBuffer m_constantBufferData;

2. Creating vertex and pixel shaders with surface and texture elements
Here, we create more complex vertex and pixel shaders than in the previous tutorial, Using depth and effects on
primitives. This app's vertex shader transforms each vertex position into projection space and passes the vertex
texture coordinate through to the pixel shader.
The app's array of D3D11_INPUT_ELEMENT_DESC structures that describe the layout of the vertex shader code
has three layout elements: one element defines the vertex position, another element defines the surface normal
vector (the direction the surface faces), and the third element defines the texture coordinates.
We create vertex, index, and constant buffers that define an orbiting textured cube.
To define an orbiting textured cube
1. First, we define the cube. Each vertex is assigned a position, a surface normal vector, and texture coordinates. We
use multiple vertices for each corner to allow different normal vectors and texture coordinates to be defined for
each face.
2. Next, we describe the vertex and index buffers (D3D11_BUFFER_DESC and D3D11_SUBRESOURCE_DATA)
using the cube definition. We call ID3D11Device::CreateBuffer once for each buffer.
3. Next, we create a constant buffer (D3D11_BUFFER_DESC) for passing model, view, and projection matrices to
the vertex shader. We can later use the constant buffer to rotate the cube and apply a perspective projection to
it. We call ID3D11Device::CreateBuffer to create the constant buffer.
4. Finally, we specify the view transform that corresponds to a camera position of X = 0, Y = 1, Z = 2.

auto loadVSTask = DX::ReadDataAsync(L"SimpleVertexShader.cso");

auto loadPSTask = DX::ReadDataAsync(L"SimplePixelShader.cso");

// Load the raw vertex shader bytecode from disk and create a vertex shader with it.
auto createVSTask = loadVSTask.then([this](const Platform::Array<byte>^ vertexShaderBytecode) {

ComPtr<ID3D11VertexShader> vertexShader;
DX::ThrowIfFailed(
m_d3dDevice->CreateVertexShader(
vertexShaderBytecode->Data,
vertexShaderBytecode->Length,
nullptr,
&vertexShader
)
);

// Create an input layout that matches the layout defined in the vertex shader code.
// These correspond to the elements of the BasicVertex struct defined above.
const D3D11_INPUT_ELEMENT_DESC basicVertexLayoutDesc[] =
{
{ "POSITION", 0, DXGI_FORMAT_R32G32B32_FLOAT, 0, 0, D3D11_INPUT_PER_VERTEX_DATA, 0 },
{ "NORMAL", 0, DXGI_FORMAT_R32G32B32_FLOAT, 0, 12, D3D11_INPUT_PER_VERTEX_DATA, 0 },
{ "TEXCOORD", 0, DXGI_FORMAT_R32G32_FLOAT, 0, 24, D3D11_INPUT_PER_VERTEX_DATA, 0 },
};

ComPtr<ID3D11InputLayout> inputLayout;
DX::ThrowIfFailed(
m_d3dDevice->CreateInputLayout(
basicVertexLayoutDesc,
ARRAYSIZE(basicVertexLayoutDesc),
vertexShaderBytecode->Data,
vertexShaderBytecode->Length,
&inputLayout
)
);
});

// Load the raw pixel shader bytecode from disk and create a pixel shader with it.
auto createPSTask = loadPSTask.then([this](const Platform::Array<byte>^ pixelShaderBytecode) {

ComPtr<ID3D11PixelShader> pixelShader;
DX::ThrowIfFailed(
m_d3dDevice->CreatePixelShader(
pixelShaderBytecode->Data,
pixelShaderBytecode->Length,
nullptr,
&pixelShader
)
);
});

// Create vertex and index buffers that define a simple unit cube.
auto createCubeTask = (createPSTask && createVSTask).then([this] () {

// In the array below, which will be used to initialize the cube vertex buffers,
// multiple vertices are used for each corner to allow different normal vectors and
// texture coordinates to be defined for each face.
BasicVertex cubeVertices[] =
{
{ DirectX::XMFLOAT3(-0.5f, 0.5f, -0.5f), DirectX::XMFLOAT3(0.0f, 1.0f, 0.0f), DirectX::XMFLOAT2(0.0f, 0.0f) }, // +Y (top face)
{ DirectX::XMFLOAT3( 0.5f, 0.5f, -0.5f), DirectX::XMFLOAT3(0.0f, 1.0f, 0.0f), DirectX::XMFLOAT2(1.0f, 0.0f) },
{ DirectX::XMFLOAT3( 0.5f, 0.5f, 0.5f), DirectX::XMFLOAT3(0.0f, 1.0f, 0.0f), DirectX::XMFLOAT2(1.0f, 1.0f) },
{ DirectX::XMFLOAT3(-0.5f, 0.5f, 0.5f), DirectX::XMFLOAT3(0.0f, 1.0f, 0.0f), DirectX::XMFLOAT2(0.0f, 1.0f) },

{ DirectX::XMFLOAT3(-0.5f, -0.5f, 0.5f), DirectX::XMFLOAT3(0.0f, -1.0f, 0.0f), DirectX::XMFLOAT2(0.0f, 0.0f) }, // -Y (bottom face)
{ DirectX::XMFLOAT3( 0.5f, -0.5f, 0.5f), DirectX::XMFLOAT3(0.0f, -1.0f, 0.0f), DirectX::XMFLOAT2(1.0f, 0.0f) },
{ DirectX::XMFLOAT3( 0.5f, -0.5f, -0.5f), DirectX::XMFLOAT3(0.0f, -1.0f, 0.0f), DirectX::XMFLOAT2(1.0f, 1.0f) },
{ DirectX::XMFLOAT3(-0.5f, -0.5f, -0.5f), DirectX::XMFLOAT3(0.0f, -1.0f, 0.0f), DirectX::XMFLOAT2(0.0f, 1.0f) },

{ DirectX::XMFLOAT3(0.5f, 0.5f, 0.5f), DirectX::XMFLOAT3(1.0f, 0.0f, 0.0f), DirectX::XMFLOAT2(0.0f, 0.0f) }, // +X (right face)
{ DirectX::XMFLOAT3(0.5f, 0.5f, -0.5f), DirectX::XMFLOAT3(1.0f, 0.0f, 0.0f), DirectX::XMFLOAT2(1.0f, 0.0f) },
{ DirectX::XMFLOAT3(0.5f, -0.5f, -0.5f), DirectX::XMFLOAT3(1.0f, 0.0f, 0.0f), DirectX::XMFLOAT2(1.0f, 1.0f) },
{ DirectX::XMFLOAT3(0.5f, -0.5f, 0.5f), DirectX::XMFLOAT3(1.0f, 0.0f, 0.0f), DirectX::XMFLOAT2(0.0f, 1.0f) },

{ DirectX::XMFLOAT3(-0.5f, 0.5f, -0.5f), DirectX::XMFLOAT3(-1.0f, 0.0f, 0.0f), DirectX::XMFLOAT2(0.0f, 0.0f) }, // -X (left face)
{ DirectX::XMFLOAT3(-0.5f, 0.5f, 0.5f), DirectX::XMFLOAT3(-1.0f, 0.0f, 0.0f), DirectX::XMFLOAT2(1.0f, 0.0f) },
{ DirectX::XMFLOAT3(-0.5f, -0.5f, 0.5f), DirectX::XMFLOAT3(-1.0f, 0.0f, 0.0f), DirectX::XMFLOAT2(1.0f, 1.0f) },
{ DirectX::XMFLOAT3(-0.5f, -0.5f, -0.5f), DirectX::XMFLOAT3(-1.0f, 0.0f, 0.0f), DirectX::XMFLOAT2(0.0f, 1.0f) },

{ DirectX::XMFLOAT3(-0.5f, 0.5f, 0.5f), DirectX::XMFLOAT3(0.0f, 0.0f, 1.0f), DirectX::XMFLOAT2(0.0f, 0.0f) }, // +Z (front face)
{ DirectX::XMFLOAT3( 0.5f, 0.5f, 0.5f), DirectX::XMFLOAT3(0.0f, 0.0f, 1.0f), DirectX::XMFLOAT2(1.0f, 0.0f) },
{ DirectX::XMFLOAT3( 0.5f, -0.5f, 0.5f), DirectX::XMFLOAT3(0.0f, 0.0f, 1.0f), DirectX::XMFLOAT2(1.0f, 1.0f) },
{ DirectX::XMFLOAT3(-0.5f, -0.5f, 0.5f), DirectX::XMFLOAT3(0.0f, 0.0f, 1.0f), DirectX::XMFLOAT2(0.0f, 1.0f) },

{ DirectX::XMFLOAT3( 0.5f, 0.5f, -0.5f), DirectX::XMFLOAT3(0.0f, 0.0f, -1.0f), DirectX::XMFLOAT2(0.0f, 0.0f) }, // -Z (back face)
{ DirectX::XMFLOAT3(-0.5f, 0.5f, -0.5f), DirectX::XMFLOAT3(0.0f, 0.0f, -1.0f), DirectX::XMFLOAT2(1.0f, 0.0f) },
{ DirectX::XMFLOAT3(-0.5f, -0.5f, -0.5f), DirectX::XMFLOAT3(0.0f, 0.0f, -1.0f), DirectX::XMFLOAT2(1.0f, 1.0f) },
{ DirectX::XMFLOAT3( 0.5f, -0.5f, -0.5f), DirectX::XMFLOAT3(0.0f, 0.0f, -1.0f), DirectX::XMFLOAT2(0.0f, 1.0f) },
};

unsigned short cubeIndices[] =
{
0, 1, 2,
0, 2, 3,

4, 5, 6,
4, 6, 7,

8, 9, 10,
8, 10, 11,

12, 13, 14,
12, 14, 15,

16, 17, 18,
16, 18, 19,

20, 21, 22,
20, 22, 23
};

D3D11_BUFFER_DESC vertexBufferDesc = {0};
vertexBufferDesc.ByteWidth = sizeof(BasicVertex) * ARRAYSIZE(cubeVertices);
vertexBufferDesc.Usage = D3D11_USAGE_DEFAULT;
vertexBufferDesc.BindFlags = D3D11_BIND_VERTEX_BUFFER;
vertexBufferDesc.CPUAccessFlags = 0;
vertexBufferDesc.MiscFlags = 0;
vertexBufferDesc.StructureByteStride = 0;

D3D11_SUBRESOURCE_DATA vertexBufferData;
vertexBufferData.pSysMem = cubeVertices;
vertexBufferData.SysMemPitch = 0;
vertexBufferData.SysMemSlicePitch = 0;

ComPtr<ID3D11Buffer> vertexBuffer;
DX::ThrowIfFailed(
m_d3dDevice->CreateBuffer(
&vertexBufferDesc,
&vertexBufferData,
&vertexBuffer
)
);

D3D11_BUFFER_DESC indexBufferDesc;
indexBufferDesc.ByteWidth = sizeof(unsigned short) * ARRAYSIZE(cubeIndices);
indexBufferDesc.Usage = D3D11_USAGE_DEFAULT;
indexBufferDesc.BindFlags = D3D11_BIND_INDEX_BUFFER;
indexBufferDesc.CPUAccessFlags = 0;
indexBufferDesc.MiscFlags = 0;
indexBufferDesc.StructureByteStride = 0;

D3D11_SUBRESOURCE_DATA indexBufferData;
indexBufferData.pSysMem = cubeIndices;
indexBufferData.SysMemPitch = 0;
indexBufferData.SysMemSlicePitch = 0;

ComPtr<ID3D11Buffer> indexBuffer;
DX::ThrowIfFailed(
m_d3dDevice->CreateBuffer(
&indexBufferDesc,
&indexBufferData,
&indexBuffer
)
);

// Create a constant buffer for passing model, view, and projection matrices
// to the vertex shader. This will allow us to rotate the cube and apply
// a perspective projection to it.
D3D11_BUFFER_DESC constantBufferDesc = {0};
constantBufferDesc.ByteWidth = sizeof(m_constantBufferData);
constantBufferDesc.Usage = D3D11_USAGE_DEFAULT;
constantBufferDesc.BindFlags = D3D11_BIND_CONSTANT_BUFFER;
constantBufferDesc.CPUAccessFlags = 0;
constantBufferDesc.MiscFlags = 0;
constantBufferDesc.StructureByteStride = 0;
DX::ThrowIfFailed(
m_d3dDevice->CreateBuffer(
&constantBufferDesc,
nullptr,
&m_constantBuffer
)
);

// Specify the view transform corresponding to a camera position of
// X = 0, Y = 1, Z = 2. For a generalized camera class, see Lesson 5.
m_constantBufferData.view = DirectX::XMFLOAT4X4(
-1.00000000f, 0.00000000f, 0.00000000f, 0.00000000f,
0.00000000f, 0.89442718f, 0.44721359f, 0.00000000f,
0.00000000f, 0.44721359f, -0.89442718f, -2.23606800f,
0.00000000f, 0.00000000f, 0.00000000f, 1.00000000f
);
});

3. Creating textures and samplers


Here, we apply texture data to a cube rather than applying colors as in the previous tutorial, Using depth and effects
on primitives.
We use raw texture data to create textures.
To create textures and samplers
1. First, we read raw texture data from the texturedata.bin file on disk.
2. Next, we construct a D3D11_SUBRESOURCE_DATA structure that references that raw texture data.
3. Then, we populate a D3D11_TEXTURE2D_DESC structure to describe the texture. We then pass the
D3D11_SUBRESOURCE_DATA and D3D11_TEXTURE2D_DESC structures in a call to
ID3D11Device::CreateTexture2D to create the texture.
4. Next, we create a shader-resource view of the texture so shaders can use the texture. To create the shader-
resource view, we populate a D3D11_SHADER_RESOURCE_VIEW_DESC to describe the shader-resource view
and pass the shader-resource view description and the texture to
ID3D11Device::CreateShaderResourceView. In general, you match the view description with the texture
description.
5. Next, we create sampler state for the texture. This sampler state uses the relevant texture data to define how the
color for a particular texture coordinate is determined. We populate a D3D11_SAMPLER_DESC structure to
describe the sampler state. We then pass the D3D11_SAMPLER_DESC structure in a call to
ID3D11Device::CreateSamplerState to create the sampler state.
6. Finally, we declare a degree variable that we will use to animate the cube by rotating it every frame.

// Load the raw texture data from disk and construct a subresource description that references it.
auto loadTDTask = DX::ReadDataAsync(L"texturedata.bin");

auto constructSubresourceTask = loadTDTask.then([this](const Platform::Array<byte>^ textureData) {

D3D11_SUBRESOURCE_DATA textureSubresourceData = {0};
textureSubresourceData.pSysMem = textureData->Data;

// Specify the size of a row in bytes, known a priori about the texture data.
textureSubresourceData.SysMemPitch = 1024;

// As this is not a texture array or 3D texture, this parameter is ignored.
textureSubresourceData.SysMemSlicePitch = 0;

// Create a texture description from information known a priori about the data.
// Generalized texture loading code can be found in the Resource Loading sample.
D3D11_TEXTURE2D_DESC textureDesc = {0};
textureDesc.Width = 256;
textureDesc.Height = 256;
textureDesc.Format = DXGI_FORMAT_R8G8B8A8_UNORM;
textureDesc.Usage = D3D11_USAGE_DEFAULT;
textureDesc.CPUAccessFlags = 0;
textureDesc.MiscFlags = 0;

// Most textures contain more than one MIP level. For simplicity, this sample uses only one.
textureDesc.MipLevels = 1;

// As this will not be a texture array, this parameter is ignored.
textureDesc.ArraySize = 1;

// Don't use multi-sampling.
textureDesc.SampleDesc.Count = 1;
textureDesc.SampleDesc.Quality = 0;

// Allow the texture to be bound as a shader resource.
textureDesc.BindFlags = D3D11_BIND_SHADER_RESOURCE;

ComPtr<ID3D11Texture2D> texture;
DX::ThrowIfFailed(
m_d3dDevice->CreateTexture2D(
&textureDesc,
&textureSubresourceData,
&texture
)
);

// Once the texture is created, we must create a shader resource view of it
// so that shaders may use it. In general, the view description will match
// the texture description.
D3D11_SHADER_RESOURCE_VIEW_DESC textureViewDesc;
ZeroMemory(&textureViewDesc, sizeof(textureViewDesc));
textureViewDesc.Format = textureDesc.Format;
textureViewDesc.ViewDimension = D3D11_SRV_DIMENSION_TEXTURE2D;
textureViewDesc.Texture2D.MipLevels = textureDesc.MipLevels;
textureViewDesc.Texture2D.MostDetailedMip = 0;

ComPtr<ID3D11ShaderResourceView> textureView;
DX::ThrowIfFailed(
m_d3dDevice->CreateShaderResourceView(
texture.Get(),
&textureViewDesc,
&textureView
)
);

// Once the texture view is created, create a sampler. This defines how the color
// for a particular texture coordinate is determined using the relevant texture data.
D3D11_SAMPLER_DESC samplerDesc;
ZeroMemory(&samplerDesc, sizeof(samplerDesc));

samplerDesc.Filter = D3D11_FILTER_MIN_MAG_MIP_LINEAR;

// The sampler does not use anisotropic filtering, so this parameter is ignored.
samplerDesc.MaxAnisotropy = 0;

// Specify how texture coordinates outside of the range 0..1 are resolved.
samplerDesc.AddressU = D3D11_TEXTURE_ADDRESS_WRAP;
samplerDesc.AddressV = D3D11_TEXTURE_ADDRESS_WRAP;
samplerDesc.AddressW = D3D11_TEXTURE_ADDRESS_WRAP;

// Use no special MIP clamping or bias.
samplerDesc.MipLODBias = 0.0f;
samplerDesc.MinLOD = 0;
samplerDesc.MaxLOD = D3D11_FLOAT32_MAX;

// Don't use a comparison function.
samplerDesc.ComparisonFunc = D3D11_COMPARISON_NEVER;

// Border address mode is not used, so this parameter is ignored.
samplerDesc.BorderColor[0] = 0.0f;
samplerDesc.BorderColor[1] = 0.0f;
samplerDesc.BorderColor[2] = 0.0f;
samplerDesc.BorderColor[3] = 0.0f;

ComPtr<ID3D11SamplerState> sampler;
DX::ThrowIfFailed(
m_d3dDevice->CreateSamplerState(
&samplerDesc,
&sampler
)
);
});

// This value will be used to animate the cube by rotating it every frame.
float degree = 0.0f;

4. Rotating and drawing the textured cube and presenting the rendered image
As in the previous tutorials, we enter an endless loop to continually render and display the scene. We call
DirectX::XMMatrixRotationY (DirectXMath.h) with a rotation amount to set values that rotate the cube's model
matrix around the Y axis. We then call ID3D11DeviceContext::UpdateSubresource to update the constant buffer
and rotate the cube model. Next, we call ID3D11DeviceContext::OMSetRenderTargets to specify the render
target and the depth-stencil view. We call ID3D11DeviceContext::ClearRenderTargetView to clear the render
target to a solid blue color and call ID3D11DeviceContext::ClearDepthStencilView to clear the depth buffer.
In the endless loop, we also draw the textured cube on the blue surface.
To draw the textured cube
1. First, we call ID3D11DeviceContext::IASetInputLayout to describe how vertex buffer data is streamed into
the input-assembler stage.
2. Next, we call ID3D11DeviceContext::IASetVertexBuffers and ID3D11DeviceContext::IASetIndexBuffer to
bind the vertex and index buffers to the input-assembler stage.
3. Next, we call ID3D11DeviceContext::IASetPrimitiveTopology with the
D3D11_PRIMITIVE_TOPOLOGY_TRIANGLELIST value to specify that the input-assembler stage interpret
the vertex data as a triangle list.
4. Next, we call ID3D11DeviceContext::VSSetShader to initialize the vertex shader stage with the vertex shader
code and ID3D11DeviceContext::PSSetShader to initialize the pixel shader stage with the pixel shader code.
5. Next, we call ID3D11DeviceContext::VSSetConstantBuffers to set the constant buffer that is used by the
vertex shader pipeline stage.
6. Next, we call PSSetShaderResources to bind the shader-resource view of the texture to the pixel shader
pipeline stage.
7. Next, we call PSSetSamplers to set the sampler state to the pixel shader pipeline stage.
8. Finally, we call ID3D11DeviceContext::DrawIndexed to draw the cube and submit it to the rendering pipeline.
As in the previous tutorials, we call IDXGISwapChain::Present to present the rendered image to the window.

// Update the constant buffer to rotate the cube model. Convert the degree
// counter to radians, and store the XMMATRIX result into the XMFLOAT4X4 member.
DirectX::XMStoreFloat4x4(
&m_constantBufferData.model,
DirectX::XMMatrixRotationY(DirectX::XMConvertToRadians(-degree))
);
degree += 1.0f;

m_d3dDeviceContext->UpdateSubresource(
m_constantBuffer.Get(),
0,
nullptr,
&m_constantBufferData,
0,
0
);

// Specify the render target and depth stencil we created as the output target.
m_d3dDeviceContext->OMSetRenderTargets(
1,
m_renderTargetView.GetAddressOf(),
m_depthStencilView.Get()
);
// Clear the render target to a solid color, and reset the depth stencil.
const float clearColor[4] = { 0.071f, 0.04f, 0.561f, 1.0f };
m_d3dDeviceContext->ClearRenderTargetView(
m_renderTargetView.Get(),
clearColor
);

m_d3dDeviceContext->ClearDepthStencilView(
m_depthStencilView.Get(),
D3D11_CLEAR_DEPTH,
1.0f,
0
);

m_d3dDeviceContext->IASetInputLayout(inputLayout.Get());

// Set the vertex and index buffers, and specify the way they define geometry.
UINT stride = sizeof(BasicVertex);
UINT offset = 0;
m_d3dDeviceContext->IASetVertexBuffers(
0,
1,
vertexBuffer.GetAddressOf(),
&stride,
&offset
);

m_d3dDeviceContext->IASetIndexBuffer(
indexBuffer.Get(),
DXGI_FORMAT_R16_UINT,
0
);

m_d3dDeviceContext->IASetPrimitiveTopology(D3D11_PRIMITIVE_TOPOLOGY_TRIANGLELIST);

// Set the vertex and pixel shader stage state.
m_d3dDeviceContext->VSSetShader(
vertexShader.Get(),
nullptr,
0
);

m_d3dDeviceContext->VSSetConstantBuffers(
0,
1,
m_constantBuffer.GetAddressOf()
);

m_d3dDeviceContext->PSSetShader(
pixelShader.Get(),
nullptr,
0
);

m_d3dDeviceContext->PSSetShaderResources(
0,
1,
textureView.GetAddressOf()
);

m_d3dDeviceContext->PSSetSamplers(
0,
1,
sampler.GetAddressOf()
);

// Draw the cube.
m_d3dDeviceContext->DrawIndexed(
ARRAYSIZE(cubeIndices),
0,
0
);

// Present the rendered image to the window. Because the maximum frame latency is set to 1,
// the render loop will generally be throttled to the screen refresh rate, typically around
// 60 Hz, by sleeping the application on Present until the screen is refreshed.
DX::ThrowIfFailed(
m_swapChain->Present(1, 0)
);

Summary
We loaded raw texture data and applied that data to a 3D primitive.
Create and display a basic mesh

[ Updated for UWP apps on Windows 10. For Windows 8.x articles, see the archive ]
3-D Universal Windows Platform (UWP) games typically use polygons to represent objects and surfaces in the
game. The lists of vertices that comprise the structure of these polygonal objects and surfaces are called meshes.
Here, we create a basic mesh for a cube object and provide it to the shader pipeline for rendering and display.

Important The example code included here uses types (such as DirectX::XMFLOAT3 and DirectX::XMFLOAT4X4)
and inline methods declared in DirectXMath.h. If you're cutting and pasting this code, #include <DirectXMath.h>
in your project.

What you need to know


Technologies
Direct3D
Prerequisites
Basic knowledge of linear algebra and 3-D coordinate systems
A Visual Studio 2015 Direct3D template

Instructions
Step 1: Construct the mesh for the model
In most games, the mesh for a game object is loaded from a file that contains the specific vertex data. The ordering
of these vertices is app-dependent, but they are usually serialized as strips or fans. Vertex data can come from any
software source, or it can be created manually. It's up to your game to interpret the data in a way that the vertex
shader can effectively process it.
In our example, we use a simple mesh for a cube. The cube, like any object mesh at this stage in the pipeline, is
represented using its own coordinate system. The vertex shader takes its coordinates and, by applying the
transformation matrices you provide, returns the final 2-D view projection in a homogeneous coordinate system.
Define the mesh for a cube. (Or load it from a file. It's your call!)

SimpleCubeVertex cubeVertices[] =
{
{ DirectX::XMFLOAT3(-0.5f, 0.5f, -0.5f), DirectX::XMFLOAT3(0.0f, 1.0f, 0.0f) }, // +Y (top face)
{ DirectX::XMFLOAT3( 0.5f, 0.5f, -0.5f), DirectX::XMFLOAT3(1.0f, 1.0f, 0.0f) },
{ DirectX::XMFLOAT3( 0.5f, 0.5f, 0.5f), DirectX::XMFLOAT3(1.0f, 1.0f, 1.0f) },
{ DirectX::XMFLOAT3(-0.5f, 0.5f, 0.5f), DirectX::XMFLOAT3(0.0f, 1.0f, 1.0f) },

{ DirectX::XMFLOAT3(-0.5f, -0.5f, 0.5f), DirectX::XMFLOAT3(0.0f, 0.0f, 1.0f) }, // -Y (bottom face)
{ DirectX::XMFLOAT3( 0.5f, -0.5f, 0.5f), DirectX::XMFLOAT3(1.0f, 0.0f, 1.0f) },
{ DirectX::XMFLOAT3( 0.5f, -0.5f, -0.5f), DirectX::XMFLOAT3(1.0f, 0.0f, 0.0f) },
{ DirectX::XMFLOAT3(-0.5f, -0.5f, -0.5f), DirectX::XMFLOAT3(0.0f, 0.0f, 0.0f) },
};

The cube's coordinate system places the center of the cube at the origin, with the y-axis running top to bottom
using a left-handed coordinate system. Coordinate values are expressed as 32-bit floating values between -1 and 1.
In each bracketed pairing, the second DirectX::XMFLOAT3 value group specifies the color associated with the vertex
as an RGB value. For example, the first vertex at (-0.5, 0.5, -0.5) has a full green color (the G value is set to 1.0, and
the "R" and "B" values are set to 0).
Therefore, you have 8 vertices, each with a specific color. Each vertex/color pairing is the complete data for a vertex
in our example. When you specify the vertex buffer, you must keep this specific layout in mind. You provide this
input layout to the vertex shader so it can understand your vertex data.
Step 2: Set up the input layout
Now, you have the vertices in memory. But, your graphics device has its own memory, and you use Direct3D to
access it. To get your vertex data into the graphics device for processing, you need to clear the way, as it were: you
must declare how the vertex data is laid out so that the graphics device can interpret it when it gets it from your
game. To do that, you use ID3D11InputLayout.
Declare and set the input layout for the vertex buffer.

const D3D11_INPUT_ELEMENT_DESC basicVertexLayoutDesc[] =
{
{ "POSITION", 0, DXGI_FORMAT_R32G32B32_FLOAT, 0, 0, D3D11_INPUT_PER_VERTEX_DATA, 0 },
{ "COLOR", 0, DXGI_FORMAT_R32G32B32_FLOAT, 0, 12, D3D11_INPUT_PER_VERTEX_DATA, 0 },
};

ComPtr<ID3D11InputLayout> inputLayout;
m_d3dDevice->CreateInputLayout(
basicVertexLayoutDesc,
ARRAYSIZE(basicVertexLayoutDesc),
vertexShaderBytecode->Data,
vertexShaderBytecode->Length,
&inputLayout
);

In this code, you specify a layout for the vertices, specifically, what data each element in the vertex list contains.
Here, in basicVertexLayoutDesc, you specify two data components:
POSITION: This is an HLSL semantic for position data provided to a shader. In this code, it's a
DirectX::XMFLOAT3, or more specifically, a structure with 3 32-bit floating point values that correspond to a
3D coordinate (x, y, z). You could also use a float4 if you are supplying the homogeneous "w" coordinate, and
in that case, you specify DXGI_FORMAT_R32G32B32A32_FLOAT. Whether you use a DirectX::XMFLOAT3 or a
float4 is up to the specific needs of your game. Just make sure that the vertex data for your mesh
corresponds correctly to the format you use!
Each coordinate value is expressed as a floating point value between -1 and 1, in the object's coordinate
space. When the vertex shader completes, the transformed vertex is in the homogeneous (perspective
corrected) view projection space.
"But the enumeration value indicates RGB, not XYZ!" you smartly note. Good eye! In both the cases of color
data and coordinate data, you typically use 3 or 4 component values, so why not use the same format for
both? The HLSL semantic, not the format name, indicates how the shader treats the data.
COLOR: This is an HLSL semantic for color data. Like POSITION, it consists of 3 32-bit floating point values
(DirectX::XMFLOAT3). Each value contains a color component: red (r), green (g), or blue (b), expressed as a
floating-point number between 0 and 1.
COLOR values are typically returned as a 4-component RGBA value at the end of the shader pipeline. For this
example, you will be setting the "A" alpha value to 1.0 (maximum opacity) in the shader pipeline for all pixels.
For a complete list of formats, see DXGI_FORMAT. For a complete list of HLSL semantics, see Semantics.
Call ID3D11Device::CreateInputLayout and create the input layout on the Direct3D device. Now, you need to
create a buffer that can actually hold the data!
Step 3: Populate the vertex buffers
Vertex buffers contain the list of vertices for each triangle in the mesh. Every vertex must be unique in this list. In
our example, you have 8 vertices for the cube. The vertex shader runs on the graphics device and reads from the
vertex buffer, and it interprets the data based on the input layout you specified in the previous step.
In the next example, you provide a description and a subresource for the buffer, which tell Direct3D a number of
things about the physical mapping of the vertex data and how to treat it in memory on the graphics device. This is
necessary because you use a generic ID3D11Buffer, which could contain anything! The D3D11_BUFFER_DESC
and D3D11_SUBRESOURCE_DATA structures are supplied to ensure that Direct3D understands the physical
memory layout of the buffer, including the size of each vertex element in the buffer as well as the maximum size of
the vertex list. You can also control access to the buffer memory here and how it is traversed, but that's a bit beyond
the scope of this tutorial.
After you configure the buffer, you call ID3D11Device::CreateBuffer to actually create it. Obviously, if you have
more than one object, create buffers for each unique model.
Declare and create the vertex buffer.

D3D11_BUFFER_DESC vertexBufferDesc = {0};
vertexBufferDesc.ByteWidth = sizeof(SimpleCubeVertex) * ARRAYSIZE(cubeVertices);
vertexBufferDesc.Usage = D3D11_USAGE_DEFAULT;
vertexBufferDesc.BindFlags = D3D11_BIND_VERTEX_BUFFER;
vertexBufferDesc.CPUAccessFlags = 0;
vertexBufferDesc.MiscFlags = 0;
vertexBufferDesc.StructureByteStride = 0;

D3D11_SUBRESOURCE_DATA vertexBufferData;
vertexBufferData.pSysMem = cubeVertices;
vertexBufferData.SysMemPitch = 0;
vertexBufferData.SysMemSlicePitch = 0;

ComPtr<ID3D11Buffer> vertexBuffer;
m_d3dDevice->CreateBuffer(
&vertexBufferDesc,
&vertexBufferData,
&vertexBuffer);

Vertices loaded. But what's the order of processing these vertices? That's handled when you provide a list of indices
to the vertices: the ordering of these indices is the order in which the vertex shader processes them.
Step 4: Populate the index buffers
Now, you provide a list of the indices for each of the vertices. These indices correspond to the position of the vertex
in the vertex buffer, starting with 0. To help you visualize this, consider that each unique vertex in your mesh has a
unique number assigned to it, like an ID. This ID is the integer position of the vertex in the vertex buffer.
In our example cube, you have 8 vertices, which create 6 quads for the sides. You split the quads into triangles, for a
total of 12 triangles that use our 8 vertices. At 3 vertices per triangle, you have 36 entries in our index buffer. In our
example, this index pattern is known as a triangle list, and you indicate it to Direct3D as a
D3D11_PRIMITIVE_TOPOLOGY_TRIANGLELIST when you set the primitive topology.
This is probably the most inefficient way to list indices, as there are many redundancies when triangles share points
and sides. For example, when a triangle shares a side in a rhombus shape, you list 6 indices for the four vertices, like
this:

Triangle 1: [0, 1, 2]
Triangle 2: [0, 2, 3]
In a strip or fan topology, you order the vertices in a way that eliminates many redundant sides during traversal
(such as the side from index 0 to index 2 in the rhombus example). For large meshes, this dramatically reduces the number of
times the vertex shader is run, and improves performance significantly. However, we'll keep it simple and stick with
the triangle list.
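As a quick illustration, here's a minimal sketch (not part of this sample) of the same rhombus expressed as a
triangle strip: four indices instead of six. With D3D11_PRIMITIVE_TOPOLOGY_TRIANGLESTRIP, the strip
{1, 2, 0, 3} expands to the triangles (1, 2, 0) and (0, 2, 3), the same two triangles as the list form, because
Direct3D reverses the winding of every other strip triangle for you.

// Hypothetical strip version of the rhombus above; the two list triangles
// [0, 1, 2] and [0, 2, 3] become one four-entry strip.
unsigned short rhombusStripIndices[] = { 1, 2, 0, 3 };

// m_d3dDeviceContext->IASetPrimitiveTopology(D3D11_PRIMITIVE_TOPOLOGY_TRIANGLESTRIP);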
Declare the indices for the vertex buffer as a simple triangle list topology.

unsigned short cubeIndices[] =
{ 0, 1, 2,
0, 2, 3,

4, 5, 6,
4, 6, 7,

3, 2, 5,
3, 5, 4,

2, 1, 6,
2, 6, 5,

1, 7, 6,
1, 0, 7,

0, 3, 4,
0, 4, 7 };

Thirty-six index elements in the buffer is very redundant when you only have 8 vertices! If you choose to eliminate
some of the redundancies and use a different vertex list type, such as a strip or a fan, you must specify that type
when you provide a specific D3D11_PRIMITIVE_TOPOLOGY value to the
ID3D11DeviceContext::IASetPrimitiveTopology method.
For more information about different index list techniques, see Primitive Topologies.
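The index buffer itself is created the same way as the vertex buffer. As a sketch (assuming the cubeIndices array
and the m_d3dDevice from this walkthrough), the description uses D3D11_BIND_INDEX_BUFFER instead:

// Describe and create the index buffer for cubeIndices; this mirrors the
// vertex buffer creation from Step 3.
D3D11_BUFFER_DESC indexBufferDesc = {0};
indexBufferDesc.ByteWidth = sizeof(unsigned short) * ARRAYSIZE(cubeIndices);
indexBufferDesc.Usage = D3D11_USAGE_DEFAULT;
indexBufferDesc.BindFlags = D3D11_BIND_INDEX_BUFFER;

D3D11_SUBRESOURCE_DATA indexBufferData;
indexBufferData.pSysMem = cubeIndices;
indexBufferData.SysMemPitch = 0;
indexBufferData.SysMemSlicePitch = 0;

ComPtr<ID3D11Buffer> indexBuffer;
m_d3dDevice->CreateBuffer(
    &indexBufferDesc,
    &indexBufferData,
    &indexBuffer);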
Step 5: Create a constant buffer for your transformation matrices
Before you can start processing vertices, you need to provide the transformation matrices that will be applied
(multiplied) to each vertex when it runs. For most 3-D games, there are three of them:
The 4x4 matrix that transforms from the object (model) coordinate system to the overall world coordinate
system.
The 4x4 matrix that transforms from the world coordinate system to the camera (view) coordinate system.
The 4x4 matrix that transforms from the camera coordinate system to the 2-D view projection coordinate
system.
These matrices are passed to the shader in a constant buffer. A constant buffer is a region of memory that remains
constant throughout the execution of the next pass of the shader pipeline, and which can be directly accessed by the
shaders from your HLSL code. You define each constant buffer two times: first in your game's C++ code, and (at
least) one time in the C-like HLSL syntax for your shader code. The two declarations must directly correspond in
terms of types and data alignment. It's easy to introduce hard-to-find errors when the shader uses the HLSL
declaration to interpret data declared in C++, and the types don't match or the alignment of data is off!
Constant buffers don't get changed by the HLSL. You can change them when your game updates specific data.
Often, game devs create 4 classes of constant buffers: one type for updates per frame; one type for updates per
model/object; one type for updates per game state refresh; and one type for data that never changes through the
lifetime of the game.
In this example, we just have one that never changes: the DirectX::XMFLOAT4X4 data for the three matrices.
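If you do split your constant buffers by update frequency, a minimal sketch of the idea (with hypothetical struct
names) might look like this:

// Hypothetical split of constant data by update frequency, as described above.
struct PerFrameConstants    // Updated once per frame.
{
    DirectX::XMFLOAT4X4 view;
    DirectX::XMFLOAT4X4 projection;
};

struct PerObjectConstants   // Updated once per model/object.
{
    DirectX::XMFLOAT4X4 model;
};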

Note The example code presented here uses column-major matrices. You can use row-major matrices instead
by using the row_major keyword in HLSL, and ensuring your source matrix data is also row-major.
DirectXMath uses row-major matrices and can be used directly with HLSL matrices defined with the row_major
keyword.
Declare and create a constant buffer for the three matrices you use to transform each vertex.

struct ConstantBuffer
{
DirectX::XMFLOAT4X4 model;
DirectX::XMFLOAT4X4 view;
DirectX::XMFLOAT4X4 projection;
};
ComPtr<ID3D11Buffer> m_constantBuffer;
ConstantBuffer m_constantBufferData;

// ...

// Create a constant buffer for passing model, view, and projection matrices
// to the vertex shader. This allows us to rotate the cube and apply
// a perspective projection to it.

D3D11_BUFFER_DESC constantBufferDesc = {0};
constantBufferDesc.ByteWidth = sizeof(m_constantBufferData);
constantBufferDesc.Usage = D3D11_USAGE_DEFAULT;
constantBufferDesc.BindFlags = D3D11_BIND_CONSTANT_BUFFER;
constantBufferDesc.CPUAccessFlags = 0;
constantBufferDesc.MiscFlags = 0;
constantBufferDesc.StructureByteStride = 0;
m_d3dDevice->CreateBuffer(
&constantBufferDesc,
nullptr,
&m_constantBuffer
);

m_constantBufferData.model = DirectX::XMFLOAT4X4( // Identity matrix, since you are not animating the object
1.0f, 0.0f, 0.0f, 0.0f,
0.0f, 1.0f, 0.0f, 0.0f,
0.0f, 0.0f, 1.0f, 0.0f,
0.0f, 0.0f, 0.0f, 1.0f);

// Specify the view (camera) transform corresponding to a camera position of
// X = 0, Y = 1, Z = 2.

m_constantBufferData.view = DirectX::XMFLOAT4X4(
-1.00000000f, 0.00000000f, 0.00000000f, 0.00000000f,
0.00000000f, 0.89442718f, 0.44721359f, 0.00000000f,
0.00000000f, 0.44721359f, -0.89442718f, -2.23606800f,
0.00000000f, 0.00000000f, 0.00000000f, 1.00000000f);

Note You usually declare the projection matrix when you set up device-specific resources, because the results
of multiplication with it must match the current 2-D viewport size parameters (which often correspond with the
pixel height and width of the display). If those change, you must scale the x- and y-coordinate values
accordingly.
// Finally, update the constant buffer perspective projection parameters
// to account for the size of the application window. In this sample,
// the parameters are fixed to a 70-degree field of view, with a depth
// range of 0.01 to 100.

float xScale = 1.42814801f;
float yScale = 1.42814801f;
if (backBufferDesc.Width > backBufferDesc.Height)
{
xScale = yScale *
static_cast<float>(backBufferDesc.Height) /
static_cast<float>(backBufferDesc.Width);
}
else
{
yScale = xScale *
static_cast<float>(backBufferDesc.Width) /
static_cast<float>(backBufferDesc.Height);
}
m_constantBufferData.projection = DirectX::XMFLOAT4X4(
xScale, 0.0f, 0.0f, 0.0f,
0.0f, yScale, 0.0f, 0.0f,
0.0f, 0.0f, -1.0f, -0.01f,
0.0f, 0.0f, -1.0f, 0.0f
);
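As a side note, DirectXMath can build an equivalent perspective matrix for you. A minimal sketch (hypothetical
helper; it assumes a left-handed view and the default column-major HLSL packing, which is why the row-major
DirectXMath result is transposed before it is stored), not a drop-in replacement for the hand-built matrix above:

// Build a 70-degree perspective projection with the DirectXMath helper
// instead of writing the matrix by hand.
void SetProjection(DirectX::XMFLOAT4X4* projection, float aspectRatio)
{
    DirectX::XMMATRIX perspective = DirectX::XMMatrixPerspectiveFovLH(
        DirectX::XMConvertToRadians(70.0f), // vertical field of view
        aspectRatio,                        // back buffer width / height
        0.01f,                              // near clipping plane
        100.0f);                            // far clipping plane
    DirectX::XMStoreFloat4x4(projection, DirectX::XMMatrixTranspose(perspective));
}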

While you're here, set the vertex and index buffers on the ID3D11DeviceContext, plus the topology you're using.

// Set the vertex and index buffers, and specify the way they define geometry.
UINT stride = sizeof(SimpleCubeVertex);
UINT offset = 0;
m_d3dDeviceContext->IASetVertexBuffers(
0,
1,
vertexBuffer.GetAddressOf(),
&stride,
&offset);

m_d3dDeviceContext->IASetIndexBuffer(
indexBuffer.Get(),
DXGI_FORMAT_R16_UINT,
0);

m_d3dDeviceContext->IASetPrimitiveTopology(D3D11_PRIMITIVE_TOPOLOGY_TRIANGLELIST);

All right! Input assembly complete. Everything's in place for rendering. Let's get that vertex shader going.
Step 6: Process the mesh with the vertex shader
Now that you have a vertex buffer with the vertices that define your mesh, and the index buffer that defines the
order in which the vertices are processed, you send them to the vertex shader. The vertex shader code, expressed as
compiled high-level shader language, runs one time for each vertex in the vertex buffer, allowing you to perform
your per-vertex transforms. The final result is typically a 2-D projection.
(Did you load your vertex shader? If not, review How to load resources in your DirectX game.)
Here, you set the vertex shader...
// Set the vertex and pixel shader stage state.
m_d3dDeviceContext->VSSetShader(
vertexShader.Get(),
nullptr,
0);

...and set the constant buffers.

m_d3dDeviceContext->VSSetConstantBuffers(
0,
1,
m_constantBuffer.GetAddressOf());

Here's the vertex shader code that handles the transformation from object coordinates to world coordinates and
then to the 2-D view projection coordinate system. You also pass some simple per-vertex color data through to
make things pretty. This goes in your vertex shader's HLSL file (SimpleVertexShader.hlsl, in this example).

cbuffer simpleConstantBuffer : register( b0 )
{
matrix model;
matrix view;
matrix projection;
};

struct VertexShaderInput
{
float3 pos : POSITION;
float3 color : COLOR;
};

struct PixelShaderInput
{
float4 pos : SV_POSITION;
float4 color : COLOR;
};

PixelShaderInput SimpleVertexShader(VertexShaderInput input)
{
PixelShaderInput vertexShaderOutput;
float4 pos = float4(input.pos, 1.0f);

// Transform the vertex position into projection space.
pos = mul(pos, model);
pos = mul(pos, view);
pos = mul(pos, projection);
vertexShaderOutput.pos = pos;

// Pass the vertex color through to the pixel shader.
vertexShaderOutput.color = float4(input.color, 1.0f);

return vertexShaderOutput;
}

See that cbuffer at the top? That's the HLSL analogue to the same constant buffer we declared in our C++ code
previously. And the VertexShaderInput struct? Why, that looks just like your input layout and vertex data
declaration! It's important that the constant buffer and vertex data declarations in your C++ code match the
declarations in your HLSL code, and that includes signs, types, and data alignment.
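Alignment, in particular, bites quietly: HLSL packs cbuffer members into 16-byte registers, and a member that
would straddle a register boundary is pushed to the next register instead. A minimal sketch of the pitfall
(hypothetical structs, C++ side):

// Hypothetical example of an HLSL packing mismatch. In HLSL, a float3
// cannot straddle a 16-byte register boundary, so the second float3 below
// would start at byte 16, while the C++ struct places it at byte 12.
struct BadConstants
{
    DirectX::XMFLOAT3 lightDir;    // bytes 0-11
    DirectX::XMFLOAT3 lightColor;  // bytes 12-23 in C++, 16-27 in HLSL: mismatch!
};

struct GoodConstants
{
    DirectX::XMFLOAT3 lightDir;
    float padding0;                // Explicit padding to the 16-byte boundary.
    DirectX::XMFLOAT3 lightColor;
    float padding1;
};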
PixelShaderInput specifies the layout of the data that is returned by the vertex shader's main function. When you
finish processing a vertex, you'll return a vertex position in the 2-D projection space and a color used for per-vertex
lighting. The graphics card uses data output by the shader to calculate the "fragments" (possible pixels) that must be
colored when the pixel shader is run in the next stage of the pipeline.
Step 7: Passing the mesh through the pixel shader
Typically, at this stage in the graphics pipeline, you perform per-pixel operations on the visible projected surfaces of
your objects. (People like textures.) For the purposes of this sample, though, you simply pass it through this stage.
First, let's create an instance of the pixel shader. The pixel shader runs for every pixel in the 2-D projection of your
scene, assigning a color to that pixel. In this case, the sample ignores the per-vertex color passed along by the
vertex shader and simply returns a fixed color for every pixel.
Set the pixel shader.

m_d3dDeviceContext->PSSetShader( pixelShader.Get(), nullptr, 0 );

Define a passthrough pixel shader in HLSL.

struct PixelShaderInput
{
float4 pos : SV_POSITION;
};

float4 SimplePixelShader(PixelShaderInput input) : SV_TARGET
{
// Draw the entire triangle yellow.
return float4(1.0f, 1.0f, 0.0f, 1.0f);
}

Put this code in an HLSL file separate from the vertex shader HLSL (such as SimplePixelShader.hlsl). This code is run
one time for every visible pixel in your viewport (an in-memory representation of the portion of the screen you are
drawing to), which, in this case, maps to the entire screen. Now, your graphics pipeline is completely defined!
Step 8: Rasterizing and displaying the mesh
Let's run the pipeline. This is easy: call ID3D11DeviceContext::DrawIndexed.
Draw that cube!

// Draw the cube.
m_d3dDeviceContext->DrawIndexed( ARRAYSIZE(cubeIndices), 0, 0 );

Inside the graphics card, each vertex is processed in the order specified in your index buffer. After your code has
executed the vertex shader and the 2-D fragments are defined, the pixel shader is invoked and the triangles colored.
Now, put the cube on the screen.
Present that frame buffer to the display.

// Present the rendered image to the window. Because the maximum frame latency is set to 1,
// the render loop is generally throttled to the screen refresh rate, typically around
// 60 Hz, by sleeping the app on Present until the screen is refreshed.

m_swapChain->Present(1, 0);

And you're done! For a scene full of models, use multiple vertex and index buffers, and you might even have
different shaders for different model types. Remember that each model has its own coordinate system, and you
need to transform them to the shared world coordinate system using the matrices you defined in the constant
buffer.
Remarks
This topic covers creating and displaying simple geometry that you create yourself. For more info about loading
more complex geometry from a file and converting it to the sample-specific vertex buffer object (.vbo) format, see
How to load resources in your DirectX game.

Note
This article is for Windows 10 developers writing Universal Windows Platform (UWP) apps. If you're developing
for Windows 8.x or Windows Phone 8.x, see the archived documentation.

Related topics
How to load resources in your DirectX game
Load resources in your DirectX game

[ Updated for UWP apps on Windows 10. For Windows 8.x articles, see the archive ]
Most games, at some point, load resources and assets (such as shaders, textures, predefined meshes or other
graphics data) from local storage or some other data stream. Here, we walk you through a high-level view of what
you must consider when loading these files to use in your Universal Windows Platform (UWP) game.
For example, the meshes for polygonal objects in your game might have been created with another tool, and
exported to a specific format. The same is true for textures, and more so: while a flat, uncompressed bitmap can be
commonly written by most tools and understood by most graphics APIs, it can be extremely inefficient for use in
your game. Here, we guide you through the basic steps for loading three different types of graphic resources for
use with Direct3D: meshes (models), textures (bitmaps), and compiled shader objects.

What you need to know


Technologies
Parallel Patterns Library (ppltasks.h)
Prerequisites
Understand the basic Windows Runtime
Understand asynchronous tasks
Understand the basic concepts of 3-D graphics programming.
This sample also includes three code files for resource loading and management. You'll encounter the code objects
defined in these files throughout this topic.
BasicLoader.h/.cpp
BasicReaderWriter.h/.cpp
DDSTextureLoader.h/.cpp
The complete code for these samples can be found in the following links.

Complete code for BasicLoader: Complete code for a class and methods that convert and load graphics mesh
objects into memory.

Complete code for BasicReaderWriter: Complete code for a class and methods for reading and writing binary data
files in general. Used by the BasicLoader class.

Complete code for DDSTextureLoader: Complete code for a class and method that loads a DDS texture from
memory.

Instructions
Asynchronous loading
Asynchronous loading is handled using the task template from the Parallel Patterns Library (PPL). A task contains
a method call followed by a lambda that processes the results of the async call after it completes, and usually
follows this format:

task<generic return type>(async code to execute).then((parameters for lambda){ lambda code contents });
Tasks can be chained together using the .then() syntax, so that when one operation completes, another async
operation that depends on the results of the prior operation can be run. In this way, you can load, convert, and
manage complex assets on separate threads in a way that appears almost invisible to the player.
For more details, read Asynchronous programming in C++.
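For instance, a minimal concrete chain (hypothetical file name, using the Windows Runtime file APIs) looks like
this:

// Read a file into a buffer, then run a lambda on the result once the
// asynchronous read completes.
#include <ppltasks.h>

using namespace concurrency;
using namespace Windows::Storage;

void ReadExample()
{
    create_task(PathIO::ReadBufferAsync(L"ms-appx:///Assets/gamedata.bin"))
        .then([](Streams::IBuffer^ buffer)
    {
        // This lambda runs only after the read has completed.
        auto bytes = ref new Platform::Array<byte>(buffer->Length);
        Streams::DataReader::FromBuffer(buffer)->ReadBytes(bytes);
        // Use bytes here, or return a value to chain another task.
    });
}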
Now, let's look at the basic structure for declaring and creating an async file loading method, ReadDataAsync.

#include <ppltasks.h>

// ...
concurrency::task<Platform::Array<byte>^> ReadDataAsync(
_In_ Platform::String^ filename);

// ...

using concurrency;

task<Platform::Array<byte>^> BasicReaderWriter::ReadDataAsync(
_In_ Platform::String^ filename
)
{
return task<StorageFile^>(m_location->GetFileAsync(filename)).then([=](StorageFile^ file)
{
return FileIO::ReadBufferAsync(file);
}).then([=](IBuffer^ buffer)
{
auto fileData = ref new Platform::Array<byte>(buffer->Length);
DataReader::FromBuffer(buffer)->ReadBytes(fileData);
return fileData;
});
}

In this code, when your code calls the ReadDataAsync method defined above, a task is created to read a buffer
from the file system. Once it completes, a chained task takes the buffer and streams the bytes from that buffer into
an array using the static DataReader type.

m_basicReaderWriter = ref new BasicReaderWriter();

// ...
return m_basicReaderWriter->ReadDataAsync(filename).then([=](const Platform::Array<byte>^ bytecode)
{
// Perform some operation with the data when the async load completes.
});

Here's the call you make to ReadDataAsync. When it completes, your code receives an array of bytes read from
the provided file. Since ReadDataAsync itself is defined as a task, you can use a lambda to perform a specific
operation when the byte array is returned, such as passing that byte data to a DirectX function that can use it.
If your game is sufficiently simple, load your resources with a method like this when the user starts the game. You
can do this before you start the main game loop from some point in the call sequence of your
IFrameworkView::Run implementation. Again, you call your resource loading methods asynchronously so the
game can start quicker and so the player doesn't have to wait until the loading completes before engaging in early
interactions.
However, you don't want to start the game proper until all of the async loading has completed! Create some
method for signaling when loading is complete, such as a specific field, and use the lambdas on your loading
method(s) to set that signal when finished. Check the variable before starting any components that use those
loaded resources.
Here's an example using the async methods defined in BasicLoader.cpp to load shaders, a mesh, and a texture
when the game starts up. Notice that it sets a specific field on the game object, m_loadingComplete, when all of
the loading methods finish.

void ResourceLoading::CreateDeviceResources()
{
// DirectXBase is a common sample class that implements a basic view provider.
DirectXBase::CreateDeviceResources();

// ...

// This flag will keep track of whether or not all application
// resources have been loaded. Until all resources are loaded,
// only the sample overlay will be drawn on the screen.
m_loadingComplete = false;

// Create a BasicLoader, and use it to asynchronously load all
// application resources. When an output value becomes non-null,
// this indicates that the asynchronous operation has completed.
BasicLoader^ loader = ref new BasicLoader(m_d3dDevice.Get());

auto loadVertexShaderTask = loader->LoadShaderAsync(
"SimpleVertexShader.cso",
nullptr,
0,
&m_vertexShader,
&m_inputLayout
);

auto loadPixelShaderTask = loader->LoadShaderAsync(
"SimplePixelShader.cso",
&m_pixelShader
);

auto loadTextureTask = loader->LoadTextureAsync(
"reftexture.dds",
nullptr,
&m_textureSRV
);

auto loadMeshTask = loader->LoadMeshAsync(
"refmesh.vbo",
&m_vertexBuffer,
&m_indexBuffer,
nullptr,
&m_indexCount
);

// The && operator can be used to create a single task that represents
// a group of multiple tasks. The new task's completed handler will only
// be called once all associated tasks have completed. In this case, the
// new task represents a task to load various assets from the package.
(loadVertexShaderTask && loadPixelShaderTask && loadTextureTask && loadMeshTask).then([=]()
{
m_loadingComplete = true;
});

// Create constant buffers and other graphics device-specific resources here.
}
Note that the tasks have been aggregated using the && operator such that the lambda that sets the loading
complete flag is triggered only when all of the tasks complete. Note that if you have multiple flags, you have the
possibility of race conditions. For example, if the lambda sets two flags sequentially to the same value, another
thread may only see the first flag set if it examines them before the second flag is set.
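A minimal sketch of guarding against that (not from the sample; it assumes C++11 <atomic> and that the flag is
written by a task's lambda on one thread and polled from the render loop on another):

// An atomic completion flag makes the cross-thread write safely visible.
#include <atomic>

std::atomic<bool> m_loadingComplete{ false };

// In the aggregated task's .then() lambda:
//     m_loadingComplete.store(true, std::memory_order_release);

// In the render loop:
//     if (m_loadingComplete.load(std::memory_order_acquire)) { /* draw the scene */ }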
You've seen how to load resource files asynchronously. Synchronous file loads are much simpler, and you can find
examples of them in Complete code for BasicReaderWriter and Complete code for BasicLoader.
Of course, different resource and asset types often require additional processing or conversion before they are
ready to be used in your graphics pipeline. Let's take a look at three specific types of resources: meshes, textures,
and shaders.
Loading meshes
Meshes are vertex data, either generated procedurally by code within your game or exported to a file from another
app (like 3DStudio MAX or Alias WaveFront) or tool. These meshes represent the models in your game, from
simple primitives like cubes and spheres to cars and houses and characters. They often contain color and
animation data, as well, depending on their format. We'll focus on meshes that contain only vertex data.
To load a mesh correctly, you must know the format of the data in the file for the mesh. Our simple
BasicReaderWriter type above simply reads the data in as a byte stream; it doesn't know that the byte data
represents a mesh, much less a specific mesh format as exported by another application! You must perform the
conversion as you bring the mesh data into memory.
(You should always try to package asset data in a format that's as close to the internal representation as possible.
Doing so will reduce resource utilization and save time.)
Let's get the byte data from the mesh's file. The format in the example assumes that the file is a sample-specific
format suffixed with .vbo. (Again, this format is not the same as OpenGL's VBO format.) Each vertex itself maps to
the BasicVertex type, which is a struct defined in the code for the obj2vbo converter tool. The layout of the vertex
data in the .vbo file looks like this:
The first 32 bits (4 bytes) of the data stream contain the number of vertices (numVertices) in the mesh,
represented as a uint32 value.
The next 32 bits (4 bytes) of the data stream contain the number of indices in the mesh (numIndices),
represented as a uint32 value.
After that, the subsequent (numVertices * sizeof(BasicVertex)) bits contain the vertex data.
The last (numIndices * 16) bits of data contain the index data, represented as a sequence of uint16 values.
The point is this: know the bit-level layout of the mesh data you have loaded. Also, be sure you are consistent with
endianness. All Windows platforms are little-endian.
In the example, you call a method, CreateMesh, from the LoadMeshAsync method to perform this bit-level
interpretation.
task<void> BasicLoader::LoadMeshAsync(
_In_ Platform::String^ filename,
_Out_ ID3D11Buffer** vertexBuffer,
_Out_ ID3D11Buffer** indexBuffer,
_Out_opt_ uint32* vertexCount,
_Out_opt_ uint32* indexCount
)
{
return m_basicReaderWriter->ReadDataAsync(filename).then([=](const Platform::Array<byte>^ meshData)
{
CreateMesh(
meshData->Data,
vertexBuffer,
indexBuffer,
vertexCount,
indexCount,
filename
);
});
}

CreateMesh interprets the byte data loaded from the file, and creates a vertex buffer and an index buffer for the
mesh by passing the vertex and index lists, respectively, to ID3D11Device::CreateBuffer and specifying either
D3D11_BIND_VERTEX_BUFFER or D3D11_BIND_INDEX_BUFFER. Here's the code used in BasicLoader:
void BasicLoader::CreateMesh(
_In_ byte* meshData,
_Out_ ID3D11Buffer** vertexBuffer,
_Out_ ID3D11Buffer** indexBuffer,
_Out_opt_ uint32* vertexCount,
_Out_opt_ uint32* indexCount,
_In_opt_ Platform::String^ debugName
)
{
// The first 4 bytes of the BasicMesh format define the number of vertices in the mesh.
uint32 numVertices = *reinterpret_cast<uint32*>(meshData);

// The following 4 bytes define the number of indices in the mesh.
uint32 numIndices = *reinterpret_cast<uint32*>(meshData + sizeof(uint32));

// The next segment of the BasicMesh format contains the vertices of the mesh.
BasicVertex* vertices = reinterpret_cast<BasicVertex*>(meshData + sizeof(uint32) * 2);

// The last segment of the BasicMesh format contains the indices of the mesh.
uint16* indices = reinterpret_cast<uint16*>(meshData + sizeof(uint32) * 2 + sizeof(BasicVertex) * numVertices);

// Create the vertex and index buffers with the mesh data.

D3D11_SUBRESOURCE_DATA vertexBufferData = {0};
vertexBufferData.pSysMem = vertices;
vertexBufferData.SysMemPitch = 0;
vertexBufferData.SysMemSlicePitch = 0;
CD3D11_BUFFER_DESC vertexBufferDesc(numVertices * sizeof(BasicVertex), D3D11_BIND_VERTEX_BUFFER);

m_d3dDevice->CreateBuffer(
&vertexBufferDesc,
&vertexBufferData,
vertexBuffer
);

D3D11_SUBRESOURCE_DATA indexBufferData = {0};
indexBufferData.pSysMem = indices;
indexBufferData.SysMemPitch = 0;
indexBufferData.SysMemSlicePitch = 0;
CD3D11_BUFFER_DESC indexBufferDesc(numIndices * sizeof(uint16), D3D11_BIND_INDEX_BUFFER);

m_d3dDevice->CreateBuffer(
&indexBufferDesc,
&indexBufferData,
indexBuffer
);

if (vertexCount != nullptr)
{
*vertexCount = numVertices;
}
if (indexCount != nullptr)
{
*indexCount = numIndices;
}
}

You typically create a vertex/index buffer pair for every mesh you use in your game. Where and when you load the
meshes is up to you. If you have a lot of meshes, you may only want to load some from the disk at specific points
in the game, such as during specific, pre-defined loading states. For large meshes, like terrain data, you can stream
the vertices from a cache, but that is a more complex procedure and not in the scope of this topic.
Again, know your vertex data format! There are many, many ways to represent vertex data across the tools used to
create models. There are also many different ways to represent the input layout of the vertex data to Direct3D,
such as triangle lists and strips. For more information about vertex data, read Introduction to Buffers in Direct3D
11 and Primitives.
Next, let's look at loading textures.
Loading textures
The most common asset in a game, and the one that comprises most of the files on disk and in memory, is
textures. Like meshes, textures can come in a variety of formats, and you convert them to a format that Direct3D
can use when you load them. Textures also come in a wide variety of types and are used to create different effects.
MIP levels for textures can be used to improve the look and performance of distant objects; dirt and light maps
are used to layer effects and detail atop a base texture; and normal maps are used in per-pixel lighting calculations.
In a modern game, a typical scene can potentially have thousands of individual textures, and your code must
effectively manage them all!
Also like meshes, there are a number of specific formats that are used to make memory usage more efficient. Since
textures can easily consume a large portion of the GPU (and system) memory, they are often compressed in some
fashion. You aren't required to use compression on your game's textures, and you can use any
compression/decompression algorithm(s) you want as long as you provide the Direct3D shaders with data in a
format it can understand (like a Texture2D bitmap).
Direct3D provides support for the DXT texture compression algorithms, although not every DXT format may be
supported by the player's graphics hardware. DDS files contain DXT textures (and other texture compression
formats as well), and are suffixed with .dds.
A DDS file is a binary file that contains the following information:
A DWORD (magic number) containing the four character code value 'DDS ' (0x20534444).
A description of the data in the file.
The data is described with a header description using DDS_HEADER; the pixel format is defined using
DDS_PIXELFORMAT. Note that the DDS_HEADER and DDS_PIXELFORMAT structures replace the
deprecated DDSURFACEDESC2, DDSCAPS2 and DDPIXELFORMAT DirectDraw 7 structures. DDS_HEADER
is the binary equivalent of DDSURFACEDESC2 and DDSCAPS2. DDS_PIXELFORMAT is the binary
equivalent of DDPIXELFORMAT.

DWORD dwMagic;
DDS_HEADER header;

If the value of dwFlags in DDS_PIXELFORMAT is set to DDPF_FOURCC and dwFourCC is set to "DX10", an
additional DDS_HEADER_DXT10 structure will be present to accommodate texture arrays or DXGI formats
that cannot be expressed as an RGB pixel format, such as floating point formats, sRGB formats, etc. When the
DDS_HEADER_DXT10 structure is present, the entire data description looks like this.

DWORD dwMagic;
DDS_HEADER header;
DDS_HEADER_DXT10 header10;

A pointer to an array of bytes that contains the main surface data.

BYTE bdata[]

A pointer to an array of bytes that contains the remaining surfaces, such as mipmap levels, faces in a cube
map, depths in a volume texture. Follow these links for more information about the DDS file layout for a:
texture, a cube map, or a volume texture.
BYTE bdata2[]
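Before interpreting any of those fields, it's worth validating the magic number. A minimal sketch (hypothetical
helper; the full header parsing is shown in the DDSTextureLoader code linked above):

// Check the leading DWORD against the 'DDS ' four-character code (0x20534444)
// before treating the byte stream as a DDS file.
#include <cstdint>
#include <cstring>

bool LooksLikeDDS(const uint8_t* data, size_t dataSize)
{
    const uint32_t ddsMagic = 0x20534444; // 'DDS '
    if (data == nullptr || dataSize < sizeof(uint32_t))
    {
        return false;
    }
    uint32_t magic;
    memcpy(&magic, data, sizeof(magic));
    return magic == ddsMagic;
}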

Many tools export to the DDS format. If you don't have a tool to export your texture to this format, consider
creating one. For more detail on the DDS format and how to work with it in your code, read Programming Guide
for DDS. In our example, we'll use DDS.
As with other resource types, you read the data from a file as a stream of bytes. Once your loading task completes,
the lambda call runs code (the CreateTexture method) to process the stream of bytes into a format that Direct3D
can use.

task<void> BasicLoader::LoadTextureAsync(
_In_ Platform::String^ filename,
_Out_opt_ ID3D11Texture2D** texture,
_Out_opt_ ID3D11ShaderResourceView** textureView
)
{
return m_basicReaderWriter->ReadDataAsync(filename).then([=](const Platform::Array<byte>^ textureData)
{
CreateTexture(
GetExtension(filename) == "dds",
textureData->Data,
textureData->Length,
texture,
textureView,
filename
);
});
}

In the previous snippet, the lambda checks to see if the filename has an extension of "dds". If it does, you assume
that it is a DDS texture. If not, you use the Windows Imaging Component (WIC) APIs to discover the format and
decode the data as a bitmap. Either way, the result is a Texture2D bitmap (or an error).

void BasicLoader::CreateTexture(
_In_ bool decodeAsDDS,
_In_reads_bytes_(dataSize) byte* data,
_In_ uint32 dataSize,
_Out_opt_ ID3D11Texture2D** texture,
_Out_opt_ ID3D11ShaderResourceView** textureView,
_In_opt_ Platform::String^ debugName
)
{
ComPtr<ID3D11ShaderResourceView> shaderResourceView;
ComPtr<ID3D11Texture2D> texture2D;

if (decodeAsDDS)
{
ComPtr<ID3D11Resource> resource;

if (textureView == nullptr)
{
CreateDDSTextureFromMemory(
m_d3dDevice.Get(),
data,
dataSize,
&resource,
nullptr
);
}
else
{
CreateDDSTextureFromMemory(
m_d3dDevice.Get(),
data,
dataSize,
&resource,
&shaderResourceView
);
}

resource.As(&texture2D);
}
else
{
if (m_wicFactory.Get() == nullptr)
{
// A WIC factory object is required in order to load texture
// assets stored in non-DDS formats. If BasicLoader was not
// initialized with one, create one as needed.
CoCreateInstance(
CLSID_WICImagingFactory,
nullptr,
CLSCTX_INPROC_SERVER,
IID_PPV_ARGS(&m_wicFactory));
}

ComPtr<IWICStream> stream;
m_wicFactory->CreateStream(&stream);

stream->InitializeFromMemory(
data,
dataSize);

ComPtr<IWICBitmapDecoder> bitmapDecoder;
m_wicFactory->CreateDecoderFromStream(
stream.Get(),
nullptr,
WICDecodeMetadataCacheOnDemand,
&bitmapDecoder);

ComPtr<IWICBitmapFrameDecode> bitmapFrame;
bitmapDecoder->GetFrame(0, &bitmapFrame);

ComPtr<IWICFormatConverter> formatConverter;
m_wicFactory->CreateFormatConverter(&formatConverter);

formatConverter->Initialize(
bitmapFrame.Get(),
GUID_WICPixelFormat32bppPBGRA,
WICBitmapDitherTypeNone,
nullptr,
0.0,
WICBitmapPaletteTypeCustom);

uint32 width;
uint32 height;
bitmapFrame->GetSize(&width, &height);

std::unique_ptr<byte[]> bitmapPixels(new byte[width * height * 4]);


formatConverter->CopyPixels(
nullptr,
width * 4,
width * height * 4,
bitmapPixels.get());

D3D11_SUBRESOURCE_DATA initialData;
ZeroMemory(&initialData, sizeof(initialData));
initialData.pSysMem = bitmapPixels.get();
initialData.SysMemPitch = width * 4;
initialData.SysMemSlicePitch = 0;
CD3D11_TEXTURE2D_DESC textureDesc(
DXGI_FORMAT_B8G8R8A8_UNORM,
width,
height,
1,
1
);

m_d3dDevice->CreateTexture2D(
&textureDesc,
&initialData,
&texture2D);

if (textureView != nullptr)
{
CD3D11_SHADER_RESOURCE_VIEW_DESC shaderResourceViewDesc(
texture2D.Get(),
D3D11_SRV_DIMENSION_TEXTURE2D
);

m_d3dDevice->CreateShaderResourceView(
texture2D.Get(),
&shaderResourceViewDesc,
&shaderResourceView);
}
}

if (texture != nullptr)
{
*texture = texture2D.Detach();
}
if (textureView != nullptr)
{
*textureView = shaderResourceView.Detach();
}
}

When this code completes, you have a Texture2D in memory, loaded from an image file. As with meshes, you
probably have a lot of them in your game and in any given scene. Consider creating caches for regularly accessed
textures per-scene or per-level, rather than loading them all when the game or level starts.
(The CreateDDSTextureFromMemory method called in the above sample can be explored in full in Complete
code for DDSTextureLoader.)
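To act on that caching suggestion, a small map keyed by filename is often all you need. A minimal sketch, assuming single-threaded access; TextureCache and its members are hypothetical helpers, not part of BasicLoader:

#include <map>
#include <string>
#include <wrl/client.h>

class TextureCache
{
public:
    // Return a cached view if present; otherwise load synchronously and cache it.
    Microsoft::WRL::ComPtr<ID3D11ShaderResourceView> GetOrLoad(
        BasicLoader^ loader, Platform::String^ filename)
    {
        auto found = m_views.find(filename->Data());
        if (found != m_views.end())
        {
            return found->second;
        }
        Microsoft::WRL::ComPtr<ID3D11ShaderResourceView> view;
        loader->LoadTexture(filename, nullptr, &view);
        m_views[filename->Data()] = view;
        return view;
    }

private:
    std::map<std::wstring, Microsoft::WRL::ComPtr<ID3D11ShaderResourceView>> m_views;
};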
Also, individual textures or texture "skins" may map to specific mesh polygons or surfaces. This mapping data is
usually exported by the tool an artist or designer used to create the model and the textures. Make sure that you
capture this information as well when you load the exported data, as you will use it to map the correct textures to
the corresponding surfaces when you perform fragment shading.
Loading shaders
Shaders are compiled High Level Shader Language (HLSL) files that are loaded into memory and invoked at
specific stages of the graphics pipeline. The most common and essential shaders are the vertex and pixel shaders,
which process the individual vertices of your mesh and the pixels in the scene's viewport(s), respectively. The HLSL
code is executed to transform the geometry, apply lighting effects and textures, and perform post-processing on
the rendered scene.
A Direct3D game can have a number of different shaders, each one compiled into a separate CSO (Compiled
Shader Object, .cso) file. Normally, you don't have so many that you need to load them dynamically; in most
cases, you can simply load them when the game starts, or on a per-level basis (such as a shader for rain
effects).
The code in the BasicLoader class provides a number of overloads for different shaders, including vertex,
geometry, pixel, and hull shaders. The code below covers pixel shaders as an example. (You can review the
complete code in Complete code for BasicLoader.)

concurrency::task<void> LoadShaderAsync(
_In_ Platform::String^ filename,
_Out_ ID3D11PixelShader** shader
);

// ...

task<void> BasicLoader::LoadShaderAsync(
_In_ Platform::String^ filename,
_Out_ ID3D11PixelShader** shader
)
{
return m_basicReaderWriter->ReadDataAsync(filename).then([=](const Platform::Array<byte>^ bytecode)
{

m_d3dDevice->CreatePixelShader(
bytecode->Data,
bytecode->Length,
nullptr,
shader);
});
}

In this example, you use the BasicReaderWriter instance (m_basicReaderWriter) to read in the supplied
compiled shader object (.cso) file as a byte stream. Once that task completes, the lambda calls
ID3D11Device::CreatePixelShader with the byte data loaded from the file. Your callback must set some flag
indicating that the load was successful, and your code must check this flag before running the shader.
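A minimal sketch of that flag pattern follows; loader is a BasicLoader^ created earlier, and m_pixelShader, m_pixelShaderReady, and m_d3dContext are hypothetical members of your renderer class:

#include <atomic>

std::atomic<bool> m_pixelShaderReady{ false };
ID3D11PixelShader* m_pixelShader = nullptr;

// Kick off the load; the flag flips only after CreatePixelShader has run.
loader->LoadShaderAsync("SimplePixelShader.cso", &m_pixelShader).then([this]()
{
    m_pixelShaderReady = true;
});

// Later, in the render loop, skip drawing until the shader is in place.
if (m_pixelShaderReady)
{
    m_d3dContext->PSSetShader(m_pixelShader, nullptr, 0);
}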
Vertex shaders are a bit more complex. For a vertex shader, you also load a separate input layout that defines the
vertex data. The following code can be used to asynchronously load a vertex shader along with a custom vertex
input layout. Be sure that the vertex information that you load from your meshes can be correctly represented by
this input layout!
Let's create the input layout before loading the vertex shader.

void BasicLoader::CreateInputLayout(
_In_reads_bytes_(bytecodeSize) byte* bytecode,
_In_ uint32 bytecodeSize,
_In_reads_opt_(layoutDescNumElements) D3D11_INPUT_ELEMENT_DESC* layoutDesc,
_In_ uint32 layoutDescNumElements,
_Out_ ID3D11InputLayout** layout
)
{
if (layoutDesc == nullptr)
{
// If no input layout is specified, use the BasicVertex layout.
const D3D11_INPUT_ELEMENT_DESC basicVertexLayoutDesc[] =
{
{ "POSITION", 0, DXGI_FORMAT_R32G32B32_FLOAT, 0, 0, D3D11_INPUT_PER_VERTEX_DATA, 0 },
{ "NORMAL", 0, DXGI_FORMAT_R32G32B32_FLOAT, 0, 12, D3D11_INPUT_PER_VERTEX_DATA, 0 },
{ "TEXCOORD", 0, DXGI_FORMAT_R32G32_FLOAT, 0, 24, D3D11_INPUT_PER_VERTEX_DATA, 0 },
};

m_d3dDevice->CreateInputLayout(
basicVertexLayoutDesc,
ARRAYSIZE(basicVertexLayoutDesc),
bytecode,
bytecodeSize,
layout);
}
else
{
m_d3dDevice->CreateInputLayout(
layoutDesc,
layoutDescNumElements,
bytecode,
bytecodeSize,
layout);
}
}

In this particular layout, each vertex has the following data processed by the vertex shader:
A 3D coordinate position (x, y, z) in the model's coordinate space, represented as a trio of 32-bit floating point
values.
A normal vector for the vertex, also represented as three 32-bit floating point values.
A transformed 2D texture coordinate value (u, v), represented as a pair of 32-bit floating-point values.
These per-vertex input elements are called HLSL semantics, and they are a set of defined registers used to pass
data to and from your compiled shader object. Your pipeline runs the vertex shader once for every vertex in the
mesh that you've loaded. The semantics define the input to (and output from) the vertex shader as it runs, and
provide this data for your per-vertex computations in your shader's HLSL code.
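For reference, a vertex structure consistent with this layout looks like the following; the offsets 12 and 24 in the D3D11_INPUT_ELEMENT_DESC array above are the byte offsets of the second and third fields (the field names here are illustrative):

#include <DirectXMath.h>

// 32 bytes per vertex: POSITION at offset 0, NORMAL at 12, TEXCOORD at 24.
struct BasicVertex
{
    DirectX::XMFLOAT3 pos;  // position (x, y, z)
    DirectX::XMFLOAT3 norm; // surface normal
    DirectX::XMFLOAT2 tex;  // texture coordinate (u, v)
};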
Now, load the vertex shader object.

concurrency::task<void> LoadShaderAsync(
_In_ Platform::String^ filename,
_In_reads_opt_(layoutDescNumElements) D3D11_INPUT_ELEMENT_DESC layoutDesc[],
_In_ uint32 layoutDescNumElements,
_Out_ ID3D11VertexShader** shader,
_Out_opt_ ID3D11InputLayout** layout
);

// ...

task<void> BasicLoader::LoadShaderAsync(
_In_ Platform::String^ filename,
_In_reads_opt_(layoutDescNumElements) D3D11_INPUT_ELEMENT_DESC layoutDesc[],
_In_ uint32 layoutDescNumElements,
_Out_ ID3D11VertexShader** shader,
_Out_opt_ ID3D11InputLayout** layout
)
{
// This method assumes that the lifetime of input arguments may be shorter
// than the duration of this task. In order to ensure accurate results, a
// copy of all arguments passed by pointer must be made. The method then
// ensures that the lifetime of the copied data exceeds that of the task.

// Create copies of the layoutDesc array as well as the SemanticName strings,
// both of which are pointers to data whose lifetimes may be shorter than that
// of this method's task.
shared_ptr<vector<D3D11_INPUT_ELEMENT_DESC>> layoutDescCopy;
shared_ptr<vector<string>> layoutDescSemanticNamesCopy;
if (layoutDesc != nullptr)
{
layoutDescCopy.reset(
new vector<D3D11_INPUT_ELEMENT_DESC>(
layoutDesc,
layoutDesc + layoutDescNumElements
)
);

layoutDescSemanticNamesCopy.reset(
new vector<string>(layoutDescNumElements)
);

for (uint32 i = 0; i < layoutDescNumElements; i++)
{
layoutDescSemanticNamesCopy->at(i).assign(layoutDesc[i].SemanticName);
}
}

return m_basicReaderWriter->ReadDataAsync(filename).then([=](const Platform::Array<byte>^ bytecode)
{
m_d3dDevice->CreateVertexShader(
bytecode->Data,
bytecode->Length,
nullptr,
shader);

if (layout != nullptr)
{
if (layoutDesc != nullptr)
{
// Reassign the SemanticName elements of the layoutDesc array copy to point
// to the corresponding copied strings. Performing the assignment inside the
// lambda body ensures that the lambda will take a reference to the shared_ptr
// that holds the data. This will guarantee that the data is still valid when
// CreateInputLayout is called.
for (uint32 i = 0; i < layoutDescNumElements; i++)
{
layoutDescCopy->at(i).SemanticName = layoutDescSemanticNamesCopy->at(i).c_str();
}
}

CreateInputLayout(
bytecode->Data,
bytecode->Length,
layoutDesc == nullptr ? nullptr : layoutDescCopy->data(),
layoutDescNumElements,
layout);
}
});
}

In this code, once you've read in the byte data for the vertex shader's CSO file, you create the vertex shader by
calling ID3D11Device::CreateVertexShader. After that, you create your input layout for the shader in the same
lambda.
Other shader types, such as hull and geometry shaders, can also require specific configuration. Complete code for
a variety of shader loading methods is provided in Complete code for BasicLoader and in the Direct3D resource
loading sample.

Remarks
At this point, you should understand and be able to create or modify methods for asynchronously loading
common game resources and assets, such as meshes, textures, and compiled shaders.

Related topics
Direct3D resource loading sample
Complete code for BasicLoader
Complete code for BasicReaderWriter
Complete code for DDSTextureLoader
Complete code for BasicLoader

[ Updated for UWP apps on Windows 10. For Windows 8.x articles, see the archive ]
Complete code for a class and methods that convert and load common graphics resources, such as meshes,
textures, and various shader objects.
This topic contains these sections:
Technologies
Requirements
View the code (C++)

Download location
This sample is not available for download.

Technologies
Programming languages - C++
Programming models - Windows Runtime

Requirements
Minimum supported client - Windows 10
Minimum supported server - Windows Server 2016 Technical Preview

View the code (C++)


BasicLoader.h
//// THIS CODE AND INFORMATION IS PROVIDED "AS IS" WITHOUT WARRANTY OF
//// ANY KIND, EITHER EXPRESSED OR IMPLIED, INCLUDING BUT NOT LIMITED TO
//// THE IMPLIED WARRANTIES OF MERCHANTABILITY AND/OR FITNESS FOR A
//// PARTICULAR PURPOSE.
////
//// Copyright (c) Microsoft Corporation. All rights reserved

#pragma once

#include "BasicReaderWriter.h"

// A simple loader class that provides support for loading shaders, textures,
// and meshes from files on disk. Provides synchronous and asynchronous methods.
ref class BasicLoader
{
internal:
BasicLoader(
_In_ ID3D11Device* d3dDevice,
_In_opt_ IWICImagingFactory2* wicFactory = nullptr
);

void LoadTexture(
_In_ Platform::String^ filename,
_Out_opt_ ID3D11Texture2D** texture,
_Out_opt_ ID3D11ShaderResourceView** textureView
);

concurrency::task<void> LoadTextureAsync(
_In_ Platform::String^ filename,
_Out_opt_ ID3D11Texture2D** texture,
_Out_opt_ ID3D11ShaderResourceView** textureView
);

void LoadShader(
_In_ Platform::String^ filename,
_In_reads_opt_(layoutDescNumElements) D3D11_INPUT_ELEMENT_DESC layoutDesc[],
_In_ uint32 layoutDescNumElements,
_Out_ ID3D11VertexShader** shader,
_Out_opt_ ID3D11InputLayout** layout
);

concurrency::task<void> LoadShaderAsync(
_In_ Platform::String^ filename,
_In_reads_opt_(layoutDescNumElements) D3D11_INPUT_ELEMENT_DESC layoutDesc[],
_In_ uint32 layoutDescNumElements,
_Out_ ID3D11VertexShader** shader,
_Out_opt_ ID3D11InputLayout** layout
);

void LoadShader(
_In_ Platform::String^ filename,
_Out_ ID3D11PixelShader** shader
);

concurrency::task<void> LoadShaderAsync(
_In_ Platform::String^ filename,
_Out_ ID3D11PixelShader** shader
);

void LoadShader(
_In_ Platform::String^ filename,
_Out_ ID3D11ComputeShader** shader
);

concurrency::task<void> LoadShaderAsync(
_In_ Platform::String^ filename,
_Out_ ID3D11ComputeShader** shader
);

void LoadShader(
_In_ Platform::String^ filename,
_Out_ ID3D11GeometryShader** shader
);

concurrency::task<void> LoadShaderAsync(
_In_ Platform::String^ filename,
_Out_ ID3D11GeometryShader** shader
);

void LoadShader(
_In_ Platform::String^ filename,
_In_reads_opt_(numEntries) const D3D11_SO_DECLARATION_ENTRY* streamOutDeclaration,
_In_ uint32 numEntries,
_In_reads_opt_(numStrides) const uint32* bufferStrides,
_In_ uint32 numStrides,
_In_ uint32 rasterizedStream,
_Out_ ID3D11GeometryShader** shader
);

concurrency::task<void> LoadShaderAsync(
_In_ Platform::String^ filename,
_In_reads_opt_(numEntries) const D3D11_SO_DECLARATION_ENTRY* streamOutDeclaration,
_In_ uint32 numEntries,
_In_reads_opt_(numStrides) const uint32* bufferStrides,
_In_ uint32 numStrides,
_In_ uint32 rasterizedStream,
_Out_ ID3D11GeometryShader** shader
);

void LoadShader(
_In_ Platform::String^ filename,
_Out_ ID3D11HullShader** shader
);

concurrency::task<void> LoadShaderAsync(
_In_ Platform::String^ filename,
_Out_ ID3D11HullShader** shader
);

void LoadShader(
_In_ Platform::String^ filename,
_Out_ ID3D11DomainShader** shader
);

concurrency::task<void> LoadShaderAsync(
_In_ Platform::String^ filename,
_Out_ ID3D11DomainShader** shader
);

void LoadMesh(
_In_ Platform::String^ filename,
_Out_ ID3D11Buffer** vertexBuffer,
_Out_ ID3D11Buffer** indexBuffer,
_Out_opt_ uint32* vertexCount,
_Out_opt_ uint32* indexCount
);

concurrency::task<void> LoadMeshAsync(
_In_ Platform::String^ filename,
_Out_ ID3D11Buffer** vertexBuffer,
_Out_ ID3D11Buffer** indexBuffer,
_Out_opt_ uint32* vertexCount,
_Out_opt_ uint32* indexCount
);

private:
Microsoft::WRL::ComPtr<ID3D11Device> m_d3dDevice;
Microsoft::WRL::ComPtr<IWICImagingFactory2> m_wicFactory;
BasicReaderWriter^ m_basicReaderWriter;

template <class DeviceChildType>
inline void SetDebugName(
_In_ DeviceChildType* object,
_In_ Platform::String^ name
);

Platform::String^ GetExtension(
_In_ Platform::String^ filename
);

void CreateTexture(
_In_ bool decodeAsDDS,
_In_reads_bytes_(dataSize) byte* data,
_In_ uint32 dataSize,
_Out_opt_ ID3D11Texture2D** texture,
_Out_opt_ ID3D11ShaderResourceView** textureView,
_In_opt_ Platform::String^ debugName
);

void CreateInputLayout(
_In_reads_bytes_(bytecodeSize) byte* bytecode,
_In_ uint32 bytecodeSize,
_In_reads_opt_(layoutDescNumElements) D3D11_INPUT_ELEMENT_DESC* layoutDesc,
_In_ uint32 layoutDescNumElements,
_Out_ ID3D11InputLayout** layout
);

void CreateMesh(
_In_ byte* meshData,
_Out_ ID3D11Buffer** vertexBuffer,
_Out_ ID3D11Buffer** indexBuffer,
_Out_opt_ uint32* vertexCount,
_Out_opt_ uint32* indexCount,
_In_opt_ Platform::String^ debugName
);
};

BasicLoader.cpp
//// THIS CODE AND INFORMATION IS PROVIDED "AS IS" WITHOUT WARRANTY OF
//// ANY KIND, EITHER EXPRESSED OR IMPLIED, INCLUDING BUT NOT LIMITED TO
//// THE IMPLIED WARRANTIES OF MERCHANTABILITY AND/OR FITNESS FOR A
//// PARTICULAR PURPOSE.
////
//// Copyright (c) Microsoft Corporation. All rights reserved

#include "pch.h"
#include "BasicLoader.h"
#include "BasicShapes.h"
#include "DDSTextureLoader.h"
#include "DirectXSample.h"
#include <memory>

using namespace Microsoft::WRL;
using namespace Windows::Storage;
using namespace Windows::Storage::Streams;
using namespace Windows::Foundation;
using namespace Windows::ApplicationModel;
using namespace std;
using namespace concurrency;

BasicLoader::BasicLoader(
_In_ ID3D11Device* d3dDevice,
_In_opt_ IWICImagingFactory2* wicFactory
):
m_d3dDevice(d3dDevice),
m_wicFactory(wicFactory)
{
// Create a new BasicReaderWriter to do raw file I/O.
m_basicReaderWriter = ref new BasicReaderWriter();
}

template <class DeviceChildType>
inline void BasicLoader::SetDebugName(
_In_ DeviceChildType* object,
_In_ Platform::String^ name
)
{
#if defined(_DEBUG)
// Only assign debug names in debug builds.

char nameString[1024];
int nameStringLength = WideCharToMultiByte(
CP_ACP,
0,
name->Data(),
-1,
nameString,
1024,
nullptr,
nullptr
);

if (nameStringLength == 0)
{
char defaultNameString[] = "BasicLoaderObject";
DX::ThrowIfFailed(
object->SetPrivateData(
WKPDID_D3DDebugObjectName,
sizeof(defaultNameString) - 1,
defaultNameString
)
);
}
else
{
DX::ThrowIfFailed(
object->SetPrivateData(
WKPDID_D3DDebugObjectName,
nameStringLength - 1,
nameString
)
);
}
#endif
}

Platform::String^ BasicLoader::GetExtension(
_In_ Platform::String^ filename
)
{
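// Scan backward through the filename for the last '.'; if one is found,
// return the characters after it, lowercased. Returns an empty string
// when the filename has no extension.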
int lastDotIndex = -1;
for (int i = filename->Length() - 1; i >= 0 && lastDotIndex == -1; i--)
{
if (*(filename->Data() + i) == '.')
{
lastDotIndex = i;
}
}
if (lastDotIndex != -1)
{
std::unique_ptr<wchar_t[]> extension(new wchar_t[filename->Length() - lastDotIndex]);
for (unsigned int i = 0; i < filename->Length() - lastDotIndex; i++)
{
extension[i] = tolower(*(filename->Data() + lastDotIndex + 1 + i));
}
return ref new Platform::String(extension.get());
}
return "";
}

void BasicLoader::CreateTexture(
_In_ bool decodeAsDDS,
_In_reads_bytes_(dataSize) byte* data,
_In_ uint32 dataSize,
_Out_opt_ ID3D11Texture2D** texture,
_Out_opt_ ID3D11ShaderResourceView** textureView,
_In_opt_ Platform::String^ debugName
)
{
ComPtr<ID3D11ShaderResourceView> shaderResourceView;
ComPtr<ID3D11Texture2D> texture2D;

if (decodeAsDDS)
{
ComPtr<ID3D11Resource> resource;

if (textureView == nullptr)
{
CreateDDSTextureFromMemory(
m_d3dDevice.Get(),
data,
dataSize,
&resource,
nullptr
);
}
else
{
CreateDDSTextureFromMemory(
m_d3dDevice.Get(),
data,
dataSize,
&resource,
&shaderResourceView
);
}

DX::ThrowIfFailed(
resource.As(&texture2D)
);
}
else
{
if (m_wicFactory.Get() == nullptr)
{
// A WIC factory object is required in order to load texture
// assets stored in non-DDS formats. If BasicLoader was not
// initialized with one, create one as needed.
DX::ThrowIfFailed(
CoCreateInstance(
CLSID_WICImagingFactory,
nullptr,
CLSCTX_INPROC_SERVER,
IID_PPV_ARGS(&m_wicFactory)
)
);
}

ComPtr<IWICStream> stream;
DX::ThrowIfFailed(
m_wicFactory->CreateStream(&stream)
);

DX::ThrowIfFailed(
stream->InitializeFromMemory(
data,
dataSize
)
);

ComPtr<IWICBitmapDecoder> bitmapDecoder;
DX::ThrowIfFailed(
m_wicFactory->CreateDecoderFromStream(
stream.Get(),
nullptr,
WICDecodeMetadataCacheOnDemand,
&bitmapDecoder
)
);

ComPtr<IWICBitmapFrameDecode> bitmapFrame;
DX::ThrowIfFailed(
bitmapDecoder->GetFrame(0, &bitmapFrame)
);

ComPtr<IWICFormatConverter> formatConverter;
DX::ThrowIfFailed(
m_wicFactory->CreateFormatConverter(&formatConverter)
);

DX::ThrowIfFailed(
formatConverter->Initialize(
bitmapFrame.Get(),
GUID_WICPixelFormat32bppPBGRA,
WICBitmapDitherTypeNone,
nullptr,
0.0,
WICBitmapPaletteTypeCustom
)
);

uint32 width;
uint32 height;
DX::ThrowIfFailed(
bitmapFrame->GetSize(&width, &height)
);

std::unique_ptr<byte[]> bitmapPixels(new byte[width * height * 4]);


DX::ThrowIfFailed(
formatConverter->CopyPixels(
nullptr,
width * 4,
width * height * 4,
bitmapPixels.get()
)
);

D3D11_SUBRESOURCE_DATA initialData;
ZeroMemory(&initialData, sizeof(initialData));
initialData.pSysMem = bitmapPixels.get();
initialData.SysMemPitch = width * 4;
initialData.SysMemSlicePitch = 0;

CD3D11_TEXTURE2D_DESC textureDesc(
DXGI_FORMAT_B8G8R8A8_UNORM,
width,
height,
1,
1
);

DX::ThrowIfFailed(
m_d3dDevice->CreateTexture2D(
&textureDesc,
&initialData,
&texture2D
)
);

if (textureView != nullptr)
{
CD3D11_SHADER_RESOURCE_VIEW_DESC shaderResourceViewDesc(
texture2D.Get(),
D3D11_SRV_DIMENSION_TEXTURE2D
);

DX::ThrowIfFailed(
m_d3dDevice->CreateShaderResourceView(
texture2D.Get(),
&shaderResourceViewDesc,
&shaderResourceView
)
);
}
}

SetDebugName(texture2D.Get(), debugName);

if (texture != nullptr)
{
*texture = texture2D.Detach();
}
if (textureView != nullptr)
{
*textureView = shaderResourceView.Detach();
}
}

void BasicLoader::CreateInputLayout(
_In_reads_bytes_(bytecodeSize) byte* bytecode,
_In_ uint32 bytecodeSize,
_In_reads_opt_(layoutDescNumElements) D3D11_INPUT_ELEMENT_DESC* layoutDesc,
_In_ uint32 layoutDescNumElements,
_Out_ ID3D11InputLayout** layout
)
{
if (layoutDesc == nullptr)
{
// If no input layout is specified, use the BasicVertex layout.
const D3D11_INPUT_ELEMENT_DESC basicVertexLayoutDesc[] =
{
{ "POSITION", 0, DXGI_FORMAT_R32G32B32_FLOAT, 0, 0, D3D11_INPUT_PER_VERTEX_DATA, 0 },
{ "NORMAL", 0, DXGI_FORMAT_R32G32B32_FLOAT, 0, 12, D3D11_INPUT_PER_VERTEX_DATA, 0 },
{ "TEXCOORD", 0, DXGI_FORMAT_R32G32_FLOAT, 0, 24, D3D11_INPUT_PER_VERTEX_DATA, 0 },
};

DX::ThrowIfFailed(
m_d3dDevice->CreateInputLayout(
basicVertexLayoutDesc,
ARRAYSIZE(basicVertexLayoutDesc),
bytecode,
bytecodeSize,
layout
)
);
}
else
{
DX::ThrowIfFailed(
m_d3dDevice->CreateInputLayout(
layoutDesc,
layoutDescNumElements,
bytecode,
bytecodeSize,
layout
)
);
}
}

void BasicLoader::CreateMesh(
_In_ byte* meshData,
_Out_ ID3D11Buffer** vertexBuffer,
_Out_ ID3D11Buffer** indexBuffer,
_Out_opt_ uint32* vertexCount,
_Out_opt_ uint32* indexCount,
_In_opt_ Platform::String^ debugName
)
{
// The first 4 bytes of the BasicMesh format define the number of vertices in the mesh.
uint32 numVertices = *reinterpret_cast<uint32*>(meshData);
// The following 4 bytes define the number of indices in the mesh.
uint32 numIndices = *reinterpret_cast<uint32*>(meshData + sizeof(uint32));

// The next segment of the BasicMesh format contains the vertices of the mesh.
BasicVertex* vertices = reinterpret_cast<BasicVertex*>(meshData + sizeof(uint32) * 2);

// The last segment of the BasicMesh format contains the indices of the mesh.
uint16* indices = reinterpret_cast<uint16*>(meshData + sizeof(uint32) * 2 + sizeof(BasicVertex) * numVertices);

// Create the vertex and index buffers with the mesh data.

D3D11_SUBRESOURCE_DATA vertexBufferData = {0};
vertexBufferData.pSysMem = vertices;
vertexBufferData.SysMemPitch = 0;
vertexBufferData.SysMemSlicePitch = 0;
CD3D11_BUFFER_DESC vertexBufferDesc(numVertices * sizeof(BasicVertex), D3D11_BIND_VERTEX_BUFFER);
DX::ThrowIfFailed(
m_d3dDevice->CreateBuffer(
&vertexBufferDesc,
&vertexBufferData,
vertexBuffer
)
);

D3D11_SUBRESOURCE_DATA indexBufferData = {0};
indexBufferData.pSysMem = indices;
indexBufferData.SysMemPitch = 0;
indexBufferData.SysMemSlicePitch = 0;
CD3D11_BUFFER_DESC indexBufferDesc(numIndices * sizeof(uint16), D3D11_BIND_INDEX_BUFFER);
DX::ThrowIfFailed(
m_d3dDevice->CreateBuffer(
&indexBufferDesc,
&indexBufferData,
indexBuffer
)
);

SetDebugName(*vertexBuffer, Platform::String::Concat(debugName, "_VertexBuffer"));
SetDebugName(*indexBuffer, Platform::String::Concat(debugName, "_IndexBuffer"));

if (vertexCount != nullptr)
{
*vertexCount = numVertices;
}
if (indexCount != nullptr)
{
*indexCount = numIndices;
}
}

void BasicLoader::LoadTexture(
_In_ Platform::String^ filename,
_Out_opt_ ID3D11Texture2D** texture,
_Out_opt_ ID3D11ShaderResourceView** textureView
)
{
Platform::Array<byte>^ textureData = m_basicReaderWriter->ReadData(filename);

CreateTexture(
GetExtension(filename) == "dds",
textureData->Data,
textureData->Length,
texture,
textureView,
filename
);
}
task<void> BasicLoader::LoadTextureAsync(
_In_ Platform::String^ filename,
_Out_opt_ ID3D11Texture2D** texture,
_Out_opt_ ID3D11ShaderResourceView** textureView
)
{
return m_basicReaderWriter->ReadDataAsync(filename).then([=](const Platform::Array<byte>^ textureData)
{
CreateTexture(
GetExtension(filename) == "dds",
textureData->Data,
textureData->Length,
texture,
textureView,
filename
);
});
}

void BasicLoader::LoadShader(
_In_ Platform::String^ filename,
_In_reads_opt_(layoutDescNumElements) D3D11_INPUT_ELEMENT_DESC layoutDesc[],
_In_ uint32 layoutDescNumElements,
_Out_ ID3D11VertexShader** shader,
_Out_opt_ ID3D11InputLayout** layout
)
{
Platform::Array<byte>^ bytecode = m_basicReaderWriter->ReadData(filename);

DX::ThrowIfFailed(
m_d3dDevice->CreateVertexShader(
bytecode->Data,
bytecode->Length,
nullptr,
shader
)
);

SetDebugName(*shader, filename);

if (layout != nullptr)
{
CreateInputLayout(
bytecode->Data,
bytecode->Length,
layoutDesc,
layoutDescNumElements,
layout
);

SetDebugName(*layout, filename);
}
}

task<void> BasicLoader::LoadShaderAsync(
_In_ Platform::String^ filename,
_In_reads_opt_(layoutDescNumElements) D3D11_INPUT_ELEMENT_DESC layoutDesc[],
_In_ uint32 layoutDescNumElements,
_Out_ ID3D11VertexShader** shader,
_Out_opt_ ID3D11InputLayout** layout
)
{
// This method assumes that the lifetime of input arguments may be shorter
// than the duration of this task. In order to ensure accurate results, a
// copy of all arguments passed by pointer must be made. The method then
// ensures that the lifetime of the copied data exceeds that of the task.

// Create copies of the layoutDesc array as well as the SemanticName strings,
// both of which are pointers to data whose lifetimes may be shorter than that
// of this method's task.
shared_ptr<vector<D3D11_INPUT_ELEMENT_DESC>> layoutDescCopy;
shared_ptr<vector<string>> layoutDescSemanticNamesCopy;
if (layoutDesc != nullptr)
{
layoutDescCopy.reset(
new vector<D3D11_INPUT_ELEMENT_DESC>(
layoutDesc,
layoutDesc + layoutDescNumElements
)
);

layoutDescSemanticNamesCopy.reset(
new vector<string>(layoutDescNumElements)
);

for (uint32 i = 0; i < layoutDescNumElements; i++)
{
layoutDescSemanticNamesCopy->at(i).assign(layoutDesc[i].SemanticName);
}
}

return m_basicReaderWriter->ReadDataAsync(filename).then([=](const Platform::Array<byte>^ bytecode)
{
DX::ThrowIfFailed(
m_d3dDevice->CreateVertexShader(
bytecode->Data,
bytecode->Length,
nullptr,
shader
)
);

SetDebugName(*shader, filename);

if (layout != nullptr)
{
if (layoutDesc != nullptr)
{
// Reassign the SemanticName elements of the layoutDesc array copy to point
// to the corresponding copied strings. Performing the assignment inside the
// lambda body ensures that the lambda will take a reference to the shared_ptr
// that holds the data. This will guarantee that the data is still valid when
// CreateInputLayout is called.
for (uint32 i = 0; i < layoutDescNumElements; i++)
{
layoutDescCopy->at(i).SemanticName = layoutDescSemanticNamesCopy->at(i).c_str();
}
}

CreateInputLayout(
bytecode->Data,
bytecode->Length,
layoutDesc == nullptr ? nullptr : layoutDescCopy->data(),
layoutDescNumElements,
layout
);

SetDebugName(*layout, filename);
}
});
}

void BasicLoader::LoadShader(
_In_ Platform::String^ filename,
_Out_ ID3D11PixelShader** shader
)
{
Platform::Array<byte>^ bytecode = m_basicReaderWriter->ReadData(filename);

DX::ThrowIfFailed(
m_d3dDevice->CreatePixelShader(
bytecode->Data,
bytecode->Length,
nullptr,
shader
)
);

SetDebugName(*shader, filename);
}

task<void> BasicLoader::LoadShaderAsync(
_In_ Platform::String^ filename,
_Out_ ID3D11PixelShader** shader
)
{
return m_basicReaderWriter->ReadDataAsync(filename).then([=](const Platform::Array<byte>^ bytecode)
{
DX::ThrowIfFailed(
m_d3dDevice->CreatePixelShader(
bytecode->Data,
bytecode->Length,
nullptr,
shader
)
);

SetDebugName(*shader, filename);
});
}

void BasicLoader::LoadShader(
_In_ Platform::String^ filename,
_Out_ ID3D11ComputeShader** shader
)
{
Platform::Array<byte>^ bytecode = m_basicReaderWriter->ReadData(filename);

DX::ThrowIfFailed(
m_d3dDevice->CreateComputeShader(
bytecode->Data,
bytecode->Length,
nullptr,
shader
)
);

SetDebugName(*shader, filename);
}

task<void> BasicLoader::LoadShaderAsync(
_In_ Platform::String^ filename,
_Out_ ID3D11ComputeShader** shader
)
{
return m_basicReaderWriter->ReadDataAsync(filename).then([=](const Platform::Array<byte>^ bytecode)
{
DX::ThrowIfFailed(
m_d3dDevice->CreateComputeShader(
bytecode->Data,
bytecode->Length,
nullptr,
shader
)
);

SetDebugName(*shader, filename);
});
}

void BasicLoader::LoadShader(
_In_ Platform::String^ filename,
_Out_ ID3D11GeometryShader** shader
)
{
Platform::Array<byte>^ bytecode = m_basicReaderWriter->ReadData(filename);

DX::ThrowIfFailed(
m_d3dDevice->CreateGeometryShader(
bytecode->Data,
bytecode->Length,
nullptr,
shader
)
);

SetDebugName(*shader, filename);
}

task<void> BasicLoader::LoadShaderAsync(
_In_ Platform::String^ filename,
_Out_ ID3D11GeometryShader** shader
)
{
return m_basicReaderWriter->ReadDataAsync(filename).then([=](const Platform::Array<byte>^ bytecode)
{
DX::ThrowIfFailed(
m_d3dDevice->CreateGeometryShader(
bytecode->Data,
bytecode->Length,
nullptr,
shader
)
);

SetDebugName(*shader, filename);
});
}

void BasicLoader::LoadShader(
_In_ Platform::String^ filename,
_In_reads_opt_(numEntries) const D3D11_SO_DECLARATION_ENTRY* streamOutDeclaration,
_In_ uint32 numEntries,
_In_reads_opt_(numStrides) const uint32* bufferStrides,
_In_ uint32 numStrides,
_In_ uint32 rasterizedStream,
_Out_ ID3D11GeometryShader** shader
)
{
Platform::Array<byte>^ bytecode = m_basicReaderWriter->ReadData(filename);

DX::ThrowIfFailed(
m_d3dDevice->CreateGeometryShaderWithStreamOutput(
bytecode->Data,
bytecode->Length,
streamOutDeclaration,
numEntries,
bufferStrides,
numStrides,
rasterizedStream,
nullptr,
shader
)
);

SetDebugName(*shader, filename);
}

task<void> BasicLoader::LoadShaderAsync(
_In_ Platform::String^ filename,
_In_reads_opt_(numEntries) const D3D11_SO_DECLARATION_ENTRY* streamOutDeclaration,
_In_ uint32 numEntries,
_In_reads_opt_(numStrides) const uint32* bufferStrides,
_In_ uint32 numStrides,
_In_ uint32 rasterizedStream,
_Out_ ID3D11GeometryShader** shader
)
{
// This method assumes that the lifetime of input arguments may be shorter
// than the duration of this task. In order to ensure accurate results, a
// copy of all arguments passed by pointer must be made. The method then
// ensures that the lifetime of the copied data exceeds that of the task.

// Create copies of the streamOutDeclaration array as well as the SemanticName
// strings, both of which are pointers to data whose lifetimes may be shorter
// than that of this method's task.
shared_ptr<vector<D3D11_SO_DECLARATION_ENTRY>> streamOutDeclarationCopy;
shared_ptr<vector<string>> streamOutDeclarationSemanticNamesCopy;
if (streamOutDeclaration != nullptr)
{
streamOutDeclarationCopy.reset(
new vector<D3D11_SO_DECLARATION_ENTRY>(
streamOutDeclaration,
streamOutDeclaration + numEntries
)
);

streamOutDeclarationSemanticNamesCopy.reset(
new vector<string>(numEntries)
);

for (uint32 i = 0; i < numEntries; i++)
{
streamOutDeclarationSemanticNamesCopy->at(i).assign(streamOutDeclaration[i].SemanticName);
}
}

// Create a copy of the bufferStrides array, which is a pointer to data
// whose lifetime may be shorter than that of this method's task.
shared_ptr<vector<uint32>> bufferStridesCopy;
if (bufferStrides != nullptr)
{
bufferStridesCopy.reset(
new vector<uint32>(
bufferStrides,
bufferStrides + numStrides
)
);
}

return m_basicReaderWriter->ReadDataAsync(filename).then([=](const Platform::Array<byte>^ bytecode)
{
if (streamOutDeclaration != nullptr)
{
// Reassign the SemanticName elements of the streamOutDeclaration array copy to
// point to the corresponding copied strings. Performing the assignment inside the
// lambda body ensures that the lambda will take a reference to the shared_ptr
// that holds the data. This will guarantee that the data is still valid when
// CreateGeometryShaderWithStreamOutput is called.
for (uint32 i = 0; i < numEntries; i++)
{
streamOutDeclarationCopy->at(i).SemanticName = streamOutDeclarationSemanticNamesCopy->at(i).c_str();
}
}
DX::ThrowIfFailed(
m_d3dDevice->CreateGeometryShaderWithStreamOutput(
bytecode->Data,
bytecode->Length,
streamOutDeclaration == nullptr ? nullptr : streamOutDeclarationCopy->data(),
numEntries,
bufferStrides == nullptr ? nullptr : bufferStridesCopy->data(),
numStrides,
rasterizedStream,
nullptr,
shader
)
);

SetDebugName(*shader, filename);
});
}

void BasicLoader::LoadShader(
_In_ Platform::String^ filename,
_Out_ ID3D11HullShader** shader
)
{
Platform::Array<byte>^ bytecode = m_basicReaderWriter->ReadData(filename);

DX::ThrowIfFailed(
m_d3dDevice->CreateHullShader(
bytecode->Data,
bytecode->Length,
nullptr,
shader
)
);

SetDebugName(*shader, filename);
}

task<void> BasicLoader::LoadShaderAsync(
_In_ Platform::String^ filename,
_Out_ ID3D11HullShader** shader
)
{
return m_basicReaderWriter->ReadDataAsync(filename).then([=](const Platform::Array<byte>^ bytecode)
{
DX::ThrowIfFailed(
m_d3dDevice->CreateHullShader(
bytecode->Data,
bytecode->Length,
nullptr,
shader
)
);

SetDebugName(*shader, filename);
});
}

void BasicLoader::LoadShader(
_In_ Platform::String^ filename,
_Out_ ID3D11DomainShader** shader
)
{
Platform::Array<byte>^ bytecode = m_basicReaderWriter->ReadData(filename);

DX::ThrowIfFailed(
m_d3dDevice->CreateDomainShader(
bytecode->Data,
bytecode->Length,
nullptr,
shader
)
);

SetDebugName(*shader, filename);
}

task<void> BasicLoader::LoadShaderAsync(
_In_ Platform::String^ filename,
_Out_ ID3D11DomainShader** shader
)
{
return m_basicReaderWriter->ReadDataAsync(filename).then([=](const Platform::Array<byte>^ bytecode)
{
DX::ThrowIfFailed(
m_d3dDevice->CreateDomainShader(
bytecode->Data,
bytecode->Length,
nullptr,
shader
)
);

SetDebugName(*shader, filename);
});
}

void BasicLoader::LoadMesh(
_In_ Platform::String^ filename,
_Out_ ID3D11Buffer** vertexBuffer,
_Out_ ID3D11Buffer** indexBuffer,
_Out_opt_ uint32* vertexCount,
_Out_opt_ uint32* indexCount
)
{
Platform::Array<byte>^ meshData = m_basicReaderWriter->ReadData(filename);

CreateMesh(
meshData->Data,
vertexBuffer,
indexBuffer,
vertexCount,
indexCount,
filename
);
}

task<void> BasicLoader::LoadMeshAsync(
_In_ Platform::String^ filename,
_Out_ ID3D11Buffer** vertexBuffer,
_Out_ ID3D11Buffer** indexBuffer,
_Out_opt_ uint32* vertexCount,
_Out_opt_ uint32* indexCount
)
{
return m_basicReaderWriter->ReadDataAsync(filename).then([=](const Platform::Array<byte>^ meshData)
{
CreateMesh(
meshData->Data,
vertexBuffer,
indexBuffer,
vertexCount,
indexCount,
filename
);
});
}
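As a rough usage sketch (assuming an existing ID3D11Device and asset files packaged with the app), a renderer might drive BasicLoader like this; the member names are illustrative:

auto loader = ref new BasicLoader(m_d3dDevice.Get());

// Load a vertex shader with the default BasicVertex input layout,
// and a DDS texture, in parallel.
auto shaderTask = loader->LoadShaderAsync(
    "SimpleVertexShader.cso",
    nullptr,
    0,
    &m_vertexShader,
    &m_inputLayout);

auto textureTask = loader->LoadTextureAsync(
    "stone.dds",
    nullptr,
    &m_textureView);

// Don't draw with these resources until both loads have completed.
(shaderTask && textureTask).then([this]()
{
    m_resourcesLoaded = true;
});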
Complete code for BasicReaderWriter

[ Updated for UWP apps on Windows 10. For Windows 8.x articles, see the archive ]
Complete code for a class and methods for reading and writing binary data files in general. Used by the
BasicLoader class.
This topic contains these sections:
Technologies
Requirements
View the code (C++)

Download location
This sample is not available for download.

Technologies
Programming languages - C++
Programming models - Windows Runtime

Requirements
Minimum supported client - Windows 10
Minimum supported server - Windows Server 2016 Technical Preview

View the code (C++)


BasicReaderWriter.h
//// THIS CODE AND INFORMATION IS PROVIDED "AS IS" WITHOUT WARRANTY OF
//// ANY KIND, EITHER EXPRESSED OR IMPLIED, INCLUDING BUT NOT LIMITED TO
//// THE IMPLIED WARRANTIES OF MERCHANTABILITY AND/OR FITNESS FOR A
//// PARTICULAR PURPOSE.
////
//// Copyright (c) Microsoft Corporation. All rights reserved

#pragma once

#include <ppltasks.h>

// A simple reader/writer class that provides support for reading and writing
// files on disk. Provides synchronous and asynchronous methods.
ref class BasicReaderWriter
{
private:
Windows::Storage::StorageFolder^ m_location;
Platform::String^ m_locationPath;

internal:
BasicReaderWriter();
BasicReaderWriter(
_In_ Windows::Storage::StorageFolder^ folder
);

Platform::Array<byte>^ ReadData(
_In_ Platform::String^ filename
);

concurrency::task<Platform::Array<byte>^> ReadDataAsync(
_In_ Platform::String^ filename
);

uint32 WriteData(
_In_ Platform::String^ filename,
_In_ const Platform::Array<byte>^ fileData
);

concurrency::task<void> WriteDataAsync(
_In_ Platform::String^ filename,
_In_ const Platform::Array<byte>^ fileData
);
};

BasicReaderWriter.cpp
#include "pch.h"
#include "BasicReaderWriter.h"

using namespace Microsoft::WRL;
using namespace Windows::Storage;
using namespace Windows::Storage::FileProperties;
using namespace Windows::Storage::Streams;
using namespace Windows::Foundation;
using namespace Windows::ApplicationModel;
using namespace concurrency;

BasicReaderWriter::BasicReaderWriter()
{
m_location = Package::Current->InstalledLocation;
m_locationPath = Platform::String::Concat(m_location->Path, "\\");
}

BasicReaderWriter::BasicReaderWriter(
_In_ Windows::Storage::StorageFolder^ folder
)
{
m_location = folder;
Platform::String^ path = m_location->Path;
if (path->Length() == 0)
{
// Applications are not permitted to access certain
// folders, such as the Documents folder, using this
// code path. In such cases, the Path property for
// the folder will be an empty string.
throw ref new Platform::FailureException();
}
m_locationPath = Platform::String::Concat(path, "\\");
}

Platform::Array<byte>^ BasicReaderWriter::ReadData(
_In_ Platform::String^ filename
)
{
CREATEFILE2_EXTENDED_PARAMETERS extendedParams = {0};
extendedParams.dwSize = sizeof(CREATEFILE2_EXTENDED_PARAMETERS);
extendedParams.dwFileAttributes = FILE_ATTRIBUTE_NORMAL;
extendedParams.dwFileFlags = FILE_FLAG_SEQUENTIAL_SCAN;
extendedParams.dwSecurityQosFlags = SECURITY_ANONYMOUS;
extendedParams.lpSecurityAttributes = nullptr;
extendedParams.hTemplateFile = nullptr;

Wrappers::FileHandle file(
CreateFile2(
Platform::String::Concat(m_locationPath, filename)->Data(),
GENERIC_READ,
FILE_SHARE_READ,
OPEN_EXISTING,
&extendedParams
)
);
if (file.Get() == INVALID_HANDLE_VALUE)
{
throw ref new Platform::FailureException();
}

FILE_STANDARD_INFO fileInfo = {0};
if (!GetFileInformationByHandleEx(
file.Get(),
FileStandardInfo,
&fileInfo,
sizeof(fileInfo)
))
{
throw ref new Platform::FailureException();
}

if (fileInfo.EndOfFile.HighPart != 0)
{
throw ref new Platform::OutOfMemoryException();
}

Platform::Array<byte>^ fileData = ref new Platform::Array<byte>(fileInfo.EndOfFile.LowPart);

if (!ReadFile(
file.Get(),
fileData->Data,
fileData->Length,
nullptr,
nullptr
))
{
throw ref new Platform::FailureException();
}

return fileData;
}

task<Platform::Array<byte>^> BasicReaderWriter::ReadDataAsync(
_In_ Platform::String^ filename
)
{
return task<StorageFile^>(m_location->GetFileAsync(filename)).then([=](StorageFile^ file)
{
return FileIO::ReadBufferAsync(file);
}).then([=](IBuffer^ buffer)
{
auto fileData = ref new Platform::Array<byte>(buffer->Length);
DataReader::FromBuffer(buffer)->ReadBytes(fileData);
return fileData;
});
}

uint32 BasicReaderWriter::WriteData(
_In_ Platform::String^ filename,
_In_ const Platform::Array<byte>^ fileData
)
{
CREATEFILE2_EXTENDED_PARAMETERS extendedParams = {0};
extendedParams.dwSize = sizeof(CREATEFILE2_EXTENDED_PARAMETERS);
extendedParams.dwFileAttributes = FILE_ATTRIBUTE_NORMAL;
extendedParams.dwFileFlags = FILE_FLAG_SEQUENTIAL_SCAN;
extendedParams.dwSecurityQosFlags = SECURITY_ANONYMOUS;
extendedParams.lpSecurityAttributes = nullptr;
extendedParams.hTemplateFile = nullptr;

Wrappers::FileHandle file(
CreateFile2(
Platform::String::Concat(m_locationPath, filename)->Data(),
GENERIC_WRITE,
0,
CREATE_ALWAYS,
&extendedParams
)
);
if (file.Get() == INVALID_HANDLE_VALUE)
{
throw ref new Platform::FailureException();
}

DWORD numBytesWritten;
if (
!WriteFile(
file.Get(),
fileData->Data,
fileData->Length,
&numBytesWritten,
nullptr
) ||
numBytesWritten != fileData->Length
)
{
throw ref new Platform::FailureException();
}

return numBytesWritten;
}

task<void> BasicReaderWriter::WriteDataAsync(
_In_ Platform::String^ filename,
_In_ const Platform::Array<byte>^ fileData
)
{
return task<StorageFile^>(m_location->CreateFileAsync(filename, CreationCollisionOption::ReplaceExisting)).then([=](StorageFile^ file)
{
FileIO::WriteBytesAsync(file, fileData);
});
}
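A rough usage sketch of BasicReaderWriter on its own (the file name is illustrative):

BasicReaderWriter^ readerWriter = ref new BasicReaderWriter();

// Synchronous read from the app's installed location.
Platform::Array<byte>^ data = readerWriter->ReadData("mesh.vbo");

// Asynchronous read with a continuation.
readerWriter->ReadDataAsync("mesh.vbo").then([](const Platform::Array<byte>^ bytes)
{
    // Consume the loaded bytes here.
});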
Complete code for DDSTextureLoader

[ Updated for UWP apps on Windows 10. For Windows 8.x articles, see the archive ]
Complete code for a class and method that loads a DDS texture from memory.
This topic contains these sections:
Technologies
Requirements
View the code (C++)

Download location
This sample is not available for download.

Technologies
Programming languages - C++
Programming models - Windows Runtime

Requirements
Minimum supported client - Windows 10
Minimum supported server - Windows Server 2016 Technical Preview

View the code (C++)


DDSTextureLoader.h
//--------------------------------------------------------------------------------------
// File: DDSTextureLoader.h
//
// Function for loading a DDS texture and creating a Direct3D 11 runtime resource for it
//
// Note this function is useful as a light-weight runtime loader for DDS files. For
// a full-featured DDS file reader, writer, and texture processing pipeline, see
// the 'Texconv' sample and the 'DirectXTex' library.
//
// THIS CODE AND INFORMATION IS PROVIDED "AS IS" WITHOUT WARRANTY OF
// ANY KIND, EITHER EXPRESSED OR IMPLIED, INCLUDING BUT NOT LIMITED TO
// THE IMPLIED WARRANTIES OF MERCHANTABILITY AND/OR FITNESS FOR A
// PARTICULAR PURPOSE.
//
// Copyright (c) Microsoft Corporation. All rights reserved.
//--------------------------------------------------------------------------------------

#pragma once

void CreateDDSTextureFromMemory(
_In_ ID3D11Device* d3dDevice,
_In_reads_bytes_(ddsDataSize) const byte* ddsData,
_In_ size_t ddsDataSize,
_Out_opt_ ID3D11Resource** texture,
_Out_opt_ ID3D11ShaderResourceView** textureView,
_In_ size_t maxsize = 0
);

DDSTextureLoader.cpp
//--------------------------------------------------------------------------------------
// File: DDSTextureLoader.cpp
//
// Function for loading a DDS texture and creating a Direct3D 11 runtime resource for it
//
// Note this function is useful as a light-weight runtime loader for DDS files. For
// a full-featured DDS file reader, writer, and texture processing pipeline, see
// the 'Texconv' sample and the 'DirectXTex' library.
//
// THIS CODE AND INFORMATION IS PROVIDED "AS IS" WITHOUT WARRANTY OF
// ANY KIND, EITHER EXPRESSED OR IMPLIED, INCLUDING BUT NOT LIMITED TO
// THE IMPLIED WARRANTIES OF MERCHANTABILITY AND/OR FITNESS FOR A
// PARTICULAR PURPOSE.
//
// Copyright (c) Microsoft Corporation. All rights reserved.
//--------------------------------------------------------------------------------------

#include "pch.h"
#include <dxgiformat.h>
#include <assert.h>
#include <memory>
#include <algorithm>
#include "DDSTextureLoader.h"
#include "DirectXSample.h"

using namespace Microsoft::WRL;

//--------------------------------------------------------------------------------------
// Macros
//--------------------------------------------------------------------------------------
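// MAKEFOURCC packs four 8-bit characters into a little-endian uint32;
// for example, MAKEFOURCC('D', 'D', 'S', ' ') yields 0x20534444.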
#ifndef MAKEFOURCC
#define MAKEFOURCC(ch0, ch1, ch2, ch3) \
((uint32)(byte)(ch0) | ((uint32)(byte)(ch1) << 8) | \
((uint32)(byte)(ch2) << 16) | ((uint32)(byte)(ch3) << 24))
#endif /* defined(MAKEFOURCC) */
//--------------------------------------------------------------------------------------
// DDS file structure definitions
//
// See DDS.h in the 'Texconv' sample and the 'DirectXTex' library.
//--------------------------------------------------------------------------------------
#pragma pack(push, 1)

#define DDS_MAGIC 0x20534444 // "DDS "

struct DDS_PIXELFORMAT
{
uint32 size;
uint32 flags;
uint32 fourCC;
uint32 RGBBitCount;
uint32 RBitMask;
uint32 GBitMask;
uint32 BBitMask;
uint32 ABitMask;
};

#define DDS_FOURCC 0x00000004 // DDPF_FOURCC


#define DDS_RGB 0x00000040 // DDPF_RGB
#define DDS_RGBA 0x00000041 // DDPF_RGB | DDPF_ALPHAPIXELS
#define DDS_LUMINANCE 0x00020000 // DDPF_LUMINANCE
#define DDS_LUMINANCEA 0x00020001 // DDPF_LUMINANCE | DDPF_ALPHAPIXELS
#define DDS_ALPHA 0x00000002 // DDPF_ALPHA
#define DDS_PAL8 0x00000020 // DDPF_PALETTEINDEXED8

#define DDS_HEADER_FLAGS_TEXTURE 0x00001007 // DDSD_CAPS | DDSD_HEIGHT | DDSD_WIDTH | DDSD_PIXELFORMAT


#define DDS_HEADER_FLAGS_MIPMAP 0x00020000 // DDSD_MIPMAPCOUNT
#define DDS_HEADER_FLAGS_VOLUME 0x00800000 // DDSD_DEPTH
#define DDS_HEADER_FLAGS_PITCH 0x00000008 // DDSD_PITCH
#define DDS_HEADER_FLAGS_LINEARSIZE 0x00080000 // DDSD_LINEARSIZE

#define DDS_HEIGHT 0x00000002 // DDSD_HEIGHT


#define DDS_WIDTH 0x00000004 // DDSD_WIDTH

#define DDS_SURFACE_FLAGS_TEXTURE 0x00001000 // DDSCAPS_TEXTURE


#define DDS_SURFACE_FLAGS_MIPMAP 0x00400008 // DDSCAPS_COMPLEX | DDSCAPS_MIPMAP
#define DDS_SURFACE_FLAGS_CUBEMAP 0x00000008 // DDSCAPS_COMPLEX

#define DDS_CUBEMAP_POSITIVEX 0x00000600 // DDSCAPS2_CUBEMAP | DDSCAPS2_CUBEMAP_POSITIVEX


#define DDS_CUBEMAP_NEGATIVEX 0x00000a00 // DDSCAPS2_CUBEMAP | DDSCAPS2_CUBEMAP_NEGATIVEX
#define DDS_CUBEMAP_POSITIVEY 0x00001200 // DDSCAPS2_CUBEMAP | DDSCAPS2_CUBEMAP_POSITIVEY
#define DDS_CUBEMAP_NEGATIVEY 0x00002200 // DDSCAPS2_CUBEMAP | DDSCAPS2_CUBEMAP_NEGATIVEY
#define DDS_CUBEMAP_POSITIVEZ 0x00004200 // DDSCAPS2_CUBEMAP | DDSCAPS2_CUBEMAP_POSITIVEZ
#define DDS_CUBEMAP_NEGATIVEZ 0x00008200 // DDSCAPS2_CUBEMAP | DDSCAPS2_CUBEMAP_NEGATIVEZ

#define DDS_CUBEMAP_ALLFACES (DDS_CUBEMAP_POSITIVEX | DDS_CUBEMAP_NEGATIVEX |\
DDS_CUBEMAP_POSITIVEY | DDS_CUBEMAP_NEGATIVEY |\
DDS_CUBEMAP_POSITIVEZ | DDS_CUBEMAP_NEGATIVEZ)

#define DDS_CUBEMAP 0x00000200 // DDSCAPS2_CUBEMAP

#define DDS_FLAGS_VOLUME 0x00200000 // DDSCAPS2_VOLUME

typedef struct
{
uint32 size;
uint32 flags;
uint32 height;
uint32 width;
uint32 pitchOrLinearSize;
uint32 depth; // only if DDS_HEADER_FLAGS_VOLUME is set in flags
uint32 mipMapCount;
uint32 reserved1[11];
DDS_PIXELFORMAT ddspf;
uint32 caps;
uint32 caps2;
uint32 caps3;
uint32 caps4;
uint32 reserved2;
} DDS_HEADER;

typedef struct
{
DXGI_FORMAT dxgiFormat;
uint32 resourceDimension;
uint32 miscFlag; // See D3D11_RESOURCE_MISC_FLAG
uint32 arraySize;
uint32 reserved;
} DDS_HEADER_DXT10;

#pragma pack(pop)

//--------------------------------------------------------------------------------------
// Return the BPP for a particular format.
//--------------------------------------------------------------------------------------
static size_t BitsPerPixel(_In_ DXGI_FORMAT fmt)
{
switch (fmt)
{
case DXGI_FORMAT_R32G32B32A32_TYPELESS:
case DXGI_FORMAT_R32G32B32A32_FLOAT:
case DXGI_FORMAT_R32G32B32A32_UINT:
case DXGI_FORMAT_R32G32B32A32_SINT:
return 128;

case DXGI_FORMAT_R32G32B32_TYPELESS:
case DXGI_FORMAT_R32G32B32_FLOAT:
case DXGI_FORMAT_R32G32B32_UINT:
case DXGI_FORMAT_R32G32B32_SINT:
return 96;

case DXGI_FORMAT_R16G16B16A16_TYPELESS:
case DXGI_FORMAT_R16G16B16A16_FLOAT:
case DXGI_FORMAT_R16G16B16A16_UNORM:
case DXGI_FORMAT_R16G16B16A16_UINT:
case DXGI_FORMAT_R16G16B16A16_SNORM:
case DXGI_FORMAT_R16G16B16A16_SINT:
case DXGI_FORMAT_R32G32_TYPELESS:
case DXGI_FORMAT_R32G32_FLOAT:
case DXGI_FORMAT_R32G32_UINT:
case DXGI_FORMAT_R32G32_SINT:
case DXGI_FORMAT_R32G8X24_TYPELESS:
case DXGI_FORMAT_D32_FLOAT_S8X24_UINT:
case DXGI_FORMAT_R32_FLOAT_X8X24_TYPELESS:
case DXGI_FORMAT_X32_TYPELESS_G8X24_UINT:
return 64;

case DXGI_FORMAT_R10G10B10A2_TYPELESS:
case DXGI_FORMAT_R10G10B10A2_UNORM:
case DXGI_FORMAT_R10G10B10A2_UINT:
case DXGI_FORMAT_R11G11B10_FLOAT:
case DXGI_FORMAT_R8G8B8A8_TYPELESS:
case DXGI_FORMAT_R8G8B8A8_UNORM:
case DXGI_FORMAT_R8G8B8A8_UNORM_SRGB:
case DXGI_FORMAT_R8G8B8A8_UINT:
case DXGI_FORMAT_R8G8B8A8_SNORM:
case DXGI_FORMAT_R8G8B8A8_SINT:
case DXGI_FORMAT_R16G16_TYPELESS:
case DXGI_FORMAT_R16G16_FLOAT:
case DXGI_FORMAT_R16G16_UNORM:
case DXGI_FORMAT_R16G16_UINT:
case DXGI_FORMAT_R16G16_SNORM:
case DXGI_FORMAT_R16G16_SINT:
case DXGI_FORMAT_R32_TYPELESS:
case DXGI_FORMAT_D32_FLOAT:
case DXGI_FORMAT_R32_FLOAT:
case DXGI_FORMAT_R32_UINT:
case DXGI_FORMAT_R32_SINT:
case DXGI_FORMAT_R24G8_TYPELESS:
case DXGI_FORMAT_D24_UNORM_S8_UINT:
case DXGI_FORMAT_R24_UNORM_X8_TYPELESS:
case DXGI_FORMAT_X24_TYPELESS_G8_UINT:
case DXGI_FORMAT_R9G9B9E5_SHAREDEXP:
case DXGI_FORMAT_R8G8_B8G8_UNORM:
case DXGI_FORMAT_G8R8_G8B8_UNORM:
case DXGI_FORMAT_B8G8R8A8_UNORM:
case DXGI_FORMAT_B8G8R8X8_UNORM:
case DXGI_FORMAT_R10G10B10_XR_BIAS_A2_UNORM:
case DXGI_FORMAT_B8G8R8A8_TYPELESS:
case DXGI_FORMAT_B8G8R8A8_UNORM_SRGB:
case DXGI_FORMAT_B8G8R8X8_TYPELESS:
case DXGI_FORMAT_B8G8R8X8_UNORM_SRGB:
return 32;

case DXGI_FORMAT_R8G8_TYPELESS:
case DXGI_FORMAT_R8G8_UNORM:
case DXGI_FORMAT_R8G8_UINT:
case DXGI_FORMAT_R8G8_SNORM:
case DXGI_FORMAT_R8G8_SINT:
case DXGI_FORMAT_R16_TYPELESS:
case DXGI_FORMAT_R16_FLOAT:
case DXGI_FORMAT_D16_UNORM:
case DXGI_FORMAT_R16_UNORM:
case DXGI_FORMAT_R16_UINT:
case DXGI_FORMAT_R16_SNORM:
case DXGI_FORMAT_R16_SINT:
case DXGI_FORMAT_B5G6R5_UNORM:
case DXGI_FORMAT_B5G5R5A1_UNORM:
case DXGI_FORMAT_B4G4R4A4_UNORM:
return 16;

case DXGI_FORMAT_R8_TYPELESS:
case DXGI_FORMAT_R8_UNORM:
case DXGI_FORMAT_R8_UINT:
case DXGI_FORMAT_R8_SNORM:
case DXGI_FORMAT_R8_SINT:
case DXGI_FORMAT_A8_UNORM:
return 8;

case DXGI_FORMAT_R1_UNORM:
return 1;

case DXGI_FORMAT_BC1_TYPELESS:
case DXGI_FORMAT_BC1_UNORM:
case DXGI_FORMAT_BC1_UNORM_SRGB:
case DXGI_FORMAT_BC4_TYPELESS:
case DXGI_FORMAT_BC4_UNORM:
case DXGI_FORMAT_BC4_SNORM:
return 4;

case DXGI_FORMAT_BC2_TYPELESS:
case DXGI_FORMAT_BC2_UNORM:
case DXGI_FORMAT_BC2_UNORM_SRGB:
case DXGI_FORMAT_BC3_TYPELESS:
case DXGI_FORMAT_BC3_UNORM:
case DXGI_FORMAT_BC3_UNORM_SRGB:
case DXGI_FORMAT_BC5_TYPELESS:
case DXGI_FORMAT_BC5_UNORM:
case DXGI_FORMAT_BC5_SNORM:
case DXGI_FORMAT_BC6H_TYPELESS:
case DXGI_FORMAT_BC6H_UF16:
case DXGI_FORMAT_BC6H_SF16:
case DXGI_FORMAT_BC7_TYPELESS:
case DXGI_FORMAT_BC7_UNORM:
case DXGI_FORMAT_BC7_UNORM_SRGB:
return 8;

default:
return 0;
}
}

//--------------------------------------------------------------------------------------
// Get surface information for a particular format.
//--------------------------------------------------------------------------------------
static void GetSurfaceInfo(
_In_ size_t width,
_In_ size_t height,
_In_ DXGI_FORMAT fmt,
_Out_opt_ size_t* outNumBytes,
_Out_opt_ size_t* outRowBytes,
_Out_opt_ size_t* outNumRows
)
{
size_t numBytes = 0;
size_t rowBytes = 0;
size_t numRows = 0;

bool bc = false;
bool packed = false;
size_t bcnumBytesPerBlock = 0;
switch (fmt)
{
case DXGI_FORMAT_BC1_TYPELESS:
case DXGI_FORMAT_BC1_UNORM:
case DXGI_FORMAT_BC1_UNORM_SRGB:
case DXGI_FORMAT_BC4_TYPELESS:
case DXGI_FORMAT_BC4_UNORM:
case DXGI_FORMAT_BC4_SNORM:
bc = true;
bcnumBytesPerBlock = 8;
break;

case DXGI_FORMAT_BC2_TYPELESS:
case DXGI_FORMAT_BC2_UNORM:
case DXGI_FORMAT_BC2_UNORM_SRGB:
case DXGI_FORMAT_BC3_TYPELESS:
case DXGI_FORMAT_BC3_UNORM:
case DXGI_FORMAT_BC3_UNORM_SRGB:
case DXGI_FORMAT_BC5_TYPELESS:
case DXGI_FORMAT_BC5_UNORM:
case DXGI_FORMAT_BC5_SNORM:
case DXGI_FORMAT_BC6H_TYPELESS:
case DXGI_FORMAT_BC6H_UF16:
case DXGI_FORMAT_BC6H_SF16:
case DXGI_FORMAT_BC7_TYPELESS:
case DXGI_FORMAT_BC7_UNORM:
case DXGI_FORMAT_BC7_UNORM_SRGB:
bc = true;
bcnumBytesPerBlock = 16;
break;

case DXGI_FORMAT_R8G8_B8G8_UNORM:
case DXGI_FORMAT_G8R8_G8B8_UNORM:
packed = true;
break;
}

if (bc)
{
size_t numBlocksWide = 0;
if (width > 0)
{
numBlocksWide = std::max<size_t>(1, (width + 3) / 4);
}
size_t numBlocksHigh = 0;
if (height > 0)
{
numBlocksHigh = std::max<size_t>(1, (height + 3) / 4);
}
rowBytes = numBlocksWide * bcnumBytesPerBlock;
numRows = numBlocksHigh;
}
else if (packed)
{
rowBytes = ((width + 1) >> 1) * 4;
numRows = height;
}
else
{
size_t bpp = BitsPerPixel(fmt);
rowBytes = (width * bpp + 7) / 8; // Round up to the nearest byte.
numRows = height;
}

numBytes = rowBytes * numRows;


if (outNumBytes)
{
*outNumBytes = numBytes;
}
if (outRowBytes)
{
*outRowBytes = rowBytes;
}
if (outNumRows)
{
*outNumRows = numRows;
}
}
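
// Worked example: for a 256x256 BC1 texture (8 bytes per 4x4 block),
// numBlocksWide = 256 / 4 = 64, rowBytes = 64 * 8 = 512, numRows = 64,
// and numBytes = 512 * 64 = 32768 (32 KB for the top mip level).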

//--------------------------------------------------------------------------------------
#define ISBITMASK(r, g, b, a) (ddpf.RBitMask == r && ddpf.GBitMask == g && ddpf.BBitMask == b && ddpf.ABitMask == a)

static DXGI_FORMAT GetDXGIFormat(const DDS_PIXELFORMAT& ddpf)
{
if (ddpf.flags & DDS_RGB)
{
// Note that sRGB formats are written using the "DX10" extended header.

switch (ddpf.RGBBitCount)
{
case 32:
if (ISBITMASK(0x000000ff, 0x0000ff00, 0x00ff0000, 0xff000000))
{
return DXGI_FORMAT_R8G8B8A8_UNORM;
}

if (ISBITMASK(0x00ff0000, 0x0000ff00, 0x000000ff, 0xff000000))
{
return DXGI_FORMAT_B8G8R8A8_UNORM;
}

if (ISBITMASK(0x00ff0000, 0x0000ff00, 0x000000ff, 0x00000000))
{
return DXGI_FORMAT_B8G8R8X8_UNORM;
}

// No DXGI format maps to ISBITMASK(0x000000ff, 0x0000ff00, 0x00ff0000, 0x00000000) aka D3DFMT_X8B8G8R8

// Note that many common DDS reader/writers (including D3DX) swap the
// RED/BLUE masks for 10:10:10:2 formats. We assume
// below that the 'backwards' header mask is being used since it is most
// likely written by D3DX. The more robust solution is to use the 'DX10'
// header extension and specify the DXGI_FORMAT_R10G10B10A2_UNORM format directly

// For 'correct' writers, this should be 0x000003ff, 0x000ffc00, 0x3ff00000 for RGB data
if (ISBITMASK(0x3ff00000, 0x000ffc00, 0x000003ff, 0xc0000000))
{
return DXGI_FORMAT_R10G10B10A2_UNORM;
}

// No DXGI format maps to ISBITMASK(0x000003ff, 0x000ffc00, 0x3ff00000, 0xc0000000) aka D3DFMT_A2R10G10B10

if (ISBITMASK(0x0000ffff, 0xffff0000, 0x00000000, 0x00000000))
{
return DXGI_FORMAT_R16G16_UNORM;
}

if (ISBITMASK(0xffffffff, 0x00000000, 0x00000000, 0x00000000))
{
// Only 32-bit color channel format in D3D9 was R32F.
return DXGI_FORMAT_R32_FLOAT; // D3DX writes this out as a FourCC of 114.
}
break;

case 24:
// No 24bpp DXGI formats aka D3DFMT_R8G8B8
break;

case 16:
if (ISBITMASK(0x7c00, 0x03e0, 0x001f, 0x8000))
{
return DXGI_FORMAT_B5G5R5A1_UNORM;
}
if (ISBITMASK(0xf800, 0x07e0, 0x001f, 0x0000))
{
return DXGI_FORMAT_B5G6R5_UNORM;
}

// No DXGI format maps to ISBITMASK(0x7c00, 0x03e0, 0x001f, 0x0000) aka D3DFMT_X1R5G5B5.


if (ISBITMASK(0x0f00, 0x00f0, 0x000f, 0xf000))
{
return DXGI_FORMAT_B4G4R4A4_UNORM;
}

// No DXGI format maps to ISBITMASK(0x0f00, 0x00f0, 0x000f, 0x0000) aka D3DFMT_X4R4G4B4.

// No 3:3:2, 3:3:2:8, or paletted DXGI formats aka D3DFMT_A8R3G3B2, D3DFMT_R3G3B2, D3DFMT_P8, D3DFMT_A8P8, etc.
break;
}
}
else if (ddpf.flags & DDS_LUMINANCE)
{
if (8 == ddpf.RGBBitCount)
{
if (ISBITMASK(0x000000ff, 0x00000000, 0x00000000, 0x00000000))
{
return DXGI_FORMAT_R8_UNORM; // D3DX10/11 writes this out as DX10 extension
}

// No DXGI format maps to ISBITMASK(0x0f, 0x00, 0x00, 0xf0) aka D3DFMT_A4L4.


}

if (16 == ddpf.RGBBitCount)
{
if (ISBITMASK(0x0000ffff, 0x00000000, 0x00000000, 0x00000000))
{
return DXGI_FORMAT_R16_UNORM; // D3DX10/11 writes this out as DX10 extension.
}
if (ISBITMASK(0x000000ff, 0x00000000, 0x00000000, 0x0000ff00))
{
return DXGI_FORMAT_R8G8_UNORM; // D3DX10/11 writes this out as DX10 extension.
}
}
}
else if (ddpf.flags & DDS_ALPHA)
{
if (8 == ddpf.RGBBitCount)
{
return DXGI_FORMAT_A8_UNORM;
}
}
else if (ddpf.flags & DDS_FOURCC)
{
if (MAKEFOURCC('D', 'X', 'T', '1') == ddpf.fourCC)
{
return DXGI_FORMAT_BC1_UNORM;
}
if (MAKEFOURCC('D', 'X', 'T', '3') == ddpf.fourCC)
{
return DXGI_FORMAT_BC2_UNORM;
}
if (MAKEFOURCC('D', 'X', 'T', '5') == ddpf.fourCC)
{
return DXGI_FORMAT_BC3_UNORM;
}

// While premultiplied alpha isn't directly supported by the DXGI formats,
// they are basically the same as these BC formats, so they can be mapped.
if (MAKEFOURCC('D', 'X', 'T', '2') == ddpf.fourCC)
{
return DXGI_FORMAT_BC2_UNORM;
}
if (MAKEFOURCC('D', 'X', 'T', '4') == ddpf.fourCC)
{
return DXGI_FORMAT_BC3_UNORM;
}

if (MAKEFOURCC('A', 'T', 'I', '1') == ddpf.fourCC)
{
return DXGI_FORMAT_BC4_UNORM;
}
if (MAKEFOURCC('B', 'C', '4', 'U') == ddpf.fourCC)
{
return DXGI_FORMAT_BC4_UNORM;
}
if (MAKEFOURCC('B', 'C', '4', 'S') == ddpf.fourCC)
{
return DXGI_FORMAT_BC4_SNORM;
}

if (MAKEFOURCC('A', 'T', 'I', '2') == ddpf.fourCC)
{
return DXGI_FORMAT_BC5_UNORM;
}
if (MAKEFOURCC('B', 'C', '5', 'U') == ddpf.fourCC)
{
return DXGI_FORMAT_BC5_UNORM;
}
if (MAKEFOURCC('B', 'C', '5', 'S') == ddpf.fourCC)
{
return DXGI_FORMAT_BC5_SNORM;
}

// BC6H and BC7 are written using the "DX10" extended header
if (MAKEFOURCC('R', 'G', 'B', 'G') == ddpf.fourCC)
{
return DXGI_FORMAT_R8G8_B8G8_UNORM;
}
if (MAKEFOURCC('G', 'R', 'G', 'B') == ddpf.fourCC)
{
return DXGI_FORMAT_G8R8_G8B8_UNORM;
}

// Check for D3DFORMAT enums being set here.
switch (ddpf.fourCC)
{
case 36: // D3DFMT_A16B16G16R16
return DXGI_FORMAT_R16G16B16A16_UNORM;

case 110: // D3DFMT_Q16W16V16U16
return DXGI_FORMAT_R16G16B16A16_SNORM;

case 111: // D3DFMT_R16F
return DXGI_FORMAT_R16_FLOAT;

case 112: // D3DFMT_G16R16F
return DXGI_FORMAT_R16G16_FLOAT;

case 113: // D3DFMT_A16B16G16R16F
return DXGI_FORMAT_R16G16B16A16_FLOAT;

case 114: // D3DFMT_R32F
return DXGI_FORMAT_R32_FLOAT;

case 115: // D3DFMT_G32R32F
return DXGI_FORMAT_R32G32_FLOAT;

case 116: // D3DFMT_A32B32G32R32F
return DXGI_FORMAT_R32G32B32A32_FLOAT;
}
}

return DXGI_FORMAT_UNKNOWN;
}

//--------------------------------------------------------------------------------------
static void FillInitData(
_In_ size_t width,
_In_ size_t height,
_In_ size_t depth,
_In_ size_t mipCount,
_In_ size_t arraySize,
_In_ DXGI_FORMAT format,
_In_ size_t maxsize,
_In_ size_t bitSize,
_In_reads_bytes_(bitSize) const byte* bitData,
_Out_ size_t& twidth,
_Out_ size_t& theight,
_Out_ size_t& tdepth,
_Out_ size_t& skipMip,
_Out_writes_(mipCount*arraySize) D3D11_SUBRESOURCE_DATA* initData
)
{
if (!bitData || !initData)
{
throw ref new Platform::InvalidArgumentException();
}

skipMip = 0;
twidth = 0;
theight = 0;
tdepth = 0;

size_t NumBytes = 0;
size_t RowBytes = 0;
size_t NumRows = 0;
const byte* pSrcBits = bitData;
const byte* pEndBits = bitData + bitSize;

size_t index = 0;
for (size_t j = 0; j < arraySize; j++)
{
size_t w = width;
size_t h = height;
size_t d = depth;
for (size_t i = 0; i < mipCount; i++)
{
GetSurfaceInfo(w, h, format, &NumBytes, &RowBytes, &NumRows);

if ((mipCount <= 1) || !maxsize || (w <= maxsize && h <= maxsize && d <= maxsize))
{
if (!twidth)
{
twidth = w;
theight = h;
tdepth = d;
}

initData[index].pSysMem = (const void*)pSrcBits;
initData[index].SysMemPitch = static_cast<UINT>(RowBytes);
initData[index].SysMemSlicePitch = static_cast<UINT>(NumBytes);
++index;
}
else
{
++skipMip;
}

if (pSrcBits + (NumBytes * d) > pEndBits)
{
throw ref new Platform::OutOfBoundsException();
}

pSrcBits += NumBytes * d;

w = w >> 1;
h = h >> 1;
d = d >> 1;
if (w == 0)
{
w = 1;
}
if (h == 0)
{
h = 1;
}
if (d == 0)
{
d = 1;
}
}
}

if (!index)
{
throw ref new Platform::FailureException();
}
}
//--------------------------------------------------------------------------------------
static HRESULT CreateD3DResources(
_In_ ID3D11Device* d3dDevice,
_In_ uint32 resDim,
_In_ size_t width,
_In_ size_t height,
_In_ size_t depth,
_In_ size_t mipCount,
_In_ size_t arraySize,
_In_ DXGI_FORMAT format,
_In_ bool isCubeMap,
_In_reads_(mipCount*arraySize) D3D11_SUBRESOURCE_DATA* initData,
_Out_opt_ ID3D11Resource** texture,
_Out_opt_ ID3D11ShaderResourceView** textureView
)
{
if (!d3dDevice || !initData)
{
return E_INVALIDARG;
}

HRESULT hr = E_FAIL;

switch (resDim)
{
case D3D11_RESOURCE_DIMENSION_TEXTURE1D:
{
D3D11_TEXTURE1D_DESC desc;
desc.Width = static_cast<UINT>(width);
desc.MipLevels = static_cast<UINT>(mipCount);
desc.ArraySize = static_cast<UINT>(arraySize);
desc.Format = format;
desc.Usage = D3D11_USAGE_DEFAULT;
desc.BindFlags = D3D11_BIND_SHADER_RESOURCE;
desc.CPUAccessFlags = 0;
desc.MiscFlags = 0;

ID3D11Texture1D* tex = nullptr;

hr = d3dDevice->CreateTexture1D(&desc, initData, &tex);

if (SUCCEEDED(hr) && tex != 0)
{
if (textureView != 0)
{
D3D11_SHADER_RESOURCE_VIEW_DESC SRVDesc;
memset(&SRVDesc, 0, sizeof(SRVDesc));
SRVDesc.Format = format;

if (arraySize > 1)
{
SRVDesc.ViewDimension = D3D_SRV_DIMENSION_TEXTURE1DARRAY;
SRVDesc.Texture1DArray.MipLevels = desc.MipLevels;
SRVDesc.Texture1DArray.ArraySize = static_cast<UINT>(arraySize);
}
else
{
SRVDesc.ViewDimension = D3D_SRV_DIMENSION_TEXTURE1D;
SRVDesc.Texture1D.MipLevels = desc.MipLevels;
}

hr = d3dDevice->CreateShaderResourceView(tex, &SRVDesc, textureView);

if (FAILED(hr))
{
tex->Release();
return hr;
}
}
if (texture != 0)
{
*texture = tex;
}
else
{
tex->Release();
}
}
}
break;

case D3D11_RESOURCE_DIMENSION_TEXTURE2D:
{
D3D11_TEXTURE2D_DESC desc;
desc.Width = static_cast<UINT>(width);
desc.Height = static_cast<UINT>(height);
desc.MipLevels = static_cast<UINT>(mipCount);
desc.ArraySize = static_cast<UINT>(arraySize);
desc.Format = format;
desc.SampleDesc.Count = 1;
desc.SampleDesc.Quality = 0;
desc.Usage = D3D11_USAGE_DEFAULT;
desc.BindFlags = D3D11_BIND_SHADER_RESOURCE;
desc.CPUAccessFlags = 0;
desc.MiscFlags = (isCubeMap) ? D3D11_RESOURCE_MISC_TEXTURECUBE : 0;

ID3D11Texture2D* tex = nullptr;

hr = d3dDevice->CreateTexture2D(&desc, initData, &tex);

if (SUCCEEDED(hr) && tex != 0)
{
if (textureView != 0)
{
D3D11_SHADER_RESOURCE_VIEW_DESC SRVDesc;
memset(&SRVDesc, 0, sizeof(SRVDesc));
SRVDesc.Format = format;

if (isCubeMap)
{
if (arraySize > 6)
{
SRVDesc.ViewDimension = D3D_SRV_DIMENSION_TEXTURECUBEARRAY;
SRVDesc.TextureCubeArray.MipLevels = desc.MipLevels;

// Earlier, we set arraySize to (NumCubes * 6).
SRVDesc.TextureCubeArray.NumCubes = static_cast<UINT>(arraySize / 6);
}
else
{
SRVDesc.ViewDimension = D3D_SRV_DIMENSION_TEXTURECUBE;
SRVDesc.TextureCube.MipLevels = desc.MipLevels;
}
}
else if (arraySize > 1)
{
SRVDesc.ViewDimension = D3D_SRV_DIMENSION_TEXTURE2DARRAY;
SRVDesc.Texture2DArray.MipLevels = desc.MipLevels;
SRVDesc.Texture2DArray.ArraySize = static_cast<UINT>(arraySize);
}
else
{
SRVDesc.ViewDimension = D3D_SRV_DIMENSION_TEXTURE2D;
SRVDesc.Texture2D.MipLevels = desc.MipLevels;
}

hr = d3dDevice->CreateShaderResourceView(tex, &SRVDesc, textureView);

if (FAILED(hr))
{
tex->Release();
return hr;
}
}

if (texture != 0)
{
*texture = tex;
}
else
{
tex->Release();
}
}
}
break;

case D3D11_RESOURCE_DIMENSION_TEXTURE3D:
{
D3D11_TEXTURE3D_DESC desc;
desc.Width = static_cast<UINT>(width);
desc.Height = static_cast<UINT>(height);
desc.Depth = static_cast<UINT>(depth);
desc.MipLevels = static_cast<UINT>(mipCount);
desc.Format = format;
desc.Usage = D3D11_USAGE_DEFAULT;
desc.BindFlags = D3D11_BIND_SHADER_RESOURCE;
desc.CPUAccessFlags = 0;
desc.MiscFlags = 0;

ID3D11Texture3D* tex = nullptr;

hr = d3dDevice->CreateTexture3D(&desc, initData, &tex);

if (SUCCEEDED(hr) && tex != 0)
{
if (textureView != 0)
{
D3D11_SHADER_RESOURCE_VIEW_DESC SRVDesc;
memset(&SRVDesc, 0, sizeof(SRVDesc));
SRVDesc.Format = format;
SRVDesc.ViewDimension = D3D_SRV_DIMENSION_TEXTURE3D;
SRVDesc.Texture3D.MipLevels = desc.MipLevels;

hr = d3dDevice->CreateShaderResourceView(tex, &SRVDesc, textureView);

if (FAILED(hr))
{
tex->Release();
return hr;
}
}

if (texture != 0)
{
*texture = tex;
}
else
{
tex->Release();
}
}
}
break;
}

return hr;
}
//--------------------------------------------------------------------------------------
static void CreateTextureFromDDS(
_In_ ID3D11Device* d3dDevice,
_In_ const DDS_HEADER* header,
_In_reads_bytes_(bitSize) const byte* bitData,
_In_ size_t bitSize,
_Out_opt_ ID3D11Resource** texture,
_Out_opt_ ID3D11ShaderResourceView** textureView,
_In_ size_t maxsize
)
{
HRESULT hr = S_OK;

size_t width = header->width;
size_t height = header->height;
size_t depth = header->depth;

uint32 resDim = D3D11_RESOURCE_DIMENSION_UNKNOWN;
size_t arraySize = 1;
DXGI_FORMAT format = DXGI_FORMAT_UNKNOWN;
bool isCubeMap = false;

size_t mipCount = header->mipMapCount;
if (0 == mipCount)
{
mipCount = 1;
}

if ((header->ddspf.flags & DDS_FOURCC) &&
(MAKEFOURCC('D', 'X', '1', '0') == header->ddspf.fourCC))
{
const DDS_HEADER_DXT10* d3d10ext = reinterpret_cast<const DDS_HEADER_DXT10*>((const char*)header + sizeof(DDS_HEADER));

arraySize = d3d10ext->arraySize;
if (arraySize == 0)
{
throw ref new Platform::FailureException();
}

if (BitsPerPixel(d3d10ext->dxgiFormat) == 0)
{
throw ref new Platform::FailureException();
}

format = d3d10ext->dxgiFormat;

switch (d3d10ext->resourceDimension)
{
case D3D11_RESOURCE_DIMENSION_TEXTURE1D:
// D3DX writes 1D textures with a fixed Height of 1.
if ((header->flags & DDS_HEIGHT) && height != 1)
{
throw ref new Platform::FailureException();
}
height = depth = 1;
break;

case D3D11_RESOURCE_DIMENSION_TEXTURE2D:
if (d3d10ext->miscFlag & D3D11_RESOURCE_MISC_TEXTURECUBE)
{
arraySize *= 6;
isCubeMap = true;
}
depth = 1;
break;

case D3D11_RESOURCE_DIMENSION_TEXTURE3D:
if (!(header->flags & DDS_HEADER_FLAGS_VOLUME))
{
throw ref new Platform::FailureException();
}

if (arraySize > 1)
{
throw ref new Platform::FailureException();
}
break;

default:
throw ref new Platform::FailureException();
}

resDim = d3d10ext->resourceDimension;
}
else
{
format = GetDXGIFormat(header->ddspf);

if (format == DXGI_FORMAT_UNKNOWN)
{
throw ref new Platform::FailureException();
}

if (header->flags & DDS_HEADER_FLAGS_VOLUME)
{
resDim = D3D11_RESOURCE_DIMENSION_TEXTURE3D;
}
else
{
if (header->caps2 & DDS_CUBEMAP)
{
// We require all six faces to be defined.
if ((header->caps2 & DDS_CUBEMAP_ALLFACES) != DDS_CUBEMAP_ALLFACES)
{
throw ref new Platform::FailureException();
}

arraySize = 6;
isCubeMap = true;
}

depth = 1;
resDim = D3D11_RESOURCE_DIMENSION_TEXTURE2D;

// Note there's no way for a legacy Direct3D 9 DDS to express a '1D' texture.
}

assert(BitsPerPixel(format) != 0);
}

// Bound sizes (For security purposes, we don't trust DDS file metadata larger than the D3D 11.x hardware requirements.)
if (mipCount > D3D11_REQ_MIP_LEVELS)
{
throw ref new Platform::FailureException();
}

switch (resDim)
{
case D3D11_RESOURCE_DIMENSION_TEXTURE1D:
if ((arraySize > D3D11_REQ_TEXTURE1D_ARRAY_AXIS_DIMENSION) ||
(width > D3D11_REQ_TEXTURE1D_U_DIMENSION))
{
throw ref new Platform::FailureException();
}
break;
case D3D11_RESOURCE_DIMENSION_TEXTURE2D:
if (isCubeMap)
{
// This is the right bound because we set arraySize to (NumCubes*6) above.
if ((arraySize > D3D11_REQ_TEXTURE2D_ARRAY_AXIS_DIMENSION) ||
(width > D3D11_REQ_TEXTURECUBE_DIMENSION) ||
(height > D3D11_REQ_TEXTURECUBE_DIMENSION))
{
throw ref new Platform::FailureException();
}
}
else if ((arraySize > D3D11_REQ_TEXTURE2D_ARRAY_AXIS_DIMENSION) ||
(width > D3D11_REQ_TEXTURE2D_U_OR_V_DIMENSION) ||
(height > D3D11_REQ_TEXTURE2D_U_OR_V_DIMENSION))
{
throw ref new Platform::FailureException();
}
break;

case D3D11_RESOURCE_DIMENSION_TEXTURE3D:
if ((arraySize > 1) ||
(width > D3D11_REQ_TEXTURE3D_U_V_OR_W_DIMENSION) ||
(height > D3D11_REQ_TEXTURE3D_U_V_OR_W_DIMENSION) ||
(depth > D3D11_REQ_TEXTURE3D_U_V_OR_W_DIMENSION))
{
throw ref new Platform::FailureException();
}
break;
}

// Create the texture.
std::unique_ptr<D3D11_SUBRESOURCE_DATA[]> initData(new D3D11_SUBRESOURCE_DATA[mipCount * arraySize]);

size_t skipMip = 0;
size_t twidth = 0;
size_t theight = 0;
size_t tdepth = 0;
FillInitData(width, height, depth, mipCount, arraySize, format, maxsize, bitSize, bitData, twidth, theight, tdepth, skipMip, initData.get());

hr = CreateD3DResources(d3dDevice, resDim, twidth, theight, tdepth, mipCount - skipMip, arraySize, format, isCubeMap, initData.get(), texture,
textureView);

if (FAILED(hr) && !maxsize && (mipCount > 1))
{
// Retry with a maxsize determined by feature level.
switch (d3dDevice->GetFeatureLevel())
{
case D3D_FEATURE_LEVEL_9_1:
case D3D_FEATURE_LEVEL_9_2:
if (isCubeMap)
{
maxsize = D3D_FL9_1_REQ_TEXTURECUBE_DIMENSION;
}
else
{
maxsize = (resDim == D3D11_RESOURCE_DIMENSION_TEXTURE3D)
? D3D_FL9_1_REQ_TEXTURE3D_U_V_OR_W_DIMENSION
: D3D_FL9_1_REQ_TEXTURE2D_U_OR_V_DIMENSION;
}
break;

case D3D_FEATURE_LEVEL_9_3:
maxsize = (resDim == D3D11_RESOURCE_DIMENSION_TEXTURE3D)
? D3D_FL9_1_REQ_TEXTURE3D_U_V_OR_W_DIMENSION
: D3D_FL9_3_REQ_TEXTURE2D_U_OR_V_DIMENSION;
break;

default: // D3D_FEATURE_LEVEL_10_0 & D3D_FEATURE_LEVEL_10_1
maxsize = (resDim == D3D11_RESOURCE_DIMENSION_TEXTURE3D)
? D3D10_REQ_TEXTURE3D_U_V_OR_W_DIMENSION
: D3D10_REQ_TEXTURE2D_U_OR_V_DIMENSION;
break;
}

FillInitData(width, height, depth, mipCount, arraySize, format, maxsize, bitSize, bitData, twidth, theight, tdepth, skipMip, initData.get());

hr = CreateD3DResources(d3dDevice, resDim, twidth, theight, tdepth, mipCount - skipMip, arraySize, format, isCubeMap, initData.get(),
texture, textureView);
}

DX::ThrowIfFailed(hr);
}

//--------------------------------------------------------------------------------------
void CreateDDSTextureFromMemory(
_In_ ID3D11Device* d3dDevice,
_In_reads_bytes_(ddsDataSize) const byte* ddsData,
_In_ size_t ddsDataSize,
_Out_opt_ ID3D11Resource** texture,
_Out_opt_ ID3D11ShaderResourceView** textureView,
_In_ size_t maxsize
)
{
if (!d3dDevice || !ddsData || (!texture && !textureView))
{
throw ref new Platform::InvalidArgumentException();
}

// Validate DDS file in memory.
if (ddsDataSize < (sizeof(uint32) + sizeof(DDS_HEADER)))
{
throw ref new Platform::FailureException();
}

uint32 dwMagicNumber = *(const uint32*)(ddsData);
if (dwMagicNumber != DDS_MAGIC)
{
throw ref new Platform::FailureException();
}

const DDS_HEADER* header = reinterpret_cast<const DDS_HEADER*>(ddsData + sizeof(uint32));

// Verify the header to validate the DDS file.
if (header->size != sizeof(DDS_HEADER) ||
header->ddspf.size != sizeof(DDS_PIXELFORMAT))
{
throw ref new Platform::FailureException();
}

// Check for the DX10 extension.
bool bDXT10Header = false;
if ((header->ddspf.flags & DDS_FOURCC) &&
(MAKEFOURCC('D', 'X', '1', '0') == header->ddspf.fourCC))
{
// Must be long enough for both headers and magic value
if (ddsDataSize < (sizeof(DDS_HEADER) + sizeof(uint32) + sizeof(DDS_HEADER_DXT10)))
{
throw ref new Platform::FailureException();
}

bDXT10Header = true;
}

ptrdiff_t offset = sizeof(uint32) + sizeof(DDS_HEADER) + (bDXT10Header ? sizeof(DDS_HEADER_DXT10) : 0);

CreateTextureFromDDS(d3dDevice, header, ddsData + offset, ddsDataSize - offset, texture, textureView, maxsize);
}
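
As a usage sketch, the caller loads the raw bytes of a .dds file (for example, with the DX::ReadDataAsync helper from the UWP DirectX templates) and hands them to the loader. The fileData and m_d3dDevice names below are assumptions for illustration, not part of the loader itself:

// Create a texture and a shader resource view from DDS data already in memory.
Microsoft::WRL::ComPtr<ID3D11Resource> texture;
Microsoft::WRL::ComPtr<ID3D11ShaderResourceView> textureView;

CreateDDSTextureFromMemory(
    m_d3dDevice.Get(),   // The app's Direct3D 11 device (assumed).
    fileData.data(),     // Raw bytes of the DDS file (assumed loaded earlier).
    fileData.size(),
    &texture,            // Optional out: the created ID3D11Resource.
    &textureView,        // Optional out: an SRV ready for shader binding.
    0);                  // maxsize 0: bound only by feature-level limits.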
Add features to DirectX games

This section provides information about adding various features to your DirectX games.
The DirectX and XAML interop topic explains how to use Extensible Application Markup Language (XAML) and
DirectX together to build user interface frameworks.
The Game input topic provides information about adding various types of controls to your game.
The Supporting screen orientation topic discusses the best practices for handling screen rotation in your UWP
DirectX game.

TOPIC                           DESCRIPTION

DirectX and XAML interop        Enable DirectX and XAML interop.

Game input                      Add input methods to your game.

Supporting screen orientation   Add screen orientation support.


DirectX and XAML interop

[ Updated for UWP apps on Windows 10. For Windows 8.x articles, see the archive ]
You can use Extensible Application Markup Language (XAML) and Microsoft DirectX together in your Universal
Windows Platform (UWP) game or app. The combination of XAML and DirectX lets you build flexible user interface
frameworks that interop with your DirectX-rendered content, and is particularly useful for graphics-intensive apps.
This topic explains the structure of a UWP app that uses DirectX and identifies the important types to use when
building your UWP app to work with DirectX.
If your app mainly focuses on 2D rendering, you may want to use the Win2D Windows Runtime library. This
library is maintained by Microsoft and built on top of the core Direct2D technologies. It greatly simplifies the usage
pattern to implement 2D graphics and includes helpful abstractions for some of the techniques described in this
document. See the project page for more details. This document covers guidance for app developers who choose
not to use Win2D.

Note DirectX APIs are not defined as Windows Runtime types, so you typically use Visual C++ component
extensions (C++/CX) to develop XAML UWP components that interoperate with DirectX. Also, you can create a
UWP app with C# and XAML that uses DirectX, if you wrap the DirectX calls in a separate Windows Runtime
metadata file.

XAML and DirectX


DirectX provides two powerful libraries for 2D and 3D graphics: Direct2D and Microsoft Direct3D. Although XAML
provides support for basic 2D primitives and effects, many apps, such as modeling and gaming, need more
complex graphics support. For these, you can use Direct2D and Direct3D to render part or all of the graphics and
use XAML for everything else.
If you are implementing custom XAML and DirectX interop, you need to know these two concepts:
Shared surfaces are sized regions of the display, defined by XAML, that you can use DirectX to draw into
indirectly, using Windows::UI::Xaml::Media::ImageSource types. For shared surfaces, you don't control the
precise timing for when new content appears on-screen. Rather, updates to the shared surface are synced to
the XAML framework's updates.
Swap chains represent a collection of buffers used to display graphics at minimal latency. Typically, swap chains
are updated at 60 frames per second separately from the UI thread. However, swap chains use more memory
and CPU resources in order to support rapid updates, and are more difficult to use since you have to manage
multiple threads.
Consider what you are using DirectX for. Will it be used to composite or animate a single control that fits within
the dimensions of the display window? Will it contain output that needs to be rendered and controlled in real-time,
as in a game? If so, you will probably need to implement a swap chain. Otherwise, you should be fine using a
shared surface.
Once you've determined how you intend to use DirectX, you use one of these Windows Runtime types to
incorporate DirectX rendering into your Windows Store app:
If you want to compose a static image, or draw a complex image on event-driven intervals, draw to a shared
surface with Windows::UI::Xaml::Media::Imaging::SurfaceImageSource. This type handles a sized DirectX
drawing surface. Typically, you use this type when composing an image or texture as a bitmap for display in
a document or UI element. It doesn't work well for real-time interactivity, such as a high-performance game.
That's because updates to a SurfaceImageSource object are synced to XAML user interface updates, and
that can introduce latency into the visual feedback you provide to the user, like a fluctuating frame rate or a
perceived poor response to real-time input. Updates are still quick enough for dynamic controls or data
simulations, though!
If the image is larger than the provided screen real estate, and can be panned or zoomed by the user, use
Windows::UI::Xaml::Media::Imaging::VirtualSurfaceImageSource. This type handles a sized DirectX drawing
surface that is larger than the screen. Like SurfaceImageSource, you use this when composing a complex
image or control dynamically. And, also like SurfaceImageSource, it doesn't work well for high-
performance games. Some examples of XAML elements that could use a VirtualSurfaceImageSource are
map controls, or a large, image-dense document viewer.
If you are using DirectX to present graphics updated in real-time, or in a situation where the updates must
come on regular low-latency intervals, use the SwapChainPanel class, so you can refresh the graphics
without syncing to the XAML framework refresh timer. This type enables you to access the graphics device's
swap chain (IDXGISwapChain1) directly and layer XAML atop the render target. This type works great for
games and full-screen DirectX apps that require a XAML-based user interface. You must know DirectX well
to use this approach, including the Microsoft DirectX Graphics Infrastructure (DXGI), Direct2D, and Direct3D
technologies. For more info, see Programming Guide for Direct3D 11.

SurfaceImageSource
SurfaceImageSource provides DirectX shared surfaces to draw into and then composes the bits into app content.
Here is the basic process for creating and updating a SurfaceImageSource object in the code-behind:
1. Define the size of the shared surface by passing the height and width to the SurfaceImageSource
constructor. You can also indicate whether the surface needs alpha (opacity) support.
For example:
SurfaceImageSource^ surfaceImageSource = ref new SurfaceImageSource(400, 300);

2. Get a pointer to ISurfaceImageSourceNativeWithD2D. Cast the SurfaceImageSource object as IInspectable
(or IUnknown), and call QueryInterface on it to get the underlying
ISurfaceImageSourceNativeWithD2D implementation. You use the methods defined on this
implementation to set the device and run the draw operations.

Microsoft::WRL::ComPtr<ISurfaceImageSourceNativeWithD2D> m_sisNativeWithD2D;

// ...

IInspectable* sisInspectable =
(IInspectable*) reinterpret_cast<IInspectable*>(surfaceImageSource);

sisInspectable->QueryInterface(
__uuidof(ISurfaceImageSourceNativeWithD2D),
(void **)&m_sisNativeWithD2D);

3. Create the DXGI and D2D devices by first calling D3D11CreateDevice and D2D1CreateDevice, and then
passing the D2D device to ISurfaceImageSourceNativeWithD2D::SetDevice.
NOTE
If you will be drawing to your SurfaceImageSource from a background thread, you'll also need to ensure that the
DXGI device has enabled multi-threaded access. This should only be done if you will be drawing from a background
thread, for performance reasons.

For example:

Microsoft::WRL::ComPtr<ID3D11Device> m_d3dDevice;
Microsoft::WRL::ComPtr<ID3D11DeviceContext> m_d3dContext;
Microsoft::WRL::ComPtr<ID2D1Device> m_d2dDevice;

// Create the DXGI device.
D3D11CreateDevice(
NULL,
D3D_DRIVER_TYPE_HARDWARE,
NULL,
flags,
featureLevels,
ARRAYSIZE(featureLevels),
D3D11_SDK_VERSION,
&m_d3dDevice,
NULL,
&m_d3dContext);

Microsoft::WRL::ComPtr<IDXGIDevice> dxgiDevice;
m_d3dDevice.As(&dxgiDevice);

// To enable multi-threaded access (optional)
Microsoft::WRL::ComPtr<ID3D10Multithread> d3dMultiThread;

m_d3dDevice->QueryInterface(
__uuidof(ID3D10Multithread),
(void **) &d3dMultiThread);

d3dMultiThread->SetMultithreadProtected(TRUE);

// Create the D2D device.
D2D1CreateDevice(dxgiDevice.Get(), NULL, &m_d2dDevice);

// Set the D2D device.
m_sisNativeWithD2D->SetDevice(m_d2dDevice.Get());

4. Provide a pointer to the ID2D1DeviceContext object to ISurfaceImageSourceNativeWithD2D::BeginDraw, and
use the returned drawing context to draw the contents of the desired rectangle within the
SurfaceImageSource. ISurfaceImageSourceNativeWithD2D::BeginDraw and the drawing commands
can be called from a background thread. Only the area specified for update in the updateRect parameter is
drawn.
This method returns the point (x,y) offset of the updated target rectangle in the offset parameter. You use
this offset to determine where to draw your updated content with the ID2D1DeviceContext.
Microsoft::WRL::ComPtr<ID2D1DeviceContext> drawingContext;

HRESULT beginDrawHR = m_sisNativeWithD2D->BeginDraw(
updateRect,
&drawingContext,
&offset);

if (beginDrawHR == DXGI_ERROR_DEVICE_REMOVED
|| beginDrawHR == DXGI_ERROR_DEVICE_RESET
|| beginDrawHR == D2DERR_RECREATE_TARGET)
{
// The D3D and D2D devices were lost and need to be re-created.
// Recovery steps are:
// 1) Re-create the D3D and D2D devices
// 2) Call ISurfaceImageSourceNativeWithD2D::SetDevice with the new D2D
// device
// 3) Redraw the contents of the SurfaceImageSource
}
else if (beginDrawHR == E_SURFACE_CONTENTS_LOST)
{
// The devices were not lost but the entire contents of the surface
// were. Recovery steps are:
// 1) Call ISurfaceImageSourceNativeWithD2D::SetDevice with the D2D
// device again
// 2) Redraw the entire contents of the SurfaceImageSource
}
else
{
// Draw your updated rectangle with the drawingContext
}
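
In the success branch, a minimal sketch of using the returned offset could look like this (the clear color is just an illustration, and offset is the POINT filled in by BeginDraw above):

// Translate by the returned offset so drawing lands in the updated rectangle.
drawingContext->SetTransform(
    D2D1::Matrix3x2F::Translation(
        static_cast<float>(offset.x),
        static_cast<float>(offset.y)));

drawingContext->Clear(D2D1::ColorF(D2D1::ColorF::CornflowerBlue));
// ... issue the rest of your Direct2D drawing commands here ...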

5. Call ISurfaceImageSourceNativeWithD2D::EndDraw to complete the bitmap. The bitmap can be used as a
source for a XAML Image or ImageBrush. ISurfaceImageSourceNativeWithD2D::EndDraw must be
called only from the UI thread.

m_sisNativeWithD2D->EndDraw();

// ...
// The SurfaceImageSource object's underlying
// ISurfaceImageSourceNativeWithD2D object contains the completed bitmap.

ImageBrush^ brush = ref new ImageBrush();
brush->ImageSource = surfaceImageSource;

NOTE
Calling SurfaceImageSource::SetSource (inherited from IBitmapSource::SetSource) currently throws an exception.
Do not call it from your SurfaceImageSource object.

NOTE
Applications must avoid drawing to SurfaceImageSource while their associated Window is hidden, otherwise
ISurfaceImageSourceNativeWithD2D APIs will fail. To accomplish this, register as an event listener for the
Window.VisibilityChanged event to track visibility changes.
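
A minimal sketch of such a listener follows; the MyRenderer class and m_windowVisible flag are assumptions for illustration:

// Track window visibility so the app can skip drawing while hidden.
void MyRenderer::RegisterForVisibilityChanged()
{
    Windows::UI::Xaml::Window::Current->VisibilityChanged +=
        ref new Windows::UI::Xaml::WindowVisibilityChangedEventHandler(
            this, &MyRenderer::OnVisibilityChanged);
}

void MyRenderer::OnVisibilityChanged(
    Platform::Object^ sender,
    Windows::UI::Core::VisibilityChangedEventArgs^ args)
{
    m_windowVisible = args->Visible;
}

void MyRenderer::Render()
{
    if (!m_windowVisible)
    {
        return; // Don't call BeginDraw/EndDraw while the window is hidden.
    }
    // ... BeginDraw / draw / EndDraw as shown above ...
}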

VirtualSurfaceImageSource
VirtualSurfaceImageSource extends SurfaceImageSource when the content is potentially larger than what can fit
on screen and so the content must be virtualized to render optimally.
VirtualSurfaceImageSource differs from SurfaceImageSource in that it uses a callback,
IVirtualSurfaceUpdatesCallbackNative::UpdatesNeeded, that you implement to update regions of the surface
as they become visible on the screen. You do not need to clear regions that are hidden, as the XAML framework
takes care of that for you.
Here is the basic process for creating and updating a VirtualSurfaceImageSource object in the code-behind:
1. Create an instance of VirtualSurfaceImageSource with the size that you want. For example:

VirtualSurfaceImageSource^ virtualSIS =
ref new VirtualSurfaceImageSource(2000, 2000);

2. Get pointers to IVirtualSurfaceImageSourceNative and ISurfaceImageSourceNativeWithD2D. Cast the
VirtualSurfaceImageSource object as IInspectable or IUnknown, and call QueryInterface on it to get the
underlying IVirtualSurfaceImageSourceNative and ISurfaceImageSourceNativeWithD2D
implementations. You use the methods defined on these implementations to set the device and run the
draw operations.

Microsoft::WRL::ComPtr<IVirtualSurfaceImageSourceNative> m_vsisNative;
Microsoft::WRL::ComPtr<ISurfaceImageSourceNativeWithD2D> m_sisNativeWithD2D;

// ...

IInspectable* vsisInspectable =
(IInspectable*) reinterpret_cast<IInspectable*>(virtualSIS);

vsisInspectable->QueryInterface(
__uuidof(IVirtualSurfaceImageSourceNative),
(void **) &m_vsisNative);

vsisInspectable->QueryInterface(
__uuidof(ISurfaceImageSourceNativeWithD2D),
(void **) &m_sisNativeWithD2D);

3. Create the DXGI and D2D devices by first calling D3D11CreateDevice and D2D1CreateDevice, and then
pass the D2D device to ISurfaceImageSourceNativeWithD2D::SetDevice.

NOTE
If you will be drawing to your VirtualSurfaceImageSource from a background thread, you'll also need to ensure
that the DXGI device has enabled multi-threaded access. This should only be done if you will be drawing from a
background thread, for performance reasons.

For example:
Microsoft::WRL::ComPtr<ID3D11Device> m_d3dDevice;
Microsoft::WRL::ComPtr<ID3D11DeviceContext> m_d3dContext;
Microsoft::WRL::ComPtr<ID2D1Device> m_d2dDevice;

// Create the DXGI device.
D3D11CreateDevice(
NULL,
D3D_DRIVER_TYPE_HARDWARE,
NULL,
flags,
featureLevels,
ARRAYSIZE(featureLevels),
D3D11_SDK_VERSION,
&m_d3dDevice,
NULL,
&m_d3dContext);

Microsoft::WRL::ComPtr<IDXGIDevice> dxgiDevice;
m_d3dDevice.As(&dxgiDevice);

// To enable multi-threaded access (optional)
Microsoft::WRL::ComPtr<ID3D10Multithread> d3dMultiThread;

m_d3dDevice->QueryInterface(
__uuidof(ID3D10Multithread),
(void **) &d3dMultiThread);

d3dMultiThread->SetMultithreadProtected(TRUE);

// Create the D2D device.
D2D1CreateDevice(dxgiDevice.Get(), NULL, &m_d2dDevice);

// Set the D2D device.
m_sisNativeWithD2D->SetDevice(m_d2dDevice.Get());

m_vsisNative->SetDevice(dxgiDevice.Get());

4. Call IVirtualSurfaceImageSourceNative::RegisterForUpdatesNeeded, passing in a reference to your
implementation of IVirtualSurfaceUpdatesCallbackNative.

class MyContentImageSource : public IVirtualSurfaceUpdatesCallbackNative
{
// ...
private:
virtual HRESULT STDMETHODCALLTYPE UpdatesNeeded() override;
};

// ...

HRESULT STDMETHODCALLTYPE MyContentImageSource::UpdatesNeeded()
{
// ... Perform drawing here ...
return S_OK;
}

void MyContentImageSource::Initialize()
{
// ...
m_vsisNative->RegisterForUpdatesNeeded(this);
// ...
}

The framework calls your implementation of IVirtualSurfaceUpdatesCallbackNative::UpdatesNeeded when a
region of the VirtualSurfaceImageSource needs to be updated.
This can happen either when the framework determines the region needs to be drawn (such as when the
user pans or zooms the view of the surface), or after the app has called
IVirtualSurfaceImageSourceNative::Invalidate on that region.
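For instance, a minimal sketch of invalidating a region (the rectangle values are arbitrary):

// Mark a region of the surface as dirty so UpdatesNeeded fires for it.
RECT updateRect = { 0, 0, 500, 500 }; // Surface coordinates; illustrative values.
m_vsisNative->Invalidate(updateRect);
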
5. In IVirtualSurfaceUpdatesCallbackNative::UpdatesNeeded, use the
IVirtualSurfaceImageSourceNative::GetUpdateRectCount and
IVirtualSurfaceImageSourceNative::GetUpdateRects methods to determine which region(s) of the surface
must be drawn.

HRESULT STDMETHODCALLTYPE MyContentImageSource::UpdatesNeeded()
{
HRESULT hr = S_OK;

try
{
ULONG drawingBoundsCount = 0;
m_vsisNative->GetUpdateRectCount(&drawingBoundsCount);

std::unique_ptr<RECT[]> drawingBounds(
new RECT[drawingBoundsCount]);

m_vsisNative->GetUpdateRects(
drawingBounds.get(),
drawingBoundsCount);

for (ULONG i = 0; i < drawingBoundsCount; ++i)
{
// Drawing code here ...
}
}
catch (Platform::Exception^ exception)
{
hr = exception->HResult;
}

return hr;
}

6. Lastly, for each region that must be updated:
a. Provide a pointer to the ID2D1DeviceContext object to
ISurfaceImageSourceNativeWithD2D::BeginDraw, and use the returned drawing context to draw
the contents of the desired rectangle within the SurfaceImageSource.
ISurfaceImageSourceNativeWithD2D::BeginDraw and the drawing commands can be called
from a background thread. Only the area specified for update in the updateRect parameter is drawn.
This method returns the point (x,y) offset of the updated target rectangle in the offset parameter. You
use this offset to determine where to draw your updated content with the ID2D1DeviceContext.
Microsoft::WRL::ComPtr<ID2D1DeviceContext> drawingContext;

HRESULT beginDrawHR = m_sisNativeWithD2D->BeginDraw(
updateRect,
&drawingContext,
&offset);

if (beginDrawHR == DXGI_ERROR_DEVICE_REMOVED
|| beginDrawHR == DXGI_ERROR_DEVICE_RESET
|| beginDrawHR == D2DERR_RECREATE_TARGET)
{
// The D3D and D2D devices were lost and need to be re-created.
// Recovery steps are:
// 1) Re-create the D3D and D2D devices
// 2) Call ISurfaceImageSourceNativeWithD2D::SetDevice with the
// new D2D device
// 3) Redraw the contents of the VirtualSurfaceImageSource
}
else if (beginDrawHR == E_SURFACE_CONTENTS_LOST)
{
// The devices were not lost but the entire contents of the
// surface were lost. Recovery steps are:
// 1) Call ISurfaceImageSourceNativeWithD2D::SetDevice with the
// D2D device again
// 2) Redraw the entire contents of the VirtualSurfaceImageSource
}
else
{
// Draw your updated rectangle with the drawingContext
}

b. Draw the specific content to that region, but constrain your drawing to the bounded regions for
better performance.
c. Call ISurfaceImageSourceNativeWithD2D::EndDraw. The result is a bitmap.

NOTE
Applications must avoid drawing to SurfaceImageSource while their associated Window is hidden, otherwise
ISurfaceImageSourceNativeWithD2D APIs will fail. To accomplish this, register as an event listener for the
Window.VisibilityChanged event to track visibility changes.

SwapChainPanel and gaming


SwapChainPanel is the Windows Runtime type designed to support high-performance graphics and gaming,
where you manage the swap chain directly. In this case, you create your own DirectX swap chain and manage the
presentation of your rendered content.
To ensure good performance, there are certain limitations to the SwapChainPanel type:
You can have no more than 4 SwapChainPanel instances per app.
You should set the DirectX swap chain's height and width (in DXGI_SWAP_CHAIN_DESC1) to the current
dimensions of the swap chain element. If you don't, the display content will be scaled (using
DXGI_SCALING_STRETCH) to fit.
You must set the DirectX swap chain's scaling mode (in DXGI_SWAP_CHAIN_DESC1) to
DXGI_SCALING_STRETCH.
You can't set the DirectX swap chain's alpha mode (in DXGI_SWAP_CHAIN_DESC1) to
DXGI_ALPHA_MODE_PREMULTIPLIED.
You must create the DirectX swap chain by calling IDXGIFactory2::CreateSwapChainForComposition.
You update the SwapChainPanel based on the needs of your app, and not the updates of the XAML framework. If
you need to synchronize the updates of SwapChainPanel to those of the XAML framework, register for the
Windows::UI::Xaml::Media::CompositionTarget::Rendering event. Otherwise, you must consider any cross-thread
issues if you try to update the XAML elements from a different thread than the one updating the
SwapChainPanel.
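A sketch of registering for that event (MyRenderer and OnRendering are names assumed for illustration):

// Synchronize swap chain updates with the XAML framework's render pass.
Windows::UI::Xaml::Media::CompositionTarget::Rendering +=
    ref new Windows::Foundation::EventHandler<Platform::Object^>(
        this, &MyRenderer::OnRendering);

void MyRenderer::OnRendering(Platform::Object^ sender, Platform::Object^ args)
{
    // Render and present one frame here, in step with XAML.
}
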
If you need to receive low-latency pointer input to your SwapChainPanel, use
SwapChainPanel::CreateCoreIndependentInputSource. This method returns a CoreIndependentInputSource object
that can be used to receive input events at minimal latency on a background thread. Note that once this method is
called, normal XAML pointer input events will not be fired for the SwapChainPanel, since all input will be
redirected to the background thread.
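Here is a minimal sketch of that pattern; the swapChainPanel member, the MyRenderer class, its OnPointerMovedWorker handler, and the m_inputLoopWorker member are assumptions for illustration:

// Process pointer input at minimal latency on a dedicated background thread.
auto workItemHandler = ref new Windows::System::Threading::WorkItemHandler(
    [this](Windows::Foundation::IAsyncAction^)
{
    // The input source must be created on the thread that processes it.
    auto coreInput = swapChainPanel->CreateCoreIndependentInputSource(
        Windows::UI::Core::CoreInputDeviceTypes::Mouse |
        Windows::UI::Core::CoreInputDeviceTypes::Touch |
        Windows::UI::Core::CoreInputDeviceTypes::Pen);

    coreInput->PointerMoved +=
        ref new Windows::Foundation::TypedEventHandler<
            Platform::Object^, Windows::UI::Core::PointerEventArgs^>(
                this, &MyRenderer::OnPointerMovedWorker);

    // Pump input events on this thread until the app quits.
    coreInput->Dispatcher->ProcessEvents(
        Windows::UI::Core::CoreProcessEventsOption::ProcessUntilQuit);
});

m_inputLoopWorker = Windows::System::Threading::ThreadPool::RunAsync(
    workItemHandler,
    Windows::System::Threading::WorkItemPriority::High,
    Windows::System::Threading::WorkItemOptions::TimeSliced);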

Note In general, your DirectX apps should create swap chains in landscape orientation, and equal to the
display window size (which is usually the native screen resolution in most Windows Store games). This ensures
that your app uses the optimal swap chain implementation when it doesn't have any visible XAML overlay. If
the app is rotated to portrait mode, your app should call IDXGISwapChain1::SetRotation on the existing swap
chain, apply a transform to the content if needed, and then call SetSwapChain again on the same swap chain.
Similarly, your app should call SetSwapChain again on the same swap chain whenever the swap chain is
resized by calling IDXGISwapChain::ResizeBuffers.
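
As a sketch of the resize case (the names follow the snippets in the steps below, and error handling is simplified):

// Resize the swap chain buffers, then re-associate the chain with the panel.
void MyRenderer::OnSwapChainPanelSizeChanged(float newWidth, float newHeight)
{
    // Outstanding references to the back buffers must be released first (assumed).
    DX::ThrowIfFailed(m_swapChain->ResizeBuffers(
        2, // Keep the buffer count used at creation.
        static_cast<UINT>(newWidth),
        static_cast<UINT>(newHeight),
        DXGI_FORMAT_B8G8R8A8_UNORM,
        0));

    // Point the SwapChainPanel at the resized swap chain again.
    DX::ThrowIfFailed(m_swapChainNative->SetSwapChain(m_swapChain.Get()));
}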

Here is the basic process for creating and updating a SwapChainPanel object in the code-behind:
1. Get an instance of a swap chain panel for your app. The instances are indicated in your XAML with the
<SwapChainPanel> tag.

Windows::UI::Xaml::Controls::SwapChainPanel^ swapChainPanel;

Here is an example <SwapChainPanel> tag.

<SwapChainPanel x:Name="swapChainPanel">
<SwapChainPanel.ColumnDefinitions>
<ColumnDefinition Width="300*"/>
<ColumnDefinition Width="1069*"/>
</SwapChainPanel.ColumnDefinitions>
</SwapChainPanel>

2. Get a pointer to ISwapChainPanelNative. Cast the SwapChainPanel object as IInspectable (or IUnknown),
and call QueryInterface on it to get the underlying ISwapChainPanelNative implementation.

Microsoft::WRL::ComPtr<ISwapChainPanelNative> m_swapChainNative;
// ...
IInspectable* panelInspectable = (IInspectable*) reinterpret_cast<IInspectable*>(swapChainPanel);
panelInspectable->QueryInterface(__uuidof(ISwapChainPanelNative), (void **)&m_swapChainNative);

3. Create the DXGI device and the swap chain, and set the swap chain to ISwapChainPanelNative by passing it
to SetSwapChain.
Microsoft::WRL::ComPtr<IDXGISwapChain1> m_swapChain;
// ...
DXGI_SWAP_CHAIN_DESC1 swapChainDesc = {0};
swapChainDesc.Width = static_cast<UINT>(m_bounds.Width);
swapChainDesc.Height = static_cast<UINT>(m_bounds.Height);
swapChainDesc.Format = DXGI_FORMAT_B8G8R8A8_UNORM; // This is the most common swapchain format.
swapChainDesc.Stereo = false;
swapChainDesc.SampleDesc.Count = 1; // Don't use multi-sampling.
swapChainDesc.SampleDesc.Quality = 0;
swapChainDesc.BufferUsage = DXGI_USAGE_RENDER_TARGET_OUTPUT;
swapChainDesc.BufferCount = 2;
swapChainDesc.Scaling = DXGI_SCALING_STRETCH;
swapChainDesc.SwapEffect = DXGI_SWAP_EFFECT_FLIP_SEQUENTIAL; // We recommend using this swap effect for all applications.
swapChainDesc.Flags = 0;

// QI for the DXGI device.
Microsoft::WRL::ComPtr<IDXGIDevice> dxgiDevice;
m_d3dDevice.As(&dxgiDevice);

// Get the DXGI adapter.
Microsoft::WRL::ComPtr<IDXGIAdapter> dxgiAdapter;
dxgiDevice->GetAdapter(&dxgiAdapter);

// Get the DXGI factory.
Microsoft::WRL::ComPtr<IDXGIFactory2> dxgiFactory;
dxgiAdapter->GetParent(__uuidof(IDXGIFactory2), &dxgiFactory);
// Create a swap chain by calling CreateSwapChainForComposition.
dxgiFactory->CreateSwapChainForComposition(
m_d3dDevice.Get(),
&swapChainDesc,
nullptr, // Allow on any display.
&m_swapChain
);

m_swapChainNative->SetSwapChain(m_swapChain.Get());

4. Draw to the DirectX swap chain, and present it to display the contents.

HRESULT hr = m_swapChain->Present(1, 0);

The XAML elements are refreshed when the Windows Runtime layout/render logic signals an update.

Related topics
Win2D
SurfaceImageSource
VirtualSurfaceImageSource
SwapChainPanel
ISwapChainPanelNative
Programming Guide for Direct3D 11
Game input for DirectX games

This section provides information about adding various types of input to your DirectX games.
The Move-look controls for games topic explains how to add traditional mouse and keyboard controls to your
DirectX game.
The Relative mouse movement topic discusses how you can implement mouse control by tracking the pixel delta
between mouse movements.
The Touch controls for games topic explains how to add touch-based controls to move a fixed-plane camera in a
Direct3D environment.

TOPIC                           DESCRIPTION

Move-look controls for games    Add move-look controls.

Relative mouse movement         Handle relative mouse movement.

Touch controls for games        Add basic touch controls to your game.

Move-look controls for games

[ Updated for UWP apps on Windows 10. For Windows 8.x articles, see the archive ]
Learn how to add traditional mouse and keyboard move-look controls (also known as mouselook controls) to
your DirectX game.
We also discuss move-look support for touch devices, with the move controller defined as the lower-left section of
the screen that behaves like a directional input, and the look controller defined for the remainder of the screen,
with the camera centering on the last place the player touched in that area.
If this is an unfamiliar control concept to you, think of it this way: the keyboard (or the touch-based directional
input box) controls your legs in this 3D space, and behaves as if your legs were only capable of moving forward or
backward, or strafing left and right. The mouse (or touch pointer) controls your head. You use your head to look in
a direction -- left or right, up or down, or somewhere in that plane. If there is a target in your view, you would use
the mouse to center your camera view on that target, and then press the forward key to move towards it, or back
to move away from it. To circle the target, you would keep the camera view centered on the target, and move left
or right at the same time. You can see how this is a very effective control method for navigating 3D environments!
These controls are commonly known as WASD controls in gaming, where the W, A, S, and D keys are used for x-z
plane fixed camera movement, and the mouse is used to control camera rotation around the x and y axes.

Objectives
Add basic move-look controls to your DirectX game for both mouse and keyboard, and touch screens.
Implement a first-person camera used to navigate a 3D environment.

A note on touch control implementations


For touch controls, we implement two controllers: the move controller, which handles movement in the x-z plane
relative to the camera's look point; and the look controller, which aims the camera's look point. Our move
controller maps to the keyboard WASD buttons, and the look controller maps to the mouse. But for touch controls,
we need to define a region of the screen that serves as the directional inputs, or the virtual WASD buttons, with the
remainder of the screen serving as the input space for the look controls.
Our screen looks like this.
When you move the touch pointer (not the mouse!) in the lower left of the screen, any movement upwards will
make the camera move forward. Any movement downwards will make the camera move backwards. The same
holds for left and right movement inside the move controller's pointer space. Outside of that space, and it
becomes a look controller -- you just touch or drag the camera to where you'd like it to face.

Set up the basic input event infrastructure


First, we must create our control class that we use to handle input events from the mouse and keyboard, and
update the camera perspective based on that input. Because we're implementing move-look controls, we call it
MoveLookController.

#include <DirectXMath.h>

using namespace Windows::UI::Core;
using namespace Windows::System;
using namespace Windows::Foundation;
using namespace Windows::Devices::Input;

ref class MoveLookController
{
}; // class MoveLookController

Now, let's create a header that defines the state of the move-look controller and its first-person camera, plus the
basic methods and event handlers that implement the controls and that update the state of the camera.

#define ROTATION_GAIN 0.004f // Sensitivity adjustment for the look controller
#define MOVEMENT_GAIN 0.1f // Sensitivity adjustment for the move controller

ref class MoveLookController
{
private:
// Properties of the controller object
DirectX::XMFLOAT3 m_position; // The position of the controller
float m_pitch, m_yaw; // Orientation euler angles in radians

// Properties of the Move control
bool m_moveInUse; // Specifies whether the move control is in use
uint32 m_movePointerID; // Id of the pointer in this control
DirectX::XMFLOAT2 m_moveFirstDown; // Point where initial contact occurred
DirectX::XMFLOAT2 m_movePointerPosition; // Point where the move pointer is currently located
DirectX::XMFLOAT3 m_moveCommand; // The net command from the move control

// Properties of the Look control
bool m_lookInUse; // Specifies whether the look control is in use
uint32 m_lookPointerID; // Id of the pointer in this control
DirectX::XMFLOAT2 m_lookLastPoint; // Last point (from last frame)
DirectX::XMFLOAT2 m_lookLastDelta; // For smoothing

bool m_forward, m_back; // States for movement
bool m_left, m_right;
bool m_up, m_down;

public:

// Methods to get input from the UI pointers
void OnPointerPressed(
_In_ Windows::UI::Core::CoreWindow^ sender,
_In_ Windows::UI::Core::PointerEventArgs^ args
);

void OnPointerMoved(
_In_ Windows::UI::Core::CoreWindow^ sender,
_In_ Windows::UI::Core::PointerEventArgs^ args
);

void OnPointerReleased(
_In_ Windows::UI::Core::CoreWindow^ sender,
_In_ Windows::UI::Core::PointerEventArgs^ args
);

void OnKeyDown(
_In_ Windows::UI::Core::CoreWindow^ sender,
_In_ Windows::UI::Core::KeyEventArgs^ args
);

void OnKeyUp(
_In_ Windows::UI::Core::CoreWindow^ sender,
_In_ Windows::UI::Core::KeyEventArgs^ args
);

// Set up the Controls that this controller supports
void Initialize( _In_ Windows::UI::Core::CoreWindow^ window );

void Update( Windows::UI::Core::CoreWindow ^window );

internal:
// Accessor to set position of controller
void SetPosition( _In_ DirectX::XMFLOAT3 pos );

// Accessor to set the orientation of the controller
void SetOrientation( _In_ float pitch, _In_ float yaw );

// Returns the position of the controller object
DirectX::XMFLOAT3 get_Position();

// Returns the point which the controller is facing
DirectX::XMFLOAT3 get_LookPoint();

}; // class MoveLookController

Our code contains 4 groups of private fields. Let's review the purpose of each one.
First, we define some useful fields that hold our updated info about our camera view.
m_position is the position of the camera (and therefore the viewplane) in the 3D scene, using scene
coordinates.
m_pitch is the pitch of the camera, or its up-down rotation around the viewplane's x-axis, in radians.
m_yaw is the yaw of the camera, or its left-right rotation around the viewplane's y-axis, in radians.
Now, let's define the fields that we use to store info about the status and position of our controllers. First, we'll
define the fields we need for our touch-based move controller. (There's nothing special needed for the keyboard
implementation of the move controller. We just read keyboard events with specific handlers.)
m_moveInUse indicates whether the move controller is in use.
m_movePointerID is the unique ID for the current move pointer. We use it to differentiate between the look
pointer and the move pointer when we check the pointer ID value.
m_moveFirstDown is the point on the screen where the player first touched the move controller pointer area.
We use this value later to set a dead zone to keep tiny movements from jittering the view.
m_movePointerPosition is the point on the screen the player has currently moved the pointer to. We use it to
determine what direction the player wanted to move by examining it relative to m_moveFirstDown.
m_moveCommand is the final computed command for the move controller: up (forward), down (back), left, or
right.
Now, we define the fields we use for our look controller, both the mouse and touch implementations.
m_lookInUse indicates whether the look control is in use.
m_lookPointerID is the unique ID for the current look pointer. We use it to differentiate between the look
pointer and the move pointer when we check the pointer ID value.
m_lookLastPoint is the last point, in scene coordinates, that was captured in the previous frame.
m_lookLastDelta is the computed difference between the current m_position and m_lookLastPoint.
Finally, we define 6 Boolean values for the 6 degrees of movement, which we use to indicate the current state of
each directional move action (on or off):
m_forward, m_back, m_left, m_right, m_up and m_down.
We use 5 event handlers to capture the input data we use to update the state of our controllers:
OnPointerPressed. The player pressed the left mouse button with the pointer in our game screen, or touched
the screen.
OnPointerMoved. The player moved the mouse with the pointer in our game screen, or dragged the touch
pointer on the screen.
OnPointerReleased. The player released the left mouse button with the pointer in our game screen, or
stopped touching the screen.
OnKeyDown. The player pressed a key.
OnKeyUp. The player released a key.
And finally, we use these methods and properties to initialize, access, and update the controllers' state info.
Initialize. Our app calls this event handler to initialize the controls and attach them to the CoreWindow object
that describes our display window.
SetPosition. Our app calls this method to set the (x, y, and z) coordinates of our controls in the scene space.
SetOrientation. Our app calls this method to set the pitch and yaw of the camera.
get_Position. Our app accesses this property to get the current position of the camera in the scene space. You
use this property as the method of communicating the current camera position to the app.
get_LookPoint. Our app accesses this property to get the current point toward which the controller camera is
facing.
Update. Reads the state of the move and look controllers and updates the camera position. You continually call
this method from the app's main loop to refresh the camera controller data and the camera position in the
scene space.
Now you have all the components you need to implement your move-look controls. So, let's connect these
pieces together.

Create the basic input events


The Windows Runtime event dispatcher provides 5 events we want instances of the MoveLookController class to
handle:
PointerPressed
PointerMoved
PointerReleased
KeyUp
KeyDown
These events are implemented on the CoreWindow type. We assume that you have a CoreWindow object to
work with. If you don't know how to obtain one, see How to set up your Universal Windows Platform (UWP) C++
app to display a DirectX view.
As these events fire while our app is running, the handlers update the controllers' state info defined in our private
fields.
First, let's populate the mouse and touch pointer event handlers. In the first event handler, OnPointerPressed(),
we get the x-y coordinates of the pointer from the CoreWindow that manages our display when the user clicks
the mouse or touches the screen in the look controller region.
OnPointerPressed

void MoveLookController::OnPointerPressed(
_In_ CoreWindow^ sender,
_In_ PointerEventArgs^ args)
{
// Get the current pointer position.
uint32 pointerID = args->CurrentPoint->PointerId;
DirectX::XMFLOAT2 position = DirectX::XMFLOAT2( args->CurrentPoint->Position.X, args->CurrentPoint->Position.Y );

auto device = args->CurrentPoint->PointerDevice;
auto deviceType = device->PointerDeviceType;
if ( deviceType == PointerDeviceType::Mouse )
{
// Action, Jump, or Fire
}

// Check if this pointer is in the move control.
// Change the values to percentages of the preferred screen resolution.
// You can set the x value to <preferred resolution> * <percentage of width>
// for example, ( position.x < (screenResolution.x * 0.15) ).

if (( position.x < 300 && position.y > 380 ) && ( deviceType != PointerDeviceType::Mouse ))
{
if ( !m_moveInUse ) // if no pointer is in this control yet
{
// Process a DPad touch down event.
m_moveFirstDown = position; // Save the location of the initial contact.
m_movePointerPosition = position;
m_movePointerID = pointerID; // Store the id of the pointer using this control.
m_moveInUse = TRUE;
}
}
else // This pointer must be in the look control.
{
if ( !m_lookInUse ) // If no pointer is in this control yet...
{
m_lookLastPoint = position; // save the point for later move
m_lookPointerID = args->CurrentPoint->PointerId; // store the id of pointer using this control
m_lookLastDelta.x = m_lookLastDelta.y = 0; // these are for smoothing
m_lookInUse = TRUE;
}
}
}

This event handler checks whether the pointer is not the mouse (for the purposes of this sample, which supports
both mouse and touch) and if it is in the move controller area. If both criteria are true, it checks whether the
pointer was just pressed, specifically, whether this click is unrelated to a previous move or look input, by testing if
m_moveInUse is false. If so, the handler captures the point in the move controller area where the press happened
and sets m_moveInUse to true, so that when this handler is called again, it won't overwrite the start position of
the move controller input interaction. It also updates the move controller pointer ID to the current pointer's ID.
If the pointer is the mouse or if the touch pointer isn't in the move controller area, it must be in the look controller
area. It sets m_lookLastPoint to the current position where the user pressed the mouse button or touched and
pressed, resets the delta, and updates the look controller's pointer ID to the current pointer ID. It also sets the state
of the look controller to active.
OnPointerMoved

void MoveLookController::OnPointerMoved(
_In_ CoreWindow ^sender,
_In_ PointerEventArgs ^args)
{
uint32 pointerID = args->CurrentPoint->PointerId;
DirectX::XMFLOAT2 position = DirectX::XMFLOAT2(args->CurrentPoint->Position.X, args->CurrentPoint->Position.Y);

// Decide which control this pointer is operating.
if (pointerID == m_movePointerID) // This is the move pointer.
{
// Move control
m_movePointerPosition = position; // Save the current position.

}
else if (pointerID == m_lookPointerID) // This is the look pointer.
{
// Look control

DirectX::XMFLOAT2 pointerDelta;
pointerDelta.x = position.x - m_lookLastPoint.x; // How far did pointer move
pointerDelta.y = position.y - m_lookLastPoint.y;

DirectX::XMFLOAT2 rotationDelta;
rotationDelta.x = pointerDelta.x * ROTATION_GAIN; // Scale for control sensitivity.
rotationDelta.y = pointerDelta.y * ROTATION_GAIN;

m_lookLastPoint = position; // Save for the next time through.

// Update our orientation based on the command.
m_pitch -= rotationDelta.y; // Mouse y increases down, but pitch increases up.
m_yaw -= rotationDelta.x; // Yaw is defined as CCW around the y-axis.

// Limit the pitch to straight up or straight down.
m_pitch = (float)__max(-DirectX::XM_PI / 2.0f, m_pitch);
m_pitch = (float)__min(+DirectX::XM_PI / 2.0f, m_pitch);
}
}

The OnPointerMoved event handler fires whenever the pointer moves (in this case, if a touch screen pointer is
being dragged, or if the mouse pointer is being moved while the left button is pressed). If the pointer ID is the
same as the move controller pointer's ID, then it's the move pointer; otherwise, we check if it's the look controller
that's the active pointer.
If it's the move controller, we just update the pointer position. We keep updating it as long as the PointerMoved
event keeps firing, because we want to compare the final position with the first one we captured with the
OnPointerPressed event handler.
If it's the look controller, things are a little more complicated. We need to calculate a new look point and center the
camera on it, so we calculate the delta between the last look point and the current screen position, and then we
multiply it by our scale factor, which we can tweak to make the look movements smaller or larger relative to the
distance of the screen movement. Using that value, we calculate the pitch and the yaw.
Finally, we need to deactivate the move or look controller behaviors when the player stops moving the mouse or
touching the screen. We use OnPointerReleased, which we call when PointerReleased is fired, to set
m_moveInUse or m_lookInUse to FALSE and turn off the camera pan movement, and to zero out the pointer ID.
OnPointerReleased
void MoveLookController::OnPointerReleased(
_In_ CoreWindow ^sender,
_In_ PointerEventArgs ^args)
{
uint32 pointerID = args->CurrentPoint->PointerId;
DirectX::XMFLOAT2 position = DirectX::XMFLOAT2( args->CurrentPoint->Position.X, args->CurrentPoint->Position.Y );

if ( pointerID == m_movePointerID ) // This was the move pointer.
{
m_moveInUse = FALSE;
m_movePointerID = 0;
}
else if (pointerID == m_lookPointerID ) // This was the look pointer.
{
m_lookInUse = FALSE;
m_lookPointerID = 0;
}
}

So far, we handled all the touch screen events. Now, let's handle the key input events for a keyboard-based move
controller.
OnKeyDown

void MoveLookController::OnKeyDown(
__in CoreWindow^ sender,
__in KeyEventArgs^ args )
{
Windows::System::VirtualKey Key;
Key = args->VirtualKey;

// Figure out the command from the keyboard.
if ( Key == VirtualKey::W ) // Forward
m_forward = true;
if ( Key == VirtualKey::S ) // Back
m_back = true;
if ( Key == VirtualKey::A ) // Left
m_left = true;
if ( Key == VirtualKey::D ) // Right
m_right = true;
}

As long as one of these keys is pressed, this event handler sets the corresponding directional move state to true.
OnKeyUp
void MoveLookController::OnKeyUp(
__in CoreWindow^ sender,
__in KeyEventArgs^ args)
{
Windows::System::VirtualKey Key;
Key = args->VirtualKey;

// Figure out the command from the keyboard.
if ( Key == VirtualKey::W ) // forward
m_forward = false;
if ( Key == VirtualKey::S ) // back
m_back = false;
if ( Key == VirtualKey::A ) // left
m_left = false;
if ( Key == VirtualKey::D ) // right
m_right = false;
}

And when the key is released, this event handler sets it back to false. When we call Update, it checks these
directional move states and moves the camera accordingly. This is a bit simpler than the touch implementation!

Initialize the touch controls and the controller state


Let's hook up the events now, and initialize all the controller state fields.
Initialize

void MoveLookController::Initialize( _In_ CoreWindow^ window )
{
// Opt in to receive touch/mouse events.
window->PointerPressed +=
ref new TypedEventHandler<CoreWindow^, PointerEventArgs^>(this, &MoveLookController::OnPointerPressed);

window->PointerMoved +=
ref new TypedEventHandler<CoreWindow^, PointerEventArgs^>(this, &MoveLookController::OnPointerMoved);

window->PointerReleased +=
ref new TypedEventHandler<CoreWindow^, PointerEventArgs^>(this, &MoveLookController::OnPointerReleased);

window->KeyDown +=
ref new TypedEventHandler<CoreWindow^, KeyEventArgs^>(this, &MoveLookController::OnKeyDown);

window->KeyUp +=
ref new TypedEventHandler<CoreWindow^, KeyEventArgs^>(this, &MoveLookController::OnKeyUp);

// Initialize the state of the controller.


m_moveInUse = FALSE; // No pointer is in the Move control.
m_movePointerID = 0;

m_lookInUse = FALSE; // No pointer is in the Look control.
m_lookPointerID = 0;

// Need to init this as it is reset every frame.
m_moveCommand = DirectX::XMFLOAT3( 0.0f, 0.0f, 0.0f );

SetOrientation( 0, 0 ); // Look straight ahead when the app starts.
}

Initialize takes a reference to the app's CoreWindow instance as a parameter and registers the event handlers
we developed to the appropriate events on that CoreWindow. It initializes the move and look pointer IDs, sets
the command vector for our touch screen move controller implementation to zero, and sets the camera to look
straight ahead when the app starts.
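Initialize itself must be called once from your app's startup code. A minimal sketch, assuming the view provider
holds a hypothetical m_controller member of type MoveLookController^:

// In the view provider's SetWindow method:
m_controller = ref new MoveLookController();
m_controller->Initialize( window ); // Registers the handlers on the CoreWindow.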

Getting and setting the position and orientation of the camera


Let's define some methods to get and set the position of the camera with respect to the viewport.

void MoveLookController::SetPosition( _In_ DirectX::XMFLOAT3 pos )
{
    m_position = pos;
}

// Accessor to set the orientation (pitch and yaw) of the controller.
void MoveLookController::SetOrientation( _In_ float pitch, _In_ float yaw )
{
    m_pitch = pitch;
    m_yaw = yaw;
}

// Returns the position of the controller object.
DirectX::XMFLOAT3 MoveLookController::get_Position()
{
    return m_position;
}

// Returns the point at which the camera controller is facing.
DirectX::XMFLOAT3 MoveLookController::get_LookPoint()
{
    float y = sinf(m_pitch); // Vertical
    float r = cosf(m_pitch); // In the plane
    float z = r*cosf(m_yaw); // Fwd-back
    float x = r*sinf(m_yaw); // Left-right
    DirectX::XMFLOAT3 result(x,y,z);
    result.x += m_position.x;
    result.y += m_position.y;
    result.z += m_position.z;

    // Return m_position + DirectX::XMFLOAT3(x, y, z);
    return result;
}

Updating the controller state info


Now, we perform the calculations that convert the pointer coordinate info tracked in m_movePointerPosition
into new coordinate information relative to our world coordinate system. Our app calls this method every time
we refresh the main app loop, so it is here that we compute the new look point position info we want to pass to
the app for updating the view matrix before projection into the viewport.

void MoveLookController::Update(CoreWindow ^window)
{
    // Check for input from the Move control.
    if (m_moveInUse)
    {
        DirectX::XMFLOAT2 pointerDelta(m_movePointerPosition);
        pointerDelta.x -= m_moveFirstDown.x;
        pointerDelta.y -= m_moveFirstDown.y;

        // Figure out the command from the touch-based virtual joystick.
        if (pointerDelta.x > 16.0f) // Leave a 32 pixel-wide dead spot for being still.
            m_moveCommand.x = 1.0f;
        else if (pointerDelta.x < -16.0f)
            m_moveCommand.x = -1.0f;

        if (pointerDelta.y > 16.0f) // Joystick y is up, so change the sign.
            m_moveCommand.y = -1.0f;
        else if (pointerDelta.y < -16.0f)
            m_moveCommand.y = 1.0f;
    }

    // Poll our state bits that are set by the keyboard input events.
    if (m_forward)
        m_moveCommand.y += 1.0f;
    if (m_back)
        m_moveCommand.y -= 1.0f;

    if (m_left)
        m_moveCommand.x -= 1.0f;
    if (m_right)
        m_moveCommand.x += 1.0f;

    if (m_up)
        m_moveCommand.z += 1.0f;
    if (m_down)
        m_moveCommand.z -= 1.0f;

    // Make sure that 45 degree cases are not faster.
    DirectX::XMFLOAT3 command = m_moveCommand;
    DirectX::XMVECTOR vector;
    vector = DirectX::XMLoadFloat3(&command);

    if (fabsf(command.x) > 0.1f || fabsf(command.y) > 0.1f || fabsf(command.z) > 0.1f)
    {
        vector = DirectX::XMVector3Normalize(vector);
        DirectX::XMStoreFloat3(&command, vector);
    }

    // Rotate command to align with our direction (world coordinates).
    DirectX::XMFLOAT3 wCommand;
    wCommand.x = command.x*cosf(m_yaw) - command.y*sinf(m_yaw);
    wCommand.y = command.x*sinf(m_yaw) + command.y*cosf(m_yaw);
    wCommand.z = command.z;

    // Scale for sensitivity adjustment.
    wCommand.x = wCommand.x * MOVEMENT_GAIN;
    wCommand.y = wCommand.y * MOVEMENT_GAIN;
    wCommand.z = wCommand.z * MOVEMENT_GAIN;

    // Our velocity is based on the command.
    // Also note that y is the up-down axis.
    DirectX::XMFLOAT3 Velocity;
    Velocity.x = -wCommand.x;
    Velocity.z = wCommand.y;
    Velocity.y = wCommand.z;

    // Integrate
    m_position.x += Velocity.x;
    m_position.y += Velocity.y;
    m_position.z += Velocity.z;

    // Clear movement input accumulator for use during the next frame.
    m_moveCommand = DirectX::XMFLOAT3(0.0f, 0.0f, 0.0f);
}
Because we don't want jittery movement when the player uses our touch-based move controller, we set a virtual
dead zone around the pointer with a diameter of 32 pixels. We also add velocity, which is the command value
multiplied by a movement gain. (You can adjust this behavior to your liking, to slow down or speed up the rate of
movement based on the distance the pointer moves in the move controller area.)
When we compute the velocity, we also translate the coordinates received from the move and look controllers into
the movement of the actual look point we send to the method that computes our view matrix for the scene. First,
we invert the x coordinate, because if we click-move or drag left or right with the look controller, the look point
rotates in the opposite direction in the scene, as a camera might swing about its central axis. Then, we swap the y
and z axes, because an up/down key press or touch drag motion (read as a y-axis behavior) on the move controller
should translate into a camera action that moves the look point into or out of the screen (the z-axis).
The final position of the look point for the player is the last position plus the calculated velocity, and this is what is
read by the renderer when it calls the get_Position method (most likely during the setup for each frame). After
that, we reset the move command to zero.
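The MOVEMENT_GAIN constant used above (and its companion ROTATION_GAIN for the look controller) aren't
defined in this excerpt. A minimal sketch with illustrative values; the exact numbers are tuning choices for your
game, not part of this excerpt:

#define ROTATION_GAIN 0.004f // Sensitivity adjustment for the look controller.
#define MOVEMENT_GAIN 0.1f   // Sensitivity adjustment for the move controller.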

Updating the view matrix with the new camera position


We can obtain a scene space coordinate that our camera is focused on, and which is updated whenever you tell
your app to do so (60 times per second in the main app loop, for example). This pseudocode suggests the calling
behavior you can implement:

myMoveLookController->Update( m_window );

// Update the view matrix based on the camera position.
myFirstPersonCamera->SetViewParameters(
myMoveLookController->get_Position(), // Point we are at
myMoveLookController->get_LookPoint(), // Point to look towards
DirectX::XMFLOAT3( 0, 1, 0 ) // Up-vector
);

Congratulations! You've implemented basic move-look controls for both touch screen and keyboard/mouse
input in your game!

Note
This article is for Windows 10 developers writing Universal Windows Platform (UWP) apps. If you're
developing for Windows 8.x or Windows Phone 8.x, see the archived documentation.
Relative mouse movement and CoreWindow

In games, the mouse is a common control option that is familiar to many players, and is likewise essential to many
genres of games, including first- and third-person shooters, and real-time strategy games. Here we discuss the
implementation of relative mouse controls, which don't use the system cursor and don't return absolute screen
coordinates; instead, they track the pixel delta between mouse movements.
Some apps, such as games, use the mouse as a more general input device. For example, a 3-D modeler might use
mouse input to orient a 3-D object by simulating a virtual trackball; or a game might use the mouse to change the
direction of the viewing camera via mouse-look controls.
In these scenarios, the app requires relative mouse data. Relative mouse values represent how far the mouse
moved since the last frame, rather than the absolute x-y coordinate values within a window or screen. Also, apps
often hide the mouse cursor since the position of the cursor with respect to the screen coordinates is not relevant
when manipulating a 3-D object or scene.
When the user takes an action that moves the app into a relative 3-D object/scene manipulation mode, the app
must:
Ignore default mouse handling.
Enable relative mouse handling.
Hide the mouse cursor by setting it to a null pointer (nullptr).
When the user takes an action that moves the app out of a relative 3-D object/scene manipulation mode, the app
must:
Enable default/absolute mouse handling.
Turn off relative mouse handling.
Set the mouse cursor to a non-null value (which makes it visible).

Note
With this pattern, the location of the absolute mouse cursor is preserved on entering the cursorless relative
mode. The cursor reappears at the same screen coordinate location it occupied before the relative mouse
movement mode was enabled.

Handling relative mouse movement


To access relative mouse delta values, register for the MouseDevice::MouseMoved event as shown here.

// Register the handler for relative mouse movement events.
Windows::Devices::Input::MouseDevice::GetForCurrentView()->MouseMoved +=
ref new TypedEventHandler<MouseDevice^, MouseEventArgs^>(this, &MoveLookController::OnMouseMoved);

void MoveLookController::OnMouseMoved(
_In_ Windows::Devices::Input::MouseDevice^ mouseDevice,
_In_ Windows::Devices::Input::MouseEventArgs^ args
)
{
float2 pointerDelta;
pointerDelta.x = static_cast<float>(args->MouseDelta.X);
pointerDelta.y = static_cast<float>(args->MouseDelta.Y);

float2 rotationDelta;
rotationDelta = pointerDelta * ROTATION_GAIN; // scale for control sensitivity

// update our orientation based on the command
m_pitch -= rotationDelta.y; // mouse y increases down, but pitch increases up
m_yaw -= rotationDelta.x; // yaw defined as CCW around y-axis

// limit pitch to straight up or straight down
float limit = (float)(M_PI/2) - 0.01f;
m_pitch = (float) __max( -limit, m_pitch );
m_pitch = (float) __min( +limit, m_pitch );

// keep longitude in useful range by wrapping
if ( m_yaw > M_PI )
m_yaw -= (float)M_PI*2;
else if ( m_yaw < -M_PI )
m_yaw += (float)M_PI*2;
}

The event handler in this code example, OnMouseMoved, renders the view based on the movements of the
mouse. The position of the mouse pointer is passed to the handler as a MouseEventArgs object.
Skip over processing of absolute mouse data from the CoreWindow::PointerMoved event when your app changes
to handling relative mouse movement values. However, only skip this input if the CoreWindow::PointerMoved
event occurred as the result of mouse input (as opposed to touch input). The cursor is hidden by setting
CoreWindow::PointerCursor to nullptr.
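A minimal sketch of entering relative mode, assuming a hypothetical m_relativeMouseMode flag on the controller:

// Entering relative-mouse mode (for example, when gameplay starts):
m_relativeMouseMode = true;      // Hypothetical flag; checked in OnPointerMoved.
window->PointerCursor = nullptr; // Hide the system cursor.

And the corresponding check at the top of the absolute pointer handler:

void MoveLookController::OnPointerMoved( CoreWindow^ sender, PointerEventArgs^ args )
{
    // Skip absolute mouse data while relative mode is active; touch still works.
    if ( m_relativeMouseMode &&
         args->CurrentPoint->PointerDevice->PointerDeviceType == Windows::Devices::Input::PointerDeviceType::Mouse )
    {
        return;
    }
    // ... normal absolute pointer handling ...
}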

Returning to absolute mouse movement


When the app exits the 3-D object or scene manipulation mode and no longer uses relative mouse movement
(such as when it returns to a menu screen), return to normal processing of absolute mouse movement. At this time,
stop reading relative mouse data, restart the processing of standard mouse (and pointer) events, and set
CoreWindow::PointerCursor to a non-null value.
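A minimal sketch of the exit path, restoring the standard arrow cursor (m_relativeMouseMode is the hypothetical
flag from the previous sketch):

// Leave relative-mouse mode and restore a visible arrow cursor.
m_relativeMouseMode = false;
window->PointerCursor = ref new Windows::UI::Core::CoreCursor(
    Windows::UI::Core::CoreCursorType::Arrow, 0 );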

Note
When your app is in the 3-D object/scene manipulation mode (processing relative mouse movements with the
cursor off), the mouse cannot invoke edge UI such as the charms, back stack, or app bar. Therefore, it is
important to provide a mechanism to exit this particular mode, such as the commonly used Esc key.

Related topics
Move-look controls for games
Touch controls for games
Touch controls for games

[ Updated for UWP apps on Windows 10. For Windows 8.x articles, see the archive ]
Learn how to add basic touch controls to your Universal Windows Platform (UWP) C++ game with DirectX. We
show you how to add touch-based controls to move a fixed-plane camera in a Direct3D environment, where
dragging with a finger or stylus shifts the camera perspective.
You can incorporate these controls in games where you want the player to drag to scroll or pan over a 3D
environment, such as a map or playfield. For example, in a strategy or puzzle game, you can use these controls to
let the player view a game environment that is larger than the screen by panning left or right.

Note Our code also works with mouse-based panning controls. The pointer-related events are abstracted by
the Windows Runtime APIs, so they can handle either touch- or mouse-based pointer events.

Objectives
Create a simple touch drag control for panning a fixed-plane camera in a DirectX game.

Set up the basic touch event infrastructure


First, we define our basic controller type; in this case, the CameraPanController. Here, we define a controller as an
abstract idea: the set of behaviors the user can perform.
The CameraPanController class is a regularly refreshed collection of information about the camera controller
state, and provides a way for our app to obtain that information from its update loop.

using namespace Windows::UI::Core;
using namespace Windows::System;
using namespace Windows::Foundation;
using namespace Windows::Devices::Input;
#include <directxmath.h>

ref class CameraPanController
{
};

Now, let's create a header that defines the state of the camera controller, and the basic methods and event
handlers that implement the camera controller interactions.
ref class CameraPanController
{
private:
    // Properties of the controller object
    DirectX::XMFLOAT3 m_position; // the position of the camera

    // Properties of the camera pan control
    bool m_panInUse;
    uint32 m_panPointerID;
    DirectX::XMFLOAT2 m_panFirstDown;
    DirectX::XMFLOAT2 m_panPointerPosition;
    DirectX::XMFLOAT3 m_panCommand;

internal:
    // Accessor to set the position of the controller
    void SetPosition( _In_ DirectX::XMFLOAT3 pos );

    // Accessor to get the fixed "look point" of the controller
    DirectX::XMFLOAT3 get_FixedLookPoint();

    // Returns the position of the controller object
    DirectX::XMFLOAT3 get_Position();

public:
    // Methods to get input from the UI pointers
    void OnPointerPressed(
        _In_ Windows::UI::Core::CoreWindow^ sender,
        _In_ Windows::UI::Core::PointerEventArgs^ args
    );

    void OnPointerMoved(
        _In_ Windows::UI::Core::CoreWindow^ sender,
        _In_ Windows::UI::Core::PointerEventArgs^ args
    );

    void OnPointerReleased(
        _In_ Windows::UI::Core::CoreWindow^ sender,
        _In_ Windows::UI::Core::PointerEventArgs^ args
    );

    // Set up the controls supported by this controller
    void Initialize( _In_ Windows::UI::Core::CoreWindow^ window );

    void Update( Windows::UI::Core::CoreWindow ^window );

}; // Class CameraPanController

The private fields contain the current state of the camera controller. Let's review them.
m_position is the position of the camera in the scene space. In this example, the z-coordinate value is fixed at 0.
We could use a DirectX::XMFLOAT2 to represent this value, but for the purposes of this sample and future
extensibility, we use a DirectX::XMFLOAT3. We pass this value through the get_Position property to the app
itself so it can update the viewport accordingly.
m_panInUse is a Boolean value that indicates whether a pan operation is active; or, more specifically, whether
the player is touching the screen and moving the camera.
m_panPointerID is a unique ID for the pointer. We won't use this in the sample, but it's a good practice to
associate your controller state class with a specific pointer.
m_panFirstDown is the point on the screen where the player first touched the screen or clicked the mouse
during the camera pan action. We use this value later to set a dead zone to prevent jitter when the screen is
touched, or if the mouse shakes a little.
m_panPointerPosition is the point on the screen where the player has currently moved the pointer. We use it
to determine what direction the player wanted to move by examining it relative to m_panFirstDown.
m_panCommand is the final computed command for the camera controller: up, down, left, or right. Because
we are working with a camera fixed to the x-y plane, this could be a DirectX::XMFLOAT2 value instead.
We use these three event handlers to update the camera controller state info.
OnPointerPressed is an event handler that our app calls when the player presses a finger onto the touch
surface; it receives the coordinates of the press.
OnPointerMoved is an event handler that our app calls when the player swipes a finger across the touch
surface. It updates with the new coordinates of the drag path.
OnPointerReleased is an event handler that our app calls when the player removes the pressing finger from
the touch surface.
Finally, we use these methods and properties to initialize, access, and update the camera controller state
information.
Initialize is a method that our app calls to initialize the controls and attach them to the CoreWindow
object that describes your display window.
SetPosition is a method that our app calls to set the (x, y, and z) coordinates of your controls in the scene
space. Note that our z-coordinate is 0 throughout this tutorial.
get_Position is a property that our app accesses to get the current position of the camera in the scene space.
You use this property as the way of communicating the current camera position to the app.
get_FixedLookPoint is a property that our app accesses to get the current point toward which the controller
camera is facing. In this example, it is locked normal to the x-y plane.
Update is a method that reads the controller state and updates the camera position. You continually call this
method from the app's main loop to refresh the camera controller data and the camera position in the
scene space.
Now you have all the components you need to implement touch controls. You can detect when and where
the touch or mouse pointer events have occurred, and what the action is. You can set the position and orientation
of the camera relative to the scene space, and track the changes. Finally, you can communicate the new camera
position to the calling app.
Now, let's connect these pieces together.

Create the basic touch events


The Windows Runtime event dispatcher provides three events we want our app to handle:
PointerPressed
PointerMoved
PointerReleased
These events are implemented on the CoreWindow type. We assume that you have a CoreWindow object to
work with. For more info, see How to set up your UWP C++ app to display a DirectX view.
As these events fire while our app is running, the handlers update the camera controller state info defined in our
private fields.
First, let's populate the touch pointer event handlers. In the first event handler, OnPointerPressed, we get the x-y
coordinates of the pointer from the CoreWindow that manages our display when the user touches the screen or
clicks the mouse.
OnPointerPressed
void CameraPanController::OnPointerPressed(
    _In_ CoreWindow^ sender,
    _In_ PointerEventArgs^ args)
{
    // Get the current pointer position.
    uint32 pointerID = args->CurrentPoint->PointerId;
    DirectX::XMFLOAT2 position = DirectX::XMFLOAT2( args->CurrentPoint->Position.X, args->CurrentPoint->Position.Y );

    auto device = args->CurrentPoint->PointerDevice;
    auto deviceType = device->PointerDeviceType;

    if ( !m_panInUse ) // If no pointer is in this control yet.
    {
        m_panFirstDown = position; // Save the location of the initial contact.
        m_panPointerPosition = position;
        m_panPointerID = pointerID; // Store the id of the pointer using this control.
        m_panInUse = TRUE;
    }
}

We use this handler to let the current CameraPanController instance know that the camera controller should be
treated as active by setting m_panInUse to TRUE. That way, when the app calls Update, it uses the current
position data to update the viewport.
Now that we've established the base values for the camera movement when the user touches the screen or click-
presses in the display window, we must determine what to do when the user either drags the screen press or
moves the mouse with the button pressed.
The OnPointerMoved event handler fires whenever the pointer moves, at every tick that the player drags it on the
screen. We need to keep the app aware of the current location of the pointer, and this is how we do it.
OnPointerMoved

void CameraPanController::OnPointerMoved(
_In_ CoreWindow ^sender,
_In_ PointerEventArgs ^args)
{
uint32 pointerID = args->CurrentPoint->PointerId;
DirectX::XMFLOAT2 position = DirectX::XMFLOAT2( args->CurrentPoint->Position.X, args->CurrentPoint->Position.Y );

m_panPointerPosition = position;
}

Finally, we need to deactivate the camera pan behavior when the player stops touching the screen. We use
OnPointerReleased, which is called when PointerReleased is fired, to set m_panInUse to FALSE, turn off
the camera pan movement, and reset the pointer ID to 0.
OnPointerReleased
void CameraPanController::OnPointerReleased(
_In_ CoreWindow ^sender,
_In_ PointerEventArgs ^args)
{
uint32 pointerID = args->CurrentPoint->PointerId;
DirectX::XMFLOAT2 position = DirectX::XMFLOAT2( args->CurrentPoint->Position.X, args->CurrentPoint->Position.Y );

m_panInUse = FALSE;
m_panPointerID = 0;
}

Initialize the touch controls and the controller state


Let's hook up the events and initialize all the basic state fields of the camera controller.
Initialize

void CameraPanController::Initialize( _In_ CoreWindow^ window )
{
    // Start receiving touch/mouse events.
    window->PointerPressed +=
    ref new TypedEventHandler<CoreWindow^, PointerEventArgs^>(this, &CameraPanController::OnPointerPressed);

    window->PointerMoved +=
    ref new TypedEventHandler<CoreWindow^, PointerEventArgs^>(this, &CameraPanController::OnPointerMoved);

    window->PointerReleased +=
    ref new TypedEventHandler<CoreWindow^, PointerEventArgs^>(this, &CameraPanController::OnPointerReleased);

    // Initialize the state of the controller.
    m_panInUse = FALSE;
    m_panPointerID = 0;

    // Initialize this as it is reset on every frame.
    m_panCommand = DirectX::XMFLOAT3( 0.0f, 0.0f, 0.0f );
}

Initialize takes a reference to the app's CoreWindow instance as a parameter and registers the event handlers
we developed to the appropriate events on that CoreWindow.

Getting and setting the position of the camera controller


Let's define some methods to get and set the position of the camera controller in the scene space.
void CameraPanController::SetPosition( _In_ DirectX::XMFLOAT3 pos )
{
    m_position = pos;
}

// Returns the position of the controller object
DirectX::XMFLOAT3 CameraPanController::get_Position()
{
    return m_position;
}

DirectX::XMFLOAT3 CameraPanController::get_FixedLookPoint()
{
    // For this sample, we don't need to use the trig functions because our
    // look point is fixed.
    DirectX::XMFLOAT3 result = m_position;
    result.z += 1.0f;
    return result;
}

SetPosition is a public method that we can call from our app if we need to set the camera controller position to a
specific point.
get_Position is our most important public property: it's the way our app gets the current position of the camera
controller in the scene space so it can update the viewport accordingly.
get_FixedLookPoint is a public property that, in this example, obtains a look point that is normal to the x-y plane.
You can change this method to use the trigonometric functions, sin and cos, when calculating the x, y, and z
coordinate values if you want to create more oblique angles for the fixed camera.
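For example, here's a sketch of an alternative get_FixedLookPoint that tilts the fixed view by a constant pitch
angle; m_fixedPitch (in radians) is a hypothetical field, not part of the sample:

DirectX::XMFLOAT3 CameraPanController::get_FixedLookPoint()
{
    // Tilt the fixed look direction by m_fixedPitch instead of looking
    // straight down the z-axis.
    DirectX::XMFLOAT3 result = m_position;
    result.y += sinf( m_fixedPitch ); // Vertical component of the tilt.
    result.z += cosf( m_fixedPitch ); // Remaining depth component.
    return result;
}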

Updating the camera controller state information


Now, we perform the calculations that convert the pointer coordinate info tracked in m_panPointerPosition into
new coordinate info relative to our 3D scene space. Our app calls this method every time we refresh the main
app loop. In it, we compute the new position information to pass to the app, which uses it to update the
view matrix before projection into the viewport.
void CameraPanController::Update( CoreWindow ^window )
{
    if ( m_panInUse )
    {
        DirectX::XMFLOAT2 pointerDelta;
        pointerDelta.x = m_panPointerPosition.x - m_panFirstDown.x;
        pointerDelta.y = m_panPointerPosition.y - m_panFirstDown.y;

        if ( pointerDelta.x > 16.0f ) // Leave a 32 pixel-wide dead spot for being still.
            m_panCommand.x += 1.0f;
        else if ( pointerDelta.x < -16.0f )
            m_panCommand.x += -1.0f;

        if ( pointerDelta.y > 16.0f )
            m_panCommand.y += 1.0f;
        else if ( pointerDelta.y < -16.0f )
            m_panCommand.y += -1.0f;
    }

    DirectX::XMFLOAT3 command = m_panCommand;

    // Our velocity is based on the command.
    DirectX::XMFLOAT3 Velocity;
    Velocity.x = command.x;
    Velocity.y = command.y;
    Velocity.z = 0.0f;

    // Integrate
    m_position.x = m_position.x + Velocity.x;
    m_position.y = m_position.y + Velocity.y;
    m_position.z = m_position.z + Velocity.z;

    // Clear the movement input accumulator for use during the next frame.
    m_panCommand = DirectX::XMFLOAT3( 0.0f, 0.0f, 0.0f );
}

Because we don't want touch or mouse jitter to make our camera panning jerky, we set a dead zone around the
pointer with a diameter of 32 pixels. We also have a velocity value; in this case, the camera moves one unit per
frame in whichever direction the pointer has traveled past the dead zone. You can adjust this behavior to slow
down or speed up the rate of movement, as the sketch below suggests.
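For instance, here's a sketch that scales the pan command with how far the pointer has traveled past the dead
zone, instead of the fixed one-unit command above. PAN_GAIN is a hypothetical tuning constant, not part of the
original sample:

static const float PAN_GAIN = 0.05f; // Hypothetical sensitivity factor.

if ( pointerDelta.x > 16.0f )
    m_panCommand.x += ( pointerDelta.x - 16.0f ) * PAN_GAIN;
else if ( pointerDelta.x < -16.0f )
    m_panCommand.x += ( pointerDelta.x + 16.0f ) * PAN_GAIN;
// ... and the same pattern for pointerDelta.y ...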

Updating the view matrix with the new camera position


We can now obtain a scene space coordinate that our camera is focused on, and which is updated whenever you
tell your app to do so (60 times per second in the main app loop, for example). This pseudocode suggests the calling
behavior you can implement:

myCameraPanController->Update( m_window );

// Update the view matrix based on the camera position.
myCamera->MyMethodToComputeViewMatrix(
    myCameraPanController->get_Position(),       // The position in the 3D scene space.
    myCameraPanController->get_FixedLookPoint(), // The point in the space we are looking at.
    DirectX::XMFLOAT3( 0, 1, 0 )                 // The axis that is "up" in our space.
);

Congratulations! You've implemented a simple set of camera panning touch controls in your game.

Note
This article is for Windows 10 developers writing Universal Windows Platform (UWP) apps. If you're
developing for Windows 8.x or Windows Phone 8.x, see the archived documentation.
Supporting screen orientation (DirectX and C++)

[ Updated for UWP apps on Windows 10. For Windows 8.x articles, see the archive ]
Your Universal Windows Platform (UWP) app can support multiple screen orientations when you handle the
DisplayInformation::OrientationChanged event. Here, we'll discuss best practices for handling screen rotation
in your UWP DirectX app, so that the Windows 10 device's graphics hardware is used efficiently and effectively.
Before you start, remember that graphics hardware always outputs pixel data in the same way, regardless of the
orientation of the device. Windows 10 devices can determine their current display orientation (with some sort of
sensor, or with a software toggle) and allow users to change the display settings. Because of this, Windows 10 itself
handles the rotation of the images to ensure they are "upright" based on the orientation of the device. By default,
your app receives the notification that something has changed in orientation, for example, a window size. When this
happens, Windows 10 immediately rotates the image for final display. For three of the four specific screen
orientations (discussed later), Windows 10 uses additional graphic resources and computation to display the final
image.
For UWP DirectX apps, the DisplayInformation object provides basic display orientation data that your app can
query. The default orientation is landscape, where the pixel width of the display is greater than the height; the
alternative orientation is portrait, where the display is rotated 90 degrees in either direction and the width becomes
less than the height.
Windows 10 defines four specific display orientation modes:
Landscape: the default display orientation for Windows 10, considered the base or identity angle for
rotation (0 degrees).
Portrait: the display has been rotated clockwise 90 degrees (or counter-clockwise 270 degrees).
Landscape, flipped: the display has been rotated 180 degrees (turned upside-down).
Portrait, flipped: the display has been rotated clockwise 270 degrees (or counter-clockwise 90 degrees).
When the display rotates from one orientation to another, Windows 10 internally performs a rotation operation to
align the drawn image with the new orientation, and the user sees an upright image on the screen.
Also, Windows 10 displays automatic transition animations to create a smooth user experience when shifting from
one orientation to another. As the display orientation shifts, the user sees these shifts as a fixed zoom and rotation
animation of the displayed screen image. Time is allocated by Windows 10 to the app for layout in the new
orientation.
Overall, this is the general process for handling changes in screen orientation:
1. Use a combination of the window bounds values and the display orientation data to keep the swap chain
aligned with the native display orientation of the device.
2. Notify Windows 10 of the orientation of the swap chain using IDXGISwapChain1::SetRotation.
3. Change the rendering code to generate images aligned with the user orientation of the device.

Resizing the swap chain and pre-rotating its contents


To perform a basic display resize and pre-rotate its contents in your UWP DirectX app, implement these steps:
1. Handle the DisplayInformation::OrientationChanged event.
2. Resize the swap chain to the new dimensions of the window.
3. Call IDXGISwapChain1::SetRotation to set the orientation of the swap chain.
4. Recreate any window size dependent resources, such as your render targets and other pixel data buffers.
Now let's look at those steps in a bit more detail.
Your first step is to register a handler for the DisplayInformation::OrientationChanged event. This event is
raised in your app every time the screen orientation changes, such as when the display is rotated.
To handle the DisplayInformation::OrientationChanged event, you connect your handler for
DisplayInformation::OrientationChanged in the required SetWindow method, which is one of the methods of
the IFrameworkView interface that your view provider must implement.
In this code example, the event handler for DisplayInformation::OrientationChanged is a method called
OnOrientationChanged. When DisplayInformation::OrientationChanged is raised, it in turn calls a method
called SetCurrentOrientation which then calls CreateWindowSizeDependentResources.

void App::SetWindow(CoreWindow^ window)
{
    // ... Other UI event handlers assigned here ...

    // currentDisplayInformation is assumed to have been obtained earlier, for example:
    // DisplayInformation^ currentDisplayInformation = DisplayInformation::GetForCurrentView();
    currentDisplayInformation->OrientationChanged +=
        ref new TypedEventHandler<DisplayInformation^, Object^>(this, &App::OnOrientationChanged);

    // ...
}

void App::OnOrientationChanged(DisplayInformation^ sender, Object^ args)
{
    m_deviceResources->SetCurrentOrientation(sender->CurrentOrientation);
    m_main->CreateWindowSizeDependentResources();
}

// This method is called in the event handler for the OrientationChanged event.
void DX::DeviceResources::SetCurrentOrientation(DisplayOrientations currentOrientation)
{
if (m_currentOrientation != currentOrientation)
{
m_currentOrientation = currentOrientation;
CreateWindowSizeDependentResources();
}
}

Next, you resize the swap chain for the new screen orientation and prepare it to rotate the contents of the graphics
pipeline when the rendering is performed. In this example,
DX::DeviceResources::CreateWindowSizeDependentResources is a method that handles calling
IDXGISwapChain::ResizeBuffers, setting a 3D and a 2D rotation matrix, calling SetRotation, and recreating your
resources.

void DX::DeviceResources::CreateWindowSizeDependentResources()
{
// Clear the previous window size specific context.
ID3D11RenderTargetView* nullViews[] = {nullptr};
m_d3dContext->OMSetRenderTargets(ARRAYSIZE(nullViews), nullViews, nullptr);
m_d3dRenderTargetView = nullptr;
m_d2dContext->SetTarget(nullptr);
m_d2dTargetBitmap = nullptr;
m_d3dDepthStencilView = nullptr;
m_d3dContext->Flush();

// Calculate the necessary render target size in pixels.
m_outputSize.Width = DX::ConvertDipsToPixels(m_logicalSize.Width, m_dpi);
m_outputSize.Height = DX::ConvertDipsToPixels(m_logicalSize.Height, m_dpi);

// Prevent zero size DirectX content from being created.
m_outputSize.Width = max(m_outputSize.Width, 1);
m_outputSize.Height = max(m_outputSize.Height, 1);

// The width and height of the swap chain must be based on the window's
// natively-oriented width and height. If the window is not in the native
// orientation, the dimensions must be reversed.
DXGI_MODE_ROTATION displayRotation = ComputeDisplayRotation();

bool swapDimensions = displayRotation == DXGI_MODE_ROTATION_ROTATE90 || displayRotation == DXGI_MODE_ROTATION_ROTATE270;
m_d3dRenderTargetSize.Width = swapDimensions ? m_outputSize.Height : m_outputSize.Width;
m_d3dRenderTargetSize.Height = swapDimensions ? m_outputSize.Width : m_outputSize.Height;

if (m_swapChain != nullptr)
{
// If the swap chain already exists, resize it.
HRESULT hr = m_swapChain->ResizeBuffers(
2, // Double-buffered swap chain.
lround(m_d3dRenderTargetSize.Width),
lround(m_d3dRenderTargetSize.Height),
DXGI_FORMAT_B8G8R8A8_UNORM,
0
);

if (hr == DXGI_ERROR_DEVICE_REMOVED || hr == DXGI_ERROR_DEVICE_RESET)
{
// If the device was removed for any reason, a new device and swap chain will need to be created.
HandleDeviceLost();

// Everything is set up now. Do not continue execution of this method. HandleDeviceLost will reenter this method
// and correctly set up the new device.
return;
}
else
{
DX::ThrowIfFailed(hr);
}
}
else
{
// Otherwise, create a new one using the same adapter as the existing Direct3D device.
DXGI_SWAP_CHAIN_DESC1 swapChainDesc = {0};

swapChainDesc.Width = lround(m_d3dRenderTargetSize.Width); // Match the size of the window.
swapChainDesc.Height = lround(m_d3dRenderTargetSize.Height);
swapChainDesc.Format = DXGI_FORMAT_B8G8R8A8_UNORM; // This is the most common swap chain format.
swapChainDesc.Stereo = false;
swapChainDesc.SampleDesc.Count = 1; // Don't use multi-sampling.
swapChainDesc.SampleDesc.Quality = 0;
swapChainDesc.BufferUsage = DXGI_USAGE_RENDER_TARGET_OUTPUT;
swapChainDesc.BufferCount = 2; // Use double-buffering to minimize latency.
swapChainDesc.SwapEffect = DXGI_SWAP_EFFECT_FLIP_SEQUENTIAL; // All Windows Store apps must use this SwapEffect.
swapChainDesc.Flags = 0;
swapChainDesc.Scaling = DXGI_SCALING_NONE;
swapChainDesc.AlphaMode = DXGI_ALPHA_MODE_IGNORE;

// This sequence obtains the DXGI factory that was used to create the Direct3D device above.
ComPtr<IDXGIDevice3> dxgiDevice;
DX::ThrowIfFailed(
m_d3dDevice.As(&dxgiDevice)
);

ComPtr<IDXGIAdapter> dxgiAdapter;
DX::ThrowIfFailed(
dxgiDevice->GetAdapter(&dxgiAdapter)
);

ComPtr<IDXGIFactory2> dxgiFactory;
DX::ThrowIfFailed(
dxgiAdapter->GetParent(IID_PPV_ARGS(&dxgiFactory))
);

DX::ThrowIfFailed(
dxgiFactory->CreateSwapChainForCoreWindow(
m_d3dDevice.Get(),
reinterpret_cast<IUnknown*>(m_window.Get()),
&swapChainDesc,
nullptr,
&m_swapChain
)
);

// Ensure that DXGI does not queue more than one frame at a time. This both reduces latency and
// ensures that the application will only render after each VSync, minimizing power consumption.
DX::ThrowIfFailed(
dxgiDevice->SetMaximumFrameLatency(1)
);
}

// Set the proper orientation for the swap chain, and generate 2D and
// 3D matrix transformations for rendering to the rotated swap chain.
// Note the rotation angle for the 2D and 3D transforms are different.
// This is due to the difference in coordinate spaces. Additionally,
// the 3D matrix is specified explicitly to avoid rounding errors.

switch (displayRotation)
{
case DXGI_MODE_ROTATION_IDENTITY:
m_orientationTransform2D = Matrix3x2F::Identity();
m_orientationTransform3D = ScreenRotation::Rotation0;
break;

case DXGI_MODE_ROTATION_ROTATE90:
m_orientationTransform2D =
Matrix3x2F::Rotation(90.0f) *
Matrix3x2F::Translation(m_logicalSize.Height, 0.0f);
m_orientationTransform3D = ScreenRotation::Rotation270;
break;

case DXGI_MODE_ROTATION_ROTATE180:
m_orientationTransform2D =
Matrix3x2F::Rotation(180.0f) *
Matrix3x2F::Translation(m_logicalSize.Width, m_logicalSize.Height);
m_orientationTransform3D = ScreenRotation::Rotation180;
break;

case DXGI_MODE_ROTATION_ROTATE270:
m_orientationTransform2D =
Matrix3x2F::Rotation(270.0f) *
Matrix3x2F::Translation(0.0f, m_logicalSize.Width);
m_orientationTransform3D = ScreenRotation::Rotation90;
break;

default:
throw ref new FailureException();
}

// Set the rotation on the swap chain so DXGI knows the content is pre-rotated.
DX::ThrowIfFailed(
m_swapChain->SetRotation(displayRotation)
);

// Create a render target view of the swap chain back buffer.
ComPtr<ID3D11Texture2D> backBuffer;
DX::ThrowIfFailed(
m_swapChain->GetBuffer(0, IID_PPV_ARGS(&backBuffer))
);

DX::ThrowIfFailed(
m_d3dDevice->CreateRenderTargetView(
backBuffer.Get(),
nullptr,
&m_d3dRenderTargetView
)
);

// Create a depth stencil view for use with 3D rendering if needed.
CD3D11_TEXTURE2D_DESC depthStencilDesc(
DXGI_FORMAT_D24_UNORM_S8_UINT,
lround(m_d3dRenderTargetSize.Width),
lround(m_d3dRenderTargetSize.Height),
1, // This depth stencil view has only one texture.
1, // Use a single mipmap level.
D3D11_BIND_DEPTH_STENCIL
);

ComPtr<ID3D11Texture2D> depthStencil;
DX::ThrowIfFailed(
m_d3dDevice->CreateTexture2D(
&depthStencilDesc,
nullptr,
&depthStencil
)
);

CD3D11_DEPTH_STENCIL_VIEW_DESC depthStencilViewDesc(D3D11_DSV_DIMENSION_TEXTURE2D);
DX::ThrowIfFailed(
m_d3dDevice->CreateDepthStencilView(
depthStencil.Get(),
&depthStencilViewDesc,
&m_d3dDepthStencilView
)
);

// Set the 3D rendering viewport to target the entire window.
m_screenViewport = CD3D11_VIEWPORT(
0.0f,
0.0f,
m_d3dRenderTargetSize.Width,
m_d3dRenderTargetSize.Height
);

m_d3dContext->RSSetViewports(1, &m_screenViewport);

// Create a Direct2D target bitmap associated with the
// swap chain back buffer and set it as the current target.
D2D1_BITMAP_PROPERTIES1 bitmapProperties =
D2D1::BitmapProperties1(
D2D1_BITMAP_OPTIONS_TARGET | D2D1_BITMAP_OPTIONS_CANNOT_DRAW,
D2D1::PixelFormat(DXGI_FORMAT_B8G8R8A8_UNORM, D2D1_ALPHA_MODE_PREMULTIPLIED),
m_dpi,
m_dpi
);

ComPtr<IDXGISurface2> dxgiBackBuffer;
DX::ThrowIfFailed(
m_swapChain->GetBuffer(0, IID_PPV_ARGS(&dxgiBackBuffer))
);

DX::ThrowIfFailed(
m_d2dContext->CreateBitmapFromDxgiSurface(
dxgiBackBuffer.Get(),
&bitmapProperties,
&m_d2dTargetBitmap
)
);

m_d2dContext->SetTarget(m_d2dTargetBitmap.Get());

// Grayscale text anti-aliasing is recommended for all Windows Store apps.
m_d2dContext->SetTextAntialiasMode(D2D1_TEXT_ANTIALIAS_MODE_GRAYSCALE);
}

After saving the current height and width values of the window for the next time this method is called, convert the
device independent pixel (DIP) values for the display bounds to pixels. In the sample, you call
ConvertDipsToPixels, which is a simple function that runs this code:
floor((dips * dpi / 96.0f) + 0.5f);

You add the 0.5f to ensure rounding to the nearest integer value.
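Wrapped up as a helper, a sketch based on the snippet above might look like this:

// Converts a length in device-independent pixels (DIPs) to physical pixels.
inline float ConvertDipsToPixels( float dips, float dpi )
{
    static const float dipsPerInch = 96.0f;
    return floorf( dips * dpi / dipsPerInch + 0.5f ); // Round to the nearest integer.
}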
As an aside, CoreWindow coordinates are always defined in DIPs. For Windows 10 and earlier versions of
Windows, a DIP is defined as 1/96th of an inch, and aligned to the OS's definition of up. When the display
orientation rotates to portrait mode, the app flips the width and height of the CoreWindow, and the render target
size (bounds) must change accordingly. Because Direct3D's coordinates are always in physical pixels, you must
convert from CoreWindow's DIP values to integer pixel values before you pass these values to Direct3D to set up
the swap chain.
Process-wise, you're doing a bit more work than you would if you simply resized the swap chain: you're actually
rotating the Direct2D and Direct3D components of your image before you composite them for presentation, and
you're telling the swap chain that you've rendered the results in a new orientation. Here's a little more detail on this
process, as shown in the code example for DX::DeviceResources::CreateWindowSizeDependentResources:
Determine the new orientation of the display. If the display has flipped from landscape to portrait, or vice
versa, swap the height and width values (converted from DIP values to pixels, of course) for the display
bounds.
Then, check to see if the swap chain has been created. If it hasn't been created, create it by calling
IDXGIFactory2::CreateSwapChainForCoreWindow. Otherwise, resize the existing swap chain's buffers to
the new display dimensions by calling IDXGISwapChain::ResizeBuffers. Although you don't need to resize
the swap chain for the rotation event (you're outputting the content already rotated by your rendering
pipeline, after all), there are other size change events, such as snap and fill events, that require resizing.
After that, set the appropriate 2-D or 3-D matrix transformation to apply to the pixels or the vertices
(respectively) in the graphics pipeline when rendering them to the swap chain. We have 4 possible rotation
matrices:
landscape (DXGI_MODE_ROTATION_IDENTITY)
portrait (DXGI_MODE_ROTATION_ROTATE270)
landscape, flipped (DXGI_MODE_ROTATION_ROTATE180)
portrait, flipped (DXGI_MODE_ROTATION_ROTATE90)
The correct matrix is selected based on the data provided by Windows 10 (such as the results of
DisplayInformation::OrientationChanged) for determining display orientation, and it will be multiplied
by the coordinates of each pixel (Direct2D) or vertex (Direct3D) in the scene, effectively rotating them to
align to the orientation of the screen. (Note that in Direct2D, the screen origin is defined as the upper-left
corner, while in Direct3D the origin is defined as the logical center of the window.)

Note For more info about the 2-D transformations used for rotation and how to define them, see Defining
matrices for screen rotation (2-D). For more info about the 3-D transformations used for rotation, see Defining
matrices for screen rotation (3-D).

Now, here's the important bit: call IDXGISwapChain1::SetRotation and provide it with your updated
DXGI_MODE_ROTATION value, like this:
m_swapChain->SetRotation(rotation);

You also store the selected rotation matrix where your render method can get it when it computes the new
projection. You'll use this matrix when you render your final 3-D projection or composite your final 2-D layout. (It
doesn't automatically apply it for you.)
After that, create a new render target for the rotated 3-D view, as well as a new depth stencil buffer for the view. Set
the 3-D rendering viewport for the rotated scene by calling ID3D11DeviceContext::RSSetViewports.
Lastly, if you have 2-D images to rotate or lay out, create a 2-D render target as a writable bitmap for the resized
swap chain using ID2D1DeviceContext::CreateBitmapFromDxgiSurface and composite your new layout for
the updated orientation. Set any properties you need to on the render target, such as the anti-aliasing mode (as
seen in the code example).
Now, present the swap chain.

Reduce the rotation delay by using CoreWindowResizeManager


By default, Windows 10 provides a short but noticeable window of time for any app, regardless of app model or
language, to complete the rotation of the image. However, chances are that when your app performs the rotation
calculation using one of the techniques described here, it will be done well before this window of time has closed.
You'd like to get that time back and complete the rotation animation, right? That's where
CoreWindowResizeManager comes in.
Here's how to use CoreWindowResizeManager: when a DisplayInformation::OrientationChanged event is
raised, call CoreWindowResizeManager::GetForCurrentView within the handler for the event to obtain an
instance of CoreWindowResizeManager and, when the layout for the new orientation is complete and presented,
call NotifyLayoutCompleted to let Windows know that it can complete the rotation animation and display the
app screen.
Here's what the code in your event handler for DisplayInformation::OrientationChanged might look like:

CoreWindowResizeManager^ resizeManager = Windows::UI::Core::CoreWindowResizeManager::GetForCurrentView();

// ... build the layout for the new display orientation ...

resizeManager->NotifyLayoutCompleted();

When a user rotates the orientation of the display, Windows 10 shows an animation independent of your app as
feedback to the user. There are three parts to that animation that happen in the following order:
Windows 10 shrinks the original image.
Windows 10 holds the image for the time it takes to rebuild the new layout. This is the window of time that
you'd like to reduce, because your app probably doesn't need all of it.
When the layout window expires, or when a notification of layout completion is received, Windows rotates the
image and then cross-fades and zooms to the new orientation.
As suggested in the third bullet, when an app calls NotifyLayoutCompleted, Windows 10 stops the timeout
window, completes the rotation animation and returns control to your app, which is now drawing in the new
display orientation. The overall effect is that your app now feels a little bit more fluid and responsive, and works a
little more efficiently!

Appendix A: Applying matrices for screen rotation (2-D)


In the example code in Resizing the swap chain and pre-rotating its contents (and in the DXGI swap chain rotation
sample), you might have noticed that we had separate rotation matrices for Direct2D output and Direct3D output.
Let's look at the 2-D matrices, first.
There are two reasons that we can't apply the same rotation matrices to Direct2D and Direct3D content:
One, they use different Cartesian coordinate models. Direct2D places the origin of its screen coordinates in
the upper-left corner, with the y-coordinate increasing in positive value moving downward from the origin.
Direct3D, by contrast, places the origin of the screen (the projection plane) in the lower-left corner, with the
y-coordinate increasing in positive value moving upward. (See 3-D coordinate systems for more info.)

Two, the 3-D rotation matrices must be specified explicitly to avoid rounding errors.
The swap chain assumes that the origin is located in the lower-left, so you must perform a rotation to align the
right-handed Direct2D coordinate system with the left-handed one used by the swap chain. Specifically, you
reposition the image under the new left-handed orientation by multiplying the rotation matrix with a translation
matrix for the rotated coordinate system origin, and transform the image from the CoreWindow's coordinate
space to the swap chain's coordinate space. Your app also must consistently apply this transform when the
Direct2D render target is connected with the swap chain. However, if your app is drawing to intermediate surfaces
that are not associated directly with the swap chain, don't apply this coordinate space transformation.
Your code to select the correct matrix from the four possible rotations might look like this (be aware of the
translation to the new coordinate system origin):

// Set the proper orientation for the swap chain, and generate 2D and
// 3D matrix transformations for rendering to the rotated swap chain.
// Note the rotation angle for the 2D and 3D transforms are different.
// This is due to the difference in coordinate spaces. Additionally,
// the 3D matrix is specified explicitly to avoid rounding errors.

switch (displayRotation)
{
case DXGI_MODE_ROTATION_IDENTITY:
m_orientationTransform2D = Matrix3x2F::Identity();
m_orientationTransform3D = ScreenRotation::Rotation0;
break;

case DXGI_MODE_ROTATION_ROTATE90:
m_orientationTransform2D =
Matrix3x2F::Rotation(90.0f) *
Matrix3x2F::Translation(m_logicalSize.Height, 0.0f);
m_orientationTransform3D = ScreenRotation::Rotation270;
break;

case DXGI_MODE_ROTATION_ROTATE180:
m_orientationTransform2D =
Matrix3x2F::Rotation(180.0f) *
Matrix3x2F::Translation(m_logicalSize.Width, m_logicalSize.Height);
m_orientationTransform3D = ScreenRotation::Rotation180;
break;

case DXGI_MODE_ROTATION_ROTATE270:
m_orientationTransform2D =
Matrix3x2F::Rotation(270.0f) *
Matrix3x2F::Translation(0.0f, m_logicalSize.Width);
m_orientationTransform3D = ScreenRotation::Rotation90;
break;

default:
throw ref new FailureException();
}

After you have the correct rotation matrix and origin for the 2-D image, set it with a call to
ID2D1DeviceContext::SetTransform between your calls to ID2D1DeviceContext::BeginDraw and
ID2D1DeviceContext::EndDraw.
Warning Direct2D doesn't have a transformation stack. If your app is also using
ID2D1DeviceContext::SetTransform as part of its drawing code, this matrix needs to be post-multiplied with any
other transform you have applied.
ID2D1DeviceContext* context = m_deviceResources->GetD2DDeviceContext();
Windows::Foundation::Size logicalSize = m_deviceResources->GetLogicalSize();

context->SaveDrawingState(m_stateBlock.Get());
context->BeginDraw();

// Position on the bottom right corner.
D2D1::Matrix3x2F screenTranslation = D2D1::Matrix3x2F::Translation(
logicalSize.Width - m_textMetrics.layoutWidth,
logicalSize.Height - m_textMetrics.height
);

context->SetTransform(screenTranslation * m_deviceResources->GetOrientationTransform2D());

DX::ThrowIfFailed(
m_textFormat->SetTextAlignment(DWRITE_TEXT_ALIGNMENT_TRAILING)
);

context->DrawTextLayout(
D2D1::Point2F(0.f, 0.f),
m_textLayout.Get(),
m_whiteBrush.Get()
);

// Ignore D2DERR_RECREATE_TARGET here. This error indicates that the device
// is lost. It will be handled during the next call to Present.
HRESULT hr = context->EndDraw();
HRESULT hr = context->EndDraw();

The next time you present the swap chain, your 2-D image will be rotated to match the new display orientation.

Appendix B: Applying matrices for screen rotation (3-D)


In the example code in Resizing the swap chain and pre-rotating its contents (and in the DXGI swap chain rotation
sample), we defined a specific transformation matrix for each possible screen orientation. Now, let's look at the
matrices for rotating 3-D scenes. As before, you create a set of matrices for each of the four possible orientations. To
prevent rounding errors and thus minor visual artifacts, declare the matrices explicitly in your code.
You set up these 3-D rotation matrices as follows. The matrices shown in the following code example are standard
rotation matrices for 0, 90, 180, and 270 degree rotations of the vertices that define points in the camera's 3-D
scene space. Each vertex's [x, y, z] coordinate value in the scene is multiplied by this rotation matrix when the 2-D
projection of the scene is computed.
namespace ScreenRotation
{
// 0-degree Z-rotation
static const XMFLOAT4X4 Rotation0(
1.0f, 0.0f, 0.0f, 0.0f,
0.0f, 1.0f, 0.0f, 0.0f,
0.0f, 0.0f, 1.0f, 0.0f,
0.0f, 0.0f, 0.0f, 1.0f
);

// 90-degree Z-rotation
static const XMFLOAT4X4 Rotation90(
0.0f, 1.0f, 0.0f, 0.0f,
-1.0f, 0.0f, 0.0f, 0.0f,
0.0f, 0.0f, 1.0f, 0.0f,
0.0f, 0.0f, 0.0f, 1.0f
);

// 180-degree Z-rotation
static const XMFLOAT4X4 Rotation180(
-1.0f, 0.0f, 0.0f, 0.0f,
0.0f, -1.0f, 0.0f, 0.0f,
0.0f, 0.0f, 1.0f, 0.0f,
0.0f, 0.0f, 0.0f, 1.0f
);

// 270-degree Z-rotation
static const XMFLOAT4X4 Rotation270(
0.0f, -1.0f, 0.0f, 0.0f,
1.0f, 0.0f, 0.0f, 0.0f,
0.0f, 0.0f, 1.0f, 0.0f,
0.0f, 0.0f, 0.0f, 1.0f
);
}

You set the rotation type on the swap chain with a call to IDXGISwapChain1::SetRotation, like this:
m_swapChain->SetRotation(rotation);

Now, in your render method, implement some code similar to this:

struct ConstantBuffer // This struct is provided for illustration.
{
// Other constant buffer matrices and data are defined here.

float4x4 projection; // Current matrix for projection
};
ConstantBuffer m_constantBufferData; // Constant buffer resource data

// ...

// Rotate the projection matrix as it will be used to render to the rotated swap chain.
m_constantBufferData.projection = mul(m_constantBufferData.projection, m_orientationTransform3D);

Now, when you call your render method, it multiplies the current rotation matrix (as specified by the class variable
m_orientationTransform3D) with the current projection matrix, and assigns the results of that operation as the
new projection matrix for your renderer. Present the swap chain to see the scene in the updated display orientation.
Optimization and advanced topics for DirectX games

This section provides information about optimizing your DirectX game performance and other advanced topics.
The Asynchronous programming for games topic discusses the various points to consider when you want to use
asynchronous programming to parallelize some of the components and use multithreading to maximize the use of
a powerful GPU.
The Handle device removed scenarios in Direct3D 11 topic uses a walkthrough to explain how games developed using
Direct3D 11 can detect and respond to situations where the graphics adapter is reset, removed, or changed.
The Multisampling in UWP apps topic provides an overview of how to use multi-sample antialiasing, a graphics
technique to reduce the appearance of aliased edges in UWP games built with Direct3D.
The Optimize input and rendering loop topic provides guidance on how to choose the right input event processing
option to manage input latency and optimize the rendering loop.
The Reduce latency with DXGI 1.3 swap chains topic explains how to reduce effective frame latency by waiting for the
swap chain to signal the appropriate time to begin rendering a new frame.
The Swap chain scaling and overlays topic explains how to improve rendering times by using scaled swap chains to
render real-time game content at a lower resolution than the display is natively capable of. It also explains how to
create overlay swap chains for devices with the hardware overlay capability; this technique can be used to alleviate
the issue of a scaled-down UI resulting from the use of swap chain scaling.

TOPIC DESCRIPTION

Asynchronous programming for games: Understand asynchronous programming and threading with DirectX.

Handle device removed scenarios in Direct3D 11: Recreate the Direct3D and DXGI device interface chain when the graphics adapter is removed or reinitialized.

Multisampling in UWP apps: Use multisampling in UWP games built using Direct3D.

Optimize input and rendering loop: Reduce input latency and optimize the rendering loop.

Reduce latency with DXGI 1.3 swap chains: Use DXGI 1.3 to reduce the effective frame latency.

Swap chain scaling and overlays: Create scaled swap chains for faster rendering on mobile devices, and use overlay swap chains to increase the visual quality.
Asynchronous programming (DirectX and C++)

[ Updated for UWP apps on Windows 10. For Windows 8.x articles, see the archive ]
This topic covers various points to consider when you are using asynchronous programming and threading with
DirectX.

Async programming and DirectX


If you're just learning about DirectX, or even if you're experienced with it, consider putting your entire graphics
processing pipeline on one thread. In any given scene in a game, there are common resources such as bitmaps,
shaders, and other assets that require exclusive access, and these resources require that you synchronize any
access to them across parallel threads. Rendering is a difficult process to parallelize across multiple
threads.
However, if your game is sufficiently complex, or if you are looking to get improved performance, you can use
asynchronous programming to parallelize some of the components that are not specific to your rendering pipeline.
Modern hardware features multi-core and hyperthreaded CPUs, and your app should take advantage of this!
You can ensure this by using asynchronous programming for some of the components of your game that don't
need direct access to the Direct3D device context, such as:
file I/O
physics
AI
networking
audio
controls
XAML-based UI components
Your app can handle these components on multiple concurrent threads. File I/O, especially asset loading, benefits
greatly from asynchronous loading, because your game or app can remain interactive while several (or
several hundred) megabytes of assets are being loaded or streamed. The easiest way to create and manage these
threads is by using the Parallel Patterns Library and the task pattern, as contained in the concurrency namespace
defined in PPLTasks.h. Using the Parallel Patterns Library takes direct advantage of multi-core and
hyperthreaded CPUs, and can improve everything from perceived load times to the hitches and lags that come with
intensive CPU calculations or network processing.
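A minimal sketch of asynchronous asset loading with this task pattern; the file path and the m_meshLoaded flag
are illustrative assumptions, not from a real sample:

#include <ppltasks.h>
using namespace concurrency;
using namespace Windows::Storage;

// Kick off the read without blocking the calling thread; the continuation
// runs when the file data is available.
create_task( PathIO::ReadBufferAsync( L"ms-appx:///Assets/scene.mesh" ) )
    .then( [this]( Streams::IBuffer^ buffer )
{
    // Hand the bytes to your mesh loader here.
    m_meshLoaded = true; // Hypothetical flag polled by the game loop.
});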

Note In a Universal Windows Platform (UWP) app, the user interface runs entirely in a single-threaded
apartment (STA). If you are creating a UI for your DirectX game using XAML interop, you can only access the
controls by using the STA.

Multithreading with Direct3D devices


Multithreading for device contexts is only available on graphics devices that support a Direct3D feature level of
11_0 or higher. However, you might want to maximize the use of the powerful GPU in many platforms, such as
dedicated gaming platforms. In the simplest case, you might want to separate the rendering of a heads-up display
(HUD) overlay from the 3D scene rendering and projection, and have both components use separate parallel
pipelines. Both threads must use the same ID3D11DeviceContext to create and manage the resource objects (the
textures, meshes, shaders, and other assets); the device context is single-threaded, though, so you must
implement some sort of synchronization mechanism (such as critical sections) to access it safely. And, while you
can create separate command lists for the device context on different threads (for deferred rendering), you cannot
play those command lists back simultaneously on the same ID3D11DeviceContext instance.
Now, your app can also use ID3D11Device, which is safe for multithreading, to create resource objects. So, why
not always use ID3D11Device instead of ID3D11DeviceContext? Well, currently, driver support for multithreading
might not be available for some graphics interfaces. You can query the device to find out whether it supports
multithreading, but if you are looking to reach the broadest audience, you might stick with the single-threaded
ID3D11DeviceContext for resource object management. That said, when the graphics device driver doesn't
support multithreading or command lists, Direct3D 11 attempts to handle synchronized access to the device
context internally; and if command lists are not supported, it provides a software implementation. As a result, you
can write multithreaded code that will run on platforms with graphics interfaces that lack driver support for
multithreaded device context access.
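That query uses ID3D11Device::CheckFeatureSupport with the D3D11_FEATURE_THREADING option; a minimal
sketch:

// Ask the driver whether it natively supports concurrent resource creation
// and command lists; if not, Direct3D 11 falls back to the behavior described above.
D3D11_FEATURE_DATA_THREADING threadingCaps = {};
DX::ThrowIfFailed(
    m_d3dDevice->CheckFeatureSupport(
        D3D11_FEATURE_THREADING,
        &threadingCaps,
        sizeof(threadingCaps)
    )
);

bool driverConcurrentCreates = (threadingCaps.DriverConcurrentCreates == TRUE);
bool driverCommandLists = (threadingCaps.DriverCommandLists == TRUE);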
If your app supports separate threads for processing command lists and for displaying frames, you probably want
to keep the GPU active, processing the command lists while displaying frames in a timely fashion without
perceptible stutter or lag. In this case, you could use a separate ID3D11DeviceContext for each thread, and share
resources (like textures) by creating them with the D3D11_RESOURCE_MISC_SHARED flag. In this scenario,
ID3D11DeviceContext::Flush must be called on the processing thread to complete the execution of the command
list prior to displaying the results of processing the resource object in the display thread.
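A minimal sketch of creating such a shared texture and flushing the processing thread's context (the dimensions
and the m_processingContext member are illustrative):

// Create a texture that can be shared between the threads' devices.
D3D11_TEXTURE2D_DESC sharedDesc = {};
sharedDesc.Width = 1280;  // Illustrative size.
sharedDesc.Height = 720;
sharedDesc.MipLevels = 1;
sharedDesc.ArraySize = 1;
sharedDesc.Format = DXGI_FORMAT_B8G8R8A8_UNORM;
sharedDesc.SampleDesc.Count = 1;
sharedDesc.Usage = D3D11_USAGE_DEFAULT;
sharedDesc.BindFlags = D3D11_BIND_RENDER_TARGET | D3D11_BIND_SHADER_RESOURCE;
sharedDesc.MiscFlags = D3D11_RESOURCE_MISC_SHARED;

ComPtr<ID3D11Texture2D> sharedTexture;
DX::ThrowIfFailed(
    m_d3dDevice->CreateTexture2D(&sharedDesc, nullptr, &sharedTexture)
);

// On the processing thread: after drawing into the shared texture, flush so the
// queued commands complete before the display thread consumes the result.
m_processingContext->Flush();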

Deferred rendering
Deferred rendering records graphics commands in a command list so that they can be played back at some other
time. It is designed to support rendering on one thread while recording commands for rendering on additional
threads. After these commands are completed, they can be executed on the thread that generates the final display
object (frame buffer, texture, or other graphics output).
Create a deferred context using ID3D11Device::CreateDeferredContext (instead of D3D11CreateDevice or
D3D11CreateDeviceAndSwapChain, which create an immediate context). For more info, see Immediate and
Deferred Rendering.
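A minimal sketch of the record-and-playback flow (member names are illustrative):

// On a worker thread: create a deferred context and record commands into it.
ComPtr<ID3D11DeviceContext> deferredContext;
DX::ThrowIfFailed(
    m_d3dDevice->CreateDeferredContext(0, &deferredContext)
);

// ... issue state changes and draw calls on deferredContext here ...

// Capture everything recorded so far as a command list.
ComPtr<ID3D11CommandList> commandList;
DX::ThrowIfFailed(
    deferredContext->FinishCommandList(FALSE, &commandList)
);

// On the rendering thread: play the command list back on the immediate context.
m_d3dContext->ExecuteCommandList(commandList.Get(), TRUE);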

Related topics
Introduction to Multithreading in Direct3D 11
Handle device removed scenarios in Direct3D 11

[ Updated for UWP apps on Windows 10. For Windows 8.x articles, see the archive ]
This topic explains how to recreate the Direct3D and DXGI device interface chain when the graphics adapter is
removed or reinitialized.
In DirectX 9, applications could encounter a "device lost" condition where the D3D device enters a non-operational
state. For example, when a full-screen Direct3D 9 application loses focus, the Direct3D device becomes "lost"; any
attempts to draw with a lost device will silently fail. Direct3D 11 uses virtual graphics device interfaces, enabling
multiple programs to share the same physical graphics device and eliminating conditions where apps lose control
of the Direct3D device. However, it is still possible for graphics adapter availability to change. For example:
The graphics driver is upgraded.
The system changes from a power-saving graphics adapter to a performance graphics adapter.
The graphics device stops responding and is reset.
A graphics adapter is physically attached or removed.
When such circumstances arise, DXGI returns an error code indicating that the Direct3D device must be reinitialized
and device resources must be recreated. This walkthrough explains how Direct3D 11 apps and games can detect
and respond to any circumstance where the graphics adapter is reset, removed, or changed. Code examples are
provided from the DirectX 11 App (Universal Windows) template provided with Microsoft Visual Studio 2015.

Instructions
Step 1:
Include a check for the device removed error in the rendering loop. Present the frame by calling
IDXGISwapChain::Present (or Present1, and so on). Then, check whether it returned
DXGI_ERROR_DEVICE_REMOVED or DXGI_ERROR_DEVICE_RESET.
First, the template stores the HRESULT returned by the DXGI swap chain:

HRESULT hr = m_swapChain->Present(1, 0);

After taking care of all other work for presenting the frame, the template checks for the device removed error. If
necessary, it calls a method to handle the device removed condition:

// If the device was removed either by a disconnection or a driver upgrade, we
// must recreate all device resources.
if (hr == DXGI_ERROR_DEVICE_REMOVED || hr == DXGI_ERROR_DEVICE_RESET)
{
    HandleDeviceLost();
}
else
{
    DX::ThrowIfFailed(hr);
}

Step 2:
Also, include a check for the device removed error when responding to window size changes. This is a good place
to check for DXGI_ERROR_DEVICE_REMOVED or DXGI_ERROR_DEVICE_RESET for several reasons:
Resizing the swap chain requires a call to the underlying DXGI adapter, which can return the device removed
error.
The app might have moved to a monitor that's attached to a different graphics device.
When a graphics device is removed or reset, the desktop resolution often changes, resulting in a window size
change.
The template checks the HRESULT returned by ResizeBuffers:

// If the swap chain already exists, resize it.
HRESULT hr = m_swapChain->ResizeBuffers(
    2, // Double-buffered swap chain.
    static_cast<UINT>(m_d3dRenderTargetSize.Width),
    static_cast<UINT>(m_d3dRenderTargetSize.Height),
    DXGI_FORMAT_B8G8R8A8_UNORM,
    0
);

if (hr == DXGI_ERROR_DEVICE_REMOVED || hr == DXGI_ERROR_DEVICE_RESET)
{
    // If the device was removed for any reason, a new device and swap chain will need to be created.
    HandleDeviceLost();

    // Everything is set up now. Do not continue execution of this method. HandleDeviceLost will reenter this method
    // and correctly set up the new device.
    return;
}
else
{
    DX::ThrowIfFailed(hr);
}

Step 3:
Any time your app receives the DXGI_ERROR_DEVICE_REMOVED error, it must reinitialize the Direct3D device
and recreate any device-dependent resources. Release any references to graphics device resources created with the
previous Direct3D device; those resources are now invalid, and all references to the swap chain must be released
before a new one can be created.
The HandleDeviceLost method releases the swap chain and notifies app components to release device resources:

m_swapChain = nullptr;

if (m_deviceNotify != nullptr)
{
    // Notify the renderers that device resources need to be released.
    // This ensures all references to the existing swap chain are released so that a new one can be created.
    m_deviceNotify->OnDeviceLost();
}

Then, it creates a new swap chain and reinitializes the device-dependent resources controlled by the device
management class:

// Create the new device and swap chain.
CreateDeviceResources();
m_d2dContext->SetDpi(m_dpi, m_dpi);
CreateWindowSizeDependentResources();
After the device and swap chain have been re-established, it notifies app components to reinitialize device-
dependent resources:


if (m_deviceNotify != nullptr)
{
    // Notify the renderers that resources can now be created again.
    m_deviceNotify->OnDeviceRestored();
}

When the HandleDeviceLost method exits, control returns to the rendering loop, which continues on to draw the
next frame.

Remarks
Investigating the cause of device removed errors
Repeated issues with DXGI device removed errors can indicate that your graphics code is creating invalid conditions
during a drawing routine. They can also indicate a hardware failure or a bug in the graphics driver. To investigate
the cause of device removed errors, call ID3D11Device::GetDeviceRemovedReason before releasing the Direct3D
device. This method returns one of six possible DXGI error codes indicating the reason for the device removed
error:
DXGI_ERROR_DEVICE_HUNG: The graphics driver stopped responding because of an invalid combination of
graphics commands sent by the app. If you get this error repeatedly, it is a likely indication that your app caused
the device to hang and needs to be debugged.
DXGI_ERROR_DEVICE_REMOVED: The graphics device has been physically removed, turned off, or a driver
upgrade has occurred. This happens occasionally and is normal; your app or game should recreate device
resources as described in this topic.
DXGI_ERROR_DEVICE_RESET: The graphics device failed because of a badly formed command. If you get this
error repeatedly, it may mean that your code is sending invalid drawing commands.
DXGI_ERROR_DRIVER_INTERNAL_ERROR: The graphics driver encountered an error and reset the device.
DXGI_ERROR_INVALID_CALL: The application provided invalid parameter data. If you get this error even once,
it means that your code caused the device removed condition and must be debugged.
S_OK: Returned when a graphics device was enabled, disabled, or reset without invalidating the current graphics
device. For example, this error code can be returned if an app is using Windows Advanced Rasterization
Platform (WARP) and a hardware adapter becomes available.
The following code retrieves the device removed reason code and prints it to the debug console. Insert this code at
the beginning of the HandleDeviceLost method:

HRESULT reason = m_d3dDevice->GetDeviceRemovedReason();

#if defined(_DEBUG)
wchar_t outString[100];
size_t size = 100;
swprintf_s(outString, size, L"Device removed! DXGI_ERROR code: 0x%X\n", reason);
OutputDebugStringW(outString);
#endif

For more details, see GetDeviceRemovedReason and DXGI_ERROR.


Testing Device Removed Handling
The Developer Command Prompt for Visual Studio includes dxcap, a command line tool for Direct3D event capture
and playback related to Visual Studio Graphics Diagnostics. You can run it with the command line option
"-forcetdr" while your app is running, which will force a GPU Timeout Detection and Recovery event, thereby
triggering DXGI_ERROR_DEVICE_REMOVED and allowing you to test your error handling code.

Note DXCap and its support DLLs are installed into system32/syswow64 as part of the Graphics Tools for
Windows 10, which are no longer distributed via the Windows SDK. Instead, they are provided via the Graphics
Tools Feature on Demand, an optional OS component that must be installed in order to enable and use the
Graphics Tools on Windows 10. More information on how to install the Graphics Tools for Windows 10 can be
found here: https://msdn.microsoft.com/library/mt125501.aspx#InstallGraphicsTools
Multisampling in Universal Windows Platform (UWP) apps

[ Updated for UWP apps on Windows 10. For Windows 8.x articles, see the archive ]
Learn how to use multisampling in Universal Windows Platform (UWP) apps built with Direct3D. Multisampling,
also known as multi-sample antialiasing, is a graphics technique used to reduce the appearance of aliased edges. It
works by drawing more pixels than are actually in the final render target, then averaging values to maintain the
appearance of a "partial" edge in certain pixels. For a detailed description of how multisampling actually works in
Direct3D, see Multisample Anti-Aliasing Rasterization Rules.

Multisampling and the flip model swap chain


UWP apps that use DirectX must use flip model swap chains. Flip model swap chains don't support multisampling
directly, but multisampling can still be applied in a different way by rendering the scene to a multisampled render
target view, and then resolving the multisampled render target to the back buffer before presenting. This article
explains the steps required to add multisampling to your UWP app.
How to use multisampling
Direct3D feature levels guarantee support for specific, minimum sample count capabilities, and guarantee certain
buffer formats will be available that support multisampling. Graphics devices often support a wider range of
formats and sample counts than the minimum required. Multisampling support can be determined at run-time by
checking feature support for multisampling with specific DXGI formats, and then checking the sample counts you
can use with each supported format.
1. Call ID3D11Device::CheckFormatSupport to find out which DXGI formats can be used with
multisampling. Supply the render target formats your game can use. Both the render target and resolve
target must use the same format, so check for both
D3D11_FORMAT_SUPPORT_MULTISAMPLE_RENDERTARGET and
D3D11_FORMAT_SUPPORT_MULTISAMPLE_RESOLVE.
Feature level 9: Although feature level 9 devices guarantee support for multisampled render target
formats, support is not guaranteed for multisample resolve targets. So this check is necessary before trying
to use the multisampling technique described in this topic.
The following code checks multisampling support for all the DXGI_FORMAT values:
// Determine the format support for multisampling.
for (UINT i = 1; i < DXGI_FORMAT_MAX; i++)
{
    DXGI_FORMAT inFormat = safe_cast<DXGI_FORMAT>(i);
    UINT formatSupport = 0;
    HRESULT hr = m_d3dDevice->CheckFormatSupport(inFormat, &formatSupport);

    if ((formatSupport & D3D11_FORMAT_SUPPORT_MULTISAMPLE_RESOLVE) &&
        (formatSupport & D3D11_FORMAT_SUPPORT_MULTISAMPLE_RENDERTARGET))
    {
        m_supportInfo->SetFormatSupport(i, true);
    }
    else
    {
        m_supportInfo->SetFormatSupport(i, false);
    }
}

2. For each supported format, query for sample count support by calling
ID3D11Device::CheckMultisampleQualityLevels.
The following code checks sample size support for supported DXGI formats:

// Find available sample sizes for each supported format.
for (unsigned int i = 0; i < DXGI_FORMAT_MAX; i++)
{
    for (unsigned int j = 1; j < MAX_SAMPLES_CHECK; j++)
    {
        UINT numQualityFlags;

        HRESULT test = m_d3dDevice->CheckMultisampleQualityLevels(
            (DXGI_FORMAT) i,
            j,
            &numQualityFlags
        );

        if (SUCCEEDED(test) && (numQualityFlags > 0))
        {
            m_supportInfo->SetSampleSize(i, j, 1);
            m_supportInfo->SetQualityFlagsAt(i, j, numQualityFlags);
        }
    }
}

Note Use ID3D11Device2::CheckMultisampleQualityLevels1 instead if you need to check multisample
support for tiled resource buffers.

3. Create a buffer and render target view with the desired sample count. Use the same DXGI_FORMAT, width,
and height as the swap chain, but specify a sample count greater than 1 and use a multisampled texture
dimension (D3D11_RTV_DIMENSION_TEXTURE2DMS for example). If necessary, you can re-create the
swap chain with new settings that are optimal for multisampling.
The following code creates a multisampled render target:
float widthMulti = m_d3dRenderTargetSize.Width;
float heightMulti = m_d3dRenderTargetSize.Height;

D3D11_TEXTURE2D_DESC offScreenSurfaceDesc;
ZeroMemory(&offScreenSurfaceDesc, sizeof(D3D11_TEXTURE2D_DESC));

offScreenSurfaceDesc.Format = DXGI_FORMAT_B8G8R8A8_UNORM;
offScreenSurfaceDesc.Width = static_cast<UINT>(widthMulti);
offScreenSurfaceDesc.Height = static_cast<UINT>(heightMulti);
offScreenSurfaceDesc.BindFlags = D3D11_BIND_RENDER_TARGET;
offScreenSurfaceDesc.MipLevels = 1;
offScreenSurfaceDesc.ArraySize = 1;
offScreenSurfaceDesc.SampleDesc.Count = m_sampleSize;
offScreenSurfaceDesc.SampleDesc.Quality = m_qualityFlags;

// Create a surface that's multisampled.
DX::ThrowIfFailed(
    m_d3dDevice->CreateTexture2D(
        &offScreenSurfaceDesc,
        nullptr,
        &m_offScreenSurface)
    );

// Create a render target view.
CD3D11_RENDER_TARGET_VIEW_DESC renderTargetViewDesc(D3D11_RTV_DIMENSION_TEXTURE2DMS);
DX::ThrowIfFailed(
    m_d3dDevice->CreateRenderTargetView(
        m_offScreenSurface.Get(),
        &renderTargetViewDesc,
        &m_d3dRenderTargetView
    )
);

4. The depth buffer must have the same width, height, sample count, and texture dimension to match the
multisampled render target.
The following code creates a multisampled depth buffer:
// Create a depth stencil view for use with 3D rendering if needed.
CD3D11_TEXTURE2D_DESC depthStencilDesc(
    DXGI_FORMAT_D24_UNORM_S8_UINT,
    static_cast<UINT>(widthMulti),
    static_cast<UINT>(heightMulti),
    1, // This depth stencil view has only one texture.
    1, // Use a single mipmap level.
    D3D11_BIND_DEPTH_STENCIL,
    D3D11_USAGE_DEFAULT,
    0,
    m_sampleSize,
    m_qualityFlags
);

ComPtr<ID3D11Texture2D> depthStencil;
DX::ThrowIfFailed(
    m_d3dDevice->CreateTexture2D(
        &depthStencilDesc,
        nullptr,
        &depthStencil
    )
);

CD3D11_DEPTH_STENCIL_VIEW_DESC depthStencilViewDesc(D3D11_DSV_DIMENSION_TEXTURE2DMS);
DX::ThrowIfFailed(
    m_d3dDevice->CreateDepthStencilView(
        depthStencil.Get(),
        &depthStencilViewDesc,
        &m_d3dDepthStencilView
    )
);

5. Now is a good time to create the viewport, because the viewport width and height must also match the
render target.
The following code creates a viewport:

// Set the 3D rendering viewport to target the entire window.
m_screenViewport = CD3D11_VIEWPORT(
    0.0f,
    0.0f,
    widthMulti / m_scalingFactor,
    heightMulti / m_scalingFactor
);

m_d3dContext->RSSetViewports(1, &m_screenViewport);

6. Render each frame to the multisampled render target. When rendering is complete, call
ID3D11DeviceContext::ResolveSubresource before presenting the frame. This instructs Direct3D to
perform the multisampling operation, computing the value of each pixel for display and placing the result in
the back buffer. The back buffer then contains the final anti-aliased image and can be presented.
The following code resolves the subresource before presenting the frame:
if (m_sampleSize > 1)
{
    unsigned int sub = D3D11CalcSubresource(0, 0, 1);

    m_d3dContext->ResolveSubresource(
        m_backBuffer.Get(),
        sub,
        m_offScreenSurface.Get(),
        sub,
        DXGI_FORMAT_B8G8R8A8_UNORM
    );
}

// The first argument instructs DXGI to block until VSync, putting the application
// to sleep until the next VSync. This ensures that we don't waste any cycles rendering
// frames that will never be displayed to the screen.
hr = m_swapChain->Present(1, 0);
Optimize input latency for Universal Windows Platform (UWP) DirectX games

[ Updated for UWP apps on Windows 10. For Windows 8.x articles, see the archive ]
Input latency can significantly impact the experience of a game, and optimizing it can make a game feel more
polished. Additionally, proper input event optimization can improve battery life. Learn how to choose the right
CoreDispatcher input event processing options to make sure your game handles input as smoothly as possible.

Input latency
Input latency is the time it takes for the system to respond to user input. The response is often a change in what's
displayed on the screen, or what's heard through audio feedback.
Every input event, whether it comes from a touch pointer, mouse pointer, or keyboard, generates a message to be
processed by an event handler. Modern touch digitizers and gaming peripherals report input events at a minimum
of 100 Hz per pointer, which means that apps can receive 100 events or more per second per pointer (or
keystroke). This rate of updates is amplified if multiple pointers are in use concurrently, or if a higher precision
input device is used (for example, a gaming mouse). The event message queue can fill up very quickly.
It's important to understand the input latency demands of your game so that events are processed in a way that is
best for the scenario. There is no one solution for all games.

Power efficiency
In the context of input latency, "power efficiency" refers to how much a game uses the GPU. A game that uses less
GPU resources is more power efficient and allows for longer battery life. This also holds true for the CPU.
If a game can draw the whole screen at less than 60 frames per second (currently, the maximum rendering speed
on most displays) without degrading the user's experience, it will be more power efficient by drawing less often.
Some games only update the screen in response to user input, so those games should not draw the same content
repeatedly at 60 frames per second.

Choosing what to optimize for


When designing a DirectX app, you need to make some choices. Does the app need to render 60 frames per second
to present smooth animation, or does it only need to render in response to input? Does it need to have the lowest
possible input latency, or can it tolerate a little bit of delay? Will my users expect my app to be judicious about
battery usage?
The answers to these questions will likely align your app with one of the following scenarios:
1. Render on demand. Games in this category only need to update the screen in response to specific types of input.
Power efficiency is excellent because the app doesn't render identical frames repeatedly, and input latency is low
because the app spends most of its time waiting for input. Board games and news readers are examples of apps
that might fall into this category.
2. Render on demand with transient animations. This scenario is similar to the first scenario except that certain
types of input will start an animation that isn't dependent on subsequent input from the user. Power efficiency is
good because the game doesn't render identical frames repeatedly, and input latency is low while the game is
not animating. Interactive children's games and board games that animate each move are examples of apps that
might fall into this category.
3. Render 60 frames per second. In this scenario, the game is constantly updating the screen. Power efficiency is
poor because it renders the maximum number of frames the display can present. Input latency is high because
DirectX blocks the thread while content is being presented. Doing so prevents the thread from sending more
frames to the display than it can show to the user. First person shooters, real-time strategy games, and physics-
based games are examples of apps that might fall into this category.
4. Render 60 frames per second and achieve the lowest possible input latency. Similar to scenario 3, the app is
constantly updating the screen, so power efficiency will be poor. The difference is that the game responds to
input on a separate thread, so that input processing isn't blocked by presenting graphics to the display. Online
multiplayer games, fighting games, or rhythm/timing games might fall into this category because they support
move inputs within extremely tight event windows.

Implementation
Most DirectX games are driven by what is known as the game loop. The basic algorithm is to perform these steps
until the user quits the game or app:
1. Process input
2. Update the game state
3. Draw the game content
When the content of a DirectX game is rendered and ready to be presented to the screen, the game loop waits until
the GPU is ready to receive a new frame before waking up to process input again.
We'll show the implementation of the game loop for each of the scenarios mentioned earlier by iterating on a
simple jigsaw puzzle game. The decision points, benefits, and tradeoffs discussed with each implementation can
serve as a guide to help you optimize your apps for low latency input and power efficiency.

Scenario 1: Render on demand


The first iteration of the jigsaw puzzle game only updates the screen when a user moves a puzzle piece. A user can
either drag a puzzle piece into place or snap it into place by selecting it and then touching the correct destination. In
the second case, the puzzle piece will jump to the destination with no animation or effects.
The code has a single-threaded game loop within the IFrameworkView::Run method that uses
CoreProcessEventsOption::ProcessOneAndAllPending. Using this option dispatches all currently available
events in the queue. If no events are pending, the game loop waits until one appears.
void App::Run()
{
    while (!m_windowClosed)
    {
        // Wait for system events or input from the user.
        // ProcessOneAndAllPending will block the thread until events appear and are processed.
        CoreWindow::GetForCurrentThread()->Dispatcher->ProcessEvents(CoreProcessEventsOption::ProcessOneAndAllPending);

        // If any of the events processed resulted in a need to redraw the window contents, then we will re-render the
        // scene and present it to the display.
        if (m_updateWindow || m_state->StateChanged())
        {
            m_main->Render();
            m_deviceResources->Present();

            m_updateWindow = false;
            m_state->Validate();
        }
    }
}

Scenario 2: Render on demand with transient animations


In the second iteration, the game is modified so that when a user selects a puzzle piece and then touches the correct
destination for that piece, it animates across the screen until it reaches its destination.
As before, the code has a single-threaded game loop that uses ProcessOneAndAllPending to dispatch input
events in the queue. The difference now is that during an animation, the loop changes to use
CoreProcessEventsOption::ProcessAllIfPresent so that it doesn't wait for new input events. If no events are
pending, ProcessEvents returns immediately and allows the app to present the next frame in the animation. When
the animation is complete, the loop switches back to ProcessOneAndAllPending to limit screen updates.
void App::Run()
{
    while (!m_windowClosed)
    {
        // 2. Switch to a continuous rendering loop during the animation.
        if (m_state->Animating())
        {
            // Process any system events or input from the user that is currently queued.
            // ProcessAllIfPresent will not block the thread to wait for events. This is the desired behavior when
            // you are trying to present a smooth animation to the user.
            CoreWindow::GetForCurrentThread()->Dispatcher->ProcessEvents(CoreProcessEventsOption::ProcessAllIfPresent);

            m_state->Update();
            m_main->Render();
            m_deviceResources->Present();
        }
        else
        {
            // Wait for system events or input from the user.
            // ProcessOneAndAllPending will block the thread until events appear and are processed.
            CoreWindow::GetForCurrentThread()->Dispatcher->ProcessEvents(CoreProcessEventsOption::ProcessOneAndAllPending);

            // If any of the events processed resulted in a need to redraw the window contents, then we will re-render the
            // scene and present it to the display.
            if (m_updateWindow || m_state->StateChanged())
            {
                m_main->Render();
                m_deviceResources->Present();

                m_updateWindow = false;
                m_state->Validate();
            }
        }
    }
}

To support the transition between ProcessOneAndAllPending and ProcessAllIfPresent, the app must track state
to know if it's animating. In the jigsaw puzzle app, you do this by adding a new method that can be called during
the game loop on the GameState class. The animation branch of the game loop drives updates in the state of the
animation by calling GameState's new Update method.
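A sketch of what that bookkeeping might look like; the members shown are illustrative assumptions, not the
sample's actual code:

bool GameState::Animating()
{
    // True while a puzzle piece is still moving toward its destination.
    return m_isAnimating;
}

void GameState::Update()
{
    if (!m_isAnimating)
    {
        return;
    }

    // Advance the animation by one frame's worth of progress and stop
    // animating when the moving piece reaches its destination.
    m_animationProgress += m_progressPerFrame;
    if (m_animationProgress >= 1.0f)
    {
        m_animationProgress = 0.0f;
        m_isAnimating = false;
    }
}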

Scenario 3: Render 60 frames per second


In the third iteration, the app displays a timer that shows the user how long they've been working on the puzzle.
Because it displays the elapsed time up to the millisecond, it must render 60 frames per second to keep the display
up to date.
As in scenarios 1 and 2, the app has a single-threaded game loop. The difference with this scenario is that because
it's always rendering, it no longer needs to track changes in the game state as was done in the first two scenarios.
As a result, it can default to using ProcessAllIfPresent for processing events. If no events are pending,
ProcessEvents returns immediately and proceeds to render the next frame.
void App::Run()
{
    while (!m_windowClosed)
    {
        if (m_windowVisible)
        {
            // 3. Continuously render frames and process system events and input as they appear in the queue.
            // ProcessAllIfPresent will not block the thread to wait for events. This is the desired behavior when
            // trying to present smooth animations to the user.
            CoreWindow::GetForCurrentThread()->Dispatcher->ProcessEvents(CoreProcessEventsOption::ProcessAllIfPresent);

            m_state->Update();
            m_main->Render();
            m_deviceResources->Present();
        }
        else
        {
            // 3. If the window isn't visible, there is no need to continuously render.
            // Process events as they appear until the window becomes visible again.
            CoreWindow::GetForCurrentThread()->Dispatcher->ProcessEvents(CoreProcessEventsOption::ProcessOneAndAllPending);
        }
    }
}

This approach is the easiest way to write a game because there's no need to track additional state to determine
when to render. It achieves the fastest rendering possible along with reasonable input responsiveness on a timer
interval.
However, this ease of development comes with a price. Rendering at 60 frames per second uses more power than
rendering on demand, so it's best to reserve ProcessAllIfPresent for games that change what is displayed every
frame. It also increases input latency by as much as 16.7 ms because the app is now blocking the game loop on the
display's sync interval instead of on ProcessEvents. Some input events might be dropped because the queue is
only processed once per frame (60 Hz).

Scenario 4: Render 60 frames per second and achieve the lowest possible input latency
Some games may be able to ignore or compensate for the increase in input latency seen in scenario 3. However, if
low input latency is critical to the game's experience and sense of player feedback, games that render 60 frames per
second need to process input on a separate thread.
The fourth iteration of the jigsaw puzzle game builds on scenario 3 by splitting the input processing and graphics
rendering from the game loop into separate threads. Having separate threads for each ensures that input is never
delayed by graphics output; however, the code becomes more complex as a result. In scenario 4, the input thread
calls ProcessEvents with CoreProcessEventsOption::ProcessUntilQuit, which waits for new events and
dispatches all available events. It continues this behavior until the window is closed or the game calls
CoreWindow::Close.
void App::Run()
{
    // 4. Start a thread dedicated to rendering and dedicate the UI thread to input processing.
    m_main->StartRenderThread();

    // ProcessUntilQuit will block the thread and process events as they appear until the App terminates.
    CoreWindow::GetForCurrentThread()->Dispatcher->ProcessEvents(CoreProcessEventsOption::ProcessUntilQuit);
}

void JigsawPuzzleMain::StartRenderThread()
{
    // If the render thread is already running, then do not start another one.
    if (IsRendering())
    {
        return;
    }

    // Create a task that will be run on a background thread.
    auto workItemHandler = ref new WorkItemHandler([this](IAsyncAction^ action)
    {
        // Notify the swap chain that this app intends to render each frame faster
        // than the display's vertical refresh rate (typically 60 Hz). Apps that cannot
        // deliver frames this quickly should set this to 2.
        m_deviceResources->SetMaximumFrameLatency(1);

        // Calculate the updated frame and render once per vertical blanking interval.
        while (action->Status == AsyncStatus::Started)
        {
            // Execute any work items that have been queued by the input thread.
            ProcessPendingWork();

            // Take a snapshot of the current game state. This allows the renderers to work with a
            // set of values that won't be changed while the input thread continues to process events.
            m_state->SnapState();

            m_sceneRenderer->Render();
            m_deviceResources->Present();
        }

        // Ensure that all pending work items have been processed before terminating the thread.
        ProcessPendingWork();
    });

    // Run the task on a dedicated high priority background thread.
    m_renderLoopWorker = ThreadPool::RunAsync(workItemHandler, WorkItemPriority::High, WorkItemOptions::TimeSliced);
}

The DirectX 11 and XAML App (Universal Windows) template in Microsoft Visual Studio 2015 splits the game
loop into multiple threads in a similar fashion. It uses the Windows::UI::Core::CoreIndependentInputSource
object to start a thread dedicated to handling input and also creates a rendering thread independent of the XAML UI
thread. For more details on these templates, read Create a Universal Windows Platform and DirectX game project
from a template.

Additional ways to reduce input latency


Use waitable swap chains
DirectX games respond to user input by updating what the user sees on-screen. On a 60 Hz display, the screen
refreshes every 16.7 ms (1 second/60 frames). Figure 1 shows the approximate life cycle and response to an input
event relative to the 16.7 ms refresh signal (VBlank) for an app that renders 60 frames per second:
Figure 1
In Windows 8.1, DXGI introduced the DXGI_SWAP_CHAIN_FLAG_FRAME_LATENCY_WAITABLE_OBJECT flag for
the swap chain, which allows apps to easily reduce this latency without requiring them to implement heuristics to
keep the Present queue empty. Swap chains created with this flag are referred to as waitable swap chains. Figure 2
shows the approximate life cycle and response to an input event when using waitable swap chains:
Figure 2

What we see from these diagrams is that games can potentially reduce input latency by two full frames if they are
capable of rendering and presenting each frame within the 16.7 ms budget defined by the display's refresh rate.
The jigsaw puzzle sample uses waitable swap chains and controls the Present queue limit by calling:
m_deviceResources->SetMaximumFrameLatency(1);
Reduce latency with DXGI 1.3 swap chains

[ Updated for UWP apps on Windows 10. For Windows 8.x articles, see the archive ]
Use DXGI 1.3 to reduce the effective frame latency by waiting for the swap chain to signal the appropriate time to
begin rendering a new frame. Games typically need to provide the lowest amount of latency possible from the time
the player input is received, to when the game responds to that input by updating the display. This topic explains a
technique available starting in Direct3D 11.2 that you can use to minimize the effective frame latency in your game.

How does waiting on the back buffer reduce latency?


With the flip model swap chain, back buffer "flips" are queued whenever your game calls
IDXGISwapChain::Present. When the rendering loop calls Present(), the system blocks the thread until it is done
presenting a prior frame, making room to queue up the new frame, before it actually presents. This causes extra
latency between the time the game draws a frame and the time the system allows it to display that frame. In many
cases, the system will reach a stable equilibrium where the game is always waiting almost a full extra frame
between the time it renders and the time it presents each frame. It's better to wait until the system is ready to
accept a new frame, then render the frame based on current data and queue the frame immediately.
Create a waitable swap chain with the DXGI_SWAP_CHAIN_FLAG_FRAME_LATENCY_WAITABLE_OBJECT flag.
Swap chains created this way can notify your rendering loop when the system is actually ready to accept a new
frame. This allows your game to render based on current data and then put the result in the present queue right
away.

Step 1: Create a waitable swap chain


Specify the DXGI_SWAP_CHAIN_FLAG_FRAME_LATENCY_WAITABLE_OBJECT flag when you call
CreateSwapChainForCoreWindow.

swapChainDesc.Flags = DXGI_SWAP_CHAIN_FLAG_FRAME_LATENCY_WAITABLE_OBJECT; // Enable GetFrameLatencyWaitableObject().

Note In contrast to some flags, this flag can't be added or removed using ResizeBuffers. DXGI returns an error
code if this flag is set differently from when the swap chain was created.

// If the swap chain already exists, resize it.
HRESULT hr = m_swapChain->ResizeBuffers(
    2, // Double-buffered swap chain.
    static_cast<UINT>(m_d3dRenderTargetSize.Width),
    static_cast<UINT>(m_d3dRenderTargetSize.Height),
    DXGI_FORMAT_B8G8R8A8_UNORM,
    DXGI_SWAP_CHAIN_FLAG_FRAME_LATENCY_WAITABLE_OBJECT // Enable GetFrameLatencyWaitableObject().
);

Step 2: Set the frame latency


Set the frame latency with the IDXGISwapChain2::SetMaximumFrameLatency API, instead of calling
IDXGIDevice1::SetMaximumFrameLatency.
By default, the frame latency for waitable swap chains is set to 1, which results in the least possible latency but also
reduces CPU-GPU parallelism. If you need increased CPU-GPU parallelism to achieve 60 FPS - that is, if the CPU
and GPU each spend less than 16.7 ms a frame processing rendering work, but their combined sum is greater than
16.7 ms - set the frame latency to 2. This allows the GPU to process work queued up by the CPU during the
previous frame, while at the same time allowing the CPU to submit rendering commands for the current frame
independently.

// Swapchains created with the DXGI_SWAP_CHAIN_FLAG_FRAME_LATENCY_WAITABLE_OBJECT flag use their
// own per-swapchain latency setting instead of the one associated with the DXGI device. The
// default per-swapchain latency is 1, which ensures that DXGI does not queue more than one frame
// at a time. This both reduces latency and ensures that the application will only render after
// each VSync, minimizing power consumption.
//DX::ThrowIfFailed(
//    swapChain2->SetMaximumFrameLatency(1)
//    );

Step 3: Get the waitable object from the swap chain


Call IDXGISwapChain2::GetFrameLatencyWaitableObject to retrieve the wait handle. The wait handle is a
pointer to the waitable object. Store this handle for use by your rendering loop.

// Get the frame latency waitable object, which is used by the WaitOnSwapChain method. This
// requires that the swap chain be created with the DXGI_SWAP_CHAIN_FLAG_FRAME_LATENCY_WAITABLE_OBJECT
// flag.
m_frameLatencyWaitableObject = swapChain2->GetFrameLatencyWaitableObject();

Step 4: Wait before rendering each frame


Your rendering loop should wait for the swap chain to signal via the waitable object before it begins rendering
every frame. This includes the first frame rendered with the swap chain. Use WaitForSingleObjectEx, providing
the wait handle retrieved in Step 3, to signal the start of each frame.
The following example shows the render loop from the DirectXLatency sample:
while (!m_windowClosed)
{
    if (m_windowVisible)
    {
        // Block this thread until the swap chain is finished presenting. Note that it is
        // important to call this before the first Present in order to minimize the latency
        // of the swap chain.
        m_deviceResources->WaitOnSwapChain();

        // Process any UI events in the queue.
        CoreWindow::GetForCurrentThread()->Dispatcher->ProcessEvents(CoreProcessEventsOption::ProcessAllIfPresent);

        // Update app state in response to any UI events that occurred.
        m_main->Update();

        // Render the scene.
        m_main->Render();

        // Present the scene.
        m_deviceResources->Present();
    }
    else
    {
        // The window is hidden. Block until a UI event occurs.
        CoreWindow::GetForCurrentThread()->Dispatcher->ProcessEvents(CoreProcessEventsOption::ProcessOneAndAllPending);
    }
}

The following example shows the WaitForSingleObjectEx call from the DirectXLatency sample:

// Block the current thread until the swap chain has finished presenting.
void DX::DeviceResources::WaitOnSwapChain()
{
    DWORD result = WaitForSingleObjectEx(
        m_frameLatencyWaitableObject,
        1000, // 1 second timeout (shouldn't ever occur)
        true
    );
}

What should my game do while it waits for the swap chain to present?
If your game doesn't have any tasks that block on the render loop, letting it wait for the swap chain to present can
be advantageous because it saves power, which is especially important on mobile devices. Otherwise, you can use
multithreading to accomplish work while your game is waiting for the swap chain to present. Here are just a few
tasks that your game can complete:
Process network events
Update the AI
CPU-based physics
Deferred-context rendering (on supported devices)
Asset loading
For more information about multithreaded programming in Windows, see the following related topics.

Related topics
DirectXLatency sample
IDXGISwapChain2::GetFrameLatencyWaitableObject
WaitForSingleObjectEx
Windows.System.Threading
Asynchronous programming in C++
Processes and Threads
Synchronization
Using Event Objects (Windows)
Swap chain scaling and overlays

[ Updated for UWP apps on Windows 10. For Windows 8.x articles, see the archive ]
Learn how to create scaled swap chains for faster rendering on mobile devices, and use overlay swap chains (when
available) to increase the visual quality.

Swap chains in DirectX 11.2


Direct3D 11.2 allows you to create Universal Windows Platform (UWP) apps with swap chains that are scaled up
from non-native (reduced) resolutions, enabling faster fill rates. Direct3D 11.2 also includes APIs for rendering with
hardware overlays so that you can present a UI in another swap chain at native resolution. This allows your game to
draw UI at full native resolution while maintaining a high framerate, thereby making the best use of mobile devices
and high DPI displays (such as 3840 by 2160). This article explains how to use overlay swap chains.
Direct3D 11.2 also introduces a new feature for reduced latency with flip model swap chains. See Reduce latency
with DXGI 1.3 swap chains.

Use swap chain scaling


When your game is running on downlevel hardware - or hardware optimized for power savings - it can be
beneficial to render real-time game content at a lower resolution than the display is natively capable of. To do this,
the swap chain used for rendering game content must be smaller than the native resolution, or a subregion of the
swap chain must be used.
1. First, create a swap chain at full native resolution.
DXGI_SWAP_CHAIN_DESC1 swapChainDesc = {0};

swapChainDesc.Width = static_cast<UINT>(m_d3dRenderTargetSize.Width); // Match the size of the window.
swapChainDesc.Height = static_cast<UINT>(m_d3dRenderTargetSize.Height);
swapChainDesc.Format = DXGI_FORMAT_B8G8R8A8_UNORM; // This is the most common swap chain format.
swapChainDesc.Stereo = false;
swapChainDesc.SampleDesc.Count = 1; // Don't use multi-sampling.
swapChainDesc.SampleDesc.Quality = 0;
swapChainDesc.BufferUsage = DXGI_USAGE_RENDER_TARGET_OUTPUT;
swapChainDesc.BufferCount = 2; // Use double-buffering to minimize latency.
swapChainDesc.SwapEffect = DXGI_SWAP_EFFECT_FLIP_SEQUENTIAL; // All Windows Store apps must use this SwapEffect.
swapChainDesc.Flags = 0;
swapChainDesc.Scaling = DXGI_SCALING_STRETCH;

// This sequence obtains the DXGI factory that was used to create the Direct3D device above.
ComPtr<IDXGIDevice3> dxgiDevice;
DX::ThrowIfFailed(
    m_d3dDevice.As(&dxgiDevice)
);

ComPtr<IDXGIAdapter> dxgiAdapter;
DX::ThrowIfFailed(
    dxgiDevice->GetAdapter(&dxgiAdapter)
);

ComPtr<IDXGIFactory2> dxgiFactory;
DX::ThrowIfFailed(
    dxgiAdapter->GetParent(IID_PPV_ARGS(&dxgiFactory))
);

ComPtr<IDXGISwapChain1> swapChain;
DX::ThrowIfFailed(
    dxgiFactory->CreateSwapChainForCoreWindow(
        m_d3dDevice.Get(),
        reinterpret_cast<IUnknown*>(m_window.Get()),
        &swapChainDesc,
        nullptr,
        &swapChain
    )
);

DX::ThrowIfFailed(
    swapChain.As(&m_swapChain)
);

2. Then, choose a subregion of the swap chain to scale up by setting the source size to a reduced resolution.
The DX Foreground Swap Chains sample calculates a reduced size based on a percentage:

m_d3dRenderSizePercentage = percentage;

UINT renderWidth = static_cast<UINT>(m_d3dRenderTargetSize.Width * percentage + 0.5f);
UINT renderHeight = static_cast<UINT>(m_d3dRenderTargetSize.Height * percentage + 0.5f);

// Change the region of the swap chain that will be presented to the screen.
DX::ThrowIfFailed(
    m_swapChain->SetSourceSize(
        renderWidth,
        renderHeight
    )
);

3. Create a viewport to match the subregion of the swap chain.


// In Direct3D, change the Viewport to match the region of the swap
// chain that will now be presented from.
m_screenViewport = CD3D11_VIEWPORT(
    0.0f,
    0.0f,
    static_cast<float>(renderWidth),
    static_cast<float>(renderHeight)
);

m_d3dContext->RSSetViewports(1, &m_screenViewport);

4. If Direct2D is being used, the rotation transform needs to be adjusted to compensate for the source region.
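For example, a sketch that folds the render scale into the Direct2D transform (assuming
m_orientationTransform2D is the app's existing display rotation matrix):

// Scale Direct2D content to the reduced source region before applying the
// usual display rotation transform.
m_d2dContext->SetTransform(
    D2D1::Matrix3x2F::Scale(
        m_d3dRenderSizePercentage,
        m_d3dRenderSizePercentage
    ) * m_orientationTransform2D
);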

Create a hardware overlay swap chain for UI elements


When using swap chain scaling, there is an inherent disadvantage in that the UI is also scaled down, potentially
making it blurry and harder to use. On devices with hardware support for overlay swap chains, this problem is
alleviated entirely by rendering the UI at native resolution in a swap chain that's separate from the real-time game
content. Note that this technique applies only to CoreWindow swap chains - it cannot be used with XAML interop.
Use the following steps to create a foreground swap chain that uses hardware overlay capability. These steps are
performed after first creating a swap chain for real-time game content as described above.
1. First, determine whether the DXGI adapter supports overlays. Get the DXGI output adapter from the swap
chain:

ComPtr<IDXGIAdapter> outputDxgiAdapter;
DX::ThrowIfFailed(
    dxgiFactory->EnumAdapters(0, &outputDxgiAdapter)
);

ComPtr<IDXGIOutput> dxgiOutput;
DX::ThrowIfFailed(
    outputDxgiAdapter->EnumOutputs(0, &dxgiOutput)
);

ComPtr<IDXGIOutput2> dxgiOutput2;
DX::ThrowIfFailed(
    dxgiOutput.As(&dxgiOutput2)
);

The DXGI adapter supports overlays if the output adapter returns True for SupportsOverlays.

m_overlaySupportExists = dxgiOutput2->SupportsOverlays() ? true : false;

Note If the DXGI adapter supports overlays, continue to the next step. If the device does not support
overlays, rendering with multiple swap chains will not be efficient. Instead, render the UI at reduced
resolution in the same swap chain as real-time game content.

2. Create the foreground swap chain with IDXGIFactory2::CreateSwapChainForCoreWindow. The following
options must be set in the DXGI_SWAP_CHAIN_DESC1 supplied to the pDesc parameter:
Specify the DXGI_SWAP_CHAIN_FLAG_FOREGROUND_LAYER swap chain flag to indicate a foreground
swap chain.
Use the DXGI_ALPHA_MODE_PREMULTIPLIED alpha mode flag. Foreground swap chains are always
premultiplied.
Set the DXGI_SCALING_NONE flag. Foreground swap chains always run at native resolution.

foregroundSwapChainDesc.Flags = DXGI_SWAP_CHAIN_FLAG_FOREGROUND_LAYER;
foregroundSwapChainDesc.Scaling = DXGI_SCALING_NONE;
foregroundSwapChainDesc.AlphaMode = DXGI_ALPHA_MODE_PREMULTIPLIED; // Foreground swap chain alpha values must be premultiplied.

Note Set the DXGI_SWAP_CHAIN_FLAG_FOREGROUND_LAYER again every time the swap chain is
resized.

HRESULT hr = m_foregroundSwapChain->ResizeBuffers(
    2, // Double-buffered swap chain.
    static_cast<UINT>(m_d3dRenderTargetSize.Width),
    static_cast<UINT>(m_d3dRenderTargetSize.Height),
    DXGI_FORMAT_B8G8R8A8_UNORM,
    DXGI_SWAP_CHAIN_FLAG_FOREGROUND_LAYER // The FOREGROUND_LAYER flag cannot be removed with ResizeBuffers.
);

3. When two swap chains are being used, increase the maximum frame latency to 2 so that the DXGI adapter
has time to present both swap chains simultaneously (within the same VSync interval).
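A minimal sketch of raising the latency limit, assuming an IDXGIDevice1 interface obtained from the Direct3D
device:

// Allow DXGI to queue up to two frames so that both swap chains can be
// presented within the same VSync interval.
ComPtr<IDXGIDevice1> dxgiDevice;
DX::ThrowIfFailed(
    m_d3dDevice.As(&dxgiDevice)
);
DX::ThrowIfFailed(
    dxgiDevice->SetMaximumFrameLatency(2)
);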


4. Foreground swap chains always use premultiplied alpha. Each pixel's color values are expected to be already
multiplied by the alpha value before the frame is presented. For example, a 100% white BGRA pixel at 50%
alpha is set to (0.5, 0.5, 0.5, 0.5).
The alpha premultiplication step can be done in the output-merger stage by applying a blend state (see
ID3D11BlendState) with the D3D11_RENDER_TARGET_BLEND_DESC structure's SrcBlend field set to
D3D11_BLEND_SRC_ALPHA. Assets with premultiplied alpha values can also be used.
If the alpha premultiplication step is not done, colors on the foreground swap chain will be brighter than
expected.
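A sketch of such a blend state (only the source blend factor is prescribed by the text above; the remaining fields
are reasonable assumptions):

// Blend state that multiplies each source color by its alpha in the
// output-merger stage while drawing UI into the foreground swap chain.
D3D11_BLEND_DESC blendDesc = {};
blendDesc.RenderTarget[0].BlendEnable = TRUE;
blendDesc.RenderTarget[0].SrcBlend = D3D11_BLEND_SRC_ALPHA;
blendDesc.RenderTarget[0].DestBlend = D3D11_BLEND_INV_SRC_ALPHA;
blendDesc.RenderTarget[0].BlendOp = D3D11_BLEND_OP_ADD;
blendDesc.RenderTarget[0].SrcBlendAlpha = D3D11_BLEND_ONE;
blendDesc.RenderTarget[0].DestBlendAlpha = D3D11_BLEND_INV_SRC_ALPHA;
blendDesc.RenderTarget[0].BlendOpAlpha = D3D11_BLEND_OP_ADD;
blendDesc.RenderTarget[0].RenderTargetWriteMask = D3D11_COLOR_WRITE_ENABLE_ALL;

ComPtr<ID3D11BlendState> blendState;
DX::ThrowIfFailed(
    m_d3dDevice->CreateBlendState(&blendDesc, &blendState)
);

// Bind before drawing UI elements into the foreground render target.
m_d3dContext->OMSetBlendState(blendState.Get(), nullptr, 0xFFFFFFFF);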
5. Depending on whether the foreground swap chain was created, the Direct2D drawing surface for UI
elements might need to be associated with the foreground swap chain.
Creating render target views:
// Create a render target view of the foreground swap chain's back buffer.
if (m_foregroundSwapChain)
{
    ComPtr<ID3D11Texture2D> foregroundBackBuffer;
    DX::ThrowIfFailed(
        m_foregroundSwapChain->GetBuffer(0, IID_PPV_ARGS(&foregroundBackBuffer))
    );

    DX::ThrowIfFailed(
        m_d3dDevice->CreateRenderTargetView(
            foregroundBackBuffer.Get(),
            nullptr,
            &m_d3dForegroundRenderTargetView
        )
    );
}

Creating the Direct2D drawing surface:

if (m_foregroundSwapChain)
{
    // Create a Direct2D target bitmap for the foreground swap chain.
    ComPtr<IDXGISurface2> dxgiForegroundSwapChainBackBuffer;
    DX::ThrowIfFailed(
        m_foregroundSwapChain->GetBuffer(0, IID_PPV_ARGS(&dxgiForegroundSwapChainBackBuffer))
    );

    DX::ThrowIfFailed(
        m_d2dContext->CreateBitmapFromDxgiSurface(
            dxgiForegroundSwapChainBackBuffer.Get(),
            &bitmapProperties,
            &m_d2dTargetBitmap
        )
    );
}
else
{
    // Create a Direct2D target bitmap for the swap chain.
    ComPtr<IDXGISurface2> dxgiSwapChainBackBuffer;
    DX::ThrowIfFailed(
        m_swapChain->GetBuffer(0, IID_PPV_ARGS(&dxgiSwapChainBackBuffer))
    );

    DX::ThrowIfFailed(
        m_d2dContext->CreateBitmapFromDxgiSurface(
            dxgiSwapChainBackBuffer.Get(),
            &bitmapProperties,
            &m_d2dTargetBitmap
        )
    );
}

m_d2dContext->SetTarget(m_d2dTargetBitmap.Get());

// Create a render target view of the swap chain's back buffer.
ComPtr<ID3D11Texture2D> backBuffer;
DX::ThrowIfFailed(
    m_swapChain->GetBuffer(0, IID_PPV_ARGS(&backBuffer))
);

DX::ThrowIfFailed(
    m_d3dDevice->CreateRenderTargetView(
        backBuffer.Get(),
        nullptr,
        &m_d3dRenderTargetView
    )
);

6. Present the foreground swap chain together with the scaled swap chain used for real-time game content.
Since frame latency was set to 2 for both swap chains, DXGI can present them both within the same VSync
interval.

// Present the contents of the swap chain to the screen.
void DX::DeviceResources::Present()
{
    // The first argument instructs DXGI to block until VSync, putting the application
    // to sleep until the next VSync. This ensures that we don't waste any cycles rendering
    // frames that will never be displayed to the screen.
    HRESULT hr = m_swapChain->Present(1, 0);