I was recently working on a Surface project at Microsoft (that will be shown at BETT) and one of the requirements was to provide an external “administration console”. As part of that console I wanted to show a “screenshot” of the current game running on the Surface unit; after playing around for a while it turned out to be pretty straightforward.
We did consider sending the raw XAML over from the Surface to the console, but that could break when referenced resources weren’t available on the console side, so the approach we took was to create a JPG screenshot and send it over as a byte array via WCF.
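For reference, the WCF side can be as simple as a service contract exposing the screenshot as a byte array. This is a minimal sketch for illustration only; the contract and member names below are assumptions, not the actual project code:

using System.ServiceModel;

// Hypothetical contract: the real project's service shape isn't shown in this post.
[ServiceContract]
public interface IAdminConsoleService
{
    // Returns the current game screen as JPG-encoded bytes.
    [OperationContract]
    byte[] GetScreenshot();
}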
Rendering to a BitmapFrame
The key to this approach is RenderTargetBitmap, which allows us to render any WPF Visual to a BitmapFrame as follows:
RenderTargetBitmap renderTarget = new RenderTargetBitmap(200, 200, 96, 96, PixelFormats.Pbgra32);
renderTarget.Render(myVisual);
BitmapFrame bitmapFrame = BitmapFrame.Create(renderTarget);
Then from there we can use JpegBitmapEncoder to create a JPG from that BitmapFrame:
JpegBitmapEncoder jpgEncoder = new JpegBitmapEncoder();
jpgEncoder.Frames.Add(bitmapFrame);
Then we can output that JPG to a stream of our choice using the Save() method.
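For example, a minimal sketch that writes the encoded JPG into a MemoryStream and pulls out the bytes (assuming the jpgEncoder from above):

byte[] jpgBytes;
using (MemoryStream outputStream = new MemoryStream())
{
    // Save() writes the encoded frames to any writable stream
    jpgEncoder.Save(outputStream);
    jpgBytes = outputStream.ToArray();
}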
Problems
While this works in many cases, and indeed worked perfectly for the Surface application, we run into problems when the source element has Transforms applied or isn’t positioned at 0,0. When that happens the content in the screenshot is shifted “out of frame”, resulting in black borders or content missing altogether. The following screenshot demonstrates the problem:
Workaround
To work around the problem we can use a VisualBrush to “draw” our source element onto a new Visual, and render that with our RenderTargetBitmap:
// sourceElement is the element being captured
VisualBrush sourceBrush = new VisualBrush(sourceElement);

DrawingVisual drawingVisual = new DrawingVisual();
DrawingContext drawingContext = drawingVisual.RenderOpen();

using (drawingContext)
{
    drawingContext.DrawRectangle(sourceBrush, null, new Rect(new Point(0, 0), new Point(200, 200)));
}
renderTarget.Render(drawingVisual);
It’s not ideal, but I’ve yet to find a better workaround for it.
Putting it all Together
To make it more useful, we can wrap the whole lot up into an Extension Method. Rather than extending Visual, I’ve chosen to use UIElement so I have access to the RenderSize to calculate the required size of the output bitmap. I’ve also included parameters to scale the resulting bitmap and to set the JPG quality level:
/// <summary>
/// Gets a JPG "screenshot" of the current UIElement
/// </summary>
/// <param name="source">UIElement to screenshot</param>
/// <param name="scale">Scale to render the screenshot</param>
/// <param name="quality">JPG Quality</param>
/// <returns>Byte array of JPG data</returns>
public static byte[] GetJpgImage(this UIElement source, double scale, int quality)
{
    double actualHeight = source.RenderSize.Height;
    double actualWidth = source.RenderSize.Width;

    double renderHeight = actualHeight * scale;
    double renderWidth = actualWidth * scale;

    RenderTargetBitmap renderTarget = new RenderTargetBitmap((int)renderWidth, (int)renderHeight, 96, 96, PixelFormats.Pbgra32);
    VisualBrush sourceBrush = new VisualBrush(source);

    DrawingVisual drawingVisual = new DrawingVisual();
    DrawingContext drawingContext = drawingVisual.RenderOpen();

    using (drawingContext)
    {
        drawingContext.PushTransform(new ScaleTransform(scale, scale));
        drawingContext.DrawRectangle(sourceBrush, null, new Rect(new Point(0, 0), new Point(actualWidth, actualHeight)));
    }
    renderTarget.Render(drawingVisual);

    JpegBitmapEncoder jpgEncoder = new JpegBitmapEncoder();
    jpgEncoder.QualityLevel = quality;
    jpgEncoder.Frames.Add(BitmapFrame.Create(renderTarget));

    byte[] _imageArray;

    using (MemoryStream outputStream = new MemoryStream())
    {
        jpgEncoder.Save(outputStream);
        _imageArray = outputStream.ToArray();
    }

    return _imageArray;
}
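Calling it is then a one-liner. As a quick sketch, assuming a Grid named LayoutRoot and an illustrative output path, you could capture it at full size and 90% quality like this:

byte[] screenshot = LayoutRoot.GetJpgImage(1.0, 90);

// Hand the bytes to the WCF service, or dump them to disk for testing
File.WriteAllBytes(@"C:\temp\screenshot.jpg", screenshot);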