by Bartosz Borowiecki

The all new ASP.NET vNext

Finally, after many years, Microsoft is severing its ties with IIS, ASP.NET WebForms and System.Web.dll. Their newest technology – ASP.NET vNext – is coming.

ASP.NET WebForms was introduced in the early 2000s. It seemed like a good idea at the time – an easy transition from WinForms, nice controls and separation of markup and code for better source management. But web technologies change very quickly; every year brings new ideas and solutions. It soon became apparent that the main problem was that ASP.NET was part of the .NET Framework itself, so it could only be updated along with the framework – that is, not often.

The first change came with the introduction of MVC. It was created on top of ASP.NET, not as a part of the framework, which allowed MVC to develop faster than .NET itself. MVC is deployed with the website, meaning that different sites can run on different versions. It still relies on ASP.NET, though, so each site references System.Web.dll. This is one of the biggest assemblies in the framework, which means longer loading times, bigger memory consumption and a dependence on IIS.

The next big step in the ASP.NET evolution came with the rising popularity of RESTful services. Developers needed a new solution that did not rely on ASP.NET or IIS. Microsoft responded with WebAPI, which has no dependency on either. That meant faster loading times, lower memory consumption and, most importantly, self-hosting.

Why not self-host everything?

The developer community saw the benefits of self-hosting and IIS independence very quickly and wanted more technologies to support it – mainly MVC. It would be a lot of work to write and support a host for every current and future technology, so Microsoft created the Open Web Interface for .NET (OWIN) – a standard interface between .NET web servers and web applications. Along with the standard, Microsoft released Katana – their own implementation of OWIN.

The main point of OWIN is to decouple server and application and to create a framework in which each piece of functionality is encapsulated in an independent module. This means one can tailor a webpage/service/host as needed, without unnecessary loading time and memory consumption. Each module can easily be switched, changed and/or updated without affecting the others, meaning new functionality and bug fixes will be available for use very quickly. A simple hosting environment is also easy to move or replicate on other platforms – like Mac and Linux.

ASP.NET vNext

WebAPI was the first to support OWIN, as it was already self-hosted. The next step was pretty obvious. As the popularity of MVC rises and WebForms declines, Microsoft decided to leave that part of ASP.NET behind and rewrite only the parts needed by MVC. Don’t worry, WebForms isn’t forgotten – it’s still supported, just not compatible with OWIN hosts (meaning it still needs IIS to run). Moving away from IIS and the .NET Framework allowed for a few interesting changes.

New project file and K Runtime

The first thing we notice while creating a new site is the lack of a CSPROJ file. All needed information is now stored in project.json. This change means we no longer need MSBuild to build our site. The new project file is really simple – the most basic version contains only information about the framework that should run the application and the project dependencies.
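A minimal project.json might look like the sketch below. Treat the package name and version as illustrative – the exact names shifted between vNext previews:

```json
{
    "dependencies": {
        "Microsoft.AspNet.Mvc": "6.0.0-beta1"
    },
    "frameworks": {
        "aspnet50": {},
        "aspnetcore50": {}
    }
}
```

The frameworks section lists the runtimes the project can be compiled for – the full CLR and the new cloud-optimized core.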

A framework that should be used? Yes, .NET is no longer the only framework. With the new ASP.NET comes the K Runtime Engine (KRE), which basically bootstraps and runs an ASP.NET vNext application. Along with it comes KVM – the K Version Manager – whose only responsibility is to manage KRE versions and dependencies. This means different versions of KRE can exist side by side without any problems!
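Working with KVM boils down to a few commands; the syntax below follows the early previews and may differ between releases:

```
kvm list                 # show the KRE versions installed side by side
kvm install 1.0.0-beta1  # download and install a specific KRE version
kvm use 1.0.0-beta1      # make that version active for the current shell
```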

Dependencies

So, the new project file defines dependencies, but in a different way than the old CSPROJ. It uses NuGet internally: each module is a NuGet package. Even the runtime itself is a NuGet package! What’s more, you don’t even need to have them on disk – KVM will download and load them for you. This means you can even deploy your site without them and let the platform take care of it. Each dependency can be pinned to an exact version number if needed, meaning there will be no surprises – all your development, test and production machines will have exactly the same environment!

It’s also worth noting that each feature is now a small module. You want to use IIS? There is a package for that! Cookies? Yes. You want to serve static files? Guess what – there is a separate package for that too!
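For example, serving static files means adding one more entry to the project.json dependencies. The package name below matches the vNext previews, but treat it as an assumption:

```json
"dependencies": {
    "Microsoft.AspNet.StaticFiles": "1.0.0-beta1"
}
```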

The new compiler

The next thing that captures attention is Web.config – or rather, the lack of it. No more big configs with a lot of XML: everything is set up from code. “From code,” you say? “Useless – I will have to rebuild the project after each change!” Fear not – ASP.NET vNext uses the new Roslyn compiler: all assemblies are built and loaded into memory in real time. They don’t even need to exist on the hard drive! Just change something in code, hit refresh and voila!

All packages are preconfigured by default, which means they can be set up and used really quickly; you only need to configure the special cases. Additionally, many of them take functions as configuration parameters – all the setup can be done with a few simple lambdas!
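All of this configuration lives in a Startup class. Here is a hedged sketch of what such a class might look like – the extension method names follow the vNext previews and should be treated as assumptions, not as the final API:

```csharp
using Microsoft.AspNet.Builder;

public class Startup
{
    public void Configure(IApplicationBuilder app)
    {
        // Each feature is a module switched on by an extension method,
        // often configured with a simple lambda.
        app.UseStaticFiles();
        app.UseMvc(routes =>
        {
            routes.MapRoute("default", "{controller}/{action}/{id?}");
        });
    }
}
```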

Dependency injection, MVC and WebAPI unification

The new ASP.NET provides DI out of the box. In previous versions DI was only partially supported, usually through some outside container. Now DI is built in and available throughout the entire stack! It’s really easy to attach an outside container (it’s a Bring Your Own Container model), but the framework provides a simple one, and it’s enough if you don’t need advanced injection capabilities.

To maintain the separation of ASP.NET and WebAPI, the latter implemented its own controller base class, different from the MVC controller. This is what allowed self-hosting and independence from System.Web.dll. Now the whole framework is independent, so merging MVC and WebAPI was a pretty obvious move. WebAPI no longer implements its own controller (ApiController) but uses the base MVC controller class. It also uses the same routing now, so you can have one class for all your MVC and API needs.
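Put together, a single controller class can now cover both MVC and API scenarios, with its dependencies constructor-injected by the built-in container. A sketch under the beta APIs – IProductRepository is a hypothetical service of our own, not part of the framework:

```csharp
using Microsoft.AspNet.Mvc;

public class ProductsController : Controller
{
    private readonly IProductRepository repository; // hypothetical service

    // Resolved automatically by the built-in DI container.
    public ProductsController(IProductRepository repository)
    {
        this.repository = repository;
    }

    // Classic MVC action returning a view.
    public IActionResult Index()
    {
        return View(repository.GetAll());
    }

    // API-style action on the very same class, using the same routing.
    public IActionResult List()
    {
        return Json(repository.GetAll());
    }
}
```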

Open source

Last but not least – it’s open source. Since the early alphas, all code has been available on GitHub, and you can even create pull requests. Thanks to self-hosting and independence from IIS, you can set it up on a Linux or Mac machine. Any host compatible with OWIN can be used.

ASP.NET vNext seems to be a response to current trends in web development. Developers will get a lightweight and modular framework that is independent of the old ASP.NET/IIS stack. It will be easy to tailor this framework to our needs and extend it. With all modules being NuGet packages and with built-in version management, deploying, updating and extending will be effortless. Separation from the .NET Framework will allow vNext to develop very quickly and respond to web developers’ needs. All in all, it’s a very promising technology!

by Wojciech Sura

Theme-aware resources in Windows Phone

Windows Phone supports two themes: light and dark. Since UI in this ecosystem is quite… say, minimalistic, the first theme can be described as “black on white” and the second as “white on black”.

The most common way to design the appearance of your application is to use the predefined resources provided by the system. For instance, to create text in a frame with default theme colors, write:

<Border Margin="4" BorderBrush="{ThemeResource PhoneForegroundBrush}"
        BorderThickness="{ThemeResource ButtonBorderThemeThickness}"
        Background="{ThemeResource PhoneBackgroundBrush}">
    <TextBlock Foreground="{ThemeResource PhoneForegroundBrush}"
        FontSize="{ThemeResource TextStyleExtraLargePlusFontSize}">Themed manually</TextBlock>
</Border>

The border will match the current theme colors in both the light and the dark theme.

But what if you want to introduce your own styles to the application, while still matching the current theme?

The solution is to create separate ResourceDictionary objects for the light and dark schemes and then import them into your page in a special way:

<Page.Resources>
    <ResourceDictionary>
        <ResourceDictionary.ThemeDictionaries>
            <ResourceDictionary x:Key="Light" Source="Light.xaml" />
            <ResourceDictionary x:Key="Dark" Source="Dark.xaml" />
        </ResourceDictionary.ThemeDictionaries>
    </ResourceDictionary>
</Page.Resources>

The phone will now choose the appropriate resource file, depending on the current theme. Remember to use ThemeResource instead of StaticResource, though, so that when the theme changes, all relevant values are reloaded from the appropriate ResourceDictionary.
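For example, Light.xaml and Dark.xaml can each define a brush under the same key (the key name below is just an illustration):

```xml
<!-- Light.xaml -->
<ResourceDictionary
    xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
    xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml">
    <SolidColorBrush x:Key="AccentBorderBrush" Color="DarkBlue" />
</ResourceDictionary>

<!-- Dark.xaml -->
<ResourceDictionary
    xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
    xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml">
    <SolidColorBrush x:Key="AccentBorderBrush" Color="LightBlue" />
</ResourceDictionary>
```

A control can then reference {ThemeResource AccentBorderBrush} and get the right color for whichever theme is active.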

by Wojciech Sura

Event-to-command in Windows Phone

While writing my Windows Phone application I ran into a problem: I wanted a button to behave differently depending on whether the user tapped it or pressed and held it.

Most XAML controls provide a Holding event, which occurs when the user keeps touching an item – so the solution seems to be as simple as implementing an event handler.

But that doesn’t look nice in XAML – most actions can be implemented declaratively, as bindings to commands in the viewmodel. Implementing an event handler in code-behind seems like a code smell in this beautiful MVVM environment.

There’s a solution though – to use Behaviors SDK.

First of all, add Behaviors SDK (XAML) to your project references – you may find it among the few assemblies which Microsoft provides as optional for Windows Phone applications.

Then – as usual – you’ll have to define XML namespaces for two additional C# namespaces:

<Page
    ...
    xmlns:i="using:Microsoft.Xaml.Interactivity"
    xmlns:icore="using:Microsoft.Xaml.Interactions.Core">

Finally, you may add a behavior and action to the button:

<Button Command="{Binding TapCommand}">
	<i:Interaction.Behaviors>
		<icore:EventTriggerBehavior EventName="Holding">
			<icore:InvokeCommandAction Command="{Binding HoldingCommand}" />
		</icore:EventTriggerBehavior>
	</i:Interaction.Behaviors>
	Press or hold me
</Button>

Remember, though, that after HoldingCommand fires, TapCommand will fire as well. Make sure to handle that properly.
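One hedged way to handle that in the viewmodel is to raise a flag in the holding command and check it in the tap command. DelegateCommand below stands for whatever ICommand implementation you already use:

```csharp
private bool holdingHandled;

public ICommand HoldingCommand { get; private set; }
public ICommand TapCommand { get; private set; }

public MainViewModel() // hypothetical viewmodel
{
    HoldingCommand = new DelegateCommand(() =>
    {
        holdingHandled = true;
        // ...press-and-hold logic...
    });

    TapCommand = new DelegateCommand(() =>
    {
        if (holdingHandled)
        {
            holdingHandled = false; // swallow the tap that follows a hold
            return;
        }
        // ...tap logic...
    });
}
```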

by Wojciech Sura

How to write a tone generator?

Let’s continue our adventure with FMOD. Today we’ll write a simple sine tone generator.

FMOD supports live streaming of sound – it periodically asks the program for new blocks of data. We’ll use this to generate a sine wave of a specific frequency.

First of all, we need to instantiate and initialize a System object. Nothing new here.

public Form1()
{
    InitializeComponent();

    FMOD.Factory.System_Create(ref system);
    system.init(1, FMOD.INITFLAGS.NORMAL, IntPtr.Zero);
}

Now we have to prepare a special version of the Sound class – a stream. This process is a little more complicated than creating a sound from an mp3, so let’s take a closer look at the following piece of source code:

private const int sampleRate = 44100;
private const int channels = 1;
private const int lengthInSec = 1;
private const int freq = 440;

private void bPlay_Click(object sender, EventArgs e)
{
    var exInfo = new FMOD.CREATESOUNDEXINFO();
    exInfo.cbsize = Marshal.SizeOf(exInfo);
    exInfo.decodebuffersize = sampleRate; // 1 sec
    exInfo.length = sampleRate * sizeof(short) * lengthInSec;
    exInfo.numchannels = channels;
    exInfo.defaultfrequency = sampleRate;
    exInfo.format = FMOD.SOUND_FORMAT.PCM16;
    exInfo.pcmreadcallback = new FMOD.SOUND_PCMREADCALLBACK(DoReadData);
    exInfo.pcmsetposcallback = new FMOD.SOUND_PCMSETPOSCALLBACK(DoSetPos);

    current = 0;

    system.createStream(String.Empty, FMOD.MODE.OPENUSER | FMOD.MODE.LOOP_NORMAL, ref exInfo, ref sound);
    system.playSound(FMOD.CHANNELINDEX.FREE, sound, false, ref channel);
}

A special structure is used to describe what kind of stream we want to create. Inside, we specify:

  • cbsize – size of the structure in bytes. Remember, FMOD is natively a C/C++ library.
  • decodebuffersize – size of the buffer which FMOD will periodically ask us to fill during playback. We set it to sampleRate – 44100 samples, which represents one second of playback (since there are exactly 44100 samples per second).
  • length – length of the sound we want to stream. In this case it is irrelevant, since the sound will be looped.
  • numchannels – how many channels we will use (in this case only one – mono sound).
  • defaultfrequency – the sampling frequency.
  • format – format of the data which will be sent to FMOD. PCM16 means that we will send raw data composed of 16-bit signed integers (short).
  • pcmreadcallback – a delegate to a method which will be called whenever FMOD requires data for playback.
  • pcmsetposcallback – a delegate to a method which will be called if someone tries to change the playback position.

Notice that we pass FMOD.MODE.LOOP_NORMAL to ensure that the sound will be looped. Now we can implement both callbacks.

private FMOD.RESULT DoSetPos(IntPtr soundraw, int subsound, uint position, FMOD.TIMEUNIT postype)
{
    return FMOD.RESULT.OK;
}

Since we don’t plan to allow changing the playback position, we may simply ignore this callback and return FMOD.RESULT.OK.

The data read callback is a little bit more complicated.

private FMOD.RESULT DoReadData(IntPtr soundraw, IntPtr data, uint datalen)
{
    int dataCount = (int)(datalen / sizeof(short));
    short[] rawData = new short[dataCount];

    double multiplier = ((double)sampleRate / (double)freq) / (2 * Math.PI);
    for (int i = 0; i < rawData.Length; i++)
        rawData[i] = (short)(Math.Sin((current + i) / multiplier) * short.MaxValue);

    Marshal.Copy(rawData, 0, data, rawData.Length);

    current += rawData.Length;

    return FMOD.RESULT.OK;
}

FMOD gives us two key pieces of information. The first is data: a pointer to the buffer in which we place the actual wave data. The second is datalen, the size of that buffer (in bytes, so the number of samples is actually datalen / sizeof(short)).

Now some maths. We want to achieve a sound with a frequency of 440 Hz. We know that one second contains 44100 samples and should contain 440 complete sine waves, so a single wave spans 44100/440 ≈ 100.23 samples. Since sine has a period of 2π, the phase for sample n must be n · 2π · 440 / 44100 – which is exactly what dividing the sample index by multiplier achieves, making the sine complete one full cycle every ~100.23 samples instead of every 2π samples.

The rest is simple – we copy the data to the given buffer using Marshal.Copy and return FMOD.RESULT.OK. The current field keeps track of the overall sample position, so that the phase stays continuous between consecutive buffers.

The stopping code is identical to the previous example.

private void bStop_Click(object sender, EventArgs e)
{
    if (channel != null)
        channel.stop();
}
by Wojciech Sura

FMOD – great Sound library

A while ago I presented a way to write an mp3 player in 16 lines of code. Let’s try to do the same thing with a different sound library: FMOD. FMOD has an advantage over other sound libraries, because it offers the same API on many platforms: Win32, Windows 8, Windows Phone 8, Mac OS X, iOS, Linux, Android, BlackBerry and several gaming platforms like PS3/4/PSP/Vita, Xbox 360/One, Wii etc.

FMOD comes as a native DLL – fmodex.dll – which is required for the application to run properly. For convenience, we may add it to the project and set its Copy to Output Directory property to Copy always – Visual Studio will then take care of copying that DLL for us.


fmodex.dll is native, but fortunately the FMOD team provides an almost-complete set of C# wrappers, so we won’t have to P/Invoke our way to playing a sound.

Let’s start with adding a few fields to our class.

private FMOD.System system;
private FMOD.Sound sound;
private FMOD.Channel channel;

System is the core of FMOD – it creates sound objects, starts playback etc. Sound is a class representing the actual sound being played. Finally, Channel represents the playback process itself and allows modifying its aspects – for instance, the volume.

The System object can be created with the aid of FMOD.Factory. We may create and initialize it in the form’s constructor, as there’s usually no point in keeping several instances of System.

public Form1()
{
    InitializeComponent();

    FMOD.Factory.System_Create(ref system);
    system.init(1, FMOD.INITFLAGS.NORMAL, IntPtr.Zero);
}

Now we can implement the Open & Play button. Fortunately, the System.createSound method accepts a filename as its first parameter, so the process will be as easy as in the case of NAudio.

private void bPlay_Click(object sender, EventArgs e)
{
    OpenFileDialog dialog = new OpenFileDialog()
    {
        Filter = "Sound files (*.mp3;*.wav;*.wma)|*.mp3;*.wav;*.wma"
    };

    if (dialog.ShowDialog() == DialogResult.OK)
    {
        system.createSound(dialog.FileName, FMOD.MODE.DEFAULT, ref sound);
        system.playSound(FMOD.CHANNELINDEX.FREE, sound, false, ref channel);
    }
}

Stopping playback is also quite an easy task – it’s only a matter of asking the Channel object to stop.

private void bStop_Click(object sender, EventArgs e)
{
    if (channel != null)
        channel.stop();
}

And that’s all we need to play a sound. Keep in mind that this example program does not provide any error checking – all FMOD methods return FMOD.RESULT, which informs about the outcome of the operation.

by Wojciech Sura

Custom code snippets in Visual Studio

Visual Studio’s editor provides a quite useful feature: code snippets. Code snippets are pieces of frequently used code with – optionally – a few blank fields to fill in.

It turns out that you can create your own code snippets and import them into Visual Studio. I’ll show you how.

First of all, create a new file and name it, for instance, SerializedClass.snippet. Then open it in your favorite text editor and let’s start writing.

<?xml version="1.0" encoding="utf-8"?>
<CodeSnippets xmlns="http://schemas.microsoft.com/VisualStudio/2005/CodeSnippet">
    <CodeSnippet Format="1.0.0">
        <Header>
            <Title>XML serializable class</Title>
            <Author>Spook</Author>
            <Description>Creates an XML serializable class with constructor.</Description>
            <Shortcut>xclass</Shortcut>
        </Header>

So far, everything should be self-explanatory. Now, before entering the snippet’s source code, we may declare some literals, which will later be replaced by values entered by the user.

        <Snippet>
            <Declarations>
                <Literal>
                    <ID>ClassName</ID>
                    <ToolTip>Name of the class</ToolTip>
                    <Default>MyClass</Default>
                </Literal>
            </Declarations>

ID specifies the name which will be used in the snippet’s code. To mark a piece of the snippet as a literal, you have to surround it with dollar signs: $ClassName$.

            <Code Language="CSharp">
                <![CDATA[    [XmlRoot("$ClassName$")]
    public class $ClassName$
    {
        public $ClassName$()
        {

        }
    }]]>
            </Code>
        </Snippet>
    </CodeSnippet>
</CodeSnippets>

Ok, now we have to import the snippet into Visual Studio. Start the IDE and choose Tools | Code Snippets Manager…. Then use the Import button to import the new snippet into the IDE. The snippet will be copied to %USERPROFILE%\Documents\Visual Studio 2013\Code Snippets\Visual C#\My Code Snippets (assuming that you use Visual Studio 2013, of course).


When the snippet is imported, it is immediately available. Simply write “xclass” and press the Tab key twice. Voila!

Read more about custom code snippets in the MSDN library.

by Wojciech Sura

Visual Studio 2013 productivity tips

There are a few things you may not know, which can significantly boost your productivity in Visual Studio 2013.

You may access items named in PascalCase by simply writing word initials, like IOE for InvalidOperationException or IOException.


The quick way to open the context menu on a suggested change (like adding a class or method, renaming an identifier etc.) is Ctrl+. (Ctrl + dot). If you remember which item is the default one (such as renaming an identifier), performing the desired refactoring is as quick as pressing Ctrl+., Enter.


You can quickly search for Visual Studio settings in the top-right search field, accessible via Ctrl+Q.


Visual Studio supports a so-called progressive (incremental) search feature. Visually it looks almost identical to the Find dialog, but the dialog closes immediately when you finish searching (for instance, when you press an arrow key). Also, it does not prefill the input box with what is currently under the cursor – instead it waits until you start typing. The shortcut for progressive search is Ctrl+I. If you want to search for the next occurrence, press F3.


There’s also another type of search, which looks through all symbols and filenames in the solution. The shortcut for this one is Ctrl+, (Ctrl + comma).


If you work with a huge solution, you may narrow the Solution Explorer view to a specific branch by choosing “Scope to This” from the context menu. Use the home icon to return to the full solution view.


There are also a few shortcut chords that are used more frequently than the others:

  • Ctrl+K, Ctrl+F – auto-format selection
  • Ctrl+M, Ctrl+L – fold all / unfold all
  • Ctrl+M, Ctrl+M – fold current block
  • Ctrl+M, Ctrl+O – fold to definitions
  • Ctrl+K, Ctrl+K – toggle bookmark at the cursor position
  • Ctrl+K, Ctrl+N – (as in “Next”) jump to the next bookmark in the code
  • Ctrl+K, Ctrl+P – (as in “Previous”) jump to the previous bookmark in the code
by Wojciech Sura

Configuring perforce for use with Visual Studio

My favorite (and actually quite popular) diff/merge tool is Perforce’s p4merge. I like its clean interface and advanced comparison algorithms, which perform quite well even when faced with complicated modifications.


There is a way to integrate p4merge with Visual Studio, but the operation is a little complicated due to p4merge’s specific requirements for merged files. Let’s do it step by step.

First of all, create two batch files in the Perforce directory:

p4diff.bat

@ECHO OFF
START /WAIT /D "C:\Program Files\Perforce\" p4merge.exe -nl ""%6"" -nr ""%7"" ""%1"" ""%2""

p4merge.bat

@ECHO OFF
COPY /Y NUL ""%4""
START /WAIT /D "C:\Program Files\Perforce\" p4merge.exe -nb ""%8"" -nl ""%6"" -nr ""%7"" -nm ""%9"" ""%3"" ""%1"" ""%2"" ""%4""

Then open Visual Studio’s configuration window and navigate to the source control user tools section.

Now add two tools. First for comparing:

Extension: .*
Operation: Compare
Command: C:\Program Files\Perforce\p4diff.bat
Arguments: %1 %2 "0" "0" "0" %6 %7

And second for merging:

Extension: .*
Operation: Merge
Command: C:\Program Files\Perforce\p4merge.bat
Arguments: %1 %2 %3 %4 "0" %6 %7 %8 %9

The reason for the additional batch files is that p4merge expects the merge result file to already exist, while Visual Studio expects the merge tool to create it.

If you use the Microsoft Git provider, you can configure p4merge as a diff tool by running git bash in your repository and executing:

$ git config --local difftool.perforce.cmd '"C:\Program Files\Perforce\p4merge.exe" "$LOCAL" "$REMOTE"'
$ git config --local diff.tool perforce
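Similarly, p4merge can be registered as the merge tool. The $BASE/$LOCAL/$REMOTE/$MERGED order below follows p4merge’s base/left/right/result convention – double-check it against your version:

```
$ git config --local mergetool.perforce.cmd '"C:\Program Files\Perforce\p4merge.exe" "$BASE" "$LOCAL" "$REMOTE" "$MERGED"'
$ git config --local merge.tool perforce
```

Then simply run git mergetool when a conflict occurs.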
by Andrzej Kowal

MSSQL on Amazon RDS runs out of space!

Let’s take a simple case which I needed to solve today: our MSSQL Server database hosted in Amazon RDS used up the whole storage space we assigned when it was created. Ok, no problem, let’s increase it. Hmm, but there is no such option. So maybe take an RDS snapshot and restore it to a larger instance? Nope. Ok, then let’s create a backup file, create a new RDS instance with larger storage and restore the backup to that instance? Wrong again. This is simply not supported!

The solution is the SQL Azure Migration Wizard. It has many features, but the one we need is moving an existing database between servers. The migration includes schema and data, and it works with any MSSQL server, including Azure, RDS and standalone installations. To solve my problem, I first created a new RDS instance with the same settings and larger storage, then migrated the DB from the old instance to the new one (with full schema and data) using this tool.

Let’s look at the migration process in detail:

  1. Download and unzip the application. If you don’t have SQL Server installed on the computer where you run this tool, you will need to install Shared Management Objects (SMO) and Command Line Tools from this link: http://www.microsoft.com/en-us/download/details.aspx?id=29065 (this is for MSSQL 2012). You can find the necessary links after expanding the “Install Instructions” section.
  2. In some cases you need to edit the SQLAzureMW.exe.config file and adjust the path to the folder where the scripts are stored (by default it is c:\SQLAzureMW\BCPData). These scripts can get large, depending on your database size.
  3. Run the tool.
  4. Choose Analyze / Migrate -> Database.
  5. Connect to the source database (it can be RDS too – in my case it was the RDS instance where we hit the storage size limit). If you don’t have a source DB but a backup file, simply restore it to any MSSQL server you have access to (outside RDS, of course), then connect to that database.
  6. Generate the scripts. You might want to take a look at the advanced options – there are quite a few of them.
  7. After the script is generated, connect to the target database (if it does not exist, you can create it during the migration process) and execute the import.

 

That’s it. I was working on a 20 GB database; it took 15 minutes to pull the data and prepare the scripts, and around 90 minutes to push the data to the target DB. To speed things up, I ran the tool on an AWS EC2 instance launched in the same region as the RDS instance. After the import I even managed to rename the new RDS instance to the old name, so I could keep the old connection strings in all the config files.

And what about Amazon’s guidelines for importing a database into RDS? If you take a close look, you will see that one of the suggested scenarios involves bcp.exe (bulk copy, which is part of the SQL Command Line Tools). SQLAzureMW actually uses bcp.exe under the hood to perform the migration. My verdict: go for SQLAzureMW. Its simplicity makes it the best choice for an RDS MSSQL DBA.

by Njål

4K/UHD Display Connectivity

Large (> 30″) UHD/4K displays are finally starting to become affordable. Some people I know have purchased cheap 4K TVs and used them as computer monitors (for development etc.). This gives you enormous screen space, and the DPI is comparable to today’s monitors (around 100+), which means you won’t have to scale up fonts like you would on a 28″ UHD/4K monitor. (Font upscaling is not a problem on OS X, but it can be more painful in some programs on Windows, although it got a lot better in 8.1.)

But there is a big drawback with these cheap displays, especially when using them as computer monitors:

They usually only have HDMI (1.3) connections, which means a refresh rate of only 30Hz at 4K resolution – HDMI 1.3/1.4 simply doesn’t have the bandwidth for 4K at 60Hz. The result is noticeable lag.

Luckily, manufacturers such as Philips and Panasonic are starting to release large 4K TVs with DisplayPort and/or HDMI 2.0. These standards enable refresh rates of 60Hz and above, which eliminates the lag. DisplayPort 1.3 is currently the more capable standard of the two.

So make sure your display has DisplayPort and/or HDMI 2.0. Sooner or later you will regret buying a UHD device with HDMI 1.3 – whether you plan to watch sports or use the display as a computer monitor.

Here are a few options that will soon be in stores:

Panasonic TX-50AX802
This is a 50-inch UHD TV with both DisplayPort and HDMI 2.0 input! Is nice.

 
Philips BDM4065UC/00
This is a smaller, more affordable 40″ UHD monitor than the Panasonic model. Has DisplayPort. Great Success.