by Wojciech Sura

Exporting classes from C++ to C#

Sometimes a very useful piece of code is written in C++ and we would like to use it in C#. C++/CLI is one option, but sometimes even that solution can’t be used. In such cases the only option left is to export the required functionality through a DLL file.

Unfortunately, there’s a problem: DLLs were invented when C was one of the leading languages, and they’re designed so that even plain C programs can use them. That means: no object-oriented programming! (By the way, that’s one of the reasons .NET was created.)

The core of my ProCalc program is written in C++, but the user interface is written in C#, so I had to solve exactly this kind of problem. Fortunately, there’s a quite simple solution that allows passing objects through the DLL’s interface.

Let’s do a case study. The actual C++ class looks like the following:

class Engine
{
// (...)
private:
    ProCalcCore::Core * core;

public:
    Engine();
    ~Engine();

    double GetVariableFloatValue(const char * name);
// (...)
};

In order to pass this class through the DLL’s interface, I flattened its constructor and destructor in the following way:

__declspec(dllexport) BOOL __stdcall Engine_Create(ProCalcEngine::Engine * & instance)
{
    instance = new ProCalcEngine::Engine();

    return TRUE;
}

__declspec(dllexport) BOOL __stdcall Engine_Destroy(ProCalcEngine::Engine * instance)
{
    if (instance == NULL)
        return FALSE;

    delete instance;

    return TRUE;
}

So if you want to instantiate this class, you call the Engine_Create function, passing a reference to a pointer – the function fills that parameter with a pointer to the new instance. The actual value of that pointer is irrelevant; you only have to keep it around to reference that specific instance when calling other DLL functions. Methods are implemented in a very similar way:

__declspec(dllexport) BOOL __stdcall Engine_GetVariableFloatValue(ProCalcEngine::Engine * instance,
    const char * name,
    double & value)
{
    if (instance == NULL)
        return FALSE;

    try
    {
        value = instance->GetVariableFloatValue(name);

        return TRUE;
    }
    catch (...)
    {
        return FALSE;
    }
}
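One caveat worth noting: as written, these functions are compiled with C++ linkage, so the compiler mangles their exported names and DllImport would not find them under the plain name Engine_Create; marking them extern "C" (or listing them in a .def file elsewhere in the project) takes care of that. A minimal, self-contained sketch – the PROCALC_API macro and the stubbed-out Engine class are illustrative assumptions, and only the _WIN32 branch matters for a real DLL:

```cpp
#include <cstddef>

// PROCALC_API is an assumed helper macro; the non-Windows branch exists
// only to keep this sketch compilable everywhere.
#ifdef _WIN32
  #include <windows.h>
  #define PROCALC_API extern "C" __declspec(dllexport)
#else
  typedef int BOOL;
  #define TRUE 1
  #define FALSE 0
  #define __stdcall
  #define PROCALC_API extern "C"
#endif

namespace ProCalcEngine
{
    // Stub standing in for the real engine class.
    class Engine { };
}

// extern "C" suppresses C++ name mangling, so DllImport can find the
// function by its plain name. (In 32-bit builds __stdcall still adds its
// own decoration, e.g. _Engine_Create@4, which a .def file can remove.)
PROCALC_API BOOL __stdcall Engine_Create(ProCalcEngine::Engine * & instance)
{
    instance = new ProCalcEngine::Engine();
    return TRUE;
}

PROCALC_API BOOL __stdcall Engine_Destroy(ProCalcEngine::Engine * instance)
{
    if (instance == NULL)
        return FALSE;

    delete instance;
    return TRUE;
}
```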

Now let’s wrap all these functions in a C# class, so that we can interface with this native class in an object-oriented manner in C#.

First of all, we have to import all necessary functions from the DLL to C#.

public class Engine : IDisposable
{
    protected class Native
    {
        [DllImport("ProCalc.Engine.dll", CallingConvention = CallingConvention.StdCall)]
        [return: MarshalAs(UnmanagedType.Bool)]
        public static extern bool Engine_Create(ref IntPtr newInstance);

        [DllImport("ProCalc.Engine.dll", CallingConvention = CallingConvention.StdCall)]
        [return: MarshalAs(UnmanagedType.Bool)]
        public static extern bool Engine_Destroy(IntPtr instance);

        [DllImport("ProCalc.Engine.dll", CallingConvention = CallingConvention.StdCall)]
        [return: MarshalAs(UnmanagedType.Bool)]
        public static extern bool Engine_GetVariableFloatValue(IntPtr instance,
            [MarshalAs(UnmanagedType.LPStr)] string name,
            out double value);
        // (...)
    }

    // (...)

Now we can make this class work with the DLL internally, but provide a regular C# interface to the outside world (so that a user of this class won’t even know it is not 100% managed):

    private IntPtr instance = IntPtr.Zero;

    public Engine()
    {
        if (!Native.Engine_Create(ref instance))
            throw new InvalidOperationException("Internal error: cannot create instance of ProCalc engine!");
    }

    public void Dispose()
    {
        if (instance == IntPtr.Zero)
            return;

        if (!Native.Engine_Destroy(instance))
            throw new InvalidOperationException("Internal error: cannot destroy instance of ProCalc engine!");

        instance = IntPtr.Zero;
    }

    public double GetVariableFloatValue(string name)
    {
        double result;
        if (!Native.Engine_GetVariableFloatValue(instance, name, out result))
            throw new InvalidOperationException("Internal error: Cannot get variable value!");

        return result;
    }

    // (...)
}

Bonus chatter: I’m generally not a big fan of using the underscore character in identifiers. However, it seemed fitting to name the functions exported from the DLL like Engine_GetVariableFloatValue, because such names nicely resemble the C++ syntax: Engine::GetVariableFloatValue.

This convention makes it easier to keep order among all the functions exported by the DLL.

by Wojciech Sura

Don’t md5 passwords!

A hash is a function that must fulfill the following two requirements:

  • For any input data, it returns a result of constant size;
  • A small change (e.g. one bit) in the input data results in a large change in the output.

MD5 is one of the best-known hash functions fulfilling both of these requirements.

The first requirement implies another fact: a hash function is irreversible, because many different inputs may produce the same hash value. This is why MD5 was once widely used to store passwords: instead of storing the password itself (encrypted or not), one stores only its hash. Then it is enough to compare the stored value with the hash of what the user enters – if they match, the user is authenticated.

Since a hash function is irreversible, the only way to break it is a brute-force attack: trying all possible combinations until you find a password that hashes to the given value. So, theoretically, I can tell you all “c551cff173f6cf6ebee5d521f13aff9d” and sleep peacefully, sure that access to my data is secure?
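To get a feel for what such an attack does, here is a minimal C++ sketch. It substitutes std::hash for MD5 (MD5 itself requires an external library such as OpenSSL), and BruteForce is a hypothetical helper name – but the principle is exactly the same: enumerate candidates, hash each one, and compare against the stolen digest.

```cpp
#include <cstddef>
#include <functional>
#include <string>

// Brute-force sketch: find a lowercase word of at most maxLen letters
// whose hash equals a known digest. std::hash stands in for MD5 here;
// a real cracker works identically, just against a real digest.
std::string BruteForce(std::size_t target, int maxLen)
{
    std::hash<std::string> h;
    std::string candidate;

    // Depth-first walk over "a".."z", "aa".."zz", ...; candidate always
    // holds the current prefix of length `depth`.
    std::function<std::string(int)> walk = [&](int depth) -> std::string {
        if (h(candidate) == target)
            return candidate;
        if (depth == maxLen)
            return "";
        for (char c = 'a'; c <= 'z'; ++c)
        {
            candidate.push_back(c);
            std::string found = walk(depth + 1);
            if (!found.empty())
                return found;
            candidate.pop_back();
        }
        return "";
    };

    return walk(0);
}
```

Even this toy version checks tens of thousands of candidates in a blink – which hints at why the question above is not rhetorical.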

Well… it turns out that over the last few years, brute-force techniques have evolved greatly…

by Wojciech Sura

Windows 10 announced

Hot news! Microsoft announces Windows 10.

The full article is on the Microsoft blog. Also, you can watch the announcement below.

In a nutshell:

  • Microsoft aims to create a common platform spanning everything from on-chip solutions, through phones, tablets, convertibles, notebooks, PCs and servers, to gaming platforms (Xbox).
  • The UI won’t be common for all platforms (whew).
  • The Start menu is officially back! You will be able to pin tiles to it.
  • Snapping to the screen edge will be enhanced (Windows will suggest applications to pin to the other edges).
  • A new window-switching button is introduced (it works a little like Win+Tab from Aero, but looks more flat).
  • Windows will now support multiple desktops.

by Wojciech Sura

Advanced regular expressions

The man who invented regular expressions surely deserves a Nobel prize. I have lost track of how many tasks I have completed much faster thanks to this useful feature built into most text editors.

Today I used a modern extension of regular expressions called negative lookbehind. Let me tell you what it is and how you can use it to your advantage.

Modern editors (including Visual Studio) support the following regular expression syntax:

  • expr2(?=expr1)
  • expr2(?!expr1)
  • (?<=expr1)expr2
  • (?<!expr1)expr2

In order, they are:

  • Positive lookahead – matches expr2 if it is immediately followed by expr1.
  • Negative lookahead – matches expr2 if it is not immediately followed by expr1.
  • Positive lookbehind – matches expr2 if it is immediately preceded by expr1.
  • Negative lookbehind – matches expr2 if it is not immediately preceded by expr1.

Most importantly, in each case expr1 does not become part of the match.

What did I use it for today? I needed to clean up part of some HTML – I had a lot of HTML tags in a single line and wanted to break them up automatically. I used the following patterns:

  • Find: (?<![\n\r])<(?!/)
  • Replace: \r\n<

This regular expression finds all “<” characters that are not preceded by a newline and not followed by a “/” character, and adds a newline before them.
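The same trick works from code, not just from the editor. A small C++ sketch of the lookahead half of the pattern (BreakUpTags is a hypothetical helper name; note that std::regex with its default ECMAScript grammar supports lookahead but not lookbehind, so the (?<![\n\r]) part cannot be expressed here – .NET’s Regex class supports all four lookarounds):

```cpp
#include <regex>
#include <string>

// Insert a newline before every "<" that does not start a closing tag.
// "<(?!/)" matches a "<" only if it is NOT immediately followed by "/" --
// and because lookahead is not part of the match, only "<" is replaced.
std::string BreakUpTags(const std::string & html)
{
    static const std::regex pattern("<(?!/)");
    return std::regex_replace(html, pattern, "\n<");
}
```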

Then it was only a matter of pressing Ctrl+K, Ctrl+F, so that Visual Studio would format the whole HTML for me automatically.

by Wojciech Sura

NTFS Reparse points

The Linux ext2 and ext3 file systems support a very useful feature: hardlinks and symlinks. It is not widely known that for a very long time (as far back as Windows XP) the NTFS file system has had both of these features implemented (using a special mechanism called NTFS reparse points, available since NTFS 3.0) and ready to use – except that there is no GUI utility for them. Fortunately, recent Windows versions provide a convenient console command: mklink.

Let’s start the console with administrative rights and make some experiments.

D:\Temporary\Hardlinks>dir /b
file.txt

D:\Temporary\Hardlinks>mklink /H file2.txt file.txt
Hardlink created for file2.txt <<===>> file.txt

What happened? Let’s think of a file in terms of its contents. The file’s contents are placed somewhere on the disk, and the NTFS index contains an entry called file.txt pointing to those contents. Now we have created another entry – file2.txt – which points to the same contents as file.txt. Effectively we now have one file which resides in two places at once!

Let’s check if that is true.

D:\Temporary\Hardlinks>dir /b
file.txt
file2.txt

D:\Temporary\Hardlinks>type file.txt
Alice has a cat
D:\Temporary\Hardlinks>echo and dog >> file2.txt

D:\Temporary\Hardlinks>type file.txt
Alice has a cat and dog

We modified file2.txt, but the contents of file.txt changed as well. So indeed, file.txt and file2.txt are actually the same file!

Now what happens if we delete one of these files? Well, one entry in the NTFS index is deleted, but since there’s another one pointing to the actual data, the file still exists. Only removing the last remaining NTFS entry for a file results in actual deletion.

Now, what’s the difference between a hardlink and a symbolic link? A symbolic link does not point to the data but to a specific path, so it works a little like a Windows shortcut. Let’s make one and observe how it behaves.

D:\Temporary\Hardlinks>mklink symfile.txt file.txt
symbolic link created for symfile.txt <<===>> file.txt

The first difference can be seen when we simply list the folder’s contents:

D:\Temporary\Hardlinks>dir
 Volume in drive D is Dokumenty
 Volume Serial Number is ECD9-AAF9

 Directory of D:\Temporary\Hardlinks

2014-09-16 07:34 <DIR> .
2014-09-16 07:34 <DIR> ..
2014-09-16 07:27 16 file.txt
2014-09-16 07:29 <SYMLINK> symfile.txt [file.txt]
 2 File(s) 16 bytes
 2 Dir(s) 134 664 282 112 bytes free

Windows Explorer also knows that this is not an actual file, but merely a link to one.

Symlink-explorer

So what’s the difference between a shortcut and a symlink? They look very much alike.

A shortcut is a file whose contents describe some location on the computer and has to be interpreted by the operating system to work. A symlink behaves like a regular file (you may open it in Notepad, etc.), but it is actually an NTFS entry pointing to another file at the file system level. This means that if you examine a symlink’s contents, you’ll actually see the target file’s contents.

There’s another difference between a symlink and a hardlink. If you create a symlink to a file and then delete that file, the symlink will remain intact, but will point to an object that no longer exists. This causes errors if you try to open it:

D:\Temporary\Hardlinks>del file.txt

D:\Temporary\Hardlinks>type symfile.txt
The system cannot find the file specified.

Hardlinks can be created only within the same NTFS volume. Symlinks, on the other hand, can span volumes.

by arnecato

OneDrive for Business failing to synchronize

odb

Are you one of the many who are struggling with OneDrive for Business’s inability to synchronize files, or with its faulty Repair functionality?

Or do you get error messages like: Error Code = 0x80004005, Error Source = Groove?

I had files that seemed stuck and failed to synchronize, even after several attempts at various ways of unsyncing single files and folders, deleting and resyncing all folders, and lastly, letting OneDrive for Business fix it all with the Repair functionality.

None of this worked.

The solution that worked was to rename the folder Users\%username%\AppData\Local\Microsoft\Office\Spw and delete all synchronized folders. After that I let OneDrive synchronize all folders again.

24 hours later, so far, so good. It got past the previous blockers and all files are now synchronized.

Until next time something breaks…

by Wojciech Sura

Inspecting locked files

Locked file

(source: superuser.com)

Aaaargh! Surely everyone has encountered (and hates) this situation: one of the processes keeps an open handle to a file or folder and – as a result – it cannot be deleted. But which one?

SysInternals to the rescue – again! Today we will use another useful tool from this package, ProcExp. Process Explorer – as its full name goes – is actually quite a good replacement for its system equivalent: it provides a lot more information about each process, allows creating a mini or full process memory dump on request, and more.

We’ll use it to find the application that keeps a file open, thus preventing it from being deleted. To do so, open ProcExp, choose Find | Find Handle or DLL… and enter the name of the locked file. And boom – almost immediately we get the guilty process.

procexp

by Andrzej Kowal

Datatables.net – ajax pagination on scroll

Listing and paginating data in a web application can be implemented in many different ways. One technique I really like is automatic ajax pagination on window scroll. It provides a great user experience and a clean UI without any paging controls. It requires loading just enough items to force the browser to show the vertical scrollbar. When the user scrolls the window, more items are loaded with ajax as soon as the scroll reaches around 80% of the window height. This approach is widely used by many websites: Facebook, Twitter, Pinterest, LinkedIn and many more. However, when it comes to tabular data, plain old paging is still implemented quite often. Time for a change.

Recently, while working on the insights feature in Ping.it, I had to implement “expand on scroll” behavior in an HTML table driven by the jQuery datatables plugin. It is a very handy utility which converts any HTML table into a powerful client-side grid. It has two basic operation modes: client-side and server-side. To my surprise, in server mode the grid cannot be easily extended with new rows loaded from the server – it supports only traditional paging. When I tried to force the grid to load more rows, it ended up refreshing the whole grid. After a bit of research and experimenting I came up with a solution. In the scroll event handler, more data is loaded from the server with ajax:

var scrollForMoreData = function() {
    $.ajax({
        url: '/LoadMoreData',
        data: { ... },
        success: ajaxSuccess
    });
};

The server JSON response contains formatted HTML table rows:

var result = {
    "content": "<tr><td>Mary Jane</td><td>26</td></tr>" +
               "<tr><td>Peter Parker</td><td>28</td></tr>...",
    "hasMore": true
};

On ajax success the <tr>s are simply appended to the end of the existing HTML table:

var ajaxSuccess = function (result) {
    $('#my-table tbody').append(result.content);
};

Datatables sorting and filtering can still be used – in that case a full grid refresh is the correct behavior (since sorting or filtering may fetch data currently not visible in the grid).

There is one important thing to consider – when loading more data with ajax, it is essential to copy the datatable’s auto-generated ajax request data in order to send the correct sorting and filtering parameters to the server:

$('#my-table')
    .on('xhr.dt', function () {
        var ajaxData = $(this).api().ajax.params();
        $(this).data('custom-ajax-parameters', ajaxData);
    });

Note that the xhr event must be bound with the .dt namespace. Later, when the user scrolls the page, I can retrieve the ajaxData object stored within the #my-table DOM element and send it as input to my “scroll” ajax request. This ensures that the server actually returns a correctly sorted and filtered result. Since we don’t use the datatables paging feature, we need to handle paging ourselves, e.g. by adding an additional parameter to the “scroll” ajax request – the number of rows already displayed (or the page number, depending on your paging implementation). The adjusted scrollForMoreData function looks like this:

var scrollForMoreData = function() {
    var ajaxData = $('#my-table').data('custom-ajax-parameters');
    ajaxData.skipRows = $('#my-table tbody tr').length;
    $.ajax({
        url: '/LoadMoreData',
        data: ajaxData,
        success: ajaxSuccess
    });
};

Happy data-listing!

by Wojciech Sura

Monitoring process events

Some time ago a friend of mine had a problem with PHP for IIS. No matter how he modified the contents of the php.ini file, PHP failed to load a required extension. We even damaged that file on purpose, to no effect at all – PHP didn’t even seem to try opening it. Apparently, it was using a configuration file placed in a different folder (or at least searched for one elsewhere).

But where?

Let’s try to recreate that situation – though since I don’t have a copy of IIS with PHP at my disposal, my own small application, ProCalc, volunteered to be the example. Last time we used DbgView – a nice tool from the great SysInternals diagnostic package; this time we’ll use another handy tool from it, ProcMon, to monitor an application’s activity in great detail.

After starting ProcMon, we’ll be flooded with thousands of notifications – that’s because ProcMon attempts to inspect every process in the system. We have to set up a filter to monitor a specific application.

ProcMon-filter

ProcMon will now display all low-level thread, file and registry events dispatched by the process (regardless of their outcome). Let’s have a look.

ProcMon-events

OK, that’s still too much. Let’s narrow our search down to only those file events whose names contain “xml”.

ProcMon-found

And we’ve got it: ProCalc first tries to open a configuration file in its own folder and then searches for it in the user’s home directory.

Note that ProcMon also helps find out which dynamic libraries are missing (if any) and which versions of .NET assemblies an application is using.

by Wojciech Sura

Debugging applications

Sometimes it is useful to output diagnostic data from your application without attaching a debugger and stepping through the code. The Windows API provides the function OutputDebugString, which sends a message to the debugger if one is attached.

If you’re running the application from within Visual Studio, debug strings are captured by the IDE and displayed in the Output window. However, even if no IDE is available, there is a way to capture debug strings (which may be useful when, for instance, debugging an application on a client’s computer). To do so, one may use a small, free and portable application called DbgView, which registers itself as a system debugger and captures the debug messages sent by all applications.
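Using it from C++ takes one call. A minimal sketch – DebugLog is a hypothetical helper name, and the stderr fallback is only an assumption that keeps the sketch compilable outside Windows:

```cpp
#include <cstdio>
#include <string>
#ifdef _WIN32
#include <windows.h>   // OutputDebugStringA
#endif

// Send a diagnostic message to an attached debugger. On Windows this goes
// through OutputDebugStringA, which is exactly what DbgView captures;
// elsewhere the sketch simply writes to stderr.
void DebugLog(const std::string & message)
{
#ifdef _WIN32
    OutputDebugStringA((message + "\n").c_str());
#else
    std::fprintf(stderr, "%s\n", message.c_str());
#endif
}
```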

DbgView

If you’re not a C++ programmer, fear not – C# has a static method, System.Diagnostics.Debug.WriteLine, which does exactly the same. And if you’re programming kernel-mode drivers, you may use the DbgPrint function.