
DVLab 2012-2014: A Journey from Startup to Hobby Business Supported by the Nyföretagarcenter and the Trygghetsrådet

This journey begins in 2012 with the founding of DVLab, a startup specializing in computer vision. What started as an ambitious business idea evolved into a hobby business in 2014. Despite the change in the nature of the business, the insights gained along the way kept their value. Throughout these changes, Nyföretagarcenter and Trygghetsrådet proved to be invaluable resources, offering support and guidance while navigating the various phases of the company.

This documentation delves deeper into the details of that transformation. It includes descriptions and code files from the demo applications for online services developed during DVLab's lifetime. These files are a concrete record of the company's technical progress and stand as a testament to the technical skill and creativity that characterized DVLab's work.

DVLab develops applications for recognition and pattern matching based on open-source code in the field of Computer Vision. We are located in Stockholm, Sweden. During the startup year (2012), DVLab conducted tests and development resulting in a few demo applications for online services.

Throughout the year, DVLab invested in equipment and the necessary system software and set up a development environment to ensure that the web application can handle pattern matching from the most common mobile devices and computers, regardless of platform. DVLab emphasizes that the system is designed so that users' requests for improvements or changes can be quickly coded, tested, and put into production.

DVLab strives to stay up to date with developments in the field of Computer Vision and surrounding technologies in order to offer the best service available today. This means that our applications evolve in step with the rapid advances in this exciting branch of technology. It is this attention to detail that makes customers return, thereby ensuring DVLab's success.

DVLab estimates that a commercial website using or linking to our applications can increase its sales and enhance customer satisfaction by providing information quickly, without the detour of phone calls or reference books. The website will primarily be information- and demo-based.

DVLab has chosen development tools that ensure the requirements of a modern web service are met. A detailed design document is produced, which also includes a log of the development phases and an activity and action list kept during the course of the project.

For documentation, Doxygen was used.
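DVLab.cpp below is annotated with Doxygen comment blocks (@file, @brief, @param, @date). As a minimal, made-up illustration of that convention (the helper function itself is not part of DVLab's code), a documented function looks like this:

/**
 * @fn		static double matchRatio( int matches, int keypoints )
 *
 * @brief	Converts a match count to a percentage of the detected keypoints.
 *
 * @param	matches  	Number of cross-checked matches.
 * @param	keypoints	Number of keypoints detected in the query image.
 *
 * @return	The match ratio in percent.
 */
static double matchRatio( int matches, int keypoints )
{
    return keypoints > 0 ? 100.0 * matches / keypoints : 0.0;
}

Running doxygen over the sources then turns these comment blocks into browsable HTML or LaTeX reference pages.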

index.html

<!DOCTYPE html>
<!--[if lt IE 7]>      <html class="no-js lt-ie9 lt-ie8 lt-ie7"> <![endif]-->
<!--[if IE 7]>         <html class="no-js lt-ie9 lt-ie8"> <![endif]-->
<!--[if IE 8]>         <html class="no-js lt-ie9"> <![endif]-->
<!--[if gt IE 8]><!--> <html class="no-js"> <!--<![endif]-->
<!--[if lt IE 9]>
<script src="https://html5shim.googlecode.com/svn/trunk/html5.js"></script>
<![endif]-->
    <head>
        <meta charset="utf-8">
        <meta http-equiv="X-UA-Compatible" content="IE=edge,chrome=1">
        <meta content="text/html; charset=utf-8" http-equiv="Content-Type">
        <title>Datorseende @ DVLab</title>
        <meta name="description" content="">
        <meta name="viewport" content="width=device-width">
        <link rel="stylesheet" href="css/normalize.min.css">
        <link rel="stylesheet" href="css/main.css">
        <link rel="stylesheet" media="all" href="css/style.css"/>
        <script src="js/vendor/modernizr-2.6.2-respond-1.1.0.min.js"></script>
    </head>
    <body>
        <!--[if lt IE 7]>
            <p class="chromeframe">You are using an <strong>outdated</strong> browser. Please <a href="http://browsehappy.com/">upgrade your browser</a> or <a href="http://www.google.com/chromeframe/?redirect=true">activate Google Chrome Frame</a> to improve your experience.</p>
        <![endif]-->
        <div class="header-container">
            
            <header class="wrapper clearfix">
                <h1 class="title">DVLab</h1>
                <nav>
                    <ul>
                        <li><a href="/omoss.html">Om oss</a></li>
                    </ul>
                </nav>
            </header>
        </div>
        
        <div class="main-container">
            <div class="main wrapper clearfix">
                <article>
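                    <!-- Drop area: a dropped or selected image is resized on a canvas by
                         js/script.js and posted to upload.php; the server's reply
                         (matching images, if any) is rendered into #result below. -->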
                        <div id="droparea">
	                    <div class="dropareainner">
		                    <p class="dropfiletext">Släpp din bild här</p>
		                    <p>alternativt</p>
		                    <p><input id="uploadbtn" class="uploadbtn" type="button" value="ta en bild eller välj i befintlig katalog"/></p>
                            <p class="extra">
		                    <input type="radio" id="nocrop" name="croping" value="nocrop" checked /><label for="nocrop">Ingen beskärning</label>
		                    <input type="radio" id="crop" name="croping" value="crop" /><label for="crop">Beskär bilden</label>
		                    </p>
                            <p id="err">Vänta! du måste AKTIVERA Javascript för att det ska fungera!</p>
	                    </div>
	                 <input id="upload" type="file" multiple/>
                   </div>
                      <p class="message">Obs! uppladdade bilder <strong>SPARAS INTE</strong> på servern.</p>
                   <div id="result"></div>
                    </header>
                </article>
                <aside>
                    <h3>Demo webbplats under ständig utveckling!</h3>
                    <h3>Testat med Chrome, Firefox, Opera och IE.</h3>
                    <h3>Smartphone testat med HTC one x/Android med Chrome.</h3>
                    <h3>En sökning tar upp till 15 minuter.</h3>
                    <!-- will download as "test_bilder.zip" -->
                    <h3><strong>Bilderna kan vara skyddade enligt upphovsrättslagen.</strong></h3>
                    <a href="/files/test_bilder.zip" class="sourceCode" download="test_bilder.zip">Obs! 19,2 MB, ladda ner och prova med några bilder.</a>
                </aside>
            </div> <!-- #main -->
        </div> <!-- #main-container -->
        <div class="footer-container">
            <footer class="wrapper">
                <h5>DVLab © <script type="text/javascript">document.write(new Date().getFullYear());</script></h5>
            </footer>
        </div>
        <script src="//ajax.googleapis.com/ajax/libs/jquery/1.10.2/jquery.min.js"></script>
        <script>window.jQuery || document.write('<script src="js/vendor/jquery-1.10.2.min.js"><\/script>')</script>
        <script src="https://code.jquery.com/jquery-migrate-1.2.1.min.js"></script>
        <script>window.jQuery || document.write('<script src="js/vendor/jquery-migrate-1.2.1.min.js"><\/script>')</script>
        <!-- Grab Microsoft CDN's jQuery templates, with a protocol relative URL; -->
        <script src="https://ajax.aspnetcdn.com/ajax/jquery.templates/beta1/jquery.tmpl.min.js"></script>
        <script src="js/vendor/jquery.tmpl.min.js"></script>
        <script>window.jQuery || document.write('<script src="js/vendor/jquery.tmpl.min.js"><\/script>')</script>
        <!-- using parts of canvas example by inWebson.com and paulund -->
        <script id="imageTemplate" type="text/x-jquery-tmpl">
            <center><div class="fb"><div></div><div></div><div></div></div></center>
            <div class="imageholder">
            <figure>
		    	<img src="${filePath}" alt="${fileName}"/>
			    <figcaption>
				    ${fileName} <br/>
				    <span>Original Size: ${fileOriSize} KB</span><br/>
				    <span>Upload Size: ${fileUploadSize} KB</span>
			    </figcaption>
		    </figure>
	    </div>
        </script>
        <script>
            //position fixed
            $(document).ready(function() {
                var windowHeight = $(window).height()-$('#header').outerHeight(true)-$('#footer').outerHeight(true)-100;
                $('#wrapper').css('minHeight', windowHeight);
            });
        </script>
        <script src="js/plugins.js"></script>
        <script src="js/main.js"></script>
        <script src="js/script.js"></script>
    </body>
</html>


script.js

$(document).ready(function () {
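    // Overview: wires up the #droparea drag-and-drop zone and the file picker,
    // reads each selected image with the File API, scales it to 320x240 on a
    // canvas (optionally cropped to a square region first), shows a preview via
    // the jQuery template #imageTemplate, and posts the resized JPEG to
    // upload.php as multipart form data.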
    // Variables
    var imgWidth = 320,
        imgHeight = 240,
        zindex = 0,
        dropzone = $('#droparea'),
        uploadBtn = $('#uploadbtn'),
        defaultUploadBtn = $('#upload');
    // Events Handler
    dropzone.on('dragover', function () {
        // Add hover class when drag over
        dropzone.addClass('hover');
        return false;
    });
    dropzone.on('dragleave', function () {
        // Remove hover class when drag out
        dropzone.removeClass('hover');
        return false;
    });
    dropzone.on('drop', function (e) {
        // Prevent browser from open the file when drop off
        e.stopPropagation();
        e.preventDefault();
        dropzone.removeClass('hover');
        // Retrieve uploaded files data
        var files = e.originalEvent.dataTransfer.files;
        processFiles(files);
        return false;
    });
    uploadBtn.on('click', function (e) {
        e.stopPropagation();
        e.preventDefault();
        // Trigger default file upload button
        defaultUploadBtn.click();
    });
    defaultUploadBtn.on('change', function () {
        // Retrieve selected uploaded files data
        var files = $(this)[0].files;
        processFiles(files);
        return false;
    });
    // Internal functions
    // Bytes to KiloBytes conversion
    function convertToKBytes(number) {
        return (number / 1024).toFixed(1);
    }
    function compareWidthHeight(width, height) {
        var diff = [];
        if (width > height) {
            diff[0] = width - height;
            diff[1] = 0;
        } else {
            diff[0] = 0;
            diff[1] = height - width;
        }
        return diff;
    }
    // Shim the BlobBuilder with the vendor prefixes
    window.BlobBuilder || (window.BlobBuilder = window.MSBlobBuilder || window.MozBlobBuilder || window.WebKitBlobBuilder);
    var BlobBuilder = window.BlobBuilder;
    function dataURItoBlob(dataURI) {
        // Convert base64 to raw binary data held in a string
        var byteString;
        if (dataURI.split(',')[0].indexOf('base64') >= 0)
            byteString = atob(dataURI.split(',')[1]);
        else
            byteString = unescape(dataURI.split(',')[1]);
        // Separate out the mime component
        var mimeString = dataURI.split(',')[0].split(':')[1].split(';')[0]
        // Write the bytes of the string to an ArrayBuffer
        var ab = new ArrayBuffer(byteString.length);
        var ia = new Uint8Array(ab);
        for (var i = 0; i < byteString.length; i++) {
            ia[i] = byteString.charCodeAt(i);
        }
        if ('Blob' in window) {
            try {
                // The new blob interface
                var dataView = new DataView(ab);
                var blob = new Blob([dataView], { type: mimeString });
                return blob;
            }
            catch (e) { }
        }
        if (!blob) {
            try {
                // The deprecated BlobBuilder interface
                var bb = new BlobBuilder();
                bb.append(ab);
                return bb.getBlob(mimeString);
            }
            catch (e) { }
        }
    }
    // Process FileList
    var processFiles = function (files) {
        if (files && typeof FileReader !== "undefined") {
            // Process each files only if browser is supported
            for (var i = 0; i < files.length; i++) {
                readFile(files[i]);
            }
        } else {
        }
    }
    // Read the File Object
    var readFile = function (file) {
        if ((/image/i).test(file.type)) {
            //define FileReader object
            var reader = new FileReader();
            // Init reader onload event handlers
            reader.onload = function (e) {
                var image = $('<img/>')
				.load(function () {
				    // When image fully loaded
				    var newimageurl = getCanvasImage(this);
				    createPreview(file, newimageurl);
				    uploadToServer(file, dataURItoBlob(newimageurl));
				})
				.attr('src', e.target.result);
            };
            // Begin reader read operation
            reader.readAsDataURL(file);
            $('#err').text('');
        } else {
            // Some message for wrong file format
            $('#err').text('*Fel fil format!');
        }
    }
    // Get New Canvas Image URL
    var getCanvasImage = function (image) {
        // Get selected effect
        var croping = $('input[name=croping]:checked').val();
        // Define canvas
        var canvas = document.createElement('canvas');
        canvas.width = imgWidth;
        canvas.height = imgHeight;
        var ctx = canvas.getContext('2d');
        // Default resize variable
        var diff = [0, 0];
        if (croping == 'crop') {
            // Get resized width and height
            diff = compareWidthHeight(image.width, image.height);
        }
        // Draw canvas image	
        ctx.drawImage(image, diff[0] / 2, diff[1] / 2, image.width - diff[0], image.height - diff[1], 0, 0, imgWidth, imgHeight);
        // Convert canvas to jpeg url
        return canvas.toDataURL("image/jpeg");
    }
    // Draw Image Preview
    var createPreview = function (file, newURL) {
        // Populate jQuery Template binding object
        var imageObj = {};
        imageObj.filePath = newURL;
        imageObj.fileName = file.name.substr(0, file.name.lastIndexOf('.')); // Subtract file extension
        imageObj.fileOriSize = convertToKBytes(file.size);
        imageObj.fileUploadSize = convertToKBytes(dataURItoBlob(newURL).size); //  Convert new image URL to blob to get file.size
        // Extend filename
        var effect = $('input[name=effect]:checked').val();
        // Append new image through jQuery Template
        var randvalue = Math.floor(Math.random() * 31) - 15;  // Random number
        var img = $("#imageTemplate").tmpl(imageObj).prependTo("#result")
		.hide()
		.css({
		    'Transform': 'scale(1) rotate(' + randvalue + 'deg)',
		    'msTransform': 'scale(1) rotate(' + randvalue + 'deg)',
		    'MozTransform': 'scale(1) rotate(' + randvalue + 'deg)',
		    'webkitTransform': 'scale(1) rotate(' + randvalue + 'deg)',
		    'OTransform': 'scale(1) rotate(' + randvalue + 'deg)',
		    'z-index': zindex++
		})
		.show();
        if (isNaN(imageObj.fileUploadSize)) {
            $('.imageholder span').last().hide();
        }
    }
    // Upload Image to Server
    var uploadToServer = function (oldFile, newFile) {
        // Prepare FormData
        var formData = new FormData();
        // We still have to use back old file
        // Since new file doesn't contains original file data
        formData.append('filename', oldFile.name);
        formData.append('filetype', oldFile.type);
        formData.append('file', newFile);
        // Submit formData using $.ajax			
        $.ajax({
            url: 'upload.php',
            type: 'POST',
            data: formData,
            processData: false,
            contentType: false,
            dataType: 'html',
            success: function (data) {
                $('#result').html(data);
            }
        });
    }
    // File upload via original byte sequence
    var processFileInIE = function (file) {
        var imageObj = {};
        var extension = ['jpg', 'JPG', 'jpeg', 'JPEG', 'gif', 'GIF', 'png', 'PNG'];
        var filepath = file.value;
        if (filepath) {
            // Get file name
            var startIndex = (filepath.indexOf('\\') >= 0 ? filepath.lastIndexOf('\\') : filepath.lastIndexOf('/'));
            var filedetail = filepath.substring(startIndex);
            if (filedetail.indexOf('\\') === 0 || filedetail.indexOf('/') === 0) {
                filedetail = filedetail.substring(1);
            }
            var filename = filedetail.substr(0, filedetail.lastIndexOf('.'));
            var fileext = filedetail.slice(filedetail.lastIndexOf(".") + 1).toLowerCase();
            // Check file extension
            if ($.inArray(fileext, extension) > -1) {
                // Append using template
                $('#err').text('');
                imageObj.filepath = filepath;
                imageObj.filename = filename;
                var randvalue = Math.floor(Math.random() * 31) - 15;
                $("#imageTemplate").tmpl(imageObj).prependTo("#result")
				.hide()
				.css({
				    'Transform': 'scale(1) rotate(' + randvalue + 'deg)',
				    'msTransform': 'scale(1) rotate(' + randvalue + 'deg)',
				    'MozTransform': 'scale(1) rotate(' + randvalue + 'deg)',
				    'webkitTransform': 'scale(1) rotate(' + randvalue + 'deg)',
				    'oTransform': 'scale(1) rotate(' + randvalue + 'deg)',
				    'z-index': zindex++
				})
				.show();
                $('#result').find('figcaption span').hide();
            } else {
                $('#err').text('*Selected file format not supported!');
            }
        }
    }
    // Browser compatible text
    if (typeof FileReader === "undefined") {
        // $('.extra').hide();
        $('#err').html('Ops! din webbläsare supporterar inte <strong>HTML5 Fil API</strong> <br/>försök med senare version!');
    } else if (!Modernizr.draganddrop) {
        $('#err').html('Ops! Din webbläsare supporterar inte <strong>Drag and Drop API</strong>! <br/>Använd istället knappen \'<em>Välj bild</em>\' för att ladda upp bilden =)');
    } else {
        $('#err').text('');
    }
});

upload.php

<?php
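/// --Overview: receives the resized image posted by js/script.js, stores it in a
/// --per-session temp directory, runs the DVLab matcher binary on it, and echoes
/// --any matching reference images back inline as base64 <img> tags.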
if(isset($_FILES['file']))
{
	/// --Start session function
	session_start();
	/// --Function to create temporary session dir
	mkdir('/var/www/html/user/temp/'.session_id(), 02777);
	chmod('/var/www/html/user/temp/'.session_id(), 02777);
	/// --Put the file where we'd like it
	$upfile = '/var/www/html/user/temp/'.session_id().'/'.$_POST['filename'];
	if(!move_uploaded_file($_FILES["file"]['tmp_name'], $upfile))
	{
		echo '#Problem';
		exit;
	}
}
/// --We allow arbitrary number of arguments intentionally here.
$session_id=session_id();
$temp_session_file = '/var/www/html/user/temp/' .session_id(). '/';
$session_path='/var/www/html/user/temp/' .session_id(). '/' . 'success.txt';
/// --Windows
//$work_file='DVLab.exe';
/// --Linux
$work_file='./DVLab';
$workingdirectory='/var/www/html/user';
chdir($workingdirectory);
/// --Windows
//exec(escapeshellcmd("start /B $work_file $upfile $session_path $temp_session_file"));
/// --Linux
exec(escapeshellcmd("$work_file $upfile $session_path $temp_session_file"));
if(file_exists($session_path))
{
	foreach (glob("/var/www/html/user/temp/$session_id/*.{jpg,JPG,jpeg,JPEG,gif,GIF,png,PNG}", GLOB_BRACE) as $file)
	{
		$filename_parts = pathinfo($file);
		$filename = $filename_parts['filename'];
		$ext = $filename_parts['extension'];
		$filename = str_replace("_alternate", "", $filename);
		$height = 'auto';
		$width = 'auto';
		echo "<div>";
		$imgbinary = fread(fopen($file, "r"), filesize($file));
		$img_str = base64_encode($imgbinary);
		//echo '<a href="http://www.dvlab.se/"><img src="data:image/jpg;base64,'.$img_str.'" height="'.$height.'" width="'.$width.'" alt="Bild saknas"/>';
		echo '<img src="data:image/jpg;base64,'.$img_str.'" height="'.$height.'" width="'.$width.'" alt="Bild saknas"/>';
		//echo "<p>Klicka på bilden för att komma till dess ursprungsida.</p>";
		echo "<p><textarea readonly name=\"answer\" cols=\"auto\" rows=\"10\" style=\"text-align:left;\">$filename</textarea></p>";
		echo "</div>";
	}
}
else
{
	echo "<font color='#b71414'>Tyvärr hittade vi ingen bra match; antingen saknas objektet i våra databaser, eller försök fokusera mer på objektet...</font>";
}
/// --Empty the temporary session dir
foreach(glob($temp_session_file . '*.*') as $v) {
	unlink($v);
}
/// --Remove the temporary session dir
rmdir('/var/www/html/user/temp/'.session_id());
//rmdir($temp_session_file.session_id());
session_destroy();
?>
<!DOCTYPE html>
<html lang="sv">
    <head>
        <meta charset="utf-8">
        <meta http-equiv="X-UA-Compatible" content="IE=edge,chrome=1">
        <title>Datorseende</title>
        <meta name="description" content="">
        <meta name="viewport" content="width=device-width">
        <link rel="stylesheet" href="css/normalize.min.css">
        <link rel="stylesheet" href="css/main.css">
        <link rel="stylesheet" media="all" href="css/style.css"/>
        <script src="js/vendor/modernizr-2.6.2-respond-1.1.0.min.js"></script>
    </head>
    <body>
        
    </body>
</html>


DVLab.cpp

Note: the non-free modules SIFT and SURF are patented; I used them here to test OpenCV's computer vision capabilities. OpenCV offers alternative, patent-free methods, such as BRISK or FREAK, that can be used instead.

If you wish to use them commercially, you must contact the patent holders.
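As a rough, untested sketch (not part of the original DVLab.cpp), a patent-free setup in OpenCV 2.4 could use ORB keypoints with BRISK descriptors and a Hamming-distance matcher in place of SIFT/OpponentSIFT/FlannBased; the helper function below is hypothetical:

#include "opencv2/opencv.hpp"
#include "opencv2/features2d/features2d.hpp"
using namespace cv;

/// --Hypothetical sketch of a patent-free matching pipeline (ORB + BRISK + Hamming).
/// --Binary descriptors need a Hamming-distance matcher rather than the FLANN KD-tree.
static void matchPatentFree( const Mat& img1, const Mat& img2, std::vector<DMatch>& matches )
{
    Ptr<FeatureDetector> detector = FeatureDetector::create( "ORB" );
    Ptr<DescriptorExtractor> extractor = DescriptorExtractor::create( "BRISK" );
    Ptr<DescriptorMatcher> matcher = DescriptorMatcher::create( "BruteForce-Hamming" );
    std::vector<KeyPoint> keypoints1, keypoints2;
    Mat descriptors1, descriptors2;
    detector->detect( img1, keypoints1 );
    detector->detect( img2, keypoints2 );
    extractor->compute( img1, keypoints1, descriptors1 );
    extractor->compute( img2, keypoints2, descriptors2 );
    matcher->match( descriptors1, descriptors2, matches );
}

Match counts from binary descriptors are not directly comparable to the SIFT pipeline, so the fixed threshold of 215 filtered matches used in DVLab.cpp would need retuning.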

/*
 * @file DVLab.cpp
 *
 *  Created on: Apr 22, 2014
 *
 *     @author	Bo Sving
 *
 *     @brief	Built with OpenCV 2.4.9 64bit / Ubuntu Desktop 14.04 LTS 64bit / Windows 8.1 64bit.
 *
 *     @brief	Implements the recognize class with boost and tbb.
 */
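/*
 * Overview: upload.php invokes the program as
 *     ./DVLab <uploaded image> <path of success.txt> <session image dir>
 * It extracts SIFT keypoints with OpponentSIFT colour descriptors from the uploaded
 * image, loops over the reference images in the "image" folder, cross-checks
 * FLANN-based matches against each one and estimates a RANSAC homography; when a
 * reference image yields at least 215 filtered matches it is copied to the session
 * directory and success.txt is written, so upload.php can display the result.
 */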
#include "opencv2/opencv.hpp"
#include "opencv2/nonfree/nonfree.hpp"
#include "opencv2/photo/photo.hpp"
#include <boost/filesystem.hpp>
#include <stdio.h>
#include <iostream>
#include <fstream>
#include <vector>
#include <ctime>
using namespace boost::filesystem;
using namespace std;
using namespace cv;
#define DRAW_RICH_KEYPOINTS_MODE     0
#define DRAW_OUTLIERS_MODE           0
/**
 * @brief	Global match counters and run-time parameters.
 *
 * @date	2012-11-27
 *
 * @date	2013-06-28
 * Updated by including all parameters in the code instead of passing them on the
 * command line, except for argv[1], the input file.
 */
/// --Global match counter values
int GOOD_MATCHES;
int GOOD_MATCHES_end = (0);
int GLOBAL_Copy_file;
int loop_count = (0);
/// --Global arguments
int argc;
char **argv;
/// --Set time parameters for time stamp & end time
double duration;
double duration_end_time = 600;
/// --prod = 0 and dev = 1; dev mode draws and saves the match image, prod mode only records matches
int PROD_OR_DEV_MODE (0);
/// --Windows
#if (defined(WIN32) || defined(_WIN32) || defined(WIN64) || defined(_WIN64))
/// --Parameters for Windows
const string defaultDetectorType = "SIFT";
const string defaultDescriptorType = "OpponentSIFT";
const string defaultMatcherType = "FlannBased";
const string defaultCheckFilter = "CrossCheckFilter";
const string defaultimages_folder = "./image/";
string session_id_dir, session_img_dir;
char* defaultransacReprojThreshold = "3";
#define WIN32
#include <io.h>
#include <direct.h>
// everything else (Linux, *BSD, ...)
#else
/// --Parameters for Linux
string defaultDetectorType = "SIFT";
string defaultDescriptorType = "OpponentSIFT";
string defaultMatcherType = "FlannBased";
string defaultCheckFilter = "CrossCheckFilter";
string defaultimages_folder = "image";
string session_id_dir, session_img_dir;
const char* defaultransacReprojThreshold = "3";
#include <dirent.h>
#endif // windows
#ifdef HAVE_CVCONFIG_H
#include <cvconfig.h>
#endif
#ifdef HAVE_TBB
using namespace tbb;
#include "tbb/task_scheduler_init.h"
#endif
/**
 * @fn	static void ChangeMyGlobal(void)
 *
 * @brief	Change my global.
 *
 * @date	2012-11-27
 */
static void ChangeMyGlobal(void)
	{
		GOOD_MATCHES = GOOD_MATCHES + 1; } /* Reference to external Global variable of GOOD_MATCHES in a function. */
static void ChangeMyGlobalCopy(void)
	{
		GLOBAL_Copy_file--; } /* Reference to external Global variable of GLOBAL_Copy_file in a function. */
/**
 * @fn	static void successfile( void )
 *
 * @brief	Copy Success.txt to session_id_dir if object found.
 *
 * @date	2014-04-12
 */
void successfile()
	{
	std::ofstream successFile;
	successFile.open(argv[2]);
	successFile << "Success\n" << std::endl;
	successFile.close();
	}
/**
 * @fn
 * static void readDirectory( const string&directoryName, vector<string>&filenames,
 * bool addDirectoryName = true )
 *
 * @brief	Reads a directory.
 *
 * @date	2012-11-27
 *
 * @param	directoryName	 	Pathname of the directory.
 * @param [in,out]	filenames	The filenames.
 * @param	addDirectoryName 	(optional) pathname of the add directory.
 */
static void readDirectory( const string&directoryName, vector<string>&filenames, bool addDirectoryName = true )
{
    filenames.clear();
#ifdef WIN32
    struct _finddata_t s_file;
    string str = directoryName + "\\*.*";
    intptr_t h_file = _findfirst( str.c_str(), &s_file );
    if( h_file != static_cast<intptr_t>(-1.0) )
    {
        do {
            if( addDirectoryName )
                filenames.push_back( directoryName + "\\" + string(s_file.name) );
            else
                filenames.push_back( (string)s_file.name );
        }
        while( _findnext( h_file, &s_file ) == 0 );
    }
    _findclose( h_file );
#else
    DIR* dir = opendir( directoryName.c_str() );
    if( dir != NULL )
    {
        struct dirent* dent;
        while( (dent = readdir(dir)) != NULL )
        {
            if( addDirectoryName )
                filenames.push_back( directoryName + "/" + string(dent->d_name) );
            else
                filenames.push_back( string(dent->d_name) );
        }
    }
#endif
    /// --Sort the collected file names alphabetically
    sort( filenames.begin(), filenames.end() );
}
/**
 * @enum
 *
 * @brief	Values that represent the available matcher filter types.
 */
enum { NONE_FILTER = 0, CROSS_CHECK_FILTER = 1 };
/**
 * @fn	static int getMatcherFilterType( const string& str )
 *
 * @brief	Gets matcher filter type.
 *
 * @date	2012-11-27
 *
 * @param	str	The.
 *
 * @return	The matcher filter type.
 */
static int getMatcherFilterType( const string& str )
{
    if( str == "NoneFilter" )
        return NONE_FILTER;
    if( str == "CrossCheckFilter" )
        return CROSS_CHECK_FILTER;
    CV_Error(CV_StsBadArg, "Invalid filter name");
    return -1;
}
/**
 * @fn
 * static void simpleMatching( Ptr<DescriptorMatcher>& descriptorMatcher,
 * const Mat& descriptors1, const Mat& descriptors2, vector<DMatch>& matches12 )
 *
 * @brief	Simple matching.
 *
 * @date	2012-11-27
 *
 * @param [in,out]	descriptorMatcher	The descriptor matcher.
 * @param	descriptors1			 	The first descriptors.
 * @param	descriptors2			 	The second descriptors.
 * @param [in,out]	matches12		 	The second matches 1.
 */
static void simpleMatching( Ptr<DescriptorMatcher>& descriptorMatcher,
                     const Mat& descriptors1, const Mat& descriptors2,
                     vector<DMatch>& matches12 )
{
    descriptorMatcher->match( descriptors1, descriptors2, matches12 );
}
/// --The matches are cross checked and stored within filteredMatches
/**
 * @fn
 * static void crossCheckMatching( Ptr<DescriptorMatcher>& descriptorMatcher,
 * const Mat& descriptors1, const Mat& descriptors2, vector<DMatch>& filteredMatches12,
 * int knn=1 )
 *
 * @brief	Cross check matching.
 *
 * @date	2012-11-27
 *
 * @param [in,out]	descriptorMatcher	The descriptor matcher.
 * @param	descriptors1			 	The first descriptors.
 * @param	descriptors2			 	The second descriptors.
 * @param [in,out]	filteredMatches12	The second filtered matches 1.
 * @param	knn						 	(optional) the knn.
 */
static void crossCheckMatching( Ptr<DescriptorMatcher>& descriptorMatcher,
                         const Mat& descriptors1, const Mat& descriptors2,
                         vector<DMatch>& filteredMatches12, int knn=1 )
{
    filteredMatches12.clear();
    vector<vector<DMatch> > matches12, matches21;
    descriptorMatcher->knnMatch( descriptors1, descriptors2, matches12, knn );
    descriptorMatcher->knnMatch( descriptors2, descriptors1, matches21, knn );
    for( size_t m = 0; m < matches12.size(); m++ )
    {
        bool findCrossCheck = false;
        for( size_t fk = 0; fk < matches12[m].size(); fk++ )
        {
            DMatch forward = matches12[m][fk];
            for( size_t bk = 0; bk < matches21[forward.trainIdx].size(); bk++ )
            {
                DMatch backward = matches21[forward.trainIdx][bk];
                if( backward.trainIdx == forward.queryIdx )
                {
                    filteredMatches12.push_back(forward);
                    findCrossCheck = true;
                    break;
                }
            }
            if( findCrossCheck ) break;
        }
    }
}
/**
 * @fn	static void warpPerspectiveRand( const Mat& src, Mat& dst, Mat& H, RNG& rng )
 *
 * @brief	Warp perspective random.
 *
 * @date	2012-11-27
 *
 * @param	src		   	Source for the.
 * @param [in,out]	dst	Destination for the.
 * @param [in,out]	H  	The Mat& to process.
 * @param [in,out]	rng	The random number generator.
 */
static void warpPerspectiveRand( const Mat& src, Mat& dst, Mat& H, RNG& rng )
{
    /// --Build a random 3x3 perspective transform close to identity
    H.create(3, 3, CV_32FC1);
    H.at<float>(0,0) = rng.uniform( 0.8f, 1.2f);
    H.at<float>(0,1) = rng.uniform(-0.1f, 0.1f);
    H.at<float>(0,2) = rng.uniform(-0.1f, 0.1f)*src.cols;
    H.at<float>(1,0) = rng.uniform(-0.1f, 0.1f);
    H.at<float>(1,1) = rng.uniform( 0.8f, 1.2f);
    H.at<float>(1,2) = rng.uniform(-0.1f, 0.1f)*src.rows;
    H.at<float>(2,0) = rng.uniform( -1e-4f, 1e-4f);
    H.at<float>(2,1) = rng.uniform( -1e-4f, 1e-4f);
    H.at<float>(2,2) = rng.uniform( 0.8f, 1.2f);
    warpPerspective( src, dst, H, src.size() );
}
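/**
 * @fn	static void doIteration( ... )
 *
 * @brief	Matches the query image (img1) against one reference image (img2):
 * 			detects SIFT keypoints and OpponentSIFT descriptors in img2, filters the
 * 			matches against descriptors1 (cross-check or simple matching), optionally
 * 			estimates a RANSAC homography, and updates GOOD_MATCHES / GLOBAL_Copy_file
 * 			when at least 215 filtered matches are found.
 */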
static void doIteration( const Mat& img1, Mat& img2, bool isWarpPerspective,
                  vector<KeyPoint>& keypoints1, const Mat& descriptors1,
                  Ptr<FeatureDetector>& detector, Ptr<DescriptorExtractor>& descriptorExtractor,
                  Ptr<DescriptorMatcher>& descriptorMatcher, int matcherFilter, bool eval,
                  double ransacReprojThreshold, RNG& rng ) {
					  assert( !img1.empty() );
					  Mat H12;
					  if( isWarpPerspective )
						  warpPerspectiveRand(img1, img2, H12, rng );
					  else
						  assert( !img2.empty()/* && img2.cols==img1.cols && img2.rows==img1.rows*/ );
					  /// --Extracting keypoints from second image...
					  cv::initModule_nonfree(); /// --Load SIFT/SURF etc.
					  Ptr<Feature2D> sift = Algorithm::create<Feature2D>("Feature2D.SIFT");
					  FileStorage fs("findMatch.xml", FileStorage::READ);
					  if( fs.isOpened() ) { /// --if we have file with parameters, read them
						  sift->read(fs["findMatch"]);
						  fs.release(); }
					  Mat descriptors2;
					  vector<KeyPoint> keypoints2;
					  (*sift)(img2, noArray(), keypoints2, descriptors2);
					  /// --Computing descriptors for keypoints from second image
					  descriptorExtractor->compute( img2, keypoints2, descriptors2 );
					  if( !H12.empty() && eval ) {
						  /// --Evaluate feature detector...
						  float repeatability; /// --repeatability
						  int correspCount;    /// --correspCount
						  evaluateFeatureDetector( img1, img2, H12, &keypoints1, &keypoints2, repeatability, correspCount ); }
					  /// --Matching descriptors...
					  vector<DMatch> filteredMatches;
					  switch( matcherFilter ) {
						  case CROSS_CHECK_FILTER :
							  crossCheckMatching( descriptorMatcher, descriptors1, descriptors2, filteredMatches, 1 );
							     break;
								 default :
									 simpleMatching( descriptorMatcher, descriptors1, descriptors2, filteredMatches ); }
					  if( !H12.empty() && eval ) {
						  ///Evaluate descriptor matcher...
						  vector<Point2f> curve;
						  Ptr<GenericDescriptorMatcher> gdm = new VectorDescriptorMatcher( descriptorExtractor, descriptorMatcher );
						  evaluateGenericDescriptorMatcher( img1, img2, H12, keypoints1, keypoints2, 0, 0, curve, gdm );
						  Point2f firstPoint = *curve.begin();
						  Point2f lastPoint = *curve.rbegin();
						  int prevPointIndex = -1;
						  /// --1-precision = firstPoint.x & recall = firstPoint.y
						  for( float l_p = 0; l_p <= 1 + FLT_EPSILON; l_p+=0.05f ) {
							  int nearest = getNearestPoint( curve, l_p );
							  if( nearest >= 0 ) {
								  Point2f curPoint = curve[nearest];
								  if( curPoint.x > firstPoint.x && curPoint.x < lastPoint.x && nearest != prevPointIndex ) {
									  /// --1-precision = curPoint.x & recall = curPoint.y
									  prevPointIndex = nearest; }
								  }
							  }
					  }
					  /// --1-precision = lastPoint.x & recall = lastPoint.y
					  vector<int> queryIdxs( filteredMatches.size() ), trainIdxs( filteredMatches.size() );
					  for( size_t i = 0; i < filteredMatches.size(); i++ ) {
						  queryIdxs[i] = filteredMatches[i].queryIdx;
						  trainIdxs[i] = filteredMatches[i].trainIdx; }
					  if( !isWarpPerspective && ransacReprojThreshold >= 0 ) {
						  /// --Computing homography (RANSAC)...
						  vector<Point2f> points1; KeyPoint::convert(keypoints1, points1, queryIdxs);
						  vector<Point2f> points2; KeyPoint::convert(keypoints2, points2, trainIdxs);
						  H12 = findHomography( Mat(points1), Mat(points2), CV_RANSAC, ransacReprojThreshold ); }
					  //cout << "Function value of filteredMatches Size: " << filteredMatches.size(), prod = 0 and dev = 1 << endl;
					  if( ( filteredMatches.size() >= 215 )  && ( PROD_OR_DEV_MODE == 1 ) ) {
						  Mat drawImg;
						  if( !H12.empty() ) { /// --filter outliers
							  vector<char> matchesMask( filteredMatches.size(), 0 );
							  vector<Point2f> points1; KeyPoint::convert(keypoints1, points1, queryIdxs);
							  vector<Point2f> points2; KeyPoint::convert(keypoints2, points2, trainIdxs);
							  Mat points1t; perspectiveTransform(Mat(points1), points1t, H12);
							  double maxInlierDist = ransacReprojThreshold < 0 ? 2 : ransacReprojThreshold;
							  for( size_t i1 = 0; i1 < points1.size(); i1++ ) {
								  if( norm(points2[i1] - points1t.at<Point2f>((int)i1,0)) <= maxInlierDist ) /// --inlier
									  matchesMask[i1] = 1; }
							  /// --draw without inliers and outliers
							  drawMatches( img1, keypoints1, img2, keypoints2, filteredMatches, drawImg, CV_RGB(0, 255, 0), CV_RGB(0, 0, 255),
								  matchesMask, DrawMatchesFlags::DRAW_RICH_KEYPOINTS | DrawMatchesFlags::NOT_DRAW_SINGLE_POINTS );
							  /// --Write image
							  imwrite( "C:/inetpub/wwwroot/data_vice_lab/user/temp/image_0.jpg", drawImg );
							  /// --Globalvariable add + 1 to GOOD_MATCHES
							  ChangeMyGlobal();
							  if (GOOD_MATCHES == 1) {
								  /// --Break program after drawing matches
								  exit(0); }
							  }
						  }
					  if( ( filteredMatches.size() >= 215 )  && ( PROD_OR_DEV_MODE == 0 && GOOD_MATCHES <= 4 ) ) {
						  bool firstMatch = ( GOOD_MATCHES == 0 );
						  /// --Globalvariable add + 1 to GOOD_MATCHES
						  ChangeMyGlobal();
						  GLOBAL_Copy_file = 1;
						  /// --Write success.txt on the first good match only; closing the stream explicitly
						  /// --is not necessary because the ofstream destructor closes the open file by default
						  if( firstMatch )
							  successfile(); }
					  }
/**
 * @fn	int main(int argc, char** argv)
 *
 * @brief	Main entry-point for this application.
 *
 * @date	2012-11-27
 *
 * @param	argc	Number of command-line arguments.
 * @param	argv	Array of command-line argument strings.
 *
 * @return	Exit-code for the process - 0 for success, else an error code.
 */
int main(int ac, char** av)
{
    if(ac != 4)
    {
        cout << "Usage: DVLab <image file> <success file path> <session image dir>" << endl;
        return -1;
    }
    else
    {
    	argc = ac;
    	argv = av;
    	session_id_dir = argv[2];
    	session_img_dir = argv[3];
    	//std::cout << "session_id_dir: " << session_id_dir << std::endl << std::flush;
    	//std::cout << "defaultDetectorType: " << defaultDetectorType << std::endl << std::flush;
    	//std::cout << "defaultDescriptorType: " << defaultDescriptorType << std::endl << std::flush;
    	//std::cout << "defaultCheckFilter: " << defaultCheckFilter << std::endl << std::flush;
    	//std::cout << "defaultransacReprojThreshold: " << defaultransacReprojThreshold << std::endl << std::flush;
    	//std::cout << "ransacReprojThreshold: " << ransacReprojThreshold << std::endl << std::flush;
    }
	/// --Start time stamp
	std::clock_t start;
    start = std::clock();
	initModule_nonfree(); /// --to load SURF/SIFT etc.
	///-Creating detector, descriptor extractor and descriptor matcher
	Ptr<Feature2D> sift = Algorithm::create<Feature2D>("Feature2D.SIFT");
	FileStorage fs("findMatch.xml", FileStorage::READ);
	if( fs.isOpened() ) /// --if we have file with parameters, read them
		{
			sift->read(fs["findMatch"]);
			fs.release();
		}
	else /// --Else modify the parameters and store them; user can later edit the file to use different parameters
		{
			sift->set("contrastThreshold", 0.01f); /// --lower the contrast threshold, compared to the default value
			fs.open("findMatch.xml", FileStorage::WRITE); /// --the READ-mode FileStorage was never opened, so reopen it for writing
			{
				WriteStructContext ws(fs, "findMatch", CV_NODE_MAP);
				sift->write(fs);
			}
			fs.release();
		}
	
	bool isWarpPerspective = argc == 7;
    double ransacReprojThreshold = -1;
    if( !isWarpPerspective )
		ransacReprojThreshold = atof(defaultransacReprojThreshold);
    Ptr<FeatureDetector> detector = FeatureDetector::create( defaultDetectorType );
    Ptr<DescriptorExtractor> descriptorExtractor = OpponentColorDescriptorExtractor::create( defaultDescriptorType );
    Ptr<DescriptorMatcher> descriptorMatcher = DescriptorMatcher::create( defaultMatcherType );
    int matcherFilterType = getMatcherFilterType( defaultCheckFilter );
    bool eval = !isWarpPerspective ? false : (atoi(argv[1]) == 0 ? false : true);
	if( detector.empty() || descriptorExtractor.empty() || descriptorMatcher.empty() )
		{
			//cout << "Can not create detector or descriptor exstractor or descriptor matcher of given types" << endl;
			return -1;
			}
	vector<string> images_filenames;
	string images_folder (defaultimages_folder);
	/// --Create & load the reference image
	//Mat reference = imread("C:/inetpub/wwwroot/data_vice_lab/user/reference.jpg", CV_LOAD_IMAGE_COLOR);
    /// --Load the image
	Mat img1 = imread( argv[1], CV_LOAD_IMAGE_COLOR );
	if (!img1.data)
	  return -1;
	session_id_dir = argv[2];
	session_img_dir = argv[3];
	
	cv::Mat mask;
    cv::cvtColor(img1, mask, CV_BGR2GRAY);
    cv::threshold(mask, mask, 220, 255, CV_THRESH_BINARY);
    std::vector<std::vector<cv::Point> > contours;
    cv::findContours(mask.clone(), contours, CV_RETR_EXTERNAL, CV_CHAIN_APPROX_SIMPLE);
    for (int unsigned i = 0; i < contours.size(); i++)
		{
			cv::Rect r = cv::boundingRect(contours[i]);
			if (std::abs(1.0 - ((double)img1.cols / r.width)) > 0.2)
				cv::drawContours(mask, contours, i, CV_RGB(0,0,0), CV_FILLED); }
    Mat descriptors1;
	//cv::inpaint(img1, mask, img1, 1, cv::INPAINT_TELEA);
	
   	vector<KeyPoint> keypoints1;
    (*sift)(img1, noArray(), keypoints1, descriptors1);
	images_folder = defaultimages_folder;
	readDirectory( images_folder, images_filenames );
	/// --Check that we got an image
	if( images_filenames.empty() )
		{
			/// --Can not read directory images
			return -1;
		}
	/// --Loop over the reference images and run the recognize routine (serial here; could be parallelised with tbb::parallel_for)
	for( size_t i = 0; i < images_filenames.size(); i++ )
		{
			/// --readDirectory() already prefixes the folder name
			string initialFilePath = images_filenames[i];
			Mat img2 = imread(initialFilePath, CV_LOAD_IMAGE_COLOR);
			/// --Skip directory entries and files that could not be read as images
			if(img2.empty())
				{
					continue; }
			if(img2.depth() != img1.depth())
				{
					continue; }
			if(img2.channels() != img1.channels())
				{
					continue; }
	                    			
			/// --Start duration time in seconds
			duration = ( std::clock() - start ) / (double) CLOCKS_PER_SEC;
			/// --Computing descriptors for keypoints from first image... 
	        descriptorExtractor->compute( img1, keypoints1, descriptors1 );
	 	 
	        RNG rng = theRNG(); 
	 
            doIteration( img1, img2, isWarpPerspective, keypoints1, descriptors1,
				detector, descriptorExtractor, descriptorMatcher, matcherFilterType,
				eval, ransacReprojThreshold, rng );
			
			if (GLOBAL_Copy_file == 1 )
				{
					ChangeMyGlobalCopy();
					boost::filesystem::path copysrc(initialFilePath);
					boost::filesystem::path copydst(session_img_dir);
					copydst = copydst/copysrc.filename();
					boost::filesystem::copy_file(copysrc, copydst);
					//std::cout << "outputFilePath: " << copydst << std::endl << std::flush;
					//std::cout << "File to copy: " << copysrc << std::endl << std::flush;
			        //std::cout << "Copied file: " << copydst << std::endl << std::flush;
			        //std::cout << "Create Success.txt: " << session_id_dir << std::endl << std::flush;
				}
			/// --Exit the loop when the duration end time is exceeded or after two good matches.
			if(duration > duration_end_time || GOOD_MATCHES == 2)
			{
				return EXIT_SUCCESS;
			}
			else continue;
	}
	return EXIT_SUCCESS;
}