CodeProject, DotNet

Suspend or resume redrawing of Windows controls

Recently, I was facing an issue with a third-party image viewer: I needed to apply multiple operations before the images were redrawn.
The third-party control did not inherently allow suspending and resuming the painting of images. After searching for various solutions, I found that this can be achieved with the Windows message WM_SETREDRAW, which an application sends to Windows to allow changes in a control to be redrawn, or to prevent changes in that control from being redrawn.

public class UIDrawController
{
    // WM_SETREDRAW (0x000B) tells a window to allow or suppress redrawing.
    private const int WM_SETREDRAW = 11;

    [DllImport("user32.dll")]
    public static extern int SendMessage(IntPtr hWnd, Int32 wMsg, bool wParam, Int32 lParam);

    public static void SuspendDrawing(Control ctrl)
    {
        SendMessage(ctrl.Handle, WM_SETREDRAW, false, 0);
    }

    public static void ResumeDrawing(Control ctrl)
    {
        SendMessage(ctrl.Handle, WM_SETREDRAW, true, 0);
        ctrl.Refresh();
    }
}

Example Usage:


public class ImageViewer : UserControl
{
    public void LoadDocument(string filename)
    {
        UIDrawController.SuspendDrawing(this);

        OpenImage(filename);
        FitToWidth();
        Rotate90();
        ApplyImageFilters();

        UIDrawController.ResumeDrawing(this);
    }
}
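One caveat with the example above: if any of the operations throws, the control is left with painting suspended. A defensive variant (a sketch; OpenImage, FitToWidth, Rotate90 and ApplyImageFilters are the same hypothetical operations as above) wraps the work in try/finally:

```csharp
public void LoadDocument(string filename)
{
    UIDrawController.SuspendDrawing(this);
    try
    {
        OpenImage(filename);
        FitToWidth();
        Rotate90();
        ApplyImageFilters();
    }
    finally
    {
        // Painting is resumed even if one of the operations above throws,
        // so the control is never left permanently frozen.
        UIDrawController.ResumeDrawing(this);
    }
}
```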

CodeProject, Database, sql server

Change Tracking example - SQL Server

If there is a requirement to frequently get incremental or changed data from a database without putting a heavy load on database objects, then the Change Tracking mechanism of SQL Server can be an out-of-the-box solution. Normally, developers have to build a custom implementation to achieve change tracking behavior, typically using triggers, timestamp columns, or additional tables.

The following are step-by-step instructions to enable and use the change tracking feature in SQL Server.

Step 1: Check that the database compatibility level is 90 or greater. If it is lower than 90, change tracking will not work.

SELECT compatibility_level
FROM sys.databases WHERE name = '<database_name>';

Step 2: Enable snapshot isolation on the database. This ensures that change tracking information is consistent.

ALTER DATABASE <database_name> SET ALLOW_SNAPSHOT_ISOLATION ON;

Step 3: Enable change tracking on the database.

ALTER DATABASE <database_name> SET CHANGE_TRACKING = ON (CHANGE_RETENTION = 2 DAYS, AUTO_CLEANUP = ON);

CHANGE_RETENTION: specifies the time period for which change tracking information is kept.
AUTO_CLEANUP: enables or disables the cleanup task that removes old change tracking information.

Step 4: Enable change tracking on a table.

ALTER TABLE <table_name>
ENABLE CHANGE_TRACKING
WITH (TRACK_COLUMNS_UPDATED = OFF);

TRACK_COLUMNS_UPDATED: setting this to "ON" makes the SQL Server engine store extra information about which tracked columns were updated. "OFF" is the default, avoiding the extra overhead of maintaining column information.

Step 5: Example of getting changed data.

Below is an example stored procedure that returns only the changed data from a table. The application passes @lastVersion = 0 the first time; afterwards, it keeps the last returned version in its cache and passes that stored version on each subsequent call.


CREATE PROCEDURE [dbo].[GetIncrementalChanges]
    @lastVersion BIGINT = 0 OUTPUT
AS
BEGIN
    DECLARE @curVersion BIGINT = CHANGE_TRACKING_CURRENT_VERSION();

    IF @lastVersion = 0
    BEGIN
        -- First call: return all rows
        SELECT a.*
        FROM <table_name> a;
    END
    ELSE
    BEGIN
        -- Subsequent calls: return only rows changed since @lastVersion
        SELECT a.*
        FROM <table_name> a
        INNER JOIN CHANGETABLE(CHANGES <table_name>, @lastVersion) ct ON a.Id = ct.Id;
    END

    SET @lastVersion = @curVersion;

END
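On the application side, the caching pattern described above (pass 0 on the first call, then cache the version that comes back) can be sketched with ADO.NET. This is an illustration only; the ChangeTrackingReader class name and the connection handling are my own assumptions, not part of the original sample:

```csharp
using System;
using System.Data;
using System.Data.SqlClient;

public class ChangeTrackingReader
{
    // Cached change tracking version; 0 means "first call, return everything".
    private long _lastVersion = 0;

    public DataTable GetIncrementalChanges(string connectionString)
    {
        using (var conn = new SqlConnection(connectionString))
        using (var cmd = new SqlCommand("dbo.GetIncrementalChanges", conn))
        {
            cmd.CommandType = CommandType.StoredProcedure;
            var versionParam = cmd.Parameters.Add("@lastVersion", SqlDbType.BigInt);
            versionParam.Direction = ParameterDirection.InputOutput;
            versionParam.Value = _lastVersion;

            var table = new DataTable();
            conn.Open();
            using (var reader = cmd.ExecuteReader())
                table.Load(reader);

            // Cache the version returned by the procedure for the next call,
            // so subsequent calls fetch only the delta.
            _lastVersion = (long)versionParam.Value;
            return table;
        }
    }
}
```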

Disable Change Tracking

1. Before disabling change tracking on a database, change tracking must first be disabled on all of its tables.

Testing SQL statements

You can find a working example in the attached SQL file or in the code below:



SET NOCOUNT ON
go
PRINT 'Creating test database'
Go
CREATE DATABASE testDb
GO
USE testDb
go
PRINT 'Get compatibility level of db'
GO

SELECT compatibility_level
FROM sys.databases WHERE name = 'testDb';

GO
PRINT 'Setting db isolation level'
ALTER DATABASE testDb SET ALLOW_SNAPSHOT_ISOLATION ON;

GO
PRINT 'Creating table testchange'
GO
CREATE TABLE dbo.TestChange
(
    Id INT NOT NULL,
    NAME VARCHAR(20) NOT NULL,
    CONSTRAINT [PK_ID] PRIMARY KEY CLUSTERED ( [Id] ASC )
);

GO
PRINT 'Inserting initial values'
GO

INSERT INTO dbo.TestChange
    ( Id, NAME )
VALUES ( 1, 'ABC' ),
       ( 2, 'XXX' );
GO

PRINT 'See current change tracking version before change tracking is enabled';

SELECT [change tracking version before enabling] = CHANGE_TRACKING_CURRENT_VERSION();
GO
PRINT 'Enable Change Tracking on database';

ALTER DATABASE testDb SET CHANGE_TRACKING = ON (CHANGE_RETENTION = 2 DAYS,AUTO_CLEANUP = ON)

GO
PRINT 'Enable Change Tracking on testchange table';
GO
ALTER TABLE dbo.TestChange
ENABLE CHANGE_TRACKING
WITH (TRACK_COLUMNS_UPDATED = OFF);

GO

SELECT [change tracking version after Enabling] = CHANGE_TRACKING_CURRENT_VERSION();

GO
CREATE PROCEDURE [dbo].[GetIncrementalChanges]
@lastVersion BIGINT = 0 OUTPUT
AS
BEGIN
DECLARE @curVersion BIGINT = CHANGE_TRACKING_CURRENT_VERSION()
IF @lastVersion = 0
BEGIN
SELECT
a.*
FROM TestChange a
END
ELSE
BEGIN
SELECT
a.*
FROM TestChange a
INNER JOIN CHANGETABLE(CHANGES dbo.TestChange, @lastVersion) ct ON A.Id= ct.Id
END

SET @lastVersion = @curVersion

END
GO

DECLARE @lastVersion1 BIGINT =0

EXECUTE dbo.GetIncrementalChanges @lastVersion = @lastVersion1 OUTPUT -- bigint

PRINT 'Get Last Version'
SELECT [Last Version] = @lastVersion1

PRINT 'insert new rows in table'

INSERT INTO dbo.TestChange
    ( Id, NAME )
VALUES ( 3, 'YYYY' ),
       ( 4, 'ZZZ' );

EXECUTE dbo.GetIncrementalChanges @lastVersion = @lastVersion1 OUTPUT -- bigint

PRINT 'Get latest Version'
SELECT @lastVersion1

INSERT INTO dbo.TestChange
    ( Id, NAME )
VALUES ( 5, 'KKKK' ),
       ( 6, 'LLLL' );

EXECUTE dbo.GetIncrementalChanges @lastVersion = @lastVersion1 OUTPUT -- bigint

PRINT 'Get latest Version'
SELECT @lastVersion1

GO
PRINT 'Disable Change Tracking on table'
ALTER TABLE dbo.TestChange
DISABLE CHANGE_TRACKING
GO
PRINT 'Current change tracking version after disabling';
SELECT [change tracking version after disabling] = CHANGE_TRACKING_CURRENT_VERSION()
GO
PRINT 'Disable Change Tracking on Database'

ALTER DATABASE testDb SET CHANGE_TRACKING = OFF

GO

PRINT 'test complete, dropping database'
USE master
Go
DROP DATABASE testDb

C#, CodeProject, DotNet, FIX

How to place an order via a FIX message?

Purpose

The purpose of this post is to walk through the implementation of placing an order via the FIX message channel.

In my previous posts, I have shown basic implementations of FIX messaging, such as establishing a connection with a broker/exchange FIX endpoint and consuming market data feeds.

You can read my previous posts about the topics mentioned above.

This post will cover following topics:

  • Different types of orders
  • Order validity
  • Order workflow
  • Placing an order
  • What is an Execution Report?
  • Processing Execution Reports

Order Types:

  • Market Order: the basic type of order, wherein the trader buys or sells at the market price without specifying a desired buy or sell price.
  • Limit Order: an order to buy or sell at a specified price. A limit buy order can execute at the specified buy price or lower; a limit sell order can execute at the specified price or higher. For this order type, the trader has to specify the price before placing the order.
  • Stop Order: similar to a market order, but it executes only when a specific price triggers it. For example, if the market price is 150 and a trader places a buy stop order with a price of 160, then when the price moves to 160 or above, the order becomes a market order and executes at the best available price. This order type can be used for take-profit and stop-loss.
  • Stop Limit Order: a combination of a stop order and a limit order. Like a stop order, it is only processed if the market reaches a specific price, but it is executed as a limit order, so it will only get filled at the chosen or a better price. For example, if the current price is 150, a trader might place a buy stop limit order with a price of 160. If the market trades at 160 or above, the order executes as a limit order to get filled at 160. However, this order might not fill if there is no depth available.

Detailed descriptions of each order type can be read on Investopedia.

Order Validity

In addition to specifying the order type, traders can also specify the validity of an order, i.e. for how long the order remains valid. The order is cancelled after it expires.

Traders can specify the following validities for an order:

  • Day: the order is only valid till the end of the market session.
  • GTC (Good till Cancel): the order is valid until the trader manually cancels it. However, brokers might automatically cancel orders older than a maximum age, typically 30, 60 or 90 days.
  • GTD (Good till Date): the order is valid till the end of the market session on the date specified in the order.
  • IOC (Immediate or Cancel): the order must be filled, at least partially, immediately upon placement; any unfilled remainder is cancelled.
  • FOK (Fill or Kill): the order is either fully filled or cancelled. This type of order does not allow partial fills.
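In FIX 4.4, order validity is carried in the TimeInForce field (tag 59). As a sketch (assuming the QuickFIX/n field constants, which map to the standard FIX character values), the validities above can be translated like this; the OrderValidity enum and translator class are illustrative names, not part of any library:

```csharp
using System;

public enum OrderValidity { Day, GTC, GTD, IOC, FOK }

public static class TimeInForceTranslator
{
    // Maps the order validities to FIX TimeInForce (tag 59) values,
    // using the constants generated by QuickFIX/n from the FIX 4.4 dictionary.
    public static QuickFix.Fields.TimeInForce ToField(OrderValidity validity)
    {
        switch (validity)
        {
            case OrderValidity.Day: return new QuickFix.Fields.TimeInForce(QuickFix.Fields.TimeInForce.DAY);                 // '0'
            case OrderValidity.GTC: return new QuickFix.Fields.TimeInForce(QuickFix.Fields.TimeInForce.GOOD_TILL_CANCEL);    // '1'
            case OrderValidity.IOC: return new QuickFix.Fields.TimeInForce(QuickFix.Fields.TimeInForce.IMMEDIATE_OR_CANCEL); // '3'
            case OrderValidity.FOK: return new QuickFix.Fields.TimeInForce(QuickFix.Fields.TimeInForce.FILL_OR_KILL);        // '4'
            case OrderValidity.GTD: return new QuickFix.Fields.TimeInForce(QuickFix.Fields.TimeInForce.GOOD_TILL_DATE);      // '6'
            default: throw new ArgumentOutOfRangeException(nameof(validity));
        }
    }
}
```

Note that a GTD order additionally requires an ExpireDate or ExpireTime field on the order message.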

FIX Order workflow

OrderFlow


What is an Execution Report?

It is a FIX message sent by the broker side in response to an order request. The broker relays the status of the order, and there can be multiple execution reports for a single order. Execution reports carry the following statuses and information:

Order Status

Order Status Description
Done for Day: Order not filled, or only partially filled; no further executions are pending for the trading day.
Filled: Order completely filled; no remaining quantity.
Suspended: Order has been placed in a suspended state at the request of the client.
Canceled: Canceled order, with or without executions.
Expired: Order has been canceled in the broker's system due to its order validity (Time In Force) instructions.
Partially Filled: Outstanding order with executions and remaining quantity.
Replaced: Replaced order, with or without executions.
New: Outstanding order with no executions.
Rejected: Order has been rejected by the broker. NOTE: an order can be rejected subsequent to order acknowledgment, i.e. an order can pass from New to Rejected status.
Pending New: Order has been received by the broker's system but not yet accepted for execution. An execution message with this status will only be sent in response to a Status Request message.
Accepted: Order has been received and is being evaluated for pricing.

Important Fields 

Field Description
ClOrdID: Unique key of the order, generated by the client.
OrderID: Unique key generated by the broker/exchange for the order.
ExecID: Unique key of each execution report message.
Account: Account number on which the order was placed.
OrdType: Type of order, e.g. Market, Limit.
Price: Price specified for the buy or sell order.
Side: Side of the order (Buy or Sell).
Symbol: Symbol name of the instrument on which the order was placed.
SecurityID: Instrument identifier.
LastPx: Price at which the last fill executed.
LastQty: Quantity traded in the last fill.
LeavesQty: Remaining quantity of the order; zero when the order is fully filled.
CumQty: Total traded quantity so far.

Implementation

Technology/Tools Used:

To save some time, I downloaded the FIX UI Demo code from the QuickFIX/n GitHub repository. This sample already has connection and order routines. I will make further changes to show various order-workflow use cases with the FIX 4.4 specification. I have also added a FIX acceptor component to process FIX messages locally.

Connect with FIX Acceptor

Click the Connect button. The FIX initiator sends a Logon message, which the FIX acceptor receives and acknowledges by replying with its own Logon message.

Connect

The application is ready to place an order once the connection is established.

Placing Order

FIX 4.4 supports the "New Order - Single <D>" message for placing a single order from the client. You can find the standard specification of this message in any FIX dictionary available online; however, the message specification might differ from broker to broker.

You can see standard FIX message/tag specifications on FIXIMATE.

OrderTicket


Create an object of the "NewOrderSingle" class and set values on its properties:

Code:

// hard-coded fields
QuickFix.Fields.HandlInst fHandlInst = new QuickFix.Fields.HandlInst(QuickFix.Fields.HandlInst.AUTOMATED_EXECUTION_ORDER_PRIVATE);
// from params
QuickFix.Fields.OrdType fOrdType = FixEnumTranslator.ToField(orderType);
QuickFix.Fields.Side fSide = FixEnumTranslator.ToField(side);
QuickFix.Fields.Symbol fSymbol = new QuickFix.Fields.Symbol(symbol);
QuickFix.Fields.TransactTime fTransactTime = new QuickFix.Fields.TransactTime(DateTime.Now);
QuickFix.Fields.ClOrdID fClOrdID = GenerateClOrdID();
QuickFix.FIX44.NewOrderSingle nos = new QuickFix.FIX44.NewOrderSingle(fClOrdID, fSymbol, fSide, fTransactTime, fOrdType);
nos.HandlInst = fHandlInst;
nos.OrderQty = new QuickFix.Fields.OrderQty(orderQty);
nos.TimeInForce = FixEnumTranslator.ToField(tif);
if (orderType == OrderType.Limit)
    nos.Price = new QuickFix.Fields.Price(price);
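Once the message is populated, it is handed to the engine for delivery on the active session. A minimal sketch (sessionID is assumed to be the SessionID captured in the OnLogon callback):

```csharp
// Queues the NewOrderSingle for transmission on the given session.
// SendToTarget returns false if the message could not be queued.
bool sent = QuickFix.Session.SendToTarget(nos, sessionID);
if (!sent)
    Trace.WriteLine("Order could not be sent - session not available");
```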

Process Execution Report

ExecutionReport


public void HandleExecutionReport(QuickFix.FIX44.ExecutionReport msg)
{
    string execId = msg.ExecID.Obj;
    string execType = FixEnumTranslator.Translate(msg.ExecType);
    Trace.WriteLine("EVM: Handling ExecutionReport: " + execId + " / " + execType);

    ExecutionRecord exRec = new ExecutionRecord(
        msg.ExecID.Obj,
        msg.OrderID.Obj,
        string.Empty,
        execType,
        msg.Symbol.Obj,
        FixEnumTranslator.Translate(msg.Side));

    exRec.LeavesQty = msg.LeavesQty.getValue();
    exRec.TotalFilledQty = msg.CumQty.getValue();
    exRec.LastQty = msg.LastQty.getValue();
}

FIX Acceptor

This is the server-side component which processes messages from FIX clients and sends responses back to them.

Executor Class

This class receives callbacks on its various OnMessage methods when FIX messages arrive from the FIX client.

public void OnMessage(QuickFix.FIX44.NewOrderSingle n, SessionID s)

This method is called every time a "NewOrderSingle" message is received.

I am simulating the different statuses of an execution report. I added a one-second sleep between each status change, so the transitions can be clearly seen in the UI.

public void OnMessage(QuickFix.FIX44.NewOrderSingle n, SessionID s)
{
    Symbol symbol = n.Symbol;
    Side side = n.Side;
    OrdType ordType = n.OrdType;
    OrderQty orderQty = n.OrderQty;
    Price price = new Price(DEFAULT_MARKET_PRICE);
    ClOrdID clOrdID = n.ClOrdID;

    switch (ordType.getValue())
    {
        case OrdType.LIMIT:
            price = n.Price;
            if (price.Obj == 0)
                throw new IncorrectTagValue(price.Tag);
            break;
        case OrdType.MARKET: break;
        default: throw new IncorrectTagValue(ordType.Tag);
    }

    // Send status New
    SendExecution(s, OrdStatus.NEW, ExecType.NEW, n, n.OrderQty.getValue(), 0, 0, 0, 0);
    Thread.Sleep(1000);

    // Send status Partially Filled (first quarter of the quantity)
    decimal filledQty = Math.Abs(Math.Round(n.OrderQty.getValue() / 4, 2));
    decimal cumQty = filledQty;
    SendExecution(s, OrdStatus.PARTIALLY_FILLED, ExecType.PARTIAL_FILL, n, n.OrderQty.getValue() - cumQty, cumQty, price.getValue(), filledQty, price.getValue());
    Thread.Sleep(1000);

    // Send status Partially Filled (second quarter)
    filledQty = Math.Abs(Math.Round(n.OrderQty.getValue() / 4, 2));
    cumQty += filledQty;
    SendExecution(s, OrdStatus.PARTIALLY_FILLED, ExecType.PARTIAL_FILL, n, n.OrderQty.getValue() - cumQty, cumQty, price.getValue(), filledQty, price.getValue());
    Thread.Sleep(1000);

    // Send status Fully Filled (remaining quantity)
    filledQty = n.OrderQty.getValue() - cumQty;
    cumQty += filledQty;
    SendExecution(s, OrdStatus.FILLED, ExecType.FILL, n, 0, cumQty, price.getValue(), filledQty, price.getValue());
}

private void SendExecution(SessionID s, char ordStatus, char execType, QuickFix.FIX44.NewOrderSingle n, decimal leavesQty, decimal cumQty, decimal avgPx, decimal lastQty, decimal lastPrice)
{
    QuickFix.FIX44.ExecutionReport exReport = new QuickFix.FIX44.ExecutionReport(
        new OrderID(GenOrderID()),
        new ExecID(GenExecID()),
        new ExecType(execType),
        new OrdStatus(ordStatus),
        n.Symbol, //shouldn't be here?
        n.Side,
        new LeavesQty(leavesQty),
        new CumQty(cumQty),
        new AvgPx(avgPx));

    exReport.ClOrdID = new ClOrdID(n.ClOrdID.getValue());
    exReport.Set(new LastQty(lastQty));
    exReport.Set(new LastPx(lastPrice));

    if (n.IsSetAccount())
        exReport.SetField(n.Account);

    try
    {
        Session.SendToTarget(exReport, s);
    }
    catch (SessionNotFound ex)
    {
        Console.WriteLine("==session not found exception!==");
        Console.WriteLine(ex.ToString());
    }
    catch (Exception ex)
    {
        Console.WriteLine(ex.ToString());
    }
}

This post exhibits a standard way of handling FIX messages; however, implementations can vary from broker to broker.

I hope this post gives you a good understanding of how an order can be placed via a FIX channel. I will cover the order cancel and replace scenarios in the next post.

Source Code

The source code can be downloaded from the GitHub repository. Executable files are placed in a separate folder.

CodeProject, Database, NoSql

NoSql (it's "Not only SQL", not "No to SQL")

This is my first post on NoSQL database technologies. There have been drastic changes in database technologies over the past few years. Growth in user requests, demand for highly available applications, and real-time performance have forced a rethink of database technologies. We have traditional RDBMS, in-memory, and NoSQL databases available in the market to suit particular business needs. Here I'll illustrate some key aspects of NoSQL databases: what NoSQL is, why we need it, and the advantages and disadvantages of NoSQL.

What is NoSql Movement?

It's a different way of thinking about database technologies, unlike relational database management systems where we have tables, procedures, functions, and normalization concepts. NoSQL databases are not built primarily on tables and don't use SQL for manipulating or querying the database.

NoSQL databases are built for a specific purpose, which means a NoSQL database might not support all the features found in relational databases.

NoSQL databases are characterized by the CAP theorem.

  • Consistency: most applications or services attempt to provide strongly consistent data. Interactions with applications/services are expected to behave transactionally: operations should be atomic (succeed or fail entirely), uncommitted transactions should be isolated from each other, and a transaction, once committed, should be permanent.
  • Availability: load on services/applications keeps increasing, so services should be highly available to users. Every request should succeed.
  • Partition tolerance: your services should provide some amount of fault tolerance in case of a crash, failure, or heavy load. It is important that under these circumstances your services still perform as expected. Partition tolerance is a desirable property of a service: it can serve requests from multiple nodes.

Why NoSql?

Since NoSQL databases are built for specific purposes, they are normally used for huge data sets where performance matters. Relational database systems are hard to scale out for write operations. We can load-balance database servers by replicating data to multiple servers; read operations can then be load-balanced, but write operations need consistency across the servers. Writes can be scaled only by partitioning the data, which also affects reads, as distributed joins are usually slow and hard to implement. We can support growth in users or requests by scaling up relational databases, but that means more hardware, licensing, and cost.

Relational databases are not a good option under heavy simultaneous read and write load, as at Facebook, Google, Amazon, or Twitter.

A NoSQL implementation, on the other hand, can scale out, i.e. distribute the database load across more servers.

clip_image002

Source: Couchbase.com

Common characteristics of NoSQL databases

· Aggregation (supported by column databases): aggregation is used to calculate aggregate values like Count, Max, Avg, and Min. Some NoSQL databases provide an aggregation framework with built-in aggregation of values. The approach in column databases is to store values in columns instead of rows (de-normalized data). This kind of data is mainly used in data analytics and business intelligence. Google's BigTable and Apache's Cassandra support some column-database features.

· Relationships (supported by graph databases): a graph database uses graph structures with nodes, edges, and properties. Every element contains a direct pointer to its adjacent elements, so there is no need to look up indexes or scan the whole data set. Graph databases are mostly used for relational or social data where elements are connected. E.g. Neo4j, BigData, OrientDB.

 

image

Source: Wikipedia

 

· Document based: document databases are considered by many as the next logical step from simple key/value stores to slightly more complex and meaningful data structures, as they at least allow encapsulating key/value pairs in documents. E.g. CouchDB, MongoDB.

Mapping of document-based DBs vs relational DBs

Document-Based Databases Relational Databases
Collection Table
Document Row
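As a minimal illustration of this mapping (using System.Text.Json purely for demonstration; a real document database would use its own driver), the record below is one self-contained document, where a relational design would split it into a row plus a child table:

```csharp
using System;
using System.Text.Json;

class Program
{
    static void Main()
    {
        // One self-contained document: in a relational design, Tags would
        // typically live in a separate child table joined by the Id key.
        var document = new
        {
            Id = 1,
            Name = "ABC",
            Tags = new[] { "urgent", "export" }
        };

        string json = JsonSerializer.Serialize(document);
        Console.WriteLine(json);
        // A document store (e.g. MongoDB, CouchDB) keeps this JSON whole
        // inside a collection, the analogue of a table.
    }
}
```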

 

· Key-value store: values are stored as simple key-value pairs. Values are stored as blobs, and the database doesn't care about the data content. E.g. DynamoDB, LevelDB, RaptorDB.

· Database scale-out: when the load on a database increased, database administrators traditionally scaled up the database by adding hardware and buying bigger databases, instead of scaling out, i.e. distributing the database across multiple nodes/servers to balance the load. With rising transaction rates and availability requirements, and with databases now available on cloud or virtual machines, scaling out is no longer the economic pain it used to be.

On the other hand, NoSQL databases can scale out by distributing across multiple servers. NoSQL databases typically use clusters of cheap commodity servers to manage exploding data and transaction volumes. The result is that the cost per gigabyte or per transaction/second for NoSQL can be many times less than the cost for an RDBMS, allowing you to store and process more data at a much lower price.

Now the question is why scaling out an RDBMS is hard. Traditional databases support ACID properties that guarantee transactions are processed reliably. A transaction can contain write operations for multiple records, so keeping consistency across multiple nodes is a slow and complex process: multiple servers need to communicate back and forth to maintain data integrity and synchronize transactions while preventing deadlock. NoSQL databases, on the other hand, support single-record transactions, and data is partitioned across multiple nodes to process transactions fast.

· Auto sharding (elasticity): NoSQL databases support automatic data sharding (horizontal partitioning of data), where the database is broken down into smaller chunks (called shards) that can be spread across distributed servers or a cluster. This feature provides faster responses to transactions and data requests.

 

· Data replication: most NoSQL databases support data replication, like relational databases, to provide the same data availability across distributed servers.

 

· No schema required (Flexible data model): Data can be inserted in a NoSQL DB without first defining a rigid database schema. The format of the data being inserted can be changed at any time, without application disruption. This provides greater application flexibility, which ultimately delivers significant business flexibility.

 

· Caching: most NoSQL databases support integrated caching for low latency and high throughput. This contrasts with traditional database management systems, where separate configuration or development is needed for caching.

Challenges of NoSQL

So far we have seen significant advantages of NoSQL over RDBMS; however, there are many challenges in implementing NoSQL.

Maturity: most NoSQL databases are open source or in a pre-production stage, so it can be a risk to adopt them at enterprise level. For a small business or use case they may be worth considering. RDBMS products, on the other hand, are mature, provide many features, and have good documentation and resources.

Support: most RDBMS products are not open source, which means they come with commitment and assurance in case of failure. They are reliable, properly tested products. Most NoSQL databases are open source and not yet widely adopted by organizations, and it can be very hard to get effective support for open-source databases. Some NoSQL databases were created by small startups for specific needs, not for global reach.

Tools: RDBMS products have lots of tools for monitoring databases, query analysis, optimization, performance profiling, analytics, and business intelligence. An objective of NoSQL databases is to minimize the need for admin tools, but this has not been fully achieved yet; there are still activities that require skills and tools to monitor.

When to consider NoSql

The following are some indicators to consider when choosing a NoSQL database for your application:

· Your application needs a high-performance database.

· You need little or zero database administration.

· You want a flexible data model, where minor or major changes should not impact the whole system.

· Your application needs only simple transactions.

· You need high availability.

· You have little or no need for business intelligence and analytics.

References:

· http://nosql-database.org/

· http://www.couchbase.com

· www.mongodb.org

· http://en.wikipedia.org/wiki/Nosql

CodeProject, DotNet, VSTS

General tips: how to always run Visual Studio as administrator

Sometimes it is necessary to run Visual Studio as administrator for system-level activities, such as setting up an IIS virtual directory for your Visual Studio project or configuring IIS settings.

If you want to run Visual Studio as administrator once, you can right-click devenv.exe and select "Run as administrator".

But sometimes you need Visual Studio to always run as admin; in that case you can set its privilege level to administrator. Here are the steps:

Step 1: Go to the Visual Studio IDE executable in "C:\Program Files (x86)\Microsoft Visual Studio 9.0\Common7\IDE\", right-click devenv.exe -> Properties -> Compatibility -> and select the "Run as administrator" check box.

You will also need to set this up for the Visual Studio launcher.

Step 2: Go to the Visual Studio Launcher executable at "C:\Program Files (x86)\Common Files\microsoft shared\MSEnv\VsLauncher.exe" and follow the same steps as in step 1.

C#, CodeProject, FIX

Implementation of FIX messages for the FIX 5.0 SP2 and FIXT1.1 specifications

 

This post will demonstrate how to connect to a FIX 5.0 server using the FIXT1.1 specification, and the use of QuickFIX/n (a native .NET FIX engine).

Introduction

With release 5.0 of the FIX protocol, a new Transport Independence (TI) framework was introduced which separates the FIX Session Protocol from the FIX Application Protocol. This gives the freedom to send messages across different transport technologies like MSMQ, a message bus, etc.

Because of the different versions of the transport and application protocols, we have to explicitly define settings in the config file.

TransportDataDictionary defines the transport protocol version, e.g. FIXT.1.1.xml.

AppDataDictionary defines the data dictionary for the FIX application protocol version, e.g. FIX50.xml.

You can read more about the FIXT1.1 and FIX 5.0 SP2 specifications on fixprotocol.org.

http://fixprotocol.org/specifications/FIXT.1.1

http://fixprotocol.org/specifications/FIX.5.0

 

QuickFix/N

To demonstrate the implementation of FIX 5.0 SP2, I'll use the open-source FIX engine for .NET (QuickFIX/n), one of the few FIX engines written in native .NET code. The code for QuickFIX/n is available on GitHub and is primarily contributed by Connamara Systems' developers. These guys are doing a commendable job.

 

Implementation

FixInitiator

This is the client application which connects to the FIX server to send and receive FIX messages. I am demonstrating the implementation of MarketDataRequest and its responses (MarketDataSnapshot and MarketDataIncrementalRefresh).

 

Start with Configuration

First, we create a configuration file for the initiator.

[default]
PersistMessages=Y
ConnectionType=initiator
UseDataDictionary=Y

[SESSION]
ConnectionType=initiator
FileStorePath=store
FileLogPath=fixlog
BeginString=FIXT.1.1
DefaultApplVerID=FIX.5.0

TransportDataDictionary=FIXT.1.1.xml

AppDataDictionary=FIX50.xml
SenderCompID=ABC
TargetCompID=FIXSERVER
SocketConnectHost=127.0.0.1
SocketConnectPort=3500
HeartBtInt=20
ReconnectInterval=30
ResetOnLogon=Y
ResetOnLogout=Y
ResetOnDisconnect=Y

 

*Note: AppDataDictionary is for the application protocol, e.g. FIX 5.0, and TransportDataDictionary is for the transport protocol.

You can read more about configuration here.

Create Application Class

Before starting with the implementation, you will need QuickFix.dll, which is available on GitHub at https://github.com/connamara/quickfixn

 

To connect to a FIX session, you will have to implement the QuickFix.Application interface.

public interface Application
{
void FromAdmin(Message message, SessionID sessionID);
void FromApp(Message message, SessionID sessionID);
void OnCreate(SessionID sessionID);
void OnLogon(SessionID sessionID);
void OnLogout(SessionID sessionID);
void ToAdmin(Message message, SessionID sessionID);
void ToApp(Message message, SessionID sessionId);
}

I created a class named FixClient50Sp2 which implements the interface and inherits a base class for message cracking and message events.

image

 

FIX Application Setup

 

Setting up Initiator

// FIX app settings and related
var settings = new SessionSettings("C:\\initiator.cfg");

// FIX application setup
MessageStoreFactory storeFactory = new FileStoreFactory(settings);
LogFactory logFactory = new FileLogFactory(settings);
_client = new FixClient50Sp2(settings);

IInitiator initiator = new SocketInitiator(_client, storeFactory, settings, logFactory);
_client.Initiator = initiator;

 

* _client is an instance of the FixClient50Sp2 class.

Starting Initiator

_client.Start();

 

Implementation of QuickFix.Application Interface methods

/// <summary>
/// every inbound admin level message will pass through this method,
/// such as heartbeats, logons, and logouts.
/// </summary>
/// <param name=”message”></param>
/// <param name=”sessionId”></param>
public void FromAdmin(Message message, SessionID sessionId)
{
Log(message.ToString());
}

/// <summary>
/// every inbound application level message will pass through this method,
/// such as orders, executions, secutiry definitions, and market data.
/// </summary>
/// <param name=”message”></param>
/// <param name=”sessionID”></param>
public void FromApp(Message message, SessionID sessionID)
{
Trace.WriteLine(“## FromApp: ” + message);

Crack(message, sessionID);
}

/// <summary>
/// this method is called whenever a new session is created.
/// </summary>
/// <param name="sessionID"></param>
public void OnCreate(SessionID sessionID)
{
    Log(string.Format("Session {0} created", sessionID));
}

/// <summary>
/// notifies when a successful logon has completed.
/// </summary>
/// <param name="sessionID"></param>
public void OnLogon(SessionID sessionID)
{
    ActiveSessionId = sessionID;
    Trace.WriteLine(String.Format("==OnLogon: {0}==", ActiveSessionId));

    if (LogonEvent != null)
        LogonEvent();
}

/// <summary>
/// notifies when a session goes offline, either from an exchange
/// of logout messages or from a loss of network connectivity.
/// </summary>
/// <param name="sessionID"></param>
public void OnLogout(SessionID sessionID)
{
    // ActiveSessionId should not normally be null here, but guard anyway.
    string a = (ActiveSessionId == null) ? "null" : ActiveSessionId.ToString();
    Trace.WriteLine(String.Format("==OnLogout: {0}==", a));

    if (LogoutEvent != null)
        LogoutEvent();
}

/// <summary>
/// all outbound admin-level messages pass through this callback.
/// </summary>
/// <param name="message"></param>
/// <param name="sessionID"></param>
public void ToAdmin(Message message, SessionID sessionID)
{
    Log("To Admin : " + message);
}

/// <summary>
/// all outbound application-level messages pass through this callback before they are sent.
/// If a tag needs to be added to every outgoing message, this is a good place to do it.
/// </summary>
/// <param name="message"></param>
/// <param name="sessionId"></param>
public void ToApp(Message message, SessionID sessionId)
{
    Log("To App : " + message);
}
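The FromApp callback hands each inbound message to Crack, which dispatches it to a typed OnMessage overload. For readers unfamiliar with what is actually on the wire, here is a small plain-JavaScript sketch (not QuickFIX/n code; the sample message is hypothetical) of how a raw FIX string decomposes into tag=value fields:

```javascript
// A raw FIX message is a series of tag=value pairs separated by the
// SOH character (0x01). This sketch parses one into a tag -> value map.
const SOH = '\x01';

function parseFix(raw) {
  const fields = {};
  for (const pair of raw.split(SOH)) {
    if (!pair) continue;                       // skip the trailing separator
    const eq = pair.indexOf('=');
    fields[pair.slice(0, eq)] = pair.slice(eq + 1);
  }
  return fields;
}

// Hypothetical logon-style fragment: 8=BeginString, 35=MsgType, 49/56=CompIDs
const msg = ['8=FIXT.1.1', '35=A', '49=CLIENT1', '56=BROKER'].join(SOH) + SOH;
console.log(parseFix(msg)['35']); // 'A' (Logon)
```

The message-cracker does essentially this, then uses tag 35 (MsgType) to pick which strongly typed OnMessage overload to invoke.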

 

 

Callback to Subscription

public void OnMessage(MarketDataIncrementalRefresh message, SessionID session)
{
    int noMdEntries = message.NoMDEntries.getValue();
    var group = new MarketDataIncrementalRefresh.NoMDEntriesGroup();

    var price = new MarketPrice();

    // repeating groups are 1-indexed in QuickFIX/n
    for (int i = 1; i <= noMdEntries; i++)
    {
        group = (MarketDataIncrementalRefresh.NoMDEntriesGroup)message.GetGroup(i, group);

        price.Symbol = group.Symbol.getValue();

        char mdEntryType = group.MDEntryType.getValue();

        if (mdEntryType == '0') // bid
        {
            price.Bid = group.MDEntryPx.getValue();
        }
        else if (mdEntryType == '1') // offer
        {
            price.Offer = group.MDEntryPx.getValue();
        }

        price.Date = Constants.AdjustedCurrentUTCDate.ToString("yyyyMMdd");
        price.Time = group.MDEntryTime.ToString();
    }

    if (OnMarketDataIncrementalRefresh != null)
    {
        OnMarketDataIncrementalRefresh(price);
    }
}
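The loop above walks the repeating NoMDEntries group and routes each entry by its MDEntryType code ('0' = bid, '1' = offer). The same routing logic, reduced to plain JavaScript over hypothetical plain objects (names like toMarketPrice are illustrative, not from the sample project), looks like this:

```javascript
// Each entry mimics one NoMDEntries group: an MDEntryType code and a price.
// Per the FIX field definition, MDEntryType '0' is a bid and '1' is an offer.
function toMarketPrice(symbol, entries) {
  const price = { Symbol: symbol, Bid: null, Offer: null };
  for (const entry of entries) {
    if (entry.MDEntryType === '0') price.Bid = entry.MDEntryPx;
    else if (entry.MDEntryType === '1') price.Offer = entry.MDEntryPx;
  }
  return price;
}

const price = toMarketPrice('EUR/USD', [
  { MDEntryType: '0', MDEntryPx: 1.0841 },
  { MDEntryType: '1', MDEntryPx: 1.0843 }
]);
console.log(price); // { Symbol: 'EUR/USD', Bid: 1.0841, Offer: 1.0843 }
```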

 

The code can be found on GitHub:

 

https://github.com/neerajkaushik123/Fix50Sp2SampleApp.git

Asp.Net, CodeProject, DotNet, KnockoutJs

CRUD operations using KnockOutJS and Asp.Net MVC3

 

In my last posts about KnockoutJs, I gave examples of searching and of binding a server-side model to a KnockoutJs view model. Today I'll discuss sending data back to the server so that we can perform server-side operations. I'll demonstrate this by doing CRUD operations on an Account entity.

The Account entity has AccountId, Name, and Balance fields.

I start by creating the controller's action methods, Search and Update.

The Search method takes search criteria, finds matching records, and sends them back to the client in JSON format.

The Update method receives AccountId, Name, etc. from the client, performs the server-side update, and sends the updated values back to the client in JSON format.

Controller Class

AccountController

 

public class AccountController : Controller
{
    public ActionResult Index()
    {
        return View("Account");
    }

    /// <summary>
    /// Search method
    /// </summary>
    public JsonResult Search(string SearchCriteria)
    {
        // Temporary code
        // TODO: write the actual search functionality
        Random rnd = new Random();

        return Json(new { AccountId = 1, Name = "test",
            Balance = rnd.NextDouble() * 93.244d });
    }

    /// <summary>
    /// Update method
    /// </summary>
    public JsonResult Update(int AccountId, string Name)
    {
        // Temporary code
        // TODO: write the actual update code
        Random rnd = new Random();

        return Json(new { AccountId = AccountId, Name = Name,
            Balance = rnd.NextDouble() * 93.244d });
    }
}

View

In the view file (Account.cshtml), HTML controls are bound to view model fields using the data-bind attribute.

 

@{
    ViewBag.Title = "Account";
    Layout = "~/Views/Shared/_Layout.cshtml";
}
@section Scripts {
    <script type="text/javascript" src="../../Scripts/Account.js"></script>
}

<div id="search">
    Search By AccountId <input type="text" data-bind="value:SearchCriteria" />
  <p>
        <input type="button" id='btnSearch' title="Search" value="Search" /></p>
</div>

<div>
    <p>
        AccountId: <input type="text" data-bind="value:AccountId" /></p>
    <p>
        Account Name: <input type="text" data-bind="value:Name" /></p>
    <p>
        Balance: <input type="text" data-bind="value:Balance" /></p>
    <p>
        <input type="button" id='btnSave' title="Save" value="Update" /></p>
</div>


Account.js

We initialize viewModel, which is used as the KnockoutJs view model.

The view model contains the properties AccountId, Name, and Balance, and two methods, Search and Update.

Search method: it sends the self.SearchCriteria value in an AJAX request to the controller's Search action method.

Response: an object with AccountId, Name, and Balance is returned from the server, and the KnockoutJs view model's properties are then set from the response.

Because the view model's properties are observable, the HTML controls bound to them automatically display the changed values.

Update method: this method sends an AJAX request to the server (the controller's Update action method). The input parameters, AccountId and Name, come from the view model. Similarly, we can create Delete and Add methods to delete and add records.
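The automatic UI refresh works because an observable is a read/write function that notifies its subscribers on every write. Below is a tiny stand-in (plain JavaScript, not the real Knockout library; observable here is a hypothetical mini implementation) that mimics that behavior:

```javascript
// Minimal sketch of an observable: a function that reads when called with
// no arguments, writes when called with one, and notifies subscribers,
// similar in spirit to ko.observable.
function observable(initial) {
  let value = initial;
  const subscribers = [];
  function obs(newValue) {
    if (arguments.length === 0) return value;   // read
    value = newValue;                           // write
    subscribers.forEach(fn => fn(value));       // notify bound controls
    return obs;
  }
  obs.subscribe = fn => subscribers.push(fn);
  return obs;
}

const accountName = observable('test');
const seen = [];
accountName.subscribe(v => seen.push(v));   // a binding would subscribe here
accountName('updated');                     // write triggers the subscriber
console.log(accountName()); // 'updated'
```

This is also why the AJAX calls must unwrap observables (self.AccountId() rather than self.AccountId) to send the current value instead of the function itself.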

var accountmodel;

$(document).ready(function () {
    // initialize the view model and apply Knockout bindings
    accountmodel = new viewModel();
    ko.applyBindings(accountmodel);

    // bind button events
    $("#btnSearch").click(accountmodel.Search);
    $("#btnSave").click(accountmodel.Update);
});

function viewModel() {
    var self = this;

    self.AccountId = ko.observable('');
    self.Name = ko.observable('');
    self.Balance = ko.observable(null);
    self.SearchCriteria = ko.observable('');

    self.Search = function () {
        $.ajax({
            url: "Account/Search",
            // unwrap the observable to send its current value
            data: { SearchCriteria: self.SearchCriteria() },
            type: "POST",
            success: function (response) {
                self.AccountId(response.AccountId);
                self.Name(response.Name);
            }
        });
    };

    self.Update = function () {
        $.ajax({
            url: "Account/Update",
            data: { AccountId: self.AccountId(), Name: self.Name() },
            type: "POST",
            success: function (response) {
                self.AccountId(response.AccountId);
                self.Name(response.Name);
                self.Balance(response.Balance);
            }
        });
    };
}


You can find the code here.