ORU_R01 Parsing

Nov 24, 2014 at 8:28 AM
Edited Nov 24, 2014 at 8:30 AM
Hi everyone,
I am trying to parse a v2.3.1 ORU_R01 lab result. The message parses successfully through PipeParser, but it takes around 8 seconds per message, even for messages containing only the mandatory segments. I do not know why this is happening.
Please suggest or point out what is causing the performance bottleneck.
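For context, this is roughly the call being timed; a minimal sketch (the message string and Stopwatch harness are illustrative, not my actual test data):

using System;
using System.Diagnostics;
using NHapi.Base.Parser;

class ParseTiming
{
    static void Main()
    {
        // Illustrative v2.3.1 ORU_R01 with only a handful of segments.
        string hl7 =
            "MSH|^~\\&|LAB|FAC|EHR|FAC|20141124082800||ORU^R01|MSG0001|P|2.3.1\r" +
            "PID|1||12345^^^FAC^MR||DOE^JOHN\r" +
            "OBR|1|||GLU^Glucose\r" +
            "OBX|1|NM|GLU^Glucose||5.4|mmol/L|||||F\r";

        var parser = new PipeParser();
        var sw = Stopwatch.StartNew();
        var message = parser.Parse(hl7); // the call that takes ~8 seconds here
        sw.Stop();

        Console.WriteLine("Parsed {0} in {1} ms",
            message.GetType().Name, sw.ElapsedMilliseconds);
    }
}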

Thanks
Dec 5, 2014 at 11:09 AM
I have the same issue. Does your message contain a report as a base64 string, or any very long segment?
Dec 5, 2014 at 12:36 PM
Yes, it contains one OBR with a long base64-encoded string, which is actually the PDF report of the lab result. Were you able to solve it?
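(If it helps anyone reproduce this, here is a hedged sketch of how to synthesize a comparably large base64 payload; the ED field layout below is illustrative, not our actual report:)

using System;

class ReproPayload
{
    static void Main()
    {
        // ~500 KB of raw bytes becomes roughly 667 KB of base64 text,
        // comparable in size to an embedded PDF lab report.
        string payload = Convert.ToBase64String(new byte[500000]);

        // Illustrative OBX carrying the blob as encapsulated data (ED).
        string obx = "OBX|1|ED|PDF^LabReport||^AP^PDF^Base64^" + payload + "|||||F\r";
        Console.WriteLine("Segment length: {0} chars", obx.Length);
    }
}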
Dec 5, 2014 at 1:16 PM
Edited Dec 5, 2014 at 1:16 PM
Yes; basically it is due to the poor design of one of the classes in the NHapi.Base.dll library.
You need to change the code and recompile the library. Are you able to do that? If so, I will post the solution here.
Dec 8, 2014 at 6:22 AM
Please share your solution to this problem. I have not been able to change the existing code myself, as it has a lot of classes and objects.
Since you have already made this change, and of course it has been tested, it would be quite helpful if you could share the solution here.

Thanks in advance
Dec 10, 2014 at 1:42 PM
Edited Dec 10, 2014 at 2:18 PM
My solution is pretty easy: you only need to change one class and recompile the library. The code has not been extensively tested; it works, but I strongly recommend that you retest your application, as you are going to touch a central component of the nHapi framework. It would be nice if the owner of this project could review my change and, if possible, apply it to the latest branch.

Here is the solution:
  • open the project NHapi.Base.
  • locate the class Tokenizer
  • replace the method
private System.String nextToken(char[] localDelimiters)
{...
}
entirely with the following two methods (nextToken plus a new GetNextDelimiterPos helper):
private System.String nextToken(char[] localDelimiters)
{
    long pos = this.currentPos;

    // Skip any leading delimiters.
    while (Array.IndexOf(localDelimiters, this.chars[currentPos]) != -1)
    {
        // The last char is a delimiter (i.e. there are no more tokens).
        if (++this.currentPos == this.chars.Length)
        {
            this.currentPos = pos;
            throw new System.ArgumentOutOfRangeException();
        }
    }
    pos = this.currentPos;

    // Find the nearest delimiter in one pass instead of char by char.
    int index = GetNextDelimiterPos(localDelimiters);

    var token = new StringBuilder();
    if (index != -1)
    {
        // Copy everything up to the delimiter in a single block.
        var tempCharArray = new char[index - pos];
        Array.Copy(chars, pos, tempCharArray, 0, index - pos);
        token.Append(new string(tempCharArray));
        this.currentPos = index;
    }
    else
    {
        // No delimiter left: the rest of the buffer is the token.
        var tempCharArray = new char[chars.Length - pos];
        Array.Copy(chars, pos, tempCharArray, 0, chars.Length - pos);
        token.Append(new string(tempCharArray));
        this.currentPos = chars.Length;
    }
    return token.ToString();
}

private int GetNextDelimiterPos(IEnumerable<char> localDelimiters)
{
    // Position of the nearest occurrence of any delimiter, or -1 if none.
    int index = -1;
    foreach (var localDelimiter in localDelimiters)
    {
        var nextDelimiterPos = Array.IndexOf(this.chars, localDelimiter,
            (int)this.currentPos, chars.Length - (int)this.currentPos);

        // Skip delimiters that do not occur again; otherwise a -1 here
        // would overwrite a position already found for another delimiter.
        if (nextDelimiterPos == -1)
            continue;

        if (index == -1 || nextDelimiterPos < index)
            index = nextDelimiterPos;
    }
    return index;
}
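The idea behind the change, for anyone reviewing it: instead of scanning and appending character by character, nextToken now locates the nearest delimiter with one Array.IndexOf pass per delimiter and copies the whole token in a single Array.Copy, so the cost no longer grows badly on very long fields such as a base64-encoded PDF.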
Let me know if that works for you as well.
Dec 13, 2014 at 9:31 AM
Hi,
Thank you very much for sharing the solution. Let me deploy it in our test environment; I will share further details once our testing team has executed the test cases.
Thanks again
Jan 28, 2015 at 7:36 AM
Just tried ADE2's suggested fix against the latest version (2.4.0.3), and my tests don't show any noticeable difference on an ORU_R01 message with an embedded PDF: with or without this change, it parses in around 200 ms.

Validating this change against the previous release, 2.4.0.1, I can see a performance improvement from around 1700 ms on my test case down to around 200 ms.

However, since then Phil Bolduc's change (https://nhapi.codeplex.com/SourceControl/changeset/8e322b60faf6896b473f9d8dcb31975de636751b) has been merged, which improved performance around string tokenisation; this explains why there is no noticeable difference when applying this change to the current master repository.

If you are able to try the latest revision (off NuGet, or downloadable here) against your test cases and find that performance issues still persist, I'll be happy to revisit.
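In case it is useful when comparing builds, a minimal sketch (plain .NET reflection, nothing NHapi-specific) to confirm which NHapi.Base assembly a test run actually loaded:

using System;
using NHapi.Base.Parser;

class VersionCheck
{
    static void Main()
    {
        // Report the NHapi.Base assembly version loaded at runtime,
        // e.g. to tell 2.4.0.1 apart from 2.4.0.3 in a test run.
        Console.WriteLine(typeof(PipeParser).Assembly.GetName().Version);
    }
}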

Thanks ADE2 and rajputs6 for raising this and offering solutions.