Kinect - Getting Started - Become The Incredible Hulk

License: Ms-PL


Introduction


On 16/06/2011, Microsoft released the Kinect for Windows SDK Beta. I had to download it right away and give it a try, and it's amazing!

In this article, I'll show you how to get started with the Kinect SDK. From there, we'll move on to controlling the camera angle through the SDK, and finally I'll show how to use skeleton tracking, with a nice example of how to become The Incredible Hulk.

Background


The Kinect for Windows SDK beta is a programming toolkit for application developers. It gives the academic and enthusiast communities easy access to the capabilities offered by the Microsoft Kinect device connected to computers running the Windows 7 operating system.

The Kinect for Windows SDK beta includes drivers, rich APIs for raw sensor streams and human motion tracking, installation documents, and resource materials. It provides Kinect capabilities to developers who build applications with C++, C#, or Visual Basic by using Microsoft Visual Studio 2010.

Step 1: Prepare Your Environment


In order to work with the Kinect .NET SDK, you need to meet the requirements below:

Supported Operating Systems and Architectures

  • Windows 7 (x86 or x64)

Hardware Requirements

  • Computer with a dual-core, 2.66-GHz or faster processor
  • Windows 7-compatible graphics card that supports Microsoft DirectX 9.0c capabilities
  • 2 GB of RAM
  • Kinect for Xbox 360 sensor (retail edition), which includes special USB/power cabling

Software Requirements

  • Microsoft Visual Studio 2010 Express or another Visual Studio 2010 edition
  • Microsoft .NET Framework 4.0 (installed with Visual Studio 2010)
  • The Kinect for Windows SDK Beta (available from the Microsoft download page)

Step 2: Create a New WPF Project


Add a reference to Microsoft.Research.Kinect.Nui (located under C:\Program Files (x86)\Microsoft Research KinectSDK) and make sure the solution targets the x86 platform, because this Beta SDK includes only x86 libraries.
2.png
An application must initialize the Kinect sensor by calling Runtime.Initialize before calling any other methods on the Runtime object. Runtime.Initialize initializes the internal frame-capture engine, which starts a thread that retrieves data from the Kinect sensor and signals the application when a frame is ready. It also initializes the subsystems that collect and process the sensor data. The Initialize method throws InvalidOperationException if it fails to find a Kinect sensor, so the call to Runtime.Initialize appears in a try/catch block.

Create a Window Loaded event handler and call InitializeNui.
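The wiring could look like this (a minimal sketch; the window class and the lambda wiring are my own, not from the original article):


// MainWindow.xaml.cs - minimal sketch of calling InitializeNui on load.
public partial class MainWindow : Window
{
    private Runtime _kinectNui; // Microsoft.Research.Kinect.Nui.Runtime

    public MainWindow()
    {
        InitializeComponent();
        Loaded += (s, e) => InitializeNui(); // run once the window is ready
    }
}


InitializeNui itself is shown below: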


private void InitializeNui()
{
    try
    {
        // Declares _kinectNui as a Runtime object,
        // which represents the Kinect sensor instance.
        _kinectNui = new Runtime();

        // An application must initialize the Kinect sensor by calling
        // Runtime.Initialize before calling any other methods on the Runtime object.
        _kinectNui.Initialize(RuntimeOptions.UseDepthAndPlayerIndex |
                              RuntimeOptions.UseSkeletalTracking | RuntimeOptions.UseColor);

        // To stream color images:
        //   The options must include UseColor.
        //   Valid image resolutions are Resolution1280x1024 and Resolution640x480.
        //   Valid image types are Color, ColorYuv, and ColorYuvRaw.
        _kinectNui.VideoStream.Open(ImageStreamType.Video, 2,
            ImageResolution.Resolution640x480, ImageType.ColorYuv);

        // To stream depth and player index data:
        //   The options must include UseDepthAndPlayerIndex.
        //   Valid resolutions for depth and player index data are
        //   Resolution320x240 and Resolution80x60.
        //   The only valid image type is DepthAndPlayerIndex.
        _kinectNui.DepthStream.Open(ImageStreamType.Depth, 2,
            ImageResolution.Resolution320x240, ImageType.DepthAndPlayerIndex);

        lastTime = DateTime.Now;

        // Set up the event handlers that the runtime calls when a
        // video or depth frame is ready.
        _kinectNui.VideoFrameReady +=
            new EventHandler<ImageFrameReadyEventArgs>(NuiVideoFrameReady);
        _kinectNui.DepthFrameReady +=
            new EventHandler<ImageFrameReadyEventArgs>(nui_DepthFrameReady);
    }
    catch (InvalidOperationException ex)
    {
        MessageBox.Show(ex.Message);
    }
}
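The snippets in this article also rely on a few class-level fields that the original text doesn't show. Here is a plausible set of declarations following the SDK sample conventions (the names and buffer size are assumptions; any distinct channel offsets in 0..3 would work):


// Class-level fields assumed by the snippets in this article.
DateTime lastTime = DateTime.MaxValue;

// 32-bit output buffer for the converted depth frame (320x240, 4 bytes
// per pixel) and the BGR32 channel offsets used by convertDepthFrame below.
byte[] depthFrame32 = new byte[320 * 240 * 4];
const int RED_IDX = 2;
const int GREEN_IDX = 1;
const int BLUE_IDX = 0;

// Frame-rate bookkeeping for CalculateFps.
int totalFrames = 0;
int lastFrames = 0;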




Step 3: Show Video


Both the video and depth streams return a PlanarImage; we just need to create a new bitmap from it and display it in the UI.

Video Frame Ready Event Handler


void NuiVideoFrameReady(object sender, ImageFrameReadyEventArgs e)
{
    PlanarImage Image = e.ImageFrame.Image;

    image.Source = BitmapSource.Create(
        Image.Width, Image.Height, 96, 96, PixelFormats.Bgr32, null,
        Image.Bits, Image.Width * Image.BytesPerPixel);

    imageCmyk32.Source = BitmapSource.Create(
        Image.Width, Image.Height, 96, 96, PixelFormats.Cmyk32, null,
        Image.Bits, Image.Width * Image.BytesPerPixel);
}




Depth Frame Ready Event Handler

Depth is different because the image you get back is 16-bit and needs to be converted to 32-bit; I've used the same conversion method as the SDK samples.


void nui_DepthFrameReady(object sender, ImageFrameReadyEventArgs e)
{
    var Image = e.ImageFrame.Image;
    var convertedDepthFrame = convertDepthFrame(Image.Bits);

    depth.Source = BitmapSource.Create(
        Image.Width, Image.Height, 96, 96, PixelFormats.Bgr32,
        null, convertedDepthFrame, Image.Width * 4);

    CalculateFps();
}

// Converts a 16-bit grayscale depth frame which includes player
// indexes into a 32-bit frame that displays different players
// in different colors.
byte[] convertDepthFrame(byte[] depthFrame16)
{
    for (int i16 = 0, i32 = 0; i16 < depthFrame16.Length &&
        i32 < depthFrame32.Length; i16 += 2, i32 += 4)
    {
        int player = depthFrame16[i16] & 0x07;
        int realDepth = (depthFrame16[i16 + 1] << 5) | (depthFrame16[i16] >> 3);

        // Transform 13-bit depth information into an 8-bit intensity appropriate
        // for display (we disregard information in the most significant bit).
        byte intensity = (byte)(255 - (255 * realDepth / 0x0fff));

        depthFrame32[i32 + RED_IDX] = intensity;
        depthFrame32[i32 + BLUE_IDX] = intensity;
        depthFrame32[i32 + GREEN_IDX] = intensity;
    }
    return depthFrame32;
}

void CalculateFps()
{
    ++totalFrames;

    var cur = DateTime.Now;
    if (cur.Subtract(lastTime) > TimeSpan.FromSeconds(1))
    {
        int frameDiff = totalFrames - lastFrames;
        lastFrames = totalFrames;
        lastTime = cur;
        frameRate.Text = frameDiff.ToString() + " fps";
    }
}




1.png

Step 4: Control Camera Angle

Now, I'll show how easy it is to control the Kinect camera angle (the tilt of the sensor).

There are minimum and maximum angles you can set, but as you can see from the last picture (right), you can also move the Kinect sensor manually and the angle will update automatically.
3.png 4.png 5.png
Get the Camera object from the Kinect runtime after initialization:


private Camera _cam;

_cam = _kinectNui.NuiCamera;
txtCameraName.Text = _cam.UniqueDeviceName;




Here is the Camera class definition:


namespace Microsoft.Research.Kinect.Nui
{
    public class Camera
    {
        public static readonly int ElevationMaximum;
        public static readonly int ElevationMinimum;

        public int ElevationAngle { get; set; }
        public string UniqueDeviceName { get; }

        public void GetColorPixelCoordinatesFromDepthPixel(
            ImageResolution colorResolution, ImageViewArea viewArea,
            int depthX, int depthY, short depthValue, out int colorX,
            out int colorY);
    }
}




Step 5: Up and Down


Now you can control the camera angle as follows:
To increase the camera angle, all you need to do is increase the camera's ElevationAngle. There are minimum and maximum angles, and setting a value outside that range throws an ArgumentOutOfRangeException, so don't push it too far.


private void BtnCameraUpClick(object sender, RoutedEventArgs e)
{
    try
    {
        _cam.ElevationAngle = _cam.ElevationAngle + 5;
    }
    catch (InvalidOperationException ex)
    {
        MessageBox.Show(ex.Message);
    }
    catch (ArgumentOutOfRangeException outOfRangeException)
    {
        // Elevation angle must be between ElevationMinimum and ElevationMaximum.
        MessageBox.Show(outOfRangeException.Message);
    }
}




And down:


private void BtnCameraDownClick(object sender, RoutedEventArgs e)
{
    try
    {
        _cam.ElevationAngle = _cam.ElevationAngle - 5;
    }
    catch (InvalidOperationException ex)
    {
        MessageBox.Show(ex.Message);
    }
    catch (ArgumentOutOfRangeException outOfRangeException)
    {
        // Elevation angle must be between ElevationMinimum and ElevationMaximum.
        MessageBox.Show(outOfRangeException.Message);
    }
}




Background: Become The Incredible Hulk Using Skeleton Tracking


One of the big strengths of the Kinect for Windows SDK is its ability to detect the skeleton joints of a human standing in front of the sensor - a very fast recognition system that requires no training to use.

The NUI Skeleton API provides information about the location of up to two players standing in front of the Kinect sensor array, with detailed position and orientation information.

The data is provided to application code as a set of points, called skeleton positions, that compose a skeleton, as shown in the picture below. This skeleton represents a user's current position and pose.

Applications that use skeleton data must indicate this at NUI initialization and must enable skeleton tracking.

The Vitruvian Man has 20 points, which are called joints in the Kinect SDK.
7.png 8.png

Step 6: Register to SkeletonFrameReady


Make sure you call Initialize with UseSkeletalTracking; otherwise, skeleton tracking will not work.


_kinectNui.Initialize(RuntimeOptions.UseColor |
                      RuntimeOptions.UseSkeletalTracking);
_kinectNui.SkeletonFrameReady +=
    new EventHandler<SkeletonFrameReadyEventArgs>(SkeletonFrameReady);




The Kinect NUI cannot track more than two skeletons at a time. The following check skips skeletons that are not fully tracked:


if (SkeletonTrackingState.Tracked != data.TrackingState) continue;




A tracked skeleton provides full joint data; untracked skeletons only report their overall position, without joints. Note that a skeleton is only tracked when the full body fits in the frame.
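For completeness, a position-only skeleton can still be read; a sketch, assuming the beta SDK's SkeletonTrackingState.PositionOnly value and the same event arguments used later in SkeletonFrameReady:


// Sketch: passive skeletons expose only their center-of-mass position.
foreach (SkeletonData data in e.SkeletonFrame.Skeletons)
{
    if (data.TrackingState == SkeletonTrackingState.PositionOnly)
    {
        var centerOfMass = data.Position; // x, y, z in meters; w = quality
        // No joints are available for this player.
    }
}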

Debugging isn't a simple task when developing for Kinect - you have to get up each time you want to test something.

Skeleton joints are identified by the JointID enum, which defines each joint's reference position:


namespace Microsoft.Research.Kinect.Nui
{
    public enum JointID
    {
        HipCenter,
        Spine,
        ShoulderCenter,
        Head,
        ShoulderLeft,
        ElbowLeft,
        WristLeft,
        HandLeft,
        ShoulderRight,
        ElbowRight,
        WristRight,
        HandRight,
        HipLeft,
        KneeLeft,
        AnkleLeft,
        FootLeft,
        HipRight,
        KneeRight,
        AnkleRight,
        FootRight,
        Count,
    }
}




Step 7: Get Joint Position


The joint position is defined in camera space, and we need to translate it to our display's size and position.

Depth Image Space

Image frames of the depth map are 640x480, 320x240, or 80x60 pixels in size, with each pixel representing the distance, in millimeters, to the nearest object at that particular x and y coordinate. A pixel value of 0 indicates that the sensor did not find any objects within its range at that location. The x and y coordinates of the image frame do not represent physical units in the room, but rather pixels on the depth imaging sensor. The interpretation of the x and y coordinates depends on specifics of the optics and imaging sensor. For discussion purposes, this projected space is referred to as the depth image space.
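As a concrete example, this is how you could read the distance, in millimeters, at a given depth pixel when streaming DepthAndPlayerIndex data (a sketch using the same bit layout as convertDepthFrame above; the helper name is mine):


// Sketch: distance in millimeters at depth-image pixel (x, y) for the
// DepthAndPlayerIndex format. The low 3 bits hold the player index
// (0 = no player); the upper 13 bits hold the distance in millimeters.
int GetDistanceMm(byte[] bits, int width, int x, int y)
{
    int i = (y * width + x) * 2;                // 2 bytes per pixel
    return (bits[i + 1] << 5) | (bits[i] >> 3); // 0 = nothing in range
}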

Skeleton Space

Player skeleton positions are expressed in x, y, and z coordinates. Unlike the coordinates of depth image space, these three coordinates are expressed in meters. The x, y, and z axes are the body axes of the depth sensor. This is a right-handed coordinate system that places the sensor array at the origin point, with the positive z axis extending in the direction in which the sensor array points. The positive y axis extends upward, and the positive x axis extends to the left (with respect to the sensor array), as shown in the picture below. For discussion purposes, this expression of coordinates is referred to as the skeleton space.

      9.png


private Point getDisplayPosition(Joint joint)
{
    float depthX, depthY;
    _kinectNui.SkeletonEngine.SkeletonToDepthImage(joint.Position, out depthX, out depthY);
    depthX = Math.Max(0, Math.Min(depthX * 320, 320)); // convert to 320x240 space
    depthY = Math.Max(0, Math.Min(depthY * 240, 240)); // convert to 320x240 space

    int colorX, colorY;
    ImageViewArea iv = new ImageViewArea();
    // Only ImageResolution.Resolution640x480 is supported at this point.
    _kinectNui.NuiCamera.GetColorPixelCoordinatesFromDepthPixel(
        ImageResolution.Resolution640x480, iv, (int)depthX, (int)depthY,
        (short)0, out colorX, out colorY);

    // Map back to the size of the image container.
    return new Point((int)(imageContainer.Width * colorX / 640.0) - 30,
                     (int)(imageContainer.Height * colorY / 480.0) - 30);
}




Step 8: Place Image Based on Joint Type


Each skeleton has a Position of type Vector4 (x, y, z, w) that indicates the center of mass for that skeleton. The first three attributes define the position in camera space; the last attribute (w) gives the quality level, ranging between 0 and 1.

This value is the only positional value available for passive players.


void SkeletonFrameReady(object sender, SkeletonFrameReadyEventArgs e)
{
    foreach (SkeletonData data in e.SkeletonFrame.Skeletons)
    {
        // TrackingState defines whether a skeleton is tracked or not.
        // Untracked skeletons only give their position.
        if (SkeletonTrackingState.Tracked != data.TrackingState) continue;

        // Each joint has a Position property defined by a Vector4 (x, y, z, w).
        // The first three attributes define the position in camera space.
        // The last attribute (w) gives the quality level (between 0 and 1)
        // of the position.
        foreach (Joint joint in data.Joints)
        {
            if (joint.Position.W < .6f) return; // Quality check

            switch (joint.ID)
            {
                case JointID.Head:
                    var headPos = getDisplayPosition(joint);
                    Canvas.SetLeft(imgHead, headPos.X);
                    Canvas.SetTop(imgHead, headPos.Y);
                    break;
                case JointID.HandRight:
                    var rhp = getDisplayPosition(joint);
                    Canvas.SetLeft(imgRightHand, rhp.X);
                    Canvas.SetTop(imgRightHand, rhp.Y);
                    break;
                case JointID.HandLeft:
                    var lhp = getDisplayPosition(joint);
                    Canvas.SetLeft(imgLeftHand, lhp.X);
                    Canvas.SetTop(imgLeftHand, lhp.Y);
                    break;
            }
        }
    }
}




6.png

Enjoy!

History



      This article was originally published on Codeproject.com and reproduced for the benefit of our viewers under the terms of the Ms-PL license.

Kinect Reception

1.png

Introduction

The idea: a TV screen in the reception area. When nothing happens (no one is in the reception), we display videos on the screen; but when someone enters the frame, we show him the Kinect image, and if the user is doing something funny, we capture his image and save it.

The question is: how can I know if the person is doing something funny?

For that, I've created the AuthenticManager, which contains a set of rules defining which positions or combinations are funny.

For example: if the right hand's position is higher than the head's position, add 2 points; if the left foot crosses the right foot, add an additional 2 points; and so on.
When the user reaches the goal score, we take his picture.
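As a sketch, the "right hand higher than head" rule could be checked like this (illustrative names only; "data" is assumed to be a tracked SkeletonData and "score" a running total; in skeleton space the Y axis points up, so a larger Y means higher):


// Sketch: award 2 points when the right hand is above the head.
Joint head = data.Joints[JointID.Head];
Joint rightHand = data.Joints[JointID.HandRight];

if (rightHand.Position.Y > head.Position.Y) // Y grows upward in skeleton space
    score += 2;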

Before jumping into the code, let's talk about the application flow.
The main window is controlled by two timers and the AuthenticManager:

The SeriousTimer is set to 10 seconds and starts ticking when the Kinect skeleton event first fires (this event only fires once the Kinect identifies a full person skeleton).
Inside SkeletonFrameReady, we also keep an integer called _fpsCount that is incremented by 1 each time SkeletonFrameReady is called after the SeriousTimer has started; this helps us make sure the user is standing in front of the Kinect and not just walking past it.

Now, how can _fpsCount tell me that? All we need to do is multiply the SeriousTimer interval in seconds by a minimum FPS (for example, 10); _fpsCount should be bigger than that product if the user is standing in front of the Kinect.
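In code, that check could be as simple as (a sketch with assumed names):


// Sketch: the user counts as standing in front of the sensor only if we
// saw more skeleton frames than MinimumFps * SeriousTimerSeconds
// (10 fps * 10 s = 100 frames). The constant names are illustrative.
const int SeriousTimerSeconds = 10;
const int MinimumFps = 10;

bool userIsStanding = _fpsCount > SeriousTimerSeconds * MinimumFps;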

In this case, the timer will stop the video feed and display the Kinect image.

The IdleTimer is set to 30 seconds by default, and each time the SkeletonFrameReady method fires, we restart it.

So if no SkeletonFrameReady events arrive, the IdleTimer brings back the video feed.
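A sketch of that timer logic using a WPF DispatcherTimer (from System.Windows.Threading; the _idleTimer field and the ShowVideos method are my assumptions, not the article's code):


// Sketch: a 30-second idle timer that switches back to videos.
DispatcherTimer _idleTimer = new DispatcherTimer
{
    Interval = TimeSpan.FromSeconds(30)
};
_idleTimer.Tick += (s, e) => ShowVideos(); // hypothetical method

// Inside SkeletonFrameReady: someone is still there, so restart the countdown.
_idleTimer.Stop();
_idleTimer.Start();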

JointID - the AuthenticManager works with RuleObjects, each of which contains a JointID Source and a JointID Target (more about joints in "Kinect - Getting Started - Become The Incredible Hulk" above).
The AuthenticManager is the heart of the Kinect Reception application; this class checks whether the user's position is funny according to your own rules.
11.png

What is a Joint?

The data is provided by the Kinect to application code as a set of points, called skeleton positions, that compose a skeleton structure.


public enum JointID
{
    HipCenter = 0,
    Spine = 1,
    ShoulderCenter = 2,
    Head = 3,
    ShoulderLeft = 4,
    ElbowLeft = 5,
    WristLeft = 6,
    HandLeft = 7,
    ShoulderRight = 8,
    ElbowRight = 9,
    WristRight = 10,
    HandRight = 11,
    HipLeft = 12,
    KneeLeft = 13,
    AnkleLeft = 14,
    FootLeft = 15,
    HipRight = 16,
    KneeRight = 17,
    AnkleRight = 18,
    FootRight = 19,
    Count = 20,
}




  • Vector - for both the Source and Target joints, you define which vector component (X or Y) to check against the other.
  • Operator - do you expect the source vector to be bigger or smaller than the target vector?
  • Score - the score awarded if the rule holds (see the sketch below).
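Putting those fields together, a RuleObject might look like this (my reconstruction from the description above; the article doesn't show the class itself):


// Reconstruction of RuleObject based on the fields described above (assumed).
public enum Operators { Bigger, Smaller }
public enum VectorAxis { X, Y } // which component of the joint position to compare

public class RuleObject
{
    public JointID Source { get; set; }          // source joint
    public JointID Target { get; set; }          // target joint
    public VectorAxis SourceVector { get; set; } // axis of the source joint
    public VectorAxis TargetVector { get; set; } // axis of the target joint
    public Operators Operator { get; set; }      // Bigger or Smaller
    public int Score { get; set; }               // points awarded when the rule holds
}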

    3.png

Background

Since Microsoft released the Kinect .NET SDK, I have written many articles on the subject.

I think Kinect is very cool, and I'm always searching for more projects and good ideas for it. A couple of days ago, I talked with my friend Guy Burstein, and he came up with an idea for a Kinect application: when people enter the Microsoft Israel reception, instead of a video screen showing commercials, let's add something interesting with Kinect.

Using the Code

Using Kinect events, I can see when a user enters the frame; using two timers, I can check whether the user is just passing by or standing in front of the camera.

The image below describes the application flow. At the beginning, the application shows random videos. When the Kinect skeleton event fires, the SeriousTimer begins ticking; each tick, based on the FPS rate, is aggregated into a variable called NumTicks. When the SeriousTimer completes, we check whether NumTicks is big enough relative to the FPS; if so, we start the IdleTimer and switch to the Kinect image.

Idle Timer - each time the Kinect skeleton event fires, the IdleTimer is restarted, so if there is no one in front of the Kinect camera, the IdleTimer switches back to videos.

4.png
As you can see from the images below (or the full video), when I moved my hands or legs, the score bar changed based on the rules defined:
5.png

6.png

7.png

You reached the goal!!!

8.png

The entire application works with four managers:

1. Kinect Manager
2. Configuration Manager
3. Video Manager
4. Authentic Manager

Kinect Manager

I've shown how to get started with Kinect several times in my previous posts, but for this application I've created a singleton class to handle everything related to the Kinect settings, from restarting the Kinect Runtime object to changing the camera angle.


public KinectManager()
{
    try
    {
        KinectNui = new Runtime();

        KinectNui.Initialize(RuntimeOptions.UseColor |
                             RuntimeOptions.UseSkeletalTracking);

        KinectNui.VideoStream.Open(ImageStreamType.Video, 2,
                                   ImageResolution.Resolution640x480,
                                   ImageType.ColorYuv);

        // Smooth the skeleton data to reduce jitter.
        KinectNui.SkeletonEngine.TransformSmooth = true;
        var parameters = new TransformSmoothParameters
        {
            Smoothing = 1.0f,
            Correction = .1f,
            Prediction = .1f,
            JitterRadius = .05f,
            MaxDeviationRadius = .05f
        };
        KinectNui.SkeletonEngine.SmoothParameters = parameters;

        _lastTime = DateTime.Now;
        Camera = KinectNui.NuiCamera;

        IsInitialize = true;
        StatusMessage = Properties.Resources.KinectReady;
    }
    catch (InvalidOperationException ex)
    {
        IsInitialize = false;
        StatusMessage = ex.Message;
    }
}
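The article calls KinectManager a singleton, though the constructor shown above is public; one simple way to expose a single shared instance is (a sketch, not from the original code):


// Sketch: exposing KinectManager as a single shared instance.
public static class Kinect
{
    private static readonly KinectManager _instance = new KinectManager();

    public static KinectManager Instance
    {
        get { return _instance; }
    }
}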




Another important method in the KinectManager is the camera angle control.


public void ChangeCameraAngle(ChangeDirection dir)
{
    if (!IsInitialize) return;

    try
    {
        if (dir == ChangeDirection.Up)
            Camera.ElevationAngle = Camera.ElevationAngle +
                Properties.Settings.Default.ElevationAngleInterval;
        else
            Camera.ElevationAngle = Camera.ElevationAngle -
                Properties.Settings.Default.ElevationAngleInterval;

        StatusMessage = Properties.Resources.KinectReady;
    }
    catch (InvalidOperationException ex)
    {
        StatusMessage = ex.Message;
    }
    catch (ArgumentOutOfRangeException outOfRangeException)
    {
        StatusMessage = outOfRangeException.Message;
    }
}




Video Manager

The Video Manager has two specific types of videos to play, plus a main videos folder from which it picks a random video each time.

Specific? When the user enters Kinect range, you can choose to play a specific "in" video just before the Kinect image appears; and when the user leaves Kinect range, you can choose to play an "out" video.

You define the type of video you want to play. If you ask for the out video and there isn't one, GetVideo returns null - stop the video and start showing the Kinect image. If you ask for the in video and there isn't one, it returns a random video.

    9.png


public static Uri GetVideo(VideoType type)
{
    if (string.IsNullOrEmpty(Properties.Settings.Default.VideosLibraryFolder) ||
        !Directory.Exists(Properties.Settings.Default.VideosLibraryFolder))
        return null;

    string value = null;
    switch (type)
    {
        case VideoType.In:
            value = Properties.Settings.Default.VideosLibraryInFile;
            return string.IsNullOrEmpty(value) || !File.Exists(value) ?
                CollectRandomMovie() : new Uri(value);
        case VideoType.Out:
            value = Properties.Settings.Default.VideosLibraryOutFile;
            return string.IsNullOrEmpty(value) || !File.Exists(value) ?
                null : new Uri(value);
        default:
            return CollectRandomMovie();
    }
}

private static Uri CollectRandomMovie()
{
    var movies = new ArrayList();

    foreach (var filter in Properties.Settings.Default.VideoFilter)
        movies.AddRange(Directory.GetFiles(Properties.Settings.Default.VideosLibraryFolder,
                                           filter));

    if (movies.Count == 0) return null;

    var rnd = new Random();
    return new Uri(movies[rnd.Next(movies.Count)].ToString());
}




Configuration Manager

Kinect Reception allows you to set a range of rules defining what is considered funny; the rules are based on joint-to-joint comparisons, and each rule defines the score awarded if it applies.
10.png

The RuleObject contains the Source joint and Target joint, the vector component to check for both, the operator (Bigger or Smaller), and the score.

2.png

So what does the Configuration Manager do? It saves the rules, using a MemoryStream to serialize them to a string and then storing that string in the application settings.


public static ObservableCollection<RuleObject> Load()
{
    try
    {
        var xs = new XmlSerializer(typeof(ObservableCollection<RuleObject>));
        var memoryStream =
            new MemoryStream(Encoding.UTF8.GetBytes(Properties.Settings.Default.Rules));
        return (ObservableCollection<RuleObject>)xs.Deserialize(memoryStream);
    }
    catch (Exception)
    {
        // If the stored rules are missing or invalid, start with an empty set.
        return new ObservableCollection<RuleObject>();
    }
}

public static void Save(ObservableCollection<RuleObject> items)
{
    try
    {
        var memoryStream = new MemoryStream();
        var xs = new XmlSerializer(typeof(ObservableCollection<RuleObject>));

        var xmlTextWriter = new XmlTextWriter(memoryStream, Encoding.UTF8);
        xs.Serialize(xmlTextWriter, items);

        memoryStream = (MemoryStream)xmlTextWriter.BaseStream;
        var xmlizedString = Encoding.UTF8.GetString(memoryStream.ToArray());

        Properties.Settings.Default.Rules = xmlizedString;
    }
    catch (Exception ex)
    {
        throw new ArgumentException(ex.Message);
    }
}




Authentic Manager

The Authentic Manager is the core of Kinect Reception; it takes all the rules you've defined and checks them against the skeleton joints.

The method below filters out untracked joints and makes sure the joint quality is good enough for the calculation (we don't want a user moving out of the picture to be considered funny).

If the skeleton joints reach the goal score you defined, an event is raised telling the main window to save the current image and display it to the user.


public void ChecksForAuthentic(JointsCollection joints)
{
    if (_rules.Count == 0) return;

    // Keep only joints that are tracked and of sufficient quality.
    var fixJoints =
        joints.Cast<Joint>().Where(
            joint => joint.Position.W >= .6f &&
                     joint.TrackingState == JointTrackingState.Tracked).ToList();

    var sb = new StringBuilder();
    for (var index = 0; index < _rules.Count; index++)
    {
        var rule = _rules[index];
        var s = (from j in fixJoints.Where(joint => joint.ID == rule.Source) select j).
            DefaultIfEmpty(new Joint() { TrackingState = JointTrackingState.NotTracked }).
            Single();

        var t = (from j in fixJoints.Where(joint => joint.ID == rule.Target) select j).
            DefaultIfEmpty(new Joint() { TrackingState = JointTrackingState.NotTracked }).
            Single();

        if (s.TrackingState == JointTrackingState.NotTracked ||
            t.TrackingState == JointTrackingState.NotTracked) break;

        var sv = s.ToFloat(rule.SourceVector);
        var tv = t.ToFloat(rule.TargetVector);

        if (rule.Operator == Operators.Bigger && sv > tv)
        {
            Score = Score + rule.Score;
            sb.AppendLine(string.Format("Bigger -> Source: {0}, Target: {1}, Vector: {2}",
                rule.Source, rule.Target, rule.SourceVector));
        }
        else if (rule.Operator == Operators.Smaller && sv < tv)
        {
            Score = Score + rule.Score;
            sb.AppendLine(string.Format("Smaller -> Source: {0}, Target: {1}, Vector: {2}",
                rule.Source, rule.Target, rule.SourceVector));
        }
    }

    if (Score >= _goal)
        IsAuthentic(Score, sb.ToString());
}




History


This article was originally published on Codeproject.com, authored by Shai Raiten, and reproduced for the benefit of our viewers under the terms of the Ms-PL license.
