Officer-Involved Shooting Case

Case Study Summary

As requested, Kineticorp investigated an officer-involved shooting that occurred at a residential apartment building. According to the incident case summary of the Violent Crime Investigations Unit, officers were requested to respond to a residence where it was reported that an individual was acting erratically and had a knife.

Video Transcription

Toby Terpstra: When I was first contacted on this case, the question was, “Can you perform a case analysis for us?” The idea there is I’ll get some information from the client, maybe a video or some photos, and be able to dig through that, dissect the data, and hopefully provide some useful information right away, not only about the case, but also about what further analysis might be done on that case. We were provided with a police report, and in this case we had body camera video from three of the responding officers. What happened in this case is there’s multiple tenants in this residence, and one of them had cut another one with a knife. So the officers show up at the scene, and you can see in their body camera footage, it’s early dawn, and they’re walking into the residence, and they’re trying to assess the situation. They meet with the landlord, who’s there at the scene, and he’s explaining to them what he knows about the case at that point, and pointing them to a bedroom. They end up flipping up a mattress and finding the tenant hiding underneath the bed. He does have a knife still in his hand. At that point, they’re trying to, through verbal commands, get him to drop his knife. He’s not responsive to that. They use their tasers, and then at that point, he continues out, really unresponsive to the tasers as well. As he exits the bedroom, the officers go two different directions down the hallway, and one of the officers shoots at that point in time. Then that tenant was found in the bathroom, actually, directly across the hallway, and that’s where he dies. We were asked to provide a 3D diagram and dimensions, so everybody could have a better understanding of the environment, where officers were at inside that environment, and what space they had to deal with. So we were also looking to understand the positions of the officers at the time of the shots, and the position of the tenant with the knife, his motion as he’s exiting the doorway there. 
We were able to determine his speeds through camera matching, based on frame rates, so we’ve matched a number of frames, and his position in those, and based on the frame rate, we can get his speeds as he’s exiting that doorway. That’s helpful in these types of cases, as you talk about the danger that officers are facing at that point in time. How fast is he moving? How far away is he? Is he still holding a knife? Those kinds of things are important to be able to analyze, so we were able to look at those specific issues on this case. We used audio analysis to really sync the two videos together. Then we performed camera matching analysis on specific frames from the video footage, removing lens distortion, and then camera matching for their locations, the specific locations of the camera in the environments. Then once there was an alignment between the 3D environment and the video frame, we were able to put in the character positions and additional evidence inside of that environment as well. We also used shadow analysis, so when we did 3D scans of the environment, there’s a porch light that was unchanged from the time of the incident, so it’s dawn, it’s early light, and you can see light coming in from the entryway of the residence there, and one of the officers, as he’s moving, you can see his shadow up against the wall. Using that light source, we were able to find his position in 3D space. I think video analysis is very different than photogrammetry. Video analysis, really, anybody can look at video and make some determinations based on it. There’s software titles that would allow people to do this with like apps on their phone. Photogrammetry, on the other hand, takes that video and says, “Look, there’s 3D data. This environment exists in 3D space. 
If we align it to a 3D model, now we can determine specific 3D positions for evidence inside those video frames.” That can be very useful in these use of force cases, in understanding was it really excessive force or not.
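The speed calculation Terpstra describes — matching a subject's 3D position in several video frames and dividing the distance traveled by the elapsed frame time — can be sketched in a few lines. This is an illustrative sketch only; the positions, frame numbers, and frame rate below are hypothetical, not values from the case:

```python
import math

def speeds_from_matched_frames(positions, frame_numbers, fps):
    """Average speed (ft/s) between consecutive camera-matched frames.

    positions: list of (x, y, z) points, in feet, from the photogrammetry solution
    frame_numbers: the video frame index each position was matched to
    fps: the body camera's frame rate
    """
    speeds = []
    for (p0, f0), (p1, f1) in zip(zip(positions, frame_numbers),
                                  zip(positions[1:], frame_numbers[1:])):
        dist = math.dist(p0, p1)   # straight-line travel, feet
        dt = (f1 - f0) / fps       # elapsed time, seconds
        speeds.append(dist / dt)
    return speeds

# Hypothetical positions one foot apart, matched every 3 frames at 30 fps
pts = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (2.0, 0.0, 0.0)]
print(speeds_from_matched_frames(pts, [0, 3, 6], 30.0))  # [10.0, 10.0]
```

The key point is that only the matched 3D positions and the camera's frame rate are needed; no speed sensor exists in this kind of evidence.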

The information below was derived and redacted from an actual Kineticorp expert report.  All names and identities have been changed.  For the full redacted report, contact Kineticorp at [email protected].

Process:

According to the incident case summary, two police officers responded to the call, searched the residence, and called for medical aid for a “small laceration” received by a party who was in the residence. Officer One also noted a “245” crime, “assault with a deadly weapon.” After gaining access to Jon Doe’s room (Bedroom 1) and hearing sounds from within the bedroom, Officer Two moved items including a bed mattress and found Jon Doe underneath the bed/mattress, naked, and holding a knife. After the officers introduced themselves, gave verbal commands, and deployed tasers and pepper spray, Jon Doe moved to exit the bedroom with the knife in hand. In response, Officer One backed down the hallway eastward toward, and then into, Bedroom 3, while Officer Two moved down the hallway westward toward the front entryway. Jon Doe first bent down near the threshold of Bedroom 1 and then stood up, making a quick forward movement. Officer One was in Bedroom 3 at this time, and Officer Two fired five (5) gunshots at Jon Doe. Jon Doe’s forward momentum carried him into the common bathroom directly across from Bedroom 1, where he was later pronounced dead. According to the Forensic Pathologist, the cause of death was “multiple gunshot wounds.” Figure 2 is a diagram showing the layout of the residence, with labeling for the rooms.

Scope of Work:
Kineticorp was asked to analyze provided materials, including video from officer body-worn cameras and photographs from the day of the incident, to determine the positions of Jon Doe and the involved officers at and around the time of the shooting, to create a scale diagram of the residence, and to create visual materials with the measurements determined.

Kineticorp’s investigation of this incident included the review and analysis of provided documents, a list of which can be found below. A scaled computer model was built from this analysis. The reconstruction of this computer model included laser scanning, computer modeling, video analysis, audio analysis, and photogrammetry. Specifically, Kineticorp performed the following list of procedures to analyze provided documents and develop visualization material to demonstrate findings and conclusions.


List of Provided Documents:

o Case Summary
o Suspect Information
o County Sheriff’s Office Reports
o Department of Public Safety Reports
o Police Department Reports
o County Coroner’s Office Reports
o Autopsy Report
o Toxicology Report
o County Sheriff’s Office Evidence Reports
o Event Chronology
o Search Warrants
o Press Information
o Transcripts
o Misc. Documents
o Photographs (same as listed under photographs section)
o Digital Media Information

o 186 Photos of Scene
o Crime Scene Reconstruction Photos (373 images)

o Officer One Body Cam Footage
o Officer Two Body Cam Footage
o Officer Three Body Cam Footage
o Evid #TDN Scene Video

o ENE Statement of City Defendants
o Witness Testimony from Deposition

o FLS Files: FARO 3D Scan of Scene (shortly after incident)
o VIEVU LE3 body camera (physical exemplar camera)

Procedure:
1. Kineticorp reviewed provided materials.
2. Kineticorp analyzed video and photographs to be used in photogrammetry analysis.
3. Kineticorp visited the residence and inspected, documented, and digitally mapped the incident site, as well as both Officers.
4. Kineticorp created a computer model of the residence composed of 3D scan data provided from the day of the incident and 3D scan data collected at the time of the site inspection.
5. Kineticorp inspected an exemplar body camera and created a 3D scale model of the body camera with the same dimensions as those worn by both Officer One and Officer Two at the time of the incident.
6. Kineticorp created scale models of both Officers, including body camera positions, based on 3D scan data and dimensioned photographs.
7. Kineticorp created a scale model of Jon Doe based on photographs and his height as recorded by the incident report.
8. Spectral frequency analysis was used to place audio markers within audio files associated with body camera video from both Officers. These markers were used to synchronize the two videos to the nearest frame.
9. In preparation for photogrammetric analysis, lens distortion was removed from the photographs and video frames.
10. Camera matching photogrammetry was used to locate the position of both Officers, and Jon Doe at multiple points of time as visible within the body camera videos of both Officers.
11. The resulting fully-scaled, computer model, including multiple positions for the parties involved, was built based on photos from the day of the incident, body camera footage from both officers, 3D laser scans from the day of incident, 3D laser scans from Kineticorp’s site inspection, photogrammetry, and dimensioned photographs.
12. Kineticorp produced visualization material to describe these procedures and the resulting 3D model of the incident and incident site. This material fairly and accurately portrays the residence where the incident occurred, and the positions of parties involved at various points in time.
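The distortion removal in step 9 was performed with PTLens and PFTrack in the actual analysis. Conceptually, radial (barrel or pincushion) lens distortion is commonly described with a Brown-Conrady polynomial in the squared radius from the distortion center. The sketch below is a first-order illustration of that idea with hypothetical coefficients; it is not a reproduction of either tool:

```python
def undistort_point(x, y, k1, k2, cx=0.0, cy=0.0):
    """Approximately remove radial distortion from a normalized image
    point using a two-term Brown-Conrady radial model.

    (x, y): distorted point; (cx, cy): distortion center;
    k1, k2: radial coefficients.  Applying the polynomial directly to
    the distorted radius is a first-order approximation of the inverse
    mapping -- sufficient for illustration.
    """
    xd, yd = x - cx, y - cy
    r2 = xd * xd + yd * yd                  # squared radius from center
    scale = 1.0 + k1 * r2 + k2 * r2 * r2    # radial correction factor
    return cx + xd * scale, cy + yd * scale

# A point at the distortion center is unchanged by radial distortion
print(undistort_point(0.0, 0.0, k1=-0.1, k2=0.01))  # (0.0, 0.0)
```

Points far from the center move the most, which is why straight architectural edges (door frames, wall corners) curve in distorted frames and are useful for estimating the coefficients.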

Site Inspection:
Kineticorp inspected, photographed, and digitally mapped the incident site. The site was documented with 273 photographs of the interior and exterior of the residence.  LiDAR mapping of the site was performed using a FARO Focus S350 laser scanner with a ranging error of ±1mm as specified by the manufacturer. This laser scanning of the site consisted of taking scans from 23 different locations of both the interior and exterior of the residence, including the hallway, Bedroom 1, and the common bathroom. These scans generated over 127 million 3D data points. Figure 4 depicts the scan data collected from a perspective view looking southeast. Below are the actual photos and scan data used to map the scene.

 

 

Exemplar body camera inspection:
On December 12, 2018, Kineticorp inspected and photographed an exemplar VIEVU LE3 body camera. This is the same make and model of camera as worn by both Officer One and Officer Two at the time of the incident. Dimensioned photographs were used to create a 3D model of the body camera. The image below depicts the VIEVU body camera, with a photograph on the left and a 3D model on the right.

Computer model of involved parties:
On October 24, 2018, Kineticorp inspected, photographed, and digitally mapped Officer One and Officer Two. Thirty-three (33) photographs documented the officers, their heights, and the approximate locations of their body-worn cameras. They were mapped using the same FARO Focus S350 laser scanner. The front and back of each officer was scanned, recording a total of over 1.8 million 3D points for Officer Two and over 1.3 million 3D points for Officer One.

 

Scaled 3D models of Officer One and of Officer Two were created, including the 3D models of their body cameras, based on 3D scan data and dimensioned photographs. A scaled 3D model of Jon Doe was also created, based on provided photographs and his height as specified in the incident report. Figure 10 shows the resulting fully scaled 3D models of the incident involved parties.

Video Analysis:
Three officer body camera videos from the day of the incident were provided: one from Officer Three, one from Officer One, and one from Officer Two. Officer Three’s video was approximately twenty-two (22) minutes long and was not used in the analysis, because it began sometime after the shots had been fired. Officer One’s video was approximately thirty-five (35) minutes long and began approximately twelve and one-half (12.5) minutes before shots were fired. Officer Two’s video was approximately eighteen (18) minutes long and began approximately forty-five (45) seconds before shots were fired.

The video files from Officer Two and Officer One were synchronized using spectral frequency analysis. The sound pressure levels at the time of the shots were too high for the microphone on the VIEVU body camera to record, as indicated by clipping in the audio waveform (green) visible in the figure below; however, the change in gradation visible within the spectral frequency display makes the time of the shots apparent. All five (5) shots were marked in both officers’ videos, and the videos were aligned to the nearest frame using these audio markings.
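One simple way to see how clipped gunshot impulses can serve as synchronization markers: find the samples at full scale in each recording and compute the time offset between the first clipped events. The short waveforms and sample rate below are hypothetical, and this sketch stands in for the spectral frequency analysis actually used:

```python
def clipped_samples(samples, full_scale=1.0, tol=1e-6):
    """Indices where the waveform sits at full scale, i.e. where the
    microphone clipped -- loud impulses such as gunshots appear this way."""
    return [i for i, s in enumerate(samples) if abs(s) >= full_scale - tol]

def sync_offset(samples_a, samples_b, sample_rate):
    """Offset (seconds) to add to recording B so that its first clipped
    event lines up with recording A's first clipped event."""
    a0 = clipped_samples(samples_a)[0]
    b0 = clipped_samples(samples_b)[0]
    return (a0 - b0) / sample_rate

# Hypothetical 8 kHz waveforms: the same clipped impulse appears
# 4 samples later in recording A than in recording B
a = [0.0, 0.1, 0.2, 0.1, 0.0, 1.0, 1.0, 0.2]
b = [0.1, 1.0, 1.0, 0.2, 0.0, 0.0, 0.0, 0.0]
print(sync_offset(a, b, 8000))  # 0.0005
```

In practice all five shots give five markers, so the alignment can be cross-checked rather than relying on a single event.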

Videos and Photographs Used in Photogrammetry Analysis:
Of the photographs taken on the day of incident, three (3) were selected for photogrammetric analysis. These photographs were useful for verifying the bed and mattress locations, as well as the subject knife dimensions. Additionally, sixteen (16) video frames were selected for photogrammetry analysis. These frames were taken from both Officer One’s body camera video and Officer Two’s body camera video. They include frames of video before, during, and after shots were fired. As part of the photogrammetric analysis, shadows cast on the hallway wall from an interior light were visible in four (4) of these images and were used to locate Officer Two’s position at these points in time.
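The geometry behind shadow-based positioning is straightforward: the occluding body must lie on the ray from the light source to the shadow it casts, so a known height on the body (a shoulder, say) pins down a unique point along that ray. A minimal sketch with hypothetical coordinates:

```python
def point_on_shadow_ray(light, shadow, height):
    """The occluding point lies on the ray from the light source to the
    shadow it casts.  Given the z-height of the occluding point, solve
    for where the ray passes through that height.

    light, shadow: (x, y, z) coordinates in feet; height: z of occluder.
    """
    lx, ly, lz = light
    sx, sy, sz = shadow
    t = (height - lz) / (sz - lz)   # ray parameter at the given height
    return (lx + t * (sx - lx), ly + t * (sy - ly), height)

# Hypothetical light 7 ft up at the origin, shadow on a wall at 3 ft;
# an occluding point 5 ft high sits halfway along the ray
print(point_on_shadow_ray((0.0, 0.0, 7.0), (10.0, 0.0, 3.0), 5.0))
# (5.0, 0.0, 5.0)
```

This is why the unchanged porch light mattered: the light's 3D position and the scanned wall geometry are both known, leaving the shadow as the only new measurement needed.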

Camera Matching Photogrammetry:
Having created a scaled three-dimensional computer model of the site, Kineticorp used camera-matching photogrammetry to analyze photographs and video taken near the time of the incident. This photogrammetric analysis was done to verify the dimensions of the subject knife and determine the location of furniture and the locations of the involved parties at various points in time throughout the sequence of events.

Photogrammetry is a process that uses principles of perspective to analyze and obtain three-dimensional data from photographs or video. These principles and techniques are widely accepted and used within the field of accident reconstruction and computer visualization. The principles, methodologies and procedures utilized in this analysis are described in the following peer-reviewed technical publications authored by Kineticorp.

• Bailey, Ann, James Funk, David Lessley, Chris Sherwood, Jeff Crandall, William Neale, and Nathan Rose. “Validation of a Videogrammetry Technique for Analyzing American Football Helmet Kinematics.” Sports Biomechanics, 2018, 1-23. doi:10.1080/14763141.2018.1513059.

• Terpstra, T., Dickinson, J., and Hashemian, A., “Using Multiple Photographs and USGS LiDAR to Improve Photogrammetric Accuracy,” SAE Technical Paper 2018-01-0516, 2018.

• Terpstra, T., Miller, S., and Hashemian, A., “An Evaluation of Two Methodologies for Lens Distortion Removal when EXIF Data is Unavailable,” SAE Technical Paper 2017-01-1422, 2017, Society of Automotive Engineers, 2017.

• Neale, William T.C., Hessel, David R., Koch, Daniel, “Determining Position and Speed through Pixel Tracking and 2D Coordinate Transformation in a 3D Environment”, Paper Number 2016-01-1478 Society of Automotive Engineers, 2016.

• Carter, Neal, Hashemian, Alireza, Rose, Nathan A. and Neale, William T.C, “Evaluation of the Accuracy of Image Based Scanning as a Basis for Photogrammetric Reconstruction of Physical Evidence”, Paper Number 2016-01-1467 Society of Automotive Engineers, 2016.

• Terpstra, T., Voitel, T., Hashemian, A., “A Survey of Multi-View Photogrammetry Software for Documenting Vehicle Crush.” SAE, Paper 2016-01-1475, Society of Automotive Engineers, 2016.

• Neale, W.T.C., James P. Marr, David R. Hessel, “Video Projection Mapping Photogrammetry through Video Tracking.” Paper 2013-01-0788, Society of Automotive Engineers, Warrendale, PA, April 2013.

• Neale, W.T.C., Hessel, D., Terpstra, T., “Photogrammetric Measurement Error Associated with Lens Distortion”, Paper Number 2011-01-0286, Society of Automotive Engineers, 2011.

• Rose, Nathan A., Neale, W.T.C., Fenton, S.J., Hessel, D., McCoy, R.W., Chou, C.C., “A Method to Quantify Vehicle Dynamics and Deformation for Vehicle Rollover Tests Using Camera-Matching Video Analysis,” Paper Number 2008-01-0350, Society of Automotive Engineers, 2008.

• Chou, C., McCoy, R., Fenton, S., Neale, W., Rose, N., “Image Analysis of Rollover Crash Test Using Photogrammetry,” Paper Number 2006-01-0723, Society of Automotive Engineers, 2006.

• Neale, W.T.C., Fenton, S., McFadden, S., Rose, N.A., “A Video Tracking Photogrammetry Technique to Survey Roadways for Accident Reconstruction,” Paper Number 2004-01-1221, Society of Automotive Engineers, 2004.

• Fenton, S., Neale, W., Rose, N., Hughes, C., “Determining Crash Data Using Camera-Matching Photogrammetric Technique,” Paper Number 2001-01-3313, Society of Automotive Engineers, 2001.

• Ziernicki, R., Fenton, S., “Forensic Engineering Comparison of Two & Three-Dimensional Photogrammetric Accident Analysis” National Academy of Forensic Engineers Journal Volume 17, June 2000.

The following technical literature also describes the computer modeling and photogrammetric principles and techniques employed by Kineticorp.

• Coleman, C., Tandy, D., Colborn, J., and Ault, N., “Applying Camera Matching Methods to Laser Scanned Three- Dimensional Scene Data with Comparisons to Other Methods,” SAE Technical Paper 2015-01-1416, 2015

• Rucoba, R., Duran, A., Carr, L., “A Three-Dimensional Crush Measurement Methodology Using Two-Dimensional Photographs.” Society of Automotive Engineers Paper Number 2008-01-0163.

• Brach, Raymond M., et al., Vehicle Accident Analysis and Reconstruction Methods, “Chapter 10: Photogrammetry,” Society of Automotive Engineers, 2005.

• Pepe, Michael D., et al., “Accuracy of Three-Dimensional Photogrammetry as Established by Controlled Field Tests,” Society of Automotive Engineers Paper Number 930662.

• Husher, Stein E., Michael S. Varat, John F. Kerhoff, “Survey of Photogrammetric Methodologies for Accident Reconstruction,” Proceedings of the Canadian Multi-Disciplinary Road Safety Conference VII, Vancouver, BC, Canada, June 1991.

• Breen, Kevin C, et al., “The Application of Photogrammetry to Accident Reconstruction,” Society of Automotive Engineers Paper Number 861422.

The computer model of the 3D scans, the geometry defining the interior of the residence, the knife, the body cameras, and the parties involved were combined to generate a fully scaled, recreated environment representative of the time of the incident. The primary software packages used in generating scaled computer models and performing the camera-matching photogrammetry process are FARO® Scene 2018, Autodesk® AutoCAD® 2017, and Autodesk® 3ds Max® 2017. In general, the photogrammetric process can be described as follows:

• Computer modeling software was used to create a computer model of the incident site using provided 3D scan data from the day of incident, and 3D scan data that was collected at the site during inspection. This computer model includes features of the site that were unchanged since the time the incident occurred – walls, windows, doorways, floor rugs, electric plates, and light fixtures for example.

• The computer-generated scene model was then imported into a modeling software package, and a computer-modeled camera was set up to view the model from a perspective visually similar to that shown in the photograph or video frame to be analyzed.

• The selected photographs and video frames were then analyzed for lens distortion. Lens distortion was corrected for in photographs and video frames with known camera characteristics, using PTLens version 9.0. For video frames, the straight-line method was employed using PFTrack.

• The photograph or video frame that is to be camera matched was then imported into the modeling software and was designated as a background image for the computer-modeled camera.

• The focal length, field of view, and orientation of the virtual camera were then adjusted until an overlay was achieved between the computer-generated scene model and the geometric features of the scene shown in the photograph. This step yielded a virtual camera that replicated the location and characteristics of the camera that recorded the incident.

• Once the location and characteristics of the camera used to take the photograph was reconstructed, other objects and parties involved as visible in the photograph or video were then added and aligned to their correct location within the computer environment.

• As part of the photogrammetric analysis, shadows cast on the hallway wall from an interior light were visible in four (4) of the camera-matched video frames. With a known light source location and known geometry of the wall upon which the shadow was cast, the location of the shadow was used as an additional positional source for locating Officer Two at those points in time.

• The entire photogrammetric solution, including all 19 cameras, defines the locations of the objects and parties involved, such that the location of objects in multiple views or camera matches is consistent with those photographs and video frames.

As described above, the photogrammetric process involves aligning the computer environment with photograph and video frames such that the position and characteristics of the camera that took the image are matched in the computer environment with a computer-generated camera. The image below shows the computer environment utilized in the photogrammetric analysis, and visually demonstrates the process of camera matching the photograph or video image to the computer scene and locating various geometric features.
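At the heart of camera matching is the pinhole projection: a virtual camera's position, orientation, and focal length are adjusted until 3D model points project onto their corresponding image features. The sketch below shows only the projection step, for a simplified level camera with a single yaw angle; the coordinates, convention, and focal length are hypothetical:

```python
import math

def project(point, cam_pos, yaw, focal_px):
    """Project a 3D world point through a simplified virtual pinhole
    camera located at cam_pos, held level, facing the +y axis rotated
    by `yaw` radians about the vertical (z) axis.  Returns (u, v),
    the pixel offsets from the image center."""
    dx = point[0] - cam_pos[0]
    dy = point[1] - cam_pos[1]
    dz = point[2] - cam_pos[2]
    # Rotate the world offset into the camera frame
    xc = math.cos(yaw) * dx - math.sin(yaw) * dy      # right of camera axis
    depth = math.sin(yaw) * dx + math.cos(yaw) * dy   # along camera axis
    # Perspective divide: image offset shrinks with distance
    return focal_px * xc / depth, focal_px * dz / depth

# A point 10 ft ahead and 1 ft right/up of a camera at the origin
print(project((1.0, 10.0, 1.0), (0.0, 0.0, 0.0), 0.0, 1000.0))
# (100.0, 100.0)
```

A camera match is accepted when, with one set of camera parameters, every modeled feature (door frames, light fixtures, rug edges) lands on its counterpart in the image; a full implementation would also include pitch, roll, and the distortion correction described earlier.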

The images below show an example of this process with synchronized video from Officer Two and Officer One, with the resulting model positions for involved parties within the site model at that point in time.

     

 

Resulting Computer Environment:
The resulting computer model fairly and accurately portrays the incident site, the location of static objects within the site, and the positions of Officer Two, Officer One, and Jon Doe at the points of time analyzed. The procedures, techniques, and processes used to generate this environment are widely accepted in the field of computer visualization.  After completing camera match photogrammetry on the photographs and video, the resulting 3D model and diagram can be viewed from any perspective and distance measurements can be taken.

Range of certainty:
The 19 camera matches, and the 3D objects located using their specific vantages, represent the complete photogrammetry solution. A range of certainty was analyzed for multiple objects to assess the overall accuracy achieved. The following procedure was used to assess the range of certainty in placing involved parties within the model. Starting with a position determined from the camera-match solution, specific character model locations were moved along multiple axes, local to the camera, in varying increments until a range was established such that the model was at the limit of visually aligning with the photogrammetry solution, or with the camera-matched images in which the position was visible. The images below visually demonstrate this process. The table below lists the distances established, with “Max” indicating the maximum amount of movement while still visually aligned and “Outside” indicating a distance at which the model is outside of alignment.
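The stepping procedure can be sketched as follows. In practice the alignment judgment is a visual comparison made by an analyst against the camera-matched imagery, so it is represented here by a stand-in predicate, and all values are hypothetical:

```python
def certainty_range(position, axis, increment, still_aligned, max_steps=100):
    """Step a model position outward along one camera-local axis until
    the supplied alignment check fails.  Returns (max_offset, outside):
    the last offset that still aligned ("Max") and the first offset
    that did not ("Outside").

    `still_aligned` stands in for the analyst's visual comparison
    against the camera-matched image.
    """
    max_off = 0.0
    for step in range(1, max_steps + 1):
        off = step * increment
        trial = tuple(p + off * a for p, a in zip(position, axis))
        if not still_aligned(trial):
            return max_off, off
        max_off = off
    return max_off, None   # never fell out of alignment within max_steps

# Hypothetical check: alignment holds within 0.35 ft of the solved position
aligned = lambda p: abs(p[0]) <= 0.35
print(certainty_range((0.0, 0.0, 0.0), (1.0, 0.0, 0.0), 0.25, aligned))
# (0.25, 0.5)
```

Repeating this along each camera-local axis, for each located party, yields the per-object ranges reported in the table.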

Conclusions: Based on the available evidence, testimony, training, education, and experience, the following conclusions were reached:

• The 3D models of the residence and parties involved are a fair and accurate representation of the subject residence and the parties at various points in time during the incident sequence.
• Based on audio from body camera videos, Jon Doe was given more than forty (40) verbal commands in English and more than thirty (30) verbal commands in Spanish to put the knife down, drop the knife, or some similar variation.
• Jon Doe can be seen holding a knife in both the body camera video from Officer One, as well as the body camera video from Officer Two.
• Jon Doe exited Bedroom 1, knife first, approximately 3.1 seconds before the first shot was fired.
• Officer Two fired 5 shots within approximately 1 second.
• For ~0.9 seconds, beginning ~0.3 seconds before shot 1, Jon Doe was calculated to be moving an average of 5.2 ft/s or 3.5 mph.
• At time of Shot 1, Jon Doe was ~8 feet from Officer Two.
• 8 feet can be covered in ~1.5 seconds at 5.2 ft/s or 3.5 mph.
• At time of Shot 1, Officer One was ~9.8 feet from Jon Doe.
• 9.8 feet can be covered in ~1.9 seconds at 5.2 ft/s or 3.5 mph.
• The knife visible within body camera video from Officer One and body camera video from Officer Two is consistent with the dimensioned photographs of the knife recovered from the bathroom where Jon Doe came to rest.
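The speed and distance figures in the conclusions follow from simple arithmetic (time = distance ÷ speed, with 1 mph = 5280/3600 ft/s); a quick check at the calculated 5.2 ft/s:

```python
FPS_PER_MPH = 5280.0 / 3600.0   # 1 mph is about 1.467 ft/s

def time_to_cover(distance_ft, speed_fps):
    """Seconds needed to cover a distance at a constant speed."""
    return distance_ft / speed_fps

speed = 5.2                                   # ft/s, from the analysis
print(round(speed / FPS_PER_MPH, 1))          # 3.5  (mph)
print(round(time_to_cover(8.0, speed), 1))    # 1.5  (s to cover 8 ft)
print(round(time_to_cover(9.8, speed), 1))    # 1.9  (s to cover 9.8 ft)
```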

Do you want to learn more about this case, or do you have similar questions on one of your cases?  Contact Kineticorp’s staff at [email protected].
