I'm tracking the position of a vehicle along a trajectory using a Kalman filter. The goal is to check whether position estimation improves when data from multiple sensors is fused; here I consider two inputs, GPS and odometry.
GPS data is recorded with an app on an iPhone and provides latitude, longitude, elevation, and a timestamp. These fixes are in 3D spherical (geodetic) coordinates and are converted to 2D Cartesian coordinates using UTM (a projected Cartesian system). This gives the vehicle's position from GPS in world coordinates, in metres.
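To make the geodetic-to-planar step concrete, here is a minimal sketch in Python. Instead of a full UTM projection (which in practice would come from a library such as pyproj or the `utm` package), it uses a simple local equirectangular tangent-plane approximation around a reference fix, which is close enough over a few kilometres; the reference latitude/longitude in the example are made-up values.

```python
import math

EARTH_RADIUS_M = 6378137.0  # WGS-84 equatorial radius

def latlon_to_local_xy(lat_deg, lon_deg, ref_lat_deg, ref_lon_deg):
    """Project a GPS fix to local planar x/y metres around a reference fix.

    Equirectangular approximation: a stand-in for the UTM projection
    used in the actual pipeline, adequate over short distances.
    """
    ref_lat = math.radians(ref_lat_deg)
    x = EARTH_RADIUS_M * math.radians(lon_deg - ref_lon_deg) * math.cos(ref_lat)
    y = EARTH_RADIUS_M * math.radians(lat_deg - ref_lat_deg)
    return x, y

# Example: a fix about 111 m north of a hypothetical reference point
x, y = latlon_to_local_xy(48.1010, 11.5000, 48.1000, 11.5000)
```

With a real UTM conversion the only change is which function produces the easting/northing pair; the downstream filter sees metres either way.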
Odometry data is recorded by measurement sensors on the vehicle and gives its x-position, y-position, and orientation. These are computed from wheel ticks and are relative to the inertial origin (the starting pose), so odometry gives the vehicle's position in vehicle coordinates, in millimetres.

The GPS data updates once every 1000 ms, while odometry arrives once every 20 ms. The sensor components on the vehicle refer to the vehicle coordinate system, whose origin is located between the driver and the front passenger: x points along the driving direction of the vehicle, y toward the front passenger, and z toward the roof.
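To show how the two update rates fit together, here is a minimal per-axis sketch in plain Python: odometry drives the Kalman prediction step every 20 ms (after scaling the displacement from mm to m and rotating it from the vehicle frame into the world frame using the reported orientation), and each GPS fix triggers an update step every 1000 ms. All noise variances and the example measurements are made-up placeholder values.

```python
import math

class ScalarKF:
    """1-D Kalman filter for one world-frame axis (x or y).

    State: position (m). The odometry displacement acts as the control
    input in the predict step; the GPS position is the measurement.
    Noise variances are hypothetical placeholders.
    """
    def __init__(self, x0, p0=1.0, q=0.01, r=4.0):
        self.x = x0   # position estimate (m)
        self.p = p0   # estimate variance
        self.q = q    # process noise added per predict step
        self.r = r    # GPS measurement noise variance

    def predict(self, dx):
        self.x += dx          # move by the odometry displacement
        self.p += self.q      # uncertainty grows with each step

    def update(self, z):
        k = self.p / (self.p + self.r)   # Kalman gain
        self.x += k * (z - self.x)
        self.p *= (1.0 - k)

def odom_to_world(dx_mm, dy_mm, heading_rad):
    """Rotate a vehicle-frame odometry displacement (mm) into world-frame metres."""
    dx, dy = dx_mm / 1000.0, dy_mm / 1000.0
    c, s = math.cos(heading_rad), math.sin(heading_rad)
    return c * dx - s * dy, s * dx + c * dy

kf_x, kf_y = ScalarKF(0.0), ScalarKF(0.0)
# 50 odometry steps of 20 ms span one 1000 ms GPS interval;
# the vehicle drives straight east (heading 0) at 20 mm per step.
for _ in range(50):
    wx, wy = odom_to_world(20.0, 0.0, 0.0)
    kf_x.predict(wx)
    kf_y.predict(wy)
kf_x.update(1.05)  # hypothetical GPS fix: 1.05 m east
kf_y.update(0.02)  # hypothetical GPS fix: 0.02 m north
```

One independent filter per axis keeps the sketch scalar; the real filter would carry a joint 2-D (or full pose) state with a covariance matrix so that orientation error couples the axes.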