Hi,
I'm trying to extract compressed images from a bag file. Here is my launch file:
This results in some frames not being saved and some frames being saved multiple times. Also, a lot of frames are saved out of order, so the video I make out of the frames is really choppy.
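UPDATE: In the meantime I am experimenting with reading the bag directly through the rosbag Python API instead of going through a launch file, since the API visits messages in stored order and writes each frame exactly once, which should avoid the dropped, duplicated and out-of-order frames. A rough sketch (the topic name and filename pattern are assumptions, adjust them to your bag):

#!/usr/bin/env python
# Sketch: dump every CompressedImage in a bag to disk, in stored order.
import rosbag

TOPIC = '/camera/image_raw/compressed'   # assumed topic name

with rosbag.Bag('input.bag') as bag:
    for i, (topic, msg, t) in enumerate(bag.read_messages(topics=[TOPIC])):
        # msg.data already holds the encoded JPEG/PNG bytes, so no decoding is needed.
        ext = 'png' if 'png' in msg.format else 'jpg'
        with open('frame_%06d.%s' % (i, ext), 'wb') as f:
            f.write(msg.data)

The zero-padded index keeps the frames in order when they are later fed to a video encoder.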
↧
Extracting compressed image data from bag file results in doubled up or dropped frames
↧
Reindex .bag using rosbag API
Hello to everyone,
I am trying to reindex a file using rosbag API in a Python script. When running `rosbag reindex <.bag file>` in the command line, everything goes as usual (it creates a backup of the .bag file with the extension *.orig.bag*), and also creates the reindexed .bag file with the original name.
Now, in my Python code, I am going with the following:
import rosbag

# Reindex the .bag file
with rosbag.Bag('example.bag', 'w') as inbag:
    inbag.reindex()
print "Reindexing..."
It does not work. Instead of giving the same result as the command line, it just takes the input .bag file (originally around 500 MB in my case) and "resets" it, shrinking it down to 4 kB. After that, it doesn't do anything else.
Is there anything I am forgetting to include in the script? According to the rosbag API, `reindex(self)` doesn't require any arguments.
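UPDATE: I think the problem may be that opening the bag with `'w'` truncates it before `reindex()` ever runs, which would explain the 4 kB result. Judging from how the `rosbag reindex` command-line tool appears to use the API (worth double-checking against your rosbag version), the bag should be opened in append mode with `allow_unindexed=True`, and `reindex()` returns a generator that has to be consumed:

import rosbag

# Sketch: reindex in place, mirroring what the command-line tool seems to do.
with rosbag.Bag('example.bag', 'a', allow_unindexed=True) as bag:
    for offset in bag.reindex():
        pass   # the yielded offsets can be used for a progress bar

Note that, unlike the command-line tool, this modifies the file in place and does not create the *.orig.bag* backup, so copy the bag first.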
Thank you.
↧
↧
rqt_graph when playing a rosbag
Hello, I am really new to ROS and I am having some problems with rqt_graph. I would like to play a rosbag file and watch all the nodes and topics in rqt_graph as they were during the original execution. Therefore, I run:
$ rosbag play rosbag_name.bag    # Terminal 1
$ rqt_graph                      # Terminal 2
Then, I can only see:
https://drive.google.com/file/d/1-AjzLNJEjtB5K1At-A9t_BOg5lxOG8BC/view
As the image shows, I can only find a node named /play_.... Does this mean that I cannot watch all the nodes when playing a rosbag? Or is there any way to play the bag and plot the node graph?
Thank you very much in advance. I have tried almost everything but I was not able to make it work.
Best regards
↧
How to repair a rosbag file download that is missing some parts
I have a large rosbag file (a few GB) that I downloaded earlier, and then found out that it is missing some data, as the image (from rqt_bag) shows:
https://drive.google.com/open?id=1Jmm5eyapBoLmyJV03sFDtlp-8S3KHHUK
The problem is that my Internet connection would take forever to download it again from the start.
**Is there any way to download only the missing parts?**
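UPDATE: One thing I am going to try (assuming the file is served over plain HTTP(S) by a server that honours range requests, and that my local copy is a truncated prefix rather than corrupted somewhere in the middle) is resuming the download with a Range header and appending the remaining bytes, then running `rosbag reindex` on the result:

# Sketch: resume a partial download by requesting only the remaining bytes.
import os
import requests

url = 'http://example.com/dataset.bag'   # placeholder URL
local = 'dataset.bag'

start = os.path.getsize(local) if os.path.exists(local) else 0
resp = requests.get(url, headers={'Range': 'bytes=%d-' % start}, stream=True)
resp.raise_for_status()   # expect 206 Partial Content if ranges are supported

with open(local, 'ab') as f:
    for chunk in resp.iter_content(chunk_size=1 << 20):
        f.write(chunk)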
↧
DIGITS plug-ins to read ROSBag
Is there an NVIDIA DIGITS plug-in that can read rosbag data directly for neural network training?
↧
↧
Is rosbag available on ROS2 and how to benchmark it?
Hello everyone,
I'm using ROS2 that I obtained from here: https://github.com/ros2/ros2/wiki/Linux-Development-Setup. Running it on Ubuntu 16.04 with kernel 4.16.7.
I am tasked with benchmarking the writing of bag files to an SSD on ROS2. Note that I am an absolute beginner and will probably not use ROS2 extensively: I am not responsible for its development but for another application (not related to ROS2) that we will be linking to a ROS2 application being developed by others. For the purposes of evaluation, however, I am supposed to benchmark writing bag files to the SSD.
Now, I have been reading about the progress of rosbag development on ROS2, and it seems to be only in its early stages, if I'm not wrong. I would like to ask either for confirmation that rosbag is not available on ROS2 yet and cannot be tested, or, if there is an early version of it, what I should do in order to test its write speed.
Thank you in advance.
Regards,
Nick
↧
Reconstruct PointCloud from RGBD data
I followed [this wiki entry](http://wiki.ros.org/openni_launch/Tutorials/BagRecordingPlayback) and recorded RGBD data using
rosbag record /rgbd/depth_registered/camera_info /rgbd/depth_registered/image_raw/compressed /rgbd/rgb/image_raw/compressed /rgbd/rgb/camera_info --duration=10s -O test
Note that I used compressed images to be able to record more frames.
Then I tried to replay the data to check whether the point cloud would be generated automatically by openni, as advertised on the wiki. I remapped the topics to match the openni naming conventions.
rosbag play test.bag /rgbd/depth_registered/camera_info:=camera/depth_registered/camera_info /rgbd/depth_registered/image_raw:=camera/depth_registered/image_raw /rgbd/rgb/camera_info:=camera/rgb/camera_info /rgbd/rgb/image_raw/compressed:=camera/rgb/image_raw/compressed --clock
But /camera/depth_registered/points stays empty. How can I achieve the same result as in http://wiki.ros.org/openni_launch/Tutorials/BagRecordingPlayback#Playing_back_data using rgbd_launch directly?
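UPDATE: I suspect (though I have not confirmed it) that only the */compressed* topics exist in the bag, while the openni/rgbd nodelets subscribe to the raw *image_raw* topics, so nothing ever feeds the point cloud nodelet. The image_transport *republish* node should be able to convert the streams back; as an experiment I also wrote a small rospy republisher for the colour stream (a sketch using the topic names from my remappings above; the depth stream would need equivalent handling, ideally recorded with the compressedDepth transport instead):

#!/usr/bin/env python
# Sketch: republish a CompressedImage topic as a raw sensor_msgs/Image topic.
import rospy
from sensor_msgs.msg import Image, CompressedImage
from cv_bridge import CvBridge

bridge = CvBridge()

def callback(msg):
    # Decode the JPEG/PNG payload and republish it as a raw image,
    # keeping the original header so the timestamps still line up.
    cv_img = bridge.compressed_imgmsg_to_cv2(msg, desired_encoding='bgr8')
    raw = bridge.cv2_to_imgmsg(cv_img, encoding='bgr8')
    raw.header = msg.header
    pub.publish(raw)

rospy.init_node('decompress_rgb')
pub = rospy.Publisher('camera/rgb/image_raw', Image, queue_size=10)
rospy.Subscriber('camera/rgb/image_raw/compressed', CompressedImage, callback)
rospy.spin()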
↧
Topic rate (hz) of our sensors decreases when I record data with rosbag
Hello, experts.
I used rosbag to record data from our sensors (camera and lidar).
The specs are below:
Camera Node : 1980x1208 (FHD, RGB), 20 hz
Lidar Node : 64 Channel, 20 hz
I recorded the topics on a PC using rosbag, but the rates decrease.
Command: `$ rosbag record camera_topic lidar_points_topic`
When I check the recording on another PC, the rates are as below:
Camera Node: 20 hz -> 5~6 hz
Lidar Node: 20 hz -> 8~10 hz
I don't know what the problem depends on.
How can I debug this problem?
Is there any general guidance that helps with this kind of problem?
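UPDATE: To rule out the measurement on the other PC, I am also checking what actually ended up in the bag with the rosbag Python API, which reports a per-topic frequency estimate:

# Sketch: print the message count and average rate of each topic stored in the bag.
import rosbag

bag = rosbag.Bag('recorded.bag')
info = bag.get_type_and_topic_info()
for topic, topic_info in info.topics.items():
    print('%s: %d msgs, %.2f Hz' % (topic, topic_info.message_count, topic_info.frequency or 0))
bag.close()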
↧
How to record robot odometry in multiple bagfiles after resetting the robot's computer without resetting its odometry and timestamps?
I'm running the rosaria, rosaria_client, hokuyo_node and avt_vimba_camera nodes to record datasets of routes around campus roads with rosbag on a Pioneer 3AT robot. The robot runs Ubuntu 14.04 with kernel 4.4.0 and ROS Indigo.
I have managed to record 3 of the 5 datasets without much trouble. However, the last 2 are giving me trouble: the robot's power runs out before finishing them because the routes are too long. Initially I thought the batteries were defective, and while that was the case, even after changing them the robot still runs out of power before it can finish any of the remaining routes.
The solution my teacher and I came up with was to record the datasets in multiple files. However, that requires saving a bagfile just before the robot's power runs out, shutting down the robot, swapping in spare batteries (we have them) and restarting the recording from the spot where the robot stopped, and that invariably resets the odometry and the timestamps of the recorded messages.
Is there a way to resume the recording while keeping the last recorded odometry and timestamps? If so, how? If not, is there a workaround?
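If resuming in place turns out to be impossible, the workaround I am considering is to record the remaining part of the route as a separate bag and shift its timestamps afterwards with the rosbag Python API, so the two bags can be merged or played back to back. A sketch (the offset value is arbitrary, and the odometry pose itself would still restart at zero, so each Odometry message would additionally need the last known pose applied as a transform):

import rosbag
import rospy

# Example offset; in practice, end time of the first bag minus start time of the second.
OFFSET = rospy.Duration(3600.0)

with rosbag.Bag('part2_shifted.bag', 'w') as outbag:
    for topic, msg, t in rosbag.Bag('part2.bag').read_messages():
        if msg._has_header:
            msg.header.stamp += OFFSET   # shift the stamp consumers actually use
        outbag.write(topic, msg, t + OFFSET)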
↧
↧
bag file is empty after rosbag reindexing and fixing .bag.active file
Hello everyone!
I'm trying to convert a `.bag.active` file into a `.bag` file so I can play it inside the `rqt_bag` program. The thing is, I tried to follow [these steps](https://answers.ros.org/question/40116/rosbag-file-cannot-be-made-bagactive/), which are to:
1. `rosbag reindex` the `.bag.active` file, and then
2. `rosbag fix` the `.bag.active` file into a `.bag` file
I do get the message `Bag migration is successful.` after running `rosbag fix`, but when I try `rosbag play` it says that there are 0 messages to play.
Someone else [also asked](https://answers.ros.org/question/271655/rosbag-reindex-fix-result-in-empty-bag-file/) this question, but there are no answers, so I'm asking again; maybe you can help me.
↧
image_transport rosbag issue
Has anyone else seen problems recording rosbags for all topics of a node using image_transport?
> ERROR [/tmp/buildd/ros-indigo-compressed-depth-image-transport-1.9.2-0trusty-20150729-0539/src/compressed_depth_publisher.cpp:223 (CompressedDepthPublisher::publish)]
>
> ...
>
> Compressed Depth Image Transport - Compression requires single-channel 32bit-floating point or 16bit raw depth images (input format is: bgr8).
I see this with [pointgrey_camera_driver (tag 0.12.1)](https://github.com/ros-drivers/pointgrey_camera_driver.git) when doing a `rosbag record -a`. I've noticed some [issues](https://github.com/ros-drivers/axis_camera/issues/32) posted for other packages using image_transport so I thought I'd post a general question in the event there is some broader issue.
UPDATE:
Running `rosbag record -a -x "(.*)/compressed(.*)"` avoids the errors by not recording the "compressed" topics, but doesn't explain why they occur in the first place.
↧
Cleanly kill rosbag with python script
I am running several ROS nodes (Kinetic) on Raspbian, including a rosbag node that records all published topics. The script below is triggered by pressing a physical button and is meant to ensure that the rosbag is killed cleanly (avoiding a leftover .bag.active file) before shutting the system off. When run manually from the terminal, the script executes without errors and cleanly kills the rosbag. But when the button is used to trigger it, the script is called successfully but stops at the `subprocess.check_call(['rosnode', 'kill', '-a'])` line, so the rosbag is not killed cleanly and the system does not shut down. Why does the script fail to execute the ROS-specific lines when triggered by the button?
Thanks
#!/usr/bin/env python
import RPi.GPIO as GPIO
import subprocess
GPIO.setmode(GPIO.BCM)
# This should turn on the LED
GPIO.setup(12, GPIO.OUT)
GPIO.output(12, GPIO.HIGH)
GPIO.setup(16, GPIO.IN, pull_up_down=GPIO.PUD_UP)
GPIO.wait_for_edge(16, GPIO.FALLING)
subprocess.check_call(['rosnode', 'kill', '-a']) #cleanly kill the rosbag
# -h stands for --power-off
subprocess.call(['shutdown', '-h', 'now'], shell=False)
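UPDATE: My current suspicion (not verified) is that the environment the button handler runs in does not have the ROS setup sourced, so `rosnode` is either not on the PATH or cannot reach the master. A workaround I am testing is to source the setup file explicitly inside the subprocess; the setup path and master URI below are assumptions for a default Kinetic install:

import subprocess

# Drop-in replacement for the check_call line above: run rosnode with the
# ROS environment sourced explicitly.
subprocess.check_call(
    ['bash', '-c',
     'source /opt/ros/kinetic/setup.bash && '
     'export ROS_MASTER_URI=http://localhost:11311 && '
     'rosnode kill -a'])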
↧
Custom image message type
I'm writing to enquire about the possibility of ROS supporting, or of us creating, a custom sensor message type for a video stream (images).
Our product, based on an FPGA, generates the rosbag file and logs one video stream per file. It uses the VideoLink protocol, which is packet based, so all events on the parallel interface need to be packetized. The ROS message data here holds a data block.
The internal structure of a data block consists of a series of data packets, each reflecting one line of the video data or other data arriving via the video link. Short data packets (without actual image data) store specific events that are native to the video link and need to be kept so that all features are available during later playback. Long data packets contain one line of a video frame, but may also contain a metadata info block coming over the video link, as sent from the camera.
Our goal is to be able to access the video (or images) from the recorded bag files using CvBridge or a similar library that supports the OpenCV image format. In the current configuration, the ROS environment doesn't recognize this type of message. Is there a possibility of including some plug-in conversion function, or a different ROS library, that will convert our format to a usable ROS message type?
Thank you for your consideration!
↧
↧
How to record a rosbag with Python
Hello
I use the command line (terminal) to record images (RGB, RGBD), joy, and IMU data:
rosbag record -O subset /camera/depth/image_raw /camera/rgb/image_raw /joy /mobile_base/sensors/imu_data_raw
How can I convert this command into a Python script?
Any help is appreciated.
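UPDATE: The closest I have gotten so far is to launch the same command through `subprocess` and forward Ctrl-C as SIGINT so that rosbag closes the bag file cleanly; the rosbag Python API (`rosbag.Bag` plus subscriber callbacks) would be the alternative if I wanted to write the messages myself:

#!/usr/bin/env python
# Sketch: start the same rosbag record command from Python and stop it cleanly.
import signal
import subprocess

topics = ['/camera/depth/image_raw', '/camera/rgb/image_raw',
          '/joy', '/mobile_base/sensors/imu_data_raw']
proc = subprocess.Popen(['rosbag', 'record', '-O', 'subset'] + topics)

try:
    proc.wait()                       # record until this script is interrupted
except KeyboardInterrupt:
    proc.send_signal(signal.SIGINT)   # let rosbag finish writing the bag
    proc.wait()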
↧
datatype/md5sum error, publishing with rosbag
I am facing this error when I try to launch my pose3D.launch:
> [ERROR] [1537274177.574505613]: Client [/pose3D] wants topic /odo to have datatype/md5sum [interval_map_msg/odo/18550ae0562d2d3f5ac16bd524fd7ee4], but our version has [motion_common_msg/Odometry/53fb859e63296783fbcea5a91fdc0352]. Dropping connection.
I have a rosbag with a topic /vehicle/odometry, and the message type is supposed to be interval_map_msg/odo.
As the error message says, my rosbag seems to be sending the wrong type of message, but when I print the messages from the rosbag with a Python script I can clearly see that their content is correct.
So I am guessing the messages in my rosbag are typed as motion_common_msg/Odometry, and even though the interval_map_msg/odo definition is the same, ROS cannot recognize it as such.
Is it possible to change the type of a msg in a rosbag ?
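UPDATE: Since the two definitions are supposed to be field-for-field identical, the workaround I am going to try is rewriting the bag with the Python API: read each message raw and deserialize the bytes into the other class before writing. A sketch (the import path for the target type is a guess on my part, and this will silently produce garbage if the serialized layouts differ at all):

import rosbag
# Guessed import path for the target type; adjust to the real package/message name.
from interval_map_msg.msg import odo as TargetMsg

with rosbag.Bag('fixed.bag', 'w') as outbag:
    for topic, msg, t in rosbag.Bag('original.bag').read_messages(raw=True):
        data = msg[1]   # with raw=True, msg is (datatype, serialized_bytes, md5sum, ...)
        if topic == '/vehicle/odometry':
            new_msg = TargetMsg()
            new_msg.deserialize(data)   # only valid if the layouts really match
            outbag.write(topic, new_msg, t)
        else:
            outbag.write(topic, msg, t, raw=True)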
↧
Change message type
When I run the following command: `rostopic info /vehicle/odometry`
I get the response below. I would like to change the type of the message from motion_common_msg/Odometry to interval_map/odometry, which I know to be exactly the same definition.
> Type: motion_common_msg/Odometry
>
> Publishers:
> * /rqt_gui_cpp_node_26530
>
> Subscribers: None
How can I do that ?
↧
Export bag of CompressedImage to disk without decoding
I have a bag of compressed images (*/camera/image/compressed*). I wrote a Python script which reads the bagfile using the Python API and writes JPEGs to disk in this way:
cv_img = bridge.compressed_imgmsg_to_cv2(msg, desired_encoding="passthrough")
cv2.imwrite(file_name, cv_img)
However, the image quality is not that great. I looked at the source, and the `compressed_imgmsg_to_cv2` function calls `cv2.imdecode`. So, essentially, I'm decoding the image and then JPEG-compressing it a second time with `imwrite`. Is there a way around this using *cv_bridge*? Should I just use PNG instead?
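UPDATE: For now I am simply writing the `data` field straight to disk, since a CompressedImage already carries the encoded byte stream (`msg.format` says whether it is jpeg or png), so no cv_bridge or cv2 round trip is needed:

# msg.data already holds the encoded JPEG/PNG bytes; write them out untouched.
with open(file_name, 'wb') as f:
    f.write(msg.data)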
↧
↧
jade - rosbag play with remap
Hi,
I would like to play a bag file with rosbag and remap the topics (e.g.: /myTopic:=/playback/myTopic).
I tried it with a **launch file** and also from the **command line**:
rosbag play mybag.bag /myTopic:=/playback/myTopic
The playback starts without any error both ways. I can see a new topic, myNamespace/clock (or /clock if I start it from the command line), but all of the recorded topics are replayed under their original names (e.g. /myTopic), and the replay itself works (I can see the values changing). So I guess the remap is not really working, or am I missing something?
According to the documentation I have seen, this should be possible, but I am not sure it still works in the Jade release.
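A fallback I can use while debugging the remap is to rename the topics inside a copy of the bag with the Python API and play that copy instead (a sketch; moving everything under /playback is just an example):

import rosbag

# Sketch: write a copy of the bag with every topic moved under /playback.
with rosbag.Bag('mybag_playback.bag', 'w') as outbag:
    for topic, msg, t in rosbag.Bag('mybag.bag').read_messages():
        outbag.write('/playback' + topic, msg, t)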
Thanks in advance!
Balint
↧
Creating a bag file out of a image sequence
Let's say I have a series of images and want to create a bag file out of them to use as input for the camera calibration.
Is it even necessary to create the bag for the calibration or is there another way to use the images?
If not, can a bag file be created from an image sequence? I guess using some python-fu it should be possible. Would that be a good idea?
UPDATE: Using a bag file for calibration turns out not to be feasible, as the standard calibration app insists on talking back to the camera, so to use one you would need to hack the script. Another possibility that might work (testing at the moment) is an undocumented script, "tarfile_calibration.py", which can read a tar file containing the image sequence. Simply create a tar file with numbered images named "left%d.png" and "right%d.png" and you're good to go:
rosrun camera_calibration tarfile_calibration.py MyImages.tar --square 0.108 --size 8x6
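UPDATE 2: To answer my own earlier question about the python-fu, building a bag from an image sequence does look straightforward with the rosbag API plus cv_bridge. A sketch (the file pattern, topic name and frame rate are made up, and I have not tried feeding such a bag to the calibration yet):

#!/usr/bin/env python
# Sketch: write a numbered image sequence into a bag as sensor_msgs/Image messages.
import glob
import cv2
import rosbag
import rospy
from cv_bridge import CvBridge

bridge = CvBridge()
stamp = rospy.Time.from_sec(1.0)   # arbitrary start time
dt = rospy.Duration(1.0 / 30.0)    # assumed frame rate

with rosbag.Bag('images.bag', 'w') as bag:
    # Zero-padded numbering keeps the lexicographic sort chronological.
    for path in sorted(glob.glob('left*.png')):
        img = cv2.imread(path)
        msg = bridge.cv2_to_imgmsg(img, encoding='bgr8')
        msg.header.stamp = stamp
        bag.write('/camera/image_raw', msg, stamp)
        stamp += dt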
↧
Array length handling when serializing messages
I am trying to write a rosbag serializer, and I am trying to generate the (de)serialization code from the message definitions, like https://github.com/ros/genmsg does.
I noticed there seems to be an inconsistency in how unbounded array lengths are handled.
- In some cases, it is assumed the array elements are preceded by the length of the array: CameraInfo.
- In some others, the length is calculated from other properties of the message: Image, PointCloud2.
- Then there are cases in which it's not clear whether the length needs to be derived from the length of the whole message itself: CompressedImage.
So, are these cases handled manually, or is there some non-obvious rule?
I have noticed that in the definitions of some unbounded arrays it is stated that the length needs to be computed from other fields.
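For what it is worth, my current understanding (worth verifying against the serializers genpy actually generates) is that every unbounded array, including the `uint8[] data` fields of Image and CompressedImage, is written with a uint32 element-count prefix, and the comments about deriving the size from width and step are semantic constraints rather than a different wire format. A quick check:

# Sketch: serialize an Image and confirm its data array carries a uint32 length prefix.
import struct
from io import BytesIO
from sensor_msgs.msg import Image

msg = Image()
msg.height, msg.width = 2, 3
msg.encoding = 'mono8'
msg.step = 3
msg.data = b'\x00' * 6   # 2 rows x 3 bytes

buf = BytesIO()
msg.serialize(buf)
raw = buf.getvalue()

# 'data' is the last field, so the final 4 + 6 bytes are its length prefix plus payload.
print(struct.unpack('<I', raw[-10:-6])[0])   # prints 6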
↧