Hi All,
I have gathered data using `rosbag`, but listing the topics in a bag and extracting their data is *extremely* slow. It takes over an hour to process a 180 MB `.bag` file that contains 47 data streams, with output files ranging from 330 bytes to 23 MB. The script below works on the same principle as the answer by user `x75` in a [closely related question](http://answers.ros.org/question/9102/how-to-extract-data-from-bag/).
Is there anything I can do to improve the parsing speed? My guess is that the script reads the entire file from start to finish for every topic, whereas the information from the `.bag` file could be saved on the first pass and reused, so the whole file would not have to be re-read every time topic data is extracted from it (see the single-pass sketch after the script).
#!/usr/bin/env bash
# Strip the directory and extension from the bag path to name the output directory.
bagfile="${1##*/}"
bagfile="${bagfile%.*}"
# Create the output directory.
mkdir -p "$bagfile"
# List every topic recorded in the bag and sort the list.
topic_list_orig=$(rostopic list -b "$1")
topic_list=$(echo "$topic_list_orig" | sort)
for topic in $topic_list
do
    outfname="$bagfile/${topic//\//_}.csv"
    echo "Writing CSV data from topic $topic to file $outfname"
    # If the output file already exists, do not overwrite it.
    if [ -e "$outfname" ]
    then
        echo "File $outfname already exists. Taking no action."
    else
        # Dump the topic into a temporary file, then move it into place.
        tmpfile=$(mktemp)
        echo "$tmpfile"
        rostopic echo -p -b "$1" "$topic" > "$tmpfile"
        mv "$tmpfile" "$outfname"
    fi
done
##### END OF FILE #####
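
For reference, here is a minimal sketch of the single-pass idea using the `rosbag` Python API (ROS 1) instead of calling `rostopic echo -b` once per topic. It opens the bag once, walks every message in a single linear scan, and appends a row to a per-topic CSV as it goes. The field flattening is deliberately simple and only illustrative; the helper `flatten_msg` and the output naming are my own assumptions, not part of the rosbag API.

```python
#!/usr/bin/env python
# Single-pass extraction sketch: read the bag once and write one CSV per topic.
# Assumes a ROS 1 environment where the `rosbag` Python package is importable.
import csv
import os
import sys

import rosbag


def flatten_msg(msg, prefix=''):
    """Hypothetical helper: flatten a message's named slots into (name, value) pairs."""
    pairs = []
    for slot in getattr(msg, '__slots__', []):
        value = getattr(msg, slot)
        name = prefix + slot
        if hasattr(value, '__slots__'):          # nested message: recurse into it
            pairs.extend(flatten_msg(value, name + '.'))
        else:
            pairs.append((name, value))
    return pairs


def extract(bag_path, out_dir):
    os.makedirs(out_dir, exist_ok=True)          # Python 3; adjust for Python 2 if needed
    writers = {}                                 # topic -> (file handle, csv writer)
    with rosbag.Bag(bag_path) as bag:
        # One linear scan over the whole bag covers every message of every topic.
        for topic, msg, t in bag.read_messages():
            if topic not in writers:
                fname = os.path.join(out_dir, topic.strip('/').replace('/', '_') + '.csv')
                fh = open(fname, 'w')
                writer = csv.writer(fh)
                writer.writerow(['time'] + [name for name, _ in flatten_msg(msg)])
                writers[topic] = (fh, writer)
            _, writer = writers[topic]
            writer.writerow([t.to_nsec()] + [value for _, value in flatten_msg(msg)])
    for fh, _ in writers.values():
        fh.close()


if __name__ == '__main__':
    extract(sys.argv[1], os.path.splitext(os.path.basename(sys.argv[1]))[0])
```

With this approach the bag is traversed only once, regardless of how many topics it contains, at the cost of formatting the CSV rows yourself rather than relying on `rostopic echo -p`.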