examples.sensors package

Submodules

examples.sensors.cam_mic module

A simple example of using the Wrapper class to publish and listen to audio and video streams.

This script demonstrates the capability to transmit audio and video streams using the MiddlewareCommunicator within the Wrapyfi library. Communication follows the PUB/SUB pattern, allowing messages to be published and listened to across processes or machines.

Demonstrations:
  • Using the Image and AudioChunk messages

  • Transmitting an audio and video stream

  • Applying the PUB/SUB pattern with persistence

  • Transmitting OpenCV images captured from a camera on the publishing end and displaying them on the listener’s end

  • Transmitting a sounddevice (PortAudio with NumPy) audio chunk captured from a microphone, which can be played back on the listener’s end

  • Spawning multiple processes, with different functionality for listeners and publishers

  • Using the single-return wrapper functionality in conjunction with synchronous callbacks
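The pattern above can be sketched as follows, assuming Wrapyfi's documented register/activate_communication API. The class name, topic names, and middleware choice here are illustrative and not taken from cam_mic.py; a stub class stands in when Wrapyfi is not installed so the sketch runs anywhere:

```python
# Minimal PUB/SUB sketch with Image and AudioChunk messages (illustrative
# names; not the actual cam_mic.py implementation).
try:
    from wrapyfi.connect.wrapper import MiddlewareCommunicator
except ImportError:  # stub so the sketch runs without Wrapyfi installed
    class MiddlewareCommunicator:
        @staticmethod
        def register(*_args, **_kwargs):
            def decorator(func):
                return func
            return decorator

        def activate_communication(self, *_args, **_kwargs):
            pass

import numpy as np


class CamMicSketch(MiddlewareCommunicator):
    @MiddlewareCommunicator.register("Image", "zeromq", "CamMicSketch",
                                     "/cam_mic/video", width=320, height=240)
    def collect_cam(self, img_width=320, img_height=240):
        # In the real example the frame comes from an OpenCV camera capture.
        img = np.zeros((img_height, img_width, 3), dtype=np.uint8)
        return img,  # single-return wrapper: methods always return a tuple

    @MiddlewareCommunicator.register("AudioChunk", "zeromq", "CamMicSketch",
                                     "/cam_mic/audio", rate=44100)
    def collect_mic(self, mic_rate=44100, mic_chunk=8820, mic_channels=1):
        # In the real example the chunk comes from a sounddevice microphone stream.
        aud = np.zeros((mic_chunk, mic_channels), dtype=np.float32)
        return (aud, mic_rate),  # AudioChunk payload: (samples, sampling rate)


cam_mic = CamMicSketch()
cam_mic.activate_communication(cam_mic.collect_cam, mode="publish")
cam_mic.activate_communication(cam_mic.collect_mic, mode="publish")
(img,) = cam_mic.collect_cam()
((aud, rate),) = cam_mic.collect_mic()
```

On the listening end, the same class is used with mode="listen", and calls to the decorated methods return the transmitted payloads instead of capturing new ones.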

Requirements:
  • Wrapyfi: Middleware communication wrapper (refer to the Wrapyfi documentation for installation instructions)

  • YARP, ROS, ROS 2, ZeroMQ (refer to the Wrapyfi documentation for installation instructions)

  • OpenCV: Used for handling and creating images (installed with Wrapyfi)

  • sounddevice, soundfile: Used for handling audio

Install using pip:

pip install sounddevice soundfile

Run:
# Alternative 1: Separate audio and video publishing

# On machine 1 (or process 1): The audio stream publishing

python3 cam_mic.py --mode publish --stream audio --mic_source 0

# On machine 2 (or process 2): The video stream publishing

python3 cam_mic.py --mode publish --stream video --img_source 0

# On machine 3 (or process 3): The audio stream listening

python3 cam_mic.py --mode listen --stream audio

# On machine 4 (or process 4): The video stream listening

python3 cam_mic.py --mode listen --stream video

# Alternative 2: Concurrent audio and video publishing

# On machine 1 (or process 1): The audio/video stream publishing

python3 cam_mic.py --mode publish --stream audio video --img_source 0 --mic_source 0

# On machine 2 (or process 2): The audio/video stream listening

python3 cam_mic.py --mode listen --stream audio video

class examples.sensors.cam_mic.CamMic(*args, stream=('audio', 'video'), mic_source=0, mic_rate=44100, mic_chunk=10000, mic_channels=1, img_source=0, img_width=320, img_height=240, mware=None, **kwargs)[source]

Bases: MiddlewareCommunicator

__init__(*args, stream=('audio', 'video'), mic_source=0, mic_rate=44100, mic_chunk=10000, mic_channels=1, img_source=0, img_width=320, img_height=240, mware=None, **kwargs)[source]

Initialises the middleware communicator.

collect_cam(img_width=320, img_height=240, mware=None)[source]

Collect images from the camera.

collect_mic(aud=None, mic_rate=44100, mic_chunk=8820, mic_channels=1, mware=None)[source]

Collect audio from the microphone.

capture_cam_mic()[source]

Capture audio and video from the camera and microphone.

examples.sensors.cam_mic.parse_args()[source]

Parse command line arguments.
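The flags shown in the Run section suggest an interface along these lines; this is a stdlib argparse sketch, and the defaults and help strings are assumptions rather than values read from cam_mic.py:

```python
# Sketch of the CLI implied by the Run examples (--mode, --stream,
# --mic_source, --img_source); defaults are assumed, not confirmed.
import argparse


def parse_args(argv=None):
    parser = argparse.ArgumentParser(
        description="Publish or listen to audio and video streams.")
    parser.add_argument("--mode", default="publish",
                        choices=["publish", "listen"],
                        help="Whether to publish or listen to the streams")
    parser.add_argument("--stream", nargs="+", default=["audio", "video"],
                        choices=["audio", "video"],
                        help="Which stream(s) to transmit")
    parser.add_argument("--mic_source", type=int, default=0,
                        help="Microphone device index")
    parser.add_argument("--img_source", type=int, default=0,
                        help="Camera device index")
    return parser.parse_args(argv)


args = parse_args(["--mode", "listen", "--stream", "audio", "video"])
```

Passing `nargs="+"` is what allows `--stream audio video` in the concurrent publishing alternative above.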

examples.sensors.cam_mic.sound_play(my_aud, blocking=True, device=0)[source]

Play audio using sounddevice.

Parameters:
  • my_aud – Tuple[np.ndarray, int]: The audio chunk and sampling rate to play

  • blocking – bool: Whether to block the execution until the audio is played

  • device – int: The sound device to use for audio playback

Returns:

bool: Whether the audio was played successfully
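A function with this signature and return contract could look like the sketch below, assuming the sounddevice `play` API. The `_player` hook is an illustrative addition (not part of the documented signature) that lets the sketch run on machines without sounddevice or audio hardware:

```python
# Sketch of sound_play's documented contract: play an (audio chunk,
# sampling rate) tuple and return whether playback succeeded.
import numpy as np


def sound_play(my_aud, blocking=True, device=0, _player=None):
    """Play (np.ndarray, int) audio; return True on success, False on failure."""
    aud, rate = my_aud
    try:
        if _player is None:
            import sounddevice as sd  # real playback path
            _player = lambda a, r: sd.play(a, samplerate=r,
                                           blocking=blocking, device=device)
        _player(aud, rate)
        return True
    except Exception:
        return False


chunk = (np.zeros((4410, 1), dtype=np.float32), 44100)  # 0.1 s of silence
ok = sound_play(chunk, _player=lambda a, r: None)  # stub player for testing
```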

examples.sensors.cam_mic.main(args)[source]

Module contents