Advanced search

Media (1)

Keyword: - Tags -/illustrator

Other articles (51)

  • Websites made with MediaSPIP

    2 May 2011

    This page lists some websites based on MediaSPIP.

  • Creating farms of unique websites

    13 April 2011

    MediaSPIP platforms can be installed as a farm, with a single "core" hosted on a dedicated server and used by multiple websites.
    This allows (among other things): implementation costs to be shared between several different projects/individuals; rapid deployment of multiple unique sites; creation of groups of like-minded sites, making it possible to browse media in a more controlled and selective environment than the major "open" (...)

  • Other interesting software

    13 April 2011

    We don’t claim to be the only ones doing what we do, and we certainly don’t claim to be the best; we just try to do it well and to keep getting better.
    The following list covers software that is more or less similar to MediaSPIP, or whose aims MediaSPIP more or less shares.
    We don’t know them and we haven’t tried them, but you can take a peek.
    Videopress
    Website: http://videopress.com/
    License: GNU/GPL v2
    Source code: (...)

On other sites (7051)

  • How can I get all handles when I debug an MFC program with Visual Studio?

    4 December 2024, by Goblet Machine

    I have an MFC program that uses FFmpeg to play video. When I use the DirectX decoder, I found that every time I close the video, the handle count in Task Manager increases by 3 (sometimes the count decreases after a while, but overall it trends upward).
I tried using WinDbg to capture the handles, but apart from the fact that these handles are opened by the NVIDIA driver, there was no useful information. So I think maybe I can get more information in Visual Studio.

    Can anyone give some help?
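
    For reference, the handle-count trend described above can also be logged from a script instead of being read off Task Manager; here is a minimal sketch using the Win32 GetProcessHandleCount API via ctypes (Windows-only; the helper name is ours, not part of any existing tool):

import ctypes
from ctypes import wintypes

kernel32 = ctypes.WinDLL('kernel32', use_last_error=True)
kernel32.OpenProcess.restype = wintypes.HANDLE
kernel32.OpenProcess.argtypes = [wintypes.DWORD, wintypes.BOOL, wintypes.DWORD]
kernel32.GetProcessHandleCount.restype = wintypes.BOOL
kernel32.GetProcessHandleCount.argtypes = [wintypes.HANDLE, ctypes.POINTER(wintypes.DWORD)]
kernel32.CloseHandle.argtypes = [wintypes.HANDLE]

PROCESS_QUERY_LIMITED_INFORMATION = 0x1000

def handle_count(pid: int) -> int:
    """Return the current kernel handle count of the given process."""
    hproc = kernel32.OpenProcess(PROCESS_QUERY_LIMITED_INFORMATION, False, pid)
    if not hproc:
        raise ctypes.WinError(ctypes.get_last_error())
    try:
        count = wintypes.DWORD()
        if not kernel32.GetProcessHandleCount(hproc, ctypes.byref(count)):
            raise ctypes.WinError(ctypes.get_last_error())
        return count.value
    finally:
        kernel32.CloseHandle(hproc)

# Usage sketch: print handle_count(pid) before and after each open/close of the
# video to confirm the leak, then enable handle tracing (e.g. WinDbg's !htrace)
# to get allocation stacks for the leaked handles.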

  • Issues with Publishing and Subscribing Rates for H.264 Video Streaming over RabbitMQ

    7 October 2024, by Luis

    I am working on a project to stream an H.264 video file using RabbitMQ (AMQP protocol) and display it in a web application. The setup involves capturing video frames, encoding them, sending them to RabbitMQ, and then consuming and decoding them on the web application side using Flask and Flask-SocketIO.

    However, I am encountering performance issues with the publishing and subscribing rates in RabbitMQ: I cannot get past 10 messages per second, which is not sufficient for smooth video streaming.
I need help diagnosing and resolving these performance bottlenecks.

    Here is my code:

    • Video capture and publishing script:

import logging
import os
import struct
import subprocess
import time
from contextlib import contextmanager
from datetime import datetime

import cv2
import numpy as np
import pika

# RabbitMQ setup
RABBITMQ_HOST = 'localhost'
EXCHANGE = 'DRONE'
CAM_LOCATION = 'Out_Front'
KEY = f'DRONE_{CAM_LOCATION}'
QUEUE_NAME = f'DRONE_{CAM_LOCATION}_video_queue'

# Path to the H.264 video file
VIDEO_FILE_PATH = 'videos/FPV.h264'

# Configure logging
logging.basicConfig(level=logging.INFO)

@contextmanager
def rabbitmq_channel(host):
    """Context manager to handle RabbitMQ channel setup and teardown."""
    connection = pika.BlockingConnection(pika.ConnectionParameters(host))
    channel = connection.channel()
    try:
        yield channel
    finally:
        connection.close()

def initialize_rabbitmq(channel):
    """Initialize RabbitMQ exchange and queue, and bind them together."""
    channel.exchange_declare(exchange=EXCHANGE, exchange_type='direct')
    channel.queue_declare(queue=QUEUE_NAME)
    channel.queue_bind(exchange=EXCHANGE, queue=QUEUE_NAME, routing_key=KEY)

def send_frame(channel, frame):
    """Encode the video frame using FFmpeg and send it to RabbitMQ."""
    ffmpeg_path = 'ffmpeg/bin/ffmpeg.exe'
    cmd = [
        ffmpeg_path,
        '-f', 'rawvideo',
        '-pix_fmt', 'rgb24',
        '-s', '{}x{}'.format(frame.shape[1], frame.shape[0]),
        '-i', 'pipe:0',
        '-f', 'h264',
        '-vcodec', 'libx264',
        '-pix_fmt', 'yuv420p',
        '-preset', 'ultrafast',
        'pipe:1'
    ]
    
    start_time = time.time()
    process = subprocess.Popen(cmd, stdin=subprocess.PIPE, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
    out, err = process.communicate(input=frame.tobytes())
    encoding_time = time.time() - start_time
    
    if process.returncode != 0:
        logging.error("ffmpeg error: %s", err.decode())
        raise RuntimeError("ffmpeg error")
    
    frame_size = len(out)
    logging.info("Sending frame with shape: %s, size: %d bytes", frame.shape, frame_size)
    timestamp = time.time()
    formatted_timestamp = datetime.fromtimestamp(timestamp).strftime('%H:%M:%S.%f')
    logging.info(f"Timestamp: {timestamp}") 
    logging.info(f"Formatted Timestamp: {formatted_timestamp[:-3]}")
    timestamp_bytes = struct.pack('d', timestamp)
    message_body = timestamp_bytes + out
    channel.basic_publish(exchange=EXCHANGE, routing_key=KEY, body=message_body)
    logging.info(f"Encoding time: {encoding_time:.4f} seconds")

def capture_video(channel):
    """Read video from the file, encode frames, and send them to RabbitMQ."""
    if not os.path.exists(VIDEO_FILE_PATH):
        logging.error("Error: Video file does not exist.")
        return
    cap = cv2.VideoCapture(VIDEO_FILE_PATH)
    if not cap.isOpened():
        logging.error("Error: Could not open video file.")
        return
    try:
        while True:
            start_time = time.time()
            ret, frame = cap.read()
            read_time = time.time() - start_time
            if not ret:
                break
            frame_rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
            frame_rgb = np.ascontiguousarray(frame_rgb) # Ensure the frame is contiguous
            send_frame(channel, frame_rgb)
            cv2.imshow('Video', frame)
            if cv2.waitKey(1) & 0xFF == ord('q'):
                break
            logging.info(f"Read time: {read_time:.4f} seconds")
    finally:
        cap.release()
        cv2.destroyAllWindows()

# Entry point: open a channel, set up the exchange/queue, and start streaming.
if __name__ == '__main__':
    with rabbitmq_channel(RABBITMQ_HOST) as channel:
        initialize_rabbitmq(channel)
        capture_video(channel)

    • The backend (Flask):

import struct
import subprocess
import threading
import time
from datetime import datetime

import cv2
import numpy as np
import pika
from flask import Flask, render_template
from flask_cors import CORS
from flask_socketio import SocketIO

app = Flask(__name__)
CORS(app)
socketio = SocketIO(app, cors_allowed_origins="*")

RABBITMQ_HOST = 'localhost'
EXCHANGE = 'DRONE'
CAM_LOCATION = 'Out_Front'
QUEUE_NAME = f'DRONE_{CAM_LOCATION}_video_queue'

def initialize_rabbitmq():
    connection = pika.BlockingConnection(pika.ConnectionParameters(RABBITMQ_HOST))
    channel = connection.channel()
    channel.exchange_declare(exchange=EXCHANGE, exchange_type='direct')
    channel.queue_declare(queue=QUEUE_NAME)
    channel.queue_bind(exchange=EXCHANGE, queue=QUEUE_NAME, routing_key=f'DRONE_{CAM_LOCATION}')
    return connection, channel

def decode_frame(frame_data):
    # FFmpeg command to decode H.264 frame data
    ffmpeg_path = 'ffmpeg/bin/ffmpeg.exe'
    cmd = [
        ffmpeg_path,
        '-f', 'h264',
        '-i', 'pipe:0',
        '-pix_fmt', 'bgr24',
        '-vcodec', 'rawvideo',
        '-an', '-sn',
        '-f', 'rawvideo',
        'pipe:1'
    ]
    process = subprocess.Popen(cmd, stdin=subprocess.PIPE, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
    start_time = time.time()  # Start timing the decoding process
    out, err = process.communicate(input=frame_data)
    decoding_time = time.time() - start_time  # Calculate decoding time
    
    if process.returncode != 0:
        print("ffmpeg error: ", err.decode())
        return None
    frame_size = (960, 1280, 3)  # frame dimensions expected by the frontend
    frame = np.frombuffer(out, np.uint8).reshape(frame_size)
    print(f"Decoding time: {decoding_time:.4f} seconds")
    return frame

def format_timestamp(ts):
    dt = datetime.fromtimestamp(ts)
    return dt.strftime('%H:%M:%S.%f')[:-3]

def rabbitmq_consumer():
    connection, channel = initialize_rabbitmq()
    for method_frame, properties, body in channel.consume(QUEUE_NAME):
        message_receive_time = time.time()  # Time when the message is received

        # Extract the timestamp from the message body
        timestamp_bytes = body[:8]
        frame_data = body[8:]
        publish_timestamp = struct.unpack('d', timestamp_bytes)[0]

        print(f"Message Receive Time: {message_receive_time:.4f} ({format_timestamp(message_receive_time)})")
        print(f"Publish Time: {publish_timestamp:.4f} ({format_timestamp(publish_timestamp)})")

        frame = decode_frame(frame_data)
        decode_time = time.time() - message_receive_time  # Calculate decode time

        if frame is not None:
            _, buffer = cv2.imencode('.jpg', frame)
            frame_data = buffer.tobytes()
            socketio.emit('video_frame', {'frame': frame_data, 'timestamp': publish_timestamp}, namespace='/')
            emit_time = time.time()  # Time after emitting the frame

            # Log the time taken to emit the frame and its size
            rtt = emit_time - publish_timestamp  # Calculate RTT from publish to emit
            print(f"Current Time: {emit_time:.4f} ({format_timestamp(emit_time)})")
            print(f"RTT: {rtt:.4f} seconds")
            print(f"Emit time: {emit_time - message_receive_time:.4f} seconds, Frame size: {len(frame_data)} bytes")
        channel.basic_ack(method_frame.delivery_tag)

@app.route('/')
def index():
    return render_template('index.html')

@socketio.on('connect')
def handle_connect():
    print('Client connected')

@socketio.on('disconnect')
def handle_disconnect():
    print('Client disconnected')

if __name__ == '__main__':
    consumer_thread = threading.Thread(target=rabbitmq_consumer)
    consumer_thread.daemon = True
    consumer_thread.start()
    socketio.run(app, host='0.0.0.0', port=5000)

    How can I optimize the publishing and subscribing rates to handle a higher number of messages per second?

    Any help or suggestions would be greatly appreciated!

    I attempted to use threading and multiprocessing to handle multiple frames concurrently, and I tried to optimize the frame decoding function to make it faster, but with no success.
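
    Judging from the code above, a likely bottleneck is that send_frame() (and decode_frame() on the backend) launches a brand-new ffmpeg process for every single frame and waits for it to exit; process startup alone can cap throughput at a few frames per second. Below is a minimal sketch of the usual alternative: one long-lived ffmpeg process that raw frames are piped into. It mirrors the flags used in send_frame() above, but the helper itself is hypothetical, a direction to try rather than a drop-in fix.

import subprocess

def start_persistent_encoder(width, height, ffmpeg_path='ffmpeg/bin/ffmpeg.exe'):
    """Launch one long-lived ffmpeg encoder; raw RGB frames go to its stdin.

    This avoids paying process startup and communicate() teardown per frame,
    which the per-frame subprocess.Popen in send_frame() incurs.
    """
    cmd = [
        ffmpeg_path,
        '-f', 'rawvideo',
        '-pix_fmt', 'rgb24',
        '-s', f'{width}x{height}',
        '-i', 'pipe:0',
        '-f', 'h264',
        '-vcodec', 'libx264',
        '-pix_fmt', 'yuv420p',
        '-preset', 'ultrafast',
        'pipe:1',
    ]
    # stdout must be drained on a separate thread, otherwise the pipe
    # buffer fills up and ffmpeg blocks, deadlocking both processes.
    return subprocess.Popen(cmd, stdin=subprocess.PIPE, stdout=subprocess.PIPE)

# Usage sketch, per frame, instead of one Popen per frame:
# encoder = start_persistent_encoder(1280, 960)
# encoder.stdin.write(frame_rgb.tobytes())

    Note that with a single continuous H.264 stream the encoder output no longer arrives in neat per-frame chunks, so you would also need to split it into messages (for example on NAL unit boundaries) before publishing, which is why this is only a sketch of the direction.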

  • ffmpeg GRAY16 stream over network

    28 November 2023, by Norbert P.

    I'm working on a school project where we need to use depth cameras. The camera produces color and depth (in other words, a 16-bit grayscale image). We decided to use ffmpeg, as compression could become very useful later on. For now we have a basic stream running from one PC to the other, with the following settings:

    • rtmp
    • flv as container
    • pixel format AV_PIX_FMT_YUV420P
    • codec AV_CODEC_ID_H264

    The problem we are having is with the grayscale image. Not every codec can cope with this pixel format, and not every protocol can carry a given codec. I got some settings "working", but the receiver side just gets stuck in avformat_open_input().
I have also tested it from the command line, with ffmpeg listening for the connection, and the same thing happens.

    I include a minimal "working" example of the client code. The server can be tested with "ffmpeg.exe -f apng -listen 1 -i rtmp://localhost:9999/stream/stream1 -c copy -f apng -listen 1 rtmp://localhost:2222/live/l" or with the code below. I get no warnings; ffmpeg is the newest version, installed with "vcpkg install --triplet x64-windows ffmpeg[ffmpeg,ffprobe,zlib]" on Windows or with the package manager on Linux.

    The question: did I miss something? How do I get it to work? If you have any better ideas, I would gladly consider them. In the end I need 16 bits of lossless transmission; it could be split between channels etc., which I also tried, with the same effect.

    Client code that would have the camera and connect to the server:

    extern "C" {&#xA;#include <libavutil></libavutil>opt.h>&#xA;#include <libavcodec></libavcodec>avcodec.h>&#xA;#include <libavutil></libavutil>channel_layout.h>&#xA;#include <libavutil></libavutil>common.h>&#xA;#include <libavformat></libavformat>avformat.h>&#xA;#include <libavcodec></libavcodec>avcodec.h>&#xA;#include <libavutil></libavutil>imgutils.h>&#xA;}&#xA;&#xA;int main() {&#xA;&#xA;    std::string container = "apng";&#xA;    AVCodecID codec_id = AV_CODEC_ID_APNG;&#xA;    AVPixelFormat pixFormat = AV_PIX_FMT_GRAY16BE;&#xA;&#xA;    AVFormatContext* format_ctx;&#xA;    AVCodec* out_codec;&#xA;    AVStream* out_stream;&#xA;    AVCodecContext* out_codec_ctx;&#xA;    AVFrame* frame;&#xA;    uint8_t* data;&#xA;&#xA;    std::string server = "rtmp://localhost:9999/stream/stream1";&#xA;&#xA;    int width = 1280, height = 720, fps = 30, bitrate = 1000000;&#xA;&#xA;    //initialize format context for output with flv and no filename&#xA;    avformat_alloc_output_context2(&amp;format_ctx, nullptr, container.c_str(), server.c_str());&#xA;    if (!format_ctx) {&#xA;        return 1;&#xA;    }&#xA;&#xA;    //AVIOContext for accessing the resource indicated by url&#xA;    if (!(format_ctx->oformat->flags &amp; AVFMT_NOFILE)) {&#xA;        int avopen_ret = avio_open(&amp;format_ctx->pb, server.c_str(),&#xA;            AVIO_FLAG_WRITE);// , nullptr, nullptr);&#xA;        if (avopen_ret &lt; 0) {&#xA;            fprintf(stderr, "failed to open stream output context, stream will not work\n");&#xA;            return 1;&#xA;        }&#xA;    }&#xA;&#xA;&#xA;    const AVCodec* tmp_out_codec = avcodec_find_encoder(codec_id);&#xA;    //const AVCodec* tmp_out_codec = avcodec_find_encoder_by_name("hevc");&#xA;    out_codec = const_cast(tmp_out_codec);&#xA;    if (!(out_codec)) {&#xA;        fprintf(stderr, "Could not find encoder for &#x27;%s&#x27;\n",&#xA;            avcodec_get_name(codec_id));&#xA;&#xA;        return 1;&#xA;    }&#xA;&#xA;    out_stream = avformat_new_stream(format_ctx, out_codec);&#xA;    if (!out_stream) {&#xA;        fprintf(stderr, "Could not allocate stream\n");&#xA;        return 1;&#xA;    }&#xA;&#xA;    out_codec_ctx = avcodec_alloc_context3(out_codec);&#xA;&#xA;    const AVRational timebase = { 60000, fps };&#xA;    const AVRational dst_fps = { fps, 1 };&#xA;    av_log_set_level(AV_LOG_VERBOSE);&#xA;    //codec_ctx->codec_tag = 0;&#xA;    //codec_ctx->codec_id = codec_id;&#xA;    out_codec_ctx->codec_type = AVMEDIA_TYPE_VIDEO;&#xA;    out_codec_ctx->width = width;&#xA;    out_codec_ctx->height = height;&#xA;    out_codec_ctx->gop_size = 1;&#xA;    out_codec_ctx->time_base = timebase;&#xA;    out_codec_ctx->pix_fmt = pixFormat;&#xA;    out_codec_ctx->framerate = dst_fps;&#xA;    out_codec_ctx->time_base = av_inv_q(dst_fps);&#xA;    out_codec_ctx->bit_rate = bitrate;&#xA;    //if (fctx->oformat->flags &amp; AVFMT_GLOBALHEADER)&#xA;    //{&#xA;    //    codec_ctx->flags |= AV_CODEC_FLAG_GLOBAL_HEADER;&#xA;    //}&#xA;&#xA;    out_stream->time_base = out_codec_ctx->time_base; //will be set afterwards by avformat_write_header to 1/1000&#xA;&#xA;    int ret = avcodec_parameters_from_context(out_stream->codecpar, out_codec_ctx);&#xA;    if (ret &lt; 0)&#xA;    {&#xA;        fprintf(stderr, "Could not initialize stream codec parameters!\n");&#xA;        return 1;&#xA;    }&#xA;&#xA;    AVDictionary* codec_options = nullptr;&#xA;    av_dict_set(&amp;codec_options, "tune", "zerolatency", 0);&#xA;&#xA;    // open video encoder&#xA;    ret = avcodec_open2(out_codec_ctx, out_codec, 
&amp;codec_options);&#xA;    if (ret &lt; 0)&#xA;    {&#xA;        fprintf(stderr, "Could not open video encoder!\n");&#xA;        return 1;&#xA;    }&#xA;    av_dict_free(&amp;codec_options);&#xA;&#xA;    out_stream->codecpar->extradata_size = out_codec_ctx->extradata_size;&#xA;    out_stream->codecpar->extradata = static_cast(av_mallocz(out_codec_ctx->extradata_size));&#xA;    memcpy(out_stream->codecpar->extradata, out_codec_ctx->extradata, out_codec_ctx->extradata_size);&#xA;&#xA;    av_dump_format(format_ctx, 0, server.c_str(), 1);&#xA;&#xA;    frame = av_frame_alloc();&#xA;&#xA;    int sz = av_image_get_buffer_size(pixFormat, width, height, 32);&#xA;#ifdef _WIN32&#xA;    data = (uint8_t*)_aligned_malloc(sz, 32);&#xA;    if (data == NULL)&#xA;        return ENOMEM;&#xA;#else&#xA;    ret = posix_memalign(reinterpret_cast(&amp;data), 32, sz);&#xA;#endif&#xA;    av_image_fill_arrays(frame->data, frame->linesize, data, pixFormat, width, height, 32);&#xA;    frame->format = pixFormat;&#xA;    frame->width = width;&#xA;    frame->height = height;&#xA;    frame->pts = 1;&#xA;    if (avformat_write_header(format_ctx, nullptr) &lt; 0) //Header making problems!!!&#xA;    {&#xA;        fprintf(stderr, "Could not write header!\n");&#xA;        return 1;&#xA;    }&#xA;&#xA;    printf("stream time base = %d / %d \n", out_stream->time_base.num, out_stream->time_base.den);&#xA;&#xA;    double inv_stream_timebase = (double)out_stream->time_base.den / (double)out_stream->time_base.num;&#xA;    printf("Init OK\n");&#xA;    /*  Init phase end*/&#xA;    int dts = 0;&#xA;    int frameNo = 0;&#xA;&#xA;    while (true) {&#xA;        //Fill dummy frame with something&#xA;        for (int y = 0; y &lt; height; y&#x2B;&#x2B;) {&#xA;            uint16_t color = ((y &#x2B; frameNo) * 256) % (256 * 256);&#xA;            for (int x = 0; x &lt; width; x&#x2B;&#x2B;) {&#xA;                data[x&#x2B;y*width] = color;&#xA;            }&#xA;        }&#xA;&#xA;        memcpy(frame->data[0], data, 1280 * 720 * sizeof(uint16_t));&#xA;        AVPacket* pkt = av_packet_alloc();&#xA;&#xA;        int ret = avcodec_send_frame(out_codec_ctx, frame);&#xA;        if (ret &lt; 0)&#xA;        {&#xA;            fprintf(stderr, "Error sending frame to codec context!\n");&#xA;            return ret;&#xA;        }&#xA;        while (ret >= 0) {&#xA;            ret = avcodec_receive_packet(out_codec_ctx, pkt);&#xA;            if (ret == AVERROR(EAGAIN) || ret == AVERROR_EOF)&#xA;                break;&#xA;            else if (ret &lt; 0) {&#xA;                fprintf(stderr, "Error during encoding\n");&#xA;                break;&#xA;            }&#xA;            pkt->dts = dts;&#xA;            pkt->pts = dts;&#xA;            dts &#x2B;= 33;&#xA;            av_write_frame(format_ctx, pkt);&#xA;            frameNo&#x2B;&#x2B;;&#xA;            av_packet_unref(pkt);&#xA;        }&#xA;        printf("Streamed %d frames\n", frameNo);&#xA;    }&#xA;    return 0;&#xA;}&#xA;

    And the part of the server that should receive; this is the code where it stops and waits:

    extern "C" {&#xA;#include <libavcodec></libavcodec>avcodec.h>&#xA;#include <libavformat></libavformat>avformat.h>&#xA;#include <libavformat></libavformat>avio.h>&#xA;}&#xA;&#xA;int main() {&#xA;    AVFormatContext* fmt_ctx = NULL;&#xA;    av_log_set_level(AV_LOG_VERBOSE);&#xA;    AVDictionary* options = nullptr;&#xA;    av_dict_set(&amp;options, "protocol_whitelist", "file,udp,rtp,tcp,rtmp,rtsp,hls", 0);&#xA;    av_dict_set(&amp;options, "timeout", "500000", 0); // Timeout in microseconds &#xA;&#xA;//Next Line hangs   &#xA;    int ret = avformat_open_input(&amp;fmt_ctx, "rtmp://localhost:9999/stream/stream1", NULL, &amp;options);&#xA;    if (ret != 0) {&#xA;        fprintf(stderr, "Could not open RTMP stream\n");&#xA;        return -1;&#xA;    }&#xA;&#xA;    // Find the first video stream&#xA;    ret = avformat_find_stream_info(fmt_ctx, nullptr);&#xA;    if (ret &lt; 0) {&#xA;        return ret;&#xA;    }&#xA;    //...&#xA;} &#xA;&#xA;

    Edit: I tried to just create an animated PNG and stream it from one console window to another, to rule out any programming mistakes on my side. The result was the same: I could not get the 16-bit PNG-encoded stream to work. The receiver hung waiting, and when the file ended it closed with zero frames received in total.

    I did manage to get one other thing working: to avoid encoding gray frames as YUV420, I installed ffmpeg with libx264 support (I thought it was the same as H264, which in code it is, but it adds support for more pixel formats). I then used H264 again, but with GRAY8 and a doubled image width, reconstructing the 16-bit image on the other side.
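
    For reference, the "doubled width" trick described above amounts to viewing each 16-bit pixel as two adjacent 8-bit pixels and undoing that view after decoding. A minimal numpy sketch of the idea (the function names are ours; note the round trip only survives if the encode is genuinely lossless, e.g. libx264 with -qp 0):

import numpy as np

def pack_gray16_to_gray8(frame16: np.ndarray) -> np.ndarray:
    """(H, W) uint16 -> (H, 2*W) uint8; byte order follows machine endianness."""
    assert frame16.dtype == np.uint16 and frame16.flags['C_CONTIGUOUS']
    return frame16.view(np.uint8)

def unpack_gray8_to_gray16(frame8: np.ndarray) -> np.ndarray:
    """(H, 2*W) uint8 -> (H, W) uint16; inverse of pack_gray16_to_gray8."""
    assert frame8.dtype == np.uint8 and frame8.shape[-1] % 2 == 0
    return np.ascontiguousarray(frame8).view(np.uint16)

# Usage sketch:
# wide8 = pack_gray16_to_gray8(depth16)        # feed this to the GRAY8 encoder
# depth16_back = unpack_gray8_to_gray16(wide8) # after decoding on the receiver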

    Maybe as a side note: I could not get any other container formats to work. Is "flv" the only option here? Could I get more performance if I changed it to... what?
