
Other articles (73)
-
MediaSPIP v0.2
21 June 2013
MediaSPIP 0.2 is the first stable version of MediaSPIP.
Its official release date is 21 June 2013, and it is announced here.
The zip file provided here contains only the MediaSPIP sources, in standalone form.
As with the previous version, all of the software dependencies must be installed manually on the server.
If you want to use this archive for a farm-mode installation, you will also need to make further modifications (...) -
Making files available
14 April 2011
By default, when it is first set up, MediaSPIP does not let visitors download the files, whether they are originals or the result of their transformation or encoding. It only allows them to be viewed.
However, it is possible and easy to give visitors access to these documents, in various forms.
All of this is handled in the skeleton configuration page. You need to go to the channel's administration area and choose, in the navigation, (...) -
MediaSPIP version 0.1 Beta
16 April 2011
MediaSPIP 0.1 beta is the first version of MediaSPIP declared "usable".
The zip file provided here contains only the MediaSPIP sources, in standalone form.
To get a working installation, all of the software dependencies must be installed manually on the server.
If you want to use this archive for a farm-mode installation, you will also need to make further modifications (...)
On other sites (14784)
-
MJPEG decoding is 3x slower when opening a V4L2 input device [closed]
26 October 2024, by Xenonic
I'm trying to decode an MJPEG video stream coming from a webcam, but I'm hitting some performance blockers when using FFmpeg's C API in my application. I've recreated the problem using the example video decoder, where I simply open the V4L2 input device, read packets, and push them to the decoder. What's strange is that if I get my input packets from the V4L2 device instead of from a file, the avcodec_send_packet call to the decoder is nearly 3x slower. After further poking around, I narrowed the issue down to whether or not I open the V4L2 device at all.

Let's look at a minimal example demonstrating this behavior:


extern "C"
{
#include <libavcodec></libavcodec>avcodec.h>
#include <libavformat></libavformat>avformat.h>
#include <libavutil></libavutil>opt.h>
#include <libavdevice></libavdevice>avdevice.h>
}

#define INBUF_SIZE 4096

static void decode(AVCodecContext *dec_ctx, AVFrame *frame, AVPacket *pkt)
{
    if (avcodec_send_packet(dec_ctx, pkt) < 0)
        exit(1);

    int ret = 0;
    while (ret >= 0) {
        ret = avcodec_receive_frame(dec_ctx, frame);
        if (ret == AVERROR(EAGAIN) || ret == AVERROR_EOF)
            return;
        else if (ret < 0)
            exit(1);

        // Here we'd save off the decoded frame, but that's not necessary for the example.
    }
}

int main(int argc, char **argv)
{
    const char *filename;
    const AVCodec *codec;
    AVCodecParserContext *parser;
    AVCodecContext *c = NULL;
    FILE *f;
    AVFrame *frame;
    uint8_t inbuf[INBUF_SIZE + AV_INPUT_BUFFER_PADDING_SIZE];
    uint8_t *data;
    size_t data_size;
    int ret;
    int eof;
    AVPacket *pkt;

    filename = argv[1];

    pkt = av_packet_alloc();
    if (!pkt)
        exit(1);

    /* set end of buffer to 0 (this ensures that no overreading happens for damaged MPEG streams) */
    memset(inbuf + INBUF_SIZE, 0, AV_INPUT_BUFFER_PADDING_SIZE);

    // Use MJPEG instead of the example's MPEG1
    //codec = avcodec_find_decoder(AV_CODEC_ID_MPEG1VIDEO);
    codec = avcodec_find_decoder(AV_CODEC_ID_MJPEG);
    if (!codec) {
        fprintf(stderr, "Codec not found\n");
        exit(1);
    }

    parser = av_parser_init(codec->id);
    if (!parser) {
        fprintf(stderr, "parser not found\n");
        exit(1);
    }

    c = avcodec_alloc_context3(codec);
    if (!c) {
        fprintf(stderr, "Could not allocate video codec context\n");
        exit(1);
    }

    if (avcodec_open2(c, codec, NULL) < 0) {
        fprintf(stderr, "Could not open codec\n");
        exit(1);
    }

    c->pix_fmt = AV_PIX_FMT_YUVJ422P;

    f = fopen(filename, "rb");
    if (!f) {
        fprintf(stderr, "Could not open %s\n", filename);
        exit(1);
    }

    frame = av_frame_alloc();
    if (!frame) {
        fprintf(stderr, "Could not allocate video frame\n");
        exit(1);
    }

    avdevice_register_all();
    auto* inputFormat = av_find_input_format("v4l2");
    AVDictionary* options = nullptr;
    av_dict_set(&options, "input_format", "mjpeg", 0);
    av_dict_set(&options, "video_size", "1920x1080", 0);

    AVFormatContext* fmtCtx = nullptr;

    // Commenting this line out results in fast decoding!
    // Notice how fmtCtx is not even used anywhere, we still read packets from the file
    avformat_open_input(&fmtCtx, "/dev/video0", inputFormat, &options);

    // Just parse packets from a file and send them to the decoder.
    do {
        data_size = fread(inbuf, 1, INBUF_SIZE, f);
        if (ferror(f))
            break;
        eof = !data_size;

        data = inbuf;
        while (data_size > 0 || eof) {
            ret = av_parser_parse2(parser, c, &pkt->data, &pkt->size,
                                   data, data_size, AV_NOPTS_VALUE, AV_NOPTS_VALUE, 0);
            if (ret < 0) {
                fprintf(stderr, "Error while parsing\n");
                exit(1);
            }
            data += ret;
            data_size -= ret;

            if (pkt->size)
                decode(c, frame, pkt);
            else if (eof)
                break;
        }
    } while (!eof);

    return 0;
}
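
For reference, the V4L2 packet path mentioned at the top (reading compressed packets from the opened device instead of from the file) is not shown in this minimal example. A rough sketch of it, assuming the same fmtCtx opened above and the decoder set up as in main(), would look like this:

// Sketch only: pull MJPEG packets straight from the V4L2 device and feed the decoder.
if (avformat_find_stream_info(fmtCtx, nullptr) < 0)
    exit(1);
while (av_read_frame(fmtCtx, pkt) >= 0) {
    // With input_format set to mjpeg, each packet is one compressed JPEG frame.
    decode(c, frame, pkt);
    av_packet_unref(pkt);
}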



Here's a histogram of the CPU time spent in that avcodec_send_packet function call, with and without opening the device (by commenting out that avformat_open_input call above).

Without opening the V4L2 device: (histogram image)

With opening the V4L2 device: (histogram image)

Interestingly, we can see that a significant number of calls land in the 25 ms time bin! But most of them take 78 ms... why?
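
(For what it's worth, per-call timings like the ones in these histograms can be gathered with a simple wall-clock wrapper such as the sketch below; this is only an illustration of the measurement, not necessarily how the histograms above were produced.)

// Sketch: wrap avcodec_send_packet() to log how long each call takes.
// Drop-in replacement for the call inside decode() above; needs <chrono> and <cstdio>.
static int send_packet_timed(AVCodecContext *dec_ctx, AVPacket *pkt)
{
    auto t0 = std::chrono::steady_clock::now();
    int err = avcodec_send_packet(dec_ctx, pkt);
    auto t1 = std::chrono::steady_clock::now();
    double ms = std::chrono::duration<double, std::milli>(t1 - t0).count();
    fprintf(stderr, "avcodec_send_packet: %.2f ms (ret=%d)\n", ms, err);
    return err;
}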


So what's going on here? Why does opening the device destroy my decode performance?


Additionally, if I run a seemingly equivalent pipeline through the ffmpeg tool itself, I don't hit this problem. Running this command:


ffmpeg -f v4l2 -input_format mjpeg -video_size 1920x1080 -r 30 -c:v mjpeg -i /dev/video0 -c:v copy out.mjpeg



generates an output file with a reported speed of just barely over 1.0x, i.e. 30 FPS. Perfect. So why doesn't the C API give me the same results? One thing to note is that I do get periodic errors from the MJPEG decoder (about every second); I'm not sure whether these are a concern or not:


[mjpeg @ 0x5590d6b7b0] Application provided invalid, non monotonically increasing dts to muxer in stream 0: 27 >= 27
[mjpeg @ 0x5590d6b7b0] Application provided invalid, non monotonically increasing dts to muxer in stream 0: 30 >= 30
...



I'm running on a Raspberry Pi CM4 with FFmpeg 6.1.1.


-
Segmentation fault with avcodec_encode_video2() while encoding H.264
16 July 2015, by Baris Demiray
I'm trying to convert a cv::Mat to an AVFrame to then encode it in H.264, and I wanted to start from a simple example, as I'm a newbie in both. So I first read in a JPEG file, then do the pixel format conversion with sws_scale() from AV_PIX_FMT_BGR24 to AV_PIX_FMT_YUV420P, keeping the dimensions the same, and it all goes fine until I call avcodec_encode_video2().
I read quite a few discussions regarding AVFrame allocation, and the question "segmentation fault while avcodec_encode_video2" seemed like a match, but I just can't see what I'm missing or getting wrong.
Here is the minimal code with which you can reproduce the crash; it should be compiled with:
g++ -o OpenCV2FFmpeg OpenCV2FFmpeg.cpp -lopencv_imgproc -lopencv_highgui -lopencv_core -lswscale -lavutil -lavcodec -lavformat
Its output on my system:

cv::Mat [width=420, height=315, depth=0, channels=3, step=1260]
I'll soon crash..
Segmentation fault

And that sample.jpg file's details, from the identify tool:

~temporary/sample.jpg JPEG 420x315 420x315+0+0 8-bit sRGB 38.3KB 0.000u 0:00.000
Please note that I’m trying to create a video out of a single image, just to keep things simple.
#include <iostream>
#include <cassert>

using namespace std;

extern "C" {
#include <libavcodec/avcodec.h>
#include <libswscale/swscale.h>
#include <libavformat/avformat.h>
}

#include <opencv2/core/core.hpp>
#include <opencv2/highgui/highgui.hpp>
const string TEST_IMAGE = "/home/baris/temporary/sample.jpg";

int main(int /*argc*/, char** argv)
{
    av_register_all();
    avcodec_register_all();

    /**
     * Initialise the encoder
     */
    AVCodec *h264encoder = avcodec_find_encoder(AV_CODEC_ID_H264);
    AVFormatContext *cv2avFormatContext = avformat_alloc_context();

    /**
     * Create a stream and allocate frames
     */
    AVStream *h264outputstream = avformat_new_stream(cv2avFormatContext, h264encoder);
    avcodec_get_context_defaults3(h264outputstream->codec, h264encoder);
    AVFrame *sourceAvFrame = av_frame_alloc(), *destAvFrame = av_frame_alloc();
    int got_frame;

    /**
     * Pixel formats for the input and the output
     */
    AVPixelFormat sourcePixelFormat = AV_PIX_FMT_BGR24;
    AVPixelFormat destPixelFormat = AV_PIX_FMT_YUV420P;

    /**
     * Create cv::Mat
     */
    cv::Mat cvFrame = cv::imread(TEST_IMAGE, CV_LOAD_IMAGE_COLOR);
    int width = cvFrame.size().width, height = cvFrame.size().height;
    cerr << "cv::Mat [width=" << width << ", height=" << height << ", depth=" << cvFrame.depth() << ", channels=" << cvFrame.channels() << ", step=" << cvFrame.step << "]" << endl;

    h264outputstream->codec->pix_fmt = destPixelFormat;
    h264outputstream->codec->width = cvFrame.cols;
    h264outputstream->codec->height = cvFrame.rows;

    /**
     * Prepare the conversion context
     */
    SwsContext *bgr2yuvcontext = sws_getContext(width, height,
                                                sourcePixelFormat,
                                                h264outputstream->codec->width, h264outputstream->codec->height,
                                                h264outputstream->codec->pix_fmt,
                                                SWS_BICUBIC, NULL, NULL, NULL);

    /**
     * Convert and encode frames
     */
    for (uint i = 0; i < 250; i++)
    {
        /**
         * Allocate source frame, i.e. input to sws_scale()
         */
        avpicture_alloc((AVPicture*)sourceAvFrame, sourcePixelFormat, width, height);
        for (int h = 0; h < height; h++)
            memcpy(&(sourceAvFrame->data[0][h*sourceAvFrame->linesize[0]]), &(cvFrame.data[h*cvFrame.step]), width*3);

        /**
         * Allocate destination frame, i.e. output from sws_scale()
         */
        avpicture_alloc((AVPicture *)destAvFrame, destPixelFormat, width, height);

        sws_scale(bgr2yuvcontext, sourceAvFrame->data, sourceAvFrame->linesize,
                  0, height, destAvFrame->data, destAvFrame->linesize);

        /**
         * Prepare an AVPacket for encoded output
         */
        AVPacket avEncodedPacket;
        av_init_packet(&avEncodedPacket);
        avEncodedPacket.data = NULL;
        avEncodedPacket.size = 0;
        // av_free_packet(&avEncodedPacket); w/ or w/o result doesn't change

        cerr << "I'll soon crash.." << endl;
        if (avcodec_encode_video2(h264outputstream->codec, &avEncodedPacket, destAvFrame, &got_frame) < 0)
            exit(1);

        cerr << "Checking if we have a frame" << endl;
        if (got_frame)
            av_write_frame(cv2avFormatContext, &avEncodedPacket);

        av_free_packet(&avEncodedPacket);
        av_frame_free(&sourceAvFrame);
        av_frame_free(&destAvFrame);
    }
}
Thanks in advance!
EDIT: And the stack trace after the crash:
Thread 2 (Thread 0x7fffe5506700 (LWP 10005)):
#0 0x00007ffff4bf6c5d in poll () at /lib64/libc.so.6
#1 0x00007fffe9073268 in () at /usr/lib64/libusb-1.0.so.0
#2 0x00007ffff47010a4 in start_thread () at /lib64/libpthread.so.0
#3 0x00007ffff4bff08d in clone () at /lib64/libc.so.6
Thread 1 (Thread 0x7ffff7f869c0 (LWP 10001)):
#0 0x00007ffff5ecc7dc in avcodec_encode_video2 () at /usr/lib64/libavcodec.so.56
#1 0x00000000004019b6 in main(int, char**) (argv=0x7fffffffd3d8) at ../src/OpenCV2FFmpeg.cpp:99

EDIT2: The problem was that I hadn't called avcodec_open2() on the codec, as spotted by Ronald. The final version of the code is at https://github.com/barisdemiray/opencv2ffmpeg/, with leaks and probably other problems; I hope to improve it while learning both libraries.
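
(A minimal sketch of that fix, under the same setup as above: the codec context has to be opened with avcodec_open2() after its parameters are set and before the encode loop. The time_base values below are an assumption, shown only because the H.264 encoder needs a valid time base.)

// Sketch of the fix from EDIT2: open the codec context before entering the encode loop.
h264outputstream->codec->time_base.num = 1;   // assumed frame rate of 25 fps
h264outputstream->codec->time_base.den = 25;
if (avcodec_open2(h264outputstream->codec, h264encoder, NULL) < 0)
{
    cerr << "Could not open the H.264 encoder" << endl;
    return 1;
}
-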
How to restream IPTV playlist with Nginx RTMP, FFmpeg, and Python without recording, but getting HTTP 403 error? [closed]
1 April, by boyuna1720
I have an IPTV playlist from a provider that allows only one user to connect and watch at a time. I want to restream this playlist through my own server, without recording it and in a lightweight manner. I'm using Nginx RTMP, FFmpeg, and Python TCP sockets for the setup, but I keep getting an HTTP 403 error when trying to access the stream.


Here’s a summary of my setup :


Nginx RTMP : Used for streaming.


FFmpeg : Used to handle the video stream.


Python TCP : Trying to handle the connection between my server and the IPTV source.


#!/usr/bin/env python3

import sys
import socket
import threading
import requests
import time

def accept_connections(server_socket, clients, clients_lock):
    """
    Continuously accept new client connections, perform a minimal read of the
    client's HTTP request, send back a valid HTTP/1.1 response header, and
    add the socket to the broadcast list.
    """
    while True:
        client_socket, addr = server_socket.accept()
        print(f"[+] New client connected from {addr}")
        threading.Thread(
            target=handle_client,
            args=(client_socket, addr, clients, clients_lock),
            daemon=True
        ).start()

def handle_client(client_socket, addr, clients, clients_lock):
    """
    Read the client's HTTP request minimally, send back a proper HTTP/1.1 200 OK header,
    and then add the socket to our broadcast list.
    """
    try:
        # Read until we reach the end of the request headers
        request_data = b""
        while b"\r\n\r\n" not in request_data:
            chunk = client_socket.recv(1024)
            if not chunk:
                break
            request_data += chunk

        # Send a proper HTTP response header to satisfy clients like curl
        response_header = (
            "HTTP/1.1 200 OK\r\n"
            "Content-Type: application/octet-stream\r\n"
            "Connection: close\r\n"
            "\r\n"
        )
        client_socket.sendall(response_header.encode("utf-8"))

        with clients_lock:
            clients.append(client_socket)
        print(f"[+] Client from {addr} is ready to receive stream.")
    except Exception as e:
        print(f"[!] Error handling client {addr}: {e}")
        client_socket.close()

def read_from_source_and_broadcast(source_url, clients, clients_lock):
    """
    Continuously connect to the source URL (following redirects) using custom headers
    so that it mimics a curl-like request. In case of connection errors (e.g. connection reset),
    wait a bit and then try again.

    For each successful connection, stream data in chunks and broadcast each chunk
    to all connected clients.
    """
    # Set custom headers to mimic curl
    headers = {
        "User-Agent": "curl/8.5.0",
        "Accept": "*/*"
    }

    while True:
        try:
            print(f"[+] Fetching from source URL (with redirects): {source_url}")
            with requests.get(source_url, stream=True, allow_redirects=True, headers=headers) as resp:
                if resp.status_code >= 400:
                    print(f"[!] Got HTTP {resp.status_code} from the source. Retrying in 5 seconds.")
                    time.sleep(5)
                    continue

                # Stream data and broadcast each chunk
                for chunk in resp.iter_content(chunk_size=4096):
                    if not chunk:
                        continue
                    with clients_lock:
                        for c in clients[:]:
                            try:
                                c.sendall(chunk)
                            except Exception as e:
                                print(f"[!] A client disconnected or send failed: {e}")
                                c.close()
                                clients.remove(c)
        except requests.exceptions.RequestException as e:
            print(f"[!] Source connection error, retrying in 5 seconds: {e}")
            time.sleep(5)

def main():
    if len(sys.argv) != 3:
        print(f"Usage: {sys.argv[0]} <source_url> <port>")
        sys.exit(1)

    source_url = sys.argv[1]
    port = int(sys.argv[2])

    # Create a TCP socket to listen for incoming connections
    server_socket = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    server_socket.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    server_socket.bind(("0.0.0.0", port))
    server_socket.listen(5)
    print(f"[+] Listening on port {port}...")

    # List of currently connected client sockets
    clients = []
    clients_lock = threading.Lock()

    # Start a thread to accept incoming client connections
    t_accept = threading.Thread(
        target=accept_connections,
        args=(server_socket, clients, clients_lock),
        daemon=True
    )
    t_accept.start()

    # Continuously read from the source URL and broadcast to connected clients
    read_from_source_and_broadcast(source_url, clients, clients_lock)

if __name__ == "__main__":
    main()


When I run the command

python3 proxy_server.py 'http://channelurl' 9999

I get this error:

[+] Listening on port 9999...
[+] Fetching from source URL (with redirects): http://ate91060.cdn-akm.me:80/dc31a19e5a6a/fc5e38e28e/325973
[!] Got HTTP 403 from the source. Retrying in 5 seconds.
^CTraceback (most recent call last):
 File "/home/namepirate58/nginx-1.23.1/proxy_server.py", line 127, in <module>
 main()
 File "/home/namepirate58/nginx-1.23.1/proxy_server.py", line 124, in main
 read_from_source_and_broadcast(source_url, clients, clients_lock)
 File "/home/namepirate58/nginx-1.23.1/proxy_server.py", line 77, in read_from_source_and_broadcast
 time.sleep(5)
KeyboardInterrupt