
Other articles (38)
-
Contributing to its translation
10 April 2011
You can help us improve the wording used in the software, or translate it into any new language, allowing it to reach new linguistic communities.
This is done through SPIP's translation interface, where all of MediaSPIP's language modules are available. You just need to subscribe to the translators' mailing list to ask for more information.
At present, MediaSPIP is only available in French and (...)
-
Supporting all media types
13 April 2011
Unlike most software and media-sharing platforms, MediaSPIP aims to manage as many different media types as possible. The following are just a few examples from an ever-expanding list of supported formats: images: png, gif, jpg, bmp and more; audio: MP3, Ogg, Wav and more; video: AVI, MP4, OGV, mpg, mov, wmv and more; text, code and other data: OpenOffice, Microsoft Office (Word, PowerPoint, Excel), web (html, CSS), LaTeX, Google Earth and (...)
-
HTML5 audio and video support
13 April 2011
MediaSPIP uses HTML5 video and audio tags to play multimedia files, taking advantage of the latest W3C innovations supported by modern browsers.
The player has been created specifically for MediaSPIP and can easily be adapted to fit in with a specific theme.
For older browsers, the Flowplayer Flash fallback is used.
MediaSPIP allows for media playback on major mobile platforms with the above (...)
On other sites (3730)
-
Generate a thumbnail image every X seconds
1 August 2013, by rabotalius
I'm trying to generate a thumbnail every minute, but I'm getting the following error:
ffmpeg -i Hung.mp4 -f image2 -vf fps=fps=1/60 img%03d.jpg
[mjpeg @ 0x2f06da0] bitrate tolerance too small for bitrate
[mjpeg @ 0x2f1b7c0] ff_frame_thread_encoder_init failed
Output #0, image2, to 'img%03d.jpg':
Metadata:
major_brand : isom
minor_version : 512
compatible_brands: mp41
Stream #0:0(eng): Video: mjpeg, yuvj420p, 1280x536 [SAR 1:1 DAR 160:67], q=2-31, 200 kb/s, 90k tbn, 0.02 tbc
Metadata:
creation_time : 2013-07-22 00:01:41
handler_name : VideoHandler
Stream mapping:
Stream #0:0 -> #0:0 (h264 -> mjpeg)
Error while opening encoder for output stream #0:0 - maybe incorrect parameters such as bit_rate, rate, width or height
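Not part of the original question, but a workaround commonly suggested for this mjpeg "bitrate tolerance" failure is to stop relying on the default 200 kb/s rate control and set the JPEG quality explicitly, for example:
ffmpeg -i Hung.mp4 -vf fps=1/60 -q:v 2 img%03d.jpg
Here -q:v picks a fixed quantiser for the mjpeg encoder (lower means better quality), which sidesteps the default bitrate that triggers the error; the exact behaviour still depends on the ffmpeg build in use.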
-
ffmpeg GRAY16 stream over network
28 November 2023, by Norbert P.
I'm working on a school project where we need to use depth cameras. The camera produces color and depth (in other words, a 16-bit grayscale image). We decided to use ffmpeg, as compression could be very useful later on. For now we have a basic stream running from one PC to another. These settings include:


- rtmp
- flv as container
- pixel format AV_PIX_FMT_YUV420P
- codec AV_CODEC_ID_H264

The problem we are having is with the grayscale image. Not every codec can cope with this format, just as not every protocol can work with a given codec. I got some settings "working", but the receiver side just gets stuck in the avformat_open_input() call.
I have also tested it from the command line, with ffmpeg listening for a connection, and the same thing happens.


I include a minimal "working" example of the client code. The server can be tested with "ffmpeg.exe -f apng -listen 1 -i rtmp://localhost:9999/stream/stream1 -c copy -f apng -listen 1 rtmp://localhost:2222/live/l" or with the code below. I get no warnings; ffmpeg is the newest version, installed with "vcpkg install --triplet x64-windows ffmpeg[ffmpeg,ffprobe,zlib]" on Windows or via the package manager on Linux.


The question: did I miss something? How do I get it to work? If you have any better ideas, I would very gladly consider them. In the end I need 16 bits of lossless transmission; it could be split between channels etc., which I also tried with the same effect.


Client code that would have the camera and connect to the server:


#include <cerrno>
#include <cstdio>
#include <cstdlib>
#include <cstring>
#include <string>
#ifdef _WIN32
#include <malloc.h>   // _aligned_malloc
#endif

extern "C" {
#include <libavutil/opt.h>
#include <libavcodec/avcodec.h>
#include <libavutil/channel_layout.h>
#include <libavutil/common.h>
#include <libavformat/avformat.h>
#include <libavutil/imgutils.h>
}

int main() {

 std::string container = "apng";
 AVCodecID codec_id = AV_CODEC_ID_APNG;
 AVPixelFormat pixFormat = AV_PIX_FMT_GRAY16BE;

 AVFormatContext* format_ctx;
 AVCodec* out_codec;
 AVStream* out_stream;
 AVCodecContext* out_codec_ctx;
 AVFrame* frame;
 uint8_t* data;

 std::string server = "rtmp://localhost:9999/stream/stream1";

 int width = 1280, height = 720, fps = 30, bitrate = 1000000;

 // initialize the output format context for the chosen container, targeting the server URL
 avformat_alloc_output_context2(&format_ctx, nullptr, container.c_str(), server.c_str());
 if (!format_ctx) {
 return 1;
 }

 //AVIOContext for accessing the resource indicated by url
 if (!(format_ctx->oformat->flags & AVFMT_NOFILE)) {
 int avopen_ret = avio_open(&format_ctx->pb, server.c_str(),
 AVIO_FLAG_WRITE);// , nullptr, nullptr);
 if (avopen_ret < 0) {
 fprintf(stderr, "failed to open stream output context, stream will not work\n");
 return 1;
 }
 }


 const AVCodec* tmp_out_codec = avcodec_find_encoder(codec_id);
 //const AVCodec* tmp_out_codec = avcodec_find_encoder_by_name("hevc");
 out_codec = const_cast<AVCodec*>(tmp_out_codec);
 if (!(out_codec)) {
 fprintf(stderr, "Could not find encoder for '%s'\n",
 avcodec_get_name(codec_id));

 return 1;
 }

 out_stream = avformat_new_stream(format_ctx, out_codec);
 if (!out_stream) {
 fprintf(stderr, "Could not allocate stream\n");
 return 1;
 }

 out_codec_ctx = avcodec_alloc_context3(out_codec);

 const AVRational timebase = { 60000, fps };
 const AVRational dst_fps = { fps, 1 };
 av_log_set_level(AV_LOG_VERBOSE);
 //codec_ctx->codec_tag = 0;
 //codec_ctx->codec_id = codec_id;
 out_codec_ctx->codec_type = AVMEDIA_TYPE_VIDEO;
 out_codec_ctx->width = width;
 out_codec_ctx->height = height;
 out_codec_ctx->gop_size = 1;
 out_codec_ctx->time_base = timebase;
 out_codec_ctx->pix_fmt = pixFormat;
 out_codec_ctx->framerate = dst_fps;
 out_codec_ctx->time_base = av_inv_q(dst_fps);
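 // note: this overwrites the { 60000, fps } time_base assigned a few lines
 // above with 1/fps; only this last assignment is what the encoder sees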
 out_codec_ctx->bit_rate = bitrate;
 //if (fctx->oformat->flags & AVFMT_GLOBALHEADER)
 //{
 // codec_ctx->flags |= AV_CODEC_FLAG_GLOBAL_HEADER;
 //}

 out_stream->time_base = out_codec_ctx->time_base; //will be set afterwards by avformat_write_header to 1/1000

 int ret = avcodec_parameters_from_context(out_stream->codecpar, out_codec_ctx);
 if (ret < 0)
 {
 fprintf(stderr, "Could not initialize stream codec parameters!\n");
 return 1;
 }

 AVDictionary* codec_options = nullptr;
 av_dict_set(&codec_options, "tune", "zerolatency", 0);
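 // note: "tune" is a libx264/libx265 private option; the APNG encoder selected
 // above most likely does not consume it, so it would simply remain in
 // codec_options after avcodec_open2() (assumption, not verified here)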

 // open video encoder
 ret = avcodec_open2(out_codec_ctx, out_codec, &codec_options);
 if (ret < 0)
 {
 fprintf(stderr, "Could not open video encoder!\n");
 return 1;
 }
 av_dict_free(&codec_options);

 out_stream->codecpar->extradata_size = out_codec_ctx->extradata_size;
 out_stream->codecpar->extradata = static_cast<uint8_t*>(av_mallocz(out_codec_ctx->extradata_size));
 memcpy(out_stream->codecpar->extradata, out_codec_ctx->extradata, out_codec_ctx->extradata_size);

 av_dump_format(format_ctx, 0, server.c_str(), 1);

 frame = av_frame_alloc();

 int sz = av_image_get_buffer_size(pixFormat, width, height, 32);
#ifdef _WIN32
 data = (uint8_t*)_aligned_malloc(sz, 32);
 if (data == NULL)
 return ENOMEM;
#else
 ret = posix_memalign(reinterpret_cast<void**>(&data), 32, sz);
#endif
 av_image_fill_arrays(frame->data, frame->linesize, data, pixFormat, width, height, 32);
 frame->format = pixFormat;
 frame->width = width;
 frame->height = height;
 frame->pts = 1;
 if (avformat_write_header(format_ctx, nullptr) < 0) //Header making problems!!!
 {
 fprintf(stderr, "Could not write header!\n");
 return 1;
 }

 printf("stream time base = %d / %d \n", out_stream->time_base.num, out_stream->time_base.den);

 double inv_stream_timebase = (double)out_stream->time_base.den / (double)out_stream->time_base.num;
 printf("Init OK\n");
 /* Init phase end*/
 int dts = 0;
 int frameNo = 0;

 while (true) {
 //Fill dummy frame with something
 for (int y = 0; y < height; y++) {
 uint16_t color = ((y + frameNo) * 256) % (256 * 256);
 for (int x = 0; x < width; x++) {
 data[x+y*width] = color;
 }
 }
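 // note: `data` is a uint8_t*, so only the low byte of `color` is stored here,
 // and only the first width*height bytes of the 16-bit buffer are filled;
 // this only affects the dummy test pattern, not the streaming logic itself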

 memcpy(frame->data[0], data, 1280 * 720 * sizeof(uint16_t));
 AVPacket* pkt = av_packet_alloc();

 int ret = avcodec_send_frame(out_codec_ctx, frame);
 if (ret < 0)
 {
 fprintf(stderr, "Error sending frame to codec context!\n");
 return ret;
 }
 while (ret >= 0) {
 ret = avcodec_receive_packet(out_codec_ctx, pkt);
 if (ret == AVERROR(EAGAIN) || ret == AVERROR_EOF)
 break;
 else if (ret < 0) {
 fprintf(stderr, "Error during encoding\n");
 break;
 }
 pkt->dts = dts;
 pkt->pts = dts;
 dts += 33;
 av_write_frame(format_ctx, pkt);
 frameNo++;
 av_packet_unref(pkt);
 }
 printf("Streamed %d frames\n", frameNo);
 }
 return 0;
}
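A detail the post does not show (and which the while (true) loop above never reaches): before tearing a muxer down, the encoder is normally flushed and the container trailer written. A minimal teardown sketch (not the poster's code), reusing the variable names from above:

 // flush the encoder by sending a NULL frame, drain any buffered packets,
 // then finalize the container and release the output
 avcodec_send_frame(out_codec_ctx, nullptr);
 AVPacket* flush_pkt = av_packet_alloc();
 while (avcodec_receive_packet(out_codec_ctx, flush_pkt) >= 0) {
     av_write_frame(format_ctx, flush_pkt);
     av_packet_unref(flush_pkt);
 }
 av_packet_free(&flush_pkt);
 av_write_trailer(format_ctx);
 avio_closep(&format_ctx->pb);
 avformat_free_context(format_ctx);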



And the part of the server that should receive; this is the code where it stops and waits:


#include <cstdio>

extern "C" {
#include <libavcodec/avcodec.h>
#include <libavformat/avformat.h>
#include <libavformat/avio.h>
}

int main() {
 AVFormatContext* fmt_ctx = NULL;
 av_log_set_level(AV_LOG_VERBOSE);
 AVDictionary* options = nullptr;
 av_dict_set(&options, "protocol_whitelist", "file,udp,rtp,tcp,rtmp,rtsp,hls", 0);
 av_dict_set(&options, "timeout", "500000", 0); // Timeout in microseconds 

//Next Line hangs 
 int ret = avformat_open_input(&fmt_ctx, "rtmp://localhost:9999/stream/stream1", NULL, &options);
 if (ret != 0) {
 fprintf(stderr, "Could not open RTMP stream\n");
 return -1;
 }

 // Find the first video stream
 ret = avformat_find_stream_info(fmt_ctx, nullptr);
 if (ret < 0) {
 return ret;
 }
 //...
} 
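For reference (this is not part of the original post), once avformat_open_input() and avformat_find_stream_info() succeed, the receiving side typically continues with a packet-reading loop roughly like this:

 AVPacket* pkt = av_packet_alloc();
 while (av_read_frame(fmt_ctx, pkt) >= 0) {
     // pkt->stream_index identifies the stream; the packet would normally be
     // handed to a decoder here via avcodec_send_packet()/avcodec_receive_frame()
     av_packet_unref(pkt);
 }
 av_packet_free(&pkt);
 avformat_close_input(&fmt_ctx);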




Edit:
I tried to just create an animated PNG and stream it from one console window to another, to rule out any programming mistakes on my side. The result was the same: I just could not get a 16-bit PNG-encoded stream to work. The receiver hung waiting for data and closed when the file ended, with zero frames received in total.


I managed to get something else working:
To avoid encoding the gray frames as YUV420, I installed ffmpeg with libx264 support (I thought it was the same as H264, which in the code it is, but it adds support for more pixel formats). I then used H264 again, but with GRAY8 and a doubled image width, reconstructing the image on the other side.
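For illustration only (this helper is not from the post), the GRAY8-with-doubled-width trick described above amounts to splitting every 16-bit sample into two bytes on the sender and recombining them on the receiver, roughly:

 #include <cstdint>

 // pack one GRAY16 row of `width` samples into a GRAY8 row of 2*width bytes
 void pack_gray16_row(const uint16_t* src, uint8_t* dst, int width) {
     for (int x = 0; x < width; x++) {
         dst[2 * x]     = static_cast<uint8_t>(src[x] >> 8);   // high byte
         dst[2 * x + 1] = static_cast<uint8_t>(src[x] & 0xFF); // low byte
     }
 }

 // receiver side: rebuild the 16-bit samples from the doubled-width GRAY8 row
 void unpack_gray16_row(const uint8_t* src, uint16_t* dst, int width) {
     for (int x = 0; x < width; x++) {
         dst[x] = static_cast<uint16_t>((src[2 * x] << 8) | src[2 * x + 1]);
     }
 }

Note that this round-trips exactly only if the codec itself is lossless; any lossy compression will corrupt the low bytes.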


Maybe as a side note, I could not get any other formats to work. Is "flv" the only option here? Could I get more performance if I changed it to... what?


-
node.js ffmpeg spawn child_process unexpected data output and not possible to kill or stop
1 September 2021, by PLNR
I'm rather new to backend stuff, so please excuse me if my question is trivial.

For an Intranet project, I want to present a video element in a webpage (React, HLS.js Player) with the option to reload the stream in case it hangs.

The video sources are MPEG-TS streams delivered as UDP multicast.

A small node.js/express server should handle the ffmpeg commands that transcode the multicast streams to HLS for display in a browser.



My problem is that I am unable to stop the process...

I've tried to SIGINT and SIGTERM the process with its pid, but the encoding just keeps going.

Here is the code I wrote for transcoding so far:


const express = require("express");
const { spawn, exec } = require("child_process");
const cors = require("cors");
const process = require("process")

let ls;
let pid;

const app = express();

app.use(cors());

app.get('/cam/:source', (body) => {
 const cam = body.params.source;
 console.log(cam);

 let source = "udp://239.1.1.1:4444";
 let stream = "/var/www/html/streams/tmp/cam1.m3u8"


 stream = spawn("ffmpeg", ["-re", "-i", source, "-c:v", "libx264", "-crf", "21", "-preset", "veryfast", "-c:a", "aac", "-b:a", "128k", "-ac", "2", "-f", "hls", "-hls_list_size", "5", "-hls_flags", "delete_segments", stream], {detached: true});

 pid = stream.pid;

 stream.stdout.on("data", data => {
 console.log(`stdout: ${data}`);
 });

 stream.stderr.on("data", data => {
 console.log(`stderr: ${data}`);
 console.log("pid: ", pid);
 });

 stream.on("error", error => {
 console.log(`error: ${error.message}`);
 });

 stream.on("close", code => {
 console.log(`child process exited with code ${code}`);
 });
})

app.get('/cam/quit', () => {
 process.kill(pid, 'SIGINT')
})

app.listen(5000, ()=> {
 console.log('Listening');
})



On requesting /cam/quit I would expect the process to stop... but it doesn't.

I've tried:

app.get('/cam/quit', () => {
 process.kill(pid, 'SIGINT')
})



app.get('/cam/quit', () => {
 process.kill(stream.pid, 'SIGINT')
})



app.get('/cam/quit', () => {
 stream.kill(pid, 'SIGINT')
})



app.get('/cam/quit', () => {
 stream.kill(0, 'SIGINT')
})



But none of this worked... am I missing something? Is there something I need to change?
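Not from the original question, but two things are worth checking against the code above. Express matches routes in declaration order, so a literal /cam/quit route normally has to be registered before the parameterised /cam/:source route, otherwise /cam/quit is captured by /cam/:source and simply spawns another ffmpeg. And because the child is spawned with detached: true, it becomes its own process group leader on POSIX systems, so one common pattern is to signal the whole group with a negative pid. A rough sketch, not a verified fix:

app.get('/cam/quit', (req, res) => {
  if (pid) {
    // a negative pid sends the signal to the entire process group
    process.kill(-pid, 'SIGINT');
  }
  res.sendStatus(200);
});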


Problem 2 - Output:

The output is emitted on stderr... even though the process is working as expected.

This is maybe only cosmetic, but it makes me wonder.

Here is the terminal output:

[nodemon] starting `node server.js`
Listening
camera stream reloaded
stderr: ffmpeg version 4.3.2-0+deb11u1ubuntu1 Copyright (c) 2000-2021 the FFmpeg developers
 built with gcc 10 (Ubuntu 10.2.1-20ubuntu1)
 configuration: --prefix=/usr --extra-version=0+deb11u1ubuntu1 --toolchain=hardened --libdir=/usr/lib/x86_64-linux-gnu --incdir=/usr/include/x86_64-linux-gnu --arch=amd64 --enable-gpl --disable-stripping --enable-avresample --disable-filter=resample --enable-gnutls --enable-ladspa --enable-libaom --enable-libass --enable-libbluray --enable-libbs2b --enable-libcaca --enable-libcdio --enable-libcodec2 --enable-libdav1d --enable-libflite --enable-libfontconfig --enable-libfreetype --enable-libfribidi --enable-libgme --enable-libgsm --enable-libjack --enable-libmp3lame --enable-libmysofa --enable-libopenjpeg --enable-libopenmpt --enable-libopus --enable-libpulse --enable-librabbitmq --enable-librsvg --enable-librubberband --enable-libshine --enable-libsnappy --enable-libsoxr --enable-libspeex --enable-libsrt --enable-libssh --enable-libtheora --enable-libtwolame --enable-libvidstab --enable-libvorbis --enable-libvpx --enable-libwavpack --enable-libwebp --enable-libx265 --enable-libxml2 --enable-libxvid --enable-libzmq --enable-libzvbi --enable-lv2 --enable-omx --enable-openal --enable-opencl --enable-opengl --enable-sdl2 --enable-pocketsphinx --enable-libmfx --enable-libdc1394 --enable-libdrm --enable-libiec61883 --enable-nvenc --enable-chromaprint --enable-frei0r --enable-libx264 --enable-shared
 libavutil 56. 51.100 / 56. 51.100
 libavcodec 58. 91.100 / 58. 91.100
 libavformat 58. 45.100 / 58. 45.100
 libavdevice 58. 10.100 / 58. 10.100
 libavfilter 7. 85.100 / 7. 85.100
 libavresample 4. 0. 0 / 4. 0. 0
 libswscale 5. 7.100 / 5. 7.100
 libswresample 3. 7.100 / 3. 7.100
 libpostproc 55. 7.100 / 55. 7.100

pid: 4206
stderr: Input #0, mov,mp4,m4a,3gp,3g2,mj2, from 'udp://239.1.1.1':
 Metadata:
 major_brand : isom
 minor_version : 512
 compatible_brands: isomiso2avc1mp41
 creation_time : 
pid: 4206
stderr: 1970-01-01T00:00:00.000000Z
 encoder : Lavf52.54.0
 Duration: 01:55:59.20, start: 0.000000, bitrate: 1436 kb/s
 Stream #0:0(und): Video: h264 (Constrained Baseline) (avc1 / 0x31637661), yuv420p, 720x384 [SAR 1:1 DAR 15:8], 1272 kb/s, 25 fps, 25 tbr, 25 tbn, 50 tbc (default)
 Metadata:
 creation_time : 1970-01-01T00:00:00.000000Z
 handler_name : VideoHandler
 Stream #0:1(und): Audio: aac (LC) (mp4a / 0x6134706D), 48000 Hz, stereo, fltp, 159 kb/s (default)
 Metadata:
 creation_time : 1970-01-01T00:00:00.000000Z
 handler_name : SoundHandler

pid: 4206
stderr: Stream mapping:
 Stream #0:0 -> #0:0 (h264 (native) -> h264 (libx264))
 Stream #0:1 -> #0:1 (aac (native) -> aac (native))
Press [q] to stop, [?] for help

pid: 4206
stderr: [libx264 @ 0x5616b38b5440] using SAR=1/1

pid: 4206
stderr: [libx264 @ 0x5616b38b5440] using cpu capabilities: MMX2 SSE2Fast SSSE3 SSE4.2 AVX FMA3 BMI2 AVX2

pid: 4206
stderr: [libx264 @ 0x5616b38b5440] profile High, level 3.0, 4:2:0, 8-bit

pid: 4206
stderr: [libx264 @ 0x5616b38b5440] 264 - core 160 r3011 cde9a93 - H.264/MPEG-4 AVC codec - Copyleft 2003-2020 - http://www.videolan.org/x264.html - options: cabac=1 ref=1 deblock=1:0:0 analyse=0x3:0x113 me=hex subme=2 psy=1 psy_rd=1.00:0.00 mixed_ref=0 me_range=16 chroma_me=1 trellis=0 8x8dct=1 cqm=0 deadzone=21,11 fast_pskip=1 chroma_qp_offset=0 threads=6 lookahead_threads=2 sliced_threads=0 nr=0 decimate=1 interlaced=0 bluray_compat=0 constrained_intra=0 bframes=3 b_pyramid=2 b_adapt=1 b_bias=0 direct=1 weightb=1 open_gop=0 weightp=1 keyint=250 keyint_min=25 scenecut=40 intra_refresh=0 rc_lookahead=10 rc=crf mbtree=1 crf=21.0 qcomp=0.60 qpmin=0 qpmax=69 qpstep=4 ip_ratio=1.40 aq=1:1.00
Output #0, hls, to '/var/www/html/streams/tmp/aft.m3u8':
 Metadata:
 major_brand : isom
 minor_version : 512
 compatible_brands: isomiso2avc1mp41
 encoder : Lavf58.45.100
 Stream #0:0(und): Video: h264 (libx264), yuv420p, 720x384 [SAR 1:1 DAR 15:8], q=-1--1, 25 fps, 90k tbn, 25 tbc (default)
 Metadata:
 creation_time : 1970-01-01T00:00:00.000000Z
 handler_name : VideoHandler
 encoder : Lavc58.91.100 libx264
 Side data:
 cpb: bitrate max/min/avg: 0/0/0 buffer size: 0 vbv_delay: N/A
 Stream #0:1(und): Audio: aac (LC), 48000 Hz, stereo, fltp, 128 kb/s (default)
 Metadata:
 creation_time : 1970-01-01T00:00:00.000000Z
 handler_name : SoundHandler
 encoder : Lavc58.91.100 aac

pid: 4206
stderr: frame= 8 fps=0.0 q=0.0 size=N/A time=00:00:00.46 bitrate=N/A speed=0.931x 
pid: 4206
stderr: frame= 21 fps= 21 q=26.0 size=N/A time=00:00:00.96 bitrate=N/A speed=0.95x 
pid: 4206
stderr: frame= 33 fps= 22 q=26.0 size=N/A time=00:00:01.49 bitrate=N/A speed=0.982x 
pid: 4206
stderr: frame= 46 fps= 23 q=26.0 size=N/A time=00:00:02.00 bitrate=N/A speed=0.989x 
pid: 4206
stderr: frame= 58 fps= 23 q=26.0 size=N/A time=00:00:02.49 bitrate=N/A speed=0.986x 
pid: 4206



and so on...



Any help would be highly appreciated!

Many thanks in advance