
Media (1)
-
The conservation of net art in the museum. The strategies at work
26 May 2011
Updated: July 2013
Language: French
Type: Text
Other articles (38)
-
HTML5 audio and video support
13 April 2011. MediaSPIP uses HTML5 video and audio tags to play multimedia files, taking advantage of the latest W3C innovations supported by modern browsers.
The MediaSPIP player used has been created specifically for MediaSPIP and can be easily adapted to fit in with a specific theme.
For older browsers the Flowplayer flash fallback is used.
MediaSPIP allows for media playback on major mobile platforms with the above (...) -
From upload to final video [standalone version]
31 January 2010. The path of an audio or video document through SPIPMotion is divided into three distinct steps.
Upload and retrieval of information about the source video
First, a SPIP article must be created and the "source" video document attached to it.
When this document is attached to the article, two actions are executed in addition to the normal behaviour: retrieval of the technical information of the file's audio and video streams; generation of a thumbnail: extraction of a (...) -
HTML5 audio and video support
10 April 2011. MediaSPIP uses the HTML5 video and audio tags to play multimedia documents, taking advantage of the latest W3C innovations supported by modern browsers.
For older browsers, the Flowplayer Flash player is used as a fallback.
The HTML5 player used was created specifically for MediaSPIP: its appearance is fully customisable to match a chosen theme.
These technologies make it possible to deliver video and audio to conventional computers as well as (...)
On other sites (7791)
-
FFMPEG in Bash - Too many inputs specified for the "movie" filter
15 June 2021, by Rodion Grinberg. Basically, I am writing a script to automate video watermarking, border insertion, and noise addition.


When I use the following combination:


ffmpeg -y -i "$INPUT" -vf "noise=alls=$NOISE_INDEX:allf=t , movie=$WATERMARK [watermark]; [in]scale=512:trunc(ow/a/2)*2 [scale]; [scale][watermark] overlay=$OVERLAY_SETTINGS_WATERMARK [out] , drawtext=text=$TEXT:$OVERLAY_SETTINGS_TEXT:fontsize=32:fontcolor=black:box=1:boxcolor=white@1: boxborderw=5 , pad=iw+50:ih+50:iw/8:ih/8:color=red" $OUTPUT



...it shows the following error:


Too many inputs specified for the "movie" filter.
Error reinitializing filters!
Failed to inject frame into filter network: Invalid argument



Can someone help me with that?
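A likely fix, sketched below: movie is a source filter, so it accepts no input, yet chaining it after noise with a comma hands it one, which is exactly the "Too many inputs" complaint. Feeding the watermark as a second -i input and expressing the whole pipeline as one -filter_complex graph avoids the movie filter entirely. All file names and settings below are placeholders, not taken from the question, and the command is echoed rather than run:

```shell
#!/bin/sh
# Sketch only: file names and settings are placeholders, not taken from the post.
INPUT=input.mp4
WATERMARK=watermark.png
OUTPUT=out.mp4
NOISE_INDEX=20
TEXT=sample

# The watermark arrives as input 1 ([1:v]) instead of via the movie source
# filter; ';' separates the labelled chains of a single graph.
GRAPH="[0:v]noise=alls=$NOISE_INDEX:allf=t,scale=512:trunc(ow/a/2)*2[base];[base][1:v]overlay=W-w-10:H-h-10[marked];[marked]drawtext=text=$TEXT:fontsize=32:fontcolor=black:box=1:boxcolor=white:boxborderw=5,pad=iw+50:ih+50:iw/8:ih/8:color=red[vout]"
CMD="ffmpeg -y -i $INPUT -i $WATERMARK -filter_complex \"$GRAPH\" -map [vout] $OUTPUT"

# Echoed instead of executed here, since the input files are placeholders:
echo "$CMD"
```

The same graph can keep the asker's $OVERLAY_SETTINGS_WATERMARK and $OVERLAY_SETTINGS_TEXT variables in place of the placeholder overlay position and drawtext options.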


-
Reasons for "Segmentation fault (core dumped)" when using Python extension and FFmpeg
24 August 2021, by Christian Vorhemus. I want to write a Python C extension that includes a function convertVideo() that converts a video from one format to another using FFmpeg 3.4.8 (the libav* libraries). The code of the extension is at the end of the question. The extension compiles successfully, but whenever I open Python and call it (through a simple Python wrapper that I don't include here), I get:

Python 3.7.10 (default, May 2 2021, 18:28:10)
[GCC 9.1.0] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import myModule
>>> myModule.convert_video("/home/admin/video.h264", "/home/admin/video.mp4")
convert 0
convert 1
Format raw H.264 video, duration -9223372036854775808 us
convert 2
Segmentation fault (core dumped)



The interesting thing is, I wrote a simple helper program test_convert.cc that calls convertVideo() like so:

#include <cstdio>  /* the original header names were stripped from the post */

int convertVideo(const char *in_filename, const char *out_filename);

int main() {
 int res = convertVideo("/home/admin/video.h264", "/home/admin/video.mp4");
 return 0;
}



and I compiled this program against the shared library that Python generates when building the C extension, like so:


gcc test_convert.cc /usr/lib/python3.7/site-packages/_myModule.cpython-37m-aarch64-linux-gnu.so -o test_convert



And it works! The output is:


root# ./test_convert
convert 0
convert 1
Format raw H.264 video, duration -9223372036854775808 us
convert 2
convert 3
convert 4
convert 5
convert 6
Output #0, mp4, to '/home/admin/video.mp4':
 Stream #0:0: Video: h264 (High), yuv420p(tv, bt470bg, progressive), 1280x720 [SAR 1:1 DAR 16:9], q=2-31
convert 7



The extension code looks like this:


#include <Python.h>

#include <cstdio>  /* the remaining header names were stripped from the post */

extern "C"
{
#include "libavformat/avformat.h"
#include "libavutil/imgutils.h"
}

int convertVideo(const char *in_filename, const char *out_filename)
{
 // Input AVFormatContext and Output AVFormatContext
 AVFormatContext *input_format_context = avformat_alloc_context();
 AVPacket pkt;

 int ret, i;
 int frame_index = 0;
 printf("convert 0\n");
 av_register_all();
 printf("convert 1\n");
 // Input
 if ((ret = avformat_open_input(&input_format_context, in_filename, NULL,
 NULL)) < 0)
 {
 printf("Could not open input file.");
 return 1;
 }
 else
 {
 printf("Format %s, duration %lld us\n",
 input_format_context->iformat->long_name,
 input_format_context->duration);
 }
 printf("convert 2\n");
 if ((ret = avformat_find_stream_info(input_format_context, 0)) < 0)
 {
 printf("Failed to retrieve input stream information");
 return 1;
 }
 printf("convert 3\n");
 AVFormatContext *output_format_context = avformat_alloc_context();
 AVPacket packet;
 int stream_index = 0;
 int *streams_list = NULL;
 int number_of_streams = 0;
 int fragmented_mp4_options = 0;
 printf("convert 4\n");
 avformat_alloc_output_context2(&output_format_context, NULL, NULL,
 out_filename);
 if (!output_format_context)
 {
 fprintf(stderr, "Could not create output context\n");
 ret = AVERROR_UNKNOWN;
 return 1;
 }
 printf("convert 5\n");
 AVOutputFormat *fmt = av_guess_format(0, out_filename, 0);
 output_format_context->oformat = fmt;

 number_of_streams = input_format_context->nb_streams;
 streams_list =
 (int *)av_mallocz_array(number_of_streams, sizeof(*streams_list));

 if (!streams_list)
 {
 ret = AVERROR(ENOMEM);
 return 1;
 }
 for (i = 0; i < input_format_context->nb_streams; i++)
 {
 AVStream *out_stream;
 AVStream *in_stream = input_format_context->streams[i];
 AVCodecParameters *in_codecpar = in_stream->codecpar;
 if (in_codecpar->codec_type != AVMEDIA_TYPE_AUDIO &&
 in_codecpar->codec_type != AVMEDIA_TYPE_VIDEO &&
 in_codecpar->codec_type != AVMEDIA_TYPE_SUBTITLE)
 {
 streams_list[i] = -1;
 continue;
 }
 streams_list[i] = stream_index++;

 out_stream = avformat_new_stream(output_format_context, NULL);
 if (!out_stream)
 {
 fprintf(stderr, "Failed allocating output stream\n");
 ret = AVERROR_UNKNOWN;
 return 1;
 }
 ret = avcodec_parameters_copy(out_stream->codecpar, in_codecpar);
 if (ret < 0)
 {
 fprintf(stderr, "Failed to copy codec parameters\n");
 return 1;
 }
 }
 printf("convert 6\n");
 av_dump_format(output_format_context, 0, out_filename, 1);
 if (!(output_format_context->oformat->flags & AVFMT_NOFILE))
 {
 ret = avio_open(&output_format_context->pb, out_filename, AVIO_FLAG_WRITE);
 if (ret < 0)
 {
 fprintf(stderr, "Could not open output file '%s'", out_filename);
 return 1;
 }
 }
 AVDictionary *opts = NULL;
 printf("convert 7\n");
 ret = avformat_write_header(output_format_context, &opts);
 if (ret < 0)
 {
 fprintf(stderr, "Error occurred when opening output file\n");
 return 1;
 }
 int n = 0;

 while (1)
 {
 AVStream *in_stream, *out_stream;
 ret = av_read_frame(input_format_context, &packet);
 if (ret < 0)
 break;
 in_stream = input_format_context->streams[packet.stream_index];
 if (packet.stream_index >= number_of_streams ||
 streams_list[packet.stream_index] < 0)
 {
 av_packet_unref(&packet);
 continue;
 }
 packet.stream_index = streams_list[packet.stream_index];

 out_stream = output_format_context->streams[packet.stream_index];

 out_stream->codec->time_base.num = 1;
 out_stream->codec->time_base.den = 30;

 packet.pts = n * 3000;
 packet.dts = n * 3000;
 packet.duration = 3000;

 packet.pos = -1;

 ret = av_interleaved_write_frame(output_format_context, &packet);
 if (ret < 0)
 {
 fprintf(stderr, "Error muxing packet\n");
 break;
 }
 av_packet_unref(&packet);
 n++;
 }

 av_write_trailer(output_format_context);
 avformat_close_input(&input_format_context);
 if (output_format_context &&
 !(output_format_context->oformat->flags & AVFMT_NOFILE))
 avio_closep(&output_format_context->pb);
 avformat_free_context(output_format_context);
 av_freep(&streams_list);
 if (ret < 0 && ret != AVERROR_EOF)
 {
 fprintf(stderr, "Error occurred\n");
 return 1;
 }
 return 0;
}
// PyMethodDef and other orchestration code is skipped



What is the reason that the code works as expected in my test_convert but not within Python?


-
ffmpeg says "No JPEG data found in image" when reading image paths from Linux pipe
18 September 2021, by user16945608. I'm trying to convert a set of pictures into a video, and I want to read the file paths of the pictures from the pipe. The command I would like to run looks like this:


find dir/*.JPG | sort | ffmpeg -f image2pipe -r 1 -vcodec mjpeg -s 6000x4000 -pix_fmt yuvj422p -i - -vcodec libx264 -s 1080x720 -r 20 -pix_fmt yuv420p out.mkv


But I keep getting the "No JPEG data found in image" error. Here is the full log:

Input #0, image2pipe, from 'pipe:':
 Duration: N/A, bitrate: N/A
 Stream #0:0: Video: mjpeg, yuvj422p(bt470bg/unknown/unknown), 6000x4000, 1 fps, 1 tbr, 1 tbn, 1 tbc
Stream mapping:
 Stream #0:0 -> #0:0 (mjpeg (native) -> h264 (libx264))
[mjpeg @ 0x558e98cd7300] No JPEG data found in image
Error while decoding stream #0:0: Invalid data found when processing input
[swscaler @ 0x558e98ce9440] deprecated pixel format used, make sure you did set range correctly
[libx264 @ 0x558e98cdaac0] using cpu capabilities: MMX2 SSE2Fast SSSE3 SSE4.2 AVX
[libx264 @ 0x558e98cdaac0] profile High, level 3.1, 4:2:0, 8-bit
[libx264 @ 0x558e98cdaac0] 264 - core 161 r3039 544c61f - H.264/MPEG-4 AVC codec - Copyleft
2003-2021 - http://www.videolan.org/x264.html - options: cabac=1 ref=3 deblock=1:0:0 analyse=0x3:0x113 me=hex subme=7 psy=1 psy_rd=1.00:0.00 mixed_ref=1 me_range=16 chroma_me=1 trellis=1 8x8dct=1 cqm=0 deadzone=21,11 fast_pskip=1 chroma_qp_offset=-2 threads=6 lookahead_threads=1 sliced_threads=0 nr=0 decimate=1 interlaced=0 bluray_compat=0 constrained_intra=0 bframes=3 b_pyramid=2 b_adapt=1 b_bias=0 direct=1 weightb=1 open_gop=0 weightp=2 keyint=250 keyint_min=20 scenecut=40 intra_refresh=0 rc_lookahead=40 rc=crf mbtree=1 crf=23.0 qcomp=0.60 qpmin=0 qpmax=69 qpstep=4 ip_ratio=1.40 aq=1:1.00
Output #0, matroska, to 'out.mkv':
 Metadata:
 encoder : Lavf58.76.100
 Stream #0:0: Video: h264 (H264 / 0x34363248), yuv420p, 1080x720, q=2-31, 20 fps, 1k tbn
 Metadata:
 encoder : Lavc58.134.100 libx264
 Side data:
 cpb: bitrate max/min/avg: 0/0/0 buffer size: 0 vbv_delay: N/A
frame= 0 fps=0.0 q=0.0 Lsize= 1kB time=00:00:00.00 bitrate=N/A speed= 0x
video:0kB audio:0kB subtitle:0kB other streams:0kB global headers:0kB muxing overhead: unknown
Conversion failed!



The pictures are in the following format (per mediainfo), and the filenames are in the form DSC_1234.JPG:

Format : JPEG
Video
Format : JPEG
Width : 6 000 pixels
Height : 4 000 pixels
Display aspect ratio : 3:2
Color space : YUV
Chroma subsampling : 4:2:2
Bit depth : 8 bits
Compression mode : Lossy



Also, I would like to avoid any solution that does not pipe the paths (such as -f image2 -i DSC_%04d.JPG). Do you have any idea what's happening?
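The likely mechanism, sketched below with made-up demo files: find writes the file *paths* as lines of text, so the mjpeg decoder receives the strings dir/DSC_....JPG rather than JPEG bytes, hence "No JPEG data found in image". What image2pipe expects is the concatenated *contents* of the files, which xargs cat supplies:

```shell
#!/bin/sh
# Demo files stand in for dir/*.JPG; the bytes just make the difference visible.
tmp=$(mktemp -d)
printf AAA > "$tmp/DSC_0001.bin"
printf BBB > "$tmp/DSC_0002.bin"

# What the original pipeline delivers to ffmpeg: the file *names*, as text.
names=$(find "$tmp" -name '*.bin' | sort)

# What image2pipe needs: the concatenated *bytes* of the files.
bytes=$(find "$tmp" -name '*.bin' | sort | xargs cat)

echo "$bytes"   # prints AAABBB
rm -r "$tmp"
```

If that is indeed the cause, the pipeline keeps its shape: find dir -name '*.JPG' | sort | xargs cat | ffmpeg -f image2pipe -r 1 -vcodec mjpeg -s 6000x4000 -pix_fmt yuvj422p -i - -vcodec libx264 -s 1080x720 -r 20 -pix_fmt yuv420p out.mkv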