
Other articles (101)
-
MediaSPIP 0.1 Beta version
25 April 2011
MediaSPIP 0.1 beta is the first version of MediaSPIP proclaimed as "usable".
The zip file provided here only contains the sources of MediaSPIP in its standalone version.
To get a working installation, you must manually install all software dependencies on the server.
If you want to use this archive for an installation in "farm mode", you will also need to perform other manual (...)
-
Participate in its translation
10 April 2011
You can help us improve the wording used in the software, or translate it into any new language so it can reach new linguistic communities.
To do so, we use SPIP's translation interface, where all of MediaSPIP's language modules are available. You just need to subscribe to the translators' mailing list to ask for more information.
Currently MediaSPIP is only available in French and (...)
-
MediaSPIP v0.2
21 June 2013
MediaSPIP 0.2 is the first stable version of MediaSPIP.
Its official release date is 21 June 2013, and it is announced here.
The zip file provided here only contains the sources of MediaSPIP in its standalone version.
As with the previous version, all software dependencies must be installed manually on the server.
If you want to use this archive for an installation in "farm mode", you will also need to make other modifications (...)
On other sites (10067)
-
FFMPEG - How to pipe RMS_level and pts_time metadata without generating unwanted metadata
6 February 2020, by Tovi Newman
I am trying to find the loudest (highest RMS_level) moment in an audio file, but I need to pipe the metadata rather than write it to a file.
I adapted the answer found here: https://superuser.com/questions/1183663/determining-audio-level-peaks-with-ffmpeg
by removing the write-to-file command and adding a pipe.
Here's what I've got:
ffmpeg -i loudSoft.mp3 -af astats=metadata=1:reset=1,ametadata=print:key=lavfi.astats.Overall.RMS_level -f null - 2> result.txt
The only problem is that now I get a lot of unwanted metadata before and after the RMS_level and pts_time data, as well as
[Parsed_ametadata_1 @ 0x7f9d42c37500]
printed on each line. None of that was written when I was writing to a file instead of piping. (All I need is the time and the RMS; see the note after the full output below.) Here is an abridged version of what I get when I write to a file:
frame:0 pts:0 pts_time:0
lavfi.astats.Overall.RMS_level=-inf
frame:1 pts:47 pts_time:0.00106576
lavfi.astats.Overall.RMS_level=-165.163347
frame:2 pts:1199 pts_time:0.0271882
lavfi.astats.Overall.RMS_level=-99.736394
frame:3 pts:2351 pts_time:0.0533107
lavfi.astats.Overall.RMS_level=-88.112282
frame:4 pts:3503 pts_time:0.0794331
lavfi.astats.Overall.RMS_level=-86.554314
frame:5 pts:4655 pts_time:0.105556
lavfi.astats.Overall.RMS_level=-82.977501
frame:6 pts:5807 pts_time:0.131678
lavfi.astats.Overall.RMS_level=-79.698739
frame:7 pts:6959 pts_time:0.1578
lavfi.astats.Overall.RMS_level=-76.629393
frame:8 pts:8111 pts_time:0.183923
lavfi.astats.Overall.RMS_level=-71.581211
frame:9 pts:9263 pts_time:0.210045
lavfi.astats.Overall.RMS_level=-75.038503
frame:10 pts:10415 pts_time:0.236168
And here is what I'm looking at:
ffmpeg version 4.2.2 Copyright (c) 2000-2019 the FFmpeg developers
built with Apple clang version 11.0.0 (clang-1100.0.33.16)
configuration: --prefix=/usr/local/Cellar/ffmpeg/4.2.2_1 --enable-shared --enable-pthreads --enable-version3 --enable-avresample --cc=clang --host-cflags='-I/Library/Java/JavaVirtualMachines/adoptopenjdk-13.0.1.jdk/Contents/Home/include -I/Library/Java/JavaVirtualMachines/adoptopenjdk-13.0.1.jdk/Contents/Home/include/darwin -fno-stack-check' --host-ldflags= --enable-ffplay --enable-gnutls --enable-gpl --enable-libaom --enable-libbluray --enable-libmp3lame --enable-libopus --enable-librubberband --enable-libsnappy --enable-libtesseract --enable-libtheora --enable-libvidstab --enable-libvorbis --enable-libvpx --enable-libwebp --enable-libx264 --enable-libx265 --enable-libxvid --enable-lzma --enable-libfontconfig --enable-libfreetype --enable-frei0r --enable-libass --enable-libopencore-amrnb --enable-libopencore-amrwb --enable-libopenjpeg --enable-librtmp --enable-libspeex --enable-libsoxr --enable-videotoolbox --disable-libjack --disable-indev=jack
libavutil 56. 31.100 / 56. 31.100
libavcodec 58. 54.100 / 58. 54.100
libavformat 58. 29.100 / 58. 29.100
libavdevice 58. 8.100 / 58. 8.100
libavfilter 7. 57.100 / 7. 57.100
libavresample 4. 0. 0 / 4. 0. 0
libswscale 5. 5.100 / 5. 5.100
libswresample 3. 5.100 / 3. 5.100
libpostproc 55. 5.100 / 55. 5.100
Input #0, mp3, from 'loudSoft2.mp3':
Metadata:
encoder : Lavf58.29.100
Duration: 00:00:09.85, start: 0.025057, bitrate: 128 kb/s
Stream #0:0: Audio: mp3, 44100 Hz, stereo, fltp, 128 kb/s
Metadata:
encoder : Lavc58.54
Stream mapping:
Stream #0:0 -> #0:0 (mp3 (mp3float) -> pcm_s16le (native))
Press [q] to stop, [?] for help
[Parsed_ametadata_1 @ 0x7f9d42c37500] frame:0 pts:0 pts_time:0
[Parsed_ametadata_1 @ 0x7f9d42c37500] lavfi.astats.Overall.RMS_level=-inf
Output #0, null, to 'pipe:':
Metadata:
encoder : Lavf58.29.100
Stream #0:0: Audio: pcm_s16le, 44100 Hz, stereo, s16, 1411 kb/s
Metadata:
encoder : Lavc58.54.100 pcm_s16le
[Parsed_ametadata_1 @ 0x7f9d42c37500] frame:1 pts:47 pts_time:0.00106576
[Parsed_ametadata_1 @ 0x7f9d42c37500] lavfi.astats.Overall.RMS_level=-165.163347
[Parsed_ametadata_1 @ 0x7f9d42c37500] frame:2 pts:1199 pts_time:0.0271882
[Parsed_ametadata_1 @ 0x7f9d42c37500] lavfi.astats.Overall.RMS_level=-99.736394
[Parsed_ametadata_1 @ 0x7f9d42c37500] frame:3 pts:2351 pts_time:0.0533107
*** MIDDLE OMITTED FOR BREVITY ***
[Parsed_ametadata_1 @ 0x7f9d42c37500] lavfi.astats.Overall.RMS_level=-88.532185
[Parsed_ametadata_1 @ 0x7f9d42c37500] frame:375 pts:430895 pts_time:9.77086
[Parsed_ametadata_1 @ 0x7f9d42c37500] lavfi.astats.Overall.RMS_level=-88.594276
[Parsed_ametadata_1 @ 0x7f9d42c37500] frame:376 pts:432047 pts_time:9.79698
[Parsed_ametadata_1 @ 0x7f9d42c37500] lavfi.astats.Overall.RMS_level=-88.654138
size=N/A time=00:00:09.82 bitrate=N/A speed=82.6x
video:0kB audio:1692kB subtitle:0kB other streams:0kB global headers:0kB muxing overhead: unknown
[Parsed_astats_0 @ 0x7f9d42c37280] Channel: 1
[Parsed_astats_0 @ 0x7f9d42c37280] DC offset: 0.000001
[Parsed_astats_0 @ 0x7f9d42c37280] Min level: -0.000106
[Parsed_astats_0 @ 0x7f9d42c37280] Max level: 0.000115
[Parsed_astats_0 @ 0x7f9d42c37280] Min difference: 0.000000
[Parsed_astats_0 @ 0x7f9d42c37280] Max difference: 0.000077
[Parsed_astats_0 @ 0x7f9d42c37280] Mean difference: 0.000017
[Parsed_astats_0 @ 0x7f9d42c37280] RMS difference: 0.000022
[Parsed_astats_0 @ 0x7f9d42c37280] Peak level dB: -78.752617
[Parsed_astats_0 @ 0x7f9d42c37280] RMS level dB: -88.654138
[Parsed_astats_0 @ 0x7f9d42c37280] RMS peak dB: -88.654138
[Parsed_astats_0 @ 0x7f9d42c37280] RMS trough dB: -88.654138
[Parsed_astats_0 @ 0x7f9d42c37280] Crest factor: 3.126627
[Parsed_astats_0 @ 0x7f9d42c37280] Flat factor: 0.000000
[Parsed_astats_0 @ 0x7f9d42c37280] Peak count: 2
[Parsed_astats_0 @ 0x7f9d42c37280] Bit depth: 32/32
[Parsed_astats_0 @ 0x7f9d42c37280] Dynamic range: 76.274252
[Parsed_astats_0 @ 0x7f9d42c37280] Zero crossings: 246
[Parsed_astats_0 @ 0x7f9d42c37280] Zero crossings rate: 0.222624
[Parsed_astats_0 @ 0x7f9d42c37280] Number of NaNs: 0
[Parsed_astats_0 @ 0x7f9d42c37280] Number of Infs: 0
[Parsed_astats_0 @ 0x7f9d42c37280] Number of denormals: 0
[Parsed_astats_0 @ 0x7f9d42c37280] Channel: 2
[Parsed_astats_0 @ 0x7f9d42c37280] DC offset: 0.000001
[Parsed_astats_0 @ 0x7f9d42c37280] Min level: -0.000106
[Parsed_astats_0 @ 0x7f9d42c37280] Max level: 0.000115
[Parsed_astats_0 @ 0x7f9d42c37280] Min difference: 0.000000
[Parsed_astats_0 @ 0x7f9d42c37280] Max difference: 0.000077
[Parsed_astats_0 @ 0x7f9d42c37280] Mean difference: 0.000017
[Parsed_astats_0 @ 0x7f9d42c37280] RMS difference: 0.000022
[Parsed_astats_0 @ 0x7f9d42c37280] Peak level dB: -78.752617
[Parsed_astats_0 @ 0x7f9d42c37280] RMS level dB: -88.654138
[Parsed_astats_0 @ 0x7f9d42c37280] RMS peak dB: -88.654138
[Parsed_astats_0 @ 0x7f9d42c37280] RMS trough dB: -88.654138
[Parsed_astats_0 @ 0x7f9d42c37280] Crest factor: 3.126627
[Parsed_astats_0 @ 0x7f9d42c37280] Flat factor: 0.000000
[Parsed_astats_0 @ 0x7f9d42c37280] Peak count: 2
[Parsed_astats_0 @ 0x7f9d42c37280] Bit depth: 32/32
[Parsed_astats_0 @ 0x7f9d42c37280] Dynamic range: 76.274252
[Parsed_astats_0 @ 0x7f9d42c37280] Zero crossings: 246
[Parsed_astats_0 @ 0x7f9d42c37280] Zero crossings rate: 0.222624
[Parsed_astats_0 @ 0x7f9d42c37280] Number of NaNs: 0
[Parsed_astats_0 @ 0x7f9d42c37280] Number of Infs: 0
[Parsed_astats_0 @ 0x7f9d42c37280] Number of denormals: 0
[Parsed_astats_0 @ 0x7f9d42c37280] Overall
[Parsed_astats_0 @ 0x7f9d42c37280] DC offset: 0.000001
[Parsed_astats_0 @ 0x7f9d42c37280] Min level: -0.000106
[Parsed_astats_0 @ 0x7f9d42c37280] Max level: 0.000115
[Parsed_astats_0 @ 0x7f9d42c37280] Min difference: 0.000000
[Parsed_astats_0 @ 0x7f9d42c37280] Max difference: 0.000077
[Parsed_astats_0 @ 0x7f9d42c37280] Mean difference: 0.000017
[Parsed_astats_0 @ 0x7f9d42c37280] RMS difference: 0.000022
[Parsed_astats_0 @ 0x7f9d42c37280] Peak level dB: -78.752617
[Parsed_astats_0 @ 0x7f9d42c37280] RMS level dB: -88.654138
[Parsed_astats_0 @ 0x7f9d42c37280] RMS peak dB: -88.654138
[Parsed_astats_0 @ 0x7f9d42c37280] RMS trough dB: -88.654138
[Parsed_astats_0 @ 0x7f9d42c37280] Flat factor: 0.000000
[Parsed_astats_0 @ 0x7f9d42c37280] Peak count: 2.000000
[Parsed_astats_0 @ 0x7f9d42c37280] Bit depth: 32/32
[Parsed_astats_0 @ 0x7f9d42c37280] Number of samples: 1105
[Parsed_astats_0 @ 0x7f9d42c37280] Number of NaNs: 0.000000
[Parsed_astats_0 @ 0x7f9d42c37280] Number of Infs: 0.000000
[Parsed_astats_0 @ 0x7f9d42c37280] Number of denormals: 0.000000
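Note: a possible variant (a sketch, assuming this ffmpeg build's ametadata filter supports the file option, whose special value "-" means standard output): have ametadata write only the frame/pts_time and key lines to stdout, so the [Parsed_ametadata_1 @ ...] prefixes and the summary statistics stay on stderr and can be discarded.
ffmpeg -i loudSoft.mp3 -af astats=metadata=1:reset=1,ametadata=print:key=lavfi.astats.Overall.RMS_level:file=- -f null - 2>/dev/null
A script reading that stream then only needs to track the largest RMS_level value and the pts_time line that precedes it.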
-
Controlling ffmpeg at runtime with zmq
3 April 2023, by Gavin
I want to dynamically change the rectilinear view (e.g. its yaw) of a 360 video as it plays.


basic command


To take a 360 video and show a flat/normal view at a yaw perspective of 60 degrees


ffmpeg -i input360.mp4 -vf "v360=input=e:rectilinear:yaw=60,scale=iw/4:-1" out60.mp4

This works fine.

Changing the yaw during playback


I understand ffmpeg has two methods to change filter parameters at runtime: sendcmd (file based) and zmq (message based).

I got the sendcmd method working, but am struggling to understand zmq syntax and use; ffmpeg's zmq docs are pretty sparse.
I am using a local Windows 10 PC.


sendcmd


ffplay -i input360.mp4 -vf "sendcmd=f=cmd.txt,v360=input=e:rectilinear:reset_rot=1,scale=iw/4:-1"


with a cmd.txt file


0-5 [expr] v360 yaw 'lerp(0,90,TI)';
5-10 [expr] v360 yaw 'lerp(90,0,TI)';



Result: the yaw changes from 0 to 90 degrees over t=0-5 s, and then from 90 back to 0 degrees over t=5-10 s. Perfect.


zmq


ffplay -i input360.mp4 -vf "v360=input=e:rectilinear:reset_rot=1,zmq,scale=iw/4:-1"


I got zmqsend from ffmpeg-tools.zip and added it to my ffmpeg bin directory.

- Execute the above ffplay command from terminal window #1, and see the video playing.
- Open terminal window #2 and execute: echo v360 yaw 90 > zmqsend

Result: no change to the video's yaw. No errors either.


What am I doing wrong? I'm not sure if my ffmpeg command is wrong, or if my zmq message does not reach ffmpeg (or both). I checked that my ffmpeg v2022-10-10 build is configured with --enable-libzmq.
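Note: a sketch of the usage shown in the ffmpeg zmq filter documentation; the instance name Parsed_v360_0 is an assumption for this particular graph. The zmq filter binds tcp://*:5555 by default, the zmqsend tool connects to tcp://localhost:5555, and the command text is piped into zmqsend's standard input rather than redirected:
echo Parsed_v360_0 yaw 90 | zmqsend
The plain filter name v360 should also be accepted as the target. In a Windows cmd prompt, echo v360 yaw 90 > zmqsend creates a file named zmqsend instead of feeding the tool, which may be why no message ever reaches ffplay.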


-
Streaming H.264 over UDP using FFmpeg, and "dimensions not set" error
3 September 2015, by Baris Demiray
I'm trying to stream H.264 over UDP, with no luck so far. Here is minimal code with which you can reproduce the problem.
To compile,
g++ -o test -lavcodec -lavformat -lavutil test.cpp
Extra information: I start ffplay as follows. Currently it's of no use.
ffplay -i udp://127.0.0.1:8554/live.sdp
Output of my code (see the avio_open() call):
[libx264 @ 0x6a26c0] using mv_range_thread = 24
[libx264 @ 0x6a26c0] using cpu capabilities: MMX2 SSE2Fast SSSE3 SSE4.1 Cache64
[libx264 @ 0x6a26c0] profile High, level 3.1
Output #0, h264, to 'udp://127.0.0.1:8554/live.sdp':
Stream #0:0, 0, 0/0: Video: h264 (libx264), -1 reference frame, none, q=-1--1
[h264 @ 0x6a2020] dimensions not set
Cannot write header to stream: Success
And the code:
extern "C" {
#include <libavcodec/avcodec.h>
#include <libavformat/avformat.h>
#include <libavutil/avutil.h>
}
#include <iostream>
using namespace std;
int main() {
AVCodecContext* m_codecContext;
AVCodec* m_codec;
AVFormatContext* m_formatContext;
AVStream* m_stream;
unsigned m_outWidth = 768;
unsigned m_outHeight = 608;
av_register_all();
avcodec_register_all();
avformat_network_init();
int errorStatus = 0;
char errorLog[128] = { 0 };
av_log_set_level(AV_LOG_TRACE);
string m_output("udp://127.0.0.1:8554/live.sdp");
if (avformat_alloc_output_context2(&m_formatContext, NULL, "h264", m_output.c_str()) < 0) {
cerr << "Cannot allocate output context: "
<< av_make_error_string(errorLog, 128, errorStatus) << endl;
return -1;
}
AVOutputFormat *m_outputFormat = m_formatContext->oformat;
m_codec = avcodec_find_encoder(AV_CODEC_ID_H264);
if (!m_codec) {
cerr << "Cannot find an encoder: "
<< av_make_error_string(errorLog, 128, errorStatus) << endl;
return -1;
}
m_codecContext = avcodec_alloc_context3(m_codec);
if (!m_codecContext) {
cerr << "Cannot allocate a codec context: "
<< av_make_error_string(errorLog, 128, errorStatus) << endl;
return -1;
}
m_codecContext->pix_fmt = AV_PIX_FMT_YUV420P;
m_codecContext->width = m_outWidth;
m_codecContext->height = m_outHeight;
if (avcodec_open2(m_codecContext, m_codec, NULL) < 0) {
cerr << "Cannot open codec: "
<< av_make_error_string(errorLog, 128, errorStatus) << endl;
return -1;
}
m_stream = avformat_new_stream(m_formatContext, m_codec);
if (!m_stream) {
cerr << "Cannot create a new stream: "
<< av_make_error_string(errorLog, 128, errorStatus) << endl;
return -1;
}
av_dump_format(m_formatContext, 0, m_output.c_str(), 1);
if ((errorStatus = avio_open(&m_formatContext->pb, m_output.c_str(), AVIO_FLAG_WRITE)) < 0) {
cerr << "Cannot open output: "
<< av_make_error_string(errorLog, 128, errorStatus) << endl;
return -1;
}
if (avformat_write_header(m_formatContext, NULL) < 0) {
cerr << "Cannot write header to stream: "
<< av_make_error_string(errorLog, 128, errorStatus) << endl;
return -1;
}
cout << "All done." << endl;
return 0;
}
For those who have even more time to spare on my problem: when I change m_output to rtsp://127.0.0.1:8554/live.sdp, and the ffplay command to
ffplay -rtsp_flags listen -i rtsp://127.0.0.1:8554/live.sdp
I get the error:
[libx264 @ 0x1e056c0] using mv_range_thread = 24
[libx264 @ 0x1e056c0] using cpu capabilities: MMX2 SSE2Fast SSSE3 SSE4.1 Cache64
[libx264 @ 0x1e056c0] profile High, level 3.1
Output #0, h264, to 'rtsp://127.0.0.1:8554/live.sdp':
Stream #0:0, 0, 0/0: Video: h264 (libx264), -1 reference frame, none, q=-1--1
Cannot open output: Protocol not found
Am I naive to expect that the streaming protocol can be changed like this?
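Note: a hedged sketch of what the "dimensions not set" check is about, not a verified fix for this exact FFmpeg version. avformat_write_header validates each stream's codec parameters, and in the code above m_codecContext is configured (width, height, pixel format) but never attached to m_stream, so the stream the h264 muxer sees has no dimensions. On FFmpeg 3.1 and later the usual pattern is to copy the encoder settings onto the stream before writing the header, roughly:
// Hypothetical addition after avformat_new_stream() and before avformat_write_header():
// give the stream a time base and copy width/height/pix_fmt from the encoder context,
// so the muxer has valid dimensions to write (needs AVCodecParameters, FFmpeg >= 3.1).
m_stream->time_base = av_make_q(1, 25);  // assumed 25 fps, purely illustrative
if (avcodec_parameters_from_context(m_stream->codecpar, m_codecContext) < 0) {
    cerr << "Cannot copy codec parameters to the stream" << endl;
    return -1;
}
On the 2015-era API the equivalent information lived in m_stream->codec instead. As for the rtsp:// attempt, "Protocol not found" from avio_open is expected: RTSP output normally goes through the rtsp muxer selected by avformat_alloc_output_context2, not through avio_open.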