
Media (91)
-
Richard Stallman and free software
19 October 2011, by
Updated: May 2013
Language: French
Type: Text
-
Stereo master soundtrack
17 October 2011, by
Updated: October 2011
Language: English
Type: Audio
-
Elephants Dream - Cover of the soundtrack
17 October 2011, by
Updated: October 2011
Language: English
Type: Picture
-
#7 Ambience
16 October 2011, by
Updated: June 2015
Language: English
Type: Audio
-
#6 Teaser Music
16 October 2011, by
Updated: February 2013
Language: English
Type: Audio
-
#5 End Title
16 October 2011, by
Updated: February 2013
Language: English
Type: Audio
Other articles (18)
-
Automatic backup of SPIP channels
1 April 2010, by
As part of setting up an open platform, it is important for hosts to have reasonably regular backups available in order to guard against any potential problem.
This task relies on two SPIP plugins: Saveauto, which performs a regular backup of the database as a MySQL dump (usable in phpMyAdmin), and mes_fichiers_2, which produces a zip archive of the site's important data (documents, elements (...)
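By way of illustration only (this is not how the Saveauto or mes_fichiers_2 plugins are actually implemented), the same idea can be sketched as a nightly cron job; the paths, database name and credentials below are placeholders:
# hypothetical crontab entry: dump the database and zip the site's uploads every night at 03:00
0 3 * * * mysqldump -u spip_user -p'secret' spip_db > /var/backups/spip_db.sql && zip -qr /var/backups/spip_files.zip /var/www/mediaspip/IMG
-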
Other interesting software
12 April 2011, by
We don't claim to be the only ones doing what we do ... and we certainly don't claim to be the best either ... We just try to do it well, and better and better...
The list below contains software that more or less aims to do what MediaSPIP does, or that MediaSPIP more or less tries to do the same as, whichever way round ...
We don't know them and haven't tried them, but you may want to take a look.
Videopress
Website: (...)
-
Encoding and processing into web-friendly formats
13 April 2011, by
MediaSPIP automatically converts uploaded files to internet-compatible formats.
Video files are encoded to MP4, OGV and WebM (played back via HTML5), with MP4 also used for Flash playback.
Audio files are encoded to MP3 and Ogg (played back via HTML5), with MP3 also used for Flash playback.
Where possible, text is analyzed to extract the data needed for search-engine indexing, and is then exported as a series of image files.
All uploaded files are stored online in their original format, so you can (...)
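As a rough illustration of what such a conversion step can look like (this is not necessarily the pipeline MediaSPIP runs internally, and the file names are placeholders), the equivalent ffmpeg commands would be along these lines:
$ # hypothetical video conversions: HTML5-friendly WebM, OGV and MP4 (H.264/AAC)
$ ffmpeg -i upload.mov -c:v libvpx -c:a libvorbis upload.webm
$ ffmpeg -i upload.mov -c:v libtheora -c:a libvorbis upload.ogv
$ ffmpeg -i upload.mov -c:v libx264 -c:a aac upload.mp4
$ # hypothetical audio conversions: MP3 and Ogg Vorbis
$ ffmpeg -i upload.wav -c:a libmp3lame upload.mp3
$ ffmpeg -i upload.wav -c:a libvorbis upload.ogg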
On other websites (4541)
-
iOS Objective C Opus audio stream kAudioFormatOpus — specification / conversion query
16 April 2022, by deffodeffo
I'm working on a React Native voice app and want to record straight to an Opus stream.


On the iOS side at device level, I'm working in Objective C and I'm using an AVAudioSession recorder with formatID set to kAudioFormatOpus. The recorder captures audio data in the specified format and passes packets for upstreaming to the React Native app.


This is all working well and emitting a stream of audio data when I run my code on an iOS Simulator or real device.


My problem is when I receive the audio stream in the backend, I find I'm unable to reliably decode the Opus stream using ffmpeg. (I wish to decode to PCM.)


Whilst I could post up specific code to show what I'm doing in more detail, my question at this stage is more generic in nature:


Is anyone familiar with the format of the Opus audio stream generated by the iOS AVAudioSession recorder under the audio format kAudioFormatOpus? Can anyone suggest proven conversion techniques (e.g. some ffmpeg commands; one decoding sketch is appended after this post), or point me to format specs so I can figure out what is going on here?


The Apple Developer documentation contains zero useful information: https://developer.apple.com/documentation/coreaudiotypes/1572096-audio_data_format_identifiers/kaudioformatopus?changes=la__2&language=objc


I have looked all over the internet, but I'm unable to find any useful spec info that corresponds to the stream that AVAudioSession recorder is outputting. I have seen a few posts which say Apple have not fully complied with the Opus spec, but I don't know enough about the proper structure of Opus to ascertain this for myself.


Any help would be very much appreciated.


Thanks all
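For reference, if the packets can be wrapped server-side in a container ffmpeg understands (an Ogg Opus file, for example), decoding to PCM is straightforward; the file names below are placeholders, and this assumes the containerising step has already been solved:
$ # decode an Ogg Opus capture to a WAV file
$ ffmpeg -i capture.opus capture.wav
$ # or to raw 16-bit little-endian PCM at 48 kHz, mono
$ ffmpeg -i capture.opus -f s16le -ar 48000 -ac 1 capture.pcm
If what reaches the backend is a bare concatenation of Opus packets with no framing, ffmpeg will likely need them wrapped (or length-prefixed and re-muxed) first, since plain Opus packets do not carry their own boundaries.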


-
ffplay: how does it calculate the fps for playback?
21 October 2020, by Daniel
I'm trying to play back a live media stream (H.264) which is produced by a hardware encoder.


The desired FPS on the encoder is set to 20, and the encoder's logs print FPS statistics every minute:


2020-10-21 17:26:54.787 [ info] video_stream_thread(),video chn 0, fps: 19.989270
2020-10-21 17:27:54.836 [ info] video_stream_thread(),video chn 0, fps: 19.989270
2020-10-21 17:28:54.837 [ info] video_stream_thread(),video chn 0, fps: 20.005924
2020-10-21 17:29:54.837 [ info] video_stream_thread(),video chn 0, fps: 19.989270
2020-10-21 17:30:54.888 [ info] video_stream_thread(),video chn 0, fps: 19.989274
2020-10-21 17:31:54.918 [ info] video_stream_thread(),video chn 0, fps: 19.989264



You can see it varies around 20, but not by much.


Question 1: Is this normal, or should it be exactly 20 every time? To avoid confusion: I'd like to know whether, by the H.264 standard, this can be accepted as a valid stream, or whether it violates some rule.


I'm trying to play back this stream with ffplay:

$ ffplay rtsp://this_stream
Input #0, rtsp, from 'xyz'
 Metadata:
 title : 
 comment : substream
 Duration: N/A, start: 0.040000, bitrate: N/A
 Stream #0:0: Video: h264 (Constrained Baseline), yuv420p(progressive), 640x360, 25 fps, 25 tbr, 90k tbn, 180k tbc



The thing is that ffplay thinks this is a 25 fps stream. It also plays 25 frames per second, causing playback to stall and buffer every few seconds.


I believe the fps is calculated from pts/dts values in the stream itself and is not hardcoded. Am I wrong here?


If I'm not wrong, why does ffplay think this stream runs at 25 fps, when it actually runs at (around) 20?
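One way to see what the demuxer actually reports (and where the 25 fps figure comes from) is to query the stream with ffprobe; the URL below is the same placeholder as in the ffplay command above:
$ # compare the declared and estimated frame rates of the video stream
$ ffprobe -v error -select_streams v:0 -show_entries stream=r_frame_rate,avg_frame_rate,time_base rtsp://this_stream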


-
syncing a video file with audio using ffmpeg is returning a video without audio
31 July 2019, by abbood
When I attempt to sync a .mov video with an .m4a audio file using ffmpeg like so:
ffmpeg -ss 00:00:2 -i test.m4a -i test.mov -c:v copy -c:a aac -strict experimental output.mp4
it works like a charm. But this time I've recorded a usability test video using Adobe XD, which produced an .mp4 (MPEG-4) video, and I manually recorded the audio with QuickTime, which produced an .m4a audio file.
Trying to merge them using ffmpeg is returning a video without any audio:
ffmpeg -i video.mp4 -i audio.m4a -c:v copy -c:a aac -strict experimental output.mp4
ffmpeg version 4.1 Copyright (c) 2000-2018 the FFmpeg developers
built with Apple LLVM version 10.0.0 (clang-1000.11.45.5)
configuration: --prefix=/usr/local/Cellar/ffmpeg/4.1_1 --enable-shared --enable-pthreads --enable-version3 --enable-hardcoded-tables --enable-avresample --cc=clang --host-cflags= --host-ldflags= --enable-ffplay --enable-gpl --enable-libmp3lame --enable-libopus --enable-libsnappy --enable-libtheora --enable-libvorbis --enable-libvpx --enable-libx264 --enable-libx265 --enable-libxvid --enable-lzma --enable-opencl --enable-videotoolbox
libavutil 56. 22.100 / 56. 22.100
libavcodec 58. 35.100 / 58. 35.100
libavformat 58. 20.100 / 58. 20.100
libavdevice 58. 5.100 / 58. 5.100
libavfilter 7. 40.101 / 7. 40.101
libavresample 4. 0. 0 / 4. 0. 0
libswscale 5. 3.100 / 5. 3.100
libswresample 3. 3.100 / 3. 3.100
libpostproc 55. 3.100 / 55. 3.100
Input #0, mov,mp4,m4a,3gp,3g2,mj2, from 'video.mp4':
Metadata:
major_brand : qt
minor_version : 0
compatible_brands: qt
creation_time : 2019-07-31T14:43:18.000000Z
com.apple.quicktime.make: Apple
com.apple.quicktime.model: MacBookPro15,1
com.apple.quicktime.software: Mac OS X 10.14.5 (18F203)
com.apple.quicktime.creationdate: 2019-07-31T17:43:17+0300
Duration: 00:03:46.05, start: 0.000000, bitrate: 498 kb/s
Stream #0:0(und): Video: h264 (Main) (avc1 / 0x31637661), yuv420p(tv, bt709), 374x812 [SAR 1:1 DAR 187:406], 390 kb/s, 60 fps, 60 tbr, 6k tbn, 12k tbc (default)
Metadata:
creation_time : 2019-07-31T14:43:18.000000Z
handler_name : Core Media Video
encoder : H.264
Stream #0:1(und): Audio: aac (LC) (mp4a / 0x6134706D), 8000 Hz, stereo, fltp, 0 kb/s (default)
Metadata:
creation_time : 2019-07-31T14:43:18.000000Z
handler_name : Core Media Audio
Input #1, mov,mp4,m4a,3gp,3g2,mj2, from 'audio.m4a':
Metadata:
major_brand : M4A
minor_version : 0
compatible_brands: M4A mp42isom
creation_time : 2019-07-31T14:47:26.000000Z
iTunSMPB : 00000000 00000840 00000000 00000000009DFBC0 00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000
Duration: 00:03:54.78, start: 0.047891, bitrate: 225 kb/s
Stream #1:0(und): Audio: aac (LC) (mp4a / 0x6134706D), 44100 Hz, mono, fltp, 224 kb/s (default)
Metadata:
creation_time : 2019-07-31T14:47:26.000000Z
handler_name : Core Media Audio
Stream mapping:
Stream #0:0 -> #0:0 (copy)
Stream #0:1 -> #0:1 (aac (native) -> aac (native))
Press [q] to stop, [?] for help
[aac @ 0x7fb5a4806a00] Too many bits 16384.000000 > 12288 per frame requested, clamping to max
Output #0, mp4, to 'output.mp4':
Metadata:
major_brand : qt
minor_version : 0
compatible_brands: qt
com.apple.quicktime.creationdate: 2019-07-31T17:43:17+0300
com.apple.quicktime.make: Apple
com.apple.quicktime.model: MacBookPro15,1
com.apple.quicktime.software: Mac OS X 10.14.5 (18F203)
encoder : Lavf58.20.100
Stream #0:0(und): Video: h264 (Main) (avc1 / 0x31637661), yuv420p(tv, bt709), 374x812 [SAR 1:1 DAR 187:406], q=2-31, 390 kb/s, 60 fps, 60 tbr, 12k tbn, 6k tbc (default)
Metadata:
creation_time : 2019-07-31T14:43:18.000000Z
handler_name : Core Media Video
encoder : H.264
Stream #0:1(und): Audio: aac (LC) (mp4a / 0x6134706D), 8000 Hz, stereo, fltp, 96 kb/s (default)
Metadata:
creation_time : 2019-07-31T14:43:18.000000Z
handler_name : Core Media Audio
encoder : Lavc58.35.100 aac
frame=13563 fps=0.0 q=-1.0 Lsize= 10903kB time=00:03:46.08 bitrate= 395.1kbits/s speed= 916x
video:10699kB audio:10kB subtitle:0kB other streams:0kB global headers:0kB muxing overhead: 1.805767%
How can I get around this limitation?
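If the mapping shown in the log is the culprit (the line "Stream #0:1 -> #0:1" indicates the output audio is taken from video.mp4's own near-silent 8 kHz track rather than from audio.m4a), explicitly mapping the streams should help. A minimal sketch, keeping the original file names:
$ # take the video from input 0 and the audio from input 1, and stop at the shorter of the two
$ ffmpeg -i video.mp4 -i audio.m4a -map 0:v:0 -map 1:a:0 -c:v copy -c:a aac -shortest output.mp4
The -shortest option is optional; it simply trims the output to the shorter input, since the two recordings differ in length (00:03:46 vs 00:03:54 in the log above).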