
Other articles (36)
-
The SPIPmotion queue
28 November 2010
A queue stored in the database
When it is installed, SPIPmotion creates a new table in the database named spip_spipmotion_attentes.
This new table is made up of the following fields: id_spipmotion_attente, the unique numeric identifier of the task to be processed; id_document, the numeric identifier of the original document to encode; id_objet, the unique identifier of the object to which the encoded document should be attached automatically; objet, the type of object to which (...)
-
Contribute to documentation
13 April 2011
Documentation is vital to the development of improved technical capabilities.
MediaSPIP welcomes documentation by users as well as developers, including: critique of existing features and functions; articles contributed by developers, administrators, content producers and editors; screenshots to illustrate the above; translations of existing documentation into other languages.
To contribute, register for the project users’ mailing (...)
-
Use, discuss, criticize
13 April 2011
Talk to people directly involved in MediaSPIP’s development, or to people around you who could use MediaSPIP to share, enhance or develop their creative projects.
The bigger the community, the more MediaSPIP’s potential will be explored and the faster the software will evolve.
A discussion list is available for all exchanges between users.
On other sites (5836)
-
avformat/dump: Print stream start offsets for input streams
23 February 2019, by softworkz
avformat/dump: Print stream start offsets for input streams
Seeing the offset of video and audio streams to each other is often
a useful metric in diagnosing and understanding issues with playback
or transcoding.
This commit adds those offsets to the stream info print.

Signed-off-by: softworkz <softworkz@hotmail.com>
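(For reference, the per-stream start time such an offset would be derived from is exposed by libavformat's public API. Below is a minimal, hedged C sketch of that kind of computation, not the commit's actual code.)

#include <libavformat/avformat.h>

/* Hedged sketch, not the code of the commit itself: compute a stream's start
 * time in seconds from the public AVStream fields. The offset between two
 * streams (e.g. video vs. audio) is then the difference of these values. */
static double stream_start_seconds(const AVStream *st)
{
    if (st->start_time == AV_NOPTS_VALUE)
        return 0.0;                               /* start time unknown */
    return st->start_time * av_q2d(st->time_base);
}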
-
Start Android Java VM from native C code NDK (ffmpeg mediacodec use)
19 April 2021, by eusoubrasileiro
I managed to cross-compile ffmpeg using the NDK for armv8a API 27 with MediaCodec hardware acceleration support.

Using root, after setting permissions, folders and LD_LIBRARY_PATH properly, I can run it without problems in a terminal session (ssh), but only if I don't try to use the -hwaccel option.

If I try to run something using -hwaccel, like:

ffmpeg -rtsp_transport tcp -an -hwaccel mediacodec -c:v hevc_mediacodec -i rtsp://user:pass@192.168.0.100:554/onvif1 -f null - -benchmark



I get the error below about "No Java virtual machine".

...
Input #0, rtsp, from 'rtsp://user:pass@192.168.0.100:554/onvif1':
 Metadata:
 title : H.265 Video, RtspServer_0.0.0.2
 Duration: N/A, start: 0.000000, bitrate: N/A
 Stream #0:0: Video: hevc (Main), yuv420p(tv, bt470bg), 1920x1080 [SAR 1:1 DAR 16:9], 10 fps, 10 tbr, 90k tbn, 10 tbc
 Stream #0:1: Audio: pcm_alaw, 8000 Hz, mono, s16, 64 kb/s
[amediaformat @ 0x7e2ea27300] No Java virtual machine has been registered
[hevc_mediacodec @ 0x7e2eb44f00] Failed to create media format
Stream mapping:
 Stream #0:0 -> #0:0 (hevc (hevc_mediacodec) -> wrapped_avframe (native))
Error while opening decoder for input stream #0:0 : Generic error in an external library



Would it be possible to start (create or launch?) the Dalvik Java VM directly from the C code and make it visible to ffmpeg? I don't even know if those are the correct terms.

Any information that will help an Android newbie get on his feet will be greatly appreciated. If that is possible, I would write a little patch for the ffmpeg code.

I really would not like to package this in an app just to be able to experiment with this.
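For what it's worth, FFmpeg does expose av_jni_set_java_vm() in libavcodec/jni.h for handing it an already created VM. The following is only a rough C sketch of what the asker describes, under heavy assumptions: it presumes that libart.so (or libnativehelper.so) can be dlopen()ed from this process and exports JNI_CreateJavaVM on this device and API level, which is not guaranteed.

#include <dlfcn.h>
#include <stdio.h>
#include <jni.h>
#include <libavcodec/jni.h>   /* av_jni_set_java_vm() */

typedef jint (*JNI_CreateJavaVM_fn)(JavaVM **vm, void **env, void *args);

/* Assumption: the ART runtime library is loadable here and exports
 * JNI_CreateJavaVM; on many devices/API levels the linker namespace
 * restrictions forbid this. */
static JavaVM *create_vm(void)
{
    void *handle = dlopen("libart.so", RTLD_NOW);
    if (!handle)
        handle = dlopen("libnativehelper.so", RTLD_NOW);
    if (!handle) {
        fprintf(stderr, "cannot load ART runtime: %s\n", dlerror());
        return NULL;
    }

    JNI_CreateJavaVM_fn create =
        (JNI_CreateJavaVM_fn)dlsym(handle, "JNI_CreateJavaVM");
    if (!create) {
        fprintf(stderr, "JNI_CreateJavaVM not found: %s\n", dlerror());
        return NULL;
    }

    JavaVMInitArgs args = {
        .version            = JNI_VERSION_1_6,
        .nOptions           = 0,
        .options            = NULL,
        .ignoreUnrecognized = JNI_TRUE,
    };

    JavaVM *vm  = NULL;
    void   *env = NULL;
    if (create(&vm, &env, &args) != JNI_OK) {
        fprintf(stderr, "JNI_CreateJavaVM failed\n");
        return NULL;
    }
    return vm;
}

int main(void)
{
    JavaVM *vm = create_vm();
    if (!vm)
        return 1;

    /* Register the VM with FFmpeg so the mediacodec components can find it
     * instead of failing with "No Java virtual machine has been registered". */
    if (av_jni_set_java_vm(vm, NULL) < 0)
        return 1;

    /* ...from here one could call into the FFmpeg libraries (or a patched
     * ffmpeg main) with -hwaccel mediacodec / hevc_mediacodec. */
    return 0;
}

Whether this works in practice depends on Android's linker namespace restrictions, which is exactly the kind of thing a small ffmpeg patch would have to deal with.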


-
ffmpeg: How to keep audio synced when doing many (100) cuts with filter select='between(t,start,stop)+between...'
4 February 2021, by Louis
I am cutting out the silent parts of a 45-minute video (a lecture). To do this, I use a filter to select, say, one hundred non-silent parts (I already know their start and end times).


ffmpeg -i in.mp4
-vf "select='between(t,start_1,stop_1)+...+between(t,start_100,stop_100)', setpts=N/FRAME_RATE/TB"
-af "aselect='between(t,start_1,stop_1)+...+between(t,start_100,stop_100)', asetpts=N/SR/TB"
-c:a aac -c:v libx264 out.mp4



It works, but at the end of the video the images are delayed relative to the audio.
After reading this answer I also added


-shortest -avoid_negative_ts make_zero -fflags +genpts



at the end of the command. It didn't help.


As audio and video are concatenated independently, I'm not surprised that tiny time errors due to the finite frame rate add up.


Is there a solution that doesn't involve saving every non-silent part as a file?
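As a rough, purely illustrative C sketch of that accumulation (the frame rate and the one-sided worst case are assumed here, since the question does not state them):

#include <stdio.h>

/* Back-of-the-envelope sketch with assumed numbers: each kept segment's video
 * length is a whole number of frames, while its audio length is
 * sample-accurate, so every boundary can be off by up to one frame duration.
 * If those errors all happen to go the same way, they add up. */
int main(void)
{
    const double fps            = 25.0;        /* assumed frame rate */
    const double frame_duration = 1.0 / fps;   /* 40 ms per frame */
    const int    segments       = 100;         /* number of kept parts */

    double worst_case_drift = segments * frame_duration;
    printf("worst-case accumulated A/V drift: %.1f s\n", worst_case_drift);
    return 0;
}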