
Media (1)
-
The Great Big Beautiful Tomorrow
28 October 2011
Updated: October 2011
Language: English
Type: Text
Other articles (98)
-
MediaSPIP 0.1 Beta version
25 April 2011. MediaSPIP 0.1 beta is the first version of MediaSPIP proclaimed as "usable".
The zip file provided here only contains the sources of MediaSPIP in its standalone version.
To get a working installation, you must manually install all software dependencies on the server.
If you want to use this archive for an installation in "farm mode", you will also need to carry out other manual (...)
-
HTML5 audio and video support
13 April 2011. MediaSPIP uses HTML5 video and audio tags to play multimedia files, taking advantage of the latest W3C innovations supported by modern browsers.
The MediaSPIP player used has been created specifically for MediaSPIP and can be easily adapted to fit in with a specific theme.
For older browsers, the Flowplayer Flash fallback is used.
MediaSPIP allows for media playback on major mobile platforms with the above (...)
-
APPENDIX: The plugins used specifically for the farm
5 March 2010. The central/master site of the farm needs several additional plugins, beyond those of the channels, to work properly: the Gestion de la mutualisation plugin; the inscription3 plugin, to manage registrations and requests to create a pooled instance at user sign-up; the verifier plugin, which provides a field-validation API (used by inscription3); the champs extras v2 plugin, required by inscription3 (...)
On other sites (9727)
-
given a list of mp4, generate fmp4 and concatenate
6 August 2021, by onion. My input is a list of mp4 files (for example, each one is 10 s long); each mp4 has a correct timestamp, for example the second mp4 covers the 10 s to 20 s range.


To simulate my input, I generate the list of mp4 files this way:


ffmpeg -i ../origin-long-video.mp4 -map 0 -c copy -f segment -segment_time 10 -force_key_frames "expr:gte(t,n_forced*2)" -reset_timestamps 0 videos/output_%03d.mp4



Note that I use -reset_timestamps 0 so that the timestamps are preserved.


Then I convert each mp4 to a fragmented mp4 by using:


ffmpeg -y -i videos/output_001.mp4 -force_key_frames "expr:gte(t,n_forced*2)" -sc_threshold 0 -s 1280x720 -c:v libx264 -b:v 1500k -c:a copy -hls_time 100 -hls_playlist_type vod -hls_segment_type fmp4 -hls_segment_filename "hls1/file%d.m4s" -copyts hls1/index.m3u8



The above command is for the first mp4 file; I did the same operation for the other mp4 files in the list.
Note that I used a large -hls_time so that each mp4 results in a single fmp4 fragment, and I used -copyts to preserve the timestamps.
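As a sanity check (not part of the question's pipeline), whether the offsets really survived segmentation can be verified with ffprobe, which ships with ffmpeg; the videos/output_*.mp4 names are the segment files produced by the first command above:

```shell
# Print each segment's container start time; with -reset_timestamps 0
# these should increase by roughly 10 s per file instead of restarting at 0.
for f in videos/output_*.mp4; do
  printf '%s: ' "$f"
  ffprobe -v error -show_entries format=start_time \
          -of default=noprint_wrappers=1:nokey=1 "$f"
done
```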


Then, to concatenate for example the 2nd and the 3rd fmp4 into one mp4, I used:


cat init.mp4 > rs.mp4
cat 2nd fmp4 >> rs.mp4
cat 3rd fmp4 >> rs.mp4



However, when playing the generated rs.mp4, there are 20 s of black screen followed by 10 s of video (corresponding to the 3rd mp4).


I tried another approach, which generates a list of fmp4 fragments directly like this:


ffmpeg -y -i ../origin-long-video.mp4 -force_key_frames "expr:gte(t,n_forced*2)" -sc_threshold 0 -s 1280x720 -c:v libx264 -b:v 1500k -c:a copy -hls_time 10 -hls_playlist_type vod -hls_segment_type fmp4 -hls_segment_filename "videos/file%d.m4s" videos/index.m3u8



If I then concatenate the 2nd and 3rd generated fmp4 in the same way as above, the resulting mp4 plays well.


I wonder what the difference is between the fragmented mp4 files generated by the two approaches, such that they lead to different behavior when concatenated. Thank you!
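For comparison, ffmpeg's concat demuxer re-muxes segments instead of byte-concatenating them, which lets the muxer regenerate contiguous timestamps; a sketch, assuming the hypothetical videos/output_00x.mp4 segment names from the first command above:

```shell
# List the segments to join (names are the example segment outputs above)
printf "file 'videos/output_002.mp4'\nfile 'videos/output_003.mp4'\n" > list.txt
# Re-mux without re-encoding; the concat demuxer rewrites the timestamps
ffmpeg -y -f concat -safe 0 -i list.txt -c copy rs.mp4
```

-safe 0 is needed because the listed paths contain a directory component.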


-
Revision 4e2732c3c3: Separate encode_block for pass 1 and 2
23 October 2013, by Jingning Han
Changed Paths:
Modify /vp9/encoder/vp9_encodemb.c
Separate encode_block for pass 1 and 2
The encode_block for pass 1 takes simpler functionalities and can save a few branches. The main reason is to make encode_block only used after running rate-distortion optimization search in pass 2, hence allowing the dual buffer stack approach later.
Change-Id: I9e549ffb758e554fe185e48a07d6e0e01e475bcf
-
How to use FFMPEG to write image as bytes using ProcessBuilder
7 May 2022, by ljnoah. I have a callback function that gives me frames as a bytes type, which I would like to pass to FFMPEG so that they are written to an rtmp URL. But I don't really have any experience with ffmpeg, so I have not been able to find an example of how to do it. Basically, I would like to know how I can take the byte array FrameData, which holds the images I am getting, and write it to ffmpeg via ProcessBuilder so it is streamed to a server.


private byte[] FrameData = new byte[384 * 288 * 4];
private final IFrameCallback mIFrameCallback = new IFrameCallback() {
    @Override
    public void onFrame(final ByteBuffer frameData) {
        frameData.clear();
        frameData.get(FrameData, 0, frameData.capacity());
        ProcessBuilder pb = new ProcessBuilder(ffmpeg, "-y",
                "-f", "rawvideo", "-vcodec", "rawvideo", "-pix_fmt", "bgr24",
                "-r", "25",
                "-i", "-",
                "-c:v", "libx264",
                "-pix_fmt", "yuv420p",
                "-preset", "ultrafast",
                "-f", "flv",
                "rtmp://192.168.0.13:1935/live/test");
        Log.e(TAG, "mIFrameCallback: onFrame------");
        try {
            pb.inheritIO().start().waitFor();
        } catch (InterruptedException | IOException e) {
            e.printStackTrace();
        }
    }
};



This callback gives me the frames from my camera on the fly and writes them to FrameData, which I can compress to a bitmap if needed. The current attempt isn't working, as I have no idea how to pass my byte array to ffmpeg so that the frames stored in the FrameData byte buffer are pushed via RTMP/RTSP to my server IP. In Python, I would use a similar approach like this:


import subprocess
import numpy as np
fps = 25
width = 224
height = 224
command = ['ffmpeg', '-y', '-f', 'rawvideo', '-vcodec', 'rawvideo', '-pix_fmt', 'bgr24',
 '-s', "{}x{}".format(width, height),
 '-r', str(fps),
 '-i', '-',
 '-c:v', 'libx264',
 '-pix_fmt', 'yuv420p',
 '-preset', 'ultrafast',
 '-f', 'flv',
 'rtmp://192.168.0.13:1935/live/test']
p = subprocess.Popen(command, stdin=subprocess.PIPE)
while(True):
 frame = np.random.randint([255], size=(224, 224, 3))
 frame = frame.astype(np.uint8)
 p.stdin.write(frame.tobytes())



I really don't understand how to write my byte arrays to ffmpeg in Java the way I would in the Python example above.
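What the Python example does with p.stdin.write can be done in Java through Process.getOutputStream(). The sketch below is a hypothetical helper (the FfmpegPipe class and streamFrames method are my own names, not from the question), and it has not been tested against a live rtmp server; the command line, resolution, and URL are taken from the question:

```java
import java.io.IOException;
import java.io.OutputStream;
import java.util.List;

public class FfmpegPipe {
    // Start the given command and stream each frame's raw bytes to its stdin;
    // closing stdin signals end-of-stream. Returns the process exit code.
    public static int streamFrames(List<String> command, List<byte[]> frames)
            throws IOException, InterruptedException {
        ProcessBuilder pb = new ProcessBuilder(command);
        pb.redirectErrorStream(true);                       // fold stderr into stdout
        pb.redirectOutput(ProcessBuilder.Redirect.INHERIT); // show logs on the console
        Process p = pb.start();
        try (OutputStream stdin = p.getOutputStream()) {
            for (byte[] frame : frames) {
                stdin.write(frame);                         // one raw frame per write
            }
        }
        return p.waitFor();
    }

    public static void main(String[] args) throws Exception {
        // Command mirrors the Python example; note the added -s so ffmpeg knows
        // the raw frame geometry (384x288 matches the FrameData buffer above).
        List<String> cmd = List.of("ffmpeg", "-y",
                "-f", "rawvideo", "-vcodec", "rawvideo", "-pix_fmt", "bgr24",
                "-s", "384x288", "-r", "25",
                "-i", "-",
                "-c:v", "libx264", "-pix_fmt", "yuv420p", "-preset", "ultrafast",
                "-f", "flv", "rtmp://192.168.0.13:1935/live/test");
        byte[] frame = new byte[384 * 288 * 3];             // one bgr24 frame
        streamFrames(cmd, List.of(frame));
    }
}
```

In the callback, the process would be started once (not on every onFrame) and FrameData written to the kept-open stream on each frame. Note that bgr24 is 3 bytes per pixel, so a 384 * 288 * 4 buffer would instead match a 4-channel input format such as bgra.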