
Media (1)
-
The Pirate Bay from Belgium
1 April 2013, by
Updated: April 2013
Language: French
Type: Image
Other articles (112)
-
Keeping control of your media in your hands
13 April 2011, by
The vocabulary used on this site, and around MediaSPIP in general, aims to avoid reference to Web 2.0 and the companies that profit from media-sharing.
While using MediaSPIP, you are invited to avoid using words like "Brand", "Cloud" and "Market".
MediaSPIP is designed to facilitate the sharing of creative media online, while allowing authors to retain complete control of their work.
MediaSPIP aims to be accessible to as many people as possible and development is based on expanding the (...)
-
Participating in its translation
10 April 2011
You can help us improve the wording used in the software, or translate it into any new language so that it can reach new linguistic communities.
To do this, we use SPIP's translation interface, where all of MediaSPIP's language modules are available. You simply need to subscribe to the translators' mailing list to ask for more information.
At present, MediaSPIP is only available in French and (...)
-
Supporting all media types
13 April 2011, by
Unlike most software and media-sharing platforms, MediaSPIP aims to manage as many different media types as possible. The following are just a few examples from an ever-expanding list of supported formats: images: png, gif, jpg, bmp and more; audio: MP3, Ogg, Wav and more; video: AVI, MP4, OGV, mpg, mov, wmv and more; text, code and other data: OpenOffice, Microsoft Office (Word, PowerPoint, Excel), web (html, CSS), LaTeX, Google Earth and (...)
On other sites (12897)
-
Videos written with moviepy on Amazon AWS S3 are empty
10 April 2019, by cellistigs
I am working on processing a dataset of large videos (100 GB) for a collaborative project. To make it easier to share data and results, I am keeping all videos remotely in an Amazon S3 bucket, and processing them by mounting the bucket on an EC2 instance.
One of the processing steps I am trying to do involves cropping the videos, and rewriting them into smaller segments. I am doing this with moviepy, splitting the video with the subclip method and calling:
subclip.write_videofile("PathtoS3Bucket" + VideoName.split('.')[0] + 'part' + str(segment) + '.mp4', codec='mpeg4', bitrate='1500k', threads=2)
I found that when the videos are too large (parameters set as above), calls to this function will sometimes generate empty files in my S3 bucket (10% of the time). Does anyone have insight into features of moviepy/ffmpeg/S3 that would lead to this?
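For context, a minimal sketch of the segmenting loop being described, assuming a hypothetical local input path and fixed-length segments. The path, the segment length, and the idea of writing each segment locally before uploading it to S3 are assumptions of this sketch, not details from the question:

from moviepy.editor import VideoFileClip

# Hypothetical input path and segment length; the question writes directly
# to a path on the mounted S3 bucket instead of a local directory.
VIDEO_PATH = "big_session.mp4"
SEGMENT_SECONDS = 600

clip = VideoFileClip(VIDEO_PATH)
video_name = VIDEO_PATH.split('.')[0]

segment = 0
start = 0
while start < clip.duration:
    end = min(start + SEGMENT_SECONDS, clip.duration)
    subclip = clip.subclip(start, end)
    # Same call as in the question, but targeting a local file; the finished
    # segment could then be uploaded to S3 in a separate step.
    subclip.write_videofile(
        video_name + "part" + str(segment) + ".mp4",
        codec="mpeg4",
        bitrate="1500k",
        threads=2,
    )
    start = end
    segment += 1

clip.close()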
-
FFmpeg loses drop frame flag when converting from 29.97 mov to mp4
18 December 2018, by dementis
I've got a 29.97 drop frame video, proven with the mediainfo output lines framerate: 29.97 and Delay_DropFrame: Yes. I want to convert it to an mp4. Running
ffmpeg -i INPUT.mov -y -b:v 1500k -vcodec libx264 -vf scale=640:-1 -pix_fmt yuv420p -movflags +faststart -strict -2 OUTPUT.mp4
yields a non-drop-frame video. When I run mediainfo against the new file, I can see that the frame rate is still 29.97, but there is no indication of drop frame anymore.
This also happens when I run the above outputting an mov, and also when I run ffmpeg -i INPUT.mov OUTPUT.mov, so I think the data is being lost purely in the conversion. One lead I'm following is that mediainfo outputs three sections for the original mov ([General], [Video], [Other]), while my new mp4 only outputs [General] and [Video].
My ffmpeg version is 3.4.1.
Any ideas?
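A small sketch of the before/after check being described, assuming the mediainfo CLI is on the PATH. The Delay_DropFrame parameter name comes from the question's own mediainfo output; the --Inform template syntax used here is an assumption worth verifying against your mediainfo version:

import subprocess

def drop_frame_flag(path):
    # Drop-frame information lives in the timecode ("Other") section that the
    # question notes is present in the original mov but missing from the mp4.
    result = subprocess.run(
        ["mediainfo", "--Inform=Other;%Delay_DropFrame%", path],
        capture_output=True, text=True, check=True,
    )
    return result.stdout.strip()

print("original: ", drop_frame_flag("INPUT.mov"))   # expected: Yes
print("converted:", drop_frame_flag("OUTPUT.mp4"))  # empty output -> flag lost
-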
Does a track run in a fragmented MP4 have to start with a key frame?
18 January 2021, by stevendesu
I'm ingesting an RTMP stream and converting it to a fragmented MP4 file in JavaScript. It took a week of work, but I'm almost finished with this task. I'm generating a valid ftyp atom, moov atom, and moof atom, and the first frame of the video actually plays (with audio) before it goes into infinite buffering with no errors listed in chrome://media-internals.
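As an aside, the atoms being generated all share the same layout: a 32-bit big-endian size, a 4-character type, and a payload. A minimal sketch, expressed in Python for brevity even though the question's muxer is JavaScript; the ftyp brands shown mirror what ffmpeg commonly writes and are an assumption here, not the question's actual values:

import struct

def box(box_type: bytes, payload: bytes) -> bytes:
    # 32-bit big-endian size (header included) + 4-char type + payload
    return struct.pack(">I", 8 + len(payload)) + box_type + payload

# Example ftyp: major brand, minor version, compatible brands.
ftyp = box(b"ftyp", b"isom" + struct.pack(">I", 0x200) + b"isomiso2avc1mp41")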



Plugging the video into ffprobe, I get an error similar to:


[mov,mp4,m4a,3gp,3g2,mj2 @ 0x558559198080] Failed to add index entry
 Last message repeated 368 times
[h264 @ 0x55855919b300] Invalid NAL unit size (-619501801 > 966).
[h264 @ 0x55855919b300] Error splitting the input into NAL units.




This led me on a massive hunt for data alignment issues or invalid byte offsets in my tfhd and trun atoms; however, no matter where I looked or how I sliced the data, I couldn't find any problems in the moof atom.


I then took the original FLV file and converted it to an MP4 in ffmpeg with the following command:


ffmpeg -i ~/Videos/rtmp/big_buck_bunny.flv -c copy -ss 5 -t 10 -movflags frag_keyframe+empty_moov+faststart test.mp4




I opened both the MP4 I was creating and the MP4 output by ffmpeg in an atom parsing file and compared the two:





The first thing that jumped out at me was that the ffmpeg-generated file has multiple video samples per moof. Specifically, every moof started with one key frame, then contained all difference frames until the next key frame (which was used as the start of the following moof atom).


Contrast this with how I'm generating my MP4. I create a moof atom every time an FLV VIDEODATA packet arrives. This means my moof may not contain a key frame (and usually doesn't).


Could this be why I'm having trouble? Or is there something else I'm missing?
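To make the contrast concrete, here is a sketch (again in Python purely for illustration) of grouping frames the way the ffmpeg output is described: buffer frames and only emit a moof/mdat pair when the next key frame arrives, so every fragment starts on a key frame. Frame and emit_fragment are hypothetical stand-ins for the muxer's own types:

from dataclasses import dataclass, field
from typing import List

@dataclass
class Frame:
    data: bytes
    is_keyframe: bool
    timestamp: int  # e.g. FLV tag timestamp in ms

@dataclass
class FragmentBuilder:
    # Group incoming frames so each moof begins with a key frame,
    # mirroring the layout described for the ffmpeg-generated file.
    pending: List[Frame] = field(default_factory=list)

    def push(self, frame: Frame, emit_fragment):
        # A new key frame closes the previous group: flush everything buffered
        # so far as one moof/mdat pair, then start the next fragment with it.
        if frame.is_keyframe and self.pending:
            emit_fragment(self.pending)  # hypothetical: writes moof + mdat
            self.pending = []
        self.pending.append(frame)

    def flush(self, emit_fragment):
        if self.pending:
            emit_fragment(self.pending)
            self.pending = []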



The video files in question can be downloaded here:






Another issue I noticed was ffmpeg's prolific use of base_data_offset in the tfhd atom. However, when I tried tracking the total number of bytes appended and setting the base_data_offset myself, I got an error in Chrome along the lines of "MSE doesn't support base_data_offset". Per the ISO/IEC 14496-12 spec:




If not provided, the base-data-offset for the first track in the movie fragment is the position of the first byte of the enclosing Movie Fragment Box, and for second and subsequent track fragments, the default is the end of the data defined by the preceding fragment.





This wording leads me to believe that the data_offset in the first trun atom should be equal to the size of the moof atom, and the data_offset in the second trun atom should be 0 (0 bytes from the end of the data defined by the preceding fragment). However, when I tried this I got an error that the video data couldn't be parsed. What did lead to data that could be parsed was the length of the moof atom plus the total length of the first track (as if the base offset were the first byte of the enclosing moof box, same as for the first track).
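A worked sketch of the offset arithmetic described above, with made-up byte counts, comparing the reading of the spec quoted earlier against what the question reports actually parsed. (As an aside, many muxers also add the 8-byte mdat header so data_offset lands on the first sample payload; the question doesn't go into that.)

# Illustrative byte counts only.
MOOF_SIZE = 1208                 # total size of the moof box
FIRST_TRACK_DATA_SIZE = 95000    # bytes of the first track's samples in mdat

# Reading of the quoted spec text (no base_data_offset written):
#   first trun:  measured from the first byte of the enclosing moof box
#   second trun: measured from the end of the preceding track's data
first_trun_offset_per_spec = MOOF_SIZE
second_trun_offset_per_spec = 0

# What the question reports the parser actually accepted: both tracks
# measured from the first byte of the enclosing moof box.
first_trun_offset_parsed = MOOF_SIZE
second_trun_offset_parsed = MOOF_SIZE + FIRST_TRACK_DATA_SIZE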