
Other articles (101)
- Media quality after processing
21 June 2013 — Properly configuring the software that processes media matters for striking a balance between the parties involved (the host's bandwidth, media quality for the editor and the visitor, accessibility for the visitor). How should you set the quality of your media?
The higher the media quality, the more bandwidth is used, and visitors on low-speed internet connections will have to wait longer. Conversely, the poorer the quality, the more degraded the media becomes (...)
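To make the trade-off concrete: MediaSPIP delegates media processing to encoders such as ffmpeg, so the quality setting ultimately maps onto encoder parameters along these lines (a hedged sketch; the file names and values are hypothetical, and a higher CRF yields a smaller but lower-quality file):
# smaller file, lower quality: kinder to the host's bandwidth and to slow connections
ffmpeg -i source.mov -c:v libx264 -crf 28 -preset veryfast -c:a aac -b:a 96k web_low.mp4
# larger file, higher quality: better for editors reviewing their own media
ffmpeg -i source.mov -c:v libx264 -crf 18 -preset slow -c:a aac -b:a 192k web_high.mp4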
- Depositing media and themes via FTP
31 May 2013 — The MediaSPIP tool also processes media transferred via FTP. If you prefer to deposit files this way, retrieve the access credentials for your MediaSPIP site and use your favourite FTP client.
From the start, you will find the following directories in your FTP space:
config/ : the site's configuration directory
IMG/ : media already processed and online on the site
local/ : the site's cache directory
themes/ : custom themes and stylesheets
tmp/ : working directory (...)
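As an illustration, depositing a custom theme from the command line could look like this hedged sketch (the host, user, and theme directory are placeholders for the credentials mentioned above):
# recursively upload a local theme directory into themes/ on the MediaSPIP site
lftp -u myuser ftp.example.org -e "mirror -R ./my-theme themes/my-theme; quit"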
- Installation prerequisites
31 January 2010 — Preamble
The purpose of this article is not to detail how to install these pieces of software, but rather to give information about their specific configuration.
First of all, SPIPMotion, like MediaSPIP, is designed to run on Debian-type Linux distributions or their derivatives (Ubuntu...). The documentation on this site therefore refers to these distributions. It can also be used on other Linux distributions, but correct operation cannot be guaranteed.
It (...)
On other sites (8056)
- Can anyone help merge these 2 FFmpeg commands?
7 June 2019, by WebDev — I have two commands which are part of a larger set of commands. Basically I need to merge these two into one command to speed things up. Can anyone help me, please?
ffmpeg -y -f concat -safe 0 -protocol_whitelist "file,http,https,tcp,tls" -i "photos.txt" -i "mainscreen.png" -i "audio.mp3" -filter_complex "scale=3840x2160,zoompan=z='if(lte(zoom,1.0),1.2,max(1.001,zoom-0.0015))':x='iw/2-(iw/zoom/2)':y='ih/2-(ih/zoom/2)':s=1280x720:fps=15:d=120, overlay=(main_w-overlay_w)/2:(main_h-overlay_h)/2, drawtext=fontfile=font.otf:text='%%~ni':fontcolor=black:fontsize=32:x=90:y=582" -preset veryfast -tune stillimage -shortest -pix_fmt yuv420p "slideshow.mp4"
ffmpeg -y -i "slideshow.mp4" -filter_complex "[0:a]showwaves=mode=cline:s=110x36:r=15:scale=sqrt:colors=0x222222,colorkey=0x000000:0.01:0.1,format=yuva420p[v];[0:v][v]overlay=77:444,scale=1280:720[outv]" -map "[outv]" -map 0:a -preset veryfast "done.mp4"
The first command creates a slideshow and adds text; the second then draws a showwaves effect onto the video. Thank you in advance.
UPDATE:
From Gyan's response, and after tinkering for a while, it kind of works how I needed it to. It does what I wanted, but it keeps throwing a "deprecated pixel format" error. Here's the updated command once I finished. Can you spot the problem? And is the command written properly?
ffmpeg -y -f concat -safe 0 -protocol_whitelist "file,http,https,tcp,tls" -i "images.txt" -i "screen.png" -i "audio.mp3" -filter_complex "scale=3840x2160,zoompan=z='if(lte(zoom,1.0),1.2,max(1.001,zoom-0.0015))':x='iw/2-(iw/zoom/2)':y='ih/2-(ih/zoom/2)':s=1280x720:fps=15:d=120, overlay=(main_w-overlay_w)/2:(main_h-overlay_h)/2, drawtext=fontfile=Assets/Fonts/font.otf:text='%%~ni':fontcolor=black:fontsize=32:x=90:y=582[v];[2:a]showwaves=mode=cline:s=110x36:r=15:scale=sqrt:colors=0x222222,colorkey=0x000000:0.01:0.1,format=yuva420p[w];[v][w]overlay=77:444,scale=1280:720[outv]" -map "[outv]" -map 2:a -c:v libx264 -c:a aac -preset veryfast -shortest -pix_fmt yuv420p "done.mp4"
SECOND UPDATE:
Thanks to Gyan (once again) for helping me better understand the command. Here is the final code, which does what I need:
ffmpeg -y -f concat -safe 0 -protocol_whitelist "file,http,https,tcp,tls" -i "images.txt" -i "screen.png" -i "tmp.audio.mp3" -filter_complex "[0]scale=3840x2160,zoompan=z='if(lte(zoom,1.0),1.2,max(1.001,zoom-0.0015))':x='iw/2-(iw/zoom/2)':y='ih/2-(ih/zoom/2)':s=1280x720:fps=15:d=120, overlay=(main_w-overlay_w)/2:(main_h-overlay_h)/2, drawtext=fontfile=font.otf:text='%%~ni':fontcolor=black:fontsize=32:x=90:y=582[v];[2:a]showwaves=mode=cline:s=110x36:r=15:scale=sqrt:colors=0x222222,colorkey=0x000000:0.01:0.1,format=yuva420p[w];[v][w]overlay=77:444,scale=1280:720[outv]" -map "[outv]" -map 2:a -preset veryfast -shortest -pix_fmt yuv420p "done.mp4"
The only change from Gyan's code is that I removed [p];[1][p] and replaced it with a comma to achieve what I needed. It seems to work perfectly now, ignoring the deprecated pixel format warning.
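For anyone hitting the same warning: "deprecated pixel format" usually means a full-range yuvj* frame (here most likely from the image inputs) is reaching the encoder. One commonly suggested workaround, sketched here against the tail of the command above, is to force the pixel format at the end of the filter graph:
...[v][w]overlay=77:444,scale=1280:720,format=yuv420p[outv]" -map "[outv]" -map 2:a -preset veryfast -shortest "done.mp4"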
FINAL UPDATE:
Gyan's updated code works perfectly; a massive thank you to you, sir. You also helped me understand your work, which was very helpful.
- Does a track run in a fragmented MP4 have to start with a key frame?
18 January 2021, by stevendesu — I'm ingesting an RTMP stream and converting it to a fragmented MP4 file in JavaScript. It took a week of work, but I'm almost finished with this task. I'm generating a valid ftyp atom, moov atom, and moof atom, and the first frame of the video actually plays (with audio) before it goes into infinite buffering with no errors listed in chrome://media-internals.

Plugging the video into ffprobe, I get an error similar to:


[mov,mp4,m4a,3gp,3g2,mj2 @ 0x558559198080] Failed to add index entry
 Last message repeated 368 times
[h264 @ 0x55855919b300] Invalid NAL unit size (-619501801 > 966).
[h264 @ 0x55855919b300] Error splitting the input into NAL units.

This led me on a massive hunt for data-alignment issues or invalid byte offsets in my tfhd and trun atoms; however, no matter where I looked or how I sliced the data, I couldn't find any problems in the moof atom.


I then took the original FLV file and converted it to an MP4 in ffmpeg with the following command:


ffmpeg -i ~/Videos/rtmp/big_buck_bunny.flv -c copy -ss 5 -t 10 -movflags frag_keyframe+empty_moov+faststart test.mp4

I opened both the MP4 I was creating and the MP4 output by ffmpeg in an atom parser and compared the two.

The first thing that jumped out at me was that the ffmpeg-generated file has multiple video samples per moof. Specifically, every moof started with one key frame, then contained difference frames until the next key frame (which was used as the start of the following moof atom).


Contrast this with how I'm generating my MP4: I create a moof atom every time an FLV VIDEODATA packet arrives. This means my moof may not contain a key frame (and usually doesn't).


Could this be why I'm having trouble? Or is there something else I'm missing?
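One hedged way to sanity-check that theory is to ask ffprobe where the key frames actually fall and compare those positions against your fragment boundaries (test.mp4 stands in for whichever file you are inspecting):
# one line per video frame: key_frame flag and picture type, e.g. "1,I" then "0,P" ...
ffprobe -v error -select_streams v:0 -show_entries frame=key_frame,pict_type -of csv=p=0 test.mp4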

The video files in question can be downloaded here:


Another issue I noticed was ffmpeg's prolific use of base_data_offset in the tfhd atom. However, when I tried tracking the total number of bytes appended and setting the base_data_offset myself, I got an error in Chrome along the lines of "MSE doesn't support base_data_offset". Per the ISO/IEC 14496-12 spec:

If not provided, the base-data-offset for the first track in the movie fragment is the position of the first byte of the enclosing Movie Fragment Box, and for second and subsequent track fragments, the default is the end of the data defined by the preceding fragment.

This wording leads me to believe that the data_offset in the first trun atom should be equal to the size of the moof atom, and the data_offset in the second trun atom should be 0 (0 bytes from the end of the data defined by the preceding fragment). However, when I tried this I got an error that the video data couldn't be parsed. What did lead to data that could be parsed was the length of the moof atom plus the total length of the first track (as if the base offset were the first byte of the enclosing moof box, the same as for the first track).
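For what it's worth, Chrome's MSE implementation expects movie-fragment-relative addressing, i.e. the default-base-is-moof tfhd flag (0x020000), under which every track's base offset is the first byte of the enclosing moof. A sketch of the resulting arithmetic, assuming a single mdat placed immediately after the moof:
# hedged arithmetic under default-base-is-moof:
#   base offset (both tracks)  = first byte of the moof
#   trun #1 data_offset        = size(moof) + 8                               (the +8 skips the mdat box header)
#   trun #2 data_offset        = size(moof) + 8 + size(track 1 sample data)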

- Zlib vs. XZ on 2SF
I recently released my Game Music Appreciation website. It allows users to play an enormous range of video game music directly in their browsers. To do this, the site has to host the music. And since I’m a compression bore, I have to know how small I can practically make these music files. I already published the results of my effort to see if XZ could beat RAR (RAR won, but only slightly, and I still went with XZ for the project) on the corpus of Super Nintendo chiptune sets. Next is the corpus of Nintendo DS chiptunes.
Repacking Nintendo DS 2SF
The prevailing chiptune format for storing Nintendo DS songs is the .2sf format, a subtype of the Portable Sound Format (PSF). The designers had the foresight to build compression directly into the format: much of the payload data in a PSF file is compressed with zlib. Since I had already incorporated Embedded XZ into the player project, I decided to try repacking the PSF payload data from zlib to xz. In an effort not to corrupt the standard too much, I changed the 'PSF' file signature (seen in the first 3 bytes of a file) to 'psf'.
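Mechanically, once a payload blob has been split out of its PSF container, the repack is a straightforward recompression. A hedged sketch using zlib-flate (shipped with qpdf) and the xz tool, where payload.z is a hypothetical extracted payload:
# inflate the zlib-packed payload, then repack it with xz at maximum compression
zlib-flate -uncompress < payload.z > payload.raw
xz -9 payload.raw    # replaces payload.raw with payload.raw.xz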
Results
There are about 900 Nintendo DS games currently represented in my website's archive. Total size of the original PSF archive, payloads packed with zlib: 2.992 GB. Total size of the same archive with payloads packed as xz: 2.059 GB. Using xz vs. zlib saved me nearly a gigabyte of storage. That extra storage doesn't really impact my hosting plan very much (I have 1/2 TB, which is why I'm so nonchalant about hosting the massive MPlayer Samples Archive). However, smaller individual files translate to a better user experience, since the files are faster to download.
Here is a pretty picture to illustrate the space savings:
The blue line occasionally appears to dip below the orange one, but the data indicates that xz is always more efficient than zlib. Here's the raw data (it comes in vanilla CSV flavor too).
Interface Impact
So the good news for the end user is that the songs are faster to load up front. The downside is that there can be a noticeable delay when changing tracks. Even though all songs are packaged into one file for download, and the entire file is downloaded before playback begins, each song is individually compressed; thus, changing tracks triggers another decompression operation. I'm toying with the possibility of some sort of background process that decompresses song (n+1) while playing song (n), in order to help compensate for this. I don't like the idea of decompressing everything up front because A) it would take even longer to start playing, and B) it would take a huge amount of memory.
Corner Case
There was at least one case in which I found zlib to be better than xz: it looks like zlib's minimum block size is smaller than xz's. I found xz unable to compress a few bytes to a block any smaller than about 60-64 bytes, while zlib got it down into the teens. However, in those cases it was more efficient to just leave the data uncompressed anyway.
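The framing overhead is easy to observe from a shell. A rough sketch (gzip uses the same DEFLATE algorithm as zlib, though its container adds a few more header bytes than a raw zlib stream would):
# a 5-byte input: the .xz output lands around the 60-byte mark,
# while the DEFLATE-based output stays far smaller
printf 'hello' | xz | wc -c
printf 'hello' | gzip | wc -c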