
Media (91)
-
Richard Stallman and free software
19 October 2011, by
Updated: May 2013
Language: French
Type: Text
-
Stereo master soundtrack
17 October 2011, by
Updated: October 2011
Language: English
Type: Audio
-
Elephants Dream - Cover of the soundtrack
17 October 2011, by
Updated: October 2011
Language: English
Type: Image
-
#7 Ambience
16 October 2011, by
Updated: June 2015
Language: English
Type: Audio
-
#6 Teaser Music
16 October 2011, by
Updated: February 2013
Language: English
Type: Audio
-
#5 End Title
16 October 2011, by
Updated: February 2013
Language: English
Type: Audio
Other articles (86)
-
The sounds
15 May 2013, by -
Submit bugs and patches
10 April 2011
Unfortunately, no piece of software is ever perfect...
If you think you have found a bug, report it in our ticket system, taking care to pass along the relevant information: the type and exact version of the browser with which you encountered the anomaly; as precise a description as possible of the problem; if possible, the steps to reproduce it; a link to the site / page in question;
If you think you have fixed the bug yourself (...) -
Contribute to a better visual interface
13 April 2011
MediaSPIP is based on a system of themes and templates. Templates define the placement of information on the page, and can be adapted to a wide range of uses. Themes define the overall graphic appearance of the site.
Anyone can submit a new graphic theme or template and make it available to the MediaSPIP community.
On other sites (6171)
-
For modifying MPEG-2 Part 4 video, which is the easiest library/approach I can use?
17 December 2015, by liamzebedee
I'm trying to implement a video watermarking system that modifies a subset of individual pixels (i.e. the RGB values at sets of x,y coordinates). The base use case would be modifying an MP4, which means modifying the contained MPEG-2 Part 4 video stream. I've done some research and found that it isn't as simple as just modifying the raw frames, since the ubiquitous P-frames and B-frames compress the output by storing only the differences between frames.
I'm relatively technology-agnostic; I just want to find a solution. Which library/framework should I use (ffmpeg seems the likely choice for now), and which approach do I take?
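As an illustration only (not something taken from the original post), here is how the decode, modify, re-encode route looks with stock ffmpeg command-line options. The file names, the frames/ directory and the 25 fps frame rate are assumptions, and because the last step re-encodes the video, the per-pixel watermark has to be robust enough to survive the lossy compression the poster is worried about.

# 1) Decode the assumed input.mp4 into one PNG per frame (no frame dropping or duplication)
ffmpeg -i input.mp4 -vsync 0 frames/frame_%06d.png

# 2) Modify the RGB values at the chosen (x, y) positions in each PNG
#    with a tool of your own (not shown here).

# 3) Re-encode the modified frames and copy the original audio track (if any) back in
ffmpeg -framerate 25 -i frames/frame_%06d.png -i input.mp4 -map 0:v -map 1:a -c:v libx264 -pix_fmt yuv420p -c:a copy output.mp4

The same decode/modify/encode loop can also be done in-process with the libavformat/libavcodec libraries that the ffmpeg tool is built on, which avoids the intermediate image files.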
-
Player downloads all moof parts of a fragmented MP4 file before playing
28 March 2023, by happyz90
I converted an MP4 to a fragmented MP4 with the following command:


ffmpeg -i ./input.mp4 -y -b:a 32k -vcodec libx265 -b:v 320k -bufsize 320k -tag:v hvc1 -color_primaries bt709 -color_trc bt709 -colorspace bt709 -c:a libfdk_aac -profile:a aac_he_v2 -movflags faststart+frag_keyframe+empty_moov+dash+global_sidx -chunk_duration 500000000 -max_muxing_queue_size 1024 ./output.mp4



But when playing, the player sends many HTTP requests to download all the moof parts:

[screenshot of the network requests omitted]

But I think that in the normal case the player should only need to download the head part of the video file to start playing.

So is there anything wrong with my ffmpeg parameters? Please help me. Thank you.
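For context, and as my reading rather than part of the original question: with +frag_keyframe+dash+global_sidx the output is split into many moof/mdat fragments indexed by a global sidx box, so a player that parses that index may quite reasonably issue one range request per fragment. If plain progressive playback is the goal, a hedged, untested alternative is to skip fragmentation altogether and only move the moov box to the front of the file; the sketch below keeps the video settings from the question but swaps libfdk_aac (present only in some builds) for the native aac encoder.

# An assumption, not a verified fix: write an ordinary (non-fragmented) MP4 and
# relocate the moov box to the start so playback can begin after fetching the head of the file.
ffmpeg -i ./input.mp4 -c:v libx265 -b:v 320k -bufsize 320k -tag:v hvc1 -c:a aac -b:a 32k -movflags +faststart ./output.mp4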


-
Download a part of a YouTube video using a PowerShell script
26 October 2024, by Nguyễn Đức Minh
I'm writing this PowerShell script:


$URL = "https://www.youtube.com/watch?v=KbuwueqEJL0"
$from = 00:06:15
$to = 00:09:17

$cmdOutput = (youtube-dl --get-url $URL) 

ffmpeg -ss $from -to $to -i <video stream URL> -ss $from -to $to -i <audio stream URL> output.mkv



This script's purpose is to download a part of a YouTube video. I've set the variable $URL to specify the YouTube URL, while $from and $to are the start and end times of the part I want to download.

$cmdOutput is used to capture the stream URLs. The output has two lines: the first one is the URL of the video stream, while the second one is the URL of the audio stream.

Currently, I don't know how to use the output as a variable and specify the line number of $cmdOutput to feed each URL into the correct input. I guess the video and audio stream URL placeholders would be replaced by something like $cmdOutput[line 1] and $cmdOutput[line 2], though I know that those are incorrect.

I've consulted this answer, and it was handy for writing this script. I've also read Boris Lipschitz's answer on how to do the same thing with Python, but his answer does not work.


In that script, the -ss flag sets the seeking point, and the -t <duration> flag tells FFmpeg to stop encoding after the specified duration. For example, if the start time is 00:02:00 and the duration is 00:03:00, FFmpeg would download from 00:02:00 to 00:05:00, which is not the expected outcome. For some reason, his Python script skips the first 5 seconds of output, even if I replace the -t flag with -to. I've tried to edit his script, but it does not work unless you explicitly specify the time for both the video and audio streams, as well as their respective stream URLs.
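As a hedged sketch of one way to wire this up (not a tested answer): when the output of youtube-dl --get-url is captured into a variable, PowerShell stores the lines as an array of strings, so the two URLs can be indexed with [0] and [1]. The quoted timestamps and the $videoUrl/$audioUrl variable names are my own additions, and the final command assumes an ffmpeg recent enough to accept -to as an input option, as the original snippet already does.

# Assumed sketch: index the two stream URLs printed by youtube-dl and pass them to ffmpeg.
$URL  = "https://www.youtube.com/watch?v=KbuwueqEJL0"
$from = "00:06:15"
$to   = "00:09:17"

# First line: video stream URL; second line: audio stream URL (as described above).
$cmdOutput = (youtube-dl --get-url $URL)
$videoUrl  = $cmdOutput[0]
$audioUrl  = $cmdOutput[1]

# Seek both inputs to the same window and mux the clipped streams into output.mkv.
ffmpeg -ss $from -to $to -i $videoUrl -ss $from -to $to -i $audioUrl output.mkv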