
Media (1)
-
Revolution of Open-source and film making towards open film making
6 October 2011, by
Updated: July 2013
Language: English
Type: Text
Other articles (81)
-
MediaSPIP 0.1 Beta version
25 April 2011, by
MediaSPIP 0.1 beta is the first version of MediaSPIP proclaimed as "usable".
The zip file provided here only contains the sources of MediaSPIP in its standalone version.
To get a working installation, you must manually install all software dependencies on the server.
If you want to use this archive for an installation in "farm mode", you will also need to proceed to other manual (...) -
MediaSPIP version 0.1 Beta
16 April 2011, by
MediaSPIP 0.1 beta is the first version of MediaSPIP declared "usable".
The zip file provided here only contains the sources of MediaSPIP in its standalone version.
To get a working installation, you must manually install all the software dependencies on the server.
If you want to use this archive for an installation in "farm mode", you will also need to make other modifications (...) -
User profiles
12 April 2011, by
Each user has a profile page allowing them to edit their personal information. In the default top-of-page menu, a menu item is automatically created when MediaSPIP is initialized, visible only when the visitor is logged in to the site.
Users can edit their profile from their author page; a "Modifier votre profil" link in the navigation is (...)
On other sites (8413)
-
Embedding "Dolby Digital Plus with Dolby Atmos" into MP4 using ffmpeg
8 March, by Krystian
How can I encode Dolby Atmos into an MP4? I am using this command:
ffmpeg -i input.mp4 -i input.ec3 -map 0:v -map 1:a -c:v copy -c:a copy -disposition:a default -metadata:s:a:0 complexity_index=16 -metadata:s:a:0 title="Dolby Atmos" output_test.mp4
I need to carry over every piece of data that is inside this EC3 file. I have noticed that when I run this command, the MP4 file contains the actual Dolby Atmos stream but is missing the complexity index. The raw EC3 file has its Complexity Index set to 16, but after muxing into the MP4 the complexity index is not present, and the MP4 is not identified as a video with a proper Dolby Atmos stream.

I expected that after using this command the complexity index would be carried over correctly. I don't want to use MKV, but it looks like I will have to.
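Worth noting when debugging this: a generic `-metadata:s:a:0 key=value` pair is written as a container-level tag, not into the codec-specific `dec3` box where E-AC-3/JOC signalling such as the complexity index lives, so a tag named `complexity_index` will most likely not make the file register as Atmos. A quick way to see what actually reached the container (a diagnostic sketch; `output_test.mp4` is the file produced by the command above):

```shell
# Show the audio stream's codec and whatever tags the muxer actually wrote.
# Atmos/JOC signalling, if present, comes from the dec3 box rather than
# from these generic tags.
ffprobe -v error -select_streams a:0 \
  -show_entries stream=codec_name:stream_tags \
  output_test.mp4
```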


-
Reading colors encoded in an image at a position changes their values after decoding from video using PHP and ffmpeg
29 November 2022, by Jeenus Junanio
I wrote some code to encode a unique color into an image, and converted the image to PNG so that it would be lossless. I then created a video from that single frame with ffmpeg, called from PHP via shell_exec(). After saving the video I reopened it, extracted the frame image, and tried to read the encoded values back. The values in the extracted image are slightly changed.


Here is the code I used to create the video:


$canvas = imagecreatefromjpeg('translate/first_frame.jpg');

// create a random color
$rand = str_pad(dechex(rand(0x000000, 0xFFFFFF)), 6, '0', STR_PAD_LEFT);
$dec_color = hexdec($rand);

// write the color into the first 24 pixels of row 0
for ($i = 0; $i < 24; $i++) {
    imagesetpixel($canvas, $i, 0, $dec_color);
}

// store the image and close the opened file

// $filename = 'translate/test/output.png';
$filename = 'translate/test/output.bmp';

// imagepng($canvas, $filename);
imagebmp($canvas, $filename);

imagedestroy($canvas);

$frame  = $filename; // an image (png, bmp, etc.)
$audio  = 'translate/output/audio/abcdefghijklmnopqrstuvwxya.mp3';
$output = 'translate/output/video/'.time().'.mp4';

$cmd = 'ffmpeg -loop 1 -y -i '.$frame.' -i '.$audio.' -c:v libx264 -tune stillimage -c:a copy -shortest '.$output;
shell_exec($cmd);



The code above creates the video from the image.


Now when I extract the frame from the video and read the colors back from the image, the colors are slightly changed.


if ($request->hasFile('video')) {
    $file = $request->file('video');
    $filename = $file->getClientOriginalName();
    $path = public_path('translate/test/');
} else {
    return 'No file uploaded';
}

if ($file->move($path, $filename)) {
    $video = 'translate/test/'.$filename;
} else {
    return 'error file upload';
}

// $output = 'translate/output/image/'.time().'.png';
$output = 'translate/output/image/'.time().'.bmp';
// $output = 'translate/output/image/'.time().'.jpg';

$cmd = 'ffmpeg -i '.$video.' -vframes 1 '.$output;
shell_exec($cmd);

// $dimg = imagecreatefrompng($output);
$dimg = imagecreatefrombmp($output);
// $dimg = imagecreatefromjpeg($output);

$extracted_color = array();

for ($x = 0; $x < 24; $x++) {
    $extracted_color[] = imagecolorat($dimg, $x, 0);
}

echo "<br />Retrieved colors:<pre>".print_r($extracted_color, 1)."</pre>";

imagedestroy($dimg);






The color added was 44743072, but the colors retrieved are 4539914, 4474121, 4408072, 4408326, ... from x=0,y=0 to x=23,y=0.


With both PNG and BMP I am losing the added pixel values. You can see in the code that the PNG path is commented out and the image is read back as BMP.


Can someone let me know if I am missing anything here?
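One likely culprit is independent of the PNG/BMP choice: the default `libx264` encode converts the frame to yuv420p (lossy chroma subsampling) and then quantizes it, so exact RGB values cannot survive the round trip no matter how lossless the input and output images are. A sketch of a lossless alternative (`frame.bmp`, `audio.mp3`, and `out.mp4` are placeholder names for the paths in the question; `libx264rgb` with `-qp 0` encodes RGB losslessly, though player support for the resulting High 4:4:4 stream in MP4 is limited):

```shell
# Lossless RGB H.264: no RGB->YUV conversion and no quantization, so the
# 24 encoded pixels should survive bit-exact. Extract the frame as BMP or
# PNG (not JPEG) on the way back out, as the question already does.
ffmpeg -loop 1 -y -i frame.bmp -i audio.mp3 \
  -c:v libx264rgb -qp 0 -c:a copy -shortest out.mp4
```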


-
How does ffmpeg divide into frames EXACTLY
6 December 2022, by JFCorleone
I'm using this piece of code in Python to split a video into frames.


def ffmpeg(self, video_file, fps, start_number, **trim_kwargs):
    ffmpeg.input(video_file) \
        .filter('fps', fps=fps) \
        .trim(**trim_kwargs) \
        .output(os.path.join(self._output_dir, f"%0{NAME_PADDING}d.JPG"),
                **{'qscale:v': 1, 'vsync': 'drop', 'start_number': start_number}) \
        .run()



I sometimes also use trimming options, more or less like this:


ffmpeg(video_file, fps, 0, start=XXX,end=YYY)



Additionally, I have a list of timestamps (starting from point zero) with some additional metadata at certain points. I'm trying to figure out the mechanics ffmpeg uses when dividing a video into frames at a given fps (for example fps=1), because when I step through my timestamped log manually at the same "fps", I often get one fewer entry than ffmpeg produces frames. It's as if ffmpeg always takes the first and last frame. Can someone explain exactly how this is done, so I can match my metadata to the generated frames?
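For reference, ffmpeg's fps filter places output frames on a fixed 1/fps grid anchored at the first input timestamp, duplicating or dropping input frames so there is exactly one output frame per grid point covered by the input. In particular, a frame at t=0 is always emitted, which is the usual source of an off-by-one against a log that starts counting at t=1/fps. A small model of that behavior (a sketch, not ffmpeg's actual code; it assumes the first input PTS is 0 and the filter's default rounding):

```python
import math

def fps_filter_timestamps(last_input_pts: float, fps: float) -> list[float]:
    """Approximate the output timestamps of ffmpeg's fps filter.

    Output frames sit on a fixed 1/fps grid starting at the first input
    timestamp (assumed 0 here); every grid point up to and including the
    last input frame's timestamp gets one output frame.
    """
    n = math.floor(last_input_pts * fps) + 1  # grid points covered by input
    return [i / fps for i in range(n)]
```

For example, with fps=1 and a clip whose last frame sits at t=9.96s, this predicts frames at 0, 1, ..., 9 — ten frames, one more than a timestamp log that begins at t=1.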