
Media (1)
-
Richard Stallman and free software
19 October 2011
Updated: May 2013
Language: French
Type: Text
Other articles (89)
-
MediaSPIP 0.1 Beta version
25 April 2011. MediaSPIP 0.1 beta is the first version of MediaSPIP proclaimed as "usable".
The zip file provided here only contains the sources of MediaSPIP in its standalone version.
To get a working installation, you must manually install all software dependencies on the server.
If you want to use this archive for an installation in "farm mode", you will also need to proceed to other manual (...)
-
Improving the base version
13 September 2013. Nicer multiple selection
The Chosen plugin improves the ergonomics of multiple-selection fields. See the two following images for a comparison.
All you need to do is enable the Chosen plugin (Site general configuration > Plugin management), then configure it (Templates > Chosen) by enabling the use of Chosen on the public site and specifying the form elements to improve, for example select[multiple] for multiple-selection lists (...)
-
Updating from version 0.1 to 0.2
24 June 2013. Explanation of the notable changes when moving from MediaSPIP version 0.1 to version 0.3. What is new?
Software dependencies: use of the latest FFMpeg versions (>= v1.2.1); installation of the dependencies for Smush; installation of MediaInfo and FFprobe to retrieve metadata; ffmpeg2theora is no longer used; flvtool2 is no longer installed, in favour of flvtool++; ffmpeg-php, which is no longer maintained, is no longer installed (...)
On other sites (14552)
-
multichannel (Dolby Digital, DTS, etc.) delay difference on original master
17 January 2017, by esposito. I have several FLAC files with 6/8 channels (the audio of a movie), and I need to use ffmpeg to mix part of the rear channels into the front channels, and sometimes vice versa.
The operation itself is not a problem; it works with automated software.
But I am sure there is a phase difference between the front channels and the rear channels, and I would like to add a delay to compensate for it (when listening, it is clear there is a phase offset and it degrades the result).
I have no idea how many milliseconds I need to add to compensate, on which channel, or whether the difference is negative or positive.
Can someone help me understand approximately what the range of delay is for each channel (a table would be very useful)?
Thank you!
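For reference, a per-channel delay can be applied with ffmpeg by splitting the channel layout, delaying selected channels and joining them back. The sketch below is only an illustration: input.flac, the 5.1 layout and the 15 ms value are assumptions and would have to be adapted to the actual measured offset.

# Split a 5.1 track, delay the two rear channels by a placeholder 15 ms, and rejoin
ffmpeg -i input.flac -filter_complex \
  "channelsplit=channel_layout=5.1[FL][FR][FC][LFE][BL][BR]; \
   [BL]adelay=15[BLd]; [BR]adelay=15[BRd]; \
   [FL][FR][FC][LFE][BLd][BRd]join=inputs=6:channel_layout=5.1[out]" \
  -map "[out]" output.flac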
-
Why JPEG-compressing an uncompressed image differs from its original (FFmpeg, NvJPEG, ...)
22 June 2021, by Fruchtzwerg. I am currently struggling to understand why recompressing an uncompressed JPEG image differs from its original.


It's clear that JPEG is a lossy compression, but what if the image to compress is already a decompressed JPEG, which means all sampling losses are already included? In other words: downsampling and the DCT should be invertible at this point without losing data.




To make sure the losses are not affected by the color space conversion, this step is skipped and YUV images are used directly.


- Compress the YUV image to JPEG (image.yuv —> image.yuv.jpg)
- Uncompress the JPEG image to YUV (image.yuv.jpg —> image.yuv.jpg.yuv)
- Compress the YUV image to JPEG (image.yuv.jpg.yuv —> image.yuv.jpg.yuv.jpg)
- Uncompress the JPEG image to YUV (image.yuv.jpg.yuv.jpg —> image.yuv.jpg.yuv.jpg.yuv)

Step 1 involves a lossy compression, so we will not deal with that step any further. What is interesting to me is what happens afterwards:


Uncompressing the JPEG image back to YUV (step 2) leads to an image that already fits all the sampling steps if it is compressed again (step 3). So the JPEG image after step 3 should (from my understanding) be exactly the same as after step 1. Likewise, the YUV images after step 4 and step 2 should be equal to each other.


Looking at the steps for one 8x8 block, the following simplified sequence should illustrate what I am trying to describe. Let's start with the original YUV image, which can only be decompressed while losing all decimal places:


[ 1.123, 2.345, 3.456, ... ] (YUV)
 DCT + Quantization
[ -26, -3, -6, ... ] (Quantized frequency space)
 Inverse Quantization + Inverse DCT
[ 1, 2, 3, ... ] (YUV)



Doing this with input that already matches all the steps that could lose data (using whole numbers in my example), the decompressed image should match its original:


[ 1, 2, 3, ... ] (YUV)
 DCT + Quantization
[ -26, -3, -6, ... ] (Quantized frequency space)
 Inverse Quantization + Inverse DCT
[ 1, 2, 3, ... ] (YUV)



There are also some sources and discussions that confirm my idea:


- need help creating Jpeg Generational Degradation code
- What factors cause or prevent “generational loss” when JPEGs are recompressed multiple times?
- Lossless Chroma Subsampling


So much for theory. In practice, I've run these steps using ffmpeg and Nvidia's JPEG samples (using NvJPEGEncoder).


ffmpeg :


#Create YUV image
ffmpeg -y -i image.jpg -s 1920x1080 -pix_fmt yuv420p image.yuv
#YUV to JPEG
ffmpeg -y -s 1920x1080 -pix_fmt yuv420p -i image.yuv image.yuv.jpg
#JPEG TO YUV
ffmpeg -y -i image.yuv.jpg -s 1920x1080 -pix_fmt yuv420p image.yuv.jpg.yuv
#YUV to JPEG
ffmpeg -y -s 1920x1080 -pix_fmt yuv420p -i image.yuv.jpg.yuv image.yuv.jpg.yuv.jpg
#JPEG TO YUV
ffmpeg -y -i image.yuv.jpg.yuv.jpg -s 1920x1080 -pix_fmt yuv420p image.yuv.jpg.yuv.jpg.yuv
#YUV to JPEG
ffmpeg -y -s 1920x1080 -pix_fmt yuv420p -i image.yuv.jpg.yuv.jpg.yuv image.yuv.jpg.yuv.jpg.yuv.jpg



Nvidia :


#Create YUV image
./jpeg_decode num_files 1 image.jpg image.yuv
#YUV to JPEG
./jpeg_encode image.yuv 1920 1080 image.yuv.jpg
#JPEG TO YUV
./jpeg_decode num_files 1 image.yuv.jpg image.yuv.jpg.yuv
#YUV to JPEG
./jpeg_encode image.yuv.jpg.yuv 1920 1080 image.yuv.jpg.yuv.jpg
#JPEG TO YUV
./jpeg_decode num_files 1 image.yuv.jpg.yuv.jpg image.yuv.jpg.yuv.jpg.yuv
#YUV to JPEG
./jpeg_encode image.yuv.jpg.yuv.jpg.yuv 1920 1080 image.yuv.jpg.yuv.jpg.yuv.jpg



But a comparison of the images


- image.yuv.jpg.yuv and image.yuv.jpg.yuv.jpg.yuv
- image.yuv.jpg.yuv.jpg and image.yuv.jpg.yuv.jpg.yuv.jpg


shows differences in the files. That brings me to my question: why and where does the difference arise, since from my understanding the files should be equal?
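To narrow down where the generations diverge, one could compare the intermediate files directly; for example with a byte-for-byte check and a PSNR measurement in ffmpeg (file names as generated by the commands above, and PSNR is just one possible metric):

# Byte-for-byte comparison of two YUV generations
cmp image.yuv.jpg.yuv image.yuv.jpg.yuv.jpg.yuv
# Quantify the difference between the two generations with the PSNR filter
ffmpeg -s 1920x1080 -pix_fmt yuv420p -i image.yuv.jpg.yuv \
       -s 1920x1080 -pix_fmt yuv420p -i image.yuv.jpg.yuv.jpg.yuv \
       -filter_complex psnr -f null -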


-
How To Get Original Audio Settings From Video
13 April 2017, by Orxan Abdullazadə. In my code I extract frames from a video and then make a new video from those frames, but my problem is adding AUDIO.
The DLL (Accord.Video.FFMPEG.dll) has an add-frame method with two options: the first option makes a video from frames only (soundless); the second option adds frames together with audio. I am using this call (see the full second snippet further below):
writer.Open(CurrentProject + @"\\Video\\Prepared.mp4", width, height, 25, VideoCodec.Default, videoBitRate, audioCodec, audioBitRate, sampleRate, channels);
My problem with the second option is how to find out my original video's
- AudioCodec ID (e.g. how to check whether it is wav, mp3 or aac, etc.?)
- Audio BitRate (e.g. how to check whether it is 128 kbps?)
- Sample Rate (e.g. how to check whether it is 44100 Hz?)
- Channel count (e.g. how to check whether it is one or two?)
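For reference, these properties can usually be read from the source file with ffprobe (shipped with ffmpeg); the sketch below assumes the source video is move.mp4, as in the code further down:

# Print codec, bit rate, sample rate and channel count of the first audio stream
ffprobe -v error -select_streams a:0 \
        -show_entries stream=codec_name,bit_rate,sample_rate,channels \
        -of default=noprint_wrappers=1 move.mp4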
Here is my code. The first snippet is the frame extraction (works OK). The second snippet is adding frames (trying to include sound).
(1) Extracting video frames (working fine):
string CurrentProject= Path.GetDirectoryName(Path.GetDirectoryName(System.IO.Directory.GetCurrentDirectory()));
DirectoryInfo dir = new DirectoryInfo(CurrentProject + "\\Frames\\");
foreach (FileInfo fi in dir.GetFiles())
{
fi.IsReadOnly = false;
fi.Delete();
}
VideoFileReader reader = new VideoFileReader();
string FrameAdres = CurrentProject+ "\\Frames\\";
string VideoAdress = "move.mp4";
reader.Open(VideoAdress);
FrameCount= ((int)axWindowsMediaPlayer1.currentMedia.duration * reader.FrameRate);
for (int i = 0; i < FrameCount; i++)
{
Bitmap videoFrame = reader.ReadVideoFrame();
videoFrame.Save(FrameAdres + i + ".png");
videoFrame.Dispose();
}
reader.Close();
reader.Dispose();
DirectoryInfo dir2 = new DirectoryInfo(CurrentProject + "\\Sound\\");
foreach (FileInfo fi in dir2.GetFiles())
{
fi.IsReadOnly = false;
fi.Delete();
}
File.Copy(VideoAdress, CurrentProject + "\\Sound\\Sound.mp3");
(2) Making the video from frames (problem: adding audio-specific settings):
string CurrentProject= Path.GetDirectoryName(Path.GetDirectoryName(System.IO.Directory.GetCurrentDirectory()));
int width=0;
int height=0;
if (File.Exists(CurrentProject+ @"\\Frames\\" + "0.png"))
{
width = Image.FromFile(CurrentProject+ @"\\Frames\\" + "0.png").Width;
height = Image.FromFile(CurrentProject+ @"\\Frames\\" + "0.png").Height;
Accord.Video.FFMPEG.VideoFileWriter writer = new Accord.Video.FFMPEG.VideoFileWriter();
// Placeholder arguments: the original audio codec, bit rates, sample rate and channel count are the unknowns being asked about
writer.Open(CurrentProject + @"\\Video\\Prepared.mp4", width, height, 25, VideoCodec.Default, videoBitRate, audioCodec, audioBitRate, sampleRate, channels);
byte[] Sound = File.ReadAllBytes(CurrentProject + "\\Sound\\PreparedSound.mp3");
writer.WriteAudioFrame(Sound);
Bitmap image = new Bitmap(width, height, PixelFormat.Format24bppRgb);
DirectoryInfo dir = new DirectoryInfo(CurrentProject+ "\\Frames\\");
int FrameCount= dir.GetFiles().Length;
for (int i = 0; i < FrameCount; i++)
{
image = (Bitmap)Image.FromFile(CurrentProject + "\\Frames\\" + i.ToString() + ".png");
writer.WriteVideoFrame(image);
}
writer.Close();
writer.Dispose();
}
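As a side note, if the goal is simply to attach the original audio track to the video built from the frames, the audio could also be muxed in afterwards with ffmpeg instead of writing audio frames through Accord; the command below is only a sketch, and the file names (Prepared.mp4, Sound.mp3, PreparedWithAudio.mp4) are assumptions based on the paths used above:

# Keep the generated video stream as-is and add the original audio track
ffmpeg -i Prepared.mp4 -i Sound.mp3 -c:v copy -c:a aac -shortest PreparedWithAudio.mp4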