
Media (91)
-
Head down (wav version)
26 September 2011, by
Updated: April 2013
Language: English
Type: Audio
-
Echoplex (wav version)
26 September 2011, by
Updated: April 2013
Language: English
Type: Audio
-
Discipline (wav version)
26 September 2011, by
Updated: April 2013
Language: English
Type: Audio
-
Letting you (wav version)
26 September 2011, by
Updated: April 2013
Language: English
Type: Audio
-
1 000 000 (wav version)
26 September 2011, by
Updated: April 2013
Language: English
Type: Audio
-
999 999 (wav version)
26 September 2011, by
Updated: April 2013
Language: English
Type: Audio
Other articles (35)
-
Authorisations overloaded by plugins
27 April 2010, by Mediaspip core
autoriser_auteur_modifier() so that visitors are able to modify their information on the authors page
-
Support for all media types
10 April 2011
Unlike many modern document-sharing applications and platforms, MediaSPIP aims to handle as many different document formats as possible, whether they are: images (png, gif, jpg, bmp and others...); audio (MP3, Ogg, Wav and others...); video (Avi, MP4, Ogv, mpg, mov, wmv and others...); or textual content, code and more (OpenOffice, Microsoft Office (spreadsheet, presentation), web (html, css), LaTeX, Google Earth) (...)
-
Keeping control of your media in your hands
13 April 2011, by
The vocabulary used on this site, and around MediaSPIP in general, aims to avoid references to Web 2.0 and the companies that profit from media-sharing.
While using MediaSPIP, you are invited to avoid using words like "Brand", "Cloud" and "Market".
MediaSPIP is designed to facilitate the sharing of creative media online, while allowing authors to retain complete control of their work.
MediaSPIP aims to be accessible to as many people as possible and development is based on expanding the (...)
On other sites (10171)
-
How to correctly show video with transparency in Qt with OpenCV + FFmpeg
11 April 2022, by TheEnigmist
I'm trying to show a video with transparency in a Qt6 application using OpenCV + FFmpeg. These are the tool versions:


- Win 11
- Qt 6.3.0
- OpenCV 4.5.5 (built with CMake)
- FFMPEG 2022-04-03-git-1291568c98-full_build-www.gyan.dev

I used a .mov video with transparency as a test (link provided below). First I converted the .mov video to a .webm (VP9) video, and the ffmpeg output shows that the alpha channel is kept:

ffmpeg -i '.\Retro Bars.mov' -c:v libvpx-vp9 -crf 30 -b:v 0 output.webm

Input #0, mov,mp4,m4a,3gp,3g2,mj2,
 ...
 Stream #0:0[0x1](eng): Video: qtrle (rle / 0x20656C72), argb(progressive),
 ...

Output #0, webm, 
 ...
 Stream #0:0(eng): Video: vp9, yuva420p(tv, progressive),
 ...



But when I inspect the output file with ffmpeg, the alpha channel appears to be lost:

ffmpeg -i .\output.webm

Input #0, matroska,webm,
 ...
 Stream #0:0(eng): Video: vp9 (Profile 0), yuv420p(tv, progressive),
 ...
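
A likely explanation (well-known FFmpeg behaviour, but worth verifying on your build): FFmpeg's native vp9 decoder ignores the alpha plane, which WebM stores as side data, so probing reports yuv420p even when alpha is still present. Forcing the libvpx-vp9 decoder on the input side should report yuva420p again:

ffmpeg -c:v libvpx-vp9 -i .\output.webm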



If I open output.webm in OBS, it is shown correctly without a background, as shown in the picture:



If I open it with OpenCV + FFmpeg, it shows a black background under the bars, as shown in the picture:



This is how I load the video in Qt:


cv::VideoCapture capture;
capture.open(filename, cv::CAP_FFMPEG);
capture.set(cv::CAP_PROP_CONVERT_RGB, false); // try forcing load alpha channel
... //in a thread
while (capture.read(frame)) {
 qDebug() << "c" << frame.channels() << "t" << frame.type() << "d" << frame.depth(); // output: c 3 t 16 d 0
 cv::cvtColor(frame, frame, cv::COLOR_BGR2RGBA); //useless since no alpha channel is detected
 img = QImage(frame.data, frame.cols, frame.rows, QImage::Format_RGBA8888);
 emit processedImage(img); // to show image in a QLabel with QPixmap::fromImage(img)
}



I think the problem is that OpenCV doesn't detect the alpha channel when it loads the video, since the same file loads correctly in other players (OBS, HTML5, etc.); a possible workaround is sketched after the edit note below.


What am I doing wrong in this process of showing the video in Qt with transparency?


EDIT: Added a Dropbox link with the test video + ffmpeg outputs:
sample items
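
Since OpenCV's FFmpeg backend is not documented to deliver a fourth channel from VideoCapture (CAP_PROP_CONVERT_RGB does not appear to help here), one workaround is to bypass VideoCapture and read raw RGBA frames from an ffmpeg pipe. This is a minimal sketch in Python; the same approach ports to Qt/C++ with QProcess. The 1920x1080 frame size is an assumption; query the real size with ffprobe first.

import subprocess
import numpy as np

WIDTH, HEIGHT = 1920, 1080  # assumed; read the real values from ffprobe

proc = subprocess.Popen(
    ["ffmpeg",
     "-c:v", "libvpx-vp9",   # force the alpha-capable decoder
     "-i", "output.webm",
     "-f", "rawvideo",
     "-pix_fmt", "rgba",     # ask for 4-channel frames
     "pipe:1"],
    stdout=subprocess.PIPE,
)

frame_size = WIDTH * HEIGHT * 4
while True:
    buf = proc.stdout.read(frame_size)
    if len(buf) < frame_size:
        break
    frame = np.frombuffer(buf, np.uint8).reshape(HEIGHT, WIDTH, 4)
    # frame[..., 3] is the alpha plane; this layout maps directly onto
    # QImage::Format_RGBA8888 on the Qt side.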


-
Show progress bar of an ffmpeg video conversion
13 June 2022, by stackexchange.com-upvm25mz
I'm trying to print a progress bar while executing ffmpeg, but I'm having trouble getting the total number of frames and the current frame being processed. The error I get is
AttributeError: 'NoneType' object has no attribute 'groups'.
I've copied the function from this program, and for some reason it works there but not here, even though I haven't changed that part.



pattern_duration = re.compile(
    'duration[ \t\r]?:[ \t\r]?(.+?),[ \t\r]?start', re.IGNORECASE)
pattern_progress = re.compile('time=(.+?)[ \t\r]?bitrate', re.IGNORECASE)

def execute_ffmpeg(self, manager, cmd):
    proc = expect.spawn(cmd, encoding='utf-8')
    self.update_progress_bar(proc, manager)
    self.raise_ffmpeg_error(proc)

def update_progress_bar(self, proc, manager):
    total = self.get_total_frames(proc)
    cont = 0
    pbar = self.initialize_progress_bar(manager)
    try:
        proc.expect(pattern_duration)
        while True:
            progress = self.get_current_frame(proc)
            percent = progress / total * 100
            pbar.update(percent - cont)
            cont = percent
    except expect.EOF:
        pass
    finally:
        if pbar is not None:
            pbar.close()

def raise_ffmpeg_error(self, proc):
    proc.expect(expect.EOF)
    res = proc.before
    res += proc.read()
    exitstatus = proc.wait()
    if exitstatus:
        raise ffmpeg.Error('ffmpeg', '', res)

def initialize_progress_bar(self, manager):
    pbar = None
    pbar = manager.counter(
        total=100,
        desc=self.path.rsplit(os.path.sep, 1)[-1],
        unit='%',
        bar_format=BAR_FMT,
        counter_format=COUNTER_FMT,
        leave=False
    )
    return pbar

def get_total_frames(self, proc):
    return sum(map(lambda x: float(
        x[1]) * 60 ** x[0],
        enumerate(reversed(proc.match.groups()[0].strip().split(':')))))

def get_current_frame(self, proc):
    proc.expect(pattern_progress)
    return sum(map(lambda x: float(
        x[1]) * 60 ** x[0],
        enumerate(reversed(proc.match.groups()[0].strip().split(':')))))
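
A hedged diagnosis: update_progress_bar calls get_total_frames(proc) before any proc.expect(...) has run, so proc.match is still None at that point, which is exactly what the AttributeError says. Assuming pexpect semantics (expect populates proc.match on a successful match), one sketch of a fix is to expect the duration pattern first and only then parse it:

def update_progress_bar(self, proc, manager):
    cont = 0
    pbar = self.initialize_progress_bar(manager)
    try:
        # Match the "Duration: HH:MM:SS.xx, start" header first, so that
        # proc.match is populated before get_total_frames() reads it.
        proc.expect(pattern_duration)
        total = self.get_total_frames(proc)
        while True:
            progress = self.get_current_frame(proc)
            percent = progress / total * 100
            pbar.update(percent - cont)
            cont = percent
    except expect.EOF:
        pass
    finally:
        if pbar is not None:
            pbar.close()

Note that despite their names, get_total_frames and get_current_frame actually parse timestamps into seconds, but the seconds/seconds ratio still yields a valid percentage.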



-
Show video length in HLS player before all TS files are created
14 November 2022, by WilliamTaco
I have a Spring Boot backend which, on request (on demand), uses ffmpeg to create an m3u8 playlist and its TS files from an mp4 file. Basically, my React frontend requests index.m3u8 from the backend; if it doesn't already exist, the backend creates it and then starts serving it along with its TS files. This causes the frontend HLS player to show the video length as the combined length of the chunks generated so far, which keeps growing until the whole video has been transcoded. That makes sense, but I was wondering what the correct way is to show the full length in the player even though the video is not fully created yet?


I'm using react-hls-player to play the stream, and Spring Boot plus a Java ffmpeg wrapper to transcode the video.


I might be thinking about this the wrong way, so feel free to correct me if I'm on the wrong path!
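
One common approach (a sketch under assumptions, not a confirmed recipe for react-hls-player): since the source duration is known before transcoding starts, the backend can write the complete VOD playlist up front with the expected segment durations and serve each .ts file as ffmpeg produces it, so the player sees the full duration immediately. The segment naming and the 4-second segment length below are assumptions and must match the ffmpeg segmenting options actually used; the sketch is Python for brevity, but the logic ports directly to the Spring Boot side.

import math

def write_full_playlist(total_duration, segment_length=4.0, path="index.m3u8"):
    # total_duration: source length in seconds, e.g. obtained via ffprobe.
    # Assumes ffmpeg cuts fixed-length segments named seg0.ts, seg1.ts, ...
    n = math.ceil(total_duration / segment_length)
    lines = [
        "#EXTM3U",
        "#EXT-X-VERSION:3",
        f"#EXT-X-TARGETDURATION:{math.ceil(segment_length)}",
        "#EXT-X-MEDIA-SEQUENCE:0",
        "#EXT-X-PLAYLIST-TYPE:VOD",
    ]
    for i in range(n):
        # Last segment is usually shorter than the nominal length.
        dur = min(segment_length, total_duration - i * segment_length)
        lines.append(f"#EXTINF:{dur:.3f},")
        lines.append(f"seg{i}.ts")
    lines.append("#EXT-X-ENDLIST")
    with open(path, "w") as f:
        f.write("\n".join(lines) + "\n")

For this to line up, ffmpeg needs keyframes forced at the segment interval (e.g. -force_key_frames "expr:gte(t,n_forced*4)") so that real segment boundaries match the nominal ones, and requests for segments that do not exist yet need to block or be retried by the player.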