
Media (1)
-
The Pirate Bay from Belgium
1 April 2013, by
Updated: April 2013
Language: French
Type: Image
Other articles (71)
-
Updating from version 0.1 to 0.2
24 June 2013, by
An explanation of the notable changes made when moving from version 0.1 of MediaSPIP to version 0.3. What is new?
Regarding software dependencies: the latest versions of FFmpeg (>= v1.2.1) are used; the dependencies for Smush are installed; MediaInfo and FFprobe are installed to retrieve metadata; ffmpeg2theora is no longer used; flvtool2 is no longer installed, in favour of flvtool++; ffmpeg-php, which is no longer maintained, is no longer installed, in (...)
-
Customize by adding your logo, banner or background image
5 September 2013, by
Some themes support three customization elements: adding a logo; adding a banner; adding a background image.
-
Writing a news item
21 June 2013, by
Present the changes in your MediaSPIP, or news about your projects, on your MediaSPIP using the news section.
In MediaSPIP's default theme, spipeo, news items are displayed at the bottom of the main page, below the editorial items.
You can customize the form used to create a news item.
News item creation form: for a document of type news item, the default fields are: publication date (customize the publication date) (...)
On other sites (10999)
-
Create a demultiplexer for MPEG-2 TS in Android
17 November 2014, by anz
I have a requirement to extract ID3 tags from an MPEG-2 TS (HLS stream). MPEG-2 has limited playback support in Android, but my concern is only to extract the ID3 tags (playing the file is not necessary), so I am not concerned with the codecs (encoding and decoding).
I have explored the following options:
libstagefright and OpenMAX: the playback engine implemented by Google since Android 2.0.
Its MediaExtractor is responsible for retrieving track data and the corresponding metadata from the underlying file system or an HTTP stream. But according to the post "Adding video codec to Android", I would need to build my own firmware or my own media player, and I am hoping I do not have to go down that path. More information on Stagefright and OpenMAX can be found here: An overview of Stagefright player
Android’s Stagefright Media Player Architecture
Custom Wrapper Codec Integration into Android
How to integrate a decoder to multimedia framework
Compiling and using FFmpeg: a complete, cross-platform solution to record, convert and stream audio and video. We can demultiplex TS files with this library, as mentioned here:
FFmpeg - Extracting video and audio from transport stream file (.ts).
But I am not sure whether I will be able to extract the ID3 tags from the HLS stream. libavformat might be able to do this, but I would still need a mechanism for signalling the extracted metadata to my application (a rough libavformat sketch follows this list of options).
Compiling VLC for Android: I have compiled VLC for Android and made some modifications inside the transport module of the demux component to extract the tags, but it cannot play all of the streams that I feed it.
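If libavformat can indeed see the tags, I imagine the reading side would look roughly like the sketch below. This is only a guess on my part: it assumes an FFmpeg build recent enough that the MPEG-TS demuxer exposes timed ID3 metadata as a data stream with codec id AV_CODEC_ID_TIMED_ID3, the input path is a placeholder, and the printf stands in for whatever mechanism I end up using to signal the metadata to my application.

/* Hedged sketch, not verified: pull timed ID3 packets out of a TS/HLS input
 * with libavformat (assumes AV_CODEC_ID_TIMED_ID3 is available). */
#include <stdio.h>
#include <inttypes.h>
#include <libavformat/avformat.h>

int main(int argc, char **argv)
{
    const char *url = argc > 1 ? argv[1] : "input.ts";  /* placeholder input */
    AVFormatContext *fmt = NULL;
    AVPacket pkt;
    int i, id3_stream = -1;

    av_register_all();
    avformat_network_init();

    if (avformat_open_input(&fmt, url, NULL, NULL) < 0)
        return 1;
    if (avformat_find_stream_info(fmt, NULL) < 0)
        return 1;

    /* Look for the data stream the mpegts demuxer creates for timed ID3 tags. */
    for (i = 0; i < fmt->nb_streams; i++) {
        if (fmt->streams[i]->codec->codec_id == AV_CODEC_ID_TIMED_ID3) {
            id3_stream = i;
            break;
        }
    }

    av_init_packet(&pkt);
    while (id3_stream >= 0 && av_read_frame(fmt, &pkt) >= 0) {
        if (pkt.stream_index == id3_stream) {
            /* pkt.data should hold a raw ID3v2 tag; hand it to the app here. */
            printf("ID3 packet: %d bytes at pts %" PRId64 "\n", pkt.size, pkt.pts);
        }
        av_free_packet(&pkt);
    }

    avformat_close_input(&fmt);
    avformat_network_deinit();
    return 0;
}

Whether that data stream actually appears seems to depend on the FFmpeg version and on how the HLS stream carries its ID3 tags, so I would treat this only as a starting point.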
After looking through these options, I am still unsure how to achieve this. I do not want to create a media player, as I will not be playing the files, nor do I want to build my own firmware. Using FFmpeg seems to be the most viable option, but I would like to try this without using any third-party or open-source library. My questions are:
Is it even possible to create a demultiplexer from scratch that will work on Android (a bare-bones packet parser is sketched at the end of this post)?
If so, how should I go about it?
Are there any options that I have missed?
I am new to this. Any help would be greatly appreciated. Thanks.
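To make the "from scratch" part of my question concrete, the sketch below is roughly the packet-level layer I picture writing myself: fixed 188-byte packets, a 0x47 sync byte, a 13-bit PID and an optional adaptation field before the payload. PAT/PMT parsing and PES reassembly (which is where the ID3 data actually lives) would still have to be built on top of it, and the file name is just a placeholder.

/* Rough sketch of the lowest layer of a hand-written TS demultiplexer. */
#include <stdio.h>
#include <stdint.h>

#define TS_PACKET_SIZE 188

int main(void)
{
    FILE *f = fopen("input.ts", "rb");      /* placeholder input file */
    uint8_t pkt[TS_PACKET_SIZE];

    if (!f)
        return 1;

    while (fread(pkt, 1, TS_PACKET_SIZE, f) == TS_PACKET_SIZE) {
        if (pkt[0] != 0x47)                 /* sync byte; resync logic omitted */
            continue;

        int payload_unit_start = (pkt[1] >> 6) & 0x1;
        int pid                = ((pkt[1] & 0x1F) << 8) | pkt[2];
        int adaptation_ctrl    = (pkt[3] >> 4) & 0x3;

        /* Payload starts at byte 4, or after the adaptation field if present. */
        int offset = 4;
        if (adaptation_ctrl & 0x2)
            offset += 1 + pkt[4];
        if (!(adaptation_ctrl & 0x1) || offset >= TS_PACKET_SIZE)
            continue;

        printf("PID 0x%04x, unit_start=%d, %d payload bytes\n",
               pid, payload_unit_start, TS_PACKET_SIZE - offset);
        /* A real demuxer would now feed pkt + offset into a per-PID
         * PES/section reassembler. */
    }

    fclose(f);
    return 0;
}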
-
FFmpeg muxing Theora/Vorbis: unable to flush?
11 November 2013, by user2979732
I'm pretty new to FFmpeg and it is confusing. I'm working on a basic muxer and have spent over a week on this. I don't normally post, since I solve 98% of my issues with Google, but I have been unable to solve this one so far.
The basis of my source is FFmpeg's own muxing.c example. When I force it to use libvorbis for the audio and create "test.ogg", it shows the same issues I'm having in my own derivation of muxing.c. The problem is with Ogg/Theora/Vorbis. I'm forcing the audio codec like this:
audio_st = add_stream(oc, &audio_codec, avcodec_find_encoder_by_name("libvorbis")->id);
It seems the problem is that the muxing.c sample does not set the audio pts. There is general confusion about this; nobody apart from this poster addressed what I am looking for: http://webcache.googleusercontent.com/search?q=cache:6ml82RMN3YYJ:ffmpeg.org/pipermail/libav-user/2013-April/004304.html+&cd=4&hl=en&ct=clnk&gl=cz
Naturally, I couldn't find any answers to that. Why don't they set the audio pts? Laziness? Is it not needed? Do they assume every encoder will produce the pts for them (which is not true, as seen below)?
Anyway, when you try muxing.c with mp4/libx264 and a forced libmp3lame, everything is fine, although the encoder warns that "encoder did not produce valid pts, making some up." With ogg/theora/vorbis, however, it stays silent, as if the pts were valid(?), yet the result is that no audio packets are present in the stream(!), at least from what I saw with ffprobe. As a consequence the video won't even play until you remove the empty audio stream; then it plays, which shows that the video stream itself is fine.
Coming to my original issue: I tried setting the pts on the audio frames I send to the encoder to fix that problem (which already feels wrong). I was unable to find a definite answer on how to set the pts properly, which is the other big issue, as I keep trying things I'm not sure are correct. Anyway, in the end, with "some" pts set, I do get an Ogg file with sound:
if (frame->pts == AV_NOPTS_VALUE) frame->pts = audio_sync_opts;
audio_sync_opts = frame->pts + frame->nb_samples;
I'm aware that I should probably use rescaling to adjust for the container time bases, etc. If this were present or explained in FFmpeg's own sample, I wouldn't have to guess now, as I'm still not 100% sure about the time-base relationship between container and codec (I think the container time base somehow takes precedence over the codec one).
My other problem is flushing, but that might be related to the broken pts, so I won't go into detail. The basic problem is that when I send a finite number of audio frames, say 20, I only get 2 packets out, for example. From my understanding I need to flush the remaining audio after all of the encoding/muxing is done, which I managed to do with mp4/libx264/libmp3lame, but with ogg/theora/vorbis it does not flush, and I have no idea why.
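From what I have pieced together so far, something along the lines of the helper below is what I believe is missing from the audio path, using the same pre-3.x API that muxing.c itself uses (AVStream.codec, avcodec_encode_audio2). The function name and the way it would be wired into muxing.c are my own guesses, not code from the sample, so treat it as a sketch rather than a verified fix.

#include <libavformat/avformat.h>
#include <libavutil/mathematics.h>

/* Encode one audio frame, or drain the encoder when frame == NULL, and write
 * the resulting packets with timestamps rescaled from the codec time base to
 * the stream time base. */
static int write_audio_packets(AVFormatContext *oc, AVStream *st, AVFrame *frame)
{
    AVCodecContext *c = st->codec;          /* pre-3.x field, as in muxing.c */
    int got_packet;

    do {
        AVPacket pkt = { 0 };               /* data/size left NULL so the encoder allocates */
        av_init_packet(&pkt);
        got_packet = 0;

        if (avcodec_encode_audio2(c, &pkt, frame, &got_packet) < 0)
            return -1;
        if (!got_packet)
            break;

        /* muxing.c skips this step for audio; without it the Vorbis packets
         * can end up with timestamps the Ogg muxer cannot use. */
        if (pkt.pts != AV_NOPTS_VALUE)
            pkt.pts = av_rescale_q(pkt.pts, c->time_base, st->time_base);
        if (pkt.dts != AV_NOPTS_VALUE)
            pkt.dts = av_rescale_q(pkt.dts, c->time_base, st->time_base);
        pkt.duration = av_rescale_q(pkt.duration, c->time_base, st->time_base);
        pkt.stream_index = st->index;

        if (av_interleaved_write_frame(oc, &pkt) < 0)
            return -1;
    } while (!frame);                        /* keep looping only when draining */

    return 0;
}

The idea would be to call it once per real frame (with frame->pts counted in samples, as in my snippet above) and then once more with frame set to NULL after the last audio frame, so that the packets libvorbis has buffered internally finally come out.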
If someone could rework muxing.c so that it sends a finite number of audio and video frames, i.e. not "until duration > X" but until, say, 20 video frames and 100 audio frames have been sent (the number of frames I have is what matters, not the video duration I end up with), then encodes/muxes all of those frames with proper video and audio pts, works with theora/ogg and flushes where needed, that would probably solve all of my issues. I'm sure that for an FFmpeg expert, modifying muxing.c to address all of those points would be a quick exercise, and it could help more than one confused person.
Thanks!
-
Opening a file with an unknown extension (MJPEG?) in OpenCV with Python
14 November 2013, by bw4sz
I am trying to open a third-party video file in OpenCV with Python.
My camera (a PlotWatcher camera trap) records in a silly proprietary format. The extension is unusual (.tlv), but I can play the file in VLC, and using ffmpeg I can see the following:
C:\Users\Ben>ffmpeg -i C:/Users/Ben/Documents/OpenCV_HummingbirdsMotion/PlotwatcherTest.tlv
ffmpeg version N-58037-g355cea8 Copyright (c) 2000-2013 the FFmpeg developers
built on Nov 11 2013 18:01:42 with gcc 4.8.2 (GCC)
configuration: --enable-gpl --enable-version3 --disable-w32threads --enable-avisynth --enable-bzlib --enable-fontconfig --enable-frei0r --enable-gnutls --enable-iconv --enable-libass --enable-libbluray --enable-libcaca --enable-libfreetype --enable-libgsm --enable-libilbc --enable-libmodplug --enable-libmp3lame --enable-libopencore-amrnb --enable-libopencore-amrwb --enable-libopenjpeg --enable-libopus --enable-librtmp --enable-libschroedinger --enable-libsoxr --enable-libspeex --enable-libtheora --enable-libtwolame --enable-libvidstab --enable-libvo-aacenc --enable-libvo-amrwbenc --enable-libvorbis --enable-libvpx --enable-libwavpack --enable-libx264 --enable-libxavs --enable-libxvid --enable-zlib
libavutil 52. 52.100 / 52. 52.100
libavcodec 55. 41.100 / 55. 41.100
libavformat 55. 21.100 / 55. 21.100
libavdevice 55. 5.100 / 55. 5.100
libavfilter 3. 90.102 / 3. 90.102
libswscale 2. 5.101 / 2. 5.101
libswresample 0. 17.104 / 0. 17.104
libpostproc 52. 3.100 / 52. 3.100
Input #0, avi, from 'C:/Users/Ben/Documents/OpenCV_HummingbirdsMotion/PlotwatcherTest.tlv':
Duration: 00:00:05.00, start: 0.000000, bitrate: 14608 kb/s
Stream #0:0: Video: mjpeg (MJPG / 0x47504A4D), yuvj420p(pc), 1280x720, 10 tbr, 10 tbn, 10 tbc
From this I can see that the video stream is MJPEG, inside an AVI container.
How can I open this file in OpenCV?
import cv2
#import cv2.cv as cv
import numpy as np
cap = cv2.VideoCapture("C:/Users/Ben/Documents/OpenCV_HummingbirdsMotion/PlotwatcherTest.mjpg")
ret, frame = cap.read()
#show first image
cv2.imshow('my window',frame)
cv2.waitKey(0)
cv2.destroyWindow('my window')
I can see that nothing has been loaded. When I try to view the first frame, I get this error:
File "C:\Users\Ben\Documents\OpenCV_HummingbirdsMotion\Test.py", line 21, in <module>
cv2.imshow('my window',frame)
error: ..\..\..\..\opencv\modules\highgui\src\window.cpp:261: error: (-215) size.width>0 && size.height>0
I've tried keeping the native .tlv extension as well as renaming to .mjpeg, .mjpg and .MJPG, following the idea found here: MJPEG stream fails to open in OpenCV 2.4
I appreciate all help!