
Media (1)
-
GetID3 - File information block
9 April 2013, by
Updated: May 2013
Language: French
Type: Image
Other articles (79)
-
Writing a news item
21 June 2013, by
Present the changes to your MediaSPIP, or news about your projects, on your MediaSPIP via the news section.
In MediaSPIP's default spipeo theme, news items are displayed at the bottom of the main page, below the editorials.
You can customize the form used to create a news item.
News creation form: for a document of the news type, the fields offered by default are: publication date (customize the publication date) (...) -
Publishing on MédiaSpip
13 June 2013
Can I post content from an iPad tablet?
Yes, if your installed MédiaSpip is at version 0.2 or higher. If needed, contact your MédiaSpip administrator to find out. -
MediaSPIP v0.2
21 June 2013, by
MediaSPIP 0.2 is the first stable version of MediaSPIP.
Its official release date is 21 June 2013, and it is announced here.
The zip file provided here contains only the MediaSPIP sources, in standalone form.
As with the previous version, all of the software dependencies must be installed manually on the server.
If you wish to use this archive for a farm-mode installation, you will also need to make further changes (...)
On other sites (10233)
-
Revision ef887974aa: vp8 fast quantizer with intrinsics
2 February 2013, by Johann
Changed Paths:
Modify /vp8/encoder/x86/quantize_sse2.asm
Add /vp8/encoder/x86/quantize_sse2.c
Modify /vp8/vp8cx.mk
vp8 fast quantizer with intrinsics: Reduce dependency on the offsets file by using intrinsics. Disassembly shows improvements over the previous assembly, specifically in register management, (...)
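For context, "using intrinsics" here means expressing the SIMD quantizer in C through compiler intrinsics (the new quantize_sse2.c) rather than in a standalone assembly file. The following is only a minimal, hypothetical sketch of what such a fast-quantizer inner loop can look like with SSE2 intrinsics; the function name, argument layout and fixed-point scheme are illustrative assumptions, not the actual libvpx code.

/* Hypothetical sketch of a fast quantizer written with SSE2 intrinsics.
 * Not the libvpx routine; it only illustrates the general technique. */
#include <emmintrin.h>
#include <stdint.h>

void fast_quantize_sketch(const int16_t *coeff, const int16_t *round,
                          const int16_t *quant, const int16_t *dequant,
                          int16_t *qcoeff, int16_t *dqcoeff, int n)
{
    for (int i = 0; i < n; i += 8) {
        __m128i z    = _mm_loadu_si128((const __m128i *)(coeff + i));
        __m128i sign = _mm_srai_epi16(z, 15);                        /* 0 or -1 per lane */
        __m128i x    = _mm_sub_epi16(_mm_xor_si128(z, sign), sign);  /* |z|              */

        x = _mm_adds_epu16(x, _mm_loadu_si128((const __m128i *)(round + i)));
        /* y = ((|z| + round) * quant) >> 16 : the usual fixed-point quantize step */
        __m128i y = _mm_mulhi_epu16(x, _mm_loadu_si128((const __m128i *)(quant + i)));

        y = _mm_sub_epi16(_mm_xor_si128(y, sign), sign);             /* restore the sign */
        _mm_storeu_si128((__m128i *)(qcoeff + i), y);

        /* dequantized coefficients for the reconstruction path */
        __m128i dq = _mm_mullo_epi16(y, _mm_loadu_si128((const __m128i *)(dequant + i)));
        _mm_storeu_si128((__m128i *)(dqcoeff + i), dq);
    }
}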
-
GStreamer with MPEG-TS Video4Linux ATSC/DVB Recording
21 June 2013, by Dustin Oprea
I'm having an impossible time setting up a filtergraph to read from a recording that I made from my DVB video4linux device. Any help would be vastly appreciated.
How I made the recording:
To tune the channel:
azap -c ~/channels.conf "Florida"
To record the channel:
cat /dev/dvb/adapter0/dvr0 > /tmp/test
This is the way that recordings must be made (I cannot use any GST DVB plugins to do this for me).
I used tstools to identify that the recording is a TS stream:
tstools/bin$ ./stream_type ~/recordings/20130129-202049
Reading from /home/dustin/recordings/20130129-202049
It appears to be a Transport Stream... but there are no PAT/PMT frames:
tstools/bin$ ./tsinfo ~/recordings/20130129-202049
Reading from /home/dustin/recordings/20130129-202049
Scanning 10000 TS packets
Found 0 PAT packets and 0 PMT packets in 10000 TS packets
I was able to produce a single ES (elementary stream) by running ts2es:
tstools/bin$ ./ts2es -pid 97 ~/recordings/20130129-202049 ~/recordings/20130129-202049.es
Reading from /home/dustin/recordings/20130129-202049
Writing to /home/dustin/recordings/20130129-202049.es
Extracting packets for PID 0061 (97)
!!! 4 bytes ignored at end of file - not enough to make a TS packet
Extracted 219258 of 248113 TS packets
I am able to play the ES stream (even though the video is frozen on the first frame):
gst-launch-0.10 filesrc location=~/recordings/20130129-202049.es ! decodebin2 ! autovideosink
gst-launch-0.10 filesrc location=~/recordings/20130129-202049.es ! decodebin2 ! xvimagesink
gst-launch-0.10 playbin2 uri=file:///home/dustin/recordings/20130129-202049.es
No matter what I do, though, I can't get the original TS file to open. However, it opens perfectly in MPlayer/FFmpeg (but not in VLC). This is the output of FFmpeg:
ffmpeg -i 20130129-202049
ffmpeg version 0.8.5-4:0.8.5-0ubuntu0.12.04.1, Copyright (c) 2000-2012 the Libav developers
built on Jan 24 2013 18:03:14 with gcc 4.6.3
*** THIS PROGRAM IS DEPRECATED ***
This program is only provided for compatibility and will be removed in a future release. Please use avconv instead.
[mpeg2video @ 0x9be7be0] mpeg_decode_postinit() failure
Last message repeated 4 times
[mpegts @ 0x9be3aa0] max_analyze_duration reached
[mpegts @ 0x9be3aa0] PES packet size mismatch
Input #0, mpegts, from '20130129-202049':
Duration: 00:03:39.99, start: 9204.168844, bitrate: 1696 kb/s
Stream #0.0[0x61]: Video: mpeg2video (Main), yuv420p, 528x480 [PAR 40:33 DAR 4:3], 15000 kb/s, 30.57 fps, 29.97 tbr, 90k tbn, 59.94 tbc
Stream #0.1[0x64]: Audio: ac3, 48000 Hz, stereo, s16, 192 kb/s
At least one output file must be specified
This tells us that the video stream has PID 0x61 (97).
I have been trying for a few days, so the following are only examples of a couple of attempts. I'll provide the playbin2 example first, since I know that thousands of people will respond, insisting that I just use that. It doesn't work.
gst-launch-0.10 playbin2 uri=file:///home/dustin/recordings/20130129-202049
Setting pipeline to PAUSED ...
Pipeline is PREROLLING ...
ERROR: from element /GstPlayBin2:playbin20/GstURIDecodeBin:uridecodebin0/GstDecodeBin2:decodebin20/GstMpegTSDemux:mpegtsdemux0: Could not determine type of stream.
Additional debug info:
gstmpegtsdemux.c(2931): gst_mpegts_demux_sink_event (): /GstPlayBin2:playbin20/GstURIDecodeBin:uridecodebin0/GstDecodeBin2:decodebin20/GstMpegTSDemux:mpegtsdemux0:
No valid streams found at EOS
ERROR: pipeline doesn't want to preroll.
Setting pipeline to NULL ...
Freeing pipeline ...
It fails [probably] because no PID has been specified with which to find the video (the "EOS" error, I think).
Naturally, I tried the following to start by demuxing the TS format. I believe it's the "es-pids" property that receives the PID in the absence of PMT information (which, as tstools showed above, is missing), but I tried "program-number" too, just in case. gst-inspect indicates that one is hex and the other is decimal:
gst-launch-0.10 -v filesrc location=20130129-202049 ! mpegtsdemux es-pids=0x61 ! fakesink
gst-launch-0.10 -v filesrc location=20130129-202049 ! mpegtsdemux program-number=97 ! fakesink
Output:
gst-launch-0.10 filesrc location=20130129-202049 ! mpegtsdemux es-pids=0x61 ! fakesink
Setting pipeline to PAUSED ...
Pipeline is PREROLLING ...
ERROR: from element /GstPipeline:pipeline0/GstMpegTSDemux:mpegtsdemux0: Could not determine type of stream.
Additional debug info:
gstmpegtsdemux.c(2931): gst_mpegts_demux_sink_event (): /GstPipeline:pipeline0/GstMpegTSDemux:mpegtsdemux0:
No valid streams found at EOS
ERROR: pipeline doesn't want to preroll.
Setting pipeline to NULL ...
Freeing pipeline ...
dustin@dustinmicro:~/recordings$ gst-launch-0.10 -v filesrc location=20130129-202049 ! mpegtsdemux es-pids=0x61 ! fakesink
Setting pipeline to PAUSED ...
Pipeline is PREROLLING ...
ERROR: from element /GstPipeline:pipeline0/GstMpegTSDemux:mpegtsdemux0: Could not determine type of stream.
Additional debug info:
gstmpegtsdemux.c(2931): gst_mpegts_demux_sink_event (): /GstPipeline:pipeline0/GstMpegTSDemux:mpegtsdemux0:
No valid streams found at EOS
ERROR: pipeline doesn't want to preroll.
Setting pipeline to NULL ...
Freeing pipeline ...
However, when I try mpegpsdemux (for program streams (PS), as opposed to transport streams (TS)), I get further:
gst-launch-0.10 filesrc location=20130129-202049 ! mpegpsdemux ! fakesink
Setting pipeline to PAUSED ...
Pipeline is PREROLLING ...
(gst-launch-0.10:14805): GStreamer-CRITICAL **: gst_event_new_new_segment_full: assertion `position != -1' failed
(gst-launch-0.10:14805): GStreamer-CRITICAL **: gst_mini_object_ref: assertion `mini_object != NULL' failed
(gst-launch-0.10:14805): GStreamer-CRITICAL **: gst_pad_push_event: assertion `event != NULL' failed
Pipeline is PREROLLED ...
Setting pipeline to PLAYING ...
New clock: GstSystemClock
(gst-launch-0.10:14805): GStreamer-CRITICAL **: gst_event_new_new_segment_full: assertion `position != -1' failed
(gst-launch-0.10:14805): GStreamer-CRITICAL **: gst_mini_object_ref: assertion `mini_object != NULL' failed
(gst-launch-0.10:14805): GStreamer-CRITICAL **: gst_pad_push_event: assertion `event != NULL' failed
(gst-launch-0.10:14805): GStreamer-CRITICAL **: gst_event_new_new_segment_full: assertion `position != -1' failed
(gst-launch-0.10:14805): GStreamer-CRITICAL **: gst_mini_object_ref: assertion `mini_object != NULL' failed
(gst-launch-0.10:14805): GStreamer-CRITICAL **: gst_pad_push_event: assertion `event != NULL' failed
(gst-launch-0.10:14805): GStreamer-CRITICAL **: gst_event_new_new_segment_full: assertion `position != -1' failed
(gst-launch-0.10:14805): GStreamer-CRITICAL **: gst_mini_object_ref: assertion `mini_object != NULL' failed
(gst-launch-0.10:14805): GStreamer-CRITICAL **: gst_pad_push_event: assertion `event != NULL' failed
...
Got EOS from element "pipeline0".
Execution ended after 1654760008 ns.
Setting pipeline to PAUSED ...
Setting pipeline to READY ...
Setting pipeline to NULL ...
Freeing pipeline ...
I'll still get the same problem whenever I use the mpegtsdemux, even if it follows mpegpsdemux, above.
I don't understand this, since I haven't even picked a PID yet.
What am I doing wrong?
Dustin
-
Patent skullduggery: Tandberg rips off x264 algorithm
Update: Tandberg claims they came up with the algorithm independently: to be fair, I can actually believe this to some extent, as I think the algorithm is way too obvious to be patented. Of course, they also claim that the algorithm isn't actually identical, since they don't want to lose their patent application.
I still don't trust them, but it's possible it's merely bad research (and thus being unaware of prior art) as opposed to anything malicious. Furthermore, word from within their office suggests they're quite possibly being honest: supposedly the development team does not read x264 code at all. So this might just all be very bad luck.
Regardless, the patent is still complete tripe, and should never have been filed.
Most importantly, stop harassing the guy whose name is on the patent (Lars): he's just a programmer, not the management or lawyers responsible for filing the patent. This is stupid and unnecessary. I've removed the original post because of this; it can be found here for those who want to read it.
Appendix: the details of the patent:
I figure I’ll go over the exact correspondence between the patent and my code here.
1. A method for calculating run and level representations of quantized transform coefficients representing pixel values included in a block of a video picture, the method comprising:
Translation: It's a run-level coder.
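For illustration only (my own sketch, not code from x264, Tandberg, or the patent; all names are hypothetical), a run-level coder over already-reordered coefficients can be written in plain C like this:

/* Illustrative scalar run-level coder: for each non-zero coefficient, emit a
 * (run, level) pair, where run counts the zeros since the previous non-zero. */
#include <stdint.h>

typedef struct { int run; int level; } runlevel_t;

static int runlevel_encode(const int16_t *zigzag, int n, runlevel_t *out)
{
    int count = 0, run = 0;
    for (int i = 0; i < n; i++) {
        if (zigzag[i] == 0) {
            run++;                       /* extend the current run of zeros  */
        } else {
            out[count].run   = run;      /* zeros preceding this coefficient */
            out[count].level = zigzag[i];
            count++;
            run = 0;
        }
    }
    return count;                        /* number of (run, level) pairs     */
}

The SIMD steps described in the rest of the claim are just a faster way of doing this loop.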
packing, at a video processing apparatus, each quantized transform coefficients in a value interval [Max, Min] by setting all quantized transform coefficients greater than Max equal to Max, and all quantized transform coefficients less than Min equal to Min
The quantized coefficients are clipped to a certain valid range to allow them to be packed into bytes (they start as 16-bit values).
reordering, at the video processing apparatus, the quantized transform ID coefficients according to a predefined order depending on respective positions in the block resulting in an array C of reordered quantized transform coefficients
This is the zigzag pattern used in H.264 (and most formats) for reordering DCT coefficients. In x264, this is done before the run-level coder step.
masking, at the video processing apparatus, C by generating an array M containing ones in positions corresponding to positions of C having non-zero values, and zeros in positions corresponding to positions of C having zero values
This is creating a bitmask based on the coefficient values, the pmovmskb step.
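As a rough sketch of that step (illustrative C intrinsics of my own, not the actual x264 routine), the 16-bit coefficients can be packed down to bytes with a saturating pack, compared against zero, and collapsed into a 16-bit mask by pmovmskb (_mm_movemask_epi8):

/* Sketch: build a bitmask of non-zero positions for a block of 16 coefficients.
 * Bit i of the result is 1 if zigzag[i] != 0. Not the actual x264 code. */
#include <emmintrin.h>
#include <stdint.h>

static unsigned nonzero_mask16(const int16_t *zigzag)
{
    __m128i a = _mm_loadu_si128((const __m128i *)(zigzag + 0));
    __m128i b = _mm_loadu_si128((const __m128i *)(zigzag + 8));
    /* packsswb saturates to bytes; any non-zero word stays non-zero as a byte */
    __m128i packed = _mm_packs_epi16(a, b);
    __m128i eq     = _mm_cmpeq_epi8(packed, _mm_setzero_si128()); /* 0xFF where zero     */
    unsigned mask  = (unsigned)_mm_movemask_epi8(eq);             /* the pmovmskb step   */
    return ~mask & 0xFFFF;                                        /* 1 bits at non-zeros */
}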
is generating, at the video processing apparatus, for each position containing a one in M, a run and a level representation by setting the level value equal to an occurring value in a corresponding position of C; and setting, at the video processing apparatus, for each position containing a one in M5 the run value equal to the number of proceeding positions relative to a current position in M since a previous occurrence of one in M.
This is the process of creating run/level values from the bitmask.
Now into the detailed claims:
2. The method according to Claim 1, wherein the masking further includes, creating an array C from C where positions corresponding to positions of nonzero values in C are filled with ones, and positions corresponding to positions of zero values in C are filled with zeros, and creating M from C by extracting the most significant bit from values in respective position of C and inserting the bits in corresponding positions in M.
They’re extracting the most significant bit of the values to create a bitmask. This is exactly what the pmovmskb in my algorithm does.
3. The method according to Claim 2, wherein the creating of the array C is executed by a C++ function PCMPGTB, and the creating of M from C is executed by a C++ function PMOVMSKB.
And here they use pcmpgtb (they call it a C++ function for some reason, but it's an SSE instruction) to do the clipping of the input values. This is exactly the same method I used in decimate_score. They also use pmovmskb as mentioned.
4. The method according to Claim 1, wherein the generating of the run and level representation further includes determining positions containing non-zero values in C by corresponding positions containing ones in M.
5. The method according to Claim 4, wherein the determining of positions containing non-zero values in C is executed by a C++ function BSF.
Here they iterate over the bitmask of transform coefficients using a “BSF” function to find runs, which is exactly what I did. Of course, BSF isn’t a function, it’s an x86 instruction.
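As a final illustrative sketch (hypothetical names; a GCC/Clang builtin standing in for the raw BSF instruction), scanning that mask with count-trailing-zeros jumps directly from one non-zero coefficient to the next, producing the (run, level) pairs without visiting the zero coefficients individually:

/* Sketch: walk the non-zero bitmask with a BSF-style primitive to emit
 * (run, level) pairs. runlevel_t is the same struct as in the scalar sketch. */
#include <stdint.h>

typedef struct { int run; int level; } runlevel_t;

static int runlevel_from_mask(const int16_t *zigzag, unsigned mask, runlevel_t *out)
{
    int count = 0, prev = -1;
    while (mask) {
        int pos = __builtin_ctz(mask);       /* index of the next set bit (BSF/TZCNT) */
        out[count].run   = pos - prev - 1;   /* zeros since the previous non-zero     */
        out[count].level = zigzag[pos];
        count++;
        prev = pos;
        mask &= mask - 1;                    /* clear the bit we just handled         */
    }
    return count;
}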
6. The method according to Claim 1, wherein Max is 256 and Min is 0.
This is almost surely a typo or mistake of some sort. They mean the Max should be 255, not 256: 256 doesn't fit in a uint8_t.
7. The method according to Claim 1, wherein the predefined order follows a zigzag path of transform coefficient positions in the block starting in an upper left corner heading towards a lower right corner.
This is a description of the typical DCT zigzag pattern (like in H.264, MPEG-2, Theora, etc).
Everything after this part is just repeating itself with the phrase “an apparatus” added in order to make the USPTO listen to them.