Other articles (58)

  • Diogene: creating specific input masks for content editing forms

    26 October 2010, by

    Diogene is one of the SPIP plugins enabled by default (as an extension) when MediaSPIP is initialized.
    What this plugin is for
    Creating form masks
    The Diogène plugin lets you create sector-specific form masks for the three SPIP objects: articles; rubriques (sections); sites.
    It thus lets you define, for a particular sector, one form mask per object, adding or removing fields so as to make the form (...)

  • MediaSPIP version 0.1 Beta

    16 April 2011, by

    MediaSPIP 0.1 beta is the first version of MediaSPIP declared "usable".
    The zip file provided here contains only the MediaSPIP sources, in the standalone version.
    To get a working installation, all the software dependencies must be installed manually on the server.
    If you want to use this archive for a farm-mode installation, you will also need to make further changes (...)

  • Managing creation and editing rights for objects

    8 February 2011, by

    By default, many features are restricted to administrators, but each can be configured independently to change the minimum status required to use it, notably: writing content on the site, adjustable in the form template management; adding notes to articles; adding captions and annotations to images;

On other sites (3819)

  • VP8 Codec Optimization Update

    15 June 2010, by noreply@blogger.com (John Luther) — inside webm

    Since WebM launched in May, the team has been working hard to make the VP8 video codec faster. Our community members have contributed improvements, but there’s more work to be done in some interesting areas related to performance (more on those below).


    Encoder


    The VP8 encoder is ripe for speed optimizations. Scott LaVarnway’s efforts in writing an x86 assembly version of the quantizer will help significantly toward this goal, as the quantizer is called many times while the encoder makes decisions about how much detail from the image will be transmitted.
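
    For context, the inner loop of a VP8-style quantizer is short, branch-light and run once per 4x4 block of coefficients, which is what makes it such a good target for hand-written SIMD. The following is only a conceptual scalar sketch (the real libvpx routine also handles zig-zag scan order, zero-bin thresholds and end-of-block tracking, and its names differ):

    /* Conceptual scalar sketch of a VP8-style block quantizer (illustrative,
     * not the actual libvpx code). */
    #include <stdlib.h>

    static void quantize_block_sketch(const short *coeff,   /* 16 transform coefficients  */
                                      const short *round,   /* per-coefficient rounding   */
                                      const short *quant,   /* per-coefficient multiplier */
                                      const short *dequant, /* per-coefficient step size  */
                                      short *qcoeff,        /* quantized output           */
                                      short *dqcoeff)       /* dequantized output         */
    {
        for (int i = 0; i < 16; i++) {
            int z  = coeff[i];
            int sz = (z < 0) ? -1 : 0;                        /* remember the sign          */
            int x  = abs(z) + round[i];
            int y  = (int)(((long long)x * quant[i]) >> 16);  /* scale down to a quant index */
            qcoeff[i]  = (short)((y ^ sz) - sz);              /* restore the sign           */
            dqcoeff[i] = (short)(qcoeff[i] * dequant[i]);
        }
    }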

    For those of you eager to get involved, one piece of low-hanging fruit is writing a SIMD version of the ARNR temporal filtering code. Also, much of the assembly code only makes use of the SSE2 instruction set, and there are surely newer extensions that could be put to use. There is also redundant code to remove and other general cleanup to do (Yaowu Xu has already submitted some changes for these).
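
    To make that piece of low-hanging fruit concrete, here is a rough scalar sketch of what an ARNR-style temporal filter does per pixel (illustrative only, not the libvpx implementation): each pixel is blended into an accumulator with a weight that shrinks as the pixel differs more from the alt-ref anchor frame, exactly the kind of independent per-pixel arithmetic that maps well onto SIMD lanes.

    /* Rough sketch of ARNR-style temporal filtering (illustrative, not the
     * actual libvpx routine). */
    static void temporal_filter_apply_sketch(const unsigned char *anchor,
                                             const unsigned char *frame,
                                             unsigned int *accumulator,
                                             unsigned short *count,
                                             int n_pixels,
                                             int strength,       /* e.g. 0..6 */
                                             int filter_weight)  /* e.g. 0..2 */
    {
        for (int i = 0; i < n_pixels; i++) {
            int diff = frame[i] - anchor[i];
            int modifier = diff * diff * 3;                  /* penalize differences */
            modifier += (strength > 0) ? (1 << (strength - 1)) : 0;
            modifier >>= strength;
            if (modifier > 16)
                modifier = 16;
            modifier = (16 - modifier) * filter_weight;      /* similar pixels weigh more */
            accumulator[i] += (unsigned int)(modifier * frame[i]);
            count[i] += (unsigned short)modifier;
        }
    }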

    At a higher level, someone could explore alternative motion search strategies in the encoder. Eventually the motion search can be decoupled entirely, to allow motion fields to be calculated elsewhere (for example, on a graphics processor).

    Decoder


    Decoder optimizations can bring higher resolutions and smoother playback to less powerful hardware.

    Jeff Muizelaar has submitted some changes which combine the IDCT and summation with the predicted block into a single function, helping us avoid storing the intermediate result, thus reducing memory transfers and avoiding cache pollution. This changes the assembly code in a fundamental way, so we will need to sync the other platforms up or switch them to a generic C implementation and accept the performance regression. Johann Koenig is working on implementing this change for ARM processors, and we’ll merge these changes into the mainline soon.
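
    A structural sketch of that change (function and argument names here are mine, not the actual libvpx symbols): instead of an IDCT kernel that writes a 16-sample residual block to a scratch buffer in memory and a separate reconstruction kernel that reads it back and adds it to the predictor, a single fused kernel keeps the residual local and writes only the final reconstructed pixels.

    /* Illustrative fused IDCT + add-to-prediction kernel (not the real libvpx
     * code).  The point is the data flow: the residual never round-trips
     * through a scratch buffer in memory. */

    /* Stand-in for VP8's real 4x4 inverse transform; the actual math is
     * omitted since only the data flow matters here. */
    static void idct4x4_standin(const short *coeffs, short *residual)
    {
        for (int i = 0; i < 16; i++)
            residual[i] = coeffs[i];
    }

    static unsigned char clamp_u8(int v)
    {
        return (unsigned char)(v < 0 ? 0 : v > 255 ? 255 : v);
    }

    static void idct_add_4x4(const short *coeffs,
                             const unsigned char *pred, int pred_stride,
                             unsigned char *dst, int dst_stride)
    {
        short residual[16];                    /* small enough to stay in registers/L1 */
        idct4x4_standin(coeffs, residual);     /* inverse transform                    */
        for (int r = 0; r < 4; r++)            /* ...and summation with the predictor  */
            for (int c = 0; c < 4; c++)
                dst[r * dst_stride + c] =
                    clamp_u8(pred[r * pred_stride + c] + residual[r * 4 + c]);
    }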

    In addition, Tim Terriberry is attacking a different method of bounds checking on the "bool decoder." The bool decoder is performance-critical, as it is called several times for each bit in the input stream. The current code handles this check with a simple clamp in the innermost loops and a less-frequent copy into a circular buffer. This can be expensive at higher data rates. Tim’s patch removes the circular buffer, but uses a more complex clamp in the innermost loops. These inner loops have historically been troublesome on embedded platforms.
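
    For readers unfamiliar with it, the refill step below is a much-simplified sketch of where that bounds check lives (illustrative, not the actual libvpx "bool decoder"): because the decoder pulls in new input bytes so frequently, the choice between a cheap clamp that feeds in zero bytes past the end of the buffer and an up-front copy into a circular buffer has a measurable cost.

    /* Much-simplified sketch of a bool-decoder refill step (illustrative, not
     * the actual libvpx implementation). */
    typedef struct {
        const unsigned char *ptr;   /* next input byte                    */
        const unsigned char *end;   /* one past the last valid byte       */
        unsigned int value;         /* bit window currently being decoded */
        int bit_count;              /* number of valid bits in 'value'    */
    } bool_decoder_sketch;

    static void bool_decoder_fill(bool_decoder_sketch *d)
    {
        while (d->bit_count <= 24) {
            /* The clamp: past the end of the buffer we shift in zero bytes
             * instead of reading out of bounds.  The alternative is copying
             * the tail of the stream into a circular buffer up front, which
             * is what Tim's patch removes. */
            unsigned int byte = (d->ptr < d->end) ? *d->ptr++ : 0;
            d->value |= byte << (24 - d->bit_count);
            d->bit_count += 8;
        }
    }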

    To contribute to these efforts, I’ve started working on rewriting higher-level parts of the decoder. I believe there is an opportunity to improve performance by paying better attention to data locality and cache layout, and reducing memory bus traffic in general. Another area I plan to explore is improving utilization in the multi-threaded decoder by separating the bitstream decoding from the rest of the image reconstruction, using work units larger than a single macroblock, and not tying functionality to a specific thread. To get involved in these areas, subscribe to the codec-devel mailing list and provide feedback on the code as it’s written.

    Embedded Processors


    We want to optimize multiple platforms, not just desktops. Fritz Koenig has already started looking at the performance of VP8 on the Intel Atom platform. This platform needs some attention as we wrote our current x86 assembly code with an out-of-order processor in mind. Since Atom is an in-order processor (much like the original Pentium), the instruction scheduling of all of the x86 assembly code needs to be reexamined. One option we’re looking at is scheduling the code for the Atom processor and seeing if that impacts the performance on other x86 platforms such as the Via C3 and AMD Geode. This is shaping up to be a lot of work, but doing it would provide us with an opportunity to tighten up our assembly code.

    These issues, along with wanting to make better use of the larger register file on x86_64, may reignite every assembly programmer’s (least?) favorite debate: whether or not to use intrinsics. Yunqing Wang has been experimenting with this a bit, but initial results aren’t promising. If you have experience in dealing with a lot of assembly code across several similar-but-kinda-different platforms, these maintainability issues might be familiar to you. I hope you’ll share your thoughts and experiences on the codec-devel mailing list.
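
    To give a flavour of the trade-off, here is the same 16x16 sum of absolute differences written once in plain C and once with SSE2 intrinsics (an illustrative kernel, not code from libvpx): the intrinsic version is far easier to maintain across platforms than a .asm file, but it hands register allocation and instruction scheduling back to the compiler, which is the heart of the debate.

    #include <emmintrin.h>   /* SSE2 intrinsics */
    #include <stdlib.h>

    /* Plain C reference. */
    static unsigned int sad16x16_c(const unsigned char *src, int src_stride,
                                   const unsigned char *ref, int ref_stride)
    {
        unsigned int sad = 0;
        for (int r = 0; r < 16; r++)
            for (int c = 0; c < 16; c++)
                sad += (unsigned int)abs(src[r * src_stride + c] - ref[r * ref_stride + c]);
        return sad;
    }

    /* SSE2 intrinsics version: one 16-byte row per iteration. */
    static unsigned int sad16x16_sse2(const unsigned char *src, int src_stride,
                                      const unsigned char *ref, int ref_stride)
    {
        __m128i acc = _mm_setzero_si128();
        for (int r = 0; r < 16; r++) {
            __m128i s = _mm_loadu_si128((const __m128i *)(src + r * src_stride));
            __m128i t = _mm_loadu_si128((const __m128i *)(ref + r * ref_stride));
            acc = _mm_add_epi64(acc, _mm_sad_epu8(s, t));   /* two 64-bit partial sums */
        }
        acc = _mm_add_epi64(acc, _mm_srli_si128(acc, 8));   /* fold high half into low */
        return (unsigned int)_mm_cvtsi128_si32(acc);
    }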

    Optimizing codecs is an iterative (some would say never-ending) process, so stay tuned for more posts on the progress we’re making, and by all means, start hacking yourself.

    It’s exciting to see that we’re starting to get substantial code contributions from developers outside of Google, and I look forward to more as WebM grows into a strong community effort.

    John Koleszar is a software engineer at Google.

  • Error while processing the decoded data for stream using ffmpeg

    15 June 2018, by Robert Smith

    I am using the following command:

    ffmpeg

       -i "video1a.flv"
       -i "video1b.flv"
       -i "video1c.flv"
       -i "video2a.flv"
       -i "video3a.flv"
       -i "video4a.flv"
       -i "video4b.flv"
       -i "video4c.flv"
       -i "video4d.flv"
       -i "video4e.flv"

       -filter_complex

       nullsrc=size=640x480[base];
       [0:v]setpts=PTS-STARTPTS+0.12/TB,scale=320x240[1a];
       [1:v]setpts=PTS-STARTPTS+3469.115/TB,scale=320x240[1b];
       [2:v]setpts=PTS-STARTPTS+7739.299/TB,scale=320x240[1c];
       [5:v]setpts=PTS-STARTPTS+4390.466/TB,scale=320x240[4a];
       [6:v]setpts=PTS-STARTPTS+6803.937/TB,scale=320x240[4b];
       [7:v]setpts=PTS-STARTPTS+8242.005/TB,scale=320x240[4c];
       [8:v]setpts=PTS-STARTPTS+9811.577/TB,scale=320x240[4d];
       [9:v]setpts=PTS-STARTPTS+10765.19/TB,scale=320x240[4e];
       [base][1a]overlay=eof_action=pass[o1];
       [o1][1b]overlay=eof_action=pass[o1];
       [o1][1c]overlay=eof_action=pass:shortest=1[o1];
       [o1][4a]overlay=eof_action=pass:x=320:y=240[o4];
       [o4][4b]overlay=eof_action=pass:x=320:y=240[o4];
       [o4][4c]overlay=eof_action=pass:x=320:y=240[o4];
       [o4][4d]overlay=eof_action=pass:x=320:y=240[o4];
       [o4][4e]overlay=eof_action=pass:x=320:y=240;
       [0:a]asetpts=PTS-STARTPTS+0.12/TB,aresample=async=1,pan=1c|c0=c0,apad[a1a];
       [1:a]asetpts=PTS-STARTPTS+3469.115/TB,aresample=async=1,pan=1c|c0=c0,apad[a1b];
       [2:a]asetpts=PTS-STARTPTS+7739.299/TB,aresample=async=1,pan=1c|c0=c0[a1c];
       [3:a]asetpts=PTS-STARTPTS+82.55/TB,aresample=async=1,pan=1c|c0=c0,apad[a2a];
       [4:a]asetpts=PTS-STARTPTS+2687.265/TB,aresample=async=1,pan=1c|c0=c0,apad[a3a];
       [a1a][a1b][a1c][a2a][a3a]amerge=inputs=5

       -c:v libx264 -c:a aac -ac 2 output.mp4

    This is the stream data from ffmpeg:

    Input #0
       Stream #0:0: Video: vp6f, yuv420p, 160x128, 1k tbr, 1k tbn
       Stream #0:1: Audio: nellymoser, 11025 Hz, mono, flt
    Input #1
       Stream #1:0: Audio: nellymoser, 11025 Hz, mono, flt
       Stream #1:1: Video: vp6f, yuv420p, 160x128, 1k tbr, 1k tbn
    Input #2
       Stream #2:0: Audio: nellymoser, 11025 Hz, mono, flt
       Stream #2:1: Video: vp6f, yuv420p, 160x128, 1k tbr, 1k tbn
    Input #3
       Stream #3:0: Audio: nellymoser, 11025 Hz, mono, flt
    Input #4
       Stream #4:0: Audio: nellymoser, 11025 Hz, mono, flt
    Input #5
       Stream #5:0: Video: vp6f, yuv420p, 1680x1056, 1k tbr, 1k tbn
    Input #6
       Stream #6:0: Video: vp6f, yuv420p, 1680x1056, 1k tbr, 1k tbn
    Input #7
       Stream #7:0: Video: vp6f, yuv420p, 1680x1056, 1k tbr, 1k tbn
    Input #8
       Stream #8:0: Video: vp6f, yuv420p, 1680x1056, 1k tbr, 1k tbn
    Input #9
       Stream #9:0: Video: vp6f, yuv420p, 1680x1056, 1k tbr, 1k tbn
    Stream mapping:
     Stream #0:0 (vp6f) -> setpts
     Stream #0:1 (nellymoser) -> asetpts
     Stream #1:0 (nellymoser) -> asetpts
     Stream #1:1 (vp6f) -> setpts
     Stream #2:0 (nellymoser) -> asetpts
     Stream #2:1 (vp6f) -> setpts
     Stream #3:0 (nellymoser) -> asetpts
     Stream #4:0 (nellymoser) -> asetpts
     Stream #5:0 (vp6f) -> setpts
     Stream #6:0 (vp6f) -> setpts
     Stream #7:0 (vp6f) -> setpts
     Stream #8:0 (vp6f) -> setpts
     Stream #9:0 (vp6f) -> setpts
     overlay -> Stream #0:0 (libx264)
     amerge -> Stream #0:1 (aac)

    This is the error:

    Press [q] to stop, [?] for help

    Enter command: <target>|all <time>|-1 <command>[ <argument>]

    Parse error, at least 3 arguments were expected, only 1 given in string 'ho Oscar'
    [Parsed_amerge_44 @ 0a7238c0] No channel layout for input 1
    [Parsed_amerge_44 @ 0a7238c0] Input channel layouts overlap: output layout will be determined by the number of distinct input channels
    [Parsed_pan_27 @ 07681880] Pure channel mapping detected: 0
    [Parsed_pan_31 @ 07681b40] Pure channel mapping detected: 0
    [Parsed_pan_35 @ 0a7232c0] Pure channel mapping detected: 0
    [Parsed_pan_38 @ 0a7234c0] Pure channel mapping detected: 0
    [Parsed_pan_42 @ 0a723740] Pure channel mapping detected: 0
    [libx264 @ 069e8a40] using SAR=1/1
    [libx264 @ 069e8a40] using cpu capabilities: MMX2 SSE2Fast SSSE3 SSE4.2 AVX FMA3 BMI2 AVX2
    [libx264 @ 069e8a40] profile High, level 3.0
    [libx264 @ 069e8a40] 264 - core 155 r2901 7d0ff22 - H.264/MPEG-4 AVC codec - Copyleft 2003-2018 - http://www.videolan.org/x264.html - options: cabac=1 ref=3 deblock=1:0:0 analyse=0x3:0x113 me=hex subme=7 psy=1 psy_rd=1.00:0.00 mixed_ref=1 me_range=16 chroma_me=1 trellis=1 8x8dct=1 cqm=0 deadzone=21,11 fast_pskip=1 chroma_qp_offset=-2 threads=15 lookahead_threads=2 sliced_threads=0 nr=0 decimate=1 interlaced=0 bluray_compat=0 constrained_intra=0 bframes=3 b_pyramid=2 b_adapt=1 b_bias=0 direct=1 weightb=1 open_gop=0 weightp=2 keyint=250 keyint_min=25 scenecut=40 intra_refresh=0 rc_lookahead=40 rc=crf mbtree=1 crf=23.0 qcomp=0.60 qpmin=0 qpmax=69 qpstep=4 ip_ratio=1.40 aq=1:1.00
    Output #0, mp4, to 'output.mp4':
     Metadata:
       canSeekToEnd    : false
       encoder         : Lavf58.16.100
       Stream #0:0: Video: h264 (libx264) (avc1 / 0x31637661), yuv420p(progressive), 640x480 [SAR 1:1 DAR 4:3], q=-1--1, 25 fps, 12800 tbn, 25 tbc (default)
       Metadata:
         encoder         : Lavc58.19.102 libx264
       Side data:
         cpb: bitrate max/min/avg: 0/0/0 buffer size: 0 vbv_delay: -1
       Stream #0:1: Audio: aac (LC) (mp4a / 0x6134706D), 11025 Hz, stereo, fltp, 128 kb/s (default)
       Metadata:
         encoder         : Lavc58.19.102 aac
    frame=  200 fps=0.0 q=28.0 size=       0kB time=00:00:07.82 bitrate=   0.0kbits/s speed=15.6x    
    ...  
    frame=30132 fps=497 q=28.0 size=   29952kB time=00:20:05.14 bitrate= 203.6kbits/s speed=19.9x    
    Error while filtering: Cannot allocate memory
    Failed to inject frame into filter network: Cannot allocate memory
    Error while processing the decoded data for stream #2:1
    [libx264 @ 069e8a40] frame I:121   Avg QP: 8.83  size:  7052
    [libx264 @ 069e8a40] frame P:7609  Avg QP:18.33  size:  1527
    [libx264 @ 069e8a40] frame B:22367 Avg QP:25.44  size:   112
    [libx264 @ 069e8a40] consecutive B-frames:  0.6%  0.7%  1.0% 97.8%
    [libx264 @ 069e8a40] mb I  I16..4: 75.7% 18.3%  6.0%
    [libx264 @ 069e8a40] mb P  I16..4:  0.3%  0.7%  0.1%  P16..4: 10.6%  3.3%  1.6%  0.0%  0.0%    skip:83.4%
    [libx264 @ 069e8a40] mb B  I16..4:  0.0%  0.0%  0.0%  B16..8:  3.2%  0.2%  0.0%  direct: 0.2%  skip:96.5%  L0:47.7% L1:48.2% BI: 4.0%
    [libx264 @ 069e8a40] 8x8 transform intra:37.4% inter:70.2%
    [libx264 @ 069e8a40] coded y,uvDC,uvAC intra: 38.9% 46.1% 28.7% inter: 1.7% 3.3% 0.1%
    [libx264 @ 069e8a40] i16 v,h,dc,p: 78%  8%  4% 10%
    [libx264 @ 069e8a40] i8 v,h,dc,ddl,ddr,vr,hd,vl,hu: 33% 20% 12%  3%  6%  8%  6%  6%  7%
    [libx264 @ 069e8a40] i4 v,h,dc,ddl,ddr,vr,hd,vl,hu: 37% 22%  9%  4%  6%  7%  5%  5%  4%
    [libx264 @ 069e8a40] i8c dc,h,v,p: 60% 16% 17%  7%
    [libx264 @ 069e8a40] Weighted P-Frames: Y:0.7% UV:0.6%
    [libx264 @ 069e8a40] ref P L0: 65.5% 12.3% 14.2%  8.0%  0.0%
    [libx264 @ 069e8a40] ref B L0: 90.2%  7.5%  2.3%
    [libx264 @ 069e8a40] ref B L1: 96.4%  3.6%
    [libx264 @ 069e8a40] kb/s:99.58
    [aac @ 069e9600] Qavg: 65519.982
    [aac @ 069e9600] 2 frames left in the queue on closing
    Conversion failed!

    I am trying to figure out how to fix these errors:

    Error while filtering: Cannot allocate memory
    Failed to inject frame into filter network: Cannot allocate memory
    Error while processing the decoded data for stream #2:1

    Observation #1

    If I run the following command on stream #2:1 by itself:

    ffmpeg -i video1c.flv -vcodec libx264 -acodec aac video1c.mp4

    The file is converted fine with no errors.

    Observation #2

    Running MediaInfo on video1c.flv (stream #2) shows the following:

    Format: Flash Video
    Video Codecs: On2 VP6
    Audio Codecs: Nellymoser

    Any help would be appreciated in resolving this error.

    Update #1

    I have tried splitting the filter graph into two as requested, but I receive the same errors:

    Error while filtering: Cannot allocate memory
    Failed to inject frame into filter network: Cannot allocate memory
    Error while processing the decoded data for stream #1:1

    However, I did discover something: if I try to open stream #1:1 mentioned above (video1b.flv) in VLC Media Player, I can hear the audio but I cannot see the video, and I receive this error message:

    No suitable decoder module:
    VLC Does not support the audio or video format "undf".
    Unfortunately there is no way for you to fix this.

    Update #2

    The above error was with the 32-bit version of ffmpeg. I switched to a 64-bit machine and am now running the 64-bit ffmpeg build ffmpeg-20180605-b748772-win64-static.

    Now I no longer receive the following error:

    Error while processing the decoded data for stream #1:1

    But I have a new error. About an hour into running it, I receive the following error:

    av_interleaved_write_frame(): Cannot allocate memory
    [mp4 @ 000000000433f080] Application provided duration: 3327365388930198318 / timestamp: 17178820096 is out of range for mov/mp4 format

    I also tried remuxing all the files first, as suggested, and running the above command on those files, but that did not help. I still get the same error.
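
    (For reference, "remuxing" here means rewrapping each input without re-encoding, along the following lines; the output file name is arbitrary:)

    ffmpeg -i video1b.flv -c copy video1b_remux.flv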

  • How Media Analytics for Piwik gives you the insights you need to measure how effective your video and audio marketing is – Part 1

    31 January 2017, by InnoCraft — Community

    Do you have video or audio content on your website or in your app? If you answered yes, you should continue reading and learn everything about our Media Analytics premium feature.

    When you produce video or audio content, you are spending money, time, or often both on your content in the hope of increasing conversions or sales. This means you have to know how your media is being used, when it is used, for how long and by whom. You simply cannot afford not to know how this content affects your overall business goals, as you are likely losing money and time by not making the most of it. Would you be able to answer any of the above questions? Do you know whether you can justify the cost and time of producing your media, which videos work better than others, and how they support your marketing strategy? Luckily, getting all these insights is now so easy that it is almost a crime not to measure them.

    Getting Media Analytics and Installation

    Media Analytics can be purchased from the Piwik Marketplace where you find all sorts of free plugins as well as several premium features such as A/B Testing or Funnel. After the purchase you will receive a license key that you can enter in your Piwik to install and update the plugin with just one click.

    The feature will in most cases automatically start tracking your media content, and you don’t even need to change the tracking code on your website. Currently supported players include, for example, YouTube, Vimeo, HTML5, JW Player, VideoJS and many more. You can also easily extend it by adding a custom media player, or simply let us know which player you use and we will add support for it.

    By activating this feature, you get more than 15 new media reports, even more exportable widgets, new segments, APIs, and more. We will cover some of those features in this blog post and in part 2. For a full list of features check out the Media Analytics page on the Piwik Marketplace.

    Media Overview

    As the name says, it gives you an overview of your media usage and how it performs over time. You can chart any media metric in the big evolution graph, and the sparklines below give you an overview of all important metrics at a glance.

    It lets you see, for example, how often media was shown to your users, how often users started playing your media, for how long they watched it, how often they finished it, and more. If you see spikes there, you should definitely take a deeper look at the other reports. When you hover over a metric, a tooltip explains how the data for it is collected and what it means.

    Real-Time Media

    On the Real-Time page you can see how your content is being used by your visitors right now, for example within the last 30 minutes, last 60 minutes and last 24 hours.

    It shows you how many plays you had in the last few minutes, for how long visitors played them, and which media titles are currently most popular. This is great for discovering which media content performs best right now and lets you make decisions based on user behaviour as it happens.

    Below it you can see our Audience Real-Time Map, which shows you where in the world your media is being played. A bigger circle indicates that a media play happened more recently, and of course you can zoom in to countries and regions.

    All the reports update every few seconds, so you can take a look at any time and see at a glance how your content is doing and how certain marketing campaigns affect it. All these real-time reports can also be added as widgets to any of your Piwik dashboards, and they can be exported, for example as an iframe.

    Video, Audio and Media Player reports

    Those reports come with so many features that they need a separate blog post; we will cover them in part 2.

    Events

    Media Analytics automatically tracks events, so you can see, for example, how often users pressed play or pause, how often they resumed a video, and how often they finished one. This helps you better understand how your media is being used.

    For example, in the past we noticed a couple of videos with lots of pause and resume events. We then had a look at the Audience Log – which we will cover next – to better understand why visitors paused the videos so often. We realized they did this especially for videos served from a specific server: because the videos were loading so slowly, users often pressed pause to let the media buffer, played the media for a few seconds, then paused it again as they had to wait for the video to load. Moving those videos to another, faster server showed immediate results: the number of pauses went down and, on average, visitors watched the videos for much longer.

    Audience Log

    At InnoCraft, we understand that not only aggregated metrics matter, but that you often need the ability to dig into your data and “debug” certain behaviours to understand the cause of unusually high or low metrics. For example, you may find that many of your users often pause a video, and then wonder how each individual user behaved so you can better understand why.

    The audience log shows you a detailed log of every visitor. You can chronologically see every action a visitor has performed during their whole visit. If you click on the visitor profile link, you can even see all visits of a specific visitor, and all actions they have ever performed on your website.

    This lets you ultimately debug and understand your visitors and see exactly which actions they performed before playing your media, which media they played, how they played your media, and how they behaved after playing your media.

    The visitor log of course also shows important information about each visitor, such as where they came from (referrer), their location, software, device and much more.

    Audience Map

    The Audience Map is similar to the Real-Time Map but it shows you the locations of your visitors based on a selected date range and not in real time. The darker the blue, the more visitors from that country, region or city have interacted with your media.

    Coming in part 2

    In the next part we will cover which video, audio and media player reports Media Analytics provides, how segmenting gives you insights into different personas, and how nicely it integrates into Piwik.

    How to get Media Analytics and related features

    You can get Media Analytics on the Piwik Marketplace. If you want to learn more about this feature, you might also be interested in the Media Analytics User Guide and the Media Analytics FAQ.