Other articles (49)

  • HTML5 audio and video support

    10 April 2011

    MediaSPIP uses the HTML5 video and audio tags to play multimedia documents, taking advantage of the latest W3C innovations supported by modern browsers.
    For older browsers, the Flowplayer Flash player is used instead.
    The HTML5 player was created specifically for MediaSPIP: its appearance is fully customizable to match a chosen theme.
    These technologies make it possible to deliver video and sound both on conventional computers (...)

  • HTML5 audio and video support

13 April 2011

    MediaSPIP uses HTML5 video and audio tags to play multimedia files, taking advantage of the latest W3C innovations supported by modern browsers.
The player has been created specifically for MediaSPIP and can be easily adapted to fit in with a specific theme.
    For older browsers the Flowplayer flash fallback is used.
    MediaSPIP allows for media playback on major mobile platforms with the above (...)
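
    As a rough sketch of the tag-plus-Flash-fallback approach described above (the file names and player markup are assumptions, not MediaSPIP’s actual output):

      <video controls width="640" height="360">
        <source src="clip.mp4" type="video/mp4" />
        <source src="clip.ogv" type="video/ogg" />
        <!-- browsers without HTML5 video render the element’s contents instead -->
        <object type="application/x-shockwave-flash" data="flowplayer.swf"
                width="640" height="360">
          <param name="flashvars" value='config={"clip":"clip.mp4"}' />
        </object>
      </video>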

  • From upload to the final video [standalone version]

31 January 2010

The path of an audio or video document through SPIPMotion is divided into three distinct stages.
    Upload and retrieval of information about the source video
    First, you need to create a SPIP article and attach the "source" video document to it.
    When this document is attached to the article, two actions are executed in addition to the normal behavior: the technical information about the file’s audio and video streams is retrieved, and a thumbnail is generated by extracting a (...)

On other sites (8593)

  • Resurrecting SCD

6 August 2010, by Multimedia Mike — Reverse Engineering

When I became interested in reverse engineering all the way back in 2000, the first Win32 disassembler I stumbled across was simply called "Win32 Program Disassembler", authored by one Sang Cho. I took to calling it ’scd’ for Sang Cho’s Disassembler. The original program versions and source code are still available for download. I remember being able to compile v0.23 of the source code with gcc under Unix; 0.25 is a no-go due to its extensive reliance on the Win32 development environment.

    I recently wanted to use scd again but had some trouble compiling. As was the case the first time I tried compiling the source code a decade ago, it’s necessary to transform line endings from DOS -> Unix using ’dos2unix’ (I see that this has been renamed to/replaced by ’fromdos’ on Ubuntu).

Beyond this, it seems that there are some C constructs that used to be valid but are now rejected by gcc. The first looks like this:

C:

      return (int) c = *(PBYTE)((int)lpFile + vCodeOffset);

Ahem, "error: lvalue required as left operand of assignment". Removing the "(int)" before the ’c’ makes the problem go away. It’s a strange way to write a return statement in general. However, ’c’ is a global variable that is apparently part of some YACC/BISON-type output.
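
    With the cast removed as described, the line that current gcc accepts looks like this (the assignment to the global ’c’ still happens before the value is returned):

    C:

      return c = *(PBYTE)((int)lpFile + vCodeOffset);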

    The second issue is when a case-switch block has a default label but with no code inside. Current gcc doesn’t like that. It’s necessary to at least provide a break statement after the default label.
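
    A minimal sketch of the fix (the switch variable here is invented, not taken from the scd source); in C, a label must be followed by a statement, so an empty default needs at least a break:

    C:

      switch (opcode) {
        case 0:
          /* ... */
          break;
        default:
          break;  /* formerly empty; the label needs a statement after it */
      }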

    Finally, the program turns out to not be 64-bit safe. It is necessary to compile it in 32-bit mode (compile and link with the ’-m32’ flag or build on a 32-bit system). The static 32-bit binary should run fine under a 64-bit kernel.
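
    For reference, a 32-bit build would look something like this (the catch-all file list is an assumption; the actual build procedure may differ):

      gcc -m32 -o scd *.c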

Alternatively: What are some other Win32 disassemblers that work under Linux?

  • FFmpeg and Code Coverage Tools

21 August 2010, by Multimedia Mike — FATE Server, Python

Code coverage tools likely occupy the same niche as profiling tools: tools that you’re supposed to use somewhere during the software engineering process but probably never quite get around to, usually because you’re too busy adding features or fixing bugs. But there may come a day when you wish to learn how much of your code is actually being exercised in normal production use. For example, the team charged with continuously testing the FFmpeg project would be curious to know how much code is being exercised, especially since many of the FATE test specs explicitly claim to be "exercising XYZ subsystem".

    The primary GNU code coverage tool is called gcov and is probably already on your GNU-based development system. I set out to determine how much FFmpeg source code is exercised while running the full FATE suite. I ran into some problems when trying to use gcov on a project-wide scale. I spackled around those holes with some very ad-hoc solutions. I’m sure I was just overlooking some more obvious solutions about which you all will be happy to enlighten me.

    Results
I’ve learned to cut to the chase earlier in blog posts (results first, methods second). With that, here are the results I produced from this experiment. This Google spreadsheet contains 3 sheets: the first contains code coverage stats for a bunch of FFmpeg C files sorted first by percent coverage (ascending), then by number of lines (descending), thus highlighting which files have the most uncovered code (ffserver.c currently tops that chart). The second sheet has files for which no stats were generated. The third sheet has "problems". These files were rejected by my ad-hoc script.

    Here’s a link to the data in CSV if you want to play with it yourself.

    Using gcov with FFmpeg
To instrument a program for gcov analysis, compile and link the target program with the -fprofile-arcs and -ftest-coverage options. These need to be applied at both the compile and link stages, so in the case of FFmpeg, configure with:

      ./configure \
  --extra-cflags="-fprofile-arcs -ftest-coverage" \
        --extra-ldflags="-fprofile-arcs -ftest-coverage"
    

    The building process results in a bunch of .gcno files which pertain to code coverage. After running the program as normal, a bunch of .gcda files are generated. To get coverage statistics from these files, run 'gcov sourcefile.c'. This will print some basic statistics as well as generate a corresponding .gcov file with more detailed information about exactly which lines have been executed, and how many times.
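
    For example, a session might look something like this (the file name and numbers are invented; the "Lines executed" line is the one parsed by the script shown later):

      gcov h264.c
      File 'h264.c'
      Lines executed:72.50% of 2000
      h264.c:creating 'h264.c.gcov'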

Be advised that the source file must either live in the same directory from which gcov is invoked, or else the source’s directory (which, for FFmpeg’s in-tree build, also holds the coverage files) must be passed to gcov via the ’-o, --object-directory’ option.

    Resetting Statistics
Statistics in the .gcda files are cumulative. Should you wish to reset the statistics, doing this in the build directory should suffice:

      find . -name "*.gcda" | xargs rm -f
    

    Getting Project-Wide Data
    As mentioned, I had to get a little creative here to get a big picture of FFmpeg code coverage. After building FFmpeg with the code coverage options and running FATE,

for file in `find . -name "*.c"`
    do
      echo "*****" $file
      gcov -o `dirname $file` `basename $file`
    done > ffmpeg-code-coverage.txt 2>&1
    

    After that, I ran the ffmpeg-code-coverage.txt file through a custom Python script to print out the 3 CSV files that I later dumped into the Google Spreadsheet.

    Further Work
    I’m sure there are better ways to do this, and I’m sure you all will let me know what they are. But I have to get the ball rolling somehow.

    There’s also TestCocoon. I’d like to try that program and see if it addresses some of gcov’s shortcomings (assuming they are indeed shortcomings rather than oversights).

Source for script: process-gcov-slop.py

PYTHON:

      #!/usr/bin/python

      # Parse the output of running gcov across the FFmpeg source tree
      # and emit 3 CSV reports: per-file coverage stats, files with no
      # coverage data, and files that this script could not parse.

      import re

      lines = open("ffmpeg-code-coverage.txt").read().splitlines()
      no_coverage = ""
      coverage = "filename, % covered, total lines\n"
      problems = ""

      # gcov's summary line looks like: "Lines executed:72.50% of 2000"
      stats_exp = re.compile('Lines executed:(\d+\.\d+)% of (\d+)')
      for i in xrange(len(lines)):
        line = lines[i]
        if line.startswith("***** "):
          # "***** ./path/to/file.c" markers come from the driving shell loop
          filename = line[line.find('./')+2:]
          i += 1
          if lines[i].find(":cannot open graph file") != -1:
            # no .gcno graph file means no coverage data for this file
            no_coverage += filename + '\n'
          else:
            # skip ahead to the gcov output block for this file
            while lines[i].find(filename) == -1 and not lines[i].startswith("***** "):
              i += 1
            try:
              (percent, total_lines) = stats_exp.findall(lines[i+1])[0]
              coverage += filename + ', ' + percent + ', ' + total_lines + '\n'
            except IndexError:
              problems += filename + '\n'

      open("no_coverage.csv", 'w').write(no_coverage)
      open("coverage.csv", 'w').write(coverage)
      open("problems.csv", 'w').write(problems)

  • NAB 2010 wrapup

15 April 2010

Another year of NAB has come and gone. Making it out of Vegas with some remaining faith in humanity seems like a successful outcome. So, anything worth talking about at the show?

First off, there’s 3D. 3D is The Next Big Thing, and that was obvious to anyone who spent half a second on the show floor. Everything from camera rigs, to post-production apps, to display technology was all 3D, all the time. I’m not a huge fan of 3D in most cases, but the industry is at least feigning interest.

Luckily, at a show as big as NAB, there’s plenty of other cool stuff to see. So, what struck my fancy?

First off, Avid and Adobe were showing new versions of Media Composer and Premiere. Both sounded pretty amazing on paper, but I must say I was somewhat underwhelmed by both in reality. Premiere felt a little rough around the edges - the Mercury Playback Engine wasn’t the sort of next-generation tech that I expected. Media Composer 5 has some nice new tweaks, but it’s still rather Avid-y - which is good for Avid people, less interesting for the rest of us.

In other software news, Blackmagic Design was showing off some of what they’re doing with the DaVinci technology that they acquired. Software-only DaVinci Resolve for $999 is a pretty amazing deal, and the demos were quite nice. That said, color correction is an art, so just making the technology cheaper isn’t necessarily going to dramatically change the number of folks who do it well - see Apple’s Color.

Blackmagic also has a pile of new USB 3.0 hardware devices, including the absolutely gorgeous UltraStudio Pro. Makes me pine for USB 3.0 on the Mac.

    On the production side, we saw new cameras from just about everyone. To start at the high end, the Arri Alexa was absolutely stunning. Perhaps the nicest digital cinema footage I’ve seen. Not only that, but they’ve worked out a usable workflow, recording to ProRes plus RAW. At the price point they’re promising, the world is going to get a lot more difficult for RED.

Sony’s new XDCAM EX gear is another good step forward for that format. Nothing groundbreaking, but another nice progression. I was kind of hoping we’d see 4:2:2 EX gear from them, but I suppose they need to justify the disc-based formats for a while longer.

The Panasonic AG-AF100 is another interesting camera, bringing Micro Four Thirds into video. The only strange thing is the recording side - AVCHD to SD cards. While I’m thrilled to see them using SD instead of P2, it sure would have been nice to have an AVC-Intra option.

    Finally, Canon’s 4:2:2 XF cams are a nice option for the ENG/EFP market. Nothing groundbreaking, aside from the extra color sampling, but it’s a nice step up from what they’ve been doing.

Speaking of Canon, it’s interesting to see the ways that the 5D and 7D have made their way into mainstream filmmaking. At one point, I thought they’d be relegated to the indie community - folks looking for nice DoF on a budget. Instead, they seem to have been adopted by a huge range of productions, from episodic TV to features. While they’re not right for everyone, the price and quality make them an easy choice in many cases.

One of the stars of the show for me was the GoPro, a small waterproof HD camera that ships with a variety of mounts, designed to be used in places where you couldn’t or wouldn’t use a more full-featured camera. No LCD, just a record button and a wide-angle lens. I bought two.

Those are the things that stand out for me. While there was plenty of interesting stuff to be seen, given the current economic conditions at the University, I wasn’t exactly in a shopping mindset. The show definitely felt more optimistic than it did last year, and companies are again pushing out new products. However, attendance was about 20% lower than 2008, and that was definitely noticeable on the show floor.