
Other articles (50)

  • Submit bugs and patches

    13 April 2011

    Unfortunately, software is never perfect.
    If you think you have found a bug, report it using our ticket system. Please help us fix it by providing the following information:
    • the browser you are using, including its exact version
    • as precise an explanation of the problem as possible
    • if possible, the steps that lead to the problem
    • a link to the site / page in question
    If you think you have solved the bug, fill in a ticket and attach a corrective patch to it.
    You may also (...)

  • Authorizations overridden by plugins

    27 April 2010, by

    MediaSPIP core
    autoriser_auteur_modifier() so that visitors are able to modify their information on the authors page

  • Sites built with MediaSPIP

    2 May 2011, by

    This page presents some of the sites running MediaSPIP.
    You can of course add yours using the form at the bottom of the page.

On other sites (12210)

  • lavfi: adding dejudder filter to remove judder produced by partially telecined material.

    14 February 2014, by Nicholas Robbins

    Signed-off-by: Nicholas Robbins <nickrobbins@yahoo.com>
    Signed-off-by: Michael Niedermayer <michaelni@gmx.at>

    • [DH] Changelog
    • [DH] doc/filters.texi
    • [DH] libavfilter/Makefile
    • [DH] libavfilter/allfilters.c
    • [DH] libavfilter/vf_dejudder.c
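
    A minimal usage sketch of the new filter (input and output names are placeholders; per the filter documentation, the cycle option defaults to 4, which suits material telecined from 24 to 30 fps):

    $ ffmpeg -i judder.mp4 -vf dejudder=cycle=4 smooth.mp4
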
  • Cloaked Archive Wiki

    16 May 2011, by Multimedia Mike — General

    Google’s Chrome browser has made me phenomenally lazy. I don’t even attempt to type proper, complete URLs into the address bar anymore. I just type something vaguely related to the address and let the search engine take over. I saw something weird when I used this method to visit Archive Team’s site:

    [screenshot: Google search results for the Archive Team site]

    There’s greater detail when you elect to view more results from the site:

    [screenshot: expanded Google results for archiveteam.org, with out-of-place pharmaceutical page titles]

    As the administrator of a MediaWiki installation like the one that archiveteam.org runs on, I was a little worried that they might have a spam problem. However, clicking through to any of those out-of-place pages does not indicate anything related to pharmaceuticals. Viewing source also reveals nothing amiss.

    I quickly deduced that this is a textbook example of website cloaking. This is when a website reports different content to a search engine than it reports to normal web browsers (humans, presumably). General pseudocode:

    C:
    if (web_request.user_agent_string == CRAWLER_USER_AGENT)
        return cloaked_data;
    else
        return real_data;

    You can verify this for yourself using the wget command line utility:

    $ wget --quiet --user-agent="Mozilla/5.0" \
      http://www.archiveteam.org/index.php?title=Geocities -O - | grep \<title\>
    <title>GeoCities - Archiveteam</title>

    $ wget --quiet --user-agent="Googlebot/2.1" \
      http://www.archiveteam.org/index.php?title=Geocities -O - | grep \<title\>
    <title>Cheap xanax | Online Drug Store, Big Discounts</title>

    I guess the little web prank worked because the phaux-pharma stuff got indexed. It makes me wonder if there’s a MediaWiki plugin that does this automatically.

    For extra fun, here’s a site called the CloakingDetector which purports to be able to detect whether a page employs cloaking. This is just one humble observer’s opinion, but I don’t think the site works too well:

    [screenshot: CloakingDetector output]

  • Videos written with moviepy on Amazon AWS S3 are empty

    10 avril 2019, par cellistigs

    I am working on processing a dataset of large videos (100 GB) for a collaborative project. To make it easier to share data and results, I am keeping all videos remotely in an Amazon S3 bucket, and processing them by mounting the bucket on an EC2 instance.

    One of the processing steps I am trying to do involves cropping the videos and rewriting them into smaller segments. I am doing this with moviepy, splitting the video with the subclip method and calling:

    subclip.write_videofile("PathtoS3Bucket" + VideoName.split('.')[0] + 'part' + str(segment) + '.mp4', codec='mpeg4', bitrate='1500k', threads=2)

    I found that when the videos are too large (parameters set as above), calls to this function will sometimes generate empty files in my S3 bucket (10% of the time). Does anyone have insight into features of moviepy/ffmpeg/S3 that would lead to this?
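
    A workaround sketch (not a verified fix) that sidesteps writing through the S3 mount entirely: render each segment to local EC2 storage first, check that it is non-empty, and only then upload it explicitly with boto3. The bucket name and paths below are hypothetical; subclip, VideoName, and segment are the variables from the question, and boto3 credentials are assumed to be configured.

    import os
    import boto3

    # Write the segment to local instance storage instead of the mounted bucket.
    local_path = "/tmp/" + VideoName.split('.')[0] + 'part' + str(segment) + '.mp4'
    subclip.write_videofile(local_path, codec='mpeg4', bitrate='1500k', threads=2)

    # Upload explicitly, only if the rendered file is non-empty.
    if os.path.getsize(local_path) > 0:
        boto3.client('s3').upload_file(local_path, 'my-bucket', os.path.basename(local_path))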