
Other articles (52)

  • Diogene: creating specific content-editing form masks

    26 October 2010

    Diogene is one of the SPIP plugins enabled by default (as an extension) when MediaSPIP is initialized.
    What this plugin is for
    Creating form masks
    The Diogene plugin lets you create sector-specific form masks for the three SPIP objects: articles; sections (rubriques); sites.
    It can thus define, for a given sector, one form mask per object, adding or removing fields so as to make the form (...)

  • MediaSPIP version 0.1 Beta

    16 April 2011

    MediaSPIP 0.1 beta is the first version of MediaSPIP declared "usable".
    The zip file provided here contains only the MediaSPIP sources, in standalone form.
    To get a working installation, all of the software dependencies must be installed manually on the server.
    If you want to use this archive for a farm-mode installation, you will also need to make further changes (...)

  • Using and configuring the script

    19 January 2011

    Information specific to the Debian distribution
    If you use this distribution, you will need to enable the "debian-multimedia" repositories, as explained here:
    Since version 0.3.1 of the script, the repository can be enabled automatically in response to a prompt.
    Getting the script
    The installation script can be obtained in two different ways.
    Via svn, using the following command to fetch the up-to-date source code:
    svn co (...)

On other sites (3524)

  • Why iFrame is a good idea

    15 October 2009

    I’ve seen some hilariously uninformed posts about the new Apple iFrame specification. Let me take a minute to explain what it actually is.

    First off, as opposed to what the fellow in the Washington Post writes, it’s not really a new format. iFrame is just a way of using formats that we already know and love. As the name suggests, iFrame is just an i-frame-only H.264 specification, using AAC audio. An intraframe version of H.264, eh? Sounds a lot like AVC-Intra, right? Exactly. And for exactly the same reason - edit-ability. Whereas AVC-Intra targets the high end, iFrame targets the low end.

    Even when used in intraframe mode, H.264 has some huge advantages over the older intraframe codecs like DV or DVCProHD. For example, significantly better entropy coding, adaptive quantization, and potentially variable bitrates. There are many others. Essentially, it’s what happens when you take DV and spend another 10 years making it better. That’s why Panasonic’s AVC-Intra cameras can do DVCProHD-quality video at half (or less) the bitrate.

    Why does iFrame matter for editing? Anyone who’s tried to edit video from one of the modern H.264 cameras without first transcoding to an intraframe format has experienced the huge CPU demands and sluggish performance. Behind the scenes it’s even worse. Because interframe H.264 can have very long GOPs, displaying any single frame can rely on dozens or even hundreds of other frames. Because of the complexity of H.264, building these frames is very costly, and it’s a variable cost: decoding the first frame in a GOP is relatively trivial, while decoding a middle B-frame can be hugely expensive.

    Programs like iMovie mask that from the user in some cases, but at the expense of high overhead. And anyone who’s imported AVC-HD video into Final Cut Pro or iMovie knows that there’s a long "importing" step - behind the scenes, the application is transcoding your video into an intraframe format, like Apple Intermediate or ProRes. It sort of defeats one of the main purposes of a file-based workflow.

    You’ve also probably noticed the amount of time it takes to export a video in an interframe format. Anyone who’s edited HDV in Final Cut Pro has experienced this. With DV, doing an "export to QuickTime" is simply a matter of Final Cut Pro rewriting all of the data to disk - it’s essentially a file copy. With HDV, Final Cut Pro has to do a complete re-encode of the whole timeline to fit everything into the new GOP structure. Not only is this time-consuming, but it’s essentially a generation loss.

    iFrame solves these issues by giving you an intraframe codec, with modern efficiency, which can be decoded by any of the H.264 decoders that we already know and love.
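
    To make "intraframe-only H.264 plus AAC" concrete, here is a minimal sketch of producing a comparable all-intra file with ffmpeg, driven from Ruby purely for illustration (the file names and encoder flags are my own assumptions; this approximates the idea, not the actual Apple iFrame profile):

    # Hypothetical illustration: transcode a clip to all-intra H.264 with AAC audio.
    # "-g 1" sets the GOP length to 1, so every frame is an I-frame and decodes on its own.
    input  = "clip.mov"        # assumed source file
    output = "clip_intra.mp4"  # assumed destination
    system("ffmpeg -i #{input} -c:v libx264 -g 1 -keyint_min 1 -crf 18 -c:a aac -b:a 160k #{output}")

    With a GOP length of one, a decoder never has to walk a chain of reference frames to display a given frame, which is exactly the edit-time property described above.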

    Having this as an optional setting on cameras is a huge step forward for folks interested in editing video. Hopefully some of the manufacturers of AVC-HD cameras will adopt this format as well. I’ll gladly trade a little resolution for instant edit-ability.

  • Convert video with paperclip and ffmpeg in Ruby on Rails

    16 June 2014, by Atu

    I want to convert my uploaded video with ffmpeg, but I ran into a few errors. I use paperclip and ffmpeg, but nothing happens. The structure of my application is that one post has_many videos.

    This is my video model:

    class Video < ActiveRecord::Base
      belongs_to :event
      validates_attachment_presence :source
      has_attached_file :source

      # Run the conversion and rename the attachment right after the record is created
      after_create :convert_in_flv, :set_new_filename

      def convert_in_flv
        flv = File.join(File.dirname(source.path), "#{id}.flv")
        system("ffmpeg -i #{source.path} -ar 22050 -ab 32 -s 480x360 -vcodec flv -r 25 -qscale 8 -f flv -y #{flv}")
      end

      def set_new_filename
        update_attribute(:source_file_name, "#{id}.flv")
      end
    end

    And this is my videos controller:

    class VideosController < ApplicationController
      def create
        @event = Event.find(params[:event_id])
        @video = @event.videos.create(params[:video])
        redirect_to event_path(@event)
      end

      def destroy
        @event = Event.find(params[:event_id])
        @video = @event.videos.find(params[:id])
        @video.destroy
        redirect_to event_path(@event)
      end
    end

    The video uploads successfully but is not converted. Does anyone have a solution?
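
    As a first debugging step, here is a sketch of the callback that surfaces ffmpeg's output instead of discarding the system() return value (my own suggestion, not part of the question: the Shellwords escaping, the 32k audio bitrate and the logging are assumptions):

    require "shellwords"

    # Hypothetical variant of convert_in_flv that captures ffmpeg's output and logs failures,
    # so a missing codec or a bad flag shows up in the Rails log instead of failing silently.
    def convert_in_flv
      flv = File.join(File.dirname(source.path), "#{id}.flv")
      cmd = "ffmpeg -i #{Shellwords.escape(source.path)} -ar 22050 -ab 32k -s 480x360 " \
            "-vcodec flv -r 25 -qscale 8 -f flv -y #{Shellwords.escape(flv)} 2>&1"
      output = `#{cmd}`                              # backticks capture combined stdout/stderr
      Rails.logger.error("ffmpeg failed: #{output}") unless $?.success?
    end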

  • Video Processing via Bluetooth

    16 July 2012, by kerim yucel

    The application I am currently developing processes each frame using native code, and it should record the video as well. I tried the SDK for this purpose, but certain restrictions didn't allow me to do so, so I switched to the NDK for the video recording piece.

    Apparently, my algorithm is seriously CPU-intensive, up to 70% in the worst case. Before I actually start working on a video recorder, I wanted to try the following approach.

    I will process the preview frames on one Android phone and send them to another phone (running the same application, same model) for recording. My questions are:

    1. Should I try WiFi instead of Bluetooth? I am developing the application for API 8, so I don't have Wi-Fi Direct; I would therefore have to do some socket programming, which could complicate things a bit for me, since Bluetooth can easily be set up using the SDK.

    2. Will I be able to record the frames as a video at the receiving end? I will receive each frame with certain metadata embedded in it and should record them using the other phone. I doubt I will be able to do it using the SDK, so the NDK along with ffmpeg seems to be the best choice? Any suggestions related to this question will be more than welcome.

    3. Here comes the best part. I am recording the video at the lowest resolution, which, after compression, takes no more than 14 MB for a 10-minute-long video. I have to get at the raw frames to send them to the other end, then encode and compress them there. Any ideas about possible flooding of Bluetooth/Wi-Fi because of large raw frames?
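
    For a rough sense of scale, a back-of-the-envelope estimate (the frame size and frame rate below are my own assumptions, not figures from the question):

    # Back-of-the-envelope: assume 320x240 NV21 preview frames at 15 fps (assumed values).
    bytes_per_frame = 320 * 240 * 3 / 2          # NV21 is 1.5 bytes per pixel => 115,200 bytes
    bits_per_second = bytes_per_frame * 15 * 8   # => 13,824,000, roughly 13.8 Mbit/s of raw data
    # Classic Bluetooth (2.x EDR) sustains only about 2 Mbit/s in practice, so raw frames would
    # saturate it; 802.11g Wi-Fi has more headroom, but compressing before sending would still help.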

    Any other approaches and answers will be much appreciated. Thanks.