
Other articles (112)

  • Automatic backup of SPIP channels

    1 April 2010, by

    As part of setting up an open platform, it is important for the hosts to have fairly regular backups available in order to guard against any potential problem.
    To carry out this task, two SPIP plugins are used: Saveauto, which performs a regular backup of the database as a MySQL dump (usable in phpMyAdmin), and mes_fichiers_2, which produces a zip archive of the site's important data (the documents, the elements (...)

  • Farm management

    2 March 2010, by

    The farm as a whole is managed by "super admins".
    Certain settings can be adjusted in order to regulate the needs of the different channels.
    To begin with, it uses the "Gestion de mutualisation" plugin

  • APPENDIX: The plugins used specifically for the farm

    5 March 2010, by

    The central/master site of the farm needs several additional plugins, compared with the channels, in order to work properly: the Gestion de la mutualisation plugin; the inscription3 plugin, to handle registrations and requests to create a mutualisation instance as soon as users sign up; the verifier plugin, which provides a field-validation API (used by inscription3); the champs extras v2 plugin, required by inscription3 (...)

On other sites (5655)

  • Adding A New System To The Game Music Website

    1 August 2012, by Multimedia Mike — General

    At first, I was planning to just make a little website where users could install a Chrome browser extension and play music from old 8-bit NES games. But, like many software projects, the goal sort of ballooned. I created a website where users can easily play old video game music. It doesn’t cover too many systems yet, but I have had individual requests to add just about every system you can think of.

    The craziest part is that I know it’s possible to represent most of the systems. Eventually, it would be great to reach Chipamp parity (a combination plugin for Winamp that packages together plugins for many of these chiptunes). But there is a process to all of this. I have taken to defining a number of phases that are required to get a new system covered.

    Phase 0 informally involves marveling at the obscurity of some of the console systems for which chiptune collections have evolved. WonderSwan? Sharp X68000? PC-88? I may be viewing this through a terribly Ameri-centric lens. I’ve at least heard of the ZX Spectrum and the Amstrad CPC even if I’ve never seen either.

    No matter. The goal is to get all their chiptunes cataloged and playable.

    Phase 1: Finding A Player
    The first step is to find a bit of open source code that can play a particular format. If it’s a library that can handle many formats, like Game Music Emu or Audio Overload SDK, even better (probably). The specific open source license isn’t a big concern for me. I’m almost certain that some of the libraries that SaltyGME currently mixes are somehow incompatible, license-wise. I’ll worry about it when I encounter someone who A) cares, and B) is in a position to do something about it. Historical preservation comes first, and these software libraries aren’t getting any younger (I’m finding some that haven’t been touched in a decade).

    Phase 2: Test Program
    The next phase is to create a basic test bench program that sends a music file into the library, generates a buffer of audio, and shoves it out to the speakers via PulseAudio’s simple API (people like to rip on PulseAudio, but its simple API really lives up to its name and requires pages less boilerplate code to play a few samples than ALSA).
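
    As a rough illustration of the shape such a test bench takes, here is a minimal sketch. It assumes Game Music Emu's C API (gme.h) together with PulseAudio's simple API; it is not the exact program described above, just the same idea: decode a file into 16-bit samples and push them straight to the speakers.

    /* Minimal test bench: music file in, 16-bit samples out, speakers via pa_simple.
       Build (roughly): gcc bench.c -lgme -lpulse-simple -lpulse */
    #include <stdio.h>
    #include <stdlib.h>
    #include <gme/gme.h>
    #include <pulse/simple.h>
    #include <pulse/error.h>

    int main(int argc, char *argv[])
    {
        if (argc < 2) {
            fprintf(stderr, "usage: %s <music-file> [track]\n", argv[0]);
            return 1;
        }

        const int sample_rate = 44100;
        Music_Emu *emu = NULL;
        gme_err_t gme_error = gme_open_file(argv[1], &emu, sample_rate);
        if (gme_error) {
            fprintf(stderr, "gme: %s\n", gme_error);
            return 1;
        }
        gme_start_track(emu, argc > 2 ? atoi(argv[2]) : 0);

        /* 16-bit interleaved stereo, which is also what the web player backend emits */
        pa_sample_spec ss = { .format = PA_SAMPLE_S16NE, .rate = sample_rate, .channels = 2 };
        int pa_error = 0;
        pa_simple *pa = pa_simple_new(NULL, "chiptune-bench", PA_STREAM_PLAYBACK, NULL,
                                      "playback", &ss, NULL, NULL, &pa_error);
        if (!pa) {
            fprintf(stderr, "pulse: %s\n", pa_strerror(pa_error));
            return 1;
        }

        short buf[4096];  /* gme_play counts 16-bit samples, so this is 2048 stereo frames */
        while (!gme_track_ended(emu)) {
            gme_play(emu, 4096, buf);
            pa_simple_write(pa, buf, sizeof(buf), &pa_error);
        }

        pa_simple_drain(pa, &pa_error);
        pa_simple_free(pa);
        gme_delete(emu);
        return 0;
    }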

    Phase 3: Plug Into Web Player
    After successfully creating the test bench and understanding exactly which source files need to be built, the next phase is to hook it up to the main SaltyGME program via the ad-hoc plugin API I developed. This API requires that a player backend can, at the very least, initialize itself based on a buffer of bytes and generate audio samples into an array of 16-bit numbers. The API also provides functions for managing files with multiple tracks and toggling individual voices/channels if the library supports such a feature. Having the test bench application written beforehand usually smooths out this step.
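
    The plugin API itself isn't published anywhere, but its shape follows directly from that description. A hypothetical sketch (the names and signatures here are illustrative, not SaltyGME's actual interface):

    /* Hypothetical backend interface: each player library gets wrapped so it can
       (1) initialize from a byte buffer, (2) fill arrays of 16-bit samples, and
       optionally (3) handle multi-track files and per-voice toggling. */
    #include <stdint.h>
    #include <stddef.h>

    typedef struct player_backend {
        /* Initialize from an in-memory file; returns 0 on success. */
        int  (*init)(const uint8_t *data, size_t size, int sample_rate);
        /* Fill 'count' interleaved 16-bit samples into 'out'. */
        int  (*generate)(int16_t *out, int count);
        /* Multi-track files (NSF, GBS, ...). */
        int  (*track_count)(void);
        int  (*start_track)(int track);
        /* Optional voice/channel toggling, if the underlying library supports it. */
        int  (*voice_count)(void);
        void (*set_voice_enabled)(int voice, int enabled);
        void (*cleanup)(void);
    } player_backend;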

    But really, I’m just getting started.

    Phase 4: Collecting A Song Corpus
    Then there is the matter of staging a collection of songs for a given system. It seems like it would just be a matter of finding a large collection of songs for a given format, downloading them in bulk, and mirroring them. Honestly, that’s the easy part. People who are interested in this stuff have been lovingly curating massive collections of these songs for years (see SNESmusic.org for one of the best examples, and they also host a torrent of all their music for really quick and easy hoarding).

    In my drive to make this game music website more useful for normal people, the goal is to extract as much metadata as possible to make searching better, and to package the data so that it’s as convenient as possible for users. Whenever I seek to add a new format to the collection, this is the phase where I invariably find that I have to fundamentally modify some of the assumptions I originally made in the player.

    First, there were the NES Sound Format (NSF) files, the original format I wanted to play. These are files that have any number of songs packed into a single file. Playback libraries expose APIs to jump to individual tracks. So the player was designed around that. Game Boy GBS files also fall into this category but present a different challenge vis-à-vis metadata, addressed in the next phase.

    Then, there were the SPC files. Each SPC file is its own song and multiple SPC files are commonly bundled as RAR files. Not wanting to deal with RAR, or any format where I interacted with a general compression API to pull a few files out, I created a custom resource format (inspired by so many I have studied and documented) and compressed it with a simpler compression API. I also had to modify some of the player’s assumptions to deal with this archive format. Genesis VGMs, bundled either in .zip or .7z, followed the same model as SPC in RAR.

    Then it was suggested that I attempt to bring SaltyGME closer to feature parity with Chipamp, rather than just being a Chrome browser frontend for Game Music Emu. When I studied the Portable Sound Format (PSF), I realized it didn’t fit into the player model I already had. PSF uses a sort of shared library model for code execution and I developed another resource archive format to cope with it. So that covers quite a few formats.

    One more architecture challenge arose when I started to study one of the prevailing metadata formats, explained in the next phase.

    Phase 5: Metadata
    Finally, for the collections to really be useful, I need to harvest that juicy metadata for search and presentation.

    I have created a series of programs and scripts to scrape metadata out of these music files and store it all in a database that drives the website and search engine. I recognize that it’s no good to have a large corpus of songs with minimal metadata, so while importing bulk quantities of music, the scripts harshly reject songs that have too little metadata.

    Again, challenges abound. One of the biggest challenges I’m facing is the peculiar quasi-freeform metadata format that has emerged around .m3u files, which takes a form similar to:

    #################################################################
    #
    # GRADIUS2
    # (c) KONAMI  by Furukawa Motoaki, IKACHAN
    #
    #################################################################
    

    nemesis2.kss::KSS,62,[Nemesis2] (Opening),2:23,,0
    nemesis2.kss::KSS,61,[Nemesis2] (Start),7,,0
    nemesis2.kss::KSS,43,[Nemesis2] (Air Battle),34,0-
    nemesis2.kss::KSS,44,[Nemesis2] (1st. BGM),51,0-
    [...]

    A lot of file formats (including Game Boy GBS mentioned earlier) store their metadata separately using this format. I have some ideas about tools I can use to help me process this data but I’m pretty sure each one will require some manual intervention.
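
    For illustration, here is a naive parsing sketch for lines like the ones above. It assumes the field layout visible in the sample (file::format, track number, title, length, then loop/fade-style fields) and simply splits on commas, which a title containing a comma would defeat; it is a starting point, not one of the actual scripts.

    /* Naive reader for chiptune-style .m3u track lines (assumed layout:
       file::format,track,title,length,...). Banner/comment lines start with '#'. */
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    int main(int argc, char *argv[])
    {
        if (argc < 2) {
            fprintf(stderr, "usage: %s <playlist.m3u>\n", argv[0]);
            return 1;
        }
        FILE *f = fopen(argv[1], "r");
        if (!f) {
            perror("fopen");
            return 1;
        }

        char line[1024];
        while (fgets(line, sizeof(line), f)) {
            line[strcspn(line, "\r\n")] = '\0';
            if (line[0] == '#' || line[0] == '\0')
                continue;                       /* skip banner/comment lines */

            char *fields[6] = { NULL };
            int n = 0;
            for (char *tok = strtok(line, ","); tok && n < 6; tok = strtok(NULL, ","))
                fields[n++] = tok;
            if (n < 3)
                continue;                       /* not a track entry */

            printf("file=%s track=%d title=%s length=%s\n",
                   fields[0], atoi(fields[1]), fields[2], n > 3 ? fields[3] : "?");
        }
        fclose(f);
        return 0;
    }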

    As alluded to in phase 4, .m3u presents another architectural challenge: notice the second field in the CSV-like .m3u data. That’s a track number. A player can’t expect every track in a bundled chiptune file to be valid, nor to be in any particular order. Thus, I needed to alter the architecture once more to take this into account. However, instead of modifying the SaltyGME player, I simply extended the metadata database to include a playback order which, by default, is the same as the track order but can also accommodate this new issue. This also has the bonus of providing a facility to exclude playback of certain tracks. This comes in handy for many PSF archives which tend to include files that only provide support for other files and aren’t meant to be played on their own.

    Bright Side
    The reward for all of this effort is that the data lands in a proper database in the end. None of it goes back into the chiptune files themselves. This makes further modification easier as all of the data that is indexed and presented on the site comes from the database. Somewhere down the road, I should probably create an API for accessing this metadata.

  • Identification of Frame Type using Xuggle

    1 December 2012, by Samveen

    I'm trying to extract just the I-Frames from a video using the Xuggle API. I took the first example from the Xuggle source from here and, after removing the display code, added the following code snippet to identify the frame type of the current frame:

    //    Decode loop {
             if (picture.isComplete()) {
                 // Identify the picture type of the decoded frame
                 com.xuggle.xuggler.IVideoPicture.PictType type = picture.getPictureType();
                 if (type == com.xuggle.xuggler.IVideoPicture.PictType.I_TYPE)
                     System.out.println("(I-Frame)\n");
                 else if (type == com.xuggle.xuggler.IVideoPicture.PictType.B_TYPE)
                     System.out.println("(B-Frame)\n");
                 else if (type == com.xuggle.xuggler.IVideoPicture.PictType.P_TYPE)
                     System.out.println("(P-Frame)\n");
                 else if (type == com.xuggle.xuggler.IVideoPicture.PictType.S_TYPE)
                     System.out.println("(S-Frame)\n");
                 else if (type == com.xuggle.xuggler.IVideoPicture.PictType.DEFAULT_TYPE)
                     System.out.println("(Default)\n");
                 else
                     System.out.println("(Other)\n");
             }
    //    }

    Using this code, I never get an I-Frame in my test video, which by definition is impossible for a valid video (every video's first frame MUST be an I-Frame), and every frame is identified as

    When I use mplayer to extract the I-Frames from the same video using :

    mplayer video20.mp4 -benchmark -nosound -noaspect -noframedrop -ao null \
           -vo png:z=6 -vf framestep=I

    I correctly get a set of I-Frames.

    The number of frames displayed by my code is identical to that extracted by mplayer without the framestep filter:

    mplayer video20.mp4 -benchmark -nosound -noaspect -noframedrop -ao null \
           -vo png:z=6

    The input video in question is an H264 encoded video.

    My question is: what can I do to correctly get the I-Frames from the video using Xuggle?

    I picked up the method from Java - Keyframe extraction from a video

  • No such file or directory Error with FFMPEG screenshot and CarrierWave custom model method

    9 July 2013, by dodgerogers747

    I am using AWS CORS to upload videos to my site, all of which works as planned.

    I have the following model method which runs as an after_create callback (for speed) to take a screenshot from the video file on AWS. I plan to move this out into a delayed job but I don't think this will solve this particular issue. Please advise if mistaken.

    I use FFMPEG to take a screenshot from the AWS self.file location, then send the file to CarrierWave by saving it to self.screenshot, where it is uploaded to AWS.

    Approx. 50% of the time it errors out with Errno::ENOENT - No such file or directory for the location of the screenshot image.

    How can I rectify my code to remove this error, and how come it only occurs around 50% of the time? If anyone needs more code, just shout.

    video.rb

    after_create :take_screenshot

    mount_uploader :screenshot, ImageUploader

    def take_screenshot
      location = "#{Rails.root}/public/uploads/tmp/screenshots/#{unique}_#{File.basename(file)}.jpg"
      system `ffmpeg #{log_level} -i #{self.file} -ss 00:00:0#{time_frame} -vframes 1 #{location}`
      logger.debug "Trying to take screenshot from #{self.file}"
      # pass the actual file to CarrierWave to handle the image upload
      self.screenshot = File.open(location)
      self.save
      logger.debug "Deleting tmp file: #{location}: #{File.delete(location)}" if self.screenshot.present?
    end

    def unique
      (0..6).map{(65+rand(26)).chr}.join
    end

    def log_level
      "-loglevel panic"
    end

    def time_frame
      rand(0..3)
    end

    Stack trace:

    Started POST "/videos" for 127.0.0.1 at 2013-07-10 03:58:49 +0800
    Processing by VideosController#create as JS
     Parameters: {"utf8"=>"✓", "authenticity_token"=>"6M1Ia+Ag2E3HVKH2PO/p7jewxSpMPdWeVHGA933Bzjw=", "video"=>{"file"=>"http://bucketname.s3.amazonaws.com/uploads/video/file/671a87fb-91de-4eaf-a38a-1b25c51798e5/Good_7iron.m4v"}}
     User Load (0.3ms)  SELECT `users`.* FROM `users` WHERE `users`.`id` = 9 LIMIT 1
      (0.1ms)  BEGIN
     SQL (0.2ms)  INSERT INTO `videos` (`created_at`, `file`, `question_id`, `screenshot`, `updated_at`, `user_id`) VALUES ('2013-07-09 19:58:49', 'http://bucketname.s3.amazonaws.com/uploads/video/file/671a87fb-91de-4eaf-a38a-1b25c51798e5/Good_7iron.m4v', NULL, NULL, '2013-07-09 19:58:49', 9)
    Trying to take screenshot from http://bucketname.s3.amazonaws.com/uploads/video/file/671a87fb-91de-4eaf-a38a-1b25c51798e5/Good_7iron.m4v
      (0.8ms)  ROLLBACK
    Completed 500 Internal Server Error in 3550ms

    Errno::ENOENT - No such file or directory - /Users/me/rails/project/public/uploads/tmp/screenshots/WCACLIC_Good_7iron.m4v.jpg:
     app/models/video.rb:24:in `initialize'
     app/models/video.rb:24:in `open'
     app/models/video.rb:24:in `take_screenshot'