
Other articles (41)

  • Changing your graphic theme

    22 February 2011

    The graphic theme does not affect the actual layout of the elements on the page; it only changes their appearance.
    The placement can indeed be modified, but this modification is purely visual and does not change the semantic representation of the page.
    Changing the graphic theme in use
    To change the graphic theme in use, the zen-garden plugin must be enabled on the site.
    Then simply go to the configuration area of the (...)

  • Initialising MediaSPIP (preconfiguration)

    20 February 2010

    When MediaSPIP is installed, it comes preconfigured for the most common uses.
    This preconfiguration is performed by a plugin that is enabled by default and cannot be disabled, called MediaSPIP Init.
    This plugin properly preconfigures each MediaSPIP instance. It must therefore be placed in the plugins-dist/ folder of the site or farm so that it is installed by default, before the site can be used.
    First of all, it enables or disables the SPIP options that (...)

  • (De)Activating features (plugins)

    18 February 2011

    To manage the addition and removal of extra features (plugins), MediaSPIP relies on SVP as of version 0.2.
    SVP makes it easy to enable plugins from the MediaSPIP configuration area.
    To access it, simply go to the configuration area and then to the "Plugin management" page.
    MediaSPIP ships by default with the full set of so-called "compatible" plugins; they have been tested and integrated so as to work perfectly with each (...)

On other sites (7219)

  • Vedanti and Max Sound vs. Google

    14 August 2014, by Multimedia Mike — Legal/Ethical

    Vedanti Systems Limited (VSL) and Max Sound Corporation filed a lawsuit against Google recently. Ordinarily, I wouldn’t care about corporate legal battles. However, this one interests me because it’s multimedia-related. I’m curious to know how coding technology patents might hold up in a real court case.

    Here’s the most entertaining complaint in the lawsuit:

    Despite Google’s well-publicized Code of Conduct — “Don’t be Evil” — which it explains is “about doing the right thing,” “following the law,” and “acting honorably,” Google, in fact, has an established pattern of conduct which is the exact opposite of its claimed piety.

    I wonder if this is the first known case in which Google has been sued over its long-obsoleted “Don’t be evil” mantra?

    Researching The Plaintiffs

    I think I made a mistake by assuming this lawsuit might have merit. My first order of business was to see what the plaintiff organizations have produced. I have a strong feeling that these might be run-of-the-mill patent trolls.

    VSL currently has a blank web page. Further, the Wayback Machine only has pages reaching back to 2011. The earliest page lists these claims against a plain black background (I’ve highlighted some of the more boisterous claims and the passages that make it appear that Vedanti doesn’t actually produce anything but is strictly an IP organization):

    The inventions key :
    The patent and software reduced any data content, without compressing, up to a 97% total reduction of the data which also produces a lossless result. This physics based invention is often called the Holy Grail.

    Vedanti Systems Intellectual Property
    Our strategic IP portfolio is granted in all of the world’s largest technology development and use countries. A major value indemnification of our licensee products is the early date of invention filing and subsequent Issue. Vedanti IP has an intrinsic 20 year patent protection and valuation in royalties and licensing. The original data transmission art has no prior art against it.

    Vedanti Systems invented among other firsts, The Slice and Partitioning of Macroblocks within a RGB Tri level region in a frame to select or not, the pixel.

    Vedanti Systems invention is used in nearly every wireless chipset and handset in the world

    Our original pixel selection system revolutionized wireless handset communications. An example of this system “Slice” and “Macroblock Partitioning” is used throughout Satellite channel expansion, Wireless partitioning, Telecom – Video Conferencing, Surveillance Cameras, and 2010 developing Media applications.

    Vedanti Systems is a Semiconductor based software, applications, and IP Continuations Intellectual Property company.

    Let’s move on to the other plaintiff, Max Sound. They have a significantly more substantive website. They also have an Android app named Spins HD Audio, which appears to be little more than a music player, based on the screenshots.

    Max Sound also has a stock ticker symbol: MAXD. Something clicked into place when I looked up their ticker symbol: while worth only a few pennies, it was worth a few more pennies after this lawsuit was announced, which might be one of the motivations behind the lawsuit.

    Here’s a trick I learned when I was looking for a new tech job last year : When I first look at a company’s website and am trying to figure out what they really do, I head straight to their jobs/careers page. A lot of corporate websites have way too much blathering corporatese that can be tough to cut through. But when I see what mix of talent and specific skills they are hoping to hire, that gives me a much better portrait of what the company does.

    The reason I bring this up is that this tech company doesn’t seem to have a jobs/careers page.

    The Lawsuit
    The core complaint centers on Patent 7974339: Optimized data transmission system and method. It was filed in July 2004 (or possibly as early as January 2002), issued in July 2011, and assigned to (purchased by?) Vedanti in May 2012. The lawsuit alleges that nearly everything Google has ever produced (or, more accurately, purchased) leverages the patented technology.

    The patent itself has 5 drawings. If you’ve ever seen a multimedia codec patent, or any whitepaper on a multimedia codec, you’ve seen these graphs before. E.g., “Raw pixels come in here -> some analysis happens here -> more analysis happens over here -> entropy coding -> final bitstream”. The text of a patent document isn’t meant to be particularly useful. I’ve tried to understand this stuff before and it never goes well. Skimming the text, I just see a blur of the words data, transmission, pixel, and matrix.

    So I read the complaint to try to figure out what this is all about. To summarize the storyline as narrated by the lawsuit, some inventors were unhappy with the state of video compression in 2001 and endeavored to create something better. So they did, and called it the VSL codec. This codec is so far undocumented on the MultimediaWiki, so it probably has yet to be seen “in the wild”. Good luck finding hard technical data on it now since searches for “VSL codec” are overwhelmed by articles about this lawsuit. Also, the original codec probably wasn’t called VSL because VSL is apparently an IP organization formed much later.

    Then, the protagonists of the lawsuit patented the codec. Then, years later, Google wanted to purchase a video codec that they could open source and use to supplant H.264.

    The complaint goes on to allege that in 2010, Google specifically contacted VSL to possibly license or acquire this mysterious VSL technology. Google was allegedly allowed to study the technology, eventually decided not to continue discussions, and shipped back the proprietary materials.

    Here’s where things get weird. When Google shipped back the materials, they allegedly shipped back a bunch of Post-It notes. The notes are alleged to contain a ton of incriminating evidence. The lawsuit claims that the notes contained such tidbits as:

    • Google was concerned that its infringement could be considered “recklessness” (the standard applicable to willful infringement);
    • Google personnel should “try” to destroy incriminating emails;
    • Google should consider a “design around” because it was facing a “risk of litigation.”

    Actually, given Google’s acquisition of On2, I can totally believe that last one (On2’s codecs have famously contained a lot of weirdness which is commonly suspected to be attributable to designing around known patents).

    Anyway, a lot of this case seems to hinge on the authenticity of these Post-It notes:

    “65. The Post-It notes are unequivocal evidence of Google’s knowledge of the ’339 Patent and infringement by Defendants”

    I wish I could find a stock photo of a stack of Post-It notes in an evidence bag.

    I’ve worked at big technology companies. Big tech companies these days are very diligent about indoctrinating employees about IP liability issues. The reason this Post-It situation strikes me as odd is because the alleged contents of the notes basically outline everything the corporate lawyers tell you NOT to do.

    Analysis
    I’m trying to determine what specific algorithms and coding techniques are actually at issue. I guess I was expecting to see a specific claim that, “Our patent outlines this specific coding technique and here is unequivocal proof that Google A) uses the same technique, and B) specifically did so after looking at our patent.” I didn’t find that (well, a bit of part B, cf. the Post-It note debacle), but maybe that’s not how these patent lawsuits operate. I’ve never kept up with one before.

    Maybe it’s just a patent troll. Maybe it’s for the stock bump. I’m expecting to see pump-n-dump stock spam featuring the stock symbol MAXD anytime now.

    I’ve never been interested in following a lawsuit carefully before. I suddenly find myself wondering if I can subscribe to an RSS feed for this case. That is probably too much to hope for. But I found this item through Pando and maybe they’ll stay on top of it.

  • Grand Unified Theory of Compact Disc

    1 February 2013, by Multimedia Mike — General

    This is something I started writing about a decade ago (and I almost certainly have some of it wrong), back when compact discs still had a fair amount of relevance. Back around 2002, after a few years investigating multimedia technology, I took an interest in compact discs of all sorts. Even though there may seem to be a wide range of CD types, I generally found that they’re all fundamentally the same. I thought I would finally publish something, incomplete though it may be.

    Physical Perspective
    There are a lot of ways to look at a compact disc. First, there’s the physical format, where a laser detects where pits/grooves have disturbed the smooth surface (a.k.a. lands). A lot of technical descriptions claim that these lands and pits on a CD correspond to ones and zeros. That’s not actually true, but you have to decide what level of abstraction you care about, and that abstraction is good enough if you only care about the discs from a software perspective.

    Grand Unified Theory (Software Perspective)
    Looking at a disc from a software perspective, I have generally found it useful to view a CD as a combination of 2 main components:

    • table of contents (TOC)
    • a long string of sectors, each of which is 2352 bytes long

    I like to believe that’s pretty much all there is to it. All of the information on a CD is stored as a string of sectors that might be chopped up into a series of anywhere from 1 to 99 individual tracks. The exact sector locations where these individual tracks begin are defined in the TOC.
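
    To make that model concrete, here is a minimal C sketch of the same idea. This is purely my own illustration; the type and field names are invented, not taken from any CD specification:

    #include <stdint.h>

    #define CD_SECTOR_SIZE 2352   /* raw sector size shared by every CD variant */
    #define CD_MAX_TRACKS  99     /* track numbers run from 1 to 99 */

    /* One TOC entry: where a track starts and whether it holds audio or data. */
    struct toc_entry {
        uint8_t  track_number;    /* 1..99 */
        uint32_t start_sector;    /* absolute sector index where the track begins */
        int      is_data;         /* 0 = audio track, 1 = data track */
    };

    /* The whole disc: a TOC plus one long run of 2352-byte sectors. */
    struct compact_disc {
        int              track_count;
        struct toc_entry toc[CD_MAX_TRACKS];
        uint32_t         total_sectors;
        /* sector i occupies bytes [i * CD_SECTOR_SIZE, (i + 1) * CD_SECTOR_SIZE) */
    };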

    Audio CDs (CD-DA / Red Book)
    The initial purpose for the compact disc was to store digital audio. The strange sector size of 2352 bytes is an artifact of this original charter. “CD quality audio”, as any multimedia nerd knows, is formally defined as stereo PCM samples that are each 16 bits wide and played at a frequency of 44100 Hz.

    (44100 audio frames / 1 second) * (2 samples / audio frame) * 
      (16 bits / 1 sample) * (1 byte / 8 bits) = 176,400 bytes / second
    (176,400 bytes / 1 second) / (2352 bytes / 1 sector) = 75 sectors / second
    

    75 is the number of sectors required to store a single second of CD-quality audio. A single sector stores 1/75th of a second, or a ‘frame’ of audio (though I think ‘frame’ gets tossed around at all levels when describing CD formats).
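
    Since positions on a CD are traditionally written as minute:second:frame (MSF) addresses, that 75-frames-per-second figure also drives the usual address arithmetic. Here is a small sketch of the conventional conversion, where the 150-frame offset accounts for the standard 2-second pregap before the first track (the constant and function names are my own):

    #include <stdint.h>

    #define FRAMES_PER_SECOND 75    /* sectors ("frames") per second of CD-DA audio */
    #define PREGAP_FRAMES     150   /* standard 2-second pregap before track 1 */

    /* Convert a minute:second:frame address to a logical block address (LBA). */
    static uint32_t msf_to_lba(uint8_t minute, uint8_t second, uint8_t frame)
    {
        return (minute * 60u + second) * FRAMES_PER_SECOND + frame - PREGAP_FRAMES;
    }

    /* Decoded CD-DA bytes carried by one sector: 176,400 / 75 = 2352. */
    enum { CDDA_BYTES_PER_SECTOR = 176400 / FRAMES_PER_SECOND };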

    The term “red book” is thrown around in relation to audio CDs. There is a series of rainbow books that define various optical disc standards and the red book describes audio CDs.

    Basic Data CD-ROMs (Mode 1 / Yellow Book)
    Somewhere along the line, someone decided that general digital information could be stored on these discs. Hence, the CD-ROM was born. The standard model above still applies: a TOC and a string of 2352-byte sectors. However, it’s generally only useful to have a single track on a CD-ROM. Thus, the TOC only lists a single track. That single track can easily span the entire disc (something that would be unusual for a typical audio CD).

    While the model is mostly the same, the most notable difference between an audio CD and a plain CD-ROM is that, while each sector is 2352 bytes long, only 2048 bytes are used to store the actual data payload. The remaining bytes are used for synchronization and additional error detection/correction.

    At least, the foregoing is true for mode 1 / form 1 CD-ROMs (which are the most common). “Mode 1” CD-ROMs are defined by a publication called the yellow book. There is also mode 1 / form 2. This forgoes the additional error detection and correction afforded by form 1 and dedicates 2336 of the 2352 sector bytes to the data payload.
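
    For a rough picture of where those 2352 bytes go in a mode 1 / form 1 sector, here is a sketch of the commonly documented layout; the struct and field names are mine, not from the yellow book itself:

    #include <stdint.h>

    /* Byte layout of one 2352-byte mode 1 (form 1) CD-ROM sector. */
    struct mode1_sector {
        uint8_t sync[12];      /* fixed synchronization pattern */
        uint8_t header[4];     /* sector address (MSF) plus mode byte */
        uint8_t data[2048];    /* the payload a filesystem actually sees */
        uint8_t edc[4];        /* error detection code */
        uint8_t reserved[8];
        uint8_t ecc[276];      /* error correction (P and Q parity) */
    };                         /* 12 + 4 + 2048 + 4 + 8 + 276 = 2352 bytes */

    _Static_assert(sizeof(struct mode1_sector) == 2352, "sector layout adds up");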

    CD-ROM XA (Mode 2 / Green Book)
    From a software perspective, these are similar to mode 1 CD-ROMs. There are also 2 forms here. The first form gives a 2048-byte data payload while the second form yields a 2324-byte data payload.

    Video CD (VCD / White Book)
    These are CD-ROM XA discs that carry MPEG-1 video and audio data.

    Photo CD (Beige Book)
    This is something I have never personally dealt with. But it’s supposed to conform to the CD-ROM XA standard and probably fits into my model. It seems to date back to early in the CD-ROM era when CDs were particularly cost prohibitive.

    Multisession CDs (Blue Book)
    Okay, I admit that this confuses me a bit. Multisession discs allow a user to burn multiple sessions to a single recordable disc. I.e., burn a lump of data, then burn another lump at a later time, and the final result will look like all the lumps were recorded as the same big lump. I remember this being incredibly useful and cost effective back when recordable CDs cost around US$10 each (vs. being able to buy a spindle of 100 CD-Rs for US$10 or less now). Studying the cdrom.h file for the Linux OS, I found an ioctl named CDROMMULTISESSION that returns the sector address of the start of the last session. If I were to hypothesize about how to make this fit into my model, I might guess that the TOC has some hint that the disc was recorded in multisession mode (which needs to be decided up front) and the CDROMMULTISESSION call is made to find the last session. Or it could be that a disc read initialization operation always leads off with the CDROMMULTISESSION query in order to determine this.
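
    For reference, here is a minimal sketch of how that query might look from user space on Linux, using the CDROMMULTISESSION ioctl declared in linux/cdrom.h. The /dev/cdrom path is just an assumption, and error handling is kept to a minimum:

    #include <stdio.h>
    #include <fcntl.h>
    #include <unistd.h>
    #include <sys/ioctl.h>
    #include <linux/cdrom.h>

    int main(void)
    {
        /* Assumed device node; substitute the path of your CD drive. */
        int fd = open("/dev/cdrom", O_RDONLY | O_NONBLOCK);
        if (fd < 0) {
            perror("open");
            return 1;
        }

        struct cdrom_multisession ms = { .addr_format = CDROM_LBA };
        if (ioctl(fd, CDROMMULTISESSION, &ms) == 0) {
            /* ms.addr.lba = first sector of the last session; xa_flag marks XA. */
            printf("last session starts at LBA %d (xa=%d)\n", ms.addr.lba, ms.xa_flag);
        } else {
            perror("CDROMMULTISESSION");
        }

        close(fd);
        return 0;
    }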

    I suppose I could figure out how to create a multisession disc with modern software, or possibly dig up a multisession disc from 15+ years ago, and then figure out how it should be read.

    CD-i
    This type puzzles me as well. I do have some CD-i discs and I thought that I could read them just fine (the last time I looked, which was many years ago). But my research for this blog post has me thinking that I might not have been seeing the entire picture when I first studied my CD-i samples. I was able to see some of the data, but sources indicate that only proper CD-i hardware is able to see all of the data on the disc (apparently, the TOC doesn’t show all of the sectors on the disc).

    Hybrid CDs (Data + Audio)
    At some point, it became a notable selling point for an audio CD to have a data track with bonus features. Even more common (particularly in the early era of CD-ROMs) were computer and console games that used the first track of a disc for all the game code and assets and the remaining tracks for beautifully rendered game audio that could also be enjoyed outside the game. Same model: the TOC points to the various tracks and also notes which ones are data and which are audio.

    There seem to be 2 distinct things described above. One type is the mixed mode CD, which generally has the data in the first track and the audio in tracks 2..n. Then there is the enhanced CD, which apparently used multisession recording and put the data at the end. I think the reasoning for this is that most audio CD player hardware would only read tracks from the first session and would have no way to see the data track. This was a positive thing. By contrast, when placing a mixed-mode CD into an audio player, the data track would be rendered as nonsense noise.

    Subchannels
    There’s at least one small detail that my model ignores : subchannels. CDs can encode bits of data in subchannels in sectors. This is used for things like CD-Text and CD-G. I may need to revisit this.

    In Summary
    There’s still a lot of ground to cover, like how those sectors might be formatted to show something useful (e.g., filesystems), and how the model applies to other types of optical discs. Sounds like something for another post.

  • Permission denied when trying to call ffmpeg on heroku

    1 December 2013, by chuck w

    I have a rails 3.1 web app on Heroku (cedar) that allows users to post stories with photo and video attachments using the paperclip gem (3.5.1). Uploads are stored on s3. This has been working 100% for many months.

    I have recently added video transcoding and thumbnailing to my dev machine with the paperclip-ffmpeg gem (1.0.1) and ffmpeg running locally. This is also working.

    I've followed these instructions to build and install ffmpeg on heroku, and I've altered my Heroku app's path so that it reads as follows:

    PATH:   bin:vendor/ffmpeg/bin:vendor/bundle/ruby/1.9.1/bin:/usr/local/bin:/usr/bin:/bin

    and my app's LD_LIBRARY_PATH so that it reads as follows:

    LD_LIBRARY_PATH:             vendor/ffmpeg/lib:/usr/local/lib

    Finally, I've vendored ffmpeg into my app with the following structure:

    [screenshot: vendored ffmpeg directory structure]

    When I commit and push these changes to Heroku and POST a story that has a video, I get the following output in the Heroku logs:

    2013-11-30T22:51:46.711010+00:00 app[web.1]: Started POST "/stories" for 71.80.218.93 at        
    2013-11-30 22:51:46 +0000
    2013-11-30T22:51:47.023119+00:00 app[web.1]: Cocaine::ExitStatusError (Command 'ffprobe
    "/tmp/sideways video20131130-2-14w1q4j.mp4" 2>&1' returned 126. Expected 0
    2013-11-30T22:51:47.023119+00:00 app[web.1]: Here is the command output:
    2013-11-30T22:51:47.023119+00:00 app[web.1]:
    2013-11-30T22:51:47.023119+00:00 app[web.1]:   app/controllers/stories_controller.rb:11:in    
    `create'
    2013-11-30T22:51:47.023119+00:00 app[web.1]:
    2013-11-30T22:51:47.023119+00:00 app[web.1]:
    2013-11-30T22:51:47.019228+00:00 heroku[router]: at=info method=POST path=/stories    
    host=myapp.herokuapp.com fwd="71.80.218.93" dyno=web.1 connect=1ms    
    service=3130ms status=500 bytes=754
    2013-11-30T22:51:47.023119+00:00 app[web.1]:
    2013-11-30T22:51:47.023600+00:00 app[web.1]: cache: [POST /stories] invalidate, pass
    2013-11-30T22:51:47.023119+00:00 app[web.1]: sh: ffprobe: Permission denied
    2013-11-30T22:51:47.023119+00:00 app[web.1]: ):

    Also, running the console command heroku run "ffmpeg -version" -a myapp
    returns:

    Running ffmpeg -version attached to terminal... up, run.6591
    bash: vendor/ffmpeg/bin/ffmpeg: Permission denied

    Here is my Post model paperclip setup:

    class Post < ActiveRecord::Base

     attr_accessible :contents, :photo, :video

     belongs_to    :story, :touch => true
     belongs_to    :user, :touch => true

     has_attached_file :photo,
                       :styles => {
                         :thumb => ["100x140>", :jpg],
                         :medium => ["400x400>", :jpg],
                         :large => ["800x800>", :jpg]
                       },
                       :processors => [:thumbnail],
                       :storage => :s3,
                       :s3_credentials => "#{Rails.root.to_s}/config/s3.yml",
                       :path => "/:style/:id/:filename"

     has_attached_file :video,
                       :storage => :s3,
                       :s3_credentials => "#{Rails.root.to_s}/config/s3.yml",
                       :path => "/video/:id/:filename",
                       :styles => {
                         :thumb => { :geometry => "140x100#", :format => 'jpg', :time => 10 },
                         :medium => { :geometry => "480x360", :format => 'mp4' },
                       }, :processors => [:ffmpeg]
     end

    Why am I getting these "permission denieds"? Any help would be greatly appreciated!!!