Media (0)

No media matching your criteria is available on the site.

Other articles (60)

  • Submit bugs and patches

    13 April 2011

    Unfortunately, software is never perfect.
    If you think you have found a bug, report it using our ticket system. Please help us fix it by providing the following information: the browser you are using, including the exact version; as precise an explanation of the problem as possible; if possible, the steps taken that lead to the problem; and a link to the site / page in question.
    If you think you have solved the bug, fill in a ticket and attach a corrective patch to it.
    You may also (...)

  • List of compatible distributions

    26 April 2011

    The table below lists the Linux distributions compatible with the automated installation script of MediaSPIP.

    Distribution name   Version name           Version number
    Debian              Squeeze                6.x.x
    Debian              Wheezy                 7.x.x
    Debian              Jessie                 8.x.x
    Ubuntu              The Precise Pangolin   12.04 LTS
    Ubuntu              The Trusty Tahr        14.04

    If you want to help us improve this list, you can provide us with access to a machine whose distribution is not mentioned above or send the necessary fixes to add (...)

  • HTML5 audio and video support

    13 April 2011

    MediaSPIP uses HTML5 video and audio tags to play multimedia files, taking advantage of the latest W3C innovations supported by modern browsers.
    The MediaSPIP player used has been created specifically for MediaSPIP and can be easily adapted to fit in with a specific theme.
    For older browsers, the Flowplayer Flash fallback is used.
    MediaSPIP allows for media playback on major mobile platforms with the above (...)

On other sites (7805)

  • A way around HTML5 video limits in browsers?

    18 March 2013, by CoryG

    I'm setting up a CCTV system using ffmpeg, ffserver and ZoneMinder, and everything has gone well with one exception: the ZoneMinder display is horrible outside. I have ffserver streaming live WebM videos, so I'd like to have a viewer in Chrome, but when I load more than 6 320x240 videos on the screen at a time, any subsequent videos fire the suspend event and stop. I attached a console.log to each event of the streams; the output is below, in case it helps in answering this question:

    0 "LOAD" 0.htm:11
    1 "LOAD" 0.htm:11
    2 "LOAD" 0.htm:11
    3 "LOAD" 0.htm:11
    4 "LOAD" 0.htm:11
    5 "LOAD" 0.htm:11
    6 "LOAD" 0.htm:11
    7 "LOAD" 0.htm:11
    8 "LOAD" 0.htm:11
    9 "LOAD" 0.htm:11
    10 "LOAD" 0.htm:11
    11 "LOAD" 0.htm:11
    12 "LOAD" 0.htm:11
    13 "LOAD" 0.htm:11
    14 "LOAD" 0.htm:11
    15 "LOAD" 0.htm:11
    stream0 loadstart 0.htm:30
    stream1 loadstart 0.htm:30
    stream2 loadstart 0.htm:30
    stream3 loadstart 0.htm:30
    stream4 loadstart 0.htm:30
    stream5 loadstart 0.htm:30
    stream6 loadstart 0.htm:30
    stream7 loadstart 0.htm:30
    stream8 loadstart 0.htm:30
    stream9 loadstart 0.htm:30
    stream10 loadstart 0.htm:30
    stream11 loadstart 0.htm:30
    stream12 loadstart 0.htm:30
    stream13 loadstart 0.htm:30
    stream14 loadstart 0.htm:30
    stream15 loadstart 0.htm:30
    stream4 durationchange 0.htm:37
    stream4 loadedmetadata 0.htm:24
    stream2 durationchange 0.htm:37
    stream2 loadedmetadata 0.htm:24
    stream0 durationchange 0.htm:37
    stream0 loadedmetadata 0.htm:24
    stream0 loadeddata 0.htm:43
    stream0 canplay 0.htm:47
    stream0 canplaythrough 0.htm:32
    stream0 play 0.htm:45
    stream0 playing 0.htm:36
    stream0 timeupdate 0.htm:41
    stream4 loadeddata 0.htm:43
    stream4 canplay 0.htm:47
    stream4 canplaythrough 0.htm:32
    stream4 play 0.htm:45
    stream4 playing 0.htm:36
    stream4 timeupdate 0.htm:41
    stream2 loadeddata 0.htm:43
    stream2 canplay 0.htm:47
    stream2 canplaythrough 0.htm:32
    stream2 play 0.htm:45
    stream2 playing 0.htm:36
    stream2 timeupdate 0.htm:41
    stream6 stalled 0.htm:35
    stream7 stalled 0.htm:35
    stream8 stalled 0.htm:35
    stream9 stalled 0.htm:35
    stream10 stalled 0.htm:35
    stream11 stalled 0.htm:35
    stream12 stalled 0.htm:35
    stream13 stalled 0.htm:35
    stream14 stalled 0.htm:35
    stream15 stalled 0.htm:35
    stream5 durationchange 0.htm:37
    stream5 loadedmetadata 0.htm:24
    stream5 loadeddata 0.htm:43
    stream5 canplay 0.htm:47
    stream5 canplaythrough 0.htm:32
    stream5 play 0.htm:45
    stream5 playing 0.htm:36
    stream5 timeupdate 0.htm:41
    stream1 durationchange 0.htm:37
    stream1 loadedmetadata 0.htm:24
    stream1 loadeddata 0.htm:43
    stream1 canplay 0.htm:47
    stream1 canplaythrough 0.htm:32
    stream1 play 0.htm:45
    stream1 playing 0.htm:36
    stream1 timeupdate 0.htm:41
    stream3 durationchange 0.htm:37
    stream3 loadedmetadata 0.htm:24
    stream3 loadeddata 0.htm:43
    stream3 canplay 0.htm:47
    stream3 canplaythrough 0.htm:32
    stream3 play 0.htm:45
    stream3 playing 0.htm:36
    stream3 timeupdate 0.htm:41

    I've tried using ffmpeg's -filter_complex flag to combine the videos into a single 1280x960 stream, but it comes in at around 3-6 FPS with the following -filter_complex code:

    "
    nullsrc=size=1280x960 [bg];
    [0:v] setpts=PTS-STARTPTS, scale=320x240 [v0];
    [1:v] setpts=PTS-STARTPTS, scale=320x240 [v1];
    [2:v] setpts=PTS-STARTPTS, scale=320x240 [v2];
    [3:v] setpts=PTS-STARTPTS, scale=320x240 [v3];
    [4:v] setpts=PTS-STARTPTS, scale=320x240 [v4];
    [5:v] setpts=PTS-STARTPTS, scale=320x240 [v5];
    [6:v] setpts=PTS-STARTPTS, scale=320x240 [v6];
    [7:v] setpts=PTS-STARTPTS, scale=320x240 [v7];
    [8:v] setpts=PTS-STARTPTS, scale=320x240 [v8];
    [9:v] setpts=PTS-STARTPTS, scale=320x240 [v9];
    [10:v] setpts=PTS-STARTPTS, scale=320x240 [v10];
    [11:v] setpts=PTS-STARTPTS, scale=320x240 [v11];
    [12:v] setpts=PTS-STARTPTS, scale=320x240 [v12];
    [13:v] setpts=PTS-STARTPTS, scale=320x240 [v13];
    [14:v] setpts=PTS-STARTPTS, scale=320x240 [v14];
    [15:v] setpts=PTS-STARTPTS, scale=320x240 [v15];
    [bg][v0] overlay=shortest=1 [bg];
    [bg][v1] overlay=shortest=1:x=320 [bg];
    [bg][v2] overlay=shortest=1:x=640 [bg];
    [bg][v3] overlay=shortest=1:x=960 [bg];
    [bg][v4] overlay=shortest=1:y=240 [bg];
    [bg][v5] overlay=shortest=1:x=320:y=240 [bg];
    [bg][v6] overlay=shortest=1:x=640:y=240 [bg];
    [bg][v7] overlay=shortest=1:x=960:y=240 [bg];
    [bg][v8] overlay=shortest=1:y=480 [bg];
    [bg][v9] overlay=shortest=1:x=320:y=480 [bg];
    [bg][v10] overlay=shortest=1:x=640:y=480 [bg];
    [bg][v11] overlay=shortest=1:x=960:y=480 [bg];
    [bg][v12] overlay=shortest=1:y=720 [bg];
    [bg][v13] overlay=shortest=1:x=320:y=720 [bg];
    [bg][v14] overlay=shortest=1:x=640:y=720 [bg];
    [bg][v15] overlay=shortest=1:x=960:y=720
    "

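    For reference, a filter graph like the one above is normally passed to ffmpeg together with one -i option per camera feed. The Python sketch below shows roughly what such an invocation could look like; it is not the poster's actual command. The input URLs follow the http://10.1.1.15:8090/N.webm pattern used in the JavaScript further down, while the filter file name, the output feed URL and the VP8 encoding settings are assumptions.

    import subprocess

    # The -filter_complex graph quoted above, saved to a text file (hypothetical name).
    with open("mosaic_filter.txt") as f:
        filter_graph = f.read()

    cmd = ["ffmpeg"]
    for i in range(16):
        # One input per live ffserver stream (URL pattern taken from the JavaScript below).
        cmd += ["-i", "http://10.1.1.15:8090/%d.webm" % i]
    cmd += [
        "-filter_complex", filter_graph,
        "-c:v", "libvpx",                      # assumed WebM/VP8 encoder
        "-b:v", "4M",                          # assumed bitrate; this is for LAN use
        "http://10.1.1.15:8090/mosaic.ffm",    # hypothetical ffserver feed for the combined stream
    ]
    subprocess.check_call(cmd)
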
    If there isn't a way around the Chrome limitation (this is for use on a LAN, so high bandwidth is fine), I'd be happy to hear of a tool that can combine live streams and be fed back into ffserver (or a faster way of using ffmpeg than what I have above). Ideally, though, I'd like a way to force Chrome to load all 16 videos.

    Edit: The JavaScript (minus the console.log calls):

    var streams = new Array();
    streams.Load = function (index) {
       console.log(index, 'LOAD');
       streams.push(document.createElement('video'));
       streams[index].id = 'stream' + index;
       streams[index].autoplay = 'autoplay';
       streams[index].style.display = 'block';
       streams[index].style.position = 'absolute';
       streams[index].width = this.Width;
       streams[index].height = this.Height;
       streams[index].style.left = ((index - (Math.floor(index / 4) * 4)) * this.Width) + 'px';
       streams[index].style.top = (Math.floor(index / 4) * this.Height) + 'px';
       streams[index].style.width = this.Width + 'px';
       streams[index].style.height = this.Height + 'px';
       $(streams[index]).bind('loadedmetadata', streams[index], function (event) {
           var actualRatio = event.data.videoWidth / event.data.videoHeight;
           var targetRatio = $(event.data).width() / $(event.data).height();
           var adjustmentRatio = (targetRatio / actualRatio);
           $(event.data).css('-webkit-transform', 'scaleX(' + adjustmentRatio + ')');
       });
       streams[index].source = document.createElement('source');
       streams[index].source.src = 'http://10.1.1.15:8090/' + index + '.webm';
       streams[index].source.type = 'video/webm';
       streams[index].appendChild(streams[index].source);
       divMain.appendChild(streams[index]);
       streams.LoadNext();
    };
    streams.LoadNext = function () { if (this.length < 16) { this.Load(this.length); } };
    streams.Width = 320;
    streams.Height = 240;
    $(window).ready(function() {
       var divMain = document.getElementById('divMain');
       streams.Width = divMain.offsetWidth / 4;
       streams.Height = divMain.offsetHeight / 4;
       var tsource = null;
       streams.LoadNext();
    });
    $(window).resize(function() {
       var divMain = document.getElementById('divMain');
       streams.Width = divMain.offsetWidth / 4;
       streams.Height = divMain.offsetHeight / 4;
       for (var i = 0; i < streams.length; i++) {
           streams[i].width = streams.Width;
           streams[i].height = streams.Height;
           streams[i].style.left = ((i - (Math.floor(i / 4) * 4)) * streams.Width) + 'px';
           streams[i].style.top = (Math.floor(i / 4) * streams.Height) + 'px';
           streams[i].style.width = streams.Width + 'px';
           streams[i].style.height = streams.Height + 'px';
           var actualRatio = streams[i].videoWidth / streams[i].videoHeight;
           var targetRatio = $(streams[i]).width() / $(streams[i]).height();
           var adjustmentRatio = (targetRatio / actualRatio);
           if (adjustmentRatio >= 1) {
               $(streams[i]).css('-webkit-transform', 'scaleX(' + adjustmentRatio + ')');
           } else {
               $(streams[i]).css('-webkit-transform', 'scaleY(' + (1 / adjustmentRatio) + ')');
           }
       }
    });
  • Python: Extracting device and lens information from video metadata

    14 May 2023, by cat_got_my_tongue

    I am interested in extracting device and lens information from videos, specifically the make and model of the device and the focal length. I was able to do this successfully for still images using the exifread module and extract a whole bunch of very useful information:

    image type      : MPO
Image ImageDescription: Shot with DxO ONE
Image Make: DxO
Image Model: DxO ONE
Image Orientation: Horizontal (normal)
Image XResolution: 300
Image YResolution: 300
Image ResolutionUnit: Pixels/Inch
Image Software: V3.0.0 (2b448a1aee) APP:1.0
Image DateTime: 2022:04:05 14:53:45
Image YCbCrCoefficients: [299/1000, 587/1000, 57/500]
Image YCbCrPositioning: Centered
Image ExifOffset: 158
Thumbnail Compression: JPEG (old-style)
Thumbnail XResolution: 300
Thumbnail YResolution: 300
Thumbnail ResolutionUnit: Pixels/Inch
Thumbnail JPEGInterchangeFormat: 7156
Thumbnail JPEGInterchangeFormatLength: 24886
EXIF ExposureTime: 1/3
EXIF FNumber: 8
EXIF ExposureProgram: Aperture Priority
EXIF ISOSpeedRatings: 100
EXIF SensitivityType: ISO Speed
EXIF ISOSpeed: 100
EXIF ExifVersion: 0221
EXIF DateTimeOriginal: 2022:04:05 14:53:45
EXIF DateTimeDigitized: 2022:04:05 14:53:45
EXIF ComponentsConfiguration: CrCbY
EXIF CompressedBitsPerPixel: 3249571/608175
EXIF ExposureBiasValue: 0
EXIF MaxApertureValue: 212/125
EXIF SubjectDistance: 39/125
EXIF MeteringMode: MultiSpot
EXIF LightSource: Unknown
EXIF Flash: Flash did not fire
EXIF FocalLength: 1187/100
EXIF SubjectArea: [2703, 1802, 675, 450]
EXIF MakerNote: [68, 88, 79, 32, 79, 78, 69, 0, 12, 0, 0, 0, 21, 0, 3, 0, 5, 0, 2, 0, ... ]
EXIF SubSecTime: 046
EXIF SubSecTimeOriginal: 046
EXIF SubSecTimeDigitized: 046
EXIF FlashPixVersion: 0100
EXIF ColorSpace: sRGB
EXIF ExifImageWidth: 5406
EXIF ExifImageLength: 3604
Interoperability InteroperabilityIndex: R98
Interoperability InteroperabilityVersion: [48, 49, 48, 48]
EXIF InteroperabilityOffset: 596
EXIF FileSource: Digital Camera
EXIF ExposureMode: Auto Exposure
EXIF WhiteBalance: Auto
EXIF DigitalZoomRatio: 1
EXIF FocalLengthIn35mmFilm: 32
EXIF SceneCaptureType: Standard
EXIF ImageUniqueID: C01A1709306530020220405185345046
EXIF BodySerialNumber: C01A1709306530
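
    The exifread call that produces a dump like the one above is not shown in the post; a minimal sketch of how it is typically done follows (the file name is hypothetical):

    import exifread

    # Open the still image in binary mode and let exifread parse its EXIF tags.
    with open("my_image.jpg", "rb") as f:          # hypothetical file name
        tags = exifread.process_file(f)            # returns a dict of tag objects

    # Device and lens information of interest, using the tag names from the dump above.
    print(tags.get("Image Make"))
    print(tags.get("Image Model"))
    print(tags.get("EXIF FocalLength"))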

    Unfortunately, I have been unable to extract this kind of info from videos so far.

    This is what I have tried so far, with the ffmpeg module:

    import ffmpeg
from pprint import pprint

test_video = "my_video.mp4"
pprint(ffmpeg.probe(test_video)["streams"])

    And the output I get contains a lot of info, but nothing related to the device or lens, which is what I am looking for:

    [{'avg_frame_rate': '30/1',
  'bit_rate': '1736871',
  'bits_per_raw_sample': '8',
  'chroma_location': 'left',
  'codec_long_name': 'H.264 / AVC / MPEG-4 AVC / MPEG-4 part 10',
  'codec_name': 'h264',
  'codec_tag': '0x31637661',
  'codec_tag_string': 'avc1',
  'codec_time_base': '1/60',
  'codec_type': 'video',
  'coded_height': 1088,
  'coded_width': 1920,
  'display_aspect_ratio': '16:9',
  'disposition': {'attached_pic': 0,
                  'clean_effects': 0,
                  'comment': 0,
                  'default': 1,
                  'dub': 0,
                  'forced': 0,
                  'hearing_impaired': 0,
                  'karaoke': 0,
                  'lyrics': 0,
                  'original': 0,
                  'timed_thumbnails': 0,
                  'visual_impaired': 0},
  'duration': '20.800000',
  'duration_ts': 624000,
  'has_b_frames': 0,
  'height': 1080,
  'index': 0,
  'is_avc': 'true',
  'level': 40,
  'nal_length_size': '4',
  'nb_frames': '624',
  'pix_fmt': 'yuv420p',
  'profile': 'Constrained Baseline',
  'r_frame_rate': '30/1',
  'refs': 1,
  'sample_aspect_ratio': '1:1',
  'start_pts': 0,
  'start_time': '0.000000',
  'tags': {'creation_time': '2021-05-08T13:23:20.000000Z',
           'encoder': 'AVC Coding',
           'handler_name': 'VideoHandler',
           'language': 'und'},
  'time_base': '1/30000',
  'width': 1920},
 {'avg_frame_rate': '0/0',
  'bit_rate': '79858',
  'bits_per_sample': 0,
  'channel_layout': 'stereo',
  'channels': 2,
  'codec_long_name': 'AAC (Advanced Audio Coding)',
  'codec_name': 'aac',
  'codec_tag': '0x6134706d',
  'codec_tag_string': 'mp4a',
  'codec_time_base': '1/48000',
  'codec_type': 'audio',
  'disposition': {'attached_pic': 0,
                  'clean_effects': 0,
                  'comment': 0,
                  'default': 1,
                  'dub': 0,
                  'forced': 0,
                  'hearing_impaired': 0,
                  'karaoke': 0,
                  'lyrics': 0,
                  'original': 0,
                  'timed_thumbnails': 0,
                  'visual_impaired': 0},
  'duration': '20.864000',
  'duration_ts': 1001472,
  'index': 1,
  'max_bit_rate': '128000',
  'nb_frames': '978',
  'profile': 'LC',
  'r_frame_rate': '0/0',
  'sample_fmt': 'fltp',
  'sample_rate': '48000',
  'start_pts': 0,
  'start_time': '0.000000',
  'tags': {'creation_time': '2021-05-08T13:23:20.000000Z',
           'handler_name': 'SoundHandler',
           'language': 'und'},
  'time_base': '1/48000'}]

    Are these pieces of info available for videos? Should I be using a different package?

    Thanks.

    Edit:

    pprint(ffmpeg.probe(test_video)["format"]) gives

    {'bit_rate': '1815244',
 'duration': '20.864000',
 'filename': 'my_video.mp4',
 'format_long_name': 'QuickTime / MOV',
 'format_name': 'mov,mp4,m4a,3gp,3g2,mj2',
 'nb_programs': 0,
 'nb_streams': 2,
 'probe_score': 100,
 'size': '4734158',
 'start_time': '0.000000',
 'tags': {'artist': 'Microsoft Game DVR',
          'compatible_brands': 'mp41isom',
          'creation_time': '2021-05-08T12:12:33.000000Z',
          'major_brand': 'mp42',
          'minor_version': '0',
          'title': 'Snipping Tool'}}

  • Method For Crawling Google

    28 May 2011, by Multimedia Mike — Big Data

    I wanted to crawl Google in order to harvest a large corpus of certain types of data as yielded by a certain search term (we’ll call it “term” for this exercise). Google doesn’t appear to offer any API to automatically harvest their search results (why would they?). So I sat down and thought about how to do it. This is the solution I came up with.



    FAQ
    Q: Is this legal / ethical / compliant with Google’s terms of service?
    A: Does it look like I care? Moving right along…

    Manual Crawling Process
    For this exercise, I essentially automated the task that would be performed by a human. It goes something like this:

    1. Search for “term”
    2. On the first page of results, download each of the 10 results returned
    3. Click on the next page of results
    4. Go to step 2, until Google doesn’t return any more pages of search results

    Google returns up to 1000 results for a given search term. Fetching them 10 at a time is less than efficient. Fortunately, the search URL can easily be tweaked to return up to 100 results per page.
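
    For example, a tweaked results-page URL looks roughly like the line below; num and start are the classic Google query parameters for page size and offset, so treat the exact form as an assumption rather than a documented API:

    http://www.google.com/search?q=term&num=100&start=200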

    Expanding Reach
    Problem: 1000 results for the “term” search isn’t that many. I need a way to expand the search. I’m not aiming for relevancy; I’m just searching for random examples of some data that occurs around the internet.

    My solution for this is to refine the search using the “site” wildcard. For example, you can ask Google to search for “term” at all Canadian domains using “site:.ca”. So, the manual process now involves harvesting up to 1000 results for every single internet top level domain (TLD). But many TLDs can be more granular than that. For example, there are 50 sub-domains under .us, one for each state (e.g., .ca.us, .ny.us). Those all need to be searched independently. Same for all the sub-domains under TLDs which don’t allow domains under the main TLD, such as .uk (search under .co.uk, .ac.uk, etc.).

    Another extension is to combine “term” searches with other terms that are likely to have a rich correlation with “term”. For example, if “term” is relevant to various scientific fields, search for “term” in conjunction with various scientific disciplines.

    Algorithmically
    My solution is to create an SQLite database that contains a table of search seeds. Each seed is essentially a “site:” string combined with a starting index.

    Each TLD and sub-TLD is inserted as a searchseed record with a starting index of 0.

    A script performs the following crawling algorithm (a Python sketch follows the list):

    • Fetch the next record from the searchseed table which has not been crawled
    • Fetch search result page from Google
    • Scrape URLs from page and insert each into URL table
    • Mark the searchseed record as having been crawled
    • If the results page indicates there are more results for this search, insert a new searchseed for the same seed but with a starting index 100 higher
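
    A minimal sketch of that loop, in Python 2-era code to match the post’s later mention of urllib2. The two-table SQLite schema, the column names, and the num/start query parameters are my own assumptions, and the crude regex scrape stands in for a real parser of Google’s result markup:

    import random
    import re
    import sqlite3
    import time
    import urllib
    import urllib2

    db = sqlite3.connect("crawl.db")
    db.executescript("""
        CREATE TABLE IF NOT EXISTS searchseed (seed TEXT, start INTEGER, crawled INTEGER DEFAULT 0);
        CREATE TABLE IF NOT EXISTS url (url TEXT UNIQUE, claimed INTEGER DEFAULT 0,
                                        downloaded INTEGER DEFAULT 0, path TEXT);
    """)
    # Seeds are inserted elsewhere, one per TLD / sub-TLD with a starting index of 0, e.g.:
    # db.execute("INSERT INTO searchseed (seed, start, crawled) VALUES ('site:.ca', 0, 0)")

    def crawl_next_seed():
        # Fetch the next seed that has not been crawled yet; seeds look like "site:.ca".
        row = db.execute("SELECT rowid, seed, start FROM searchseed WHERE crawled = 0 LIMIT 1").fetchone()
        if row is None:
            return False
        rowid, seed, start = row
        query = urllib.urlencode({"q": "term " + seed, "num": 100, "start": start})
        # In practice, use the fetch() helper from the "Acting Human" sketch further down.
        page = urllib2.urlopen("http://www.google.com/search?" + query, timeout=30).read()
        # Crude link extraction; scrape each result URL and insert it into the url table.
        links = re.findall(r'href="(https?://[^"]+)"', page)
        for link in links:
            db.execute("INSERT OR IGNORE INTO url (url) VALUES (?)", (link,))
        db.execute("UPDATE searchseed SET crawled = 1 WHERE rowid = ?", (rowid,))
        if len(links) >= 100:
            # A full page suggests more results remain: queue the same seed 100 further on.
            db.execute("INSERT INTO searchseed (seed, start, crawled) VALUES (?, ?, 0)", (seed, start + 100))
        db.commit()
        time.sleep(2 + 3 * random.random())   # polite random delay, per "Acting Human" below
        return True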

    Digging Into Sites
    Sometimes, Google notes that certain sites are particularly rich sources of “term” and offers to let you search that site for “term”. This basically links to another search for “term site:somesite”. That site gets its own search seed and the program might harvest up to 1000 URLs from that site alone.

    Harvesting the Data
    Armed with a database of URLs, employ the following algorithm:

    • Fetch a random URL from the database which has yet to be downloaded
    • Try to download it
    • For goodness’ sake, have a mechanism in place to detect whether the download process has stalled, and automatically kill it after a certain period of time
    • Store the data and update the database, noting where the information was stored and that it is already downloaded

    This step is easy to parallelize by simply executing multiple copies of the script. It is useful to update the URL table to indicate that one process is already trying to download a URL so multiple processes don’t duplicate work.
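
    A sketch of such a downloader, again Python 2-era and using the same assumed url table as the crawling sketch above; the claimed column is the “one process is already trying this URL” marker, and the socket timeout is a simple stand-in for a proper stall detector:

    import os
    import sqlite3
    import urllib2

    db = sqlite3.connect("crawl.db")

    def download_one(dest_dir="corpus"):
        if not os.path.isdir(dest_dir):
            os.makedirs(dest_dir)
        # Fetch a random URL that has not been downloaded or claimed by another process.
        row = db.execute("SELECT rowid, url FROM url WHERE downloaded = 0 AND claimed = 0 "
                         "ORDER BY RANDOM() LIMIT 1").fetchone()
        if row is None:
            return False
        rowid, url = row
        db.execute("UPDATE url SET claimed = 1 WHERE rowid = ?", (rowid,))   # claim it first
        db.commit()
        path = os.path.join(dest_dir, str(rowid))
        try:
            # The timeout keeps a stalled connection from hanging the process forever.
            data = urllib2.urlopen(url, timeout=60).read()
            with open(path, "wb") as out:
                out.write(data)
            # Note where the data was stored and that it has been downloaded.
            db.execute("UPDATE url SET downloaded = 1, path = ? WHERE rowid = ?", (path, rowid))
        except Exception:
            db.execute("UPDATE url SET downloaded = -1 WHERE rowid = ?", (rowid,))  # my own "failed" marker
        db.commit()
        return True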

    Acting Human
    A few factors here:

    • Google allegedly doesn’t like automated programs crawling its search results. Thus, at the very least, don’t let your script advertise itself as an automated program. At a basic level, this means forging the User-Agent HTTP header. By default, Python’s urllib2 will identify itself as a programming language. Change this to a well-known browser string (see the sketch after this list).
    • Be patient; don’t fire off these search requests as quickly as possible. My crawling algorithm inserts a random delay of a few seconds in between each request. This can still yield hundreds of useful URLs per minute.
    • On harvesting the data: Even though you can parallelize this and download data as quickly as your connection can handle, it’s a good idea to randomize the URLs. If you hypothetically had 4 download processes running at once and they got to a point in the URL table which had many URLs from a single site, the server might be configured to reject too many simultaneous requests from a single client.
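
    The sketch referred to in the first point above: a fetch helper that forges the User-Agent header and sleeps for a random few seconds between requests (the browser string is only an example of a well-known one):

    import random
    import time
    import urllib2

    # An example of a well-known browser string; any current browser's User-Agent would do.
    BROWSER_UA = "Mozilla/5.0 (Windows NT 6.1; rv:2.0) Gecko/20100101 Firefox/4.0"

    def fetch(url):
        # Don't advertise the script as urllib2; present it as a regular browser instead.
        req = urllib2.Request(url, headers={"User-Agent": BROWSER_UA})
        data = urllib2.urlopen(req, timeout=30).read()
        # Be patient: pause a few random seconds before the next request.
        time.sleep(2 + 3 * random.random())
        return data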

    Conclusion
    Anyway, that’s just the way I would (and did) do it. What did I do with all the data? That’s a subject for a different post.

    Adorable spider drawing from here.