
Other articles (6)

  • Selection of projects using MediaSPIP

    2 May 2011

    The examples below are representative of specific uses of MediaSPIP for particular projects.
    MediaSPIP farm @ Infini
    The non-profit organization Infini develops hospitality activities, an internet access point, training, innovative projects in the field of information and communication technologies, and website hosting. It plays a unique and prominent role in the Brest (France) area and at the national level, among the half-dozen associations of its kind. Its members (...)

  • Other interesting software

    13 April 2011

    We don't claim to be the only ones doing what we do, and certainly not the best at it; we just try to do it well and to keep getting better.
    The following list covers software that does more or less what MediaSPIP does, or whose features MediaSPIP more or less tries to match.
    We don't know these projects well and haven't tried them, but you can take a peek.
    Videopress
    Website : http://videopress.com/
    License : GNU/GPL v2
    Source code : (...)

  • Selection of projects using MediaSPIP

    29 April 2011

    The examples below are representative of specific uses of MediaSPIP for particular projects.
    Do you think you have built a "remarkable" site with MediaSPIP? Let us know here.
    MediaSPIP farm @ Infini
    The Infini association develops hospitality activities, an internet access point, training, innovative projects in the field of Information and Communication Technologies, and website hosting. It plays a unique role in this field (...)

On other sites (3589)

  • Adding h264 frames to mp4 file

    5 March 2024, by Dinamo

    I have raw h264 video frames:

    Stream #0:0: Video: h264 (Main), yuvj420p(pc, bt709, progressive), 1280x720, 25 fps, 25 tbr, 1200k tbn, 50 tbc

    and raw audio frames:

    Stream #0:0: Audio: pcm_s16le, 16000 Hz, 1 channels, s16, 256 kb/s

    I also have a list of timestamps in microseconds for each frame:

    600 0xd96533 (audio)
601 0xd9e1dd (audio)
602 0xda4f52 (audio)
603 0xda5a63 (video)
604 0xdacc4b (audio)
605 0xdb39a3 (audio)
606 0xdb5ee9 (video)
607 0xdbb6d8 (audio)
608 0xdc23fe (audio)
609 0xdcb255 (audio)
610 0xdd0e69 (audio)
611 0xdd8b96 (audio)
612 0xdd67d0 (video)
613 0xddf8bd (audio)

    Note that the timestamp difference between two audio frames is 0.032 s or 0.028 s (average ~0.03 s?),

    and that the timestamp difference between two video frames is a multiple of 0.06666 s (0.0666, 0.1333, 0.2).

    This data was captured from a camera that, according to its spec, captures at a maximum of 15 fps.
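    The interval observations above can be checked directly from the hexadecimal timestamp list; this is a quick sketch, with the values copied from the capture above:

```python
# Timestamps (microseconds) copied from the capture above.
frames = [
    (0xd96533, "audio"), (0xd9e1dd, "audio"), (0xda4f52, "audio"),
    (0xda5a63, "video"), (0xdacc4b, "audio"), (0xdb39a3, "audio"),
    (0xdb5ee9, "video"), (0xdbb6d8, "audio"), (0xdc23fe, "audio"),
    (0xdcb255, "audio"), (0xdd0e69, "audio"), (0xdd8b96, "audio"),
    (0xdd67d0, "video"), (0xddf8bd, "audio"),
]

def deltas(kind):
    """Differences between consecutive timestamps of one stream."""
    ts = [t for t, k in frames if k == kind]
    return [b - a for a, b in zip(ts, ts[1:])]

audio_deltas = deltas("audio")  # mostly ~28000-32000 us, averaging ~0.03 s
video_deltas = deltas("video")  # ~66694 and ~133351 us: multiples of ~1/15 s
```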

    I want to merge them into one mp4 file.

    raw video frame info:

    [FRAME]
media_type=video
stream_index=0
key_frame=1
pkt_pts=N/A
pkt_pts_time=N/A
-> pkt_dts=N/A
-> pkt_dts_time=N/A
best_effort_timestamp=N/A
best_effort_timestamp_time=N/A
-> pkt_duration=48000
-> pkt_duration_time=0.040000
pkt_pos=1476573
pkt_size=57677
width=1280
height=720
pix_fmt=yuvj420p
sample_aspect_ratio=N/A
pict_type=I
coded_picture_number=189
display_picture_number=0
interlaced_frame=0
top_field_first=0
repeat_pict=0
color_range=pc
color_space=bt709
color_primaries=bt709
color_transfer=bt709
chroma_location=left
[/FRAME]
[FRAME]
media_type=video
stream_index=0
key_frame=0
pkt_pts=N/A
pkt_pts_time=N/A
-> pkt_dts=N/A
-> pkt_dts_time=N/A
best_effort_timestamp=N/A
best_effort_timestamp_time=N/A
-> pkt_duration=48000
-> pkt_duration_time=0.040000
pkt_pos=1534250
pkt_size=3928
width=1280
height=720
pix_fmt=yuvj420p
sample_aspect_ratio=N/A
pict_type=P
coded_picture_number=190
display_picture_number=0
interlaced_frame=0
top_field_first=0
repeat_pict=0
color_range=pc
color_space=bt709
color_primaries=bt709
color_transfer=bt709
chroma_location=left
[/FRAME]

    The result frames should have values similar to this :

    video frames

    [FRAME]
media_type=video
stream_index=0
key_frame=0
pkt_pts=N/A
pkt_pts_time=N/A
-> pkt_dts=500
-> pkt_dts_time=16.666667
best_effort_timestamp=500
best_effort_timestamp_time=16.666667
-> pkt_duration=1
-> pkt_duration_time=0.033333
pkt_pos=1772182
pkt_size=3070
width=1280
height=720
pix_fmt=yuvj420p
sample_aspect_ratio=N/A
pict_type=P
coded_picture_number=191
display_picture_number=0
interlaced_frame=0
top_field_first=0
repeat_pict=0
color_range=pc
color_space=bt709
color_primaries=bt709
color_transfer=bt709
chroma_location=left
[/FRAME]

    pkt_duration_time is always 0.033333; pkt_dts values are either all even or all odd (per stream), and pkt_dts almost always jumps by 2, but sometimes by 4.
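    Working backwards from this dump, the target stream appears to use a 1/30 time base: dts 500 corresponds to 16.666667 s, one tick to 0.033333 s, and a two-tick jump to the ~15 fps frame interval. A small sanity check of that reading (the 1/30 figure is an inference from the dump, not something stated by the muxer):

```python
from fractions import Fraction

time_base = Fraction(1, 30)  # inferred from pkt_duration_time=0.033333 above

dts_time = float(500 * time_base)  # pkt_dts=500 -> 16.666667 s, as in the dump
tick = float(time_base)            # one tick lasts ~0.033333 s
two_ticks = float(2 * time_base)   # a 2-tick jump is ~0.066667 s, i.e. 15 fps
```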

    audio frames

    [FRAME]
media_type=audio
stream_index=1
key_frame=1
pkt_pts=0
pkt_pts_time=0.000000
pkt_dts=0
pkt_dts_time=0.000000
best_effort_timestamp=0
best_effort_timestamp_time=0.000000
pkt_duration=480
pkt_duration_time=0.030000
pkt_pos=608
pkt_size=960
sample_fmt=s16
nb_samples=480
channels=1
channel_layout=unknown
[/FRAME]
[FRAME]
media_type=audio
stream_index=1
key_frame=1
pkt_pts=480
pkt_pts_time=0.030000
pkt_dts=480
pkt_dts_time=0.030000
best_effort_timestamp=480
best_effort_timestamp_time=0.030000
pkt_duration=480
pkt_duration_time=0.030000
pkt_pos=1654
pkt_size=960
sample_fmt=s16
nb_samples=480
channels=1
channel_layout=unknown
[/FRAME]
[FRAME]
media_type=audio
stream_index=1
key_frame=1
pkt_pts=960
pkt_pts_time=0.060000
pkt_dts=960
pkt_dts_time=0.060000
best_effort_timestamp=960
best_effort_timestamp_time=0.060000
pkt_duration=480
pkt_duration_time=0.030000
pkt_pos=2726
pkt_size=960
sample_fmt=s16
nb_samples=480
channels=1
channel_layout=unknown
[/FRAME]

    These are the sequences:

    //Audio
frame_len=480
pkt_duration_time=0.030000
pkt_pts=frame_len*frame_index
pkt_pts_time=pkt_duration_time*frame_index
pkt_pos=LAST_FRAME_PTS + ~1000 //or timestamp_us/x ?
//Video
pkt_duration_time=0.033333
pkt_dts=(2 or 4)*frame_index
pkt_dts_time=LAST_FRAME_DTS+pkt_duration_time

    Here is my current code for adding a video frame :

#include <libavformat/avformat.h>
#include <stdio.h>

AVFormatContext *format_context;
AVStream *out_stream;

static int64_t last_pts = 0;
static int64_t last_timestamp = 0;

void init_out_stream(){
        out_stream->id = 0;
        out_stream->time_base = (AVRational){1, 30}; //<-------------
        out_stream->codec->codec_id   = AV_CODEC_ID_H264;
        out_stream->codec->width      = 1280;
        out_stream->codec->height     = 720;
        out_stream->codec->pix_fmt    = AV_PIX_FMT_YUV420P;
}

int WriteH264VideoSample(unsigned char *sample, unsigned int sample_size, int iskeyframe, unsigned long long int timestamp_us){
        AVPacket packet = { 0 };
        av_init_packet(&packet);

        packet.stream_index = 0;
        packet.data         = sample;
        packet.size         = sample_size;
        packet.pos          = -1;

        int64_t timestamp = timestamp_us / 1000; // to ms
        /* pts = last pts + difference between last and current
           timestamps in timebase units (1/30, or ~33 ms) */
        packet.pts = last_pts + (timestamp - last_timestamp) / 33;
        last_pts = packet.pts;
        packet.dts = packet.pts;
        last_timestamp = timestamp;
        packet.duration = 0;

        av_packet_rescale_ts(&packet, (AVRational){1, 25}, out_stream->time_base); //<-------------

        if (iskeyframe) {
            packet.flags |= AV_PKT_FLAG_KEY;
        }

        if (av_interleaved_write_frame(format_context, &packet) < 0) {
            printf("Fail to write frame\n");
            return 0;
        }

        //file_duration += duration;

        return 1;
}

int main(){
        avformat_alloc_output_context2(&format_context, 0, "avi", 0);
        out_stream = avformat_new_stream(format_context, 0);
        init_out_stream();

        return 0;
}

    However, the pts I compute doesn't sync correctly: with my code the pts sometimes jumps by 3 and sometimes by 2 each frame, whereas the synced result should jump by 2 or 4 (all even or all odd, per stream).
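    The uneven 2/3-tick jumps are consistent with the truncating integer division in (timestamp - last_timestamp) / 33: each ~66.7 ms delta loses its fractional part, and the rounding error lands on different frames. One way around it is to rescale each absolute microsecond timestamp into the stream time base in a single rounded step (the same arithmetic av_rescale_q performs). A Python sketch of that idea, using the video timestamps from the capture above and a hypothetical first_timestamp_us origin:

```python
# Stream time base assumed to be 1/30, matching init_out_stream() above.
TB_DEN = 30

def us_to_pts(timestamp_us, first_timestamp_us):
    """Rescale an absolute microsecond timestamp into 1/30-s ticks,
    rounding once instead of truncating per-frame millisecond deltas."""
    return round((timestamp_us - first_timestamp_us) * TB_DEN / 1_000_000)

# Video timestamps from the capture above (~66.7 ms and ~133.4 ms apart):
video_ts = [0xda5a63, 0xdb5ee9, 0xdd67d0]
pts = [us_to_pts(t, video_ts[0]) for t in video_ts]
# pts is [0, 2, 6]: jumps of 2 and then 4 ticks, as in the expected dump
```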

    For the audio I tried:

AVPacket packet = { 0 };
av_init_packet(&packet);

packet.stream_index = 1;
packet.data         = sample;
packet.size         = 960;
packet.pos          = -1;

/* 32000/2 = 16000; 16000/33.33333 = ~480 */
/* 28000/2 = 14000; 14000/33.33333 = ~420 ?? */
timestamp = timestamp_us / 2;
packet.pts = last_audio_pts + round(timestamp/33.333333333333);
packet.dts = packet.pts;
last_audio_pts = packet.pts;

packet.duration = 0;

av_packet_rescale_ts(&packet, (AVRational){1, 25}, (AVRational){1, 30});

    In this case every frame has the correct info, but pkt_duration is 240 instead of 480 and pkt_pts_time jumps by 0.06 s instead of 0.03 s.
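    For the audio, note that the expected dump counts in samples: pkt_duration=480 and pts stepping 0, 480, 960 at 16000 Hz imply a 1/16000 time base, so the pts can simply be the running sample count rather than anything derived from the microsecond timestamps. A sketch of that bookkeeping (the helper name is made up for illustration):

```python
SAMPLE_RATE = 16000        # from the stream info above
SAMPLES_PER_FRAME = 480    # 960 bytes of s16 mono per packet = 480 samples

def audio_frame_meta(frame_index):
    """pts/dts and duration in a 1/16000 audio time base, matching the
    expected dump (pts 0, 480, 960, ...; duration always 480)."""
    pts = SAMPLES_PER_FRAME * frame_index
    return {
        "pts": pts,
        "duration": SAMPLES_PER_FRAME,
        "pts_time": pts / SAMPLE_RATE,  # 0.0, 0.03, 0.06, ...
    }

metas = [audio_frame_meta(i) for i in range(3)]
```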

    What is wrong with my calculation?
    Thanks.

  • lavc/vvc_ps: Correct NoOutputBeforeRecoveryFlag of IDR

    11 March 2024, by Fei Wang
    lavc/vvc_ps: Correct NoOutputBeforeRecoveryFlag of IDR
    

    The NoOutputBeforeRecoveryFlag of an IDR frame should be set to 1, as the
    spec says in 8.1.1.

    Signed-off-by: Fei Wang <fei.w.wang@intel.com>

    • [DH] libavcodec/vvc/vvc_ps.c
  • How to write UI tests for your plugin – Introducing the Piwik Platform

    18 February 2015, by Thomas Steur — Development

    This is the next post of our blog series where we introduce the capabilities of the Piwik platform (our previous post was How to write unit tests for your plugin). This time you’ll learn how to write UI tests in Piwik. For this tutorial you will need to have basic knowledge of JavaScript and the Piwik platform.

    What is a UI test?

    Some might know a UI test under the term ‘CSS test’ or ‘screenshot test’. When we speak of UI tests we mean automated tests that capture a screenshot of a URL and then compare the result with an expected image. If the images are not exactly the same the test will fail. For more information read our blog post about UI Testing.

    What is a UI test good for?

    We use them to test our PHP controllers, Twig templates, CSS, and, indirectly, our JavaScript. We usually do not write unit or integration tests for our controllers. For example, we use UI tests to ensure that the installation, login and update processes work as expected. We also have tests for most pages, reports, settings, etc. This increases the quality of our product and saves us a lot of time, as such tests are easy to write and maintain. All UI tests are executed on Travis after each commit and compared with our expected screenshots.

    Getting started

    In this post, we assume that you have already installed Piwik 2.11.0 or later via git, set up your development environment and created a plugin. If not, visit the Piwik Developer Zone where you’ll find the tutorial Setting up Piwik and other Guides that help you to develop a plugin.

    Next you need to install the needed packages to execute UI tests.

    Let’s create a UI test

    We start by using the Piwik Console to create a new UI test:

    ./console generate:test --testtype ui

    The command will ask you to enter the name of the plugin the created test should belong to. I will use the plugin name “Widgetize”. Next it will ask you for the name of the test. Here you usually enter the name of the page or report you want to test. I will use the name “WidgetizePage” in this example. There should now be a file plugins/Widgetize/tests/UI/WidgetizePage_spec.js which already contains an example to get you started easily:

    describe("WidgetizePage", function () {
   var generalParams = 'idSite=1&period=day&date=2010-01-03';

       it('should load a simple page by its module and action', function (done) {
           var screenshotName = 'simplePage';
           // will save image in "processed-ui-screenshots/WidgetizePageTest_simplePage.png"

           expect.screenshot(screenshotName).to.be.capture(function (page) {
           var urlToTest = "?" + generalParams + "&module=Widgetize&action=index";
               page.load(urlToTest);
           }, done);
       });
    });

    What is happening here?

    This example declares a new set of specs by calling the method describe(name, callback) and within that a new spec by calling the method it(description, func). Within the spec we load a URL and once loaded capture a screenshot of the whole page. The captured screenshot will be saved under the defined screenshotName. You might have noticed we write our UI tests in BDD style.

    Capturing only a part of the page

    It is good practice not to always capture the full page. For example, many pages contain a menu, and if you changed that menu, all your screenshot tests would fail. To avoid this you would instead have a separate test for your menu. To capture only a part of the page, simply specify a jQuery selector and call the method captureSelector instead of capture:

    var contentSelector = '#selector1, .selector2 .selector3';
    // Only the content of both selectors will be visible in the captured screenshot
    expect.screenshot('page_partial').to.be.captureSelector(contentSelector, function (page) {
       page.load(urlToTest);
    }, done);

    Hiding content

    There is a known issue with sparklines that can make tests fail randomly. Version numbers or dates that change from time to time can also fail tests without there being an actual error. To avoid this you can hide such elements in the captured screenshot via CSS, since we add a CSS class called uiTest to the HTML element while tests are running:

    .uiTest .version { visibility:hidden }

    Running a test

    To run the previously generated tests we will use the command tests:run-ui:

    ./console tests:run-ui WidgetizePage

    After running the tests for the first time you will notice a new folder plugins/PLUGINNAME/tests/UI/processed-ui-screenshots in your plugin. If everything worked, there will be an image for every captured screenshot. If you’re happy with the result it is time to copy the file over to the expected-ui-screenshots folder, otherwise you have to adjust your test until you get the result you want. From now on, the newly captured screenshots will be compared with the expected images whenever you execute the tests.

    Fixing a test

    At some point your UI test will fail, for example due to expected CSS changes. To fix a test all you have to do is to copy the captured screenshot from the folder processed-ui-screenshots to the folder expected-ui-screenshots.

    Executing the UI tests on Travis

    In case you have not generated a .travis.yml file for your plugin yet, you can do this by executing the following command:

    ./console generate:travis-yml --plugin PLUGINNAME

    Next you have to activate Travis for your repository.

    Advanced features

    Isn't it easy to create a UI test? We never even created a file! Of course you can accomplish even more if you want. For example, you can specify a fixture to be inserted before running the tests, which is useful when your plugin requires custom data. You can also control the browser as if it were a human: clicking, moving the mouse, typing text, etc. If you want to discover more features, have a look at our existing test cases.

    If you have any feedback regarding our APIs or our guides in the Developer Zone feel free to send it to us.