
Other articles (77)

  • The user profile

    12 April 2011, by

    Each user has a profile page that lets them edit their personal information. In the default top-of-page menu, a menu item is created automatically when MediaSPIP is initialised; it is visible only when the visitor is logged in on the site.
    The user can also reach profile editing from their author page; a "Modify your profile" link in the navigation is (...)

  • Configuring language support

    15 November 2010, by

    Accessing the configuration and adding supported languages
    To configure support for new languages, you need to go to the "Administer" section of the site.
    From there, in the navigation menu, you can reach a "Language management" section that lets you enable support for new languages.
    Each newly added language can still be disabled as long as no object has been created in that language; once one has, it is greyed out in the configuration and (...)

  • XMP PHP

    13 May 2011, by

    According to Wikipedia, XMP stands for:
    Extensible Metadata Platform, or XMP, an XML-based metadata format used in PDF, photography and graphics applications. It was launched by Adobe Systems in April 2001, integrated into version 5.0 of Adobe Acrobat.
    Being based on XML, it manages a set of dynamic tags for use in the context of the Semantic Web.
    XMP makes it possible to record, as an XML document, information about a file: title, author, history (...)

On other sites (6774)

  • Java.lang.UnsatisfiedLinkError : No implementation found for int [duplicate]

    28 March 2017, by Muthukumar Subramaniam

    I ran the YouTube "watch me" Android application project. I just added some classes to my project and built it with the NDK, and I got an error like:

    java.lang.UnsatisfiedLinkError: No implementation found for int com.ephronsystem.mobilizerapp.Ffmpeg.encodeVideoFrame(byte[]) (tried Java_com_ephronsystem_mobilizerapp_Ffmpeg_encodeVideoFrame and Java_com_ephronsystem_mobilizerapp_Ffmpeg_encodeVideoFrame___3B).
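
    In general, this error at call time means that the library loaded by System.loadLibrary("ffmpeg") does not export a native symbol with the JNI-mangled name the VM is looking for; most often the .c file was not actually compiled into that library, or a C++ compiler mangled the name because extern "C" was missing. As a rough, hypothetical illustration (the signature below is inferred from the error message above, not taken from the project), the export the VM expects looks like this:

     /* Hypothetical stand-alone stub: the exact symbol the VM looks for when
        Ffmpeg.encodeVideoFrame(byte[]) is called. It must end up compiled into
        the same libffmpeg.so that System.loadLibrary("ffmpeg") loads. */
     #include <jni.h>

     JNIEXPORT jint JNICALL
     Java_com_ephronsystem_mobilizerapp_Ffmpeg_encodeVideoFrame(JNIEnv *env,
                                                                jobject thiz,
                                                                jbyteArray yuv_image) {
         (void) env; (void) thiz; (void) yuv_image;
         return 0;   /* stub only; a real build would encode the frame here */
     }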

    My code :

    package com.ephronsystem.mobilizerapp;

    public class Ffmpeg {

        static {
           System.loadLibrary("ffmpeg");
       }

       public static native boolean init(int width, int height, int audio_sample_rate, String rtmpUrl);

       public static native void shutdown();

       // Returns the size of the encoded frame.
       public static native int encodeVideoFrame(byte[] yuv_image);

       public static native int encodeAudioFrame(short[] audio_data, int length);
    }

    This is ffmpeg-jni.c

     #include <android/log.h>
     #include <string.h>   /* memmove, memset; header name lost in the original post */
     #include <stdio.h>    /* vsnprintf; header name lost in the original post */
    #include "libavcodec/avcodec.h"
    #include "libavformat/avformat.h"
    #include "libavutil/opt.h"

    #ifdef __cplusplus
    extern "C" {
    #endif

    JNIEXPORT jboolean JNICALL Java_com_ephronsystem_mobilizerapp_Ffmpeg_init(JNIEnv *env, jobject thiz,
                                                                    jint width, jint height,
                                                                    jint audio_sample_rate,
                                                                    jstring rtmp_url);
     JNIEXPORT void JNICALL Java_com_ephronsystem_mobilizerapp_Ffmpeg_shutdown(JNIEnv *env,
                                                                     jobject thiz);
     JNIEXPORT jint JNICALL Java_com_ephronsystem_mobilizerapp_Ffmpeg_encodeVideoFrame(JNIEnv *env,
                                                                                 jobject thiz,
                                                                                 jbyteArray yuv_image);
    JNIEXPORT jint JNICALL Java_com_ephronsystem_mobilizerapp_Ffmpeg_encodeAudioFrame(JNIEnv *env,
                                                                                jobject thiz,
                                                                                jshortArray audio_data,
                                                                                jint length);

    #ifdef __cplusplus
    }
    #endif

    #define LOGI(...) __android_log_print(ANDROID_LOG_INFO, "ffmpeg-jni", __VA_ARGS__)
    #define URL_WRONLY 2
     static AVFormatContext *fmt_context;
     static AVStream *video_stream;
     static AVStream *audio_stream;

     static int pts = 0;
     static int last_audio_pts = 0;

    // Buffers for UV format conversion
    static unsigned char *u_buf;
    static unsigned char *v_buf;

    static int enable_audio = 1;
    static int64_t audio_samples_written = 0;
    static int audio_sample_rate = 0;

    // Stupid buffer for audio samples. Not even a proper ring buffer
    #define AUDIO_MAX_BUF_SIZE 16384  // 2x what we get from Java
    static short audio_buf[AUDIO_MAX_BUF_SIZE];
    static int audio_buf_size = 0;

    void AudioBuffer_Push(const short *audio, int num_samples) {
       if (audio_buf_size >= AUDIO_MAX_BUF_SIZE - num_samples) {
           LOGI("AUDIO BUFFER OVERFLOW: %i + %i > %i", audio_buf_size, num_samples,
                AUDIO_MAX_BUF_SIZE);
           return;
       }
        for (int i = 0; i < num_samples; i++) {
           audio_buf[audio_buf_size++] = audio[i];
       }
    }

    int AudioBuffer_Size() { return audio_buf_size; }

    short *AudioBuffer_Get() { return audio_buf; }

    void AudioBuffer_Pop(int num_samples) {
       if (num_samples > audio_buf_size) {
           LOGI("Audio buffer Pop WTF: %i vs %i", num_samples, audio_buf_size);
           return;
       }
        memmove(audio_buf, audio_buf + num_samples, (audio_buf_size - num_samples) * sizeof(short));  // keep the remaining samples at the front
       audio_buf_size -= num_samples;
    }

    void AudioBuffer_Clear() {
       memset(audio_buf, 0, sizeof(audio_buf));
       audio_buf_size = 0;
    }

    static void log_callback(void *ptr, int level, const char *fmt, va_list vl) {
       char x[2048];
       vsnprintf(x, 2048, fmt, vl);
        LOGI("%s", x);
    }

    JNIEXPORT jboolean JNICALL Java_com_ephronsystem_mobilizerapp_Ffmpeg_init(JNIEnv *env, jobject thiz,
                                                                    jint width, jint height,
                                                                    jint audio_sample_rate_param,
                                                                    jstring rtmp_url) {
       avcodec_register_all();
       av_register_all();
       av_log_set_callback(log_callback);

       fmt_context = avformat_alloc_context();
       AVOutputFormat *ofmt = av_guess_format("flv", NULL, NULL);
       if (ofmt) {
           LOGI("av_guess_format returned %s", ofmt->long_name);
       } else {
           LOGI("av_guess_format fail");
           return JNI_FALSE;
       }

       fmt_context->oformat = ofmt;
       LOGI("creating video stream");
       video_stream = av_new_stream(fmt_context, 0);

       if (enable_audio) {
           LOGI("creating audio stream");
           audio_stream = av_new_stream(fmt_context, 1);
       }

       // Open Video Codec.
       // ======================
       AVCodec *video_codec = avcodec_find_encoder(AV_CODEC_ID_H264);
       if (!video_codec) {
           LOGI("Did not find the video codec");
           return JNI_FALSE;  // leak!
       } else {
           LOGI("Video codec found!");
       }
       AVCodecContext *video_codec_ctx = video_stream->codec;
       video_codec_ctx->codec_id = video_codec->id;
       video_codec_ctx->codec_type = AVMEDIA_TYPE_VIDEO;
       video_codec_ctx->level = 31;

       video_codec_ctx->width = width;
       video_codec_ctx->height = height;
       video_codec_ctx->pix_fmt = PIX_FMT_YUV420P;
       video_codec_ctx->rc_max_rate = 0;
       video_codec_ctx->rc_buffer_size = 0;
       video_codec_ctx->gop_size = 12;
       video_codec_ctx->max_b_frames = 0;
       video_codec_ctx->slices = 8;
       video_codec_ctx->b_frame_strategy = 1;
       video_codec_ctx->coder_type = 0;
       video_codec_ctx->me_cmp = 1;
       video_codec_ctx->me_range = 16;
       video_codec_ctx->qmin = 10;
       video_codec_ctx->qmax = 51;
       video_codec_ctx->keyint_min = 25;
       video_codec_ctx->refs = 3;
       video_codec_ctx->trellis = 0;
       video_codec_ctx->scenechange_threshold = 40;
       video_codec_ctx->flags |= CODEC_FLAG_LOOP_FILTER;
       video_codec_ctx->me_method = ME_HEX;
       video_codec_ctx->me_subpel_quality = 6;
       video_codec_ctx->i_quant_factor = 0.71;
       video_codec_ctx->qcompress = 0.6;
       video_codec_ctx->max_qdiff = 4;
       video_codec_ctx->time_base.den = 10;
       video_codec_ctx->time_base.num = 1;
       video_codec_ctx->bit_rate = 3200 * 1000;
       video_codec_ctx->bit_rate_tolerance = 0;
       video_codec_ctx->flags2 |= 0x00000100;

       fmt_context->bit_rate = 4000 * 1000;

       av_opt_set(video_codec_ctx, "partitions", "i8x8,i4x4,p8x8,b8x8", 0);
       av_opt_set_int(video_codec_ctx, "direct-pred", 1, 0);
       av_opt_set_int(video_codec_ctx, "rc-lookahead", 0, 0);
       av_opt_set_int(video_codec_ctx, "fast-pskip", 1, 0);
       av_opt_set_int(video_codec_ctx, "mixed-refs", 1, 0);
       av_opt_set_int(video_codec_ctx, "8x8dct", 0, 0);
       av_opt_set_int(video_codec_ctx, "weightb", 0, 0);

        if (fmt_context->oformat->flags & AVFMT_GLOBALHEADER)
           video_codec_ctx->flags |= CODEC_FLAG_GLOBAL_HEADER;

       LOGI("Opening video codec");
       AVDictionary *vopts = NULL;
        av_dict_set(&vopts, "profile", "main", 0);
        //av_dict_set(&vopts, "vprofile", "main", 0);
        av_dict_set(&vopts, "rc-lookahead", "0", 0);  // dictionary values are strings
        av_dict_set(&vopts, "tune", "film", 0);
        av_dict_set(&vopts, "preset", "ultrafast", 0);
        av_opt_set(video_codec_ctx->priv_data, "tune", "film", 0);
        av_opt_set(video_codec_ctx->priv_data, "preset", "ultrafast", 0);
        av_opt_set(video_codec_ctx->priv_data, "tune", "film", 0);
        int open_res = avcodec_open2(video_codec_ctx, video_codec, &vopts);
        if (open_res < 0) {
           LOGI("Error opening video codec: %i", open_res);
           return JNI_FALSE;   // leak!
       }

       // Open Audio Codec.
       // ======================

       if (enable_audio) {
           AudioBuffer_Clear();
           audio_sample_rate = audio_sample_rate_param;
           AVCodec *audio_codec = avcodec_find_encoder(AV_CODEC_ID_AAC);
           if (!audio_codec) {
               LOGI("Did not find the audio codec");
               return JNI_FALSE;  // leak!
           } else {
               LOGI("Audio codec found!");
           }
           AVCodecContext *audio_codec_ctx = audio_stream->codec;
           audio_codec_ctx->codec_id = audio_codec->id;
           audio_codec_ctx->codec_type = AVMEDIA_TYPE_AUDIO;
           audio_codec_ctx->bit_rate = 128000;
           audio_codec_ctx->bit_rate_tolerance = 16000;
           audio_codec_ctx->channels = 1;
           audio_codec_ctx->profile = FF_PROFILE_AAC_LOW;
           audio_codec_ctx->sample_fmt = AV_SAMPLE_FMT_FLT;
           audio_codec_ctx->sample_rate = 44100;

           LOGI("Opening audio codec");
           AVDictionary *opts = NULL;
            av_dict_set(&opts, "strict", "experimental", 0);
            open_res = avcodec_open2(audio_codec_ctx, audio_codec, &opts);
            LOGI("audio frame size: %i", audio_codec_ctx->frame_size);

            if (open_res < 0) {
               LOGI("Error opening audio codec: %i", open_res);
               return JNI_FALSE;   // leak!
           }
       }

       const jbyte *url = (*env)->GetStringUTFChars(env, rtmp_url, NULL);

       // Point to an output file
        if (!(ofmt->flags & AVFMT_NOFILE)) {
            if (avio_open(&fmt_context->pb, url, URL_WRONLY) < 0) {
               LOGI("ERROR: Could not open file %s", url);
               return JNI_FALSE;  // leak!
           }
       }
       (*env)->ReleaseStringUTFChars(env, rtmp_url, url);

       LOGI("Writing output header.");
       // Write file header
       if (avformat_write_header(fmt_context, NULL) != 0) {
           LOGI("ERROR: av_write_header failed");
           return JNI_FALSE;
       }

       pts = 0;
       last_audio_pts = 0;
       audio_samples_written = 0;

       // Initialize buffers for UV format conversion
       int frame_size = video_codec_ctx->width * video_codec_ctx->height;
       u_buf = (unsigned char *) av_malloc(frame_size / 4);
       v_buf = (unsigned char *) av_malloc(frame_size / 4);

       LOGI("ffmpeg encoding init done");
       return JNI_TRUE;
    }

     JNIEXPORT void JNICALL Java_com_ephronsystem_mobilizerapp_Ffmpeg_shutdown(JNIEnv *env,
                                                                     jobject thiz) {
        av_write_trailer(fmt_context);
        avio_close(fmt_context->pb);
        avcodec_close(video_stream->codec);
        if (enable_audio) {
            avcodec_close(audio_stream->codec);
        }
        av_free(fmt_context);
        av_free(u_buf);
        av_free(v_buf);

        fmt_context = NULL;
        u_buf = NULL;
        v_buf = NULL;
     }

     JNIEXPORT jint JNICALL Java_com_ephronsystem_mobilizerapp_Ffmpeg_encodeVideoFrame(JNIEnv *env,
                                                                                 jobject thiz,
                                                                                 jbyteArray yuv_image) {
        int yuv_length = (*env)->GetArrayLength(env, yuv_image);
        unsigned char *yuv_data = (*env)->GetByteArrayElements(env, yuv_image, 0);

        AVCodecContext *video_codec_ctx = video_stream->codec;
        //LOGI("Yuv size: %i w: %i h: %i", yuv_length, video_codec_ctx->width, video_codec_ctx->height);

        int frame_size = video_codec_ctx->width * video_codec_ctx->height;

        const unsigned char *uv = yuv_data + frame_size;

        // Convert YUV from NV12 to I420. Y channel is the same so we don't touch it,
        // we just have to deinterleave UV.
        for (int i = 0; i < frame_size / 4; i++) {
            v_buf[i] = uv[i * 2];
            u_buf[i] = uv[i * 2 + 1];
        }

        AVFrame source;
        memset(&source, 0, sizeof(AVFrame));
        source.data[0] = yuv_data;
        source.data[1] = u_buf;
        source.data[2] = v_buf;
        source.linesize[0] = video_codec_ctx->width;
        source.linesize[1] = video_codec_ctx->width / 2;
        source.linesize[2] = video_codec_ctx->width / 2;

        // only for bitrate regulation. irrelevant for sync.
        source.pts = pts;
        pts++;

        int out_length = frame_size + (frame_size / 2);
        unsigned char *out = (unsigned char *) av_malloc(out_length);
        int compressed_length = avcodec_encode_video(video_codec_ctx, out, out_length, &source);

        (*env)->ReleaseByteArrayElements(env, yuv_image, yuv_data, 0);

        // Write to file too
        if (compressed_length > 0) {
            AVPacket pkt;
            av_init_packet(&pkt);
            pkt.pts = last_audio_pts;
            if (video_codec_ctx->coded_frame && video_codec_ctx->coded_frame->key_frame) {
                pkt.flags |= 0x0001;
            }
            pkt.stream_index = video_stream->index;
            pkt.data = out;
            pkt.size = compressed_length;
            if (av_interleaved_write_frame(fmt_context, &pkt) != 0) {
                LOGI("Error writing video frame");
            }
        } else {
            LOGI("??? compressed_length <= 0");
        }

        last_audio_pts++;

        av_free(out);
        return compressed_length;
     }

     JNIEXPORT jint JNICALL Java_com_ephronsystem_mobilizerapp_Ffmpeg_encodeAudioFrame(JNIEnv *env,
                                                                                 jobject thiz,
                                                                                 jshortArray audio_data,
                                                                                 jint length) {
        if (!enable_audio) {
            return 0;
        }

        short *audio = (*env)->GetShortArrayElements(env, audio_data, 0);
        //LOGI("java audio buffer size: %i", length);

        AVCodecContext *audio_codec_ctx = audio_stream->codec;

        unsigned char *out = av_malloc(128000);

        AudioBuffer_Push(audio, length);

        int total_compressed = 0;
        while (AudioBuffer_Size() >= audio_codec_ctx->frame_size) {
            AVPacket pkt;
            av_init_packet(&pkt);

            int compressed_length = avcodec_encode_audio(audio_codec_ctx, out, 128000,
                                                         AudioBuffer_Get());

            total_compressed += compressed_length;
            audio_samples_written += audio_codec_ctx->frame_size;

            int new_pts = (audio_samples_written * 1000) / audio_sample_rate;
            if (compressed_length > 0) {
                pkt.size = compressed_length;
                pkt.pts = new_pts;
                last_audio_pts = new_pts;
                //LOGI("audio_samples_written: %i  comp_length: %i   pts: %i", (int)audio_samples_written, (int)compressed_length, (int)new_pts);
                pkt.flags |= 0x0001;
                pkt.stream_index = audio_stream->index;
                pkt.data = out;
                if (av_interleaved_write_frame(fmt_context, &pkt) != 0) {
                    LOGI("Error writing audio frame");
                }
            }
            AudioBuffer_Pop(audio_codec_ctx->frame_size);
        }

        (*env)->ReleaseShortArrayElements(env, audio_data, audio, 0);

        av_free(out);
        return total_compressed;
     }

  • The use cases for a <main> element in HTML

    1 January 2014, by silvia

    The W3C HTML WG and the WHATWG are currently discussing the introduction of a <main> element into HTML.

    The <main> element has been proposed by Steve Faulkner and is specified in a draft extension spec which is about to be accepted as a FPWD (first public working draft) by the W3C HTML WG. This implies that the W3C HTML WG will be looking for implementations and for feedback by implementers on this spec.

    I am supportive of the introduction of a <main> element into HTML. However, I believe that the current spec and use case list don’t make a good enough case for its introduction. Here are my thoughts.

    Main use case : accessibility

    In my opinion, the main use case for the introduction of <main> is accessibility.

    Like any other users, when blind users want to perceive a Web page/application, they need to have a quick means of grasping the content of a page. Since they cannot visually scan the layout and thus determine where the main content is, they use accessibility technology (AT) to find what is known as “landmarks”.

    “Landmarks” tell the user what semantic content is on a page : a header (such as a banner), a search box, a navigation menu, some asides (also called complementary content), a footer, …. and the most important part : the main content of the page. It is this main content that a blind user most often wants to skip to directly.

    In the days of HTML4, a hidden “skip to content” link at the beginning of the Web page was used as a means to help blind users access the main content.

    In the days of ARIA, the aria @role=main enables authors to avoid a hidden link and instead mark the element where the main content begins to allow direct access to the main content. This attribute is supported by AT – in particular screen readers – by making it part of the landmarks that AT can directly skip to.

    Both the hidden link and the ARIA @role=main approaches are, however, band aids : they are being used by those of us that make “finished” Web pages accessible by adding specific extra markup.

    A world where ARIA is not necessary and where accessibility developers would be out of a job because the normal markup that everyone writes already creates accessible Web sites/applications would be much preferable over the current world of band-aids.

    Therefore, to me, the primary use case for a <main> element is to achieve exactly this better world and not require specialized markup to tell a user (or a tool) where the main content on a page starts.

    An immediate effect would be that pages that have a <main> element will expose a “main” landmark to blind and vision-impaired users that will enable them to directly access that main content on the page without having to wade through other text on the page. Without a <main> element, this functionality can currently only be provided using heuristics to skip other semantic and structural elements and is for this reason not typically implemented in AT.

    Other use cases

    The <main> element is a semantic element not unlike other new semantic elements such as <header>, <footer>, <aside>, <article>, <nav>, or <section>. Thus, it can also serve other uses where the main content on a Web page/Web application needs to be identified.

    Data mining

    For data mining of Web content, the identification of the main content is one of the key challenges. Many scholarly articles have been published on this topic. This stackoverflow article references and suggests a multitude of approaches, but the accepted answer says “there’s no way to do this that’s guaranteed to work”. This is because Web pages are inherently complex and many <div>, <p>, <iframe> and other elements are used to provide markup for styling, notifications, ads, analytics and other use cases that are necessary to make a Web page complete, but don’t contribute to what a user consumes as semantically rich content. A <main> element will allow authors to pro-actively direct data mining tools to the main content.

    Search engines

    Search engines are one particularly important kind of “data mining” tool. They, too, have a hard time identifying which sections of a Web page are more important than others and employ many heuristics to do so, see e.g. this ACM article. Yet they still disappoint with poor results, pointing to keyword matches in barely relevant sections of a page rather than ranking Web pages higher where the keywords turn up in the main content area. A <main> element would be able to help search engines give text in main content areas a higher weight and prefer them over other areas of the Web page. It would be able to rank different Web pages depending on where on the page the search words are found. The <main> element will be an additional hint that search engines will digest.

    Visual focus

    On small devices, the display of Web pages designed for Desktop often causes confusion as to where the main content can be found and read, in particular when the text ends up being too small to be readable. It would be nice if browsers on small devices had a functionality (maybe a default setting) where Web pages would start being displayed as zoomed in on the main content. This could alleviate some of the headaches of responsive Web design, where the recommendation is to show high priority content as the first content. Right now this problem is addressed through stylesheets that re-layout the page differently depending on device, but again this is a band-aid solution. Explicit semantic markup of the main content can solve this problem more elegantly.

    Styling

    Finally, naturally, <main> would also be used to style the main content differently from the rest. You can e.g. replace a semantically meaningless <div id=”main”> with a semantically meaningful <main> where their position is identical. My analysis below shows that this is not always the case, since oftentimes <div id=”main”> is used to group everything together that is not the header – in particular where there are multiple columns. Thus, the ease of styling a <main> element is only a positive side effect and not actually a real use case. It does make it easier, however, to adapt the style of the main content e.g. with media queries.

    Proposed alternative solutions

    It has been proposed that existing markup serves to satisfy the use cases that <main> has been proposed for. Let’s analyse these on some of the most popular Web sites. First, let’s list the proposed algorithms.

    Proposed solution No 1 : Scooby-Doo

    On Sat, Nov 17, 2012 at 11:01 AM, Ian Hickson <ian@hixie.ch> wrote :
    | The main content is whatever content isn’t
    | marked up as not being main content (anything not marked up with <header>,
    | <aside>, <nav>, etc).
    

    This implies that the first element that is not a <header>, <aside>, <nav>, or <footer> will be the element that we want to give to a blind user as the location where they should start reading. The algorithm is implemented in https://gist.github.com/4032962.

    Proposed solution No 2 : First article element

    On Sat, Nov 17, 2012 at 8:01 AM, Ian Hickson  wrote :
    | On Thu, 15 Nov 2012, Ian Yang wrote :
    | >
    | > That’s a good idea. We really need an element to wrap all the <p>s,
    | > <ul>s, <ol>s, <figure>s, <table>s ... etc of a blog post.
    |
    | That’s called <article>.
    

    This approach identifies the first <article> element on the page as containing the main content. Here’s the algorithm for this approach.

    Proposed solution No 3 : An example heuristic approach

    The readability plugin has been developed to make Web pages readable by essentially removing all the non-main content from a page. An early source of readability is available. This demonstrates what a heuristic approach can achieve.

    Analysing alternative solutions

    Comparison

    I’ve picked 4 typical Websites (top on Alexa) to analyse how these three different approaches fare. Ideally, I’d like to simply apply the above three scripts and compare pictures. However, since the semantic HTML5 elements <header>, <aside>, <nav>, and <footer> are not actually used by any of these Web sites, I don’t actually have this choice.

    So, instead, I decided to make some assumptions of where these semantic elements would be used and what the outcome of applying the first two algorithms would be. I can then compare it to the third, which is a product so we can take screenshots.

    Google.com

    http://google.com – search for “Scooby Doo”.

    The search results page would likely be built with :

    • a <nav> menu for the Google bar
    • a <header> for the search bar
    • another <header> for the login section
    • another <nav> menu for the search types
    • a <div> to contain the rest of the page
    • a <div> for the app bar with the search number
    • a few <aside>s for the left and right column
    • a set of <article>s for the search results
    “Scooby Doo” would find the first element after the headers as the “main content”. This is the element before the app bar in this case. Interestingly, there is a <div @id=main> already in the current Google results page, which “Scooby Doo” would likely also pick. However, there are a nav bar and two asides in this div, which clearly should not be part of the “main content”. Google actually placed a @role=main on a different element, namely the one that encapsulates all the search results.

    “First Article” would find the first search result as the “main content”. While not quite the same as what Google intended – namely all search results – it is close enough to be useful.

    The “readability” result is interesting, since it is not able to identify the main text on the page. It is actually aware of this problem and brings a warning before displaying this page :

    Readability of google.com

    Facebook.com

    https://facebook.com

    A user page would likely be built with :

    • a <header> bar for the search and login bar
    • a <div> to contain the rest of the page
    • an <aside> for the left column
    • a <div> to contain the center and right column
    • an <aside> for the right column
    • a <header> to contain the center column “megaphone”
    • a <div> for the status posting
    • a set of <article>s for the home stream
    “Scooby Doo” would find the first element after the headers as the “main content”. This is the element that contains all three columns. It’s actually a <div @id=content> already in the current Facebook user page, which “Scooby Doo” would likely also pick. However, Facebook selected a different element to place the @role=main : the center column.

    “First Article” would find the first news item in the home stream. This is clearly not what Facebook intended, since they placed the @role=main on the center column, above the first blog post’s title. “First Article” would miss that title and the status posting.

    The “readability” result again disappoints but warns that it failed :

    YouTube.com

    http://youtube.com

    A video page would likely be built with :

    • a <header> bar for the search and login bar
    • a <nav> for the menu
    • a <div> to contain the rest of the page
    • a <header> for the video title and channel links
    • a <div> to contain the video with controls
    • a <div> to contain the center and right column
    • an <aside> for the right column with an <article> per related video
    • an <aside> for the information below the video
    • a <article> per comment below the video
    “Scooby Doo” would find the first element after the headers as the “main content”. This is the element that contains the rest of the page. It’s actually a <div @id=content> already in the current YouTube video page, which “Scooby Doo” would likely also pick. However, YouTube’s related videos and comments are unlikely to be what the user would regard as “main content” – it’s the video they are after, which generously has a <div id=watch-player>.

    “First Article” would find the first related video or comment in the home stream. This is clearly not what YouTube intends.

    The “readability” result is not quite as unusable, but still very bare :

    Wikipedia.com

    http://wikipedia.com (“Overscan” page)

    A Wikipedia page would likely be built with :

    • a <header> bar for the search, login and menu items
    • a <div> to contain the rest of the page
    • an <article> with title and lots of text
    • inside the <article> an <aside> with the table of contents
    • several <aside>s for the left column
    Good news : “Scooby Doo” would find the first element after the headers as the “main content”. This is the element that contains the rest of the page. It’s actually a <div id=”content” role=”main”> element on Wikipedia, which “Scooby Doo” would likely also pick.

    “First Article” would find the title and text of the main element on the page, but it would also include an <aside>.

    The “readability” result is also in agreement.

    Results

    In the following table we have summarised the results for the experiments :

    Site            Scooby-Doo   First article   Readability
    Google.com      FAIL         SUCCESS         FAIL
    Facebook.com    FAIL         FAIL            FAIL
    YouTube.com     FAIL         FAIL            FAIL
    Wikipedia.com   SUCCESS      SUCCESS         SUCCESS

    Clearly, Wikipedia is the prime example of a site where even the simple approaches find it easy to determine the main content on the page. WordPress blogs are similarly successful. Almost all other sites, including news sites, social networks and search engine sites, are pretty hopeless with the proposed approaches, because there are too many elements that are used for layout or other purposes (notifications, hidden areas), such that the pre-determined list of available semantic elements simply doesn’t suffice to mark up a Web page/application completely.

    Conclusion

    It seems that in general it is impossible to determine which element(s) on a Web page should be the “main” piece of content that accessibility tools jump to when requested, that a search engine should put their focus on, or that should be highlighted to a general user to read. It would be very useful if the author of the Web page would provide a hint through a <main> element where that main content is to be found.

    I think that the <main> element becomes particularly useful when combined with a default keyboard shortcut in browsers as proposed by Steve : we may actually find that non-accessibility users will also start making use of this shortcut, e.g. to get to videos on YouTube pages directly without having to tab over search boxes and other interactive elements, etc. Worthwhile markup indeed.

  • ffmpeg get RGB values ?

    13 November 2012, by Nav

    I've seen this, this and this but I still can't figure out how to get the RGB values from the tutorial code.

    if(avcodec_open(pCodecCtx, pCodec)<0) {_getch();return -1;}
    pFrame = avcodec_alloc_frame();
    pFrameRGB=avcodec_alloc_frame();
    if(pFrameRGB==NULL)    {_getch();return -1;}
    numBytes=avpicture_get_size(PIX_FMT_RGB24, pCodecCtx->width, pCodecCtx->height);
    buffer=(uint8_t *)av_malloc(numBytes*sizeof(uint8_t));
    // Assign appropriate parts of buffer to image planes in pFrameRGB
    avpicture_fill((AVPicture *)pFrameRGB, buffer, PIX_FMT_RGB24, pCodecCtx->width, pCodecCtx->height);

    i=0;
    int framecounter=0;
    while(av_read_frame(pFormatCtx, &packet)>=0)
    {
     if(packet.stream_index==videoStream)
       {
         // Decode video frame
         //int avcodec_decode_video2(AVCodecContext *avctx, AVFrame *picture, int *got_picture_ptr, const AVPacket *avpkt);
         avcodec_decode_video2(pCodecCtx, pFrame, &frameFinished, &packet);

         if(frameFinished)
         {
            // Convert the image from its native format to RGB
        //img_convert((AVPicture *)pFrameRGB, PIX_FMT_RGB24, (AVPicture*)pFrame, pCodecCtx->pix_fmt, pCodecCtx->width, pCodecCtx->height);
            if(++i<=5)  {SaveFrame(pFrameRGB, pCodecCtx->width, pCodecCtx->height, i);}
         }
       }
     av_free_packet(&packet);
    }

    Since img_convert wasn't available, I took it from here but it threw an unhandled exception error. Why is the RGB format so tough to understand ? Why assign zero to G and B of RGB, as shown here ?
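
    For reference, in the FFmpeg versions this tutorial code roughly corresponds to, img_convert was removed in favour of libswscale. A minimal sketch of that conversion step (using the variables from the snippet above; the sws_* calls are the standard libswscale API, so treat the exact placement as an assumption) could look like:

     #include "libswscale/swscale.h"

     // Created once, after the codec context is opened: converts decoded frames
     // from the decoder's pixel format to packed RGB24.
     struct SwsContext *sws_ctx = sws_getContext(
             pCodecCtx->width, pCodecCtx->height, pCodecCtx->pix_fmt,
             pCodecCtx->width, pCodecCtx->height, PIX_FMT_RGB24,
             SWS_BILINEAR, NULL, NULL, NULL);

     // Inside the if(frameFinished) block, where img_convert used to be called:
     sws_scale(sws_ctx,
               (const uint8_t * const *) pFrame->data, pFrame->linesize,
               0, pCodecCtx->height,
               pFrameRGB->data, pFrameRGB->linesize);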

    How can I get my RGB values as a simple R, G and B which spans between 0 and 255 ? Also, I don't understand the use of x and y, when video width and height are already given. y is an increment of height, but what about x ?

    Can anyone please show me a complete working code of obtaining rgb values from a video or at least explain how data and linesize help in storing RGB info ? The lack of tutorials is appalling and the complexity of the API is surprising, after having worked on Processing videos.
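
    As a rough illustration of how data and linesize hold the pixels once pFrameRGB has actually been filled (for example by the sws_scale sketch above): data[0] is a single packed byte array, each row starts linesize[0] bytes after the previous one, and each RGB24 pixel takes three consecutive bytes, so y selects the row and x selects the pixel within that row:

     // Illustrative only: walk an RGB24 frame and read each pixel's channels.
     // linesize[0] is the stride of one row in bytes (it may be larger than
     // width*3 because of padding), so pixel (x, y) starts at y*linesize[0] + 3*x.
     for (int y = 0; y < pCodecCtx->height; y++) {
         uint8_t *row = pFrameRGB->data[0] + y * pFrameRGB->linesize[0];
         for (int x = 0; x < pCodecCtx->width; x++) {
             uint8_t r = row[3 * x];       // red,   0..255
             uint8_t g = row[3 * x + 1];   // green, 0..255
             uint8_t b = row[3 * x + 2];   // blue,  0..255
             // use r, g, b here
         }
     }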