Advanced search

Media (91)

Other articles (26)

  • Customising categories

    21 June 2013

    Category creation form
    For those who know SPIP well, a category can be thought of as a section (rubrique).
    For a document of type category, the fields offered by default are: Texte
    This form can be modified under:
    Administration > Configuration des masques de formulaire.
    For a document of type media, the fields not displayed by default are: Descriptif rapide
    It is also in this configuration section that one can specify the (...)

  • The plugin: Podcasts.

    14 July 2010

    The problem of podcasting is, once again, one that highlights the standardisation of data transport on the Internet.
    Two interesting formats exist: the one developed by Apple, strongly geared toward iTunes, whose SPEC is here; and the "Media RSS Module" format, which is more "open" and is notably backed by Yahoo and the Miro software;
    File types supported in the feeds
    Apple's format only allows the following formats in its feeds: .mp3 audio/mpeg .m4a audio/x-m4a .mp4 (...)

  • Selection of projects using MediaSPIP

    2 May 2011

    The examples below are representative of how MediaSPIP is used for specific projects.
    MediaSPIP farm @ Infini
    The non-profit organization Infini develops hospitality activities, an Internet access point, training, innovative projects in the field of information and communication technologies, and website hosting. It plays a unique and prominent role in the Brest (France) area and at the national level, among the half-dozen associations of this kind. Its members (...)

On other sites (4424)

  • Q&A : An interview with Matomo founder, Matthieu Aubry

    20 November 2018, by Joselyn Khor — About, Community

    Hey everyone! Joselyn here. As always, the views of our community remain top of mind. So to make sure you guys know the thinking behind these new projects, we reached out to Matomo’s founder, Matthieu, to ask questions you might want answered. Please check it out below!

    Hi guys, it’s Matthieu! Here to answer some questions about the rebrand and the future of Matomo and Innocraft.

    What’s upcoming?

    We’ve been busy implementing our rebrand into all aspects of Matomo and there’s also our new website, which is launching today! The new website will help people better understand what Matomo is and how they can benefit from using modern web analytics.

    Why were Matomo and Innocraft brought onto one website?

    In the past the separation caused a bit of confusion, so we’re taking this as a chance to unite both the business brand, Innocraft, and the community brand, Matomo, on one website. Putting our focus on one brand, Matomo, makes it easier for people to see us with fresh eyes. We have a community side as well as a business side, and while the community is still incredibly important to us, we find we have a powerful analytics tool that is capable of helping businesses too.

    Is Matomo becoming commercial or turning corporate?

    No, nothing is changing. Matomo is still an open-source project and community. Although we’ll have a pricing page and a “start free trial” button on the new website, brought over from Innocraft.cloud, the Matomo community will still play the biggest part on the Matomo website. We have dedicated sections focused on Community and On-Premise.

    The rebrand exercise helped us gain a refreshed perspective. After reflecting on how far we’ve come, we can feel more confident about Matomo Analytics itself as a platform. We believe it’s a great chance to bring that confidence into the brand and vision. We are proud that it’s an awesome open-source platform and at the same time it’s also powerful as a tool for businesses.

    Why is there no ‘download for free’ button on the homepage?


    Matomo CTA simplified
    We feel many users coming to the site will get confused about our hosting options (Cloud and On-Premise), which is something you don’t usually have to consider when choosing an analytics tool.

    The reason we don’t have that button is that when people see a “download for free” button on the homepage next to a “try it for free” button, it creates confusion. Those who do choose to download Matomo often become confused when they are left with a .zip file, unaware of how to install it or of the technical requirements of self-hosting. We feel that presenting our users with the simplest installation option first will give them the best possible chance to try Matomo to its full potential, without cost.

    And you can still find the link to Download Matomo in the footer of each page.


    Is Matomo still free to download and keep forever?

    Absolutely. The free open-source download can be found in the On-Premise section of the website, or you can download Matomo here.

    Why is it important to have a business behind the project?

    There’s the reality that we have to make money in order for the Matomo project to survive … and thrive. The reason we still need a business side (Innocraft) is to fund and sustain the Matomo project. Whenever people purchase premium features, this helps finance the development of Matomo for our community.

    Because of the business we’re able to continually maintain and develop Matomo for you guys as well as future users. For example, the next release, Matomo 3.8.0, is already mostly developed and will bring lots of interesting features too, like two-factor authentication, Brute Force Protection, failed tracking request reporting, lots of JavaScript tracker improvements, a new total summary row below reports, and many more security fixes, bug fixes, and other new features.

    So we see a business being very helpful in supporting our open-source community. Without a business side, our free, open-source project would not be able to survive.

    How will you protect the Matomo project?

    We’ve ensured the Matomo project will be protected for the future as we wish to turn it into a not-for-profit foundation.

    We’ve also got a safeguard where the open-source code will stay under a GPL license forever. This is so we can guarantee that, no matter what happens, the Matomo project itself will stay completely free software.

    Is there a way for people to help?

    There are heaps of ways to help! You can help other Matomo users in the forums, contribute fixes on GitHub, leave a great review (e.g. alternativeTo), help look for bugs with our Security Bounty Programme, or participate and spread the word about Matomo on our community social media pages – Mastodon, Facebook, Twitter. Telling your friends about us would be very helpful too!

    What’s planned for the future?
    We’ve worked hard to become the #1 open-source analytics platform (1.4 million websites use Matomo today), but now we need to empower even more individuals and businesses to take back control of their own data.

    Showing our community that we have a powerful platform is crucial, but alongside that our values are what define us. User privacy is still of utmost importance and we’re here to make it known that power needs to rest in the hands of people and not large corporations.

    You can rest easy knowing you’re doing your part in using trustworthy and dependable tools. By joining many other companies who are growing this movement to decentralise the Internet, we can build a safer, online world together.

    Join this analytics revolution and let us know what you think about Matomo!

  • FFMPEG: AV out of sync when writing a part of a video to a new file

    30 March 2017, by IT_Layman

    I’m developing a data preprocessing program for a computer vision project using FFmpeg and a face detection API. In this program, I need to extract the shots that contain human faces from a given input video file and write them into a new file. But when I played the output video file generated by the program, the video and audio tracks were out of sync. I think a possible reason is that the timestamps of the video or audio frames are set incorrectly, but I can’t fix it myself as I’m not very familiar with the FFmpeg library. Please help me solve this out-of-sync issue; a timestamp-handling sketch follows the code below.

    To simplify the code shown below, I have removed all face detection code and use an empty function called faceDetect to represent it instead.

    // ffmpegAPI.cpp : Defines the entry point for the console application.
    //
       #include "stdafx.h"
       #include <iostream>

       extern "C" {
       #include <libavutil/opt.h>
       #include <libavcodec/avcodec.h>
       #include <libavformat/avformat.h>
       #include <libavutil/avutil.h>
       #include <libavutil/channel_layout.h>
       #include <libavutil/common.h>
       #include <libavutil/imgutils.h>
       #include <libavutil/mathematics.h>
       #include <libavutil/samplefmt.h>
       #include <libavutil/pixdesc.h>
       #include <libswscale/swscale.h>

    }
    bool faceDetect(AVFrame *frame)
    {
       /*...*/
       return true;
    }
    int main(int argc, char **argv)
    {
       int64_t videoPts = 0, audioPts = 0;
       int samples_count = 0;
       AVFormatContext *ifmt_ctx = NULL, *ofmt_ctx = NULL;
       AVOutputFormat *ofmt = NULL;
       AVPacket pkt;
       AVFrame *frame = NULL;
       int videoindex = -1; int audioindex = -1;
       double videoTime = DBL_MAX;
       const char *in_filename, *out_filename;
       int ret, i;
       in_filename = "C:\\input.flv";//Input file name
       out_filename = "C:\\output.avi";//Output file name
       av_register_all();
       //Open input file
       if ((ret = avformat_open_input(&ifmt_ctx, in_filename, 0, 0)) < 0) {
           fprintf(stderr, "Could not open input file '%s'", in_filename);
           goto end;
       }
       //Find input streams
       if ((ret = avformat_find_stream_info(ifmt_ctx, 0)) < 0) {
           fprintf(stderr, "Failed to retrieve input stream information");
           goto end;
       }
       //Retrieve AV stream information
       for (i = 0; i < ifmt_ctx->nb_streams; i++)
       {
           AVStream *stream;
           AVCodecContext *codec_ctx;
           stream = ifmt_ctx->streams[i];//Get current stream
           codec_ctx = stream->codec;//Get current stream codec
           if (codec_ctx->codec_type == AVMEDIA_TYPE_VIDEO)
           {
               videoindex = i;//video stream index
           }
           else if (codec_ctx->codec_type == AVMEDIA_TYPE_AUDIO)
           {
               audioindex = i;//audio stream index
           }
           if (videoindex == -1)//no video stream is found
           {
               printf("can't find video stream\n");
               goto end;

           }
       }
       av_dump_format(ifmt_ctx, 0, in_filename, 0);
       //Configure output
       avformat_alloc_output_context2(&ofmt_ctx, NULL, NULL, out_filename);
       if (!ofmt_ctx) {
           fprintf(stderr, "Could not create output context\n");
           ret = AVERROR_UNKNOWN;
           goto end;
       }
       ofmt = ofmt_ctx->oformat;
       //Configure output streams
       for (i = 0; i < ifmt_ctx->nb_streams; i++) {//Traverse input streams
           AVStream *in_stream = ifmt_ctx->streams[i];//Get current stream
           AVStream *out_stream = avformat_new_stream(ofmt_ctx, in_stream->codec->codec);//Create a corresponding output stream
           if (!out_stream) {
               fprintf(stderr, "Failed allocating output stream\n");
               ret = AVERROR_UNKNOWN;
               goto end;
           }
           //Copy codec from current input stream to corresponding output stream
           ret = avcodec_copy_context(out_stream->codec, in_stream->codec);
           if (ret < 0) {
               fprintf(stderr, "Failed to copy context from input to output stream codec context\n");
               goto end;
           }
           if (i == videoindex)//Video stream
           {
               if (out_stream->codec->codec_id == AV_CODEC_ID_H264)
               {
                   out_stream->codec->me_range = 16;
                   out_stream->codec->max_qdiff = 4;
                   out_stream->codec->qmin = 10;
                   out_stream->codec->qmax = 51;
                   out_stream->codec->qcompress = 1;

               }
           }
           AVCodecContext *codec_ctx = out_stream->codec;
           if (codec_ctx->codec_type == AVMEDIA_TYPE_VIDEO
               || codec_ctx->codec_type == AVMEDIA_TYPE_AUDIO) {
               //Find codec encoder
               AVCodec *encoder = avcodec_find_encoder(codec_ctx->codec_id);
               if (!encoder) {
                   av_log(NULL, AV_LOG_FATAL, "Necessary encoder not found\n");
                   ret = AVERROR_INVALIDDATA;
                   goto end;
               }
               //Open encoder
               ret = avcodec_open2(codec_ctx, encoder, NULL);
               if (ret < 0) {
                   av_log(NULL, AV_LOG_ERROR, "Cannot open video encoder for stream #%u\n", i);
                   goto end;
               }
               out_stream->codec->codec_tag = 0;
               if (ofmt_ctx->oformat->flags & AVFMT_GLOBALHEADER)
                   out_stream->codec->flags |= AV_CODEC_FLAG_GLOBAL_HEADER;
           }
           //Open the decoder for input stream
           codec_ctx = in_stream->codec;
           if (codec_ctx->codec_type == AVMEDIA_TYPE_VIDEO
               || codec_ctx->codec_type == AVMEDIA_TYPE_AUDIO) {
               ret = avcodec_open2(codec_ctx,
                   avcodec_find_decoder(codec_ctx->codec_id), NULL);
               if (ret < 0) {
                   av_log(NULL, AV_LOG_ERROR, "Failed to open decoder for stream #%u\n", i);
               }
           }
       }
       av_dump_format(ofmt_ctx, 0, out_filename, 1);
       //Open output file for writing
       if (!(ofmt->flags & AVFMT_NOFILE)) {
           ret = avio_open(&ofmt_ctx->pb, out_filename, AVIO_FLAG_WRITE);
           if (ret < 0) {
               fprintf(stderr, "Could not open output file '%s'", out_filename);
               goto end;
           }
       }

       //Write video header
       ret = avformat_write_header(ofmt_ctx, NULL);
       if (ret < 0) {
           fprintf(stderr, "Error occurred when opening output file\n");
           goto end;
       }
       //Write frames in a loop
       while (1) {
           AVStream *in_stream, *out_stream;
           //Read one frame from the input file
           ret = av_read_frame(ifmt_ctx, &pkt);
           if (ret < 0)
               break;
           in_stream = ifmt_ctx->streams[pkt.stream_index];//Get current input stream
           out_stream = ofmt_ctx->streams[pkt.stream_index];//Get current output stream
           if (pkt.stream_index == videoindex)//video frame
           {
               int got_frame;
               frame = av_frame_alloc();
               if (!frame) {
                   ret = AVERROR(ENOMEM);
                   break;
               }
               //Readjust packet timestamp for decoding
               av_packet_rescale_ts(&pkt,
                   in_stream->time_base,
                   in_stream->codec->time_base);
               //Decode video frame
               int len = avcodec_decode_video2(in_stream->codec, frame, &got_frame, &pkt);
               if (len < 0)
               {
                   av_frame_free(&frame);
                   av_log(NULL, AV_LOG_ERROR, "Decoding failed\n");
                   break;
               }
               if (got_frame)//Got a decoded video frame
               {
                   int64_t pts = av_frame_get_best_effort_timestamp(frame);
                   //determine if the frame image contains human face
                   bool result = faceDetect(frame);
                   if (result) //face contained
                   {
                       videoTime = pts* av_q2d(out_stream->time_base);
                       frame->pts = videoPts++;//Set pts of video frame
                       AVPacket enc_pkt;
                       av_log(NULL, AV_LOG_INFO, "Encoding video frame\n");
                       //Create packet for encoding
                       enc_pkt.data = NULL;
                       enc_pkt.size = 0;
                        av_init_packet(&enc_pkt);
                        //Encoding frame
                        ret = avcodec_encode_video2(out_stream->codec, &enc_pkt,
                            frame, &got_frame);
                        av_frame_free(&frame);
                       if (!(got_frame))
                           ret = 0;
                       /* Configure encoding properties */
                       enc_pkt.stream_index = videoindex;
                        av_packet_rescale_ts(&enc_pkt,
                            out_stream->codec->time_base,
                            out_stream->time_base);
                        av_log(NULL, AV_LOG_DEBUG, "Muxing frame\n");
                        /* Write encoded frame */
                        ret = av_interleaved_write_frame(ofmt_ctx, &enc_pkt);
                        if (ret < 0)
                           break;
                   }
                   else //no face contained
                   {
                       //Set the videoTime as maximum double value,
                       //making the corresponding audio frame not been processed
                        if (videoTime < DBL_MAX)
                           videoTime = DBL_MAX;
                   }

               }
               else
               {
                    av_frame_free(&frame);
               }
           }
           else//Audio frame
           {
               //Get current frame time
               double audioTime = pkt.pts * av_q2d(in_stream->time_base);
               if (audioTime >= videoTime)
               {//The current frame should be written into output file
                   int got_frame;
                   frame = av_frame_alloc();
                   if (!frame) {
                       ret = AVERROR(ENOMEM);
                       break;
                   }
                   //Readjust packet timestamp for decoding
                    av_packet_rescale_ts(&pkt,
                        in_stream->time_base,
                        in_stream->codec->time_base);
                    //Decode audio frame
                    int len = avcodec_decode_audio4(in_stream->codec, frame, &got_frame, &pkt);
                    if (len < 0)
                    {
                        av_frame_free(&frame);
                       av_log(NULL, AV_LOG_ERROR, "Decoding failed\n");
                       break;
                   }
                   if (got_frame)//Got a decoded audio frame
                   {
                       //Set pts of audio frame
                       frame->pts = audioPts;
                       audioPts += frame->nb_samples;
                       AVPacket enc_pkt;
                       av_log(NULL, AV_LOG_INFO, "Encoding audio frame");
                       //Create packet for encoding
                       enc_pkt.data = NULL;
                       enc_pkt.size = 0;
                        av_init_packet(&enc_pkt);
                        //Encode audio frame
                        ret = avcodec_encode_audio2(out_stream->codec, &enc_pkt,
                            frame, &got_frame);
                        av_frame_free(&frame);
                       if (!(got_frame))
                           ret = 0;
                       /* Configure encoding properties */
                       enc_pkt.stream_index = audioindex;
                        av_packet_rescale_ts(&enc_pkt,
                            out_stream->codec->time_base,
                            out_stream->time_base);
                        av_log(NULL, AV_LOG_DEBUG, "Muxing frame\n");
                        /* Write encoded frame */
                        ret = av_interleaved_write_frame(ofmt_ctx, &enc_pkt);
                        if (ret < 0)
                           break;
                   }
                   else //Shouldn't be written
                   {
                        av_frame_free(&frame);
                   }
               }
           }
            av_packet_unref(&pkt);
       }
       //Write video trailer
       av_write_trailer(ofmt_ctx);
    end://Clean up
       av_log(NULL, AV_LOG_INFO, "Clean up\n");
       av_frame_free(&frame);
       for (i = 0; i < ifmt_ctx->nb_streams; i++) {
           avcodec_close(ifmt_ctx->streams[i]->codec);
           if (ofmt_ctx && ofmt_ctx->nb_streams > i && ofmt_ctx->streams[i] && ofmt_ctx->streams[i]->codec)
               avcodec_close(ofmt_ctx->streams[i]->codec);
       }
       avformat_close_input(&ifmt_ctx);
       /* Close output file */
       if (ofmt_ctx && !(ofmt_ctx->oformat->flags & AVFMT_NOFILE))
           avio_closep(&ofmt_ctx->pb);
       avformat_free_context(ofmt_ctx);
       if (ret < 0 && ret != AVERROR_EOF) {
           char buf[256];
           av_strerror(ret, buf, sizeof(buf));
           av_log(NULL, AV_LOG_ERROR, "Error occurred:%s\n", buf);
           system("Pause");
           return 1;
       }
       //Program end
       printf("The End.\n");
       system("Pause");
       return 0;
    }
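    One thing to note in the code above is that frame->pts is driven by two independent counters (videoPts++ for video, audioPts += frame->nb_samples for audio) while only some frames are kept, so the two streams can drift apart. As a minimal, unverified sketch (using the same FFmpeg 2.x/3.x API and the in_stream->codec / out_stream->codec contexts from the code above), one alternative is to carry the decoder's timestamp through to the encoder time base so both streams share one clock:

       //Illustrative sketch only, not a verified fix: derive the encoder pts
       //from the decoded frame's timestamp instead of a per-stream counter.
       //The packet was rescaled to in_stream->codec->time_base before decoding,
       //so the best-effort timestamp is expressed in that time base.
       int64_t best_pts = av_frame_get_best_effort_timestamp(frame);
       if (best_pts != AV_NOPTS_VALUE)
           frame->pts = av_rescale_q(best_pts,
               in_stream->codec->time_base,
               out_stream->codec->time_base);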
  • Unable to Access IP Camera in OpenCV

    3 January 2017, by user7258890

    I’m trying to use JavaFX and OpenCV to access a webcam (Axis M1013) over wireless to run vision processing for my FRC team. When I run my code, I can access the GUI that I made using Scene Builder, but when I try to start the camera, the program crashes. It seems to me like it’s having trouble using the VideoCapture class. I know for a fact that my problem is not with the camera, because I can access the feed through a browser, and my laptop’s webcam works fine. I’m using OpenCV version 2.4.13, JDK 8u101 x64, and ffmpeg 3.2. (A C++ sketch of the same URL-based capture follows at the end of this post.)

    I have looked at other posts at:

    stackoverflow.com/questions/18625948/opencv-java-unsatisfiedlinkerror

    answers.opencv.org/question/21720/java-webcam-capture

    answers.opencv.org/question/20071/unsatisfiedlinkerror-given-by-highghuiimread-on-java

    I have tried the solutions mentioned in all of these. So far, none have resolved my problem.

    My code is based mostly on the tutorial at:

    opencv-java-tutorials.readthedocs.io/en/latest/03-first-javafx-application-with-opencv.html

    Here is my main Java code:

       @Override
    public void start(Stage primaryStage) {
       System.load("C:\\opencv\\build\\x64\\vc12\\bin\\opencv_ffmpeg_64.dll");
       try {
           FXMLLoader loader = new FXMLLoader(getClass().getResource("Sample.fxml"));
           BorderPane root = (BorderPane) loader.load();
           Scene scene = new Scene(root, 400, 400);
           scene.getStylesheets().add(getClass().getResource("application.css").toExternalForm());
           primaryStage.setScene(scene);
           primaryStage.show();
       } catch(Exception e) {
           System.out.println("error opening camera gui");
       }
    }

    public static void main(String[] args) {
       launch(args);
    }

    And here is my camera controller:

    public class SampleController {
    @FXML
    private Button Start_btn;

    @FXML
    private ImageView currentFrame;
    private ScheduledExecutorService timer;
    private VideoCapture capture;
    private boolean cameraActive = false;

    @FXML
    public void startCamera()
    {
       System.loadLibrary("opencv_ffmpeg_64");
       if (!this.cameraActive)
       {
           try
           {
            capture = new VideoCapture("http://FRC:FRC@axis-camera-223-catapult.lan:554/mjpg/1/video.mjpg");
           }
           catch (Exception e)
           {
               System.out.println("an error occured when attempting to access camera.");
           }
           // is the video stream available?
           if (this.capture.isOpened())
           {
               this.cameraActive = true;

               // grab a frame every 33 ms (30 frames/sec)
               Runnable frameGrabber = new Runnable() {

                   @Override
                   public void run()
                   {
                       // effectively grab and process a single frame
                       Mat frame = grabFrame();
                       // convert and show the frame
                       Image imageToShow = Utils.mat2Image(frame);
                       updateImageView(currentFrame, imageToShow);
                   }
               };

               this.timer = Executors.newSingleThreadScheduledExecutor();
               this.timer.scheduleAtFixedRate(frameGrabber, 0, 33, TimeUnit.MILLISECONDS);

               // update the button content
               this.Start_btn.setText("Stop Camera");
           }
           else
           {
               // log the error
               System.err.println("Impossible to open the camera connection...");
           }
       }
       else
       {
           // the camera is not active at this point
           this.cameraActive = false;
           // update again the button content
           this.Start_btn.setText("Start Camera");

           // stop the timer
           this.stopAquisition();
       }

       System.out.println("Camera is now on.");
    }

    /**
    * Get frame from open video
    * @return
    */
       private Mat grabFrame()

       {
           Mat frame = new Mat();
           if (this.capture.isOpened())
           {
               try
               {
                   this.capture.read(frame);

                   }
           catch (Exception e)
           {
               System.err.println("Exception during the image elaboration: " + e);
           }
       }
       return frame;
    }
       private void stopAquisition()
       {
            if (this.timer != null && !this.timer.isShutdown())
           {
               try
               {
                   this.timer.shutdown();
                   this.timer.awaitTermination(33, TimeUnit.MILLISECONDS);
               }
               catch (InterruptedException e)
               {
                   System.err.println("Exception in stopping the frame capture, trying to release camera now..." + e);

               }
           }
           if (this.capture.isOpened())
           {
               this.capture.release();
           }
       }
       private void updateImageView(ImageView view, Image image)
       {
           Utils.onFXThread(view.imageProperty(), image);
       }
       protected void setClosed()
       {
           this.stopAquisition();
       }
    }

    When I try to run this, as I have said before, the GUI will launch, but when I try to open the camera, I get an error message:

    Exception in thread "JavaFX Application Thread" java.lang.RuntimeException: java.lang.reflect.InvocationTargetException
    at javafx.fxml.FXMLLoader$MethodHandler.invoke(FXMLLoader.java:1774)
    at javafx.fxml.FXMLLoader$ControllerMethodEventHandler.handle(FXMLLoader.java:1657)
    at com.sun.javafx.event.CompositeEventHandler.dispatchBubblingEvent(CompositeEventHandler.java:86)
    at com.sun.javafx.event.EventHandlerManager.dispatchBubblingEvent(EventHandlerManager.java:238)
    at com.sun.javafx.event.EventHandlerManager.dispatchBubblingEvent(EventHandlerManager.java:191)
    at com.sun.javafx.event.CompositeEventDispatcher.dispatchBubblingEvent(CompositeEventDispatcher.java:59)
    at com.sun.javafx.event.BasicEventDispatcher.dispatchEvent(BasicEventDispatcher.java:58)
    at com.sun.javafx.event.EventDispatchChainImpl.dispatchEvent(EventDispatchChainImpl.java:114)
    at com.sun.javafx.event.BasicEventDispatcher.dispatchEvent(BasicEventDispatcher.java:56)
    at com.sun.javafx.event.EventDispatchChainImpl.dispatchEvent(EventDispatchChainImpl.java:114)
    at com.sun.javafx.event.BasicEventDispatcher.dispatchEvent(BasicEventDispatcher.java:56)
    at com.sun.javafx.event.EventDispatchChainImpl.dispatchEvent(EventDispatchChainImpl.java:114)
    at com.sun.javafx.event.EventUtil.fireEventImpl(EventUtil.java:74)
    at com.sun.javafx.event.EventUtil.fireEvent(EventUtil.java:49)
    at javafx.event.Event.fireEvent(Event.java:198)
    at javafx.scene.Node.fireEvent(Node.java:8411)
    at javafx.scene.control.Button.fire(Button.java:185)
    at com.sun.javafx.scene.control.behavior.ButtonBehavior.mouseReleased(ButtonBehavior.java:182)
    at com.sun.javafx.scene.control.skin.BehaviorSkinBase$1.handle(BehaviorSkinBase.java:96)
    at com.sun.javafx.scene.control.skin.BehaviorSkinBase$1.handle(BehaviorSkinBase.java:89)
    at com.sun.javafx.event.CompositeEventHandler$NormalEventHandlerRecord.handleBubblingEvent(CompositeEventHandler.java:218)
    at com.sun.javafx.event.CompositeEventHandler.dispatchBubblingEvent(CompositeEventHandler.java:80)
    at com.sun.javafx.event.EventHandlerManager.dispatchBubblingEvent(EventHandlerManager.java:238)
    at com.sun.javafx.event.EventHandlerManager.dispatchBubblingEvent(EventHandlerManager.java:191)
    at com.sun.javafx.event.CompositeEventDispatcher.dispatchBubblingEvent(CompositeEventDispatcher.java:59)
    at com.sun.javafx.event.BasicEventDispatcher.dispatchEvent(BasicEventDispatcher.java:58)
    at com.sun.javafx.event.EventDispatchChainImpl.dispatchEvent(EventDispatchChainImpl.java:114)
    at com.sun.javafx.event.BasicEventDispatcher.dispatchEvent(BasicEventDispatcher.java:56)
    at com.sun.javafx.event.EventDispatchChainImpl.dispatchEvent(EventDispatchChainImpl.java:114)
    at com.sun.javafx.event.BasicEventDispatcher.dispatchEvent(BasicEventDispatcher.java:56)
    at com.sun.javafx.event.EventDispatchChainImpl.dispatchEvent(EventDispatchChainImpl.java:114)
    at com.sun.javafx.event.EventUtil.fireEventImpl(EventUtil.java:74)
    at com.sun.javafx.event.EventUtil.fireEvent(EventUtil.java:54)
    at javafx.event.Event.fireEvent(Event.java:198)
    at javafx.scene.Scene$MouseHandler.process(Scene.java:3757)
    at javafx.scene.Scene$MouseHandler.access$1500(Scene.java:3485)
    at javafx.scene.Scene.impl_processMouseEvent(Scene.java:1762)
    at javafx.scene.Scene$ScenePeerListener.mouseEvent(Scene.java:2494)
    at com.sun.javafx.tk.quantum.GlassViewEventHandler$MouseEventNotification.run(GlassViewEventHandler.java:380)
    at com.sun.javafx.tk.quantum.GlassViewEventHandler$MouseEventNotification.run(GlassViewEventHandler.java:294)
    at java.security.AccessController.doPrivileged(Native Method)
    at com.sun.javafx.tk.quantum.GlassViewEventHandler.lambda$handleMouseEvent$354(GlassViewEventHandler.java:416)
    at com.sun.javafx.tk.quantum.QuantumToolkit.runWithoutRenderLock(QuantumToolkit.java:389)
    at com.sun.javafx.tk.quantum.GlassViewEventHandler.handleMouseEvent(GlassViewEventHandler.java:415)
    at com.sun.glass.ui.View.handleMouseEvent(View.java:555)
    at com.sun.glass.ui.View.notifyMouse(View.java:937)
    at com.sun.glass.ui.win.WinApplication._runLoop(Native Method)
    at com.sun.glass.ui.win.WinApplication.lambda$null$148(WinApplication.java:191)
    at java.lang.Thread.run(Unknown Source)
    Caused by: java.lang.reflect.InvocationTargetException
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(Unknown Source)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(Unknown Source)
    at java.lang.reflect.Method.invoke(Unknown Source)
    at sun.reflect.misc.Trampoline.invoke(Unknown Source)
    at sun.reflect.GeneratedMethodAccessor1.invoke(Unknown Source)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(Unknown Source)
    at java.lang.reflect.Method.invoke(Unknown Source)
    at sun.reflect.misc.MethodUtil.invoke(Unknown Source)
    at javafx.fxml.FXMLLoader$MethodHandler.invoke(FXMLLoader.java:1771)
    ... 48 more
    Caused by: java.lang.UnsatisfiedLinkError:     org.opencv.highgui.VideoCapture.VideoCapture_1(Ljava/lang/String;)J
    at org.opencv.highgui.VideoCapture.VideoCapture_1(Native Method)
    at org.opencv.highgui.VideoCapture.<init>(VideoCapture.java:128)
    at jfxtest1.SampleController.startCamera(SampleController.java:36)
    ... 58 more

    If anyone can help me, that would be much appreciated.

    Thanks in advance.
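
    As an aside, the URL-based capture that fails above is implemented in OpenCV's native code (and, for network streams, its FFmpeg backend). For reference, a minimal C++ sketch of the same capture, with a hypothetical URL and assuming an OpenCV build with FFmpeg support, looks like this:

       #include <iostream>
       #include <opencv2/opencv.hpp>

       int main() {
           // Hypothetical MJPEG-over-HTTP URL; opening it succeeds only if OpenCV
           // was built with its FFmpeg backend, the same native code path the
           // Java binding ultimately calls into.
           cv::VideoCapture cap("http://user:pass@camera.example.lan/mjpg/1/video.mjpg");
           if (!cap.isOpened()) {
               std::cerr << "Could not open the stream" << std::endl;
               return 1;
           }
           cv::Mat frame;
           while (cap.read(frame)) {
               cv::imshow("ip camera", frame);   // show the current frame
               if (cv::waitKey(33) == 27)        // ~30 fps pacing, stop on Esc
                   break;
           }
           return 0;
       }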