Other articles (58)

  • Publishing on MédiaSpip

    13 June 2013

    Can I post content from an iPad tablet?
    Yes, provided your MédiaSpip installation is at version 0.2 or higher. If needed, contact the administrator of your MédiaSpip to find out.

  • Libraries and binaries specific to video and audio processing

    31 January 2010

    The following software and libraries are used by SPIPmotion in one way or another.
    Required binaries FFMpeg: the main encoder; it transcodes almost any type of video or audio file into formats playable on the web. See this tutorial for its installation; Oggz-tools: tools for inspecting ogg files; Mediainfo: retrieves information from most video and audio formats;
    Complementary, optional binaries flvtool2: (...)

  • Using and configuring the script

    19 January 2011

    Information specific to the Debian distribution
    If you use this distribution, you will need to enable the "debian-multimedia" repositories as explained here:
    Since version 0.3.1 of the script, the repository can be enabled automatically in response to a prompt.
    Fetching the script
    The installation script can be obtained in two different ways.
    Via svn, using the following command to fetch the up-to-date source code:
    svn co (...)

On other sites (9899)

  • android ffmpeg bad video output

    20 August 2014, by Sujith Manjavana

    I'm following this tutorial to create my first ffmpeg app. I have successfully built the shared libs and compiled the project without any errors. But when I run the app on my Nexus 5, the output is this

    Here is the native code

    #include <libavcodec/avcodec.h>
    #include <libavformat/avformat.h>
    #include <libswscale/swscale.h>
    #include <libavutil/pixfmt.h>

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>
    #include <wchar.h>

    #include <jni.h>
    #include <pthread.h>
    #include <android/log.h>
    #include <android/bitmap.h>
    #include <android/native_window.h>
    #include <android/native_window_jni.h>

    #define LOG_TAG "android-ffmpeg-tutorial02"
    #define LOGI(...) __android_log_print(4, LOG_TAG, __VA_ARGS__);
    #define LOGE(...) __android_log_print(6, LOG_TAG, __VA_ARGS__);

    ANativeWindow*      window;
    char                *videoFileName;
    AVFormatContext     *formatCtx = NULL;
    int                 videoStream;
    AVCodecContext      *codecCtx = NULL;
    AVFrame             *decodedFrame = NULL;
    AVFrame             *frameRGBA = NULL;
    jobject             bitmap;
    void*               buffer;
    struct SwsContext   *sws_ctx = NULL;
    int                 width;
    int                 height;
    int                 stop;

    jint naInit(JNIEnv *pEnv, jobject pObj, jstring pFileName) {
       AVCodec         *pCodec = NULL;
       int             i;
       AVDictionary    *optionsDict = NULL;

       videoFileName = (char *)(*pEnv)->GetStringUTFChars(pEnv, pFileName, NULL);
       LOGI("video file name is %s", videoFileName);
       // Register all formats and codecs
       av_register_all();
       // Open video file
       if(avformat_open_input(&formatCtx, videoFileName, NULL, NULL)!=0)
           return -1; // Couldn't open file
       // Retrieve stream information
       if(avformat_find_stream_info(formatCtx, NULL)<0)
           return -1; // Couldn't find stream information
       // Dump information about file onto standard error
       av_dump_format(formatCtx, 0, videoFileName, 0);
       // Find the first video stream
       videoStream=-1;
       for(i=0; i<formatCtx->nb_streams; i++) {
           if(formatCtx->streams[i]->codec->codec_type==AVMEDIA_TYPE_VIDEO) {
               videoStream=i;
               break;
           }
       }
       if(videoStream==-1)
           return -1; // Didn't find a video stream
       // Get a pointer to the codec context for the video stream
       codecCtx=formatCtx->streams[videoStream]->codec;
       // Find the decoder for the video stream
       pCodec=avcodec_find_decoder(codecCtx->codec_id);
       if(pCodec==NULL) {
           fprintf(stderr, "Unsupported codec!\n");
           return -1; // Codec not found
       }
       // Open codec
       if(avcodec_open2(codecCtx, pCodec, &optionsDict)<0)
           return -1; // Could not open codec
       // Allocate video frame
       decodedFrame=avcodec_alloc_frame();
       // Allocate an AVFrame structure
       frameRGBA=avcodec_alloc_frame();
       if(frameRGBA==NULL)
           return -1;
       return 0;
    }

    jobject createBitmap(JNIEnv *pEnv, int pWidth, int pHeight) {
       int i;
       //get Bitmap class and createBitmap method ID
       jclass javaBitmapClass = (jclass)(*pEnv)->FindClass(pEnv, "android/graphics/Bitmap");
       jmethodID mid = (*pEnv)->GetStaticMethodID(pEnv, javaBitmapClass, "createBitmap", "(IILandroid/graphics/Bitmap$Config;)Landroid/graphics/Bitmap;");
       //create Bitmap.Config
       //reference: https://forums.oracle.com/thread/1548728
       const wchar_t* configName = L"ARGB_8888";
       int len = wcslen(configName);
       jstring jConfigName;
       if (sizeof(wchar_t) != sizeof(jchar)) {
           //wchar_t is defined as different length than jchar(2 bytes)
           jchar* str = (jchar*)malloc((len+1)*sizeof(jchar));
           for (i = 0; i < len; ++i) {
               str[i] = (jchar)configName[i];
           }
           str[len] = 0;
           jConfigName = (*pEnv)->NewString(pEnv, (const jchar*)str, len);
           free(str); // NewString copies the characters, so the temporary buffer can be freed
       } else {
           //wchar_t is defined same length as jchar(2 bytes)
           jConfigName = (*pEnv)->NewString(pEnv, (const jchar*)configName, len);
       }
       jclass bitmapConfigClass = (*pEnv)->FindClass(pEnv, "android/graphics/Bitmap$Config");
       jobject javaBitmapConfig = (*pEnv)->CallStaticObjectMethod(pEnv, bitmapConfigClass,
               (*pEnv)->GetStaticMethodID(pEnv, bitmapConfigClass, "valueOf", "(Ljava/lang/String;)Landroid/graphics/Bitmap$Config;"), jConfigName);
       //create the bitmap
       return (*pEnv)->CallStaticObjectMethod(pEnv, javaBitmapClass, mid, pWidth, pHeight, javaBitmapConfig);
    }

    jintArray naGetVideoRes(JNIEnv *pEnv, jobject pObj) {
       jintArray lRes;
       if (NULL == codecCtx) {
           return NULL;
       }
       lRes = (*pEnv)->NewIntArray(pEnv, 2);
       if (lRes == NULL) {
           LOGI("cannot allocate memory for video size");
           return NULL;
       }
       jint lVideoRes[2];
       lVideoRes[0] = codecCtx->width;
       lVideoRes[1] = codecCtx->height;
       (*pEnv)->SetIntArrayRegion(pEnv, lRes, 0, 2, lVideoRes);
       return lRes;
    }

    void naSetSurface(JNIEnv *pEnv, jobject pObj, jobject pSurface) {
       if (0 != pSurface) {
           // get the native window reference
           window = ANativeWindow_fromSurface(pEnv, pSurface);
           // set format and size of window buffer
           ANativeWindow_setBuffersGeometry(window, 0, 0, WINDOW_FORMAT_RGBA_8888);
       } else {
           // release the native window
           ANativeWindow_release(window);
       }
    }

    jint naSetup(JNIEnv *pEnv, jobject pObj, int pWidth, int pHeight) {
       width = pWidth;
       height = pHeight;
       //create a bitmap as the buffer for frameRGBA
       bitmap = createBitmap(pEnv, pWidth, pHeight);
       if (AndroidBitmap_lockPixels(pEnv, bitmap, &buffer) < 0)
           return -1;
       //get the scaling context
       sws_ctx = sws_getContext (
               codecCtx->width,
               codecCtx->height,
               codecCtx->pix_fmt,
               pWidth,
               pHeight,
               AV_PIX_FMT_RGBA,
               SWS_BILINEAR,
               NULL,
               NULL,
               NULL
       );
       // Assign appropriate parts of bitmap to image planes in pFrameRGBA
       // Note that pFrameRGBA is an AVFrame, but AVFrame is a superset
       // of AVPicture
       avpicture_fill((AVPicture *)frameRGBA, buffer, AV_PIX_FMT_RGBA,
               pWidth, pHeight);
       return 0;
    }

    void finish(JNIEnv *pEnv) {
       //unlock the bitmap
       AndroidBitmap_unlockPixels(pEnv, bitmap);
       av_free(buffer);
       // Free the RGB image
       av_free(frameRGBA);
       // Free the YUV frame
       av_free(decodedFrame);
       // Close the codec
       avcodec_close(codecCtx);
       // Close the video file
       avformat_close_input(&formatCtx);
    }

    void decodeAndRender(JNIEnv *pEnv) {
       ANativeWindow_Buffer    windowBuffer;
       AVPacket                packet;
       int                     i=0;
       int                     frameFinished;
       int                     lineCnt;
       while(av_read_frame(formatCtx, &packet)>=0 && !stop) {
           // Is this a packet from the video stream?
           if(packet.stream_index==videoStream) {
               // Decode video frame
           avcodec_decode_video2(codecCtx, decodedFrame, &frameFinished,
              &packet);
               // Did we get a video frame?
               if(frameFinished) {
                   // Convert the image from its native format to RGBA
                   sws_scale
                   (
                       sws_ctx,
                       (uint8_t const * const *)decodedFrame->data,
                       decodedFrame->linesize,
                       0,
                       codecCtx->height,
                       frameRGBA->data,
                       frameRGBA->linesize
                   );
                   // lock the window buffer
                   if (ANativeWindow_lock(window, &windowBuffer, NULL) < 0) {
                       LOGE("cannot lock window");
                   } else {
                       // draw the frame on buffer
                       LOGI("copy buffer %d:%d:%d", width, height, width*height*4);
                       LOGI("window buffer: %d:%d:%d", windowBuffer.width,
                               windowBuffer.height, windowBuffer.stride);
                       memcpy(windowBuffer.bits, buffer,  width * height * 4);
                       // unlock the window buffer and post it to display
                       ANativeWindow_unlockAndPost(window);
                       // count number of frames
                       ++i;
                   }
               }
           }
           // Free the packet that was allocated by av_read_frame
           av_free_packet(&packet);
       }
       LOGI("total No. of frames decoded and rendered %d", i);
       finish(pEnv);
    }

    /**
    * start the video playback
    */
    void naPlay(JNIEnv *pEnv, jobject pObj) {
       //create a new thread for video decode and render
       pthread_t decodeThread;
       stop = 0;
       pthread_create(&decodeThread, NULL, decodeAndRender, NULL);
    }

    /**
    * stop the video playback
    */
    void naStop(JNIEnv *pEnv, jobject pObj) {
       stop = 1;
    }

    jint JNI_OnLoad(JavaVM* pVm, void* reserved) {
       JNIEnv* env;
       if ((*pVm)->GetEnv(pVm, (void **)&env, JNI_VERSION_1_6) != JNI_OK) {
            return -1;
       }
       JNINativeMethod nm[8];
       nm[0].name = "naInit";
       nm[0].signature = "(Ljava/lang/String;)I";
       nm[0].fnPtr = (void*)naInit;

       nm[1].name = "naSetSurface";
       nm[1].signature = "(Landroid/view/Surface;)V";
       nm[1].fnPtr = (void*)naSetSurface;

       nm[2].name = "naGetVideoRes";
       nm[2].signature = "()[I";
       nm[2].fnPtr = (void*)naGetVideoRes;

       nm[3].name = "naSetup";
       nm[3].signature = "(II)I";
       nm[3].fnPtr = (void*)naSetup;

       nm[4].name = "naPlay";
       nm[4].signature = "()V";
       nm[4].fnPtr = (void*)naPlay;

       nm[5].name = "naStop";
       nm[5].signature = "()V";
       nm[5].fnPtr = (void*)naStop;

       jclass cls = (*env)->FindClass(env, "roman10/tutorial/android_ffmpeg_tutorial02/MainActivity");
       //Register methods with env->RegisterNatives.
       (*env)->RegisterNatives(env, cls, nm, 6);
       return JNI_VERSION_1_6;
    }
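
    A note on the memcpy inside decodeAndRender above: ANativeWindow_lock returns a buffer whose stride (in pixels) can be larger than the frame width on some panels, and a flat width * height * 4 copy then produces sheared or garbled output on exactly those devices. The following is a minimal, stride-aware sketch of that one copy step, reusing the variable names from the code above; it assumes the same RGBA_8888 setup and is not taken from the original post.

    // Sketch: replace the single memcpy with a row-by-row copy that honours
    // the window buffer's stride (ANativeWindow_Buffer.stride is in pixels).
    uint8_t *dst = (uint8_t *) windowBuffer.bits;
    uint8_t *src = (uint8_t *) buffer;
    int y;
    for (y = 0; y < height; y++) {
        memcpy(dst + y * windowBuffer.stride * 4,  // start of destination row
               src + y * width * 4,                // start of source row
               width * 4);                         // one row of RGBA pixels
    }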

    Here is the build.sh

    #!/bin/bash
    NDK=$HOME/Desktop/adt/android-ndk-r9
    SYSROOT=$NDK/platforms/android-9/arch-arm/
    TOOLCHAIN=$NDK/toolchains/arm-linux-androideabi-4.8/prebuilt/linux-x86_64
    function build_one
    {
    ./configure \
       --prefix=$PREFIX \
       --enable-shared \
       --disable-static \
       --disable-doc \
       --disable-ffmpeg \
       --disable-ffplay \
       --disable-ffprobe \
       --disable-ffserver \
       --disable-avdevice \
       --disable-doc \
       --disable-symver \
       --cross-prefix=$TOOLCHAIN/bin/arm-linux-androideabi- \
       --target-os=linux \
       --arch=arm \
       --enable-cross-compile \
       --sysroot=$SYSROOT \
       --extra-cflags="-Os -fpic $ADDI_CFLAGS" \
       --extra-ldflags="$ADDI_LDFLAGS" \
       $ADDITIONAL_CONFIGURE_FLAG
    make clean
    make
    make install
    }
    CPU=arm
    PREFIX=$(pwd)/android/$CPU
    ADDI_CFLAGS="-marm"
    build_one

    It works on the Galaxy Tab 2. What can I do to make it work on all devices? Please help me.
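
    One more thing worth checking (an assumption based on common FFmpeg-for-Android guides, not on the original post): the build.sh above produces a generic ARM build, while devices such as the Nexus 5 are armv7-a with NEON. A second, armv7-specific variant is usually built along these lines:

    # Sketch: an additional armv7-a/NEON build, using the flags commonly
    # seen in FFmpeg Android cross-compilation guides.
    CPU=armv7-a
    PREFIX=$(pwd)/android/$CPU
    ADDI_CFLAGS="-march=armv7-a -mfloat-abi=softfp -mfpu=neon -marm"
    ADDI_LDFLAGS="-Wl,--fix-cortex-a8"
    build_one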

  • Android JNI x264 as a library gives relocation R_ARM_MOVW_ABS_NC; recompile with -fPIC

    25 June 2014, by MoAdiB

    I'm trying to build the x264 library to use it in a shared library that I load using JNI on Android. I'm able to build everything as an executable without any error, but when I build it as a shared library I get errors with dynamic relocation of R_ARM_MOVW_ABS_NC:

    [armeabi-v7a] SharedLibrary  : libx264.so
    /home/martin/bin/android-ndk/toolchains/arm-linux-androideabi-4.8/prebuilt/linux-x86_64/bin/../lib/gcc/arm-linux-androideabi/4.8/../../../../arm-linux-androideabi/bin/ld: error: /home/martin/Documents/Sources/ScreenSharingServer/app/src/main/obj/local/armeabi-v7a/objs/x264/common/arm/pixel-a.o: requires unsupported dynamic reloc R_ARM_MOVW_ABS_NC; recompile with -fPIC
    /home/martin/bin/android-ndk/toolchains/arm-linux-androideabi-4.8/prebuilt/linux-x86_64/bin/../lib/gcc/arm-linux-androideabi/4.8/../../../../arm-linux-androideabi/bin/ld: error: /home/martin/Documents/Sources/ScreenSharingServer/app/src/main/obj/local/armeabi-v7a/objs/x264/common/arm/mc-a.o: requires unsupported dynamic reloc R_ARM_MOVW_ABS_NC; recompile with -fPIC
    /home/martin/bin/android-ndk/toolchains/arm-linux-androideabi-4.8/prebuilt/linux-x86_64/bin/../lib/gcc/arm-linux-androideabi/4.8/../../../../arm-linux-androideabi/bin/ld: error: /home/martin/Documents/Sources/ScreenSharingServer/app/src/main/obj/local/armeabi-v7a/objs/x264/common/arm/dct-a.o: requires unsupported dynamic reloc R_ARM_MOVW_ABS_NC; recompile with -fPIC
    /home/martin/bin/android-ndk/toolchains/arm-linux-androideabi-4.8/prebuilt/linux-x86_64/bin/../lib/gcc/arm-linux-androideabi/4.8/../../../../arm-linux-androideabi/bin/ld: error: /home/martin/Documents/Sources/ScreenSharingServer/app/src/main/obj/local/armeabi-v7a/objs/x264/common/arm/quant-a.o: requires unsupported dynamic reloc R_ARM_MOVW_ABS_NC; recompile with -fPIC
    /home/martin/bin/android-ndk/toolchains/arm-linux-androideabi-4.8/prebuilt/linux-x86_64/bin/../lib/gcc/arm-linux-androideabi/4.8/../../../../arm-linux-androideabi/bin/ld: error: /home/martin/Documents/Sources/ScreenSharingServer/app/src/main/obj/local/armeabi-v7a/objs/x264/common/arm/predict-a.o: requires unsupported dynamic reloc R_ARM_MOVW_ABS_NC; recompile with -fPIC
    collect2: error: ld returned 1 exit status
    make: *** [/home/martin/Documents/Sources/ScreenSharingServer/app/src/main/obj/local/armeabi-v7a/libx264.so] Error 1

    Here is my android.mk:

    LOCAL_PATH := $(my-dir)

    include $(CLEAR_VARS)

    APP_ABI := armeabi armeabi-v7a
    TARGET_ARCH_ABI := armeabi-v7a
    LOCAL_ARM_NEON := true
    ARCH_ARM_HAVE_NEON := true

    AM_CFLAGS := -march=armv7-a -mfpu=neon
    AM_CCASFLAGS := -march=armv7-a -mfpu=neon

    LOCAL_SRC_FILES:= common/mc.c common/predict.c common/pixel.c common/macroblock.c \
          common/frame.c common/dct.c common/cpu.c common/cabac.c \
          common/common.c common/osdep.c common/rectangle.c \
          common/set.c common/quant.c common/deblock.c common/vlc.c \
          common/mvpred.c common/bitstream.c \
          encoder/analyse.c encoder/me.c encoder/ratecontrol.c \
          encoder/set.c encoder/macroblock.c encoder/cabac.c \
          encoder/cavlc.c encoder/encoder.c encoder/lookahead.c \
          common/threadpool.c \
          common/arm/mc-c.c common/arm/predict-c.c \
          x264.c \
          common/arm/cpu-a.S common/arm/pixel-a.S common/arm/mc-a.S \
          common/arm/dct-a.S common/arm/quant-a.S common/arm/deblock-a.S \
          common/arm/predict-a.S

    LOCAL_SHARED_LIBRARIES := libcutils
    LOCAL_STATIC_LIBRARIES := swscale

    LOCAL_C_INCLUDES := $(LOCAL_PATH) $(LOCAL_PATH)/..
    LOCAL_EXPORT_C_INCLUDES := $(LOCAL_PATH) $(LOCAL_PATH)/..

    LOCAL_CFLAGS := -fPIC -O3 -ffast-math -fstrict-aliasing -DANDROID -std=c99
    LOCAL_CPPFLAGS := -fPIC
    LOCAL_LDFLAGS := -lm
    LOCAL_MODULE := x264

    include $(BUILD_SHARED_LIBRARY)

    When I build with V=1, I see the -fPIC option on every compilation line...

    I wonder if anybody can help me understand what I'm doing wrong.

    Thanks for your time!

    Martin
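
    A plausible lead here (offered as an assumption, not a verified fix for this exact tree): the objects the linker rejects (pixel-a.o, mc-a.o, dct-a.o, ...) all come from x264's hand-written .S assembly, and -fPIC only changes how the C compiler generates code. x264's ARM assembly switches from absolute movw/movt address loads to PC-relative ones only when the PIC preprocessor macro is defined, which its own configure script takes care of when position-independent code is requested. A sketch of both routes:

    # Route 1 (sketch): let x264's own configure set up PIC-safe assembly;
    # the cross-compile variables mirror the NDK paths used elsewhere here.
    ./configure --enable-shared --enable-pic \
        --cross-prefix=$TOOLCHAIN/bin/arm-linux-androideabi- \
        --sysroot=$SYSROOT --host=arm-linux

    # Route 2 (sketch): keep the ndk-build setup, but define PIC for the
    # preprocessed .S files too (LOCAL_CFLAGS is also applied to them):
    #   LOCAL_CFLAGS += -DPIC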

  • OpenCV and Network Cameras -or- How to spy on the neighbors?

    16 May 2014, by Alexander

    A bit of context: this program was originally built to work with USB cameras, but because of the setup between where the cameras need to be and where the computer is, it makes more sense to switch to cameras run over a network. Now I'm trying to convert the program to accomplish this, but my efforts thus far have met with poor results. I've also asked this same question over on the OpenCV forums. Help me spy on my neighbors! (This is with their permission, of course!) :D


    I'm using:

    • OpenCV v2.4.6.0
    • C++
    • D-Link Cloud Camera 7100 (Installer is DCS-7010L, according to the instructions.)

    I am trying to access the D-Link camera's video feed through OpenCV.

    I can access the camera through its IP address with a browser without any issues. Unfortunately, my program is less cooperative. When attempting to access the camera, the program gives this OpenCV-generated error:

    warning: Error opening file (../../modules/highgui/src/cap_ffmpeg_impl.hpp:529)

    This error occurs with just about everything I try that doesn’t somehow generate more problems.

    For reference - the code in OpenCV's cap_ffmpeg_impl.hpp around line 529 is as follows:

    522    bool CvCapture_FFMPEG::open( const char* _filename )
    523    {
    524        unsigned i;
    525        bool valid = false;
    526
    527        close();
    528
    529    #if LIBAVFORMAT_BUILD >= CALC_FFMPEG_VERSION(52, 111, 0)
    530        int err = avformat_open_input(&ic, _filename, NULL, NULL);
    531    #else
    532        int err = av_open_input_file(&ic, _filename, NULL, 0, NULL);
    533    #endif
    ...
    616    }

    ...for which I have no idea what I’m looking at. It seems to be looking for the ffmpeg version - but I’ve already installed the latest ffmpeg on that computer, so that shouldn’t be the issue.
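
    For what it's worth, that #if is resolved when OpenCV itself is compiled: it merely selects between the old and new libavformat open calls, so the FFmpeg that matters is the one OpenCV was built against, not a copy installed afterwards. A small sketch (an editor's addition, not from the post) for printing both the compile-time and run-time libavformat versions a given binary sees:

    // Sketch: compare compile-time and run-time libavformat versions.
    #include <iostream>
    extern "C" {
    #include <libavformat/avformat.h>
    }

    int main()
    {
        std::cout << "compiled against libavformat "
                  << LIBAVFORMAT_VERSION_MAJOR << "."
                  << LIBAVFORMAT_VERSION_MINOR << "."
                  << LIBAVFORMAT_VERSION_MICRO << std::endl;
        std::cout << "runtime avformat_version(): " << avformat_version() << std::endl;
        return 0;
    }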

    This is the edited down version I tried to use as per Sebastian Schmitz's recommendation:

    1    #include <fstream>                          // File input/output
    2    #include <iostream>                         // cout / cin / etc
    3    #include <windows.h>                        // Windows API stuff
    4    #include <stdio.h>                          // More input/output stuff
    5    #include <string>                           // "Strings" of characters strung together to form words and stuff
    6    #include <cstring>                          // "Strings" of characters strung together to form words and stuff
    7    #include <streambuf>                        // For buffering load files
    8    #include <array>                            // Functions for working with arrays
    9    #include <opencv2/imgproc/imgproc.hpp>      // Image Processor
    10    #include <opencv2/core/core.hpp>           // Basic OpenCV structures (cv::Mat, Scalar)
    11    #include <opencv2/highgui/highgui.hpp>     // OpenCV window I/O
    12    #include "opencv2/calib3d/calib3d.hpp"
    13    #include "opencv2/features2d/features2d.hpp"
    14    #include "opencv2/opencv.hpp"
    15    #include "resource.h"                      // Included for linking the .rc file
    16    #include                                   // For sleep()
    17    #include <chrono>                          // To get start-time of program.
    18    #include <algorithm>                       // For looking at whole sets.
    19
    20    #ifdef __BORLANDC__
    21      #pragma argsused
    22    #endif
    23
    24    using namespace std;                      // Standard operations. Needed for most basic functions.
    25    using namespace std::chrono;              // Chrono operations. Needed getting starting time of program.
    26    using namespace cv;                           // OpenCV operations. Needed for most OpenCV functions.
    27
    28    string videoFeedAddress = "";
    29    VideoCapture videoFeedIP = NULL;
    30    Mat clickPointStorage; //Artifact from original program.
    31
    32    void displayCameraViewTest()
    33    {
    34      VideoCapture cv_cap_IP;
    35      Mat color_img_IP;
    36      int capture;
    37      IplImage* color_img;
    38      cv_cap_IP.open(videoFeedAddress);
    39      Sleep(100);
    40      if(!cv_cap_IP.isOpened())
    41      {
    42          cout << "Video Error: Video input will not work.\n";
    43          cvDestroyWindow("Camera View");
    44          return;
    45      }
    46      clickPointStorage.create(color_img_IP.rows, color_img_IP.cols, CV_8UC3);
    47      clickPointStorage.setTo(Scalar(0, 0, 0));
    48      cvNamedWindow("Camera View", 0); // create window
    49      IplImage* IplClickPointStorage = new IplImage(clickPointStorage);
    50      IplImage* Ipl_IP_Img;
    51      
    52      for(;;)
    53      {
    54          cv_cap_IP.read(color_img_IP);
    55          IplClickPointStorage = new IplImage(clickPointStorage);
    56          Ipl_IP_Img = new IplImage(color_img_IP);
    57          cvAdd(Ipl_IP_Img, IplClickPointStorage, color_img);
    58          cvShowImage("Camera View", color_img); // show frame
    59          capture = cvWaitKey(10); // wait 10 ms or for key stroke
    60          if(capture == 27 || capture == 13 || capture == 32){break;} // if ESC, Return, or space; close window.
    61      }
    62      cv_cap_IP.release();
    63      delete Ipl_IP_Img;
    64      delete IplClickPointStorage;
    65      cvDestroyWindow("Camera View");
    66      return;
    67    }
    68    
    69    int main()
    70    {
    71      while(1)
    72      {
    73          cout << "Please Enter Video-Feed Address: ";
    74          cin >> videoFeedAddress;
    75          if(videoFeedAddress == "exit"){return 0;}
    76          cout << "\nvideoFeedAddress: " << videoFeedAddress << endl;
    77          displayCameraViewTest();
    78          if(cvWaitKey(10) == 27){return 0;}
    79      }
    80      return 0;
    81    }

    Using added 'cout's I was able to narrow it down to line 38: "cv_cap_IP.open(videoFeedAddress);"

    No value I enter for the videoFeedAddress variable seems to get a different result. I found THIS site, which lists a number of possible addresses to connect to it. Since there is no 7100 anywhere in the list, and considering that the install is labeled "DCS-7010L", I used the addresses found next to the DCS-7010L listings. When trying to access the camera, most of them can be reached through the browser, confirming that they reach the camera - but they don't seem to affect the outcome when I use them in the videoFeedAddress variable.

    I’ve tried many of them both with and without username:password, the port number (554), and variations on ?.mjpg (the format) at the end.
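
    For readers hitting the same wall, the address variants below are the forms cv::VideoCapture (OpenCV 2.4.x, via its FFmpeg backend) generally accepts. The IP, credentials, and stream paths here are placeholders and assumptions; the exact path for a given D-Link model has to come from its documentation or web interface.

    // Sketch: typical network-camera URL forms for cv::VideoCapture.
    #include <opencv2/core/core.hpp>
    #include <opencv2/highgui/highgui.hpp>
    #include <iostream>

    int main()
    {
        cv::VideoCapture cap;

        // MJPEG over HTTP; credentials go inline before the host:
        cap.open("http://admin:password@192.168.0.20/video/mjpg.cgi");

        // RTSP (this is what port 554 usually carries); the path is
        // camera-specific, /live1.sdp is common on D-Link DCS models:
        // cap.open("rtsp://admin:password@192.168.0.20:554/live1.sdp");

        if (!cap.isOpened()) {
            std::cout << "could not open stream" << std::endl;
            return 1;
        }
        cv::Mat frame;
        for (;;) {
            if (!cap.read(frame) || frame.empty()) break;  // stream ended or failed
            cv::imshow("Camera View", frame);
            if (cv::waitKey(10) == 27) break;              // ESC to quit
        }
        return 0;
    }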

    I searched around and came across a number of different "possible" answers - but none of them seem to work for me. Some of them did give me the idea for including the above username:password, etc., but it doesn't seem to be making a difference. Of course, the number of possible combinations is certainly rather large, so I certainly have not tried all of them (more direction here would be appreciated). Here are some of the links I found:

    1. This is one of the first configurations my code was in. No dice.
    2. This one is talking about files - not cameras. It also mentions codecs - but I wouldn't be able to watch it in a web browser if that were the problem, right? (Correct me if I'm wrong here...)
    3. This one has the wrong error code/points to the wrong line of code!
    4. This one mentions compiling OpenCV with ffmpeg support - but I believe 2.4.6.0 already comes with that all set and ready! Otherwise it's not that different from what I've already tried.
    5. Now THIS one appears to be very similar to what I have, but the only proposed solution doesn’t really help as I had already located a list of connections. I do not believe this is a duplicate, because as per THIS meta discussion I had a lot more information and so didn’t feel comfortable taking over someone else’s question - especially if I end up needing to add even more information later.

    Thank you for reading this far. I realize that I am asking a somewhat specific question, although I would appreciate any advice you can think of regarding OpenCV & network cameras, or even related topics.


    TLDR: The network camera and OpenCV are not cooperating. I'm unsure if it's the address I'm using to direct the program to the camera or the command I'm using - but no adjustment I make seems to improve the result beyond what I've already done! Now my neighbors will go unwatched!