
Other articles (8)

  • XMP PHP

    13 May 2011

    According to Wikipedia, XMP stands for:
    Extensible Metadata Platform (XMP) is an XML-based metadata format used in PDF, photography, and graphics applications. It was launched by Adobe Systems in April 2001, integrated into version 5.0 of Adobe Acrobat.
    Being based on XML, it manages a set of dynamic tags for use in the context of the Semantic Web.
    XMP makes it possible to store, in the form of an XML document, information about a file: title, author, history (...)

  • Websites made with MediaSPIP

    2 May 2011

    This page lists some websites based on MediaSPIP.

  • Contribute to a better visual interface

    13 April 2011

    MediaSPIP is based on a system of themes and templates. Templates define the placement of information on the page, and can be adapted to a wide range of uses. Themes define the overall graphic appearance of the site.
    Anyone can submit a new graphic theme or template and make it available to the MediaSPIP community.

On other sites (4151)

  • Hung out to dry

    31 May 2013, by Mans — Law and liberty

    Outrage was the general reaction when Google recently announced their dropping of XMPP server-to-server federation from Hangouts, as the search giant’s revamped instant messaging platform is henceforth to be known. This outrage is, however, largely unjustified; Google’s decision is merely a rational response to issues of a more fundamental nature. To see why, we need to step back and look at the broader instant messaging landscape.

    A brief history of IM

    The term instant messaging (IM) gained popularity in the mid-1990s along with the rise of chat clients such as ICQ, AOL Instant Messenger, and later MSN Messenger. These all had one thing in common: they were closed systems. Although global in the sense of allowing access from anywhere on the Internet, communication was possible only within each network, and only using the officially sanctioned client software. Contrast this with email, where users are free to choose any service provider as well as client software, with inter-server communication over open protocols delivering messages to their proper destinations.

    The email picture has, however, not always been so rosy. During the 1970s and 80s a multitude of incompatible email systems (e.g. UUCP and X.400) were in more or less widespread use on various networks. As these networks gave way to the ARPANET/Internet, so did their mail systems to the SMTP email we all use today. A similar consolidation has yet to occur in the area of instant messaging.

    Over the years, a few efforts towards cross-domain instant messaging have been undertaken. One early example is the Zephyr system, created as part of Project Athena at MIT in the late 1980s. While it never saw significant uptake, it is still in use at a few universities. A more successful story is that of XMPP. Conceived under the name Jabber in the late 1990s, XMPP is an open standard specified in a set of IETF RFCs. In addition to being open, a distinguishing feature of XMPP compared to other contemporary IM systems is its decentralised nature, with server-to-server connections allowing communication between users with accounts on different systems. Just like email.

    The social network

    A more recent emergence on the Internet is the social network. Although not the first of its kind, Facebook was the first to achieve its level of penetration, both geographically and across social groups. A range of messaging options, including email-style as well as instant messaging (chat), are available, all within the same web interface. What it does not allow is communication outside the Facebook network. Other social networks operate in the same spirit.

    The popularity of social networks, to the extent that they for many constitute the primary means of communication, has in a sense brought back the fragmented networks of the 1980s. Even though they share infrastructure, up to and including the browser application, the social networks create walled-off regions of the Internet between which little or no exchange is possible.

    The house that Google built

    In 2005, Google launched Talk, an XMPP-based instant messaging service allowing users to connect using either Google’s official client application or any third-party XMPP client. Soon after, server-to-server federation was activated, enabling anyone with a Google account to exchange instant messages with users of any other federated XMPP service. An in-browser chat interface was also added to Gmail.

    It was arguably only with the 2011 introduction of Google+ that Google, despite its previous endeavours with Orkut and Buzz, had a viable contender in the social networking space. Since its inception, Google+ has gone through a number of changes where features have been added or reworked. Instant messaging within Google+ was until recently available only in mobile clients. On the desktop, the sole messaging option was Hangouts, which, although featuring text chat, cannot be considered instant messaging in the usual sense.

    With a sprawling collection of messaging systems (Talk, Google+ Messenger, Hangouts), some action to consolidate them was a logical step. What we got was a unification under the Hangouts name. A redesigned Google+ now sports in-browser instant messaging similar to the Talk interface already present in Gmail. At the same time, the standalone desktop Talk client is discontinued, as is the Messenger feature in mobile Google+. Altogether, the changes make for a much less confusing user experience.

    The sky is falling down

    Along with the changes to the messaging platform, one announcement stoked anger on the Internet: Google’s intent to discontinue XMPP federation (as of this writing, it is still operational). Google, the (self-described) champions of openness on the Internet, were seen to be closing their doors to the outside world. The effects of the change are, however, not quite so earth-shattering. Of the other major messaging networks to offer XMPP at all (Facebook, Skype, and the defunct Microsoft Messenger), none support federation; a Google user has never been able to chat with a Facebook user.

    XMPP federation appears to be in use mainly by non-profit organisations or individuals running their own servers. The number of users on these systems is hard to assess, though it seems fair to assume it is dwarfed by the hundreds of millions using Google or Facebook. As such, the overall impact of cutting off communication with the federated servers is relatively minor, albeit annoying for those affected.

    A fragmented world

    Rather than chastising Google for making a low-impact and presumably well-founded business decision, we should be asking ourselves why instant messaging is still so fragmented in the first place, whereas email is not. The answer can be found by examining the nature of the entities providing these services.

    Ever since the commercialisation of the Internet started in the 1990s, email has been largely seen as being part of the Internet. Access to email was a major selling point for Internet service providers; indeed, many still use the email facilities of their ISP. Instant messaging, by contrast, has never come as part of the basic offering, instead being a third-party service running on top of the Internet.

    Users wishing to engage in instant messaging have always had to seek out and sign up with a provider of such a service. As the IM networks were isolated, most would choose whichever service their friends were already using, and a small number of networks, each with a sustainable number of users, came to dominate. In the early days, dedicated IM services such as ICQ were popular. Today, social networks have taken their place with Facebook currently in the dominant position. With the new Hangouts, Google offers its users the service they want in the way they have come to expect.

    Follow the money

    We now have all the pieces necessary to see why inter-domain instant messaging has never taken off, and the answer is simple: the major players have no commercial incentive to open access to their IM networks. In fact, they have good reason to keep the networks closed. Ensuring that a person leaving the network loses contact with his or her friends increases user retention by raising the cost of switching to another service. Monetising users is also better facilitated if they are forced to remain on, say, Facebook’s web pages while using its services rather than accessing them indirectly, perhaps even through a competing (Google, say) frontend. The users do not generally care much, since all their friends are already on the same network as themselves.

    While Google Talk was a standalone service, only loosely coupled to other Google products, these aspects were of lesser importance. After all, Google still had access to all the messages passing through the system and could analyse them for advert targeting purposes. Now that messaging is an integrated part of Google+, and thus serves as a direct competitor to the likes of Facebook, the situation has changed. All the reasons for Facebook not to open its network now apply equally to Google as well.

  • Why, when converting an AVI video file to another format, are the first 2-3 seconds blurry?

    13 June 2016, by Sharon Gabriel

    The source file is AVI. The target file is MP4.
    The first 2-3 seconds are blurry; after that, the whole video is smooth and sharp until the end.

    Another sub-question: how come a 2.16 GB AVI file is only 1.34 MB after conversion with ffmpeg? It’s not part of a movie or anything; it’s a collection of screenshot images I captured in C# and then turned into an AVI video file with the AviFile library. And yet it went from 2.16 GB to 1.34 MB while keeping, I think, almost the same quality as the original AVI file, and the same duration of 2:20 minutes.

    About the blurry problem, this is my code where I set the ffmpeg arguments and configure the process:

    private void Convert()
    {
        try
        {
            // Allow the BackgroundWorker thread to touch the UI controls below.
            Control.CheckForIllegalCrossThreadCalls = false;

            if (ComboBox1.SelectedIndex == 3)
            {
                // Build the ffmpeg argument string; (char)34 is the double-quote
                // character, used to wrap paths that may contain spaces.
                strFFCMD = " -i " + (char)34 + InputFile + (char)34 +
                           " -c:v libx264 -s 1920x1080 -pix_fmt yuv420p -qp 18" +
                           " -profile high444 -c:a libvo_aacenc -b:a 128k" +
                           " -ar 44100 -ac 2 -y " + OutputFile;
            }

            psiProcInfo.FileName = exepath;
            psiProcInfo.Arguments = strFFCMD;
            psiProcInfo.UseShellExecute = false;
            psiProcInfo.WindowStyle = ProcessWindowStyle.Hidden;
            psiProcInfo.RedirectStandardError = true;   // ffmpeg reports progress on stderr
            psiProcInfo.RedirectStandardOutput = true;
            psiProcInfo.CreateNoWindow = true;

            prcFFMPEG.StartInfo = psiProcInfo;
            prcFFMPEG.Start();
            ffReader = prcFFMPEG.StandardError;

            do
            {
                if (Bgw1.CancellationPending)
                {
                    return;
                }
                Button5.Enabled = true;
                Button3.Enabled = false;

                strFFOUT = ffReader.ReadLine();
                RichTextBox1.Text = strFFOUT;

                // Extract the current frame number from ffmpeg's "frame=..." lines.
                if (strFFOUT != null)
                {
                    if (strFFOUT.Contains("frame="))
                    {
                        currentFramestr = strFFOUT.Substring(7, 6).Trim();
                        Regex rx = new Regex(@"^\d+");
                        Match m = rx.Match(currentFramestr);
                        if (m.Success)
                        {
                            currentFrameInt = System.Convert.ToInt32(m.Value);
                        }
                    }
                }

                // Update the progress UI from the parsed frame count.
                string percentage = ((double)ProgressBar1.Value / (double)ProgressBar1.Maximum * 100.0).ToString();
                textBox3.Text = ProgressBar1.Value.ToString();
                ProgressBar1.Maximum = FCount + 1;
                ProgressBar1.Value = currentFrameInt;
                Label12.Text = "Current Encoded Frame: " + currentFrameInt;
                Label11.Text = percentage;
            } while (!(prcFFMPEG.HasExited || string.IsNullOrEmpty(strFFOUT)));
        }
        catch (Exception err)
        {
            string errors = err.ToString();
        }
    }

    psiProcInfo is a ProcessStartInfo and prcFFMPEG is a Process.

    This is what the newly created MP4 file looks like during the first seconds of playback:

    Duration: 00:02:20

    Width: 1920, Height: 1080

    Data Rate and Total Rate: both 80 kbps

    Frame rate: 2 frames/second

    Blurry

    This is the output of the ffmpeg console while converting the file.

     ffmpeg version 2.8.git Copyright (c) 2000-2015 the FFmpeg developers
     built with gcc 5.2.0 (GCC)
     configuration: --enable-gpl --enable-version3 --disable-w32threads --enable-avisynth --enable-bzlib --enable-fontconfig --enable-frei0r --enable-gnutls --enable-iconv --enable-libass --enable-libbluray --enable-libbs2b --enable-libcaca --enable-libdcadec --enable-libfreetype --enable-libgme --enable-libgsm --enable-libilbc --enable-libmodplug --enable-libmp3lame --enable-libopencore-amrnb --enable-libopencore-amrwb --enable-libopenjpeg --enable-libopus --enable-librtmp --enable-libschroedinger --enable-libsoxr --enable-libspeex --enable-libtheora --enable-libtwolame --enable-libvidstab --enable-libvo-aacenc --enable-libvo-amrwbenc --enable-libvorbis --enable-libvpx --enable-libwavpack --enable-libwebp --enable-libx264 --enable-libx265 --enable-libxavs --enable-libxvid --enable-libzimg --enable-lzma --enable-decklink --enable-zlib
     libavutil      55. 11.100 / 55. 11.100
     libavcodec     57. 17.100 / 57. 17.100
     libavformat    57. 20.100 / 57. 20.100
     libavdevice    57.  0.100 / 57.  0.100
     libavfilter     6. 21.100 /  6. 21.100
     libswscale      4.  0.100 /  4.  0.100
     libswresample   2.  0.101 /  2.  0.101
     libpostproc    54.  0.100 / 54.  0.100
    [avi @ 00000147a882b660] Stream #0: not enough frames to estimate rate; consider increasing probesize
    Input #0, avi, from 'C:\temp\video\new.avi':
     Duration: 00:02:20.50, start: 0.000000, bitrate: 132710 kb/s
       Stream #0:0: Video: rawvideo, bgra, 1920x1080, 2 fps, 2 tbr, 2 tbn, 2 tbc
    Please use -profile:a or -profile:v, -profile is ambiguous
    Codec AVOption b (set bitrate (in bits/s)) specified for output file #0 (C:\temp\video\5.mp4) has not been used for any stream. The most likely reason is either wrong type (e.g. a video option with no video streams) or that it is a private option of some encoder which was not actually used for any stream.
    [libx264 @ 00000147a882c820] using cpu capabilities: MMX2 SSE2Fast SSSE3 SSE4.2
    [libx264 @ 00000147a882c820] profile High, level 4.0
    [libx264 @ 00000147a882c820] 264 - core 148 r2638 7599210 - H.264/MPEG-4 AVC codec - Copyleft 2003-2015 - http://www.videolan.org/x264.html - options: cabac=1 ref=3 deblock=1:0:0 analyse=0x3:0x113 me=hex subme=7 psy=1 psy_rd=1.00:0.00 mixed_ref=1 me_range=16 chroma_me=1 trellis=1 8x8dct=1 cqm=0 deadzone=21,11 fast_pskip=1 chroma_qp_offset=-2 threads=12 lookahead_threads=2 sliced_threads=0 nr=0 decimate=1 interlaced=0 bluray_compat=0 constrained_intra=0 bframes=3 b_pyramid=2 b_adapt=1 b_bias=0 direct=1 weightb=1 open_gop=0 weightp=2 keyint=250 keyint_min=2 scenecut=40 intra_refresh=0 rc=cqp mbtree=0 qp=18 ip_ratio=1.40 pb_ratio=1.30 aq=0
    Output #0, mp4, to 'C:\temp\video\5.mp4':
     Metadata:
       encoder         : Lavf57.20.100
       Stream #0:0: Video: h264 (libx264) ([33][0][0][0] / 0x0021), yuv420p, 1920x1080, q=-1--1, 2 fps, 16384 tbn, 2 tbc
       Metadata:
         encoder         : Lavc57.17.100 libx264
    Stream mapping:
     Stream #0:0 -> #0:0 (rawvideo (native) -> h264 (libx264))
    Press [q] to stop, [?] for help
    frame=    8 fps=0.0 q=0.0 size=       0kB time=00:00:00.00 bitrate=N/A speed=   0x    
    frame=   15 fps= 14 q=0.0 size=       0kB time=00:00:00.00 bitrate=N/A speed=   0x    
    frame=   21 fps= 13 q=18.0 size=      92kB time=00:00:00.00 bitrate=N/A speed=   0x    
    frame=   30 fps= 14 q=18.0 size=     141kB time=00:00:04.50 bitrate= 257.3kbits/s speed=2.03x    
    frame=   37 fps= 13 q=20.0 size=     164kB time=00:00:08.00 bitrate= 167.6kbits/s speed=2.82x    
    frame=   46 fps= 14 q=18.0 size=     185kB time=00:00:12.50 bitrate= 121.0kbits/s speed= 3.7x    
    frame=   51 fps= 13 q=19.0 size=     194kB time=00:00:15.00 bitrate= 106.1kbits/s speed=3.87x    
    frame=   58 fps= 13 q=18.0 size=     210kB time=00:00:18.50 bitrate=  93.2kbits/s speed=4.19x    
    frame=   65 fps= 13 q=20.0 size=     224kB time=00:00:22.00 bitrate=  83.6kbits/s speed=4.46x    
    frame=   71 fps= 13 q=19.0 size=     238kB time=00:00:25.00 bitrate=  78.1kbits/s speed=4.56x    
    frame=   78 fps= 13 q=18.0 size=     253kB time=00:00:28.50 bitrate=  72.6kbits/s speed=4.75x    
    frame=   83 fps= 13 q=19.0 size=     265kB time=00:00:31.00 bitrate=  70.0kbits/s speed= 4.7x    
    frame=   89 fps= 12 q=20.0 size=     280kB time=00:00:34.00 bitrate=  67.4kbits/s speed=4.73x    
    frame=   95 fps= 12 q=19.0 size=     291kB time=00:00:37.00 bitrate=  64.5kbits/s speed=4.73x    
    frame=  102 fps= 12 q=18.0 size=     308kB time=00:00:40.50 bitrate=  62.3kbits/s speed=4.84x    
    frame=  107 fps= 12 q=19.0 size=     317kB time=00:00:43.00 bitrate=  60.4kbits/s speed=4.82x    
    frame=  115 fps= 12 q=19.0 size=     336kB time=00:00:47.00 bitrate=  58.6kbits/s speed=4.96x    
    frame=  123 fps= 12 q=20.0 size=     354kB time=00:00:51.00 bitrate=  56.8kbits/s speed=5.09x    
    frame=  132 fps= 12 q=20.0 size=     371kB time=00:00:55.50 bitrate=  54.8kbits/s speed=5.25x    
    frame=  139 fps= 13 q=20.0 size=     392kB time=00:00:59.00 bitrate=  54.5kbits/s speed=5.32x    
    frame=  146 fps= 13 q=19.0 size=     408kB time=00:01:02.50 bitrate=  53.5kbits/s speed=5.37x    
    frame=  150 fps= 12 q=20.0 size=     417kB time=00:01:04.50 bitrate=  52.9kbits/s speed=5.28x    
    frame=  155 fps= 12 q=18.0 size=     428kB time=00:01:07.00 bitrate=  52.4kbits/s speed=5.25x    
    frame=  161 fps= 12 q=20.0 size=     441kB time=00:01:10.00 bitrate=  51.6kbits/s speed=5.26x    
    frame=  167 fps= 12 q=19.0 size=     462kB time=00:01:13.00 bitrate=  51.9kbits/s speed=5.29x    
    frame=  174 fps= 12 q=20.0 size=     483kB time=00:01:16.50 bitrate=  51.7kbits/s speed=5.33x    
    frame=  181 fps= 12 q=18.0 size=     614kB time=00:01:20.00 bitrate=  62.8kbits/s speed=5.36x    
    frame=  187 fps= 12 q=20.0 size=     763kB time=00:01:23.00 bitrate=  75.3kbits/s speed=5.35x    
    frame=  193 fps= 12 q=19.0 size=     852kB time=00:01:26.00 bitrate=  81.2kbits/s speed=5.36x    
    frame=  199 fps= 12 q=18.0 size=     865kB time=00:01:29.00 bitrate=  79.6kbits/s speed=5.37x    
    frame=  206 fps= 12 q=20.0 size=     932kB time=00:01:32.50 bitrate=  82.6kbits/s speed=5.39x    
    frame=  211 fps= 12 q=20.0 size=     943kB time=00:01:35.00 bitrate=  81.3kbits/s speed=5.38x    
    frame=  217 fps= 12 q=18.0 size=    1007kB time=00:01:38.00 bitrate=  84.1kbits/s speed=5.38x    
    frame=  223 fps= 12 q=20.0 size=    1175kB time=00:01:41.00 bitrate=  95.3kbits/s speed=5.38x    
    frame=  230 fps= 12 q=20.0 size=    1195kB time=00:01:44.50 bitrate=  93.7kbits/s speed=5.42x    
    frame=  235 fps= 12 q=18.0 size=    1205kB time=00:01:47.00 bitrate=  92.3kbits/s speed= 5.4x    
    frame=  241 fps= 12 q=20.0 size=    1222kB time=00:01:50.00 bitrate=  91.0kbits/s speed= 5.4x    
    frame=  247 fps= 12 q=19.0 size=    1232kB time=00:01:53.00 bitrate=  89.3kbits/s speed=5.39x    
    frame=  255 fps= 12 q=19.0 size=    1252kB time=00:01:57.00 bitrate=  87.7kbits/s speed=5.45x    
    frame=  260 fps= 12 q=20.0 size=    1274kB time=00:01:59.50 bitrate=  87.3kbits/s speed=5.44x    
    frame=  267 fps= 12 q=20.0 size=    1287kB time=00:02:03.00 bitrate=  85.7kbits/s speed=5.45x    
    frame=  272 fps= 12 q=18.0 size=    1304kB time=00:02:05.50 bitrate=  85.1kbits/s speed=5.43x    
    frame=  278 fps= 12 q=20.0 size=    1314kB time=00:02:08.50 bitrate=  83.8kbits/s speed=5.41x    
    frame=  281 fps= 12 q=-1.0 Lsize=    1376kB time=00:02:19.50 bitrate=  80.8kbits/s speed=5.76x    
    video:1372kB audio:0kB subtitle:0kB other streams:0kB global headers:0kB muxing overhead: 0.299861%
    [libx264 @ 00000147a882c820] frame I:2     Avg QP:15.00  size: 98930
    [libx264 @ 00000147a882c820] frame P:80    Avg QP:18.00  size:  7068
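
    Two warnings in that log point at concrete argument fixes: "-profile is ambiguous" (it needs a stream specifier, i.e. -profile:v), and the audio bitrate option went unused because the AVI input contains no audio stream at all. A sketch of the same conversion with those two warnings addressed follows; the file names are placeholders, and this only cleans up the arguments, it does not necessarily cure the blurriness:

        # -profile:v disambiguates the profile option, per the warning above;
        # the audio options are dropped because the input has no audio stream.
        ffmpeg -i input.avi -c:v libx264 -s 1920x1080 -pix_fmt yuv420p -qp 18 \
               -profile:v high444 -y output.mp4

    The log also explains the size sub-question: the input is uncompressed rawvideo (BGRA at 1920x1080, reported at 132710 kb/s), while the output is H.264, which compresses largely static screenshot frames extremely well.
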
  • Adventures In NAS

    1 January, by Multimedia Mike — General

    In my post last year about my out-of-control single-board computer (SBC) collection, which included my meager network attached storage (NAS) solution, I noted that:

    I find that a lot of my fellow nerds massively overengineer their homelab NAS setups. I’ll explore this in a future post. For my part, people tend to find my homelab NAS solution slightly underengineered.

    So here I am, exploring this in that future post. I’ve been in the home NAS game a long time, but have never had very elaborate solutions for such. For my part, I tend to take an obsessively reductionist view of what constitutes a NAS: any small computer with a pool of storage and a network connection, running the Linux operating system and the Samba file sharing service.


    Simple hard drive and ethernet cable

    Many home users prefer to buy turnkey boxes, which usually let you install the hard drives yourself and then configure the box and its services with a friendly UI. My fellow weird computer nerds often buy cast-off enterprise hardware and set up more resilient, over-engineered solutions, as long as they have strategies to mitigate the noise and dissipate the heat, and don’t mind the electricity bills.

    If it works, awesome! As an old hand at this, however, I am rather stuck in my ways, preferring to do my own stunts with both the hardware and the software.

    My History With Home NAS Setups
    In 1998, I bought myself a new computer — a beige box tower PC, as was the style at the time. This was when normal people had one computer at most. It ran Windows, but I was curious about this new thing called “Linux” and learned to dual boot that. Later that year, it dawned on me that nothing prevented me from buying a second ugly beige box PC and running Linux exclusively on it. Further, it could be a headless Linux box, connected by ethernet, and I could consolidate files into a single place using this file sharing software named Samba.

    I remember it being fairly onerous to get Samba working in those days, and the internet was not nearly as helpful back then. The thing that blocked me for a while was needing to know that I had to add an entry for the Samba server machine to the LMHOSTS (LAN Manager hosts) file on the Windows 95 machine.

    However, after I cracked that code, I have pretty much always had some kind of ad-hoc home NAS setup, often combined with a headless Linux development box.

    In the early 2000s, I built a new beige box PC for a file server, with a new hard disk, and a coworker tutored me on setting up a (P)ATA UDMA 133 (or was it 150? anyway, it was (P)ATA’s last hurrah before SATA conquered all) expansion card, and I remember profiling that the attached hard drive worked at a full 21 MBytes/s reading. It was pretty slick. Except I hadn’t really thought things through. You see, I had a hand-me-down ethernet hub, cast off from my job at the time, which I wanted to use. It was a 100 Mbps repeater hub, not a switch, so the catch was that all connected machines had to be capable of 100 Mbps. So, after getting all of my machines (3 at the time) upgraded to support 10/100 ethernet (the old off-brand PowerPC running Linux was the biggest challenge), I profiled transfers and realized that the best this repeater hub could achieve was about 3.6 MBytes/s. For a long time after that, I just assumed that was the upper limit of what a 100 Mbps network could achieve. Obviously, I now know that the upper limit ought to be around 11.2 MBytes/s (100 Mbps is 12.5 MBytes/s of raw line rate, and ethernet plus TCP/IP framing overhead eats the rest), and if I had gamed out that fact in advance, I would have realized it didn’t make sense to care about super-fast (for the time) disk performance.

    At this time, I was doing a lot of development for MPlayer/xine/FFmpeg. I stored all of my multimedia material on this NAS. I remember being confused when I was working with Y4M data, which is raw frames, i.e., lots of data. xine, which employed a pre-buffering strategy, would play fine for a few seconds and then stutter. Eventually, I reasoned out that the files I was working with had a data rate about twice what my awful repeater hub supported, which is probably the first time I came to really understand and respect streaming speeds and their implications for multimedia playback.

    Smaller Solutions
    For a period, I didn’t have a NAS. Then I got an Apple AirPort Extreme, which I noticed had a USB port. So I bought a dual drive brick to plug into it and used that for a time. Later (2009), I had this thing called the MSI Wind Nettop, which is the only PC I’ve ever seen that can use a CompactFlash (CF) card for a boot drive. So I did just that, and installed a large drive so it could function as a NAS, as well as a headless dev box. I’m still amazed at what a low-power I/O beast this thing is, at least when compared to all the ARM SoCs I have tried in the intervening 1.5 decades. I’ve had spinning hard drives in this thing that could read at 160 MBytes/s (“dd” method) and have no trouble saturating the gigabit link at 112 MBytes/s, all with its early Intel Atom CPU.
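
    For reference, the “dd” method mentioned here is typically an invocation along these lines; the device name is a placeholder, reading a large chunk straight off the raw disk sidesteps filesystem overhead, and dd prints the achieved throughput when it finishes:

        # Read 4 GiB directly from the drive and discard it; /dev/sda is hypothetical.
        dd if=/dev/sda of=/dev/null bs=1M count=4096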

    Around 2015, I wanted a more capable headless dev box and discovered Intel’s line of NUCs. I got one of the fat models that can hold a conventional 2.5″ spinning drive in addition to the M.2 SATA SSD and I was off and running. That served me fine for a few years, until I got into the ARM SBC scene. One major limitation here is that 2.5″ drives aren’t available in nearly the capacities that make a NAS solution attractive.

    Current Solution
    My current NAS solution, chronicled in my last SBC post, is the ODroid-HC2, a highly compact ARM SoC board with an integrated USB3-SATA bridge so that a SATA drive can be connected directly to it:


    ODROID-HC2 NAS


    I tend to be weirdly proficient at recalling dates, so I’m surprised that I can’t recall when I ordered this and put it into service. But I’m pretty sure it was circa 2018. It’s only equipped with an 8 TB drive now, but I seem to recall that it started out with only a 4 TB drive. I think I upgraded to the 8 TB drive early in the pandemic in 2020, when ISPs were implementing temporary data cap amnesty and I was doing what an r/DataHoarder does.

    The HC2 has served me well, even though it has a number of shortcomings as hardware chartered for NAS duty:

    1. While it has a gigabit ethernet port, it’s documented that it never really exceeds about 70 MBytes/s, due to the SoC’s limitations
    2. The specific ARM chip (Samsung Exynos 5422; more than a decade old as of this writing) lacks cryptography instructions, slowing down encryption if that’s your thing (e.g., LUKS)
    3. While the SoC supports USB3, that block is tied up for the SATA interface; the remaining USB port is only capable of USB2 speeds
    4. 32-bit ARM, which prevented me from running certain bits of software I wanted to try (like Minio)
    5. Only 1 drive, so no possibility for RAID (again, if that’s your thing)

    I also love to brag about the HC2’s power usage: I once profiled the unit for a month using a Kill-A-Watt, under normal usage (with the drive spinning only when in active use), and it consumed 4.5 kWh… in an entire month. That works out to an average draw of roughly 6 watts.

    New Solution
    Enter the ODroid-HC4 (I purchased mine from Ameridroid but Hardkernel works with numerous distributors):


    ODroid-HC4 with an SSD and a conventional drive


    I ordered this earlier in the year and, after many months of procrastinating and obsessing over the best approach to take with its general usage, I finally have it in service as my new NAS. Comparing point by point with the HC2:

    1. The gigabit ethernet runs at full speed (though a few things on my network run at 2.5 GbE now, so I guess I’ll always be behind)
    2. The ARM chip (Amlogic S905X3) has AES cryptography acceleration and handles all the LUKS stuff without breaking a sweat; “cryptsetup benchmark” reports 500-600 MBytes/s on all the AES variants
    3. The USB port is still only USB2, so no improvement there
    4. 64-bit ARM, which means I can run Minio to simulate block storage in a local dev environment for some larger projects I would like to undertake
    5. Supports 2 drives, if RAID is your thing

    How I Set It Up
    How to set up the drive configuration? As should be apparent from the photo above, I elected for an SSD (500 GB) for speed, paired with a conventional spinning HDD (18 TB) for sheer capacity. I’m not particularly trusting of RAID. I’ve watched it fail too many times, on systems that I don’t even manage, not to mention that aforementioned RAID brick that I had attached to the Apple AirPort Extreme.

    I had long been planning to use bcache, the block caching interface for Linux, which can use the SSD as a speedy cache in front of the more capacious disk. There is also LVM cache, which is supposed to achieve something similar. And then I had to evaluate the trade-offs in whether I wanted write-back, write-through, or write-around configurations (write-back acknowledges a write once it hits the SSD, write-through waits until it also reaches the HDD, and write-around caches only reads).
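
    For the curious, a bcache pairing is typically created along these lines; the device names are placeholders, and this is a sketch of the general mechanism rather than this machine's actual configuration:

        # The HDD (/dev/sda, hypothetical) becomes the backing device and the
        # SSD (/dev/sdb, hypothetical) the cache; created together, they attach
        # automatically and surface as a combined /dev/bcache0 device.
        make-bcache -B /dev/sda -C /dev/sdb
        mkfs.ext4 /dev/bcache0                 # filesystem goes on the bcache device
        # Pick a caching policy (write-back, write-through, or write-around):
        echo writearound > /sys/block/bcache0/bcache/cache_mode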

    This was all predicated on the assumption that the spinning drive would not be able to saturate the gigabit connection. When I got around to setting up the hardware and trying some basic tests, I found that the conventional HDD had no trouble keeping up with the gigabit data rate, both reading and writing, somewhat obviating the need for SSD acceleration using any elaborate caching mechanisms.

    Maybe that’s because I sprung for the WD Red Pro series this time, rather than the Red Plus? I’m guessing that conventional drives do deteriorate over the years. I’ll find out.

    For the operating system, I stuck with my newest favorite Linux distro: DietPi. While HardKernel (parent of ODroid) makes images for the HC units, I had also used DietPi for the HC2 for the past few years, as it tends to stay more up to date.

    Then I rsync’d my data from HC2 -> HC4. It was only about 6.5 TB of total data but it took days as this WD Red Plus drive is only capable of reading at around 10 MBytes/s these days. Painful.

    For file sharing, I’m pretty sure most normal folks have nice web UIs in their NAS boxes which allow them to easily configure and monitor the shares. I know there are such applications I could set up. But I’ve been doing this so long, I just do a bare-bones setup through the terminal. I installed regular Samba and then brought over my smb.conf file from the HC2. One by one, I tested that each of the old shares was activated on the new NAS and deactivated on the old NAS. I also set up a new share for the SSD. I guess that will just serve as a fast I/O scratch space on the NAS.
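
    For anyone unfamiliar, a share is only a few lines in smb.conf; a minimal sketch of what the new SSD scratch share might look like (the share name, path, and user are assumptions, not the actual configuration):

        # Hypothetical share definition for the SSD scratch space.
        [scratch]
           path = /mnt/ssd/scratch
           read only = no
           guest ok = no
           valid users = mike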

    The conventional drive spins up and down. That’s annoying when I’m actively working on something but manage not to hit the drive for 5 minutes or so, and then an application blocks while the drive wakes up. I suppose I could set it up so that it is always running. However, I micro-manage this with a custom bash script I wrote a long time ago, which logs into the NAS and runs the “date” command every 2 minutes, appending the output to a file. As a bonus, it also prints data rate up/down stats every 5 seconds. The spinning file (“nas-main/zz-keep-spinning/keep-spinning.txt”) has never been cleared and has nearly a quarter million lines. I suppose that implies that it has kept the drive spinning for 1/2 million minutes, which works out to around 347 total days. I should compare that against the drive’s SMART stats, if I can remember how. The earliest timestamp in the file is from March 2018, so I know the HC2 NAS has been in service at least that long.
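
    A minimal sketch of that keep-spinning script, reconstructed from the description above (the host name is a placeholder, and the original's up/down data-rate display is omitted):

        #!/bin/bash
        # Append a timestamp on the NAS every 2 minutes so the drive never
        # idles long enough to spin down. Host and path are placeholders.
        NAS=nas-main
        SPINFILE=zz-keep-spinning/keep-spinning.txt
        while true; do
            ssh "$NAS" "date >> $SPINFILE"
            sleep 120
        done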

    For tasks, vintage cron still does everything I could need. In this case, that means reaching out to websites (like this one) and automatically backing up static files.
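
    A representative crontab entry for that kind of job might look like the following; the schedule, URL, and destination directory are all made up for illustration:

        # Mirror a site's static files nightly at 03:30 (all specifics hypothetical).
        30 3 * * * wget --mirror --no-parent --quiet -P /mnt/hdd/backups https://example.com/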

    I also need a special script for starting up. Fortunately, I was able to bring this over from the HC2 and tweak it. The data disks (though not the boot disk) are encrypted. Those need to be unlocked, and only then is it safe for the Samba and Minio services to start up. So one script does all that heavy lifting in the rare case of a reboot (this is the type of system that’s well worth having on a reliable UPS).
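
    A sketch of the shape such a startup script takes, assuming LUKS-encrypted partitions (the device names, mapper names, mount points, and Minio service name are all placeholders):

        #!/bin/bash
        # Unlock the encrypted data disks, mount them, and only then start the
        # services that depend on them. All names below are hypothetical.
        cryptsetup open /dev/sda1 data_hdd     # prompts for the passphrase
        cryptsetup open /dev/sdb1 data_ssd
        mount /dev/mapper/data_hdd /mnt/hdd
        mount /dev/mapper/data_ssd /mnt/ssd
        systemctl start smbd                   # Samba
        systemctl start minio                  # hypothetical Minio unit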

    Further Work
    I need to figure out how to use the OLED display on the NAS, and how to make it show something more useful than the current time and date, which is what it does in its default configuration with HardKernel’s own Linux distro. With DietPi, it does nothing by default. I’m thinking it should be able to show the percent usage of each of the 2 drives, at a minimum.

    I also need to establish a more responsible backup regimen. I’m way too lazy about this. Fortunately, I reason that I can keep the original HC2 in service, repurposed to accept backups from the main NAS. Again, I’m sort of micro-managing this since a huge amount of data isn’t worth backing up (remember the whole DataHoarder bit), but the most important stuff will be shipped off.

    The post Adventures In NAS first appeared on Breaking Eggs And Making Omelettes.