• News

    Which web development languages to learn in 2019?

    In 2019, as every year, the web and mobile development sectors are evolving, particularly when it comes to programming languages. Which are currently the most widely used, and why? How do you choose the one that matches your needs?

    In this article, you will find a list of the languages that are most commonly used and learned in the development world.

    JavaScript

    JavaScript has been at the top of the list for several years now. Why? Used on both the front end and the back end, it allows the creation of websites and web applications and offers a large number of features that speed up the development process.

    Python

    Python is an object-oriented programming language with well-established web development frameworks, such as Django. Very easy to pick up, it is a language that suits creative people and is particularly appropriate for developing high-traffic websites.

    C++

    Derived from the C language (one of the oldest programming languages), C++ is relatively easy to learn. It has a minimal core of keywords but allows a large number of possible combinations.

    PHP

    We cannot fail to mention this programming language, used by the majority of CMSs and by websites of all types. PHP can very easily be embedded in an HTML page before it is sent to the browser. It is straightforward to use, and you can build a solid foundation in PHP in just a few months.

    SQL (Structured Query Language)

    This language is used to work with relational databases. What exactly does that mean? It means that SQL lets you store your website's information reliably. With SQL, you can create tables; extract, delete, or modify data; and define user access permissions. Its syntax follows precise rules.
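
    To make this concrete, here is a minimal sketch of those operations using Python's built-in sqlite3 module. The users table and its columns are invented purely for illustration; user access permissions (GRANT statements) would require a server database such as MySQL or PostgreSQL rather than SQLite.

        import sqlite3

        # Open an in-memory database for the example.
        conn = sqlite3.connect(":memory:")
        cur = conn.cursor()

        # Create a table.
        cur.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT, email TEXT)")

        # Add data.
        cur.execute("INSERT INTO users (name, email) VALUES (?, ?)",
                    ("Alice", "alice@example.com"))

        # Extract data.
        cur.execute("SELECT id, name FROM users WHERE email LIKE ?", ("%@example.com",))
        print(cur.fetchall())

        # Modify data.
        cur.execute("UPDATE users SET name = ? WHERE id = ?", ("Alicia", 1))

        # Delete data.
        cur.execute("DELETE FROM users WHERE id = ?", (1,))

        conn.commit()
        conn.close()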

    Ruby

    Simple, intuitive, and flexible, this object-oriented programming language is increasingly popular with web developers. Its flexibility shows in many respects, in particular in its syntax (parentheses are often optional). It also comes with a particularly well-equipped framework, which has contributed to the language's rise in popularity in recent years.

    Choose the programming language adapted to your needs

    Are you looking to develop websites or web applications? Mobile applications? Of what type? It all depends on your field of activity and the time you are willing to spend learning the language.

    As seen above, some technologies are harder to pick up than others. If you are working against the clock, focus on Ruby or Python. But perhaps they do not suit the projects you want to carry out? It is up to you to weigh the trade-off. Why not ask experienced developers for advice?

• News

    1&1 IONOS Web Hosting Review

    Here is our review of the German hosting company 1&1, now called IONOS, one of the cheapest hosts on the European market.

    Finding the right web hosting for your website is something that should not be done lightly.

    It is essential to take the time to compare the different hosts with each other.

    In particular, you must solve the tricky equation of finding the right balance between high performance and an attractive price. If you are looking for a complete review of 1&1 IONOS, please read on!

    By the time you finish reading this article, you will know whether or not this is the host that can meet all your requirements.

    Presentation of 1&1 IONOS hosting

    1&1 was founded in Germany in 1988, making it one of the oldest companies in the sector.

    It is now considered one of the leading brands in its field, mainly because it is present in about ten countries, including France, Spain, the United Kingdom, and the United States.

    Its other strength is its particularly aggressive pricing.

    This is why many website managers choose the 1&1 web host: to be sure of benefiting from an excellent price/quality ratio.

    As we will tell you below, 1&1 develops many solutions, and continually improves them.

    There is, therefore, a good chance that you will find what you are looking for with this brand, which does everything possible to meet the ever-growing needs of each of its customers as precisely as possible.

    Hosting packs offered by 1&1 IONOS

    If one thing is certain, it is that 1&1 has developed a truly comprehensive hosting offer.

    By choosing this web host, you are very likely to find what you need, since you will be able to choose between the following services.

    MyWebsite hosting: If your needs are limited and you want to create a lightweight website or blog of a few pages, then MyWebsite hosting may be right for you. In addition to hosting your site, you will be able to fully customize and manage it thanks to integrated software that spares you from diving into complicated and time-consuming programming. In other words, 1&1 MyWebsite hosting is perfect for beginners.

    Web hosting: From the Starter pack to the Unlimited Pro pack, 1&1 offers a hosting solution that can be adjusted to your needs. You can automatically switch from one pack to another depending on the performance of your website. Whether you have an e-commerce site under Prestashop or a blog under WordPress, for example, this solution can be an attractive way to get started at a lower cost.

    Linux hosting: More specifically, shared Linux hosting will be useful if your website has been designed to run on this operating system. Be careful, however: hosting, whether it runs on Linux or Windows, is independent of your computer's operating system. It is therefore entirely possible to use 1&1 Linux hosting even if your PC runs Windows. In terms of performance and plans, it is identical to the web hosting described above.

    WordPress hosting: If you are planning to create a site in WordPress, then 1&1 WordPress hosting is the right one for you. With packages identical to traditional hosting, you can install the latest version of the CMS and a selection of the best plugins in just a few clicks. After a few minutes, you can already start working on your site without having to be an expert in web development.

    All these services are available whichever 1&1 IONOS hosting plan you select, and one thing is certain: they will significantly improve your efficiency.

    There are more options available if you are after VPS, cloud, or dedicated hosting solutions, but we stuck to shared plans in this review! Find all their offers here.

    The pricing of 1&1 IONOS website hosting

    The rates offered by 1&1 are among the most attractive in the sector.

    Indeed, with web hosting available from €0.99 per month, you can launch your project and test its viability at minimal cost.

    Of course, if your needs change, you can upgrade to a more powerful hosting plan or even a dedicated server; prices can reach €120 per month for an ultra-powerful machine.

• News

    Linux Kernel 4.18: Lighter & More Secure Code

    The latest version of the Linux kernel removes nearly 100,000 lines of code and adds file-system encryption options and Berkeley Packet Filter-based filtering, among numerous other improvements.

    While the most significant changes may leave the developers who have been waiting for them speechless, others could easily escape the notice of most Linux users. So that nothing is missed, here is a selection of the new features brought by the Linux 4.18 kernel.

    A vast cleaning of the code

    The Linux 4.18 kernel has eliminated almost 100,000 lines of obsolete code. That's a lot! However, this doesn't mean that some of your favorite features have disappeared: this meticulous cleaning removed only unused code, and it seems there was plenty of it.

    As a result, the new kernel should occupy less memory, run slightly more efficiently and be less vulnerable to attacks that might be taking advantage of the neglected parts of the old code. The remaining code is also a little cleaner and easier to handle.

    BPFILTER to reinforce network security

    BPFILTER, a name derived from Berkeley Packet Filter, is another exciting feature of this kernel. BPF was originally used to filter packets for tools such as tcpdump. Its introduction in Linux 4.18 could eventually replace both iptables and nftables. BPF also extends the capabilities of Linux in several areas, in particular network security, load balancing, and performance monitoring, without hurting performance.

    We can therefore say that BPFILTER represents a fundamental change for networking and security. The transition from the previous technology should be straightforward: BPFILTER provides simple tooling for translating iptables rules into its own rule format, offers the flexibility to decide where the filtering rules are applied (perhaps on the network interface itself), and opens the way to modernizing and revitalizing the network stack.

    Speck file system encryption

    Since Linux version 4.18, FSCRYPT, which operates at the file system level rather than the block device level, supports the Speck128/Speck256 encryption algorithms.

    Although Speck has raised some concerns, because it was developed by the National Security Agency (NSA) and the U.S. agency may not have shared all of its design rationale, it offers a new option for file system encryption. It may go unused, but it appears to be the only viable encryption option for low-end Android devices and may therefore play an important role.

    Improved performance

    A mechanism for non-blocking concurrency control in user space, the so-called "restartable sequences" (rseq), is included in the new kernel. Calls to the rseq system call allow faster operations in user space, as demonstrated by some micro-benchmarks, and bring improvements to the related kernel code. These changes should translate into a substantial increase in overall system performance.

    Another improvement in Linux 4.18 is support for USB 3.2 transfers, which accelerates data transfers between USB devices and hosts over USB Type-C cables. The Linux 4.18 kernel also gains a kernel polling interface for asynchronous I/O: it lets an application poll a set of file descriptors to determine which of them can perform I/O without blocking, which also improves performance. The new kernel further includes a zero-copy TCP receive API and support for high-performance AF_XDP sockets, which promise to improve throughput while saving CPU cycles and memory bandwidth.

    In conclusion, the Linux 4.18 kernel brings many improvements in performance, security, and functionality that Linux users should appreciate. They can also be reassured about the year-2038 deadline!

  • News

    The DevOps tool GitLab 11.1 arrives with enhanced security controls

    GitLab, the DevOps platform based on the Git version control system, has been upgraded to version 11.1.

    The first visible change in GitLab 11.1 is the security dashboard, which now lets you view the latest security status of the default branch for each project. It enables security teams to see whether there is a potential problem and to take appropriate action.

    The dashboard also allows you to dismiss false positives or create dedicated issues to resolve vulnerabilities. The security team can also adjust the criticality level of a vulnerability. The dashboard is found under the Project menu in the project's side navigation.

    GitLab version 11.1 offers better security control options.

    GitLab version 11.1 also includes Static Application Security Testing (SAST) tools for Node.js, which detect code vulnerabilities when sending changes to a repository.

    SAST support was already available for C, C++, Go, Java, and Python. Code search has also been extended, with advanced search syntax for filtering by file name, path, and extension.

    Upgrade to version 11.1

    Performance also improves with version 11.1.

    For example, pagination has been corrected for webhooks (POST requests), ensuring that the page display is not interrupted while editing these webhooks. Delivered with GitLab, the Runner tool – used to perform CI/CD continuous integration and continuous delivery tasks – is also upgraded to version 11.1, providing better management of Docker idle times and the ability to sign packages in RPM and DEB formats.

    The configurable issue board is now accessible via the GitLab API, which allows for custom workflows.
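
    As a rough sketch of what that enables, the snippet below lists a project's issue boards using Python's requests library. The instance URL, project ID, and token are placeholders, and the exact response fields should be checked against the GitLab API documentation for your version.

        import requests

        # Placeholder values: point these at your own GitLab instance and project.
        GITLAB_API = "https://gitlab.example.com/api/v4"
        PROJECT_ID = 42
        HEADERS = {"PRIVATE-TOKEN": "<your-access-token>"}

        # List the project's issue boards through the API.
        resp = requests.get(f"{GITLAB_API}/projects/{PROJECT_ID}/boards", headers=HEADERS)
        resp.raise_for_status()

        for board in resp.json():
            # Print each board's id and name; a custom workflow could go further,
            # for example reading a board's lists and moving issues between them.
            print(board.get("id"), board.get("name"))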

    Transferring projects between namespaces is now also possible via the API. In addition, the GitLab 11.1 user interface benefits from several improvements. First, the developers revised the merge request widget. Second, the contribution analytics page is more readable. A new merge request panel in the Web IDE displays the merge request side by side with the code.

    A drop-down menu has been added to the group link at the top of the navigation, making it easier to switch from one group to another. The milestone summary pages have been redesigned, a first step in simplification work intended to make team tracking easier. An issue can be marked as confidential directly from the comment field. Finally, the Kubernetes page now uses tabs for each option when adding a cluster.

    GitLab version 11.1 is available to try for on-premises or public cloud deployment.

  • News

    Scale Stack and Database Proxy Prototype

    Back in January when I was between jobs I had a free weekend to do some fun hacking. I decided to start a new open source project that had been brewing in the back of my head and since then have been poking at it on the weekends and an occasional late night. I decided to call it Scale Stack because it aims to provide a scalable network service stack. This may sound a bit generic and boring, but let me show a graph of a database proxy module I slapped together in the past couple days:

    Database Proxy Graph

    I set up MySQL 5.5.2-m2 and ran the sysbench read-only tests against it with 1-8192 threads. I then started up the database proxy module built on Scale Stack so sysbench would route through it, and you can see the concurrency improved quite a bit at higher thread counts. The database module doesn't do much; it simply does connection concentration, mapping M to N connections, where N is a fixed parameter given at startup.

    In this case I always mapped all incoming sysbench connections down to 128 connections between Scale Stack and MySQL. It also uses a fixed number of threads and is entirely non-blocking. As you can see the max throughput around 64 threads is a bit lower, but I’ve not done much to optimize this yet (there should be some easy improvements where I simply stuck in a mutex instead of doing a lockless queue). It’s only a simple proof-of-concept module to see how well this would work, but it’s a start to a potentially useful module built on the other Scale Stack components. One other thing to mention is that these tests were run on a single 16-core Intel machine. I’d really like to test this with multiple machines at some point.
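
    Scale Stack itself is C++, but the connection-concentration idea is easy to sketch. The toy asyncio proxy below funnels any number of client connections through a fixed pool of N upstream connections; the host, ports, and pool size are placeholders, it relays a single request/response per client, and it does not speak the MySQL protocol, so treat it as an illustration of the concept rather than of the actual module.

        import asyncio
        from functools import partial

        UPSTREAM_HOST, UPSTREAM_PORT = "127.0.0.1", 3306   # placeholder backend
        LISTEN_PORT = 4040                                  # placeholder listen port
        POOL_SIZE = 128                                     # N: fixed upstream connections

        async def handle_client(pool, client_reader, client_writer):
            # Each client borrows one of the N upstream connections, so M client
            # connections are concentrated onto N backend connections.
            data = await client_reader.read(4096)
            if data:
                up_reader, up_writer = await pool.get()
                try:
                    up_writer.write(data)
                    await up_writer.drain()
                    reply = await up_reader.read(4096)
                    client_writer.write(reply)
                    await client_writer.drain()
                finally:
                    pool.put_nowait((up_reader, up_writer))
            client_writer.close()

        async def main():
            # Open the fixed pool of upstream connections before accepting clients.
            pool = asyncio.Queue()
            for _ in range(POOL_SIZE):
                reader, writer = await asyncio.open_connection(UPSTREAM_HOST, UPSTREAM_PORT)
                pool.put_nowait((reader, writer))

            server = await asyncio.start_server(partial(handle_client, pool),
                                                "0.0.0.0", LISTEN_PORT)
            async with server:
                await server.serve_forever()

        if __name__ == "__main__":
            asyncio.run(main())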

    So, what is Scale Stack?

    Check out the website for a simple overview of what it is. The goal is to pick up where the operating system kernel leaves off with the network stack. It is written in C++ and is extremely modular, with only the module loader, option parsing, and basic logging in the kernel library. It uses Monty Taylor’s pandora-build autoconf files to provide a sane modular build system, along with some modifications I made so dependency tracking is done between modules. You can actually use it to write modules that would do anything; I’m just most interested in network service based modules.

    The kernel/module loader is also just a library, so you can actually embed this into existing applications as well. Some of the modules I’ve written for it are a threaded event handling module based on libevent/pthreads and a TCP socket module. There is also an echo server and simple proxy module I created while testing the event and socket modules. The database proxy module builds on top of the event and socket module. The code is under the BSD license and is up on Launchpad, so feel free to check it out and contribute. If you need a base to build high-performance network services on, you should definitely take a look and talk with me.

    What’s up next?

    I have a long list of things I would like to do with this, but first up are still some basics. This includes other socket type modules like TLS/SSL, UDP, and Unix sockets. Then are some more protocol modules such as Drizzle, a real MySQL protocol module, and others like HTTP, Gearman, and memcached. It’s fairly trivial to write these since the socket modules handle all buffering and provide a simple API. As for the DatabaseProxy module, I’d like to rework how things are now so it’s not MySQL protocol specific, integrate other protocol modules, improve performance, add in multi-tenancy support for quality-of-service queuing based on account rules, and a laundry list of other features I won’t bore you with right now.

    I also have plans for other services besides a database proxy, especially one that could combine a number of protocols into a generic URI server with pluggable handlers so you can do some interesting translations between modules (like Apache httpd but not http-centric). For example, think of the crazy things you can do with Twisted for Python, but now with a fast, threaded C++ kernel. I also still need to experiment with live reloading of modules, but I’m not sure if this will be worthwhile yet.

    If any of this sounds interesting, get in touch, I’d love to have some help! I’ll have some blog posts later on how to get started writing modules, but for now just take a look at the existing modules. The EchoServer is a good place to start since it is pretty simple. Also, if you’ll be at the MySQL Conference and Expo next week, I’d be happy to talk more about it then.

  • News

    SlackDB Updates

    Since I announced SlackDB a few weeks ago, I’ve had a number of questions and interesting conversations in response. I thought I would summarize the initial feedback and answer some questions to help clarify things. One of the biggest questions was “Isn’t this what Drizzle is doing?”, and the answer is no. They are both being designed for “the cloud” and speak the MySQL protocol, but they provide very different guarantees around consistency and high-availability. The simple answer is that SlackDB will provide true multi-master configurations through a deterministic and idempotent replication model (conflicts will be resolved via timestamps), where Drizzle still maintains transactions and ACID properties, which imply single master. Drizzle could add support for clustered configurations and distributed transactions (like the NDB storage engine), but writes would still happen on the majority (maintain quorum) since the concept of global state needs to be maintained.
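
    To illustrate the kind of replication model being described (this is not SlackDB's code, which is being written in Erlang), here is a tiny last-write-wins sketch in Python: each row carries a timestamp, merges deterministically keep the newer version, and re-applying or re-ordering updates converges to the same state, which is what makes the model idempotent.

        # Illustrative last-write-wins merge, not SlackDB code: the point is that
        # conflict resolution via timestamps is deterministic and idempotent.

        def merge_row(local, remote):
            """Each row is (value, timestamp). The newer timestamp wins; ties break
            on the value so every replica picks the same winner."""
            if remote[1] > local[1] or (remote[1] == local[1] and remote[0] > local[0]):
                return remote
            return local

        def apply_updates(table, updates):
            """Apply (key, value, timestamp) updates. Re-applying the same batch, or
            applying batches in a different order, yields the same table."""
            for key, value, ts in updates:
                incoming = (value, ts)
                table[key] = merge_row(table.get(key, incoming), incoming)
            return table

        # Two replicas see the same updates in different orders...
        a = apply_updates({}, [("x", "v1", 10), ("x", "v2", 12)])
        b = apply_updates({}, [("x", "v2", 12), ("x", "v1", 10)])
        assert a == b == {"x": ("v2", 12)}   # ...and converge to the same state.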

    This led Mark Callaghan to ask why not just modify Drizzle to support these behaviors? He has a good point since most of the properties I’m talking about exist at the storage engine level. There are still a number of changes that would need to happen in the kernel around catalog, database, and table creation to support the replication model. SlackDB also won’t need a number of constructs provided by the Drizzle kernel (various locks, transaction support) so query processing can be lighter-weight. So while it’s probably possible with enough patches and plugins to make this work in Drizzle, I believe it will be easier (both socially and technically) to do this from scratch. With either approach there is still a fair amount of code to be written, and I’ve decided to use Erlang since it allows programmers to express ideas concisely and more quickly with an acceptable trade-off in runtime efficiency. This would make it even more difficult to integrate with Drizzle.

    A couple folks asked why I chose the BSD license instead of GPL or Apache. I didn’t want a copyleft license, so GPL was out, but after chatting some more I decided to switch SlackDB to the Apache 2.0 license for the patent protection clause. As much as I dislike patents and would prefer not to acknowledge them, I figured having the protection clauses in there would make it less likely that anyone using the software would have to deal with them once there are other contributors who may hold patents.

    I presented the techniques I’m using behind SlackDB in a session at OpenSQL Camp Boston last weekend, and overall they were well received. There was a lot of great feedback and suggestions about other projects and libraries doing related things that may help speed things along. I was glad to see I wasn’t the only person thinking about these properties for relational databases, as Josh Berkus of PostgreSQL fame also led a session on ordering events and conflict resolution within relational data when you loosen up consistency.

    I also attended Surge in Baltimore and listened to a talk by Justin Sheehy about “Embracing Concurrency At Scale.” You can see another recording of the same talk here. Justin explained the concepts and problems with systems trying to maintain any kind of globally consistent state quite well, and I agree with almost everything in his presentation. This recent blog post by Coda Hale also explains some of the other key principles around what you must give up in order to get the level of availability required by most systems these days. These help explain the reasons why I started SlackDB – I’m trying to combine these properties with a relational data model. Right now I’m still only able to put my limited spare time into it, but I’m hoping to find a way to put more time into the project. Hopefully you will agree we need a database like this and will help out too. 🙂

• News

    Walmart partners with Microsoft (to fight Amazon)

    Walmart and Microsoft announced a five-year technology partnership to fight against Amazon’s influence.

    At Microsoft’s Inspire conference in Las Vegas, held July 14-18, Walmart revealed that the two companies have sealed a strategic partnership. Built around Microsoft’s cloud computing services, the agreement allows Walmart to benefit from the Azure application platform and Microsoft 365. The two companies will also work on new projects focused on artificial intelligence and on improving services using machine learning. For example, artificial intelligence solutions could help Walmart personalize its marketing campaigns for individual Internet users.

    Walmart is Amazon’s biggest retail competitor, while Microsoft is Amazon’s biggest rival when it comes to cloud services.

    It was therefore logical that the two companies finally joined forces to try to limit the power of their common enemy.

    Nevertheless, it is important to remember that the partnership comes at a time when Microsoft has expressed interest in designing technology to rival that operated in Amazon Go physical stores. Opened to the public in January 2018, Amazon’s first such store successfully replaced checkout staff with ambitious technologies based on artificial intelligence. Customers no longer have to wait in line because cash registers no longer exist: you now pay with your Amazon account and your mobile phone.

    Microsoft, for its part, is reportedly looking at similar technology that could use cameras attached to shopping carts in stores. To date, the firm has already hired a computer vision specialist away from Amazon. Moreover, Bill Gates’ company is said to be in talks with Walmart, which would certainly have an interest in the project, since it would enable it to catch up with Bezos’ company.

    However, this possibility was not mentioned when the partnership was officially announced.