On the steady state mailing list someone...

13 June 2014

On the steady state mailing list someone posted that resource consumption misses the point of steady state economics, which should be defined as a steady population and constant output.

Here is my reply:

As for defining steady state in terms of resource consumption/waste on balance with resource availability/waste sinks, I do not see any other useful definition of a steady state. Population can stay the same while consuming more resources more unsustainably (until a crash) and some hazy economic "aggregate" metric is liable to inform us even less about reality. Of course resource consumption/waste-sink balance puts constraints on population and other things, but it is resources/sinks that are fundamental (if we had infinite resources we could have infinite population).

Your conditions of "(1) A steady human population and (2) a steady physical level of output" are what miss the point, as both can stay constant and still lead to ecological collapse. I can pump from a well at a constant, exponential, or even declining level of output and pump it dry; what matters is whether the rate of pumping is below or above the recharge rate of the groundwater.

However, I agree that a steady state can be dynamic: using resources ever better and increasing leisure time would be nice, but this is not a necessary condition for a steady state; leisure time could stay constant.
As with GDP zero-growth, population "zero growth" is a likely characteristic of any steady-state society, but the converse is not necessarily true, as a zero-growth population can still collapse the ecosystems (after which population growth would be negative).

This error in reasoning, I think, comes from population-overshoot examples in the natural world. There is a huge difference between natural overshoot and what humanity faces: other animals do not have technology, so their only means of overshoot is population; but to then apply this lesson to humanity and reduce overshoot to population makes no sense, as it ignores our technological systems.
The distinction I think is very important, as focusing on population is usually meant to "shift blame" onto poor regions with high population growth, ignoring the resources they actually consume, and excusing the far higher consumption of rich regions, where the persons making these claims usually reside — thus shifting focus from the industrial "consume everything" economic system to voiceless "overshoot patsies" elsewhere who have little measurable impact on the global ecological system.
For instance, the very rich, who consume enormous amounts, are particularly interested in decreasing population; a target that has been mentioned is reducing the projected plateau of population growth by 15% (with essentially all these population "savings" to be in poor countries).
A 15% reduction in consumption is nowhere near what is needed to have a meaningful impact, and since these un-people will be not-born in poor regions, they would represent even less actual reduction in consumption.

Now, if we accept that 15% is basically meaningless to the world's problems, and set our sights on a 50% to 80% reduction on a relevant time scale, this could only be achieved by:

1) World War III using nuclear weapons: likely to have far greater negative ecological impact than the population reduction has positive ecological impact.
2) Widespread ecosystems collapse resulting in global famine: ecosystems collapse is what we’re trying to avoid.
3) Nazi-style systematic genocides: "workable" in theory, but I would argue the executive managers necessary are unlikely to have the ethical characteristics to manage the earth's resources sustainably, as I would argue it's unethical to kill if other options are available (which they are).
4) High-fatality rate global pandemic.
5) Moving 50-80% of people into space colonies.
1&2) The first two options, if undertaken as population reduction measures, achieve the result they are intended to avoid: ecological collapse.

3) The third would mean Nazi-style totalitarian takeover of most of the world, either overtly or covertly (systematic killings could be administered through food, drinking water, etc., with time-delayed kill vectors difficult to attribute to a cause).
4) The fourth would be an "ethical" possibility only if such a pandemic happened spontaneously; if "engineered", see option 3. However, while a spontaneous pandemic is not "unethical", neither is it an ethical course of action: sitting around waiting for something to happen while continuing business as usual is not constructive but wishful thinking. Maybe the wish will come true, but that is inaction, not a course of action; even someone pinning their hopes on a pandemic would ethically still be bound to take all other actions likely to decrease resource consumption and waste-sink saturation.

5) No current technology is available to achieve this, and it’s entirely unlikely to arrive on any relevant time scale.
However, if we focus on resource consumption rather than population, we arrive at a very different analysis.

We have the technology and the methods to reduce consumption and ecological impact radically, requiring no global upheaval liable to spin out of control.

Some measures (in fact most of what we need) we can simply choose to adopt now (through appropriate regulation):
- Reducing meat consumption.
- Reducing frivolous personal fossil transportation.
- Mandating cradle-to-grave product life-cycle management (i.e. taxing externalities).
- Banning (with enforcement) high ecological impact activities like shark finning, deep sea bottom trawling, logging in high-biodiversity areas, etc.
- Multiple independent studies (i.e. real science) into the ecological impacts vs. yield of the agricultural systems available (rather than basing policy on single corporate funded, private data, "studies").
- Renewable energy, in particular solar concentrating systems which are low impact, high temperature, low cost, globally deployable and in particular in poor regions where deforestation can be reduced (disclaimer: I work in this field).

So the above are common-sense policies that are both ethical and together would have far more impact than even a 15% reduction in population. The only arguments against them are "people don't want to consume less" or "corporations won't tolerate internalizing costs"; true, there is resistance to these policies, but basing argumentation on the assumption that people are unreasonable / unethical is unlikely to yield any ethical result. If people are unethical then they are liable to want to consume all available resources to "keep the party going" regardless of alternatives, and any feasible plan the "enlightened" manage to get going is either likely to be mismanaged in any case or is delusional to begin with (i.e. the "enlightened" are liable to be just another ignorant / unethical group).

Considering our resource consumption is not tied to population, as it is with essentially every other creature, but is almost entirely due to our technological systems, technology / infrastructure choice is where all the sustainability gains are.

The data says we are heading to a stable population in any case, and the data also says our global economic system is incredibly wasteful: most grain going towards meat, most fuel burned for unnecessary transport, and most energy could easily be generated close to point of use with renewables, in particular solar (which powers the other non-geothermal renewables).

Of course, we can always nitpick and claim renewables are intermittent, but if the energy is cost-effective, what's the problem with doing a majority of energy tasks when the primary solar energy is there in abundance (i.e. extremely low-cost) and storing some biofuels and charcoal for when needed?

The problem with "adapting" is that it goes against business as usual. But starting with business as usual as the criterion is liable to result in business as usual as the conclusion. However, it's business as usual that's got us into our ecological quagmire; there's no reason to assume like-minded thinking will solve the problem. People have lived with intermittent interior lighting (the sun), intermittent water (monsoons), intermittent food access (migrations passing, harvests, big catches), and have managed to live for hundreds of thousands, if not millions, of years under such conditions. Sleeping at night, water storage (West India essentially perfected rain water storage, but threw away that easily replicable and sustainable system in favour of massive damming projects and well-pumping), and food storage have been successful adaptation strategies to intermittent resources.

Of course, technologies exist to complement the low-cost sunshine, making this almost a moot point now, but my point is that it shouldn't be a criterion: if the energy is low-cost when available (i.e. when the sun shines), adapting to this is relatively easy, much easier than adapting to a radically different climate pattern / atmospheric chemistry, ocean death or ecosystems collapse (which a recent paper showed could happen without obvious predictive signals that are feasible to monitor; i.e. hitting tipping points may not be obvious until after the fact).


Working on a relaunch of this project

18 May 2014

I've been working a lot on Solar Fire / GoSol.org these last few years, working on creating the lowest-barrier-to-entry solar energy device possible, which I believe could have a much bigger impact than mere words.

However, I still think books are important too so I’m going to organize a more traditional funding model to make time to write a mature version of the book. I’ve been working on a lot of new material for this new launch.

So, I'll be launching a new model to finish the work in one go, the best I can. Again, don't hesitate to contact me if you want to help out with the relaunch or just to stay informed when it launches.


Climate Chaos

3 December 2013

Global warming does not appear in the forefront of the main text of Decentralized Democracy. This is because it is impossible to understand global warming as a problem without first understanding the problems of soil erosion, water depletion, deforestation, ocean death and species extinction, among others. Global warming is a problem for humanity because it affects these systems and exacerbates the problems we already face in these domains. However, our soil, water, forestry and land practices are already completely unsustainable, so it is pointless to discuss mitigating the consequences of global warming without first making our forest, agriculture and ocean practices sustainable. In doing so, it is most likely that the problem of global warming will be resolved as a consequence: soil and water allow forests to grow, and healthy forests absorb pollutants and stabilize the atmosphere; the oceans are as important as or more important than the forests, and the problem of ocean acidification alone demands a radical reduction of carbon dioxide pollution. We also know that fossil fuels are a finite resource, so burning them is by definition unsustainable. So, though global warming is a great problem, essentially any sustainable economy we could think of would resolve the issue.

Thus, the global warming debate should not be framed as a question of needing to do a few things to stabilize the atmosphere, but rather, with or without global warming, we need to do a lot of things to maintain the ecosystems we depend upon, and on top of all these obvious problems with relatively obvious solutions, global warming could amplify all of them completely out of control and render us extinct if we don’t do something soon — to resolve soil erosion, water depletion, deforestation, ocean death, and mass species extinction.

However, once these primary problems are understood, it becomes possible to understand the problem of global warming in a relevant context.

Much contention of course exists around what precisely are the causes of global warming, how much global warming will occur, and what the consequences of this global warming will be. Since the climate is a complex system it is difficult to understand all three. As for predictions, models range from showing it is possible that global warming could feed itself and run out of control, heating the planet 15 degrees or more, to models showing it could trigger a new ice age.

The only thing that is certain is that the climate will change in some way due to our modifications of land, water bodies and the atmosphere, as all complex systems change when factors affecting them change in a significant way. Thus, the term climate change (a climatology term covering any and all changes of climate) was taken to refer to this entire issue (at the institutional level at least). However, climate change isn’t a very good name as it does not connote how big this change might be or whether it is for better or for worse.

A much better name is climate chaos. For, when we understand the climate as a complex system, we immediately know it is foolish to try to determine with a high degree of certainty what precisely will happen, as the very nature of complex systems is that we cannot predict any event at all with certainty. Rather, what we can know is that currently the climate is in a stable state and our actions risk pushing it into a chaotic state. By definition, the outcome of a chaotic state of affairs cannot be predicted with certainty, except to say that it is unpredictable.

Though it is important to try to understand and predict as much as we can, first to be aware that a chaotic state may be approaching and second to understand events better as they unfold, the ethical imperative and reasonable course of action can be formulated without any complex modelling and with little scientific understanding.

To put this in perspective it is useful to take an example far from the atmosphere, in fact underground at the CERN particle accelerator. While CERN was being built, there was a debate in scientific circles over whether it was ethical to turn it on, as there was a chance CERN would produce black holes and/or exotic particles, and no one could say what these black holes and exotic particles would do. Though the risk was agreed to be very small, what was less clear was how much risk is acceptable. Is it ethical to risk destroying the planet with a 1% chance for a scientific experiment, a 0.1% chance, 0.001%, and so on? Where must the line be drawn between relatively irrelevant scientific experiments (irrelevant to most if not all the problems humanity faces today [1]) and the safety of the entire earth?

Where exactly this line is to be drawn is difficult to place (especially with an inapplicable ethic), but essentially anyone would consider scientists mad if they risked the entire world in an experiment with a 50% chance or even a 1% chance, and most I think would agree 0.1% or 0.001% is still fairly high, considering how many times experiments must be repeated to have significance.

By thinking this through we arrive at the understanding that any action entails the responsibility of all possible outcomes, regardless of their probability. For instance, we view drinking a mysterious liquid as irresponsible since there is a chance it may be poisonous. When a potential outcome is absolutely unacceptable, then the action becomes unacceptable.

In the case of the mysterious liquid, if there is no need to risk death then there is no reason to drink the liquid, regardless of whether we surmise it has a 50% or 1% or 0.1% chance of being poisonous.

The only time it becomes reasonable is when the chance of death from not drinking the liquid in question is greater than the chance of death from drinking it. For instance, I cannot know with 100% certainty that the water I drink every day is safe, but I do know with nearly 100% certainty the consequences if I don't drink any water at all.

Likewise, the only reasonable way to risk the entire planet is if there was an even greater risk to the planet from not doing it.

In the case of CERN it may or may not be reasonable that the chance a particle accelerator would save the planet, directly or indirectly, is greater than the chance a CERN exotic particle would destroy the planet; but, if so, this reasoning can only be supported in a bubble, as there are other risks to the planet significantly more dangerous — namely soil erosion, water depletion, deforestation, ocean death, and species extinction — that are a far greater priority than asteroid defence programs or space travel (though this does not mean such programs should not exist, only that their respective funding should be proportional to their priority in these troubled times).

The CERN budget may be relatively small, but what CERN scientists should ask is whether other, more immediate problems are being adequately addressed.

Applied to climate chaos, it is irrelevant whether the risk of catastrophic warming and/or cooling is 90%, 10%, 1%, only that common sense tells us dumping billions of tons of waste into the atmosphere and modifying the ecosystems in profound ways entails risk to the global ecosystems.

This risk is unacceptable as there is no greater risk to the planet and humanity that the modern economy addresses: There is no reason to risk destroying the planet to maintain frivolous consumption, and therefore there is every reason to reverse soil erosion, water depletion, deforestation, ocean death, and mass species extinction through simple things such as non-consumption, direct solar energy, planting a lot of forest gardens, local production and management of essential goods and services, more vegetarian diets (as in not meat at every meal), and stopping the acidification of the oceans by not wantonly burning things in superfluous pursuits.

The participants in the CERN risk debate all agreed that the risk of global destruction for pure scientific experiments should be very small, on the order of 0.0001% or less. The debate was over which minuscule number in particular should be chosen, who had the right to set this number, and how exactly the level of global risk CERN presented should be calculated.

Yet nearly all ecological models show there is a far greater level of risk than 0.0001% for a truly catastrophic event if we continue to destroy ecosystems, release novel and unstudied chemicals, disrupt ocean and atmospheric chemistry, melt large ice masses and so on; so I fail to see how any competent scientist (who agrees a 0.0001% risk to the planet should be avoided [2]) could be seriously concerned with anything else.


Solar Fire and Hannah Arendt

28 August 2012

In my previous (and first) blog post, meant to chronicle the long process of writing Decentralized Democracy, I was under the impression I had time in front of me to work on Decentralized Democracy.

... Then we decided to launch a Solar Fire campaign to develop a wood-based solar concentrator. Though of course this is work on decentralized democracy, since developing a solar-based fire is a critical precondition, it's blogged about on www.solarfire.org.

But solar is not the only important thing, and so I return nudging forward the overall task here at Decent-Democracy.

Having finished Hannah Arendt's On Revolution this morning, I am shaken by the gems on decentralized government, what Arendt calls "the council system", in the last chapters of the book.

I have no specific affinity for the revolutionary literary tradition (outside understanding history), for reasons Arendt expresses succinctly:

"The part of the professional revolutionists usually consists not in making a revolution but in rising to power after it has broken out, and their advantage in this power struggle lies less in their theories and mental or organizational preparation than in the simple fact that their names are the only ones which are publicly known." — p. 252

She also has an endnote mentioning "[...] an interesting example. At the election to the National Assembly in 1871, the suffrage in France had become free, but since there existed no parties the new voters tended to vote for the only candidates they knew at all, with the result that the new republic had become the 'Republic of Dukes'." [3]

So in this light the purpose of Decent-Democracy is to make a guide book not for self-appointed revolutionaries, but for real communities, real 'councils': "For the remarkable thing about the councils was of course not only that they crossed all party lines, that members of the various parties sat in them together, but that such party membership played no role whatsoever. They were in fact the only political organs for people who belonged to no party. Hence, they invariably came into conflict with all assemblies, with the old parliaments as well as with the new 'constituent assemblies', for the simple reason that the latter, even in their most extreme wings, were still the children of the party system. At this stage of events, that is, in the midst of revolution, it was party programmes more than anything else that separated the councils from the parties; for these programmes, no matter how revolutionary, were all 'ready-made formulas' which demanded not action but execution — 'to be carried out energetically in practice', as Rosa Luxemburg pointed out [...] the councils were bound to rebel against any such policy since the very cleavage between the party experts who 'knew' and the mass of people who were supposed to apply this knowledge left out of account the average citizen's capacity to act and to form his own opinion. The councils, in other words, were bound to become superfluous if the spirit of the revolutionary party prevailed. Wherever knowing and doing have parted company, the space of freedom is lost." — p. 256

For Arendt notes the spontaneous organization of the people into the council system in every major revolution since the French revolution (as well as previously noting the "town hall meetings" as the driving force of the American revolution), such as "the French capital under siege by the Prussian army 'spontaneously reorganized itself into a miniature federal body', which then formed the nucleus for the Parisian Commune government in the spring of 1871; the year 1905, when the wave of spontaneous strikes in Russia suddenly developed a political leadership of its own, outside all revolutionary parties and groups, and the workers in the factories organized themselves into councils, soviets, for the purpose of representative self-government; the February Revolution of 1917 in Russia, when 'despite different political tendencies among the Russian workers, the organization itself, that is the soviet, was not even subject to discussion'; the years 1918 and 1919 in Germany, when, after the defeat of the army, soldiers and workers in open rebellion constituted themselves into Arbeiter- und Soldatenräte, demanding, in Berlin, that this Rätesystem become the foundation stone of the new German constitution, and establishing, together with the Bohemians of the coffee houses, in Munich in the spring of 1919, the short-lived Bavarian Räterepublik; the last date, finally, is the autumn of 1956, when the Hungarian Revolution from its very beginning produced the council system anew in Budapest, from which it spread all over the country 'with incredible rapidity'." — p. 254 (Arendt cites many interesting sources) [4]

"The mere enumeration of these dates suggests a continuity that in fact never existed. It is precisely the absence of continuity, tradition, and organized influence that makes the sameness of the phenomenon so very striking. Outstanding among the councils' common characteristics is, of course, the spontaneity of their coming into being, because it clearly and flagrantly contradicts the theoretical 'twentieth-century model of revolution — planned, prepared, and executed almost to cold scientific exactness by the professional revolutionaries'. It is true that wherever the revolution was not defeated and not followed by some sort of restoration, the one-party dictatorship, that is, the model of the professional revolutionary, eventually prevailed, but it prevailed only after a violent struggle with the organs and institutions of the revolution itself [i.e. the council system]. The councils, moreover, were always organs of order as much as organs of action, and it was indeed their aspiration to lay down the new order that brought them into conflict with the groups of professional revolutionaries, who wished to degrade them to mere executive organs of revolutionary activity." — p. 255

Ok, I could essentially quote most of the last quarter of the book, so these are just ones I wanted to get down to incorporate into the Decent book. Well, ok, one last one:

"... The founders should have found it easy enough to console themselves with the thought that the Revolution had opened the political realm at least to those whose inclination for 'virtuous disposition' was strong, whose passion for distinction was ardent enough to embark upon the extraordinary hazards of a political career. Jefferson, however, refused to be consoled. He feared an 'elective despotism' as bad as, or worse than, the tyranny they had risen against: 'If once [our people] become inattentive to public affairs, you and I, and Congress and Assemblies, Judges and Governors, shall all become wolves.'" — p. 230 (quoting Jefferson from a letter to Colonel Edward Carrington, 16 January 1787)

So what has this to do with Solar Fire?

Everything! Since all the above leads up to the question "Why did the revolutionary party (controlled by professional revolutionists who mostly 'show up' after the revolution is already underway) defeat the spontaneous council system of the people?", as Arendt says in no uncertain terms:

The outbreak of most revolutions has surprised the revolutionist groups and parties no less than all others, and there exists hardly a revolution whose outbreak could be blamed upon their activities. It usually was the other way round: revolution broke out and liberated, as it were, the professional revolutionists from wherever they happened to be — from jail, or from the coffee house, or from the library. Not even Lenin's party of professional revolutionists would ever have been able to 'make' a revolution; the best they could do was to be around, or hurry home at the right moment, that is, at the moment of collapse. Tocqueville's observation in 1848, that the monarchy fell 'before rather than beneath the blows of the victors, who were as astonished by their triumph as were the vanquished at their defeat,' has been verified over and over again. — p. 252

Indeed, she observes that the Bolshevik party (professional revolutionists) had to name their new government after the soviet councils, which they did everything to destroy and did destroy, so associated were the soviets with the cause, work and purpose of the revolution.

"Practically, the current 'realism', despair of the people's political capacities, not unlike the realism of Saint-Just, is based solidly upon the conscious or unconscious determination to ignore the reality of the councils and to take for granted that there is not, and never has been, an alternative to the present system." — p. 262

"The councils, obviously, were spaces of freedom. As such, they invariably refused to regard themselves as temporary organs of the revolution and, on the contrary, made all attempts at establishing themselves as permanent organs of government. Far from wishing to make the revolution permanent, their explicitly expressed goal was 'to lay the foundations of a republic acclaimed in all its consequences, the only government which will close forever the era of invasions and civil wars'; no paradise on earth, no classless society, no dream of socialist community fraternity, but the establishment of 'the true Republic' was the 'reward' hoped for as the end of the struggle. — p. 256, Arendt citing Anweiler

Her answer is found on ... to be continued.


Small handy linux tools

17 April 2012

I’ll be keeping a list here of the small Linux tools I often use so I don’t have to go searching around when I forget the name.

Graphics

Color picker
apt-get install gcolor2

Security

List server key fingerprints from known_hosts
ssh-keygen -l -f .ssh/known_hosts
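The same -l flag fingerprints any key file, not just known_hosts. A self-contained sketch (the /tmp/demo_key path is just for illustration):

```shell
# Generate a throwaway RSA key with no passphrase, then print its fingerprint
ssh-keygen -q -t rsa -b 2048 -N "" -f /tmp/demo_key
ssh-keygen -l -f /tmp/demo_key.pub
```

ssh-keygen -F hostname also searches known_hosts for one specific server.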

System

Check processor speed and info
cat /proc/cpuinfo
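Since the full dump is long, a couple of greps give the usual summary:

```shell
# CPU model (first occurrence) and logical core count
grep -m1 "model name" /proc/cpuinfo
grep -c "^processor" /proc/cpuinfo
```

lscpu (from util-linux) prints a similar summary in one go.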

GKrellM

apt-get install gkrellm

Crontab E in nano

... Never bothered to understand vi ... if you're in the same boat you may wonder how to edit crontab in nano.
env EDITOR=nano crontab -e
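To make nano the default permanently rather than per-invocation, one approach (assuming a bash login shell) is:

```shell
# Set nano as the default editor for crontab -e and friends
echo 'export EDITOR=nano' >> ~/.bashrc
```

On Debian-based systems, select-editor offers an interactive menu for the same thing.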

MySQL: restore database to a different server

Restoring MySQL to another computer causes a problem because the Linux "user" in charge of starting and shutting down MySQL now has a different password for MySQL and can't access it.

Get info from:
cat /etc/mysql/debian.cnf
scroll down to:

user     = debian-sys-maint
password = thepassword
socket   = /var/run/mysqld/mysqld.sock
basedir  = /usr

mysql -u root -p
(enter the MySQL root password when prompted)

Then log into MySQL and grant privileges:
GRANT SHUTDOWN ON *.* TO 'debian-sys-maint'@'localhost' IDENTIFIED BY 'thepassword';
GRANT SELECT ON `mysql`.`user` TO 'debian-sys-maint'@'localhost' IDENTIFIED BY 'thepassword';
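The same grants can be collected into a script and fed to mysql in one shot; a sketch, where 'thepassword' is a placeholder that must match the value in the new server's /etc/mysql/debian.cnf:

```shell
# Write the fix to a script file (password value is a placeholder from debian.cnf)
cat > /tmp/fix-debian-sys-maint.sql <<'EOF'
GRANT SHUTDOWN ON *.* TO 'debian-sys-maint'@'localhost' IDENTIFIED BY 'thepassword';
GRANT SELECT ON `mysql`.`user` TO 'debian-sys-maint'@'localhost' IDENTIFIED BY 'thepassword';
FLUSH PRIVILEGES;
EOF
# Apply it (prompts for the MySQL root password):
# mysql -u root -p < /tmp/fix-debian-sys-maint.sql
```

FLUSH PRIVILEGES makes the new grants take effect without restarting the server.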

... Hmm... by definition this page is for tools I forget the name of, so I can’t think of anymore at the moment. I’ll add more as I go searching for them.


Two lessons in computer security

15 April 2012

If you’ve landed on this page, it’s probably because you’ve come to realize there is no click-through solution to information security and so have been searching like a maniac for some real solution.

Indeed, it's the "click to solve my current problem" mentality which is largely the reason the internet is so insecure to begin with.

Here is the first part of a concoction of lessons I’m putting together of what I’ve learned in computer security.

So security lesson #1 is: Don’t click!

Unfortunately, there are only two alternatives to "Don't click": A. pay someone to take care of security, or B. understand things and take care of security yourself.

Though in a huge organization option A may be necessary on some level, it can never be a real solution, since who do you trust? Even security firms are being publicly hacked these days (which raises the question of how many more are covertly hacked, an issue we'll deal with shortly).

But you probably can't afford a top-level security consultancy to come in and vet your system and train you and your people to implement best practices. And any partial or non-expert solution requires you to understand computer security yourself, to evaluate how secure it is and whether it's being done right.

So this security guide starts with the basics so you can at least know what people are talking about and the major issues to deal with. Bringing us to:

Security lesson #2: Research before implementing something.

Research the issues as you come to understand them, on this site and other sites. Run searches like "Thing is Bad", "Thing is good", "Thing vs other thing" etc to get a survey of the different opinions concerning the thing in question.

Now, getting back to who can you trust, when it comes to computers the answer is surprisingly few. Even people that are trying to do their best may make a simple mistake which creates a huge security flaw, and not just people you’re working with, but people writing the software you’re using.

Software is so complicated that it's impossible for any one expert to go through any average program and say it's totally secure, much less a huge complicated system of programs working together. Indeed, software is so complicated that any claim to security should be viewed as highly dubious, whoever makes it, much like claims of free energy devices. The only solution is the "open the hood" approach. Only if anyone can look under the hood and verify the claim for themselves do we start to have faith that an outrageous claim is true.

But not only do many software companies not make it easy for others to verify, they make it impossibly hard. Unlike a chair or a hammer, which any organization can buy and test the maker’s claims for themselves, software can be encrypted and obfuscated in many ways to make it essentially impossible to “see how it works”.

Switching to KDE

15 April 2012

Gnome 2 is and was the greatest: highly efficient and everything just works. Gnome 2 atop Debian, which also just works all the time, was and is fantastic.

But alas, though Gnome 2 is still in Debian stable (squeeze), it’s no longer in development, so it will die eventually.

I’ve thus decided to be pro-active and switch to KDE. There are of course other interesting desktops, such as LXDE, which I use on older computers, but I also maintain computers for family members (introducing people to Linux when I can), so I don’t want to be configuring everything all the time or fixing bugs. Gnome 2 was great for this purpose, and I hope KDE can fill those shoes.

Though I’m not specifically against Gnome 3 or Unity; they’re probably a lot better for new Linux users or people who mostly single-task, so hopefully Gnome 3 and Unity will bring more people into the Linux fold, which can only be a good thing.

But for intense multi-tasking KDE seems to be the road to take, so I feel I might as well jump on board now, while I can still fall back on Gnome 2 in a pinch.

It’s also great to have a fairly modern KDE 4 in the Debian stable repositories, since on my work computer I only move to newer versions to correct serious bugs.

Thoughts on KDE so far

I’ve always liked a lot of things in KDE every time I’ve checked it out now and again ... but I always hit some strange bug or missing feature which brought me back to Gnome 2, which has always been bugless for me. It’s not that Gnome 2 did anything in particular; rather, it’s great because it does nothing and does it well, leaving you to work.

But now with KDE 4.x things seem to be consolidating, and I seem to be able to get my work done without random things disappearing (or the network not working etc.).

KDE positives

Kate

Kate is really awesome for programming: intelligent coloring/bolding based on the language you’re programming in, and options to change essentially anything.

I used Kate in Gnome 2 as well, but there it takes a long time to load up (as it has to pull in all the K libraries), so for quick edits I generally used Gedit, which is ... OK, I guess. So it’s good to have Kate open instantly.

Widgets

I’m not going to lie, widgets are good.

The Desktop widget prevents the crazy proliferation of files on the Desktop that eventually overlap each other and render the Desktop useless for keeping track of small things you should take care of ... someday; and on that day it’s only because the file happens to be on the Desktop that you remember anything about the issue.

The unit conversion widget is fairly handy, as is the RSS feed. You may as well pop system-resource monitoring on there too.

There’s a paste widget for pasting things; so far I’m only using the default of generating random passwords, but I may find other uses.

You can also download plenty of widgets, but I haven’t gone through them yet.

Differences to watch for

So far the biggest difference has been finding the keyboard settings and the show-desktop icon.

I need to configure the keyboard to be able to switch to a French layout; in Gnome 2 it’s under “Keyboard”, in KDE it’s under “Input devices”.

For the show-desktop icon, you have to right-click on the panel where you want it and add it as a widget. Though on newer versions of KDE it’s there by default.

Things I don’t like

Multi-desktop tabs make no sense

Biggest problem is that when you switch desktops the tabs at the bottom don’t switch, which defeats the purpose of multiple desktops.

I do a lot of crazy multi tasking, and like to organize each task on a different desktop so I can easily switch back and forth depending on what I feel like accomplishing at any given moment. Generally I don’t close windows at all until I close the computer.

So having all the tabs shown on each Desktop is a fairly big drawback.

But this may be configurable, or accomplished through the “activities” concept, which I haven’t yet checked out.

Update: minimized windows resolved!

It took surprisingly long to find how to change this behaviour, though it was right in front of me.

The problem is that there’s “computer > system settings > desktop”, which I assumed would have the option. It has the option of a different activity on each desktop, which allows a different set of widgets on each desktop, which is interesting ... but it crashed continually ... and it still showed the minimized windows on all desktops.

The key is to close all your windows, right-click on the now-empty bar and choose task-manager settings. Then under filters select “only show tasks from current desktop”. You can also turn off grouping by program, which just complicates things in my opinion.

File browser weird

By default the file browser shows temporary files, and doesn’t show the “last modified” column in details mode. I assume this can be configured somehow, or is “fixed” in later versions.

Remote-server bugs

Connecting to remote servers with fish seems a bit buggy: sometimes it says I can’t save ... and other times it opens folders but doesn’t show their content. I don’t know if it’s really a bug, whether I’ve configured something wrong, or whether it’s fixed in a newer version; I’ll report when I understand more.

That’s my only real complaint so far, and it’s not annoying enough to abandon the KDE attempt; it works most of the time.

Starting a blog

2 March 2012

I’ve come into some spare time over the next few months, in which I hope to finish this free book.

I’ve been working fairly intensely on Solar Fire this past year, since I believe strongly that the pre-condition for spreading a decentralized philosophy and mode of living is the tools we need to actually live decentralized. Many tools we need have been preserved from decentralized times past and many new tools have been developed, but decentralized solar energy seemed a critical piece that I felt I could contribute to.

I of course still think so, but I feel I’ve simplified the Solar Fire technique as far as I can take it, at least for now. I’ve recently come up with the tri-truss technique; I’ll be putting up some models on solarfire.org, and I’ll blog about it here as well when I do.

So I think I can now contribute more on the decentralized-ideas front, which is needed along with the tools to actually do it.

So I’ll be posting about additions and improvements to the book, future plans, and perhaps some commentary on current events from the decentralized perspective.

So stay tuned and don’t hesitate to email me any thoughts at wissenz (at) gmail.com.

The Wants Theory Refuted

2 June 2011

In the first case, the framework is seriously undermined, as the repair presupposes discarding the premise that wants are empirical facts that one simply “knows”. For if one gets what one wants but is not made happy, the only way to salvage the framework is to conclude that one might not know what one “truly wants”. One is now on an endless search to uncover what one truly wants, without ever knowing for sure, and thus the system breaks down. At some point one must simply decide what one wants, which seems equivalent to the alternative to the entire framework: that intentions are decisions, not empirically verifiable facts.

In general, the premise that fulfilling one’s “wants” at any given time (excepting the tautology of wanting to be happy) equals happiness can only be proved if one actually fulfills all of one’s wants: one cannot base one’s decisions on what one doesn’t know. In practice, then, the theory is simply unworkable and unverifiable until one has carried the system through, which cannot be a justification for adopting it now. However, there are even further problems with the theory, as we see with friend number two.

Friend number two argued more tangibly by appealing to what he thought at the time was the irrefutable virtue and general superiority of his wants: "This is what I want: to develop and spread solar technology for the good of the world and to get fifty million dollars in the mean time."

I defeated his reasoning by pointing out that these two goals were competing against each other. Every dollar that comes under his control he can either put into goal A or goal B. He of course needs to put enough money into himself to carry out goal A efficiently. But sustaining oneself and improving one’s abilities to be more effective in developing solar technology is not to be confused with having fifty million dollars and doing solar unrelated things.

Previously he had simply imagined that these goals were mutually inclusive and reasoned that things would just magically work out, but he quickly realized that this wasn’t so.

How then could he decide where to draw the line between goals A and B?

Furthermore, what if he accomplished these goals? Suppose solar technology has grown beyond his ability to contribute further (something that cannot actually occur, since solar technology can always be improved), and he has fifty million dollars: what does he do now? He’d do other things he wants, he answers. But what? He didn’t really know. Would having fifty million dollars really solve all his problems: would a loving wife and exuberant children be the necessary corollary? And would this state of affairs simply continue indefinitely? He began to doubt himself, thus falling victim to criticism number one regardless.

So, I reasoned that his actual goal was simply to have goals. His first principle was his last principle. And so, the harder the goal the longer his ambition to have goals is satisfied.

In general, if one’s wants are finite, and the purpose of life is to fulfill wants, then if one actually fulfills one’s wants, one no longer has purpose, and so one would collapse into a motionless or apathetic state, which hardly seems happy.

Thus, the only way to avoid the question “what should one do?” (as in ethics) is to have not only wants that never prove dissatisfactory but an infinite number of them. One must be able not only to imagine the fulfillment of one’s current wants and be totally convinced that such would be true happiness, but to know exactly what one’s next want would be, and the next, and be able to repeat this process indefinitely. Otherwise, one cannot argue that wants are empirical in nature: a simple fact, where the actual job of thinking is to fulfill these wants efficiently. The alternative is that one must decide what one should do through some method of reasoning, a method that would have to be consistent and complete (as in an ethical system).

Furthermore, since the only way to circumvent ethics (the abstract question of what one should do, devoid of any pretense) is to mystically know exactly what one wants all the time and never be failed by this “sense” (another avenue by which the wants theory falls into disarray, as empirical data is sensory, and so one must have a “wants sense” to establish these empirical facts), one would hardly waste one’s time arguing about ethics with a bunch of strange creatures who seem to lack empirically verifiable wants through this sense.

Not only would such a person be too busy fulfilling wants, but the generalization of the theory is extraordinarily difficult. For the only way for the theory to work is to assert that everyone not only has these wants but that these wants never contradict each other in any individual person; for if wants did contradict each other, then that human is bound up in irrationality, by definition impossible to escape, and has scant use for a theory. Any philosophy that asserts that humans are irrational is pointless to accept: if one accepts it, one is irrational (whether it is true or false); if one rejects it, one is still irrational if it is true.

Other obvious questions are: where do these wants come from? How exactly do we “observe them empirically”? How exactly is the activity of the mind separated from one’s wants? If there is no distinction, one cannot argue that “one should do one’s best to fulfill these wants”, for if one doesn’t, that is clearly what one wants to do (but how can one want to do what one doesn’t want to do, which seems to be the state of anyone who rejects the theory?). How does the wants theory resolve the usual problem of self-reference?

The general counter-argument is: what alternative is there? The alternative is the total reconstruction of one’s entire decision-making framework: starting from nothing, no pretenses or pre-assumed “goodness” or “desirableness”, and contending with what one should do. I call such a state the void, as one is devoid of all ambitions: one does not even possess the ambition to form ambitions, which gives this state of mind its meaning.

After rejecting my previous mode of behaviour, which I concluded was meaningless (everything I had thought important was clearly unimportant, as I wouldn’t even remember it in a few years), and so coming to this void, I of course still moved and talked and slept and so on, but the difference is that I could not answer why I was doing so. And this is the beginning of actual philosophic contemplation.

Some might argue that removing all identity from oneself and starting from zero is some psychological process relative to some meta-account of humans. I disagree; I view it as the intrinsic basis of ethics. In general, if one finds a flaw in one’s decision-making process, one must remove everything that has been based on that flaw, as the first part of wisdom is the absence of foolishness; and so if the flaw is recognized as the basis of one’s thinking, one must remove everything. And so, if this is reasonable, then if one did find a flaw in one’s fundamental reasoning, as in a contradiction attributed to one’s most basic assumptions, clearly one should as quickly as possible remove the flaw from one’s reasoning and from any action that results from it. The most reasonable course of action is then to become devoid of any reasoning system; but without a reasoning system one does not have the intention to form a reasoning system. Thus I refer to any such first-principle decision as a pure decision, not simply the product of more fundamental decisions (I decide to eat a potato because I have decided to continue living, and eating this particular potato I understand to be an efficient fulfillment of my decision to live), or in renaissance terminology, an act of pure will.

Though this may sound mystical, I think it is born from a lack of words: our language is designed to explain what our decisions are, which we almost always do by appealing to some more general, generally accepted decision, or at least one accepted by the person we are talking to (or more precisely, we don’t talk to people who do not share some general decision). The idea of not having made any decision at all, of removing one’s adherence to any previous decision and thus being in a decisionless state, does not seem to exist with ease in our language, or any language I gather. What I am trying to describe is a logical necessity, assuming one can be logical, which we must assume, for the assumption of the opposite is by definition useless.

Why Spip?

2 June 2011

What is Spip?

Spip is an open-source Content Management System (CMS) running on PHP/MySQL. Spip is free software licensed under the GNU GPL. Spip was originally made for a magazine, where an important criterion is that authors can start publishing and editing content without any training and without risk to the system.

Spip user experience?

Spip allows author accounts to be created so users can connect to the site and manage their own content: add and edit pages, and upload images, files and video. Each author can be given publishing rights (admin) over one or a selection of folders. For folders outside an author’s admin space, they can submit articles, but only an admin can publish them on the public side.

The “back-office” interface of Spip is designed to be as simple as possible and to require the minimum of actions from authors to publish their content. The steps are: navigating to their section (organized the same as folders on their computer), clicking “write an article”, clicking “save” and then “publish online”. When logged in, authors can also stay on the public site and simply double-click on any of their existing content to change it in place.

Under this basic interaction, articles appear in the navigation automatically where they are supposed to, and images are automatically rescaled to the sizes they are supposed to be. To augment their text, the author can select something and click bold/italic/heading/link etc. Lists can be made by simply starting each new line of the list with a dash. Advanced users can create HTML code offline and simply insert it as text, for more sophisticated presentations.
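As a rough sketch of these typographic shortcuts (the exact set varies by Spip version; this is illustrative, not exhaustive), an author’s raw text might look like:

```
{{{A heading}}}

Some {{bold}} text, some {italic} text, and a [link->https://example.org].

- first list item
- second list item
```

Spip converts these shortcuts into clean HTML when the page is rendered, so authors never need to touch HTML themselves.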

Comparison

Though all open source CMS’s are fundamentally the same in the sense that, as they are open source, their code can be changed to do anything possible (there is nothing one CMS can do that another CMS can’t be rewritten to do), they are not the same in terms of the effort required to solve any given problem. Clearly, the closer a CMS already is to the final objective, the easier the task will be.

In Spip there is absolutely no pre-defined design framework. This is important, since professional and unique web sites are designed from scratch, for the same reasons unique books are written from scratch and not by modifying existing ones.

Most other CMS’s are created to allow amateurs to build websites without any coding knowledge and to configure them with simple multiple choice. Though advanced users can tweak the “cookie-cutter”, the farther one departs from the standard template framework, the harder it is to code and the more confusing it becomes for users to interact with.

Design

Spip does not provide a design framework, but is better thought of as providing six major services.

1. Content management
2. Templates
3. Markup Language
4. Language Management
5. Caching and compression
6. Plugins

1. Content management

The first service is allowing users to interact with the database in a secure and logical way. Content is organized into “sections” and “articles”, exactly like folders and files on their computer. Users can upload images, files, documents etc., decide if they want comments to be allowed for an article, choose a logo, associate key words, and so on (or not, if they don’t have the permissions to do so).

2. Templates

The second service is template management. The three main templates of a Spip site are for the “home page”, the “default section page” and “default article pages”. These templates are not required to have anything in common at all and can be completely dissimilar if need be.

However, when a common object is needed in multiple templates a “sub-template” can be made and inserted where required in the main templates; thus, repetition is reduced to zero.

For further differentiation of site design, other templates can be written for specific sections and/or all subsections of a section, or for individual articles. A vast, complex hierarchy of templates can be created if need be.

For all sections of the site, the design can be adapted to the content and not the content to the design.

3. Markup Language

The third service is providing the Spip LOOP (Boucle) language tailored to the common needs of the web developer. This language simplifies interaction with the database to solve common problems.

A basic example of a LOOP is returning all the articles in a section, ordering them under a given criterion, extracting the Title and Text, and then padding these objects with code written by the webmaster (HTML/CSS, JavaScript, Flash, PHP, etc.). This task is very easy to accomplish in Spip because each section has a number: only this number is required for the loop to return all the articles, there’s quick code for all the usual ordering criteria, and the padding code is simply as it would normally be.
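For illustration (the section number 1 and the surrounding HTML are assumptions, not taken from any particular site), such a loop might be written:

```html
<ul>
<BOUCLE_recents(ARTICLES){id_rubrique=1}{par date}{inverse}>
  <li><h3>#TITRE</h3> #TEXTE</li>
</BOUCLE_recents>
</ul>
```

Here `{id_rubrique=1}` selects the articles of section 1, `{par date}{inverse}` orders them newest first, and `#TITRE` and `#TEXTE` are replaced by each article’s title and text; everything else is ordinary padding code written by the webmaster.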

The LOOP language also easily allows taking only a part of the list (say the first or the last five entries), alternating the code padding (so the first entry is blue, the second grey, the third maroon, then repeating these for the next entries), entries can be excluded on an individual basis or by some criteria, as well as applying various filters to the objects. For instance, only the first 200 characters may be desired to make a list of introductions.

Image filtering is also very powerful. Images inserted into the text can be scaled, cropped, rotated, colorized etc., all automatically.

Also, the syntax easily supports “backup LOOPs” if no entries are returned by the primary loop, so it’s easy to automate the “what if what we would normally expect isn’t there” situations that often arise in web development. For example, a primary loop can return all the articles in a section, but if no such articles exist a secondary loop returns all the articles in the subsections of that section, or something else entirely. All lists can also be easily paginated as desired (if a list is too long for one page it is broken up into several pages).
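A sketch of the pattern (loop name and markup are illustrative): the part between the closing tags runs only when the loop returns nothing, giving a built-in fallback:

```html
<B_liste>
<ul>
<BOUCLE_liste(ARTICLES){id_rubrique}>
  <li>#TITRE</li>
</BOUCLE_liste>
</ul>
</B_liste>
  <p>No articles in this section yet.</p>
<//B_liste>
```

The `<B_liste>...</B_liste>` wrapper is shown only if the loop has results, and the alternative text before `<//B_liste>` is shown only if it has none.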

For advanced uses, the LOOP language is designed to smoothly include standard PHP to solve uncommon problems not envisioned by the Spip developers. For instance, if a highly specific ordering or inclusion/exclusion criterion is needed that no one has ever needed before, PHP and/or regular expressions can be coded directly into the LOOP. So there’s never a need for completely independent PHP code to be superimposed on the system, which often creates “reinventing the wheel” situations.

4. Language Management

Language management is one of the main reasons Spip became so popular in Europe, where multilingualism is the rule, not the exception. Spip accomplishes this by associating a language with each page, as defined by the site administrators. In the templates, the webmaster can insert a block like <multi>[en]Hello[fr]Bonjour[es]Hola</multi>, which will return “Hello” if the page using the template is English, “Bonjour” if the page is French, and “Hola” if Spanish. The “multi” tag is the easy way, but isn’t a perfect solution when a phrase needs to appear in many different locations.

To solve this latter problem, phrase references appearing as <:home:> can be inserted into the code, and language files created that replace these references with the corresponding phrase for each language. So, if a new language is added to the site, all that is needed is to translate a centralized language reference file for all pages requested in that language to return the correct phrases. Repetition is reduced to zero, and the webmaster doesn’t even need to touch the templates for someone to translate the entire navigation.
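As a hedged sketch (the file path and the “home” key are illustrative), a template would contain <:home:> and each language gets its own file mapping references to phrases; the English one might look like:

```php
<?php
// squelettes/lang/local_en.php — English strings (illustrative names)
// Spip replaces <:home:> in templates with the phrase for the page's language.
$GLOBALS[$GLOBALS['idx_lang']] = array(
    'home' => 'Home',
);
```

A French file (e.g. local_fr.php) would map the same key to “Accueil”, so translating the whole navigation means translating one file per language.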

Likewise, LOOP entries can be excluded/included by language, so a single loop can serve completely different content depending on the language of the page requesting it. This makes it easy to code multilingual templates.

5. Caching and compression

When a page is calculated via the content–template interaction, the result is an HTML/CSS, JavaScript and/or Flash page served to the user. This page and all associated content (i.e. re-sized images, external RSS feeds) is then cached for a time decided by the webmaster, or until it’s recalculated manually or when a user changes the main content associated with the page (a page with a long caching time can become outdated if content exterior to the page changes, e.g. blogrolls; for a super-dynamic page the caching time can simply be set to zero, but this is generally not required).

When pages are recalculated, the load on the server is minimal since Spip is a very light engine providing only the bare essentials.

This caching system has been tested under Apache to be not measurably slower than serving individual files.

Spip can also be set to automatically compress HTML, CSS and JavaScript pages to reduce transfer size (a service often paid for).

6. Plugins

There are fewer plugins for Spip than for other CMS’s, but this is because Spip is not intended to be built via a multiple choice of plugins. Rather, plugins are designed to augment the engine so that new classes of problems can be easily solved, for instance by augmenting the LOOP syntax for less common problems. This approach keeps the core engine as simple and as fast as possible, by allowing the webmaster to include added functionality only on an as-needed basis. Plugins are generally designed to be easily deactivated without causing any problems.



copyright 2006 - 2020 Eerik Wissenz